Me: “Where can I find a course catalogue online?”
SmooSmoo: “Yep, I believe I know this one. 😉 I’ve found there’s an answer for your question, but it is not a public article. You need to login first to reveal the answer.”
That was part of my interaction with SmooSmoo, Singapore Management University’s chatbot, designed to make navigating the school’s web presence easier for prospective students.
I found SmooSmoo smart, engaging, and full of useful information. Chatbots are appearing on more and more business websites. SmooSmoo was designed to answer more than 1,000 commonly asked questions across areas such as academics, admissions, student life, internships, and exchanges.
“The chatbot acts as the first line of contact to handle simple, frequently asked questions and requests. It also serves as a new channel of communication – online self-service for our students. This frees up staff to handle more complicated and high-value demands when serving the student community,” says Gregory Krygsman, head of the Student Services Hub, Office of Dean of Students at SMU.
It’s not just college students, either. Businesses from cell phone carriers to healthcare enterprises now offer interactive chatbots to handle customer questions. But I’ve found myself wondering: do these chatbots have a dark side? When I reached out to some of the world’s top chatbot experts, the unanimous answer was yes, they do, both in human behavior and, more pertinent to MSPs, in security.
On the human side, a professor at Missouri State University found that up to 50 percent of interactions with chatbots are abusive. People simply like to unload profanity or vulgarity on something they know isn’t “real.” We’ll leave the question of why to philosophers and psychologists; for MSPs, the pressing concern is security.
What happens, for instance, when a customer enters a credit card number into the chatbot window, or divulges sensitive medical information?
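One concrete mitigation for exactly this scenario is scrubbing sensitive data from chat messages before they are logged or stored. The sketch below is illustrative only, not any vendor’s actual filter: it masks card-like digit runs that also pass the Luhn checksum, assuming chat transcripts arrive as plain strings.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: True for plausible payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-19 digits, optionally separated by spaces or hyphens,
# starting and ending on a digit.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def redact_card_numbers(message: str) -> str:
    """Replace card-like digit runs that pass the Luhn check with a mask."""
    def _mask(m: re.Match) -> str:
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_valid(digits):
            return "[REDACTED CARD]"
        return m.group()  # random digit runs are left alone
    return CARD_RE.sub(_mask, message)
```

The Luhn check keeps the filter from mangling ordinary numbers (order IDs, phone numbers) that merely look card-shaped; a real deployment would extend the same idea to other patterns, such as national ID formats.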
Some of the finest chatbot work in the world is being done at Singapore Management University, home of SmooSmoo, where the chatbot was designed with security as a paramount concern.
Krygsman informs us, “The chatbot resides on the SMU server and is subjected to the same security protections and guidelines as any SMU portal. The backend to the chatbot is the Watson Assistant service that runs on the IBM Cloud, and the front-end is on the SMU infrastructure that also runs on the Amazon Cloud. We have a cloud web application firewall security appliance protecting all our sites, and we have had no security issues thus far.”
When a chatbot becomes an “evil bot”
Still, not all chatbots are created equal, and if they aren’t secured correctly, there can be a dark side to these convenient and chatty bots.
“Just like any software, chatbots can be attacked. One feature of chatbots is that they learn from their interactions with users, which means they may be subjected to malicious input attacks. Such attacks may maliciously train a chatbot to become an ‘evil bot’ and deviate from its designed functionality,” explains Robert Deng, Professor of Information Systems at SMU. That’s not all, though: the chatbot can also serve as a conduit to other areas of the business.
“Attackers can also exploit software vulnerabilities to use a chatbot as a gateway to attack other bots and the backend systems,” Deng adds.
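Deng’s “evil bot” scenario suggests one defense: never let raw user input flow straight into a bot’s training data. The sketch below is a hypothetical quarantine gate, not SMU’s or Watson Assistant’s actual pipeline; the blocklist contents and class names are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical blocklist of abusive or malicious tokens.
BLOCKLIST = {"idiot", "scam-link.example"}

@dataclass
class TrainingGate:
    """Holds user utterances in quarantine until they pass review."""
    quarantine: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def submit(self, utterance: str) -> None:
        # All user input lands in quarantine first,
        # never straight into training data.
        self.quarantine.append(utterance)

    def review(self) -> None:
        # Admit only utterances that pass the filter; in practice a
        # human moderator could audit what the filter rejects.
        for u in self.quarantine:
            tokens = set(u.lower().split())
            if not tokens & BLOCKLIST:
                self.approved.append(u)
        self.quarantine.clear()
```

A keyword blocklist alone would not stop a determined attacker, but the structural point stands regardless of the filter used: learning from users should be a reviewed, rate-limited channel, not an open door.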
What can MSPs do?
MSPs should be involved in a client’s chatbot plans. Some business owners, eager to engage with their customers, may roll out a chatbot without notifying their MSP, and a chatbot that isn’t adequately secured leaves the business vulnerable.
“Security measures such as strong user authentication, malware detection, and intrusion detection should be in place to protect the chatbot as well as the users’ sensitive information it handles. Excellent and secure software engineering practice should be adopted in the design, implementation, and deployment of the chatbot,” instructs Deng. These are areas where cheap “off-the-shelf” chatbot builders may fall short, and where MSPs can add real value.
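The “strong user authentication” Deng calls for can be as simple as gating restricted answers behind a verified session, much like SmooSmoo’s “login first to reveal the answer” reply. The snippet below is a minimal sketch under assumed names (`PUBLIC_FAQ`, `sign`, and so on), not the API of any real chatbot platform; a production system would use proper session management and a secrets vault.

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # illustrative; would come from a vault

PUBLIC_FAQ = {"library hours": "8am-10pm daily"}
RESTRICTED_FAQ = {"course catalogue": "See the enrolled-students portal."}

def sign(session_id: str) -> str:
    """Server-issued token proving the session was authenticated."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def answer(question: str, session_id: str = "", token: str = "") -> str:
    q = question.lower().strip()
    if q in PUBLIC_FAQ:
        return PUBLIC_FAQ[q]  # public FAQs need no login
    if q in RESTRICTED_FAQ:
        # Constant-time comparison prevents token guessing via timing.
        if token and hmac.compare_digest(token, sign(session_id)):
            return RESTRICTED_FAQ[q]
        return "You need to log in first to reveal the answer."
    return "Sorry, I don't know that one."
```

The design choice worth noting is that the bot, not the front end, enforces the check: even if an attacker scripts requests directly against the chat endpoint, restricted answers stay behind the token.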
Dr. Michal Podpora, an artificial intelligence expert and assistant professor at the Opole University of Technology in Poland, shared his security concerns about chatbots, saying that they present “huge” security risks.
“When chatbots’ AI eventually gets access to the information systems of a company, there will be more (risks) and they will be even bigger,” said Podpora.
In a recent paper, Podpora discussed extending a robot’s or chatbot’s AI by making it part of a smart infrastructure system; bots that aren’t properly integrated into such a system pose risks.
“If a company has the most basic (“dumb”) chatbot with no access to any systems, that’s not so interesting. However, if a company has a chatbot that is able to make decisions or to influence processes, to read/change/erase data, or even transfer money, that becomes a challenge to secure,” Podpora details.
Podpora thinks most companies are unprepared for a chatbot attack. Securing information from customers who might misuse the chatbot, such as entering a credit card number, is the easy part.
“It is just an information system. It can be layered, secured, and protected like any other one,” Podpora explains. It is when a bad actor gets involved that the risks increase.
SecurityRoundtable predicts that the increasing popularity of chatbots will offer “new opportunities for phishing, hacking, and general mischief.” While there hasn’t been a major chatbot attack yet, this is a trend MSPs must remain vigilant about. If a chatbot isn’t secure, it’s a mess waiting to happen. If a chatbot is secure, it can be an engaging, dynamic way for businesses and customers to interact. Just ask SmooSmoo.
Photo: Panuwat Phimpha / Shutterstock