
AI chatbots, the most recently popular of which is ChatGPT, are trained on enormous troves of data gathered by web crawlers. Unluckily for corporations that like keeping their proprietary data under proverbial lock and key, those crawlers can sweep up anything a weak security protocol leaves exposed, and anything a user types into a chatbot may be retained as well. Presumably, none of this is an intentional feature of the average chatbot, but it’s a risk that companies and organizations should consider when updating security protocols going forward, especially if users within the company interact with a chatbot.

To maximize data security, it’s important for organizations to monitor their security protocols and best practices. Any weakness is a potential vector for chatbot incursion. Even if its intentions are benevolent, a chatbot can and will disseminate any information it has ingested if that information proves useful or relevant to its tasks. For example, if you have the Krabby Patty secret formula, you don’t need chatbots churning out mysteriously identical recipes and emailing them to Plankton.

The Rise of the AI Chatbot

Originally, chatbots were intended to perform basic conversational functions, but they largely depended on the user for the bulk of the exchange. Since ELIZA was created in the 1960s, the bots have become far more sophisticated, and in recent years they have grown from an annoying widget on your bank’s Contact Us page to large language models refined with reinforcement learning, like ChatGPT. ChatGPT and similar bots can generate essays and other long-form writing, provide project ideas, explain complex topics, create content in multiple languages, and complete many other tasks.

This is already changing the way some industries operate. For example, if you hire a freelancer to write copy for your website, it’s very possible a chatbot contributed to the piece (hey, not this one!). Musicians can use chatbots to write lyrics, and AI art generators, built on much the same technology as chatbots, are seeing a rise in popularity. For many users, the chatbots do the heavy lifting and time-consuming work of getting content started, and then the writer or artist revises as needed. While chatbots’ original function was largely conversational, their surging popularity is at least in part due to this newfound utility.

The Security Risks of AI Chatbots

Handy though chatbots may be, they come with a few security flaws that can impact your organization. Amazon has found possible evidence that ChatGPT, while being used to help write code, had absorbed sensitive and proprietary data. One of the first red flags involved interview questions: when presented with questions designed for a software engineer, ChatGPT responded with correct answers that were suspiciously similar to Amazon’s internal answer key. Evidently, Amazon’s data had found its way inside the chatbot.

A lawyer for Amazon later warned employees not to share any more proprietary code with chatbots, to minimize the likelihood that the bots would disseminate it. This could easily happen to your organization. If your developers use chatbots to help write code for your website, web apps, or other company properties, there is a risk that those chatbots will serve that same code (along with whatever else your developers provided) to other companies.

According to CPO Magazine, in addition to the privacy risks of web crawling and content generation, using chatbots to help write code can introduce detrimental vulnerabilities. Chatbots spit out answers without qualifiers and with high confidence. Unless the person using the chatbot reviews the generated code carefully, mistakes can slip through, and where there are mistakes in code, there are exploitable security flaws.
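As a hypothetical illustration (the snippet below is ours, not from any real chatbot transcript), consider a database lookup of the kind a chatbot might confidently produce. The first version builds its SQL query with string formatting, a classic injection flaw; the reviewed version closes the hole with a parameterized query.

```python
import sqlite3

# Hypothetical chatbot-generated lookup: the query is built with string
# formatting, which allows SQL injection (try username = "' OR 1=1 --").
def find_user_unsafe(conn, username):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Reviewed version: a parameterized query lets the driver escape the
# input, eliminating the injection vector.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'sandy', 'sandy@example.com')")

print(find_user_safe(conn, "sandy"))          # [(1, 'sandy@example.com')]
print(find_user_unsafe(conn, "' OR 1=1 --"))  # leaks every row in the table
```

Both functions look equally plausible at a glance, which is exactly why generated code needs a human review before it ships.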

If your web application is built on a combination of open-source code and AI chatbot-generated code, it is vulnerable to both known exploits and novel errors introduced by an overly confident chatbot. As a result, your data may no longer be private or secure, which could create financial troubles or compliance violations down the line.

Although building more specific, targeted AI tools in the future could help solve some of these problems, for the moment, many people are using chatbots to save themselves time and labor. As this was not the chatbots’ intended purpose, and as there are no security standards for using them this way, it’s important to be cautious with chatbot-generated content. You might not have to pay a team of developers overtime if they can use a chatbot to speed things along as a deadline approaches, but you’ll have bigger (and more expensive) fish to fry if the chatbot’s code turns out to be flawed.

Managing Chatbot Security Risks

Until universal security standards can be developed and implemented, there are some things you can do to reduce your risk. Minimize individuals’ access to data, and create clear company policies for what can or cannot be shared with a chatbot. Protect internal data (especially consumer data and personal health information if applicable) with encryption, access controls, and data loss prevention technology.
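As a minimal sketch of what enforcing such a policy could look like (the patterns, names, and example strings below are illustrative assumptions, not any particular DLP product’s API), an organization might screen prompts for obviously sensitive strings before they ever reach a chatbot:

```python
import re

# Illustrative patterns for data that company policy might forbid sharing
# with an external chatbot. Real DLP tooling is far more thorough; these
# regexes are assumptions made for the sketch.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal hostname": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return policy violations found in a prompt before it is sent to a
    chatbot; an empty list means the prompt looks clean."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Debug this: db.internal.example.com is down")
if violations:
    print("Blocked; prompt contains:", ", ".join(violations))
```

A regex screen like this is a seatbelt, not a substitute for proper data loss prevention tooling, but it turns a written policy into something concrete and auditable.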

Above all else, invest in training for your employees. Their use of chatbots is typically not malicious and may even be done with the company’s best interests in mind. However, if employees do not understand the risks and how to mitigate them, they can become non-malicious insider threats, and your proprietary data or organizational security may be compromised.

While AI tools may be a good move for your company, remember that chatbots were not built to generate production code or website content, and even where they manage it, their development is still in its early days. Tread cautiously as you use these tools, and treat the chatbots themselves as a potential security risk. Taking appropriate access control measures and training employees can go a long way toward managing your risk and keeping your organization secure.