
Do chatbots make it easier to commit cyber-crimes?

Today, Google announced Bard, a new conversational artificial intelligence technology set to rival OpenAI's popular AI service, ChatGPT. While artificial intelligence is not a new concept, fears about generative AI have grown more warranted as several cybersecurity firms have examined ChatGPT's threat model. Their findings show that the chatbot could be used by bad actors to craft phishing emails, write malicious code, automate attacks, and develop specialized skills for hacking IT systems. Used well, artificial intelligence has many benefits. However, such advanced technology also carries risks that your cybersecurity defenses need to be able to meet head on.

“At a basic level, I have been able to write some great phishing lures with it, and I expect it could be utilized to have more realistic interactive conversations for business email compromise and even attacks over Facebook Messenger, WhatsApp, or other chat apps,” Wisniewski told TechCrunch.