IT Leaders Predict ChatGPT-Enabled Cyberattacks Are Imminent
Can we expect the popular artificial intelligence chatbot ChatGPT to be used against our organizations in the form of AI-infused cyberattacks in the next 12 to 24 months?
The answer is a resounding yes, according to new research conducted by BlackBerry.
This is just one of several insights from a January 2023 survey of 1,500 IT and cybersecurity decision-makers across North America, Australia, and the UK. The research reveals that worries about ChatGPT expressed on social media platforms are widespread among those managing our technology and cyber defenses.
Research on ChatGPT and Cyberattacks
One of the key findings uncovered in the BlackBerry research on ChatGPT and cyberattacks is that 51% of IT professionals predict we are less than a year away from a successful cyberattack being credited to ChatGPT. Some think that could happen in the next few months. And more than three-quarters of respondents (78%) predict a ChatGPT-credited attack will occur within two years.
In addition, a clear majority (71%) believe nation-states may already be leveraging ChatGPT for malicious purposes.
Although nearly three-quarters of respondents believe ChatGPT will be used mainly for “good,” they also shared fears that the AI chatbot will be used for a variety of malicious purposes.
Here are the top five ways they think threat actors may harness the AI chatbot:
- To help hackers craft more believable and legitimate-sounding phishing emails (53%)
- To help less experienced hackers improve their technical knowledge and develop their skills (49%)
- To spread misinformation and disinformation (49%)
- To create new malware (48%)
- To increase the sophistication of threats/attacks (46%)
BlackBerry’s Chief Technology Officer believes these concerns are valid, based on what we’re already seeing. It’s been well documented that people with malicious intent are testing the waters, and over the course of this year we expect hackers to get a much better handle on how to use AI-enabled chatbots for nefarious purposes.
In fact, both cybercriminals and cyberdefense professionals are actively investigating how ChatGPT can help them achieve their intended outcomes, and they will continue to do so. Time will tell which side is ultimately more successful.
Should ChatGPT and Similar AI Tools Be Regulated?
Considering the concerns around the growing power of publicly available AI bots and tools, our survey also asked the following question: “To what extent, if at all, do you think that governments have a responsibility to regulate advanced technologies like ChatGPT?”
95% of respondents say governments have at least some responsibility to regulate these types of technologies, with 85% rating that level of responsibility as either “moderate” or “significant.” But while they are clearly looking to regulation for relief from the anticipated threat, the IT professionals we surveyed are not waiting. The majority (82%) tell us they are already planning actions of their own to defend their organizations against AI-augmented cyberattacks.
Fighting AI Threats with AI Defenses
We also asked respondents if cybersecurity technology is currently keeping pace with innovation in cybercrime. A substantial number of those surveyed said the answer is yes — for now. This includes 54% of Canadian respondents, 48% of U.S. IT leaders, and 46% of IT and cybersecurity decision-makers in the UK.
However, most are keenly aware that new AI-powered cyberthreats will demand cyber defenses built on AI-powered tools. The survey results reveal that the majority (82%) of IT decision-makers would consider investing in AI-driven cybersecurity in the next two years, and almost half (48%) would consider investing before the end of 2023. This reflects an encouraging trend toward replacing obsolete signature-based security solutions with more effective, AI-driven endpoint protection technology that is better able to prevent new and increasingly sophisticated threats.
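To make that contrast concrete, here is a minimal, purely illustrative Python sketch of the difference between signature-based detection, which only flags files whose hashes are already catalogued, and an AI-style approach that scores behavioral and structural features. The hash value, feature names, weights, and thresholds below are hypothetical assumptions made for this example; they do not represent BlackBerry’s or any vendor’s actual implementation.

```python
# Illustrative comparison: signature-based detection vs. a toy feature-scoring model.
# All values below are placeholders for the sake of the example.
import hashlib
import math

# --- Signature-based approach: flag only files whose hash is already known ---
KNOWN_BAD_HASHES = {
    "44d88612fea8a8f36de82e1278abb02f",  # placeholder hash of a previously catalogued sample
}

def signature_detect(file_bytes: bytes) -> bool:
    """Return True only if the file's hash matches a known signature."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# --- AI-style approach: score structural/behavioral features instead of exact matches ---
def entropy(data: bytes) -> float:
    """Shannon entropy of the bytes; packed or encrypted payloads tend to score high."""
    if not data:
        return 0.0
    total = len(data)
    counts = [data.count(b) for b in set(data)]
    return -sum(c / total * math.log2(c / total) for c in counts)

def model_score(file_bytes: bytes) -> float:
    """Toy stand-in for a trained classifier: combine a few features into a 0-1 risk score."""
    feats = {
        "high_entropy": entropy(file_bytes) > 7.0,                      # looks packed/encrypted
        "suspicious_strings": b"powershell -enc" in file_bytes.lower(), # encoded command launcher
        "tiny_but_executable": file_bytes[:2] == b"MZ" and len(file_bytes) < 2048,
    }
    # A real model learns these weights from labelled data; here they are guesses.
    weights = {"high_entropy": 0.4, "suspicious_strings": 0.4, "tiny_but_executable": 0.2}
    return sum(weights[name] for name, hit in feats.items() if hit)

if __name__ == "__main__":
    sample = b"MZ" + b"powershell -enc SQBFAFgA..."  # a never-before-seen payload
    print("signature match:", signature_detect(sample))  # False: no catalogued hash
    print("model risk score:", model_score(sample))      # > 0: flagged by its features
```

The point of the sketch is that a never-before-seen payload slips past the signature lookup entirely, while a feature-based score can still flag it, which is why AI-driven tools are better positioned against novel, machine-generated threats.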
BlackBerry sees this as a timely pivot: as ChatGPT and similar platforms mature, the hackers putting them to use will make it progressively more difficult to protect our organizations without also using AI defensively to level the playing field.
There are many benefits to be gained from this kind of advanced technology, and we’re only beginning to scratch the surface. That’s exciting. But we must take into account that threat actors also see the benefits, and they will waste no time in adding these new technologies to their malicious arsenals.