The impact of open source AI on hacking: the FBI’s point of view
Hackers have found a new weapon to make their illicit activities more effective: artificial intelligence (AI). According to the FBI, cybercriminals are increasingly exploiting open source AI models to sharpen their tools and deceive Internet users.
Use of AI by cybercriminals
Cybercriminals use chatbots built on large language models, such as ChatGPT, Google Bard, or Claude, to facilitate their malicious activities. By crafting carefully worded prompts, they manage to bypass the security measures and restrictions put in place by the creators of these tools. The FBI has sounded the alarm, while noting that the AI models most popular with the general public are not the ones hackers favor.
Open source models, a tool favored by hackers
Hackers prefer free, customizable open source AI models to those controlled by companies. These open source models, accessible to anyone on the Internet, can easily be repurposed to generate illegal content. They are also lighter and require less computing power than the large models developed by the giants of the sector, so they can run locally on a computer or even a smartphone, an advantage for developers and cybercriminals alike.

It is also worth noting that criminals use custom AI models developed by other hackers. The dark web hosts many chatbots designed by hackers to generate illegal content, such as malware. Recently, two chatbots dedicated exclusively to criminal activities appeared on black markets: WormGPT and FraudGPT. These chatbots are designed to write persuasive phishing emails, produce malware such as ransomware, and orchestrate attacks. They are sold at a high price on the dark web.
The different uses of AI by cybercriminals
Hackers use AI in various ways to carry out their illicit activities. In particular, they use AI to design phishing pages that imitate the interfaces of official platforms. They also exploit generative AI to create polymorphic viruses, which modify their code with each execution, making them harder for traditional antivirus software to detect.

Scammers likewise use deepfake technology to extort money from their victims. They generate falsified images and videos depicting their targets in compromising situations, then use this content to harass them by posting it on social networks or pornographic sites. Finally, hackers do not hesitate to use voice-cloning AI to manipulate victims over the phone: by imitating the voices of loved ones, they persuade targets to trust them and hand over money.
The future of AI and hacking
The FBI predicts an increase in the criminal use of AI as the technology becomes more widely available. It is therefore essential to develop prevention and protection strategies to counter the malicious use of AI. Efforts should focus on securing open source AI models and implementing more robust safeguards against manipulation. AI has the potential to benefit society in many areas, but ensuring its responsible and ethical use is imperative.
Source: PCMag
