According to research by threat intelligence firm Check Point Research, cybercriminals are using OpenAI’s ChatGPT to build malware, dark web sites, and other tools for launching cyberattacks. The firm discovered a thread on a well-known hacking forum in which a hacker claims to be testing the AI chatbot to “recreate malware strains.”
Using OpenAI’s Chatbot to Code Malware
In one post, a user shared Python code, written with ChatGPT’s help, that he claimed could encrypt files. He wrote: “The python file stealer that searches for common file types that can self-delete after the files are uploaded or if any errors occur while the program is running, therefore removing any evidence.” Another user employed ChatGPT to create a dark web marketplace script that could be put to a number of uses, including selling personal information obtained in data breaches, illegally obtained payment card data, or cybercrime-as-a-service products.
Notably, cybersecurity experts had already predicted that crime-as-a-service would be a top cybersecurity threat in 2023. Cybersecurity expert Adam Levin said, “The cyber-crime syndicates behind current as-a-service platforms are set to grow over the next 12 months as they can make more money enabling entry-level cyber criminals to commit crimes than they can directly targeting victims and with less risk.”
New York City Schools Block Access to ChatGPT
New York City’s public schools have blocked access to the AI chatbot over concerns about negative impacts on student learning and the safety and accuracy of its content. The city’s Department of Education said, “ChatGPT is restricted on New York City Public Schools’ networks and devices. While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”