Acting on key information from Microsoft's Threat Intelligence team, the AI research organization terminated specific accounts associated with hacking groups that were misusing its large language model (LLM) services for malicious purposes.
In a separate report, Microsoft provides more details on how and why these advanced threat actors used ChatGPT.
Activity associated with the following state-backed threat groups was terminated on the platform:

- Charcoal Typhoon and Salmon Typhoon (China)
- Crimson Sandstorm (Iran)
- Emerald Sleet (North Korea)
- Forest Blizzard (Russia)
Generally, the threat actors used the large language models to enhance their strategic and operational capabilities, including reconnaissance, social engineering, evasion tactics, and generic information gathering.
None of the observed cases involve the use of LLMs for directly developing malware or complete custom exploitation tools.
Instead, the coding assistance observed concerned lower-level tasks such as requesting evasion tips, writing scripts, disabling antivirus software, and generally optimizing technical operations.
In January, a report from the United Kingdom's National Cyber Security Centre (NCSC) predicted that by 2025 the operations of sophisticated advanced persistent threats (APTs) would benefit from AI tools across the board, especially in developing evasive custom malware.
Last year, however, according to OpenAI's and Microsoft's findings, AI provided an uplift only in attack segments such as phishing and social engineering, while the rest of the observed activity was largely exploratory.
OpenAI says it will continue to monitor and disrupt state-backed hackers using specialized monitoring tech, information from industry partners, and dedicated teams tasked with identifying suspicious usage patterns.
"We take lessons learned from these actors' abuse and use them to inform our iterative approach to safety," reads OpenAI's post.
"Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards," the company added.
Source: bleepingcomputer.com