OpenAI blocks state-sponsored hackers from using ChatGPT

February 22, 2024

OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT.



After receiving key information from Microsoft's Threat Intelligence team, the AI research organization took action against specific accounts associated with hacking groups that were misusing its large language model (LLM) services for malicious purposes.


In a separate report, Microsoft provides more details on how and why these advanced threat actors used ChatGPT.


Activity associated with the following threat groups was terminated on the platform:


  1. Forest Blizzard (Strontium) [Russia]: Utilized ChatGPT to conduct research into satellite and radar technologies pertinent to military operations and to optimize its cyber operations with scripting enhancements.
  2. Emerald Sleet (Thallium) [North Korea]: Leveraged ChatGPT for researching North Korea and generating spear-phishing content, alongside understanding vulnerabilities (like CVE-2022-30190 "Follina") and troubleshooting web technologies. 
  3. Crimson Sandstorm (Curium) [Iran]: Engaged with ChatGPT for social engineering assistance, error troubleshooting, .NET development, and developing evasion techniques. 
  4. Charcoal Typhoon (Chromium) [China]: Interacted with ChatGPT to assist in tooling development, scripting, comprehending cybersecurity tools, and generating social engineering content. 
  5. Salmon Typhoon (Sodium) [China]: Employed LLMs for exploratory inquiries on a wide range of topics, including sensitive information, high-profile individuals, and cybersecurity, to expand their intelligence-gathering tools and evaluate the potential of new technologies for information sourcing.


Generally, the threat actors used the large language models to enhance their strategic and operational capabilities, including reconnaissance, social engineering, evasion tactics, and generic information gathering.


None of the observed cases involve the use of LLMs for directly developing malware or complete custom exploitation tools.

Instead, the actual coding assistance concerned lower-level tasks such as requesting evasion tips, scripting, disabling antivirus software, and generally optimizing technical operations.


In January, a report from the United Kingdom's National Cyber Security Centre (NCSC) predicted that by 2025 the operations of sophisticated advanced persistent threats (APTs) will benefit from AI tools across the board, especially in developing evasive custom malware.


Last year, though, according to OpenAI's and Microsoft's findings, AI provided an uplift in attack segments such as phishing and social engineering, while the rest of the observed activity remained largely exploratory.


OpenAI says it will continue to monitor and disrupt state-backed hackers using specialized monitoring tech, information from industry partners, and dedicated teams tasked with identifying suspicious usage patterns.
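OpenAI does not disclose how its monitoring works. Purely as an illustration of what "identifying suspicious usage patterns" might start from, here is a hypothetical sketch of rule-based prompt flagging; every keyword list, pattern name, and threshold below is invented for the example, and a real system would rely on far richer signals (account metadata, behavioral baselines, partner intelligence):

```python
import re

# Hypothetical indicator patterns, invented for illustration only.
SUSPICIOUS_PATTERNS = {
    "evasion": re.compile(
        r"\b(bypass|evade|disable)\b.*\b(antivirus|edr|detection)\b", re.I
    ),
    "phishing": re.compile(r"\b(spear[- ]?phish|credential harvest)\w*\b", re.I),
    "exploit": re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.I),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of the rule categories this prompt matches."""
    return [name for name, rx in SUSPICIOUS_PATTERNS.items() if rx.search(prompt)]

def score_account(prompts: list[str], threshold: int = 2) -> bool:
    """Escalate an account for human review once enough prompts match rules."""
    hits = sum(1 for p in prompts if flag_prompt(p))
    return hits >= threshold
```

Such keyword heuristics alone would be trivially evaded; in practice they would only seed a pipeline that combines usage telemetry with external threat intelligence, which is what the reported Microsoft collaboration suggests.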


"We take lessons learned from these actors' abuse and use them to inform our iterative approach to safety," reads OpenAI's post.


"Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards," the company added.



Source: bleepingcomputer.com

