Microsoft disrupts cybercrime network exploiting stolen AI credentials

Microsoft takes legal action against foreign cybercriminals exploiting AI for harmful content.

Microsoft has filed a lawsuit aimed at disrupting cybercriminal operations that abuse generative AI technologies, according to a Jan. 10 announcement.

The legal action, unsealed in the Eastern District of Virginia, targets a foreign-based threat group accused of bypassing safety measures in AI services to produce harmful and illicit content.

The case highlights cybercriminals’ persistence in exploiting vulnerabilities in advanced AI systems.

Malicious use

Microsoft’s Digital Crimes Unit (DCU) said the defendants developed tools that exploited stolen customer credentials to gain unauthorized access to generative AI services. Access to these altered AI capabilities was then resold, complete with instructions for malicious use.

Steven Masada, Assistant General Counsel at Microsoft’s DCU, said:

“This action sends a clear message: the weaponization of AI technology will not be tolerated.”

The lawsuit alleges that the cybercriminals’ activities violated US law and Microsoft’s Acceptable Use Policy. As part of its investigation, Microsoft seized a website central to the operation, a move it says will help uncover those responsible, disrupt their infrastructure, and reveal how these services are monetized.

Microsoft has enhanced its AI safeguards in response to the incidents, deploying additional safety mitigations across its platforms. The company also revoked access for malicious actors and implemented countermeasures to block future threats.

Combating AI misuse

This legal action builds on Microsoft’s broader commitment to combating abusive AI-generated content. Last year, the company outlined a strategy to protect users and communities from malicious AI exploitation, particularly targeting harms against vulnerable groups.

Microsoft also highlighted a recently released report, “Protecting the Public from Abusive AI-Generated Content,” which illustrates the need for industry and government collaboration to address these challenges.

The statement added that Microsoft’s DCU has worked to counter cybercrime for nearly two decades by leveraging its expertise to tackle emerging threats like AI abuse. The company has emphasized the importance of transparency, legal action, and partnerships across the public and private sectors to safeguard AI technologies.

According to the statement:

“Generative AI offers immense benefits, but as with all innovations, it attracts misuse. Microsoft will continue to strengthen protections and advocate for new laws to combat the malicious use of AI technology.”

The case adds to Microsoft’s growing efforts to reinforce cybersecurity globally, ensuring that generative AI remains a tool for creativity and productivity rather than harm.


Sophie Walker

Artificial intelligence (AI), Cybercrime, Microsoft
