Cybercrime costs are spiraling to unprecedented levels. According to the FBI’s Internet Crime Complaint Center, losses from cybercrime reached $12.5 billion in 2024, up more than 20% year-over-year. And at the center of the storm lies phishing—still the number one attack vector. Now, with generative AI tools in play, what once looked like clumsy “Nigerian Prince” scams have evolved into eerily convincing corporate breaches.
In Q2 2025, cybersecurity firm Proofpoint reported a 170% rise in AI-driven phishing emails targeting Fortune 500 companies, the sharpest increase since 2016. The attacks are no longer riddled with grammatical errors or suspect links. Instead, they read like polished business memos, often tailored with insider-level detail about individual companies.
The controversy is clear: AI isn’t just helping businesses—it’s helping hackers too. Microsoft, as the largest enterprise IT vendor and email provider to more than one billion people, is under scrutiny for its role in both enabling and defending against AI-enhanced cyberattacks. Investors worry about ballooning security costs, while enterprise CISOs see their defenses stretched thin by threats that look almost indistinguishable from legitimate correspondence. Employees across industries, meanwhile, are on the frontlines—one careless click could trigger millions in damages.
The Data
The statistics tell the story of a cyber landscape transformed by AI.
- A 2025 Verizon Data Breach Investigations Report found that 92% of breaches still begin with phishing attacks, with AI-generated messages boosting click-through rates by nearly 40% compared to traditional phishing emails.
- According to a Microsoft Threat Intelligence report, AI-assisted phishing scams breached over 9 million corporate email accounts in 2024, up from 3.4 million in 2022.
- Cybersecurity firm Proofpoint reported that deepfake audio fraud attempts increased 250% in 2024, with AI voice impersonation targeting CFOs and finance departments.
- IBM’s 2025 Cost of a Data Breach Report estimates the average phishing-related breach now costs enterprises $5.1 million, up 29% from the prior year.
- According to Gartner, global enterprises are expected to spend $219 billion on cybersecurity this year, up 14% from 2024. Gartner also notes that much of the increase traces back to “AI-powered phishing defense layers.”
- The U.S. Federal Trade Commission reported that Americans lost $2.7 billion to phishing-related fraud in 2024, nearly double the 2022 total.
Here’s the thing: Microsoft is both the gatekeeper (via Outlook, Teams, and Azure-hosted security tools) and the exposed flank. Attackers refine generative AI tools against Microsoft’s own products. For example, hackers have used ChatGPT-like systems to craft perfectly fluent spear-phishing emails in multiple languages, targeting multinational organizations. The catch? These same victims are already Microsoft customers paying for Defender for Office 365—the company’s flagship anti-phishing tool.
The People
To understand this tension, you have to look at the people closest to it.
A former Microsoft security engineer told Forbes: “We’re in an arms race. Every defensive update we push gets stress-tested by attackers within 48 hours. The uncomfortable truth is attackers innovate faster because they don’t have compliance or bureaucracy to slow them down.”
On the enterprise side, CISOs are sounding alarms. “My phishing simulations used to have a 5% click rate,” said the CISO of a large financial services firm, who requested anonymity. “Now it’s 17%. That’s not because my staff got dumber overnight—it’s because the emails are indistinguishable from genuine messages.”
Cybercriminals, naturally, remain silent—at least in public. But leaked Telegram channels show AI tooling now being openly marketed as a service: bundles of phishing kits, deepfake generators, and ready-to-use scripts sold for a few hundred dollars in cryptocurrency. “What used to require a skilled hacker with good English writing skills can now be done by anyone with $30 to buy an AI tool,” one dark-web researcher noted.
Meanwhile, Microsoft executives continue to strike an optimistic tone. In April, Vice Chair and President Brad Smith said: “AI poses extraordinary security challenges, but also extraordinary opportunities. We believe we can use AI to defeat AI.” Still, that message lands differently with CISOs struggling under skyrocketing cyber insurance premiums and CEOs taking blame after AI-assisted hacks.
The Fallout
The real-world consequences are already hitting.
Take the case of Retalix Pharma, a mid-sized European pharmaceutical firm. It disclosed earlier this year that attackers used an AI-generated email appearing to come from its CEO to trick a finance executive into transferring $9 million to a fraudulent account. The email mimicked the CEO’s writing style, down to his recurring typos and sign-off quirks. Retalix has since slashed IT budgets in other areas just to bolster Microsoft 365 threat protection licenses—a trend analysts say is becoming increasingly common.
Microsoft is also under pressure from regulators. The U.K.’s National Cyber Security Centre is investigating whether cloud-email vendors are doing enough to address AI-enabled phishing. The European Commission opened hearings on “AI abuse in cybercrime,” with Microsoft executives quietly testifying that responsibility is “shared between customer education and platform safeguards.” But many CISOs find that line too convenient.
Investors sense the stakes. Microsoft’s cybersecurity revenue (including Defender, Entra, and Sentinel) hit $24 billion in fiscal 2024, making it the fastest-growing segment of the company. But here’s the paradox: those revenue streams rely on rising threat intensity. A cynical read suggests a perverse incentive: the worse phishing gets, the more Microsoft profits from selling defense software.
For employees, the fallout is more personal. One slip—opening a malicious invoice, clicking a corrupted Teams link—can end careers. Training programs are becoming relentless. Tech employees report “phishing test fatigue,” where quarterly corporate simulations feel indistinguishable from the real thing. “You don’t know what’s real anymore,” said one mid-level accountant at a U.S. manufacturing firm. “It’s constant paranoia.”
Meanwhile, cyber insurers are pulling back. Marsh, a global insurance broker, says premiums for phishing-related coverage jumped 40% year-over-year, and some insurers have begun excluding AI-enabled incidents from coverage entirely. That forces companies to self-fund incident response at precisely the moment when attacks are multiplying.
Closing Thought
Microsoft, more than any other company, stands as both shield and target in this AI-phishing saga. Its tools guard the inboxes of the world’s biggest corporations, yet its platforms also carry the majority of attack traffic. The battle is escalating faster than boards and employees can adapt, and AI is only tilting the playing field further in criminals’ favor.
So the final question becomes: Can AI really save us from AI-driven phishing, or are we entering a permanent new era where trust in email, chat, and voice is forever broken?