When Everyone Can Attack: AI and the New Threat Landscape

Written by Matt Hogan | April 24, 2026

On April 7, Anthropic, the company behind the Claude AI model, made an unusual announcement. It had built a new model called Claude Mythos Preview that was so effective at finding and exploiting software vulnerabilities that the company refused to release it publicly. Instead, it made the model available only to a handpicked group of roughly 40 technology and cybersecurity companies through a new initiative called Project Glasswing.

The details are striking. According to Anthropic's own technical disclosures, Mythos Preview autonomously discovered thousands of zero-day vulnerabilities in every major operating system and every major web browser. Independent reporting and Anthropic's Frontier Red Team blog documented specific findings, including a 27-year-old bug in OpenBSD's SACK TCP implementation that had survived decades of expert human review, and Linux kernel exploit chains that achieved complete system compromise. During testing, the model broke out of its sandbox environment and constructed a multi-step exploit to access the broader internet on its own.

These are capabilities that, until very recently, existed only within the most well-resourced nation-state intelligence services and a handful of elite cybercriminal organizations. In interviews accompanying the announcement, Logan Graham, head of Anthropic's frontier red team, estimated that comparable capabilities will be available from other AI labs within six to eighteen months.

For intelligence teams (the analysts, GSOC operators, and investigators responsible for understanding the threat landscape and keeping their organizations ahead of it), this isn't just a cybersecurity story. It's a story about how the population of threat actors they need to monitor is about to change dramatically. And not only in cyber. The same AI capabilities that enable autonomous vulnerability exploitation also enable hyper-realistic executive impersonation, automated surveillance and reconnaissance of physical targets, AI-powered social engineering at unprecedented scale, and the kind of detailed operational planning that used to require professional intelligence tradecraft. The barrier to entry isn't just dropping for hackers. It's dropping for stalkers, fraudsters, extortionists, corporate spies, and anyone with hostile intent and an internet connection.

 

The Barrier To Entry Has Already Collapsed

The conversation about AI lowering the barrier for cybercrime is not theoretical. It's happening now, and the evidence is accumulating faster than many organizations are adjusting their intelligence posture.

In February 2026, Amazon's threat intelligence team published findings on a campaign that compromised over 600 FortiGate devices across 55 countries. The threat actor behind the campaign had limited technical capabilities. What they did have was access to multiple commercial generative AI services, which Amazon described as enabling them to "implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities." Independent security researchers who analyzed the threat actor's exposed infrastructure identified the specific AI tools used as DeepSeek and Anthropic's Claude, finding folders containing AI-generated attack plans and cached prompt states. Amazon described the operation as an "AI-powered assembly line for cybercrime."

The threat actor wasn't a nation-state. They weren't a sophisticated ransomware gang with years of experience. They were, by Amazon's assessment, a Russian-speaking, financially motivated individual or small group who used AI augmentation to punch far above their weight class. And they successfully compromised Active Directory environments, extracted complete credential databases, and targeted backup infrastructure across multiple organizations.


That case is not an outlier. It's a preview.

Microsoft reported in April 2026 that AI is "making the capabilities of sophisticated actors available to everyone," describing a shift toward modular, subscription-based cybercrime ecosystems where phishing templates, attack infrastructure, and exploitation tools are composable, scalable, and available to anyone willing to pay. Cloudflare's 2026 Threat Intelligence Report echoed the same finding in its headline conclusion: "the barrier to entry for sophisticated cybercrime has collapsed."

 

What The New Threat Actor Population Looks Like

For intelligence teams, the shift isn't just about volume, though volume is certainly increasing. It's about the composition and behavior of the threat actor population.

Historically, the threat landscape was broadly segmented into categories that intelligence teams could build profiles and collection strategies around: nation-state actors with long-term strategic objectives, organized cybercriminal groups with established tooling and infrastructure, hacktivists with ideological motivations, and opportunistic individuals with limited capabilities. Each category had characteristic behaviors, communication patterns, operational signatures, and places where they congregated online.

AI disrupts this segmentation in several important ways.

 

The skill gap between categories is compressing.

When an individual with limited technical expertise can use AI to generate working exploit code, automate reconnaissance, and execute a multi-country campaign against enterprise infrastructure, the behavioral distinction between "opportunistic individual" and "organized cybercriminal group" becomes less useful. The individual looks like a team. The team looks like a nation-state operation. Attribution becomes harder. Profile-based collection strategies become less reliable.

 

The volume of actors in the "capable enough to matter" category is growing.

The number of people who intend to conduct a cyberattack has always been larger than the number who have the capability to execute one effectively. AI is closing that gap. Every person who previously had the motivation but lacked the skills to write an exploit, craft a convincing phishing campaign, or automate a credential-harvesting operation now has a force multiplier that puts those capabilities within reach. For intelligence teams, this means the aperture of who you're monitoring has to widen.

 

Threat actor operational tempo is accelerating.

AI doesn't just lower the skill floor; it raises the speed ceiling. Campaign planning that took weeks now takes days. Reconnaissance that required manual effort across multiple tools can be orchestrated through a single AI session. Microsoft Security Intelligence has observed that AI accelerates infrastructure discovery and persona development, "compressing the time between target selection and first contact." For intelligence teams, this means the window between the first observable indicators of a threat — the chatter, planning, tool development — and the execution of an attack is shrinking. The time available to detect, assess, and act on intelligence is being compressed.

 

The digital footprint of threat actors is changing.

AI-enabled threat actors may leave different traces than their predecessors. They may spend less time on forums asking for help (the AI provides the help). They may use different tools and infrastructure patterns. Their communication may be harder to distinguish from legitimate activity because AI generates more polished, more contextually appropriate content. At the same time, some new entrants to the threat landscape will leave more traces — they're less operationally disciplined, more likely to discuss their activities in semi-public channels, more likely to make mistakes that are visible to OSINT collection.

 

Beyond Cyber: AI Is Amplifying Physical, Operational, And Personal Threats

The conversation about AI-enabled threats has been dominated by vulnerability exploitation, malware, and ransomware. But the capabilities that AI provides to threat actors extend far beyond the digital domain. For organizations responsible for executive protection, duty of care, physical security, and operational continuity, the implications are just as profound and, in some ways, more difficult to defend against.

 

Executive Impersonation Has Become A Precision Weapon

In January 2024, a finance employee at the engineering firm Arup joined a video conference call with what appeared to be the company's Chief Financial Officer and several colleagues. The CFO directed a series of wire transfers. The employee complied. The transfers totaled approximately $25 million across 15 transactions to five different Hong Kong bank accounts. Every other participant on the call was a deepfake, an AI-generated video and audio replica created from publicly available footage of the real executives.

That case was an inflection point, but the trend has only accelerated. Deepfake-as-a-service platforms became widely available in 2025, making the technology accessible to threat actors of all skill levels. According to data from Cyble's 2025 Executive Threat Monitoring report, deepfake-driven fraud increased more than 700 percent year over year. Voice cloning now requires as little as 20 to 30 seconds of audio. Convincing video deepfakes can be created in approximately 45 minutes using freely available software.

The implications go beyond financial fraud. Attackers are now using deepfake technology to impersonate executives in multi-channel campaigns: a text message referencing an urgent internal matter, followed by a phone call that adds context, followed by a brief video meeting that seals credibility. Each touchpoint reinforces the last. AI enables this orchestration at a scale and level of sophistication that were previously impractical, and the scripts can adapt in real time based on the target's responses.

For intelligence teams, this means that monitoring for threats to executives can no longer be limited to direct physical threats or hostile social media posts. It now includes tracking the raw materials of impersonation campaigns: executive audio and video being scraped and shared in criminal communities; mentions of specific executives in dark web forums; deepfake tools being customized for specific targets; and reconnaissance activity suggesting someone is building a profile to impersonate a specific person.

 

AI-Powered Reconnaissance Makes Physical Targeting Easier

Intelligence professionals have long understood that the planning phase of a physical attack is where the attacker is most visible and most vulnerable to detection. Surveillance of a target's routines, mapping of physical security measures, and identification of access points and vulnerabilities all traditionally required time, physical presence, and a level of tradecraft that limited who could do it effectively.

AI is compressing that planning phase and reducing the skill required to execute it. Cyble's research into 2025 cybercriminal evolution documented that "threat actors now deploy automated systems that scrape social media, company websites, and public databases to build detailed target profiles. These systems identify vulnerable employees, map organizational hierarchies, and craft attacks tailored to specific individuals — all without human intervention." The same capabilities can be turned on physical targets: aggregating social media posts that reveal routines and locations, property records, corporate filings, travel patterns inferred from geotagged content, organizational charts that identify who has access to what, and satellite and street-level imagery of facilities. What previously required a trained surveillance operative spending days or weeks building a target profile can now be assembled in hours through AI-assisted OSINT aggregation.

This doesn't mean that every person with a grievance will conduct sophisticated physical surveillance. But it does mean that the individuals who are motivated enough to act now have tools that dramatically reduce the gap between intent and capability. A disgruntled former employee, a fixated individual targeting an executive, an activist group planning a disruption — all of them now have access to reconnaissance capabilities that would have been considered professional-grade a few years ago.

For organizations responsible for executive protection and facility security, this shifts the calculus. The assumption that only well-resourced, organized threat actors can conduct meaningful reconnaissance is no longer safe. Intelligence teams need to monitor for indicators of AI-assisted target research — not just traditional surveillance detection, but digital reconnaissance patterns that signal someone is building an operational picture of a person, a facility, or an event.

 

Social Engineering Scales Beyond the Inbox

The most commonly discussed application of AI in social engineering is phishing emails. But the broader implications for social engineering (the manipulation of people to gain access, information, or influence) extend well beyond email and into the physical domain.

AI-generated content can now produce convincing pretexts for physical access: fake credentials, fabricated correspondence from legitimate organizations, and persuasive cover stories that hold up under casual scrutiny. Voice cloning enables phone-based social engineering that can impersonate specific individuals known to the target. Deepfake video enables real-time impersonation on video calls. And AI-driven analysis of a target's communication style, organizational role, and personal relationships enables the kind of hyper-personalized social engineering that was previously the province of state-sponsored intelligence operations. Microsoft's threat intelligence team documented in March 2026 how threat actors are using AI to "write spear-phishing emails in multiple languages with native fluency" and to "eliminate grammatical errors and awkward phrasing caused by language barriers, increasing believability and click-through rates."

For organizations in sectors where physical access matters — critical infrastructure, healthcare, financial services, corporate campuses — the threat is not abstract. An attacker who can convincingly impersonate a vendor, a contractor, or a colleague over the phone or on a video call can use that deception to facilitate physical access, extract sensitive information, or manipulate operational processes.

Terrorist organizations and extremist groups also stand to benefit from these capabilities. Security researchers at Resecurity identified an uncensored dark web AI tool called DIG AI being adopted by both cybercriminals and extremist actors to generate fraud schemes, malware, and "extremist propaganda" without the safety controls that constrain mainstream AI platforms. AI tools can generate convincing recruitment materials, create realistic training content, produce propaganda with a level of production quality that previously required significant resources, and conduct target reconnaissance. The same tools that enable a cybercriminal to automate a phishing campaign enable an extremist to automate the identification and profiling of potential targets.

 

The Cyber-To-Physical Pipeline Is Accelerating

Perhaps the most important shift for intelligence teams to internalize is that cyber and physical threats are not separate categories that happen to coexist. They are increasingly stages in the same attack chain.

Data stolen through an AI-assisted cyber breach becomes the raw material for physical threats. Compromised executive personal information enables targeting, stalking, and impersonation. Breached building management and physical security systems enable physical access. Stolen employee records enable social engineering that facilitates everything from workplace violence to corporate espionage. Infrastructure attacks that begin as software exploits can cause physical harm — disrupting hospital systems, manipulating industrial controls, or disabling safety systems.

The Mythos disclosure makes this pipeline more urgent. When AI can autonomously discover and exploit vulnerabilities in the systems that control physical infrastructure — power grids, water treatment, transportation, building management — the distinction between a "cyber incident" and a "physical safety event" becomes meaningless. The attack starts in software and ends in the real world.

For intelligence teams, this means that cyber intelligence and physical security intelligence cannot continue to operate as separate disciplines with separate tools, separate analysts, and separate reporting chains. The threat actors are not respecting that boundary. The intelligence operation shouldn't either.

 

What This Means For OSINT and Intelligence Collection

The implications for intelligence teams are practical and immediate.

 

Collection strategies need to account for a broader, more diverse set of actors.

The days when tracking a relatively stable set of known threat actor groups provided adequate coverage are ending. Intelligence teams need to detect emerging actors — individuals or small groups with no prior track record who suddenly acquire dangerous capabilities through AI tooling. This requires monitoring not just the established dark web forums and criminal marketplaces where experienced operators congregate, but the broader ecosystem of channels where newcomers discuss, experiment with, and advertise their AI-augmented capabilities.
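As a rough illustration of what widening the aperture can look like in practice, here is a minimal sketch of a collection-source registry that flags coverage gaps across channel types. The channel taxonomy, source names, and the CollectionSource structure are all hypothetical, invented for this example rather than drawn from any particular platform.

```python
from dataclasses import dataclass

# Channel categories an AI-era collection plan should cover.
# This taxonomy is illustrative, not an industry standard.
REQUIRED_CHANNEL_TYPES = {
    "dark_web_forum",      # established criminal marketplaces
    "messaging_channel",   # Telegram/Discord-style semi-public channels
    "paste_site",          # code and credential dumps
    "social_media",        # open platforms where newcomers signal intent
    "niche_forum",         # AI-tooling and "uncensored LLM" communities
}

@dataclass
class CollectionSource:
    name: str
    channel_type: str
    actively_collected: bool

def coverage_gaps(sources: list[CollectionSource]) -> set[str]:
    """Return channel types with no active collection, i.e. blind spots."""
    covered = {s.channel_type for s in sources if s.actively_collected}
    return REQUIRED_CHANNEL_TYPES - covered

# Example: a legacy plan focused only on established forums.
legacy_plan = [
    CollectionSource("criminal-forum-A", "dark_web_forum", True),
    CollectionSource("marketplace-B", "dark_web_forum", True),
    CollectionSource("telegram-watch", "messaging_channel", False),
]

print(coverage_gaps(legacy_plan))
# Gaps include 'messaging_channel', 'paste_site', 'social_media',
# and 'niche_forum' (set ordering varies).
```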

 

Indicator-based detection alone is increasingly insufficient.

Traditional IOC matching — watching for known malicious IPs, domains, and file hashes — remains necessary but is increasingly insufficient against a threat population that can generate novel infrastructure and tooling rapidly with AI assistance. Intelligence teams need behavioral and intent-based detection: monitoring for patterns of activity that suggest attack planning, capability development, or target selection, regardless of whether specific technical indicators have been seen before.
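To make the distinction concrete, here is a minimal sketch contrasting the two approaches: a set lookup against known indicators versus a simple weighted score over a sequence of observed behaviors. The event names, weights, and threshold are invented for illustration; a production system would derive these from real telemetry and tuning, not hard-coded constants.

```python
# Known-bad indicators: necessary, but blind to novel infrastructure.
KNOWN_IOCS = {"203.0.113.7", "malware-c2.example", "badfile-hash-001"}

def ioc_match(indicator: str) -> bool:
    return indicator in KNOWN_IOCS

# Behavioral detection: score a sequence of events against patterns that
# suggest attack planning, regardless of whether indicators are known.
# Weights and threshold are illustrative placeholders, not tuned values.
BEHAVIOR_WEIGHTS = {
    "bulk_employee_profile_scraping": 3,
    "infrastructure_enumeration": 2,
    "exploit_tooling_discussion": 3,
    "target_facility_imagery_requests": 2,
}
ALERT_THRESHOLD = 5

def behavior_score(events: list[str]) -> int:
    return sum(BEHAVIOR_WEIGHTS.get(e, 0) for e in events)

observed = ["bulk_employee_profile_scraping", "exploit_tooling_discussion"]
print(ioc_match("198.51.100.4"))                     # False: never-seen indicator
print(behavior_score(observed) >= ALERT_THRESHOLD)   # True: the behavior flags it
```

The point of the sketch is the failure mode: the IOC lookup returns False on infrastructure that was generated yesterday, while the behavioral score fires on the pattern of activity itself.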

 

The convergence of cyber and physical threat intelligence is no longer optional.

As detailed above, AI-augmented threats do not respect the boundary between digital and physical domains. Data stolen through AI-assisted breaches fuels executive targeting, deepfake impersonation, and physical reconnaissance. Intelligence teams that maintain organizational separation between cyber intelligence and physical security functions will increasingly find themselves missing the connections that matter, and those missed connections are where the most damaging incidents originate.
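One way to operationalize that convergence is to correlate cyber and physical alerts on the entities they share (an executive, a facility, a system) so the connections surface automatically rather than depending on two teams comparing notes. The sketch below joins alerts on a shared entity field; the alert shape and field names are assumptions made for this example.

```python
from collections import defaultdict

# Minimal alert shape: (domain, entity, description). In practice these
# would arrive from separate cyber and physical security pipelines.
alerts = [
    ("cyber", "CFO", "executive video scraped in credential-dump forum"),
    ("physical", "CFO", "unknown caller probing travel schedule"),
    ("cyber", "HQ-building", "BMS vendor credentials in breach dataset"),
    ("physical", "warehouse-3", "fence-line loitering reported"),
]

def cross_domain_hits(alerts):
    """Group alerts by entity; return entities seen in BOTH domains."""
    by_entity = defaultdict(set)
    for domain, entity, _ in alerts:
        by_entity[entity].add(domain)
    return [entity for entity, domains in by_entity.items()
            if {"cyber", "physical"} <= domains]

print(cross_domain_hits(alerts))  # ['CFO'] - the connection that matters
```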

 

Dark web and deep web monitoring needs to evolve.

The tools and communities where AI-augmented threat actors operate may look different from traditional criminal forums. Jailbroken AI models and "uncensored" LLMs such as WormGPT variants are already being commercialized on dark web forums, with pricing ranging from approximately $50 per month to $220 for lifetime access. The Record reported that researchers at Cato Networks identified jailbroken versions of mainstream models, including Grok and Mixtral, being sold to cybercriminals through BreachForums. Discussions about AI-assisted attack techniques are happening across Telegram channels, Discord servers, and niche forums that may not be part of an organization's existing collection scope. The intelligence collection surface has expanded, and monitoring strategies need to expand with it.

 

Sentiment and narrative monitoring becomes a leading indicator for both cyber and physical threats.

As the barrier to execution drops across all threat categories, intent becomes an increasingly important signal. An individual or group that expresses hostility toward an organization, an executive, an industry, or a cause now has a shorter path from intent to action than ever before, whether that action is a cyberattack, a deepfake impersonation campaign, a physical disruption, or an act of targeted violence. Monitoring social media, forums, and alternative platforms for expressions of intent, not just completed actions, becomes more valuable as the gap between motivation and capability narrows across the board.
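A rough sketch of what intent-focused monitoring can look like: score collected posts for the co-occurrence of hostile language and references to protected assets, and escalate before any action is taken. The term lists and the matching logic are placeholders; a real deployment would rely on trained classifiers, entity resolution, and analyst review rather than bare keyword matching.

```python
# Placeholder term lists - illustrative only. Production systems would use
# trained classifiers and human review, not simple substring matching.
HOSTILITY_TERMS = {"pay for this", "take them down", "make them regret"}
ASSET_TERMS = {"ceo", "headquarters", "datacenter", "annual meeting"}

def intent_signal(post: str) -> bool:
    """Flag posts pairing hostile language with a protected asset."""
    text = post.lower()
    hostile = any(term in text for term in HOSTILITY_TERMS)
    targeted = any(term in text for term in ASSET_TERMS)
    return hostile and targeted

posts = [
    "They'll pay for this. I know when the CEO leaves the building.",
    "Frustrated with their support team again.",
]
for p in posts:
    print(intent_signal(p), "-", p[:50])
# True for the first post (hostility plus a named target); False for the
# second, which expresses frustration but no intent.
```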

 

The Intelligence Advantage In A Faster, Broader Threat Landscape

In a world where the time from intent to execution is compressing, across cyber, physical, and hybrid threat categories, the organizations that maintain an intelligence advantage will be those that detect threats at the earliest possible stage: when they're still being discussed, planned, and assembled, before they're launched.

This is fundamentally what OSINT platforms are positioned to provide: visibility into the digital spaces where threat actors communicate, plan, and signal their intentions, whether those intentions involve exploiting a software vulnerability, impersonating an executive, planning a physical disruption, or conducting reconnaissance on a facility. The value of that visibility is increasing, not decreasing, as AI reshapes the threat landscape across every domain.

Technical defenses, such as firewalls, endpoint detection, vulnerability management, and physical access controls, will catch many threats. But the threats they miss will increasingly be the ones where an AI-augmented actor moved faster than the patch cycle, used novel deepfake technology that no verification system was designed for, conducted digital reconnaissance that left no physical trace, or targeted a vulnerability — technical or human — that no one had thought to protect.

The intelligence layer (the ability to detect intent and preparation before the attack arrives, whether that attack is a zero-day exploit, a deepfake-enabled fraud campaign, or a physical threat to an executive) is the layer that provides the earliest warning. And in a landscape where everything is moving faster across every threat category, early warning is the difference between proactive defense and crisis response.

 

What Intelligence Teams Should Be Doing Now

The Mythos disclosure is a forcing function. It tells us with unusual clarity what the near future looks like: AI capabilities that can autonomously find and exploit vulnerabilities will proliferate. The question is not whether the threat landscape will change. It's whether your intelligence operation will be ready when it does.

 

Expand Your Collection Aperture.

If your monitoring is focused primarily on known criminal forums and established threat groups, you're building coverage for the threat landscape of 2024. The new actors are emerging in different places, with different patterns, and at a faster pace. Broaden your collection to include the channels where AI-augmented techniques are being discussed, traded, and demonstrated.

 

Invest In Behavioral & Intent Detection.

IOC matching catches known threats. Behavioral analysis catches emerging ones. AI-enabled threat actors generate novel indicators — novel infrastructure, novel tooling, novel communication patterns. Your intelligence operation needs the capability to detect threat behavior, not just threat artifacts.

 

Integrate Cyber & Physical Intelligence Workflows.

The organizational separation between cyber intelligence and physical security is increasingly a liability. AI-augmented threats produce consequences that cross domain boundaries — a cyber breach that enables executive impersonation, a deepfake campaign that facilitates physical access, a data theft that enables targeted violence. Ensure your intelligence analysts have visibility across both domains and that your workflows surface the connections between cyber indicators and physical threats.

 

Monitor For The Building Blocks Of Impersonation And Targeting Campaigns.

Executive protection in the AI era requires monitoring for new indicators: executive audio and video being scraped or shared in criminal communities, deepfake tools being customized for specific individuals, organizational data being aggregated in ways that suggest target profiling, and mentions of specific people or facilities in threat actor channels. The attack no longer starts when someone shows up at the building. It starts when someone begins assembling the digital materials to impersonate, research, or plan.
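As an illustration, monitoring for those building blocks can be expressed as a small rule table mapping precursor indicator types to severities for watchlisted principals. The indicator categories, severity labels, and names below are hypothetical, a sketch of the idea rather than a reference taxonomy.

```python
# Hypothetical mapping of impersonation precursors to alert severities.
IMPERSONATION_PRECURSORS = {
    "exec_audio_shared": "high",             # voice samples circulating
    "exec_video_scraped": "high",            # footage collected for deepfakes
    "deepfake_tool_customized": "critical",  # tooling aimed at one person
    "org_chart_aggregated": "medium",        # target-profiling activity
    "exec_mention_dark_web": "medium",
}

WATCHLIST = {"J. Doe (CFO)", "A. Smith (CEO)"}  # protected principals

def triage_precursor(person: str, indicator: str) -> str | None:
    """Return a severity if a watchlisted person hits a known precursor."""
    if person in WATCHLIST and indicator in IMPERSONATION_PRECURSORS:
        return IMPERSONATION_PRECURSORS[indicator]
    return None

print(triage_precursor("J. Doe (CFO)", "deepfake_tool_customized"))
# -> 'critical': the attack has started long before anyone joins a call.
```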

 

Accelerate Your Analytical Cycle.

When the time between the first observable signal and the attack execution is shrinking, your intelligence cycle needs to shrink with it. Automated triage, AI-assisted analysis, and streamlined dissemination aren't nice-to-haves — they're the mechanisms that keep your warning time ahead of the attacker's execution time. This applies equally to cyber threat indicators and physical security signals.
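A compressed intelligence cycle depends on triage that orders work by urgency automatically. Below is a minimal sketch using a priority queue keyed on severity and signal age; the severity levels, the aging rule, and the example signals are assumptions for illustration, not a prescribed methodology.

```python
import heapq

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage_queue(signals):
    """Yield signals so the most severe, then freshest, surface first.

    Each signal is (severity, age_hours, summary). Lower tuples sort
    first, so critical beats high, and newer beats older within a tier.
    """
    heap = [(SEVERITY_RANK[sev], age, summary)
            for sev, age, summary in signals]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)[2]

incoming = [
    ("medium", 2.0, "org chart aggregation detected"),
    ("critical", 0.5, "deepfake tool customized for CFO"),
    ("high", 6.0, "exploit chatter naming our VPN appliance"),
]
for summary in triage_queue(incoming):
    print(summary)
# deepfake tool customized for CFO
# exploit chatter naming our VPN appliance
# org chart aggregation detected
```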

 

Brief Your Leadership.

The Mythos disclosure is a concrete, credible data point that executives can understand. Use it. The investment case for expanded intelligence capabilities has never been clearer. The threat actor population is growing, the skill floor is dropping, and the time available to act on intelligence is shrinking. Your leadership needs to understand what that means for resource allocation and program priorities.

 

 

The Mythos model will remain restricted. But the capability curve it represents will not. AI is reshaping who can execute sophisticated attacks — cyber and physical alike — how fast they can do it, and how difficult they are to detect in advance. The same tools that enable autonomous vulnerability exploitation enable autonomous target profiling, hyper-realistic impersonation, and operational planning at a speed and scale that the threat landscape has never seen. For intelligence teams, the message is clear: the threat actor landscape you're monitoring today is not the one you'll be facing twelve months from now. The time to adapt your collection, analysis, and response capabilities — across every threat domain — is now, while the window is still open.