AI providers ban accounts, not users. That distinction matters more than it might seem.

When a provider detects misuse, whether it is an influence operation, an attempt to extract dangerous information, or large-scale model distillation, the standard response is to restrict the offending account. But accounts are cheap. A new email address, an anonymous SIM card, and a VPN are enough to start over. OpenAI has reported that in multiple disruption rounds against state-linked influence campaigns, banned accounts were “consistently replaced by new ones exhibiting similar usage patterns.” North Korean threat actors whose accounts were terminated were observed re-registering with different email addresses and resuming similar activities. Anthropic has documented comparable patterns.

This is not a failure of detection. It is a failure of enforcement.

Three specific gaps

Current enforcement has three key gaps.

First, users can create new accounts with the same provider after being restricted. Most consumer AI products require only an email address for basic access. Even OpenAI’s Verified Organisation program, the most ambitious AI-specific identity verification effort to date, has been characterised as a “baby step.”

Second, users can move to other providers. Chinese state-affiliated groups were documented using both OpenAI and Microsoft services for reconnaissance and scripting. Disrupting a threat actor’s access at one provider does not stop the underlying activity; actors simply migrate.

Third, users can distribute misuse across providers, splitting a malicious task into seemingly benign subtasks. Each interaction looks innocent in isolation. Without knowledge of a user’s activity elsewhere, no single provider can see the full picture.

These are not theoretical concerns. They are patterns documented in threat intelligence reports published by OpenAI, Anthropic, and other leading AI providers.

The financial KYC precedent

The most developed precedent for identity-based enforcement is the financial sector’s Know-Your-Customer regime. Banks spend an estimated $190 billion per year globally on KYC and anti-money laundering compliance. But the regime’s track record is sobering: transaction monitoring systems generate 90-95% false positive rates, and less than 0.1% of criminal finances are intercepted.

So why look to financial KYC at all?

Because the failures are concentrated in a specific layer: continuous transaction monitoring. The identity verification layer, where banks verify who their customers are at onboarding, is a different story. That layer is bounded, structured, and predictable. The interventions I propose borrow only from this working layer while substituting AI-native mechanisms for the transaction monitoring that drives the financial sector’s failures.

AI providers also have structural advantages. Banks see only transaction metadata and must infer intent from numerical patterns. AI providers have the full content of every interaction. Banks must coordinate across chains of intermediaries; AI providers control centralised API gateways through which every request passes. And modern digital identity verification costs $0.80-$1.50 per check, orders of magnitude cheaper than financial KYC’s $13-$130.

Evidence from other domains

Domains with similar structural features (centralised chokepoints, layered enforcement) show that verification works. Google Play blocked 1.75 million policy-violating apps in 2025 while developer account bans dropped from 333,000 to 80,000, a decline Google attributes to verification barriers making the cost-benefit calculation “unfavorable for all but the most sophisticated threat actors.” Chemical precursor controls achieved a 97% reduction in meth lab seizures through tiered access requirements. Under the UK’s online gambling verification regime, rates of illegal underage gambling are low, with failures occurring through “misuse of an adult’s account rather than a failure of the verification process.”

Digital platforms have independently reached the same conclusion. YouTube requires government ID or video selfie verification for advanced features, explicitly linking this to ban evasion prevention. Discord is rolling out mandatory age verification globally. OpenAI has deployed AI-based age prediction that triggers additional verification when a user is flagged. In each case, the platform concluded that account-level enforcement was insufficient without some form of user-level identification.

What AI providers can do

I propose three categories of intervention.

Strengthen baseline KYC. Providers can leverage existing digital ID infrastructure, such as the EU Digital Identity Wallet, Aadhaar, and mobile platform IDs from Google and Apple, along with third-party verification services, to more reliably identify and distinguish between users. This directly addresses the problem of users creating new accounts after being banned.
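
To make the mechanics concrete, here is a minimal sketch, assuming a third-party verification service hands back verified attributes such as an ID number and issuing country (the field names and flow are illustrative, not any provider’s actual integration). The provider stores only a salted fingerprint of the verified ID, so a banned user who re-registers with a fresh email still resolves to the same identity:

    import hashlib
    import hmac

    # Illustrative only: "verified_attributes" stands in for whatever a
    # real verification service returns; no raw ID number is ever stored.
    def identity_fingerprint(id_number: str, issuing_country: str,
                             secret_salt: bytes) -> str:
        """Derive a stable, salted fingerprint of a verified government ID."""
        message = f"{issuing_country}:{id_number}".encode()
        return hmac.new(secret_salt, message, hashlib.sha256).hexdigest()

    def allow_signup(verified_attributes: dict, banned_fingerprints: set,
                     secret_salt: bytes) -> bool:
        """Reject signups whose verified identity matches a prior ban."""
        fp = identity_fingerprint(verified_attributes["id_number"],
                                  verified_attributes["issuing_country"],
                                  secret_salt)
        return fp not in banned_fingerprints

Because the salt is provider-specific, a leaked fingerprint table cannot be reversed into ID numbers or matched against another provider’s records.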

Escalate KYC conditionally. Rather than applying uniform requirements, providers can scale verification to match risk. Repeated tripping of misuse classifiers, suspicious usage patterns, or requests for access to particularly powerful capabilities can trigger progressively stronger identity checks. This is already happening in pockets: OpenAI’s Verified Organisation program gates access to advanced reasoning models behind government ID and facial scans.
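
A minimal sketch of what conditional escalation could look like in code, with signal weights, thresholds, and tier names that are entirely illustrative rather than drawn from any provider’s actual policy:

    from enum import Enum

    class VerificationTier(Enum):
        EMAIL_ONLY = 1     # baseline consumer access
        PHONE = 2          # verified phone number
        GOVERNMENT_ID = 3  # document check via a verification service
        LIVENESS = 4       # government ID plus facial/liveness scan

    # Hypothetical risk signals and weights; a real system would use
    # far richer features and calibrated models.
    SIGNAL_WEIGHTS = {
        "misuse_classifier_hit": 3,
        "suspicious_usage_pattern": 2,
        "high_risk_capability_request": 5,
    }

    def required_tier(signal_counts: dict[str, int]) -> VerificationTier:
        """Map accumulated risk signals to the verification tier a user
        must satisfy before continuing."""
        score = sum(SIGNAL_WEIGHTS.get(s, 0) * n
                    for s, n in signal_counts.items())
        if score >= 10:
            return VerificationTier.LIVENESS
        if score >= 5:
            return VerificationTier.GOVERNMENT_ID
        if score >= 2:
            return VerificationTier.PHONE
        return VerificationTier.EMAIL_ONLY

Under these illustrative weights, a user who trips the misuse classifier twice and requests one high-risk capability scores 11 and faces the strongest check, while an ordinary user never leaves the email-only tier.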

Coordinate across providers. The hardest but most important step. Without inter-provider coordination, restrictions do not follow users across services. At a minimum, providers can share binary misuse flags through a common portal, so that a user banned by one provider cannot seamlessly register with another. More structured models could involve centralised KYC through a trusted organisation issuing verifiable user certificates, drawing on precedents like the Global Internet Forum to Counter Terrorism’s hash-sharing database, which operates across 30+ platforms, and the financial sector’s information-sharing frameworks.
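
Here is a sketch of the minimal version, a shared binary flag lookup, under the assumption that providers already hold a common identity fingerprint for each verified user (all names are hypothetical; a production registry would add governance, appeals, and stronger privacy machinery such as private set intersection):

    import hashlib
    import hmac

    class SharedMisuseRegistry:
        """Toy cross-provider flag registry: providers learn only a
        yes/no answer, never the identity or conduct behind a flag."""

        def __init__(self, consortium_key: bytes):
            self._key = consortium_key
            self._flags: set[str] = set()

        def _token(self, identity_fingerprint: str) -> str:
            # Keyed hash, so stored entries are meaningless without
            # the consortium key.
            return hmac.new(self._key, identity_fingerprint.encode(),
                            hashlib.sha256).hexdigest()

        def report(self, identity_fingerprint: str) -> None:
            """Called by a provider after banning a verified user."""
            self._flags.add(self._token(identity_fingerprint))

        def is_flagged(self, identity_fingerprint: str) -> bool:
            """Called by a provider at signup or re-verification."""
            return self._token(identity_fingerprint) in self._flags

This mirrors the structure of GIFCT’s hash-sharing database: what crosses the boundary between competitors is an opaque token and a binary judgement, not user data.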

The tradeoffs are real

These interventions involve genuine costs. Stronger KYC requirements increase user friction and may exclude users who lack access to digital ID infrastructure; the World Bank estimates approximately 850 million people globally lack any official ID. Cross-provider information sharing raises privacy, surveillance, and antitrust concerns. Research documents tangible chilling effects from identity verification on legitimate expression. The track record of identity verification providers includes significant data breaches. And all three interventions have limited applicability to self-hosted and open-weight models, where providers do not mediate user access.

There are also legal gaps. There is no statutory safe harbour for AI safety information sharing. The Cybersecurity Information Sharing Act expired in September 2025 and was only temporarily extended. The DOJ has moved to case-by-case enforcement for competitor information sharing.

These tradeoffs demand careful design: tiered requirements rather than blanket mandates, privacy-preserving architectures like the EU Digital Identity Wallet’s attribute proofs, robust appeals processes, and external audits of information-sharing practices. South Korea’s failed real-name internet verification system, struck down unanimously by its Constitutional Court, is a cautionary tale of what happens when verification is imposed as a blanket mandate without proportionality.
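
To illustrate what an attribute-proof design buys, here is a conceptual sketch in the spirit of the EU Digital Identity Wallet’s selective disclosure. Real wallets use signed credentials and zero-knowledge or salted-hash schemes; this toy version, with signature verification elided, shows only the data-minimisation principle:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AttributeProof:
        attribute: str    # e.g. "age_over_18" or "not_on_shared_flag_list"
        value: bool
        issuer: str       # e.g. a member-state wallet provider
        signature: bytes  # issuer's signature over (attribute, value)

    def satisfies_policy(proofs: list[AttributeProof],
                         required: set[str],
                         trusted_issuers: set[str]) -> bool:
        """Check that every required attribute is attested by a trusted
        issuer; the provider never sees a name, ID number, or birth date.
        (Signature verification is elided in this sketch.)"""
        attested = {p.attribute for p in proofs
                    if p.value and p.issuer in trusted_issuers}
        return required <= attested

The provider learns that its policy is satisfied and nothing else, which is precisely the proportionality that South Korea’s blanket real-name mandate lacked.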

But the status quo has costs too. AI providers are already spending significant resources on a whack-a-mole dynamic that their own threat intelligence reports describe as unsustainable, and AI legal risks are proving uninsurable. As AI systems become more capable, and as tools with dual-use potential proliferate, the gap between account-level enforcement and user-level accountability will only widen. The question is not whether to strengthen identity-based enforcement, but how to do it responsibly.

The building blocks already exist: digital ID infrastructure is maturing, verification costs are falling, and precedents for cross-competitor coordination are well-established in adjacent domains. What is missing is the will to apply them.