[Draft] AI misuse enforcement has a blind spot
AI providers ban accounts, not users. The distinction matters more than it might sound. When a provider detects misuse, whether an influence operation, an attempt to extract dangerous information, or large-scale model distillation, the standard response is to restrict the offending account. But accounts are cheap. A new email address, an anonymous SIM card, and a VPN are enough to start over.

The scale of the problem is well documented. The Stanford HAI AI Index Report 2025 recorded a 56% increase in AI-related incidents in 2024, and reports of malicious actors using AI have risen eightfold since 2022. Between February 2024 and October 2025, OpenAI alone disrupted more than 40 malicious networks. Google's Threat Intelligence Group identified APT groups from more than 20 countries abusing Gemini. Anthropic documented what it described as the first AI-orchestrated cyber espionage campaign: a Chinese state-sponsored operation targeting approximately 30 entities, in which the model operated for hours at a stretch while human intervention was limited to about 20 minutes per phase.

In every one of these cases, the misuse was detected. What they expose is an enforcement failure, not a detection failure: providers can see the abuse, but banning the account does not stop the actor behind it.