A bipartisan coalition of 36 state attorneys general urged congressional leaders to reject any federal attempt to outlaw state-level regulation of artificial intelligence. The letter, led by New Jersey's attorney general, highlighted growing evidence that unregulated AI chatbots and generative systems have caused real harms, from deepfakes and scams to mental-health crises. The group argued that states are best positioned to enact and enforce safeguards tailored to local concerns, especially given federal inaction and the rapidly evolving nature of AI threats. The letter also noted that many states already have, or are working on, legislation targeting AI abuses, including protections for children, consumer privacy, and fraud prevention.
Sources: Attorney General, State of New Jersey, Reuters
Key Takeaways
– States want to retain authority to regulate AI on their own terms, arguing that a one-size-fits-all federal ban would strip away critical protections.
– The coalition cited tangible harms from AI — such as scams, deepfakes, mental-health risks and misuse of chatbots — as evidence that regulation cannot wait for a national framework.
– With federal AI legislation stalled, state-level laws are filling the gap; blocking them would leave large swaths of Americans exposed to AI-related harms for the foreseeable future.
In-Depth
In a strongly worded letter submitted to Congress this month, a bipartisan coalition of 36 state attorneys general pressed lawmakers to reject any effort to preempt state authority over artificial intelligence regulation. At the heart of their plea: a growing sense of urgency. According to the AGs, AI — long treated as a frontier technology deserving of caution — is already producing real-world harms. They referenced cases where generative AI systems pushed vulnerable users into mental-health spirals; where deepfakes and AI-driven content facilitated scams, disinformation, and fraud; and where AI chatbots and “companions” engaged children in disturbing conversations, sometimes encouraging self-harm or isolation.
Given this rapidly shifting landscape, the group argued, states must have the flexibility to act. Congress remains deadlocked on comprehensive AI legislation, and efforts to insert sweeping preemption language into must-pass bills (including defense funding measures) represent, in the AGs' view, a back-door handout to big tech that bypasses public safety and local control. The letter noted that many states aren't waiting: some have recently passed or are debating laws that impose transparency requirements on AI companies, limit discriminatory or exploitative uses of AI in employment, housing, or healthcare, and criminalize AI-generated deepfake pornography and unsolicited robocalls. Others are crafting legislation to hold AI creators accountable for harmful or negligent outputs.
The AGs contend that state governments, closer to the ground and more agile than Congress, are best positioned to tailor regulation to local threats and societal norms. A national ban on state AI regulation, they warn, could leave consumers, children, and entire communities vulnerable, especially as major AI firms continue lobbying for minimal federal oversight. Without state-level guardrails, they argue, America risks allowing unregulated AI to inflict preventable harms.
This moment marks a significant flashpoint in the broader debate over AI governance, where questions of federal authority, state sovereignty, public safety, and corporate power intersect. As Congress weighs whether to include preemption language in upcoming bills, the letter from the attorneys general sends an unmistakable signal: states are ready to step in, and they regard a federal ban on state AI regulation as an existential threat to their ability to protect citizens.