With annual legislative sessions beginning to wind down, lawmakers in 44 states and D.C. have introduced over 800 bills related to artificial intelligence during the 2026 session so far, according to the National Conference of State Legislatures' AI bill tracker. This is a remarkable volume of regulatory interest from lawmakers who have had little chance to identify any related market failures or to study the costs and benefits of regulating a technology that has only recently entered mainstream use. This interest represents continued momentum from the past few years: in the 2025 session, the 50 states and D.C. introduced over 1,000 AI bills altogether.
The bills during this and recent sessions cover an extraordinarily wide range of targets and approaches. Some bills target AI developers such as Anthropic and OpenAI. Some target deployers of AI such as social media companies or businesses that use AI internally. Others target other parties such as data brokers. Many bills are sector-specific: AI in healthcare, AI in housing, AI in employment, AI in insurance, and AI in elections. And many bills are issue-specific: for example, lawmakers in the 2026 session have introduced 188 bills in 38 states on AI deepfakes and 22 bills in 22 states covering AI chatbots.
A few examples illustrate the range of the 726 bills pending in statehouses or awaiting governors' signatures: An Illinois bill would require AI developers to report safety incidents and publicly publish their protocols on risk management, transparency, and cybersecurity (2026 IL SB3312). A Hawaii bill would require AI deployers to run risk management programs for algorithmic discrimination and cybersecurity, including pre-market and ongoing testing and recordkeeping (2026 HI SB2967). A Minnesota bill would prohibit "surveillance-based price discrimination," or the use of AI to set prices based on certain consumer data (2026 MN HF 3764). A New Jersey bill would require companies to conduct AI safety tests and report results to the state (2026 NJ S 1802). A New York bill would hold companies liable for harm caused by AI chatbots offering medical, legal, and other types of regulated speech (2025 NY S7263).
So far, 13 states this session have enacted or adopted 14 pieces of legislation. A few examples illustrate the range of what lawmakers are passing: Indiana placed restrictions on when healthcare insurance providers can use AI (2026 IN H 1271). New York barred state and local governments from using AI to reduce staffing, or, as the language reads, from using AI in a way that would displace government jobs (2025 NY S 8831). South Carolina placed restrictions on how data can be collected from minors and brought AI within the law's scope (2025 SC H 3431). In Vermont, AI-generated videos of political candidates must now be labeled as such (2025 VT S 23).
In a recent Perspectives from FSF Scholars, my colleague Joe Kennedy suggests the need for a streamlined AI regulatory framework that incentivizes the build-out of a robust supporting infrastructure and encourages competition and innovation. What the nation really needs is an overarching federal framework that avoids heavy-handed ex ante regulation and supplants the growing patchwork of state laws.
Without such a framework, companies must navigate a growing and inconsistent patchwork of state regulations, each with its own definitions, thresholds, compliance timelines, and enforcement mechanisms. States may still decide to pass AI legislation as it pertains to their specific criminal codes, public education requirements, government use of AI, or other state matters. But at the current rate, an AI developer, deployer, or other AI party could theoretically face 51 different pieces of legislation regulating the same activity. The burden of complying with this patchwork falls even more heavily on startups and emerging competitors trying to offer better alternatives for consumers. AI has the potential to improve countless dimensions of everyday life. The emerging regulate-first patchwork of state laws is not the path to realizing that potential.
