Wednesday, April 01, 2026

Sanders' AI Bill Is a Red Herring and Blackburn's Has Problems

With the White House calling for a national AI framework to end the patchwork of state regulation, two notable pieces of proposed federal legislation have emerged. And the one getting less attention at the moment is the one that matters more.

Senator Marsha Blackburn (R-TN) released a discussion draft of the TRUMP AMERICA AI Act (you read that right: the Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act) on March 18, 2026, a 291-page federal framework developed in response to the Trump administration's call for a national AI policy. Senator Bernie Sanders (D-VT), joined by Representative Alexandria Ocasio-Cortez (D-NY), introduced the 13-page Artificial Intelligence Data Center Moratorium Act on March 25, 2026. It would halt data center construction until Congress enacts legislation to ensure: that future AI products are “safe and effective”; that AI does “not threaten the health and well-being of working families”; and that AI does not displace jobs. The two AI bills are not comparable in scope or consequence.

The Sanders moratorium bill has received the most mainstream coverage, possibly in part because it is the only one of the two to be formally introduced. But it’s easy to see why it has drawn all the fuss. The moratorium bill takes advantage of anxieties that translate directly into headlines: job displacement, strain on the power grid, and industrial construction in people's backyards. While these concerns affect real people, the bill's moratorium is ill-conceived and would be harmful. Pausing data center construction pending new AI legislation would slam the brakes on American AI infrastructure at precisely the moment the Trump administration is pushing to accelerate it, and it would let foreign competitors move ahead.

But Sanders’ moratorium bill is almost certainly a political statement about AI as a threat rather than a realistic proposal. It is unlikely to gain serious legislative traction, and its primary practical effect may be to divert attention from more consequential legislation.

The Blackburn bill is one piece of potentially more consequential legislation. As a proposed comprehensive federal AI framework, it is more technically complex and far-reaching than the moratorium bill. Yet it has received a fraction of the coverage. Think tanks including the Competitive Enterprise Institute and the Cato Institute have explained how the bill would impose heavy-handed regulation across the AI ecosystem.

Some aspects of Senator Blackburn’s bill that may be problematic and require close attention include: a full-on repeal of Section 230 of the Communications Act of 1934; imposition of a “duty of care” on chatbot developers; liability for AI developers for harms beyond existing law on unfair and deceptive practices; a requirement that federal contractors use “unbiased” large language models; a Department of Energy testing program for adverse incidents in AI systems; and a directive that DOE develop certification procedures, licensing requirements, and broad regulatory oversight.

I wrote last week that the federal AI framework needs a light-handed approach grounded in free market competition. I explained that “robust competition among American companies is the precondition for national competitiveness” and consumer satisfaction. While established developers may fare fine under such a burdensome scheme, their products would fall behind those of rivals in nations not facing such operating and compliance costs. And startups and emerging competitors would fare even worse.

The Sanders moratorium deserves the criticism it has received. But the current Blackburn bill has problematic provisions that deserve scrutiny they have not yet gotten.