Thursday, September 19, 2024

Media Advisory - FSF Files Comments on FCC's Proposed Rules for AI-Generated Content in Political Ads

Media Advisory

September 19, 2024

Contact: info@freestatefoundation.org


Free State Foundation President Randolph May and Seth Cooper, Director of Policy Studies and Senior Fellow, submitted comments today in the Federal Communications Commission’s proceeding proposing to require radio and TV broadcasters, as well as cable and direct broadcast satellite (DBS) operators, to include a disclaimer on all political ads that contain content generated by artificial intelligence (AI). These comments demonstrate that the Commission lacks statutory authority to adopt its proposed regulation of the content of political ads using AI and that, in any event, adopting it would constitute unsound policy.


The complete set of the Free State Foundation comments, with footnotes, is here.


Immediately below is the "Introduction and Summary" to the comments, without the footnotes.


Introduction and Summary

These comments are submitted in response to the Commission’s Notice proposing to require radio and TV broadcasters, as well as cable and direct broadcast satellite (DBS) operators, to include a disclaimer on all political ads that contain content generated by artificial intelligence (AI). These broadcasters and operators also would be required to include a notice in their online political files disclosing an ad’s use of AI. The Commission’s rush to adopt a novel AI political ad regulation is a misguided power grab – a combination of bad law and bad policy. The Commission should not adopt the proposed rule.


The agency lacks statutory authority for its proposed regulation of the content of political ads using AI. The Notice of Proposed Rulemaking cites Section 303(r) and other provisions of Title III of the Communications Act regarding the agency’s power to make rules and regulations necessary to carry out the Act’s provisions in the “public interest.” But the Commission has no traditional regulatory authority over the content of political ads on broadcast radio or TV, and none of those provisions cited in the Notice contain language that reasonably may be interpreted to authorize disclaimer and disclosure mandates for political ads featuring AI-generated content.


Moreover, the FCC’s proposal is likely to run afoul of the Major Questions Doctrine (MQD) as articulated in West Virginia v. EPA (2022) because it involves a question of “vast economic and political significance.” Proposing for the first time to regulate the use of AI in connection with political advertisements appears to be a paradigmatic case meeting the MQD criteria. As such, and because Congress has not clearly granted the FCC authority to adopt the rule it proposes, the proposed rule is very unlikely to survive judicial review.


By contrast, the Federal Election Commission (FEC) is given much more explicit statutory authority to regulate significant aspects of political campaign ads under the Federal Election Campaign Act. This includes the FEC’s “exclusive jurisdiction with respect to the civil enforcement” of the Act. To date, however, the FEC has never determined that it has jurisdiction to regulate political ads with AI-generated content under its “materially deceptive” statute – and the FEC may lack such authority. If the FEC lacks authority to regulate political ads with AI-generated content, then a fortiori the FCC certainly lacks similar authority under Communications Act provisions regarding broadcast, cable, and satellite services.



Even if the FCC had the requisite legal authority, the proposal constitutes bad policy because it would apply to ads with AI-generated content that are not materially deceptive, likely causing many viewers to distrust those ads solely or primarily because of the boilerplate disclaimer or simply to “tune out” the disclaimers. Also, it would apply only to ads that are broadcast or transmitted by FCC-regulated services – and not by Internet outlets that garner an increasing share of political ads. Requiring disclaimers on ads shown by broadcast, cable, and satellite services when those same ads may be posted online to wider audiences without disclaimers will add to the confusion, especially since materially deceptive ads are more likely to appear online. Moreover, broadcasters (and cable and DBS operators) do not have inside knowledge about how given political ads were created; yet under the proposed regulation, they apparently would shoulder the burden of discerning when generative AI was used. By focusing on the broadcasters of political ads rather than their creators, the proposed regulation departs from the more reasonable creator-focused approach taken in many nascent state laws regulating the use of AI in elections.


Additionally, the proposal would put the Commission in the untenable position of making judgments about “credible third parties” who raise complaints about ads, a matter in which the agency has no expertise. Government should not assume any role in designating third parties as “credible” or not credible for purposes of deciding whether political ads should be disclaimed, disclosed, or taken down. If it were to do so, it would inevitably, and justifiably, invite suspicion that its decisions are politically motivated. And the proposed overly broad definition of “AI-generated content” likely would result in broadcast, cable, and satellite services attaching disclaimers to all or nearly all political ads as a risk-averse compliance measure, rendering such disclaimers unhelpful, if not meaningless.

A PDF of the complete set of Free State Foundation comments, with footnotes, is here.