Evaluating AI Policy Proposals

Artificial intelligence (AI) is important because human intelligence is so important. AI amplifies human intelligence, enabling us to complete tasks with greater efficiency, productivity, quality, and creativity.

Poorly crafted regulations will delay or prevent AI-driven improvements to healthcare, education, and overall economic growth. They will also cost America its leadership in AI development and allow China to dominate the global marketplace. To craft well-targeted legislation that effectively addresses legitimate concerns without stifling innovation, policymakers must ask the right questions and cut through the technical jargon.

Key Questions for Evaluating AI Legislation

Ask these questions to determine if proposed AI legislation addresses legitimate risks without hindering innovation.

Q1: Has AI been sufficiently defined? 

The definition of “AI” is the foundation of any regulatory proposal, yet there is no consensus on a scientific definition. Many proposals use overly broad definitions that sweep in large swaths of existing software, threatening to stifle innovation across the entire digital ecosystem. Focusing on specific types and uses of AI, such as “generative AI” (systems that create original content based on learned patterns), can help target legitimate concerns while limiting unintended consequences for other beneficial applications. Avoid broad terms and phrases like “algorithm,” “automated decision making,” “computational processes,” and “machine learning,” because these and similar phrases would sweep traditional computer software into the regulation.

Q2: Is this really about AI? 

Many proposed "AI" regulations merely repackage longstanding policy fights around issues like intermediary liability, privacy, intellectual property, and bias. Vague definitions of "AI" can be a backdoor to revive stalled proposals under the guise of addressing supposedly novel AI risks. Understanding the history of a proposal provides vital context for evaluating whether it is a well-targeted response to new challenges posed by AI or a preexisting agenda in new clothing.

Q3: Has the proposed legislation adequately identified specific harms? 

Effective legislation must be crafted to address clearly articulated, concrete harms. Vague or subjective harms open the door to arbitrary enforcement and government overreach. If proponents cannot specify the tangible harms to people that the legislation prevents or addresses, that's a giant red flag. Legislation targeting clear harms like physical injury or financial loss is more likely to benefit constituents without unduly burdening innovation.

Q4: Have current legal and regulatory powers failed to address these specific harms? 

As a general-purpose technology, AI is being deployed across a wide range of industries, many of which already have robust regulatory structures in place. For example, leaders of the Department of Justice, Federal Trade Commission, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission released a joint statement in 2023 that declared, “Our Agencies’ Enforcement Authorities Apply to Automated Systems.” New AI legislation should fill genuine gaps, not duplicate or conflict with existing sector-specific laws at the state and federal levels. Proponents of new legislation should be able to explain why current regulations are inadequate to address the identified harms.

Q5: Will this legislation support an open and dynamic industry? 

Open source software – software that is built and shared collaboratively, often by volunteers – powers major parts of the software economy. It remains a vital channel of innovation and competition that fosters disruptive change and wild creativity. Legislation can impose compliance burdens that open source projects find difficult or impossible to meet, because there is no single company with a revenue stream and business interest to drive compliance. Much legislation fails to consider this negative effect, and some legislation even directly targets open source software.