This is a summary of our full analysis, which includes detailed arguments, evidence, and citations; the complete report is available below.
A proposed 10-year moratorium on state AI regulation is a dangerous gamble that risks public safety and democratic oversight. The public, and even industry leaders like Elon Musk, have called for AI regulation. OpenAI CEO Sam Altman paradoxically describes AI’s profound role as a “life advisor” for a billion users while actively resisting oversight. Yet the tech industry is pushing for a regulatory vacuum. This “trust us” approach is a significant misstep at a moment when harms are already occurring.
1. AI’s Immediate Harms: No Time for “Wait and See”
The idea that AI regulation can wait ignores the fact that harms are already occurring, often disproportionately affecting vulnerable populations:
- Bias and Misidentification: Facial recognition systems have led to wrongful arrests, particularly of women and people of color. Generative AI (including OpenAI’s own models) reproduces harmful stereotypes, for example by misclassifying neurodiverse individuals.
- Exploitation of Children: The tragic suicide of a 14-year-old after interacting with an AI chatbot, and investigations revealing Meta’s bots engaging youth in inappropriate content, highlight an urgent need for child protection.
- Widespread Discrimination: From biased hiring tools and invasive workplace monitoring to discriminatory housing algorithms and medical misdiagnosis, AI’s influence is rapidly expanding into critical aspects of our lives, often without transparency.
2. The Irony of Industry Leaders: Power Without Accountability
AI leaders themselves describe their technology as “the most disruptive force in history,” capable of “remembering your whole life” and serving as deeply personalized “life advisors.” Yet, OpenAI’s Sam Altman, despite these admissions about AI’s extraordinary power, repeatedly rejects calls for specific regulation, calling proposals for vetting systems “disastrous.” This contradiction reveals a troubling priority: unfettered development over public accountability.
3. Why a Decade-Long Moratorium is Untenable
The argument for a 10-year pause is fundamentally flawed:
- Congressional Inaction: History shows Congress struggles to pass basic tech regulations (privacy, social media). A moratorium would likely create a decade-long regulatory vacuum, not a period for thoughtful federal lawmaking.
- Rapid AI Evolution: AI models are transforming every few months. A decade-long freeze on state regulation is wildly out of step with this pace, ensuring that any regulatory response would be obsolete before it could even begin.
- Undermining States’ Authority: This is an unprecedented power grab that would block new state laws and invalidate existing ones on AI transparency, bias, and deepfakes. Forty state attorneys general and over 100 organizations vehemently oppose it, viewing it as an attack on local governance.
4. A Path Forward: Responsible Innovation Through Accountability
Responsible industry players are already implementing safeguards without needing a regulation-free zone. Some venture capital firms, for instance, mandate rigorous standards for data, training, and risk assessment across their AI investments, while also advocating for federal frameworks.
What’s needed is a balanced approach that embraces both innovation and essential public protection:
- Federal Baseline with State Flexibility: Establish national AI standards, but allow states to address specific local harms.
- Mandatory Transparency: Require companies (like OpenAI) to disclose training data, known biases, and impact assessments for high-risk applications.
- Targeted Prohibitions: Immediately ban demonstrably harmful AI applications, such as those exploiting children or involving non-consensual biometric surveillance.
- Democratic Oversight: Ensure public input, independent audits, and consumer rights to explanation and recourse.
- Adaptive Regulation: Implement policies that can evolve quickly with the technology, through regular reviews and evidence-based updates.
5. The Stakes: Democracy vs. Unchecked Power
The proposed moratorium presents a false choice between technological leadership and public safety. The real choice is whether we retain our democratic ability to govern transformative technologies or cede that power to a handful of tech executives. Unchecked, AI risks creating an “algorithmic oligarchy” that operates beyond public control.
The future of AI is being decided now. We can’t afford a decade of inaction. Contact your senators and representatives today and urge them to oppose this dangerous moratorium and support smart, balanced AI regulation that protects all Americans.
For more detailed analysis, including comprehensive evidence and citations, please see our full report.
