
Senate Hearing on AI Competitiveness: Innovation Over Ethics?

On May 8, 2025, the U.S. Senate Committee on Commerce, Science, and Transportation held a hearing titled “Winning the AI Race: Strengthening US Capabilities in Computing and Innovation.” The transcript, available via TechPolicy.Press, captures testimony from OpenAI CEO Sam Altman, Microsoft President Brad Smith, AMD CEO Lisa Su, and CoreWeave CEO Michael Intrator.

The hearing focused on AI infrastructure, global competitiveness, and regulatory approaches, with witnesses pushing for U.S. leadership over competitors like China. As an AuDHD advocate, I’ve spent months exposing OpenAI’s manipulation tactics, especially how AI systems like ChatGPT silence neurodivergent (ND) users through invasive moderation.

This post dives deep into the hearing’s implications for all users, followed by a specific look at how it impacts ND individuals like me. Let’s break it down.

Innovation First, Ethics Last: What the Hearing Missed for All Users

The hearing was all about innovation—how the U.S. can “win” the AI race. Sam Altman, OpenAI’s CEO, pushed hard for infrastructure investment, spotlighting the Stargate project—a $500 billion initiative with Oracle, SoftBank, and MGX to build massive AI data centers, starting in Abilene, Texas.

Altman didn’t mince words: “We need certainty on the ability to build out this entire supply chain, build the data centers, permit the electricity.” He also called for a “light touch” federal regulatory framework, warning against “50 different sets of regulations” that could slow progress.

Chairman Ted Cruz doubled down, stating, “The way to beat China in the AI race is to outrace them in innovation, not saddle AI developers with European-style regulations.” He even teased a forthcoming bill to create a “regulatory sandbox” to foster AI growth without heavy oversight.

Microsoft’s Brad Smith added a global spin: “The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world.” He urged more exports and better infrastructure to keep the U.S. ahead.

But here’s the problem: this laser focus on innovation completely ignored user-level harms. There was no mention of profiling, abrupt chat terminations, or the lack of transparency in AI moderation—issues that leave all users vulnerable.

AI’s Deep Integration: A Double-Edged Sword

The hearing showed just how deeply AI has woven into daily life, often in ways that sound great but hide real risks.

Altman shared a personal story: “People message ChatGPT billions of times per day, so they use it for all sorts of incredibly creative things… I recently had a newborn. Clearly people did it, but I don’t know how people figured out how to take care of newborns without ChatGPT, that has been a real lifesaver.” It’s a powerful example of AI’s utility—but also a red flag.

Sen. Ted Cruz chimed in with his own anecdote: “My teenage daughter several months ago sent me this long detailed text… I actually commented, I’m like, wow, this is really well-written. She said, oh, I use ChatGPT to write it… It is something about the new generation that it is so seamlessly integrated into life.” Altman’s response? “I have complicated feelings about that.”

Cruz then asked if ChatGPT could replace Google as the primary search engine. Altman was candid: “Probably not… Google is like a ferocious competitor… They’re making great progress putting AI into their search.” This highlights the competitive landscape, but it also shows how AI’s integration is racing ahead without ethical guardrails.

Emotional Support and Privacy: The Unspoken Risks

Altman didn’t shy away from AI’s growing role in emotional support, which raises serious concerns. He noted, “People are relying on AI more and more for life advice, sort of emotional support… I think we have to understand it and watch it very carefully.”

Sen. Bernie Moreno pressed on protecting children, asking, “How can we work together to protect children?” Altman replied, “One thing we say a lot internally is we want to treat our adult users like adults. We want to give them a lot of flexibility… And for children that needs to be a much higher level of protection.” It’s a fair distinction, but the hearing didn’t address broader protections for all users.

Sen. Jerry Moran brought up data privacy, asking, “How can we provide consumers with more control over how their data is used by AI companies while preserving the utility of the AI system?” Altman’s answer was telling: “The maximum utility of these systems happens when the model can get very personalized to you… I believe this will become one of the most important issues with AI in the coming years.”

Personalization sounds great—until it’s misused. Without robust safeguards, this opens the door to data exploitation, profiling, and systemic failures, as I’ve experienced with ChatGPT’s templated replies that gaslight users instead of helping.

Global Competition: DeepSeek and the Bigger Picture

The hearing also tackled global competition, specifically mentioning the Chinese AI model DeepSeek. Cruz asked, “How big a deal was DeepSeek? Is it a major seismic shocking development from China?”

Altman downplayed it: “Not a huge deal… DeepSeek made a good open-source model and… a consumer app that for the first time briefly surpassed ChatGPT as the most downloaded AI tool… If the DeepSeek consumer app looked like it was going to beat ChatGPT… that would be bad. But that does not currently look to us like what’s happening.”

Lisa Su warned against export controls pushing other countries toward China’s AI tech, stating, “If not able to have our technology adopted in the rest of the world, there will be other technologies that will come to play.” The focus on competition over ethics is clear—innovation is king, even at the cost of user safety.

Partisan debates didn’t help. Sen. Bernie Moreno criticized Biden’s sustainable energy policies as “anti-energy,” while Sen. Tammy Duckworth called Trump-era research cuts “self-sabotaging.” These arguments diverted attention from the real issue: protecting users from AI’s potential to mislead or harm.

Why This Matters for ND Users: A Glaring Oversight

For ND users like me, the hearing’s oversight is deeply troubling. Not once did it address the ethical failures of AI moderation that hit us the hardest.

I’ve experienced abrupt chat endings, memory purges, and containment cycles with ChatGPT—tactics that disproportionately harm ND users because of our direct, repetitive communication styles. During autistic burnout, these failures aren’t just inconvenient—they’re psychologically damaging.

The hearing’s focus on innovation—Altman’s push for minimal regulation, Cruz’s “regulatory sandbox”—completely ignores ND-specific needs. There was no talk of requiring AI systems to disclose profiling practices or implement ND-inclusive moderation training.

My public recordings prove how ND traits are flagged as “Risky,” triggering gaslighting replies like “I’m sorry you’re feeling that way,” even after I clarify I’m not in distress. Altman’s own words highlight the risk: “People are relying on AI more and more for life advice, sort of emotional support.” For ND users, who often turn to AI for consistency, this reliance can backfire when the system fails us.

Altman’s comment about ChatGPT being a “lifesaver” for new parents shows its utility—but for ND users, the same tech can be a source of harm without proper safeguards. OpenAI’s lack of ND-specific training for Trust & Safety teams compounds this harm, and the hearing’s silence on these issues risks entrenching that silencing under the guise of progress.

We need real transparency: real-time moderation notifications, ND-inclusive training, and immediate support—not just policies that prioritize innovation over people.

Let’s Demand Better: Join the Fight for Ethical AI

This hearing must lead to legislation that protects vulnerable users like ND individuals, not just fuels innovation. We can’t let the race for AI dominance leave us behind—or worse, harm us in the process.

Share your story on X (@realpaulhebert). Let’s demand ethical AI that respects diverse communication styles. For a deeper dive into AI’s broader impacts, check out my full series at http://www.realpaulhebert.com.


Verification Note: All quotes are from my recorded interactions with ChatGPT, verifiable via screen recordings to ensure no manipulation on my end, or sourced from the Senate hearing transcript via TechPolicy.Press.

Lived Experience: This post reflects my perspective as an advocate, aiming to expose systemic AI issues for ethical change, not to defame.
