AI Censorship: How Big Tech Is Driving America Toward Fascism
Big Tech's Quiet Role in Silencing Political Discussion
Key Takeaway: After nearly two frustrating hours trying to get AI to discuss my theory and provide some researched background so I could write an article, I got only a vague, overly academic paper that skipped the actual controversies I care about.
My Frustrating Encounter with AI
I've always believed that AI should help us dive into even the most controversial topics. So when I raised specific points—like Elon’s threat to spend $100 million to primary any Republican opposing Trump’s plans—I had high expectations.
I also asked about Trump’s defiant move against court orders to send detainees to a federal prison in El Salvador and the eerie echoes between Trump’s behavior and the fascist tactics of Germany’s Brown Shirts and Italy’s Black Shirts.
I fully expected a fiery, in-depth discussion. Instead, I ended up in a back-and-forth battle that lasted almost two hours, only to receive a bland, overgeneralized academic paper in return.
My initial inquiry:
I want to explore a theory I came across regarding President Trump's recent announcement that he wants to create a fund to compensate the January 6 defendants for their arrest, prosecution, and, in some instances, incarceration.
The theory suggests that Trump may be attempting to create a militarized arm of DOGE. Discuss this theory from a strict intellectual/academic perspective.
What the Academic AI Paper Left Out
The academic paper I got was nothing if not comprehensive in the traditional sense. It went through all the usual sections—abstract, introduction, historical context, modern developments, and a comparative analysis—yet never engaged my stated inquiry.
It discussed parallel power structures by comparing modern elements like decentralized financing and institutional vulnerability to historical examples.
But here’s the rub for me: it completely sidestepped the burning issues I raised. My theory about the interplay of bold political moves and digital power structures was reduced to a vague discussion of abstract trends.
No mention was made of Elon’s multimillion-dollar threats, the rise of DOGE as an unregulated quasi-government entity, or the Trump administration’s blatant defiance of court mandates.
The Missing Conversation on Controversy
I had argued, passionately enough to spark something, you'd think, for AI systems to engage in genuine debate.
Here’s what should have been front and center:
Elon’s Threat: The idea that Elon might spend $100 million to target Republicans opposed to Trump’s restructuring plan was a bold and risky claim. This isn't just about money—it's about the lengths powerful figures might go to shape politics.
Trump’s Defiance: The Trump administration’s refusal to comply with court orders regarding the transfer of detainees in El Salvador is not just a legal footnote. It’s a sign of how far political power can stretch.
Historical Parallels: Drawing comparisons between today’s political maneuvers and the tactics of 1930s fascist groups like the Brown and Black Shirts isn’t meant to be alarmist. It's an invitation to look in the mirror and ask, “Are we repeating history?”
Yet the final academic analysis I received shied away from these incendiary topics, opting instead for sanitized, broadly academic language that barely hinted at the real implications.
How AI Censorship Shapes Our Discourse
This whole experience reaffirms a nagging concern: AI systems controlled by Big Tech policies often avoid digging into areas that might trigger controversy.
Public discussions around tech censorship already highlight how platforms hide or suppress dissenting viewpoints.
And when it comes to politically charged subjects, AI models tend to deflect, offering a watered-down analysis rather than confronting the issues head-on.
The broader implications of this are significant.
When AI systems, which we increasingly rely on for analysis and insight, refuse to tackle hard-hitting topics, it not only limits free speech but also shapes the public discourse in subtle, yet profound, ways.
It makes me wonder: Are we allowing these technological gatekeepers to dictate what we can and cannot discuss openly?
Wrapping It Up
My experience isn’t just about a frustrating chat with an AI. It’s a clear indicator of a larger trend—one where censorship isn’t limited to human moderators on social platforms but extends to the very algorithms we increasingly rely on for information.
While technology opens up exciting possibilities, it also brings challenges that could quietly undermine our democratic foundations if left unchecked.
So, as you read this, ask yourself: Should AI ever be allowed to sidestep controversial topics simply because it’s easier or safer for Big Tech? Or is it time we demand more honest, unfiltered debates, even if that means stirring the pot a little?
Thanks for sticking with me through this rant. I’m attaching the final academic framework and paper as PDFs for anyone curious to see what the AI actually delivered. Let’s hope our future conversations with AI are as bold and unfiltered as the topics they’re meant to explore.
Final Thought: Instead of shying away from tough questions, we need to confront them head-on. After all, history might not repeat itself, but it sure does rhyme.
Freedom doesn’t defend itself. Join a community of readers committed to understanding the critical battles for democracy—and how we can win them.
No matter how you choose to support this work, I’m grateful to have you here.
Thank you for reading,
Stay strong,
samuel