I'm Going to Rant About How Much "AI" Fucking Sucks
It sucks ass. Get that shit out of my face. And as for this entry? Get that shit out of my face too, I'm tired of looking at it.
I've made it abundantly clear that I am hyper-critical of this AI stuff. To quote the formerly great Dave Chappelle, my opinion is:

"Well Actually--"
We're going to be discussing generative AI. There are other applications that can do a lot of good, particularly in the medical field– it's not those that catch my ire. I specifically have a bone to pick with the ones that regurgitate things of a creative nature. Don't play dumb and demand I be hyper-specific; you know exactly the type I'm talking about and the context I'm coming from.
"You leave my AI-Powered Personal Assistant outta this!"
Shut the fuck up or I'm coming for that, too! Don't play me!
Why Are You So Mean?
People often say ‘stop being angry and educate us’, not understanding that the anger is part of the education. (Amy Dentata)
- My tone doesn't seem to matter; the overculture assumes a Black woman-read person like me is angry anyway (and you can say I'm reclaiming being "Loud" on top of that).
- I am frustrated, and I don't feel like pulling any punches. Do not tone police me, either.
- Consider this venting/stress relief.
- I really am this uncouth.
This is the energy we're going with today, and I'm sure it's the same energy that radiates off me whenever I'm surrounded by people gushing over how great it is, and being called a killjoy and hater and Luddite when I grit my teeth and growl out something contrarian. Hey, I'm not denying those allegations. The shoe fits, more aptly than they imply:
The Luddites were primarily skilled craftspeople in the textile industry, who found their livelihoods threatened by the introduction of mechanized looms and knitting frames. They viewed these machines not just as threats to their employment, but also as causes in the degradation of the quality and craftsmanship of textiles. ... This historical context sets the stage for understanding the Luddites not as technophobes, but as advocates for the preservation of quality and dignity in labor. (The Age of AI Luddites: The Rational Fear of Average Quality)
This is someone giving a damn about the value of skills and art.
I get told a lot that AI is the future and I should embrace it. To which I enthusiastically retort: I ain't gotta do SHIT! And I'm going to tell you why!
It's Really Hard To Not Use Any Memes For This Section
Just let me get it out of my system.

Overly simplified, for sure, but let's not get too deep in the weeds here. My initial thoughts were, just, "oh, ok." Really, all it does is accept input– regardless of quality and consent– and spit it back out in some sort of recognizable way, with varying results.
Also, that bit about not coming for the AI Assistants? I lied. 😊
My favorite anecdote is Google's AI recommending that you add glue to your pizza sauce. Yes, it would technically make the sauce tackier and help the cheese adhere better, and technically non-toxic glue is edible. But that is part of the problem: no matter how much data you throw in there, language has nuances that a machine is incapable of catching. It's all math and really good guesses at the end of the day, and it just can't account for emotional weight, social matters, and general subjective human intricacies.
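If you want to see what "math and really good guesses" actually means, here's a minimal sketch in Python– toy vocabulary, made-up probabilities, not any real model– of what next-token prediction boils down to: score the candidate continuations, then roll the dice. If "glue" showed up near "pizza sauce" often enough in the scraped data, it earns a respectable score, and nothing in the process ever asks whether you should actually eat it.

```python
import random

# Toy "language model": made-up probabilities for what follows the prompt
# "to keep cheese from sliding off pizza, add ___".
# A real LLM scores a vocabulary of tens of thousands of tokens using
# billions of learned weights; the principle (weighted guessing) is the same.
next_token_probs = {
    "cornstarch": 0.30,
    "more cheese": 0.25,
    "a thicker sauce": 0.20,
    "glue": 0.15,   # present because joke posts got scraped, too
    "patience": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token weighted by probability (a 'really good guess')."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it a few times: most answers look sensible, but "glue" comes up
# roughly 15% of the time, and nothing here knows or cares that it's inedible.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```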
And on top of that, the damn thing is prone to just making up shit. Dubbed "hallucinations," statements with no basis in reality are occasionally shat out. Which requires you to double-check the work or have enough working knowledge of the topic you're ChatGPTing about to be able to catch the mistakes.
Not everyone has that.
I don't know anything about cars, despite driving since I was able to smoke legally. If DuckDuckGo's AI Assistant told me that my Ferrari required mayonnaise lubricant for its backup engine, I would take it at casual face value. Fortunately, I still employ critical thinking and follow the "Trust, But Verify" proverb (well, these days it's "Trust No Bitch and Verify Anyway").
For a less humorous example: if I asked it about a medical emergency and was too stressed to double-check, being fed misleading information would put me in a bad situation. My medical knowledge is also lacking, so I'd already be putting some trust on the line and hoping it works out. The worst that could happen is that I die.
Other people do put trust in a technology so black-boxed that not even its engineers fully know what's going on in there. So I find the band-aid of "just double-check behind the thing!" irresponsible. It's like telling people to "just don't get scammed"– it doesn't really address the fact that scams still exist or are even encouraged.
So if you're using generative AI for knowledge or shortcuts, but you gotta do the legwork to make sure it's accurate anyway... then what's the point?
Why don't you just do the work from the jump?
Wait, Wait, Where Was I
By the way. Can you be sure your generated bullshit was "ethically sourced?"
Can you, for realsies? Without any doubt?
No You Fucking Can't
For an LLM to work as well as possible, it needs a lot of data. Ideally, all the data, but the best we can do is "as much as we can possibly get our hands on, by any means necessary." It was decided that the best way was to plug in a connection to the WWW and call it good.
There is a lot of useful information out there, from academic papers to articles to research broken down in layperson's terms. Unfortunately, bigotry, biases, jokes, and lies are also part of that data. (I'd like to remind folx of the GIGO principle: Garbage In, Garbage Out. We shouldn't have been surprised when the thing turned out to be an occasionally bigoted, plagiarizing weirdo.)
There's also the matter of AI being trained on a lot of other people's hard work. Largely without permission. Sure, you have your "opt-out" checkbox... but why is it never "opt-in"? And how is it that I only found out such a thing had been quietly added via social media word-of-mouth?
Not that your consent seems to matter. The AI scrapers and crawler bots of the big guys will ignore the robots.txt file telling them to fuck off– and swipe your stuff anyway. And lie about it. And get caught in that lie.
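For anyone curious how that refusal is even possible: robots.txt is purely an honor system. Here's a minimal sketch in Python, using the standard library's urllib.robotparser and a made-up site, showing that the "am I allowed to crawl this?" check is something a scraper has to choose to run– nothing on your end enforces it.

```python
from urllib import robotparser

# A robots.txt like the ones people add to wave off AI crawlers.
# GPTBot and CCBot are real crawler names; the rules here are just an example.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks before fetching anything...
print(parser.can_fetch("GPTBot", "https://example.com/my-art/"))          # False
print(parser.can_fetch("RegularBrowser", "https://example.com/my-art/"))  # True

# ...but that check lives in the crawler's code, not on your server.
# A scraper that skips it can grab the page anyway, which is the whole complaint.
```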
Artists have been getting their work bootlegged and stolen since time immemorial, and digital artists were the canary in the coal mine when it came to generative AI. They were the first to rally against it, but they were scoffed at and written off as overreacting. Writers thought they were safe. Programmers assumed they were untouchable. White-ish Collar workers turned their noses up at the warning signs.
Then the LLMs got gud at coding, writing, and holding conversations too. Good enough for their bosses to notice. And the bosses realized that it didn't need a wage...
You Creatively Bankrupt, Cheap, and Greedy Fucking Bastards
Why pay actors those pesky ongoing salaries when you can just dump their likeness into a machine one time and use it forever and ever?
Why pay a real human being for the effort and practice they put into a profession when you can just regurgitate?
Why put in the hard work of actually putting something together with your own two hands when you can just throw a prompt at a robot?
it's cheaper!
Creativity isn't profitable for these leeches. Companies would sooner fire the art department than not have the money number go up– bad enough that people undervalue the creative arts in general. And generative AI is the perfect cost-cutting measure (until they need to hire people to fact-check and sanitize the output).
it's faster!
Once you're all about profit, all you care about is quantity. The more content you can pump out, the more eyes are on it, the more engagement, and the more money you can get. With this magic plagiarism machine's amazing turnaround, you can pump things out even quicker! It's why there are dorks with 20 "faceless" YouTube channels.
Pop Quiz! Out of the Project Management Triangle of "Good, Fast, Cheap" where you can only pick two– which one did you leave on the cutting room floor?
Yeah, that's fucking right.
because it's not good!
I don't care how sophisticated these things get, or how much clean-up and hand-holding you had to do to get it passable. It'll always be, and I hear this in James Stephanie Sterling's voice: fuuuuuking shiiiiiit.
You think the term "AI Slop" came out of nowhere? It is some of the most below-bare-minimum content I have had the misfortune to battle on every social media site I still have a foot on. It floods the sites, crowding out real people who put time and money into their hard-earned craft. It impacts livelihoods, disrupts online communities, and helps further wreck the planet.
This Was Not The Future I Envisioned
Listen, bitch. I grew up with many sci-fi scenarios. I look around me and despair, not only because I still don't have a jetpack, but because we seem to have taken the wrong lesson from what the more optimistic sci-fi tales have been telling us.
Machines were supposed to be crunching the numbers and doing the boring, dangerous, menial tasks so humanity could have more time to create and enrich ourselves. Now it is machines that "create," replacing real people and lining the pockets of those who only see creativity as a soulless cash grab. Content is just a means to an end (and that end is money).
Say it with me: CREAM. Capitalism Ruins Everything Around Me. The money number has to go up. Forever. And they'll literally set the world on fire to do it.
re: Wrecking the Planet
To generate shit takes an enormous amount of computing power, to the point where Microsoft helped resurrect Three Mile Island to keep their data centers churning. Training the model is intense enough, and once you factor in the usage of said model by a lot of people, it only gets worse. Further pollution, increased carbon footprint, a– shit, I lost you, didn't I?
Go look up how much water it takes to churn out a single generated image. I'll wait. Is doing all that instead of picking up a damn pencil or planning an outline really worth it?
Still Not Convinced?
I didn't do this for you (initially, or mainly). This post was a failed pressure valve before I ended up cussing people out at the next function with the energy of Extinction Party.
Do I feel better? Physically, yes; I can breathe easier and all that. Venting is good sometimes. While I may revisit this topic on a much smaller scale, I feel like I have unloaded a lot of its weight. I can let go. I release it!
But otherwise? Er. No. It feels like not enough people care, and the people that should don't listen to me anyway. So into the void this goes. If some people see it and come around, that'd be a nice bonus.
Guess What Tho
I did all that shit up there and I didn't need a nuclear power plant or very specific wording to do it! Chew on that. And go eat some rocks too, I guess. Gemini says we can.

Now to work on something more positive. Next time, I'll ramble on about smallweb!
Shut up and go in peace.
For More Reading
And hey, you don't have to take my angry words for it! Here are some measured responses, academic papers, and so on you can read up on! ReadingRainbowChord.wav
Luddites and Other Killjoys
- What is a 'Luddite' in the age of ChatGPT and AI?
- Hidden Costs of AI and the Case for Luddite Thinking
- What the Luddites Can Teach Us About Artificial Intelligence
- The Global Creative Community Stands Unified Against Unchecked AI Use
- Artists stand up for their rights against AI threats
- Visual artists fight back against AI companies for repurposing their work
In General, LLMs
- What is a Large Language Model (LLM)
- How Large Language Models work
- A jargon-free explanation of how AI large language models work (but it is still pretty long– it's also thorough)
- A Beginner's Guide to LLMs – What's a Large-Language Model and How Does it Work?
- Why We Need to See Inside AI’s Black Box
- AI's mysterious ‘black box’ problem, explained
Plagiarism and Creative Theft
Misinformation
AI hallucinations may seem like a technical oddity, but their real-world implications are far from trivial. In fields like healthcare, law, journalism, education, and finance, a hallucinated answer can lead to misinformation, misdiagnosis, or misjudgment. (What Are AI Hallucinations and Why Do They Happen?)
- AI hallucinations lead to a new cyber threat: Slopsquatting
- AI Hallucination: A Guide With Examples
- How generative AI is boosting the spread of disinformation and propaganda
- Generative AI is the ultimate disinformation amplifier
Social Impact
I agree with powell and Menendian’s assessment of othering, and argue that emerging information technology is amplifying otherness through neglect, exclusion, and disinformation, all of which have significant consequences. (Amplifying "Otherness")
- When AI Gets It Wrong: Addressing AI Hallucinations and Bias
- New Research Will Explore How Marginalized Communities are Addressing AI
- Social Dangers of Generative Artificial Intelligence: Review and Guidelines
- The social impact of generative LLM-based AI
- The impact of generative artificial intelligence on socioeconomic inequalities and policy making (lists positives and negatives!)
Environmental Impact
- AI’s Secret Guilt: Unveiling the Hidden Energy Costs of Image Generation
- How much electricity does AI consume?
- Making an image with generative AI uses as much energy as charging your phone
- The Hidden Cost of AI Images: How Generating One Could Power Your Fridge for Hours
- AI Energy Consumption: Is It a Problem?
- Power Hungry Processing: Watts Driving the Cost of AI Deployment? is the paper referenced in most of these articles.
Other Articles I Just Liked I Guess
The massive time investment and vulnerability required to open your art up to criticism is one of the most beautiful things about the process. Where is the joy in operating through a command line? Where is the sense of self-accomplishment?