The Real Threat Isn’t AI. It’s What We’re Willing to Sacrifice.
How generative tools are exposing the cracks in how we value creative labor, education, and human contribution
I've been trying to write an article about AI for a few months now. I've drafted a lengthy piece a handful of times and not published it, unable to fully articulate my genuine thoughts on the issue. I work in a creative field. I am a writer in progress, and in the writing world AI is an enormous, heated conversation.
Every few weeks, someone on social media argues that using AI-generated writing or art is fine because they "are disabled" or "can't draw," or because the software is simply "inspired" by existing art, as if copyright law and artistic ethics are just a vibe. But generative tools like ChatGPT, Midjourney, and ElevenLabs don't create the way people do. They synthesize from scraped, unconsented data. Derivative designs. Theft. That makes the platforms unethical before a user even interacts with them.
Still, AI is not the enemy. It's a tool, like any other. The problem is people. The people who make unethical choices about how to train their AI tools. The unethical choices people make when they use those tools. And, most importantly, people's willingness to devalue craft to justify that use.
AI Narration and the Audible Dilemma
A few years ago, my mutual RSK, author of The Adventures of Hemera Nyx in the Galaxy of the Future, uploaded an audiobook to Audible using a narrator who'd spent months carefully crafting their performance. These were the early days of AI, so when that narrator used AI to generate the women's voices in the final product, RSK couldn't tell. The quality was high, and the product was good. There was something a little off, he had to admit, but nothing he could pin down.
Soon ACX, Audible's distribution platform, rejected the audiobook specifically for its use of AI, and RSK was left with an expensive product he could not sell.
RSK argues that, at the time, the rejection wasn't about profit. After all, Audible made money every time someone purchased that audiobook. They weren't investing anything in it, only taking a sizable cut. No work, no production costs, just passive gain. So why reject it?
Because AI narration, back then, was poor quality. Maybe not RSK's audiobook specifically, which sounded quite good, but most AI audiobooks sounded like trash, and that was hurting Amazon's bottom line. If consumers were going to spend their precious money, they neither wanted nor deserved a robotic voice with no nuance. So Audible took a principled stance: it would accept only human-read audiobooks, to protect quality standards and consumer trust.
But fast forward to now: Amazon has completely flipped the script, announcing in May 2025 that it plans to "offer more than 100 artificial intelligence-generated voices in English and other languages."
Why now, when they were so firm before? It's because AI voices and narration have improved. If an AI-generated audiobook sounds good enough that people won't complain to Amazon or ask for returns, it becomes much cheaper for Amazon to produce audiobooks that way and churn them out at scale. They haven't changed their standard because it's more ethical. They've changed it out of greed.
Hypocrisy in the Writing Community
Most writers I know rail against the use of AI to generate books, fearing it will dilute the value of authentic storytelling and the craft of the art. Yet many of those same writers defend AI-generated art for book covers or promotional materials with statements like "I'm not an illustrator" or "It's just a tool."
But if you demand respect for your writing as labor, if you want readers to pay you rather than pirate your work and to understand the value of your craft, you can't simultaneously devalue the labor of writing, illustration, or narration by handing that craft to AI. You can't say, "I told the AI what to do, so it's my vision," while slapping your name onto generative text scraped from your favorite authors, or while refusing to pay a real artist or narrator who could bring that vision to life.
This is creative hypocrisy.
I recently spoke with someone in traditional publishing who said, "Everyone is using AI. They just don't talk about it." The data backs this up.
As AI tools become more normalized, he suspects that human-made work will become a premium product, a luxury. If you want a real narrator, a real illustrator, or a real writer, that will be the expensive option. The default will be automation. And that means a future where most of our stories, images, and voices are generated by code trained on unpaid labor.
We're actively choosing to devalue creative work by erasing the humans behind it.
I have had so many conversations with my fellow writers around one question: "Can AI use be ethical at any stage?" Their answers were all over the map.
Some said AI is okay for small things like generating names or brainstorming ideas. But where do we draw the line? Drafting a scene? Structuring an outline or developing character bios? What if it's 1 a.m. and you need a sounding board for ideas? Or does the line only get crossed when an AI writes the entire novel?
None of us could agree on a hard-and-fast line, yet all of us were vehemently against AI. And that tells me we are being reactive and scared rather than thinking critically about the issue.
After months of conversations, what I've determined (for my own value system) is that we need regulation and an honest conversation about ethics. More on that later.
Anti-intellectualism in Education
One TikToker recently summed up my mangled thoughts on the issue, at least outside the writing world. A former college professor, he taught a class centered on ethics, community, and being a good person. When AI tools started becoming more available, he noticed a shift in his students. Many asked: what's the point of doing the work when a tool can do it for me? Why should I reflect on morality or community when I'm just here for a degree to get a job?
They weren't being lazy. They'd been taught to optimize. They had learned the truth of our system: education is no longer seen as a process of learning, growth, and becoming a critically thinking citizen. Instead, it is a credentialing pipeline to the labor market. This is a sentiment I have felt for nearly a decade.
Many years ago, I applied for my dream job at the library of a private college. Long story short, while speaking to the student workers I mentioned that college is a time to explore your options and learn who you are and what you want in life. Behind the students, a librarian bristled. A LIBRARIAN. And honestly, they were right to. With the high price of education, who can actually afford to explore their interests freely?
Now enter AI, a potentially quick ticket to your expensive piece of paper and a shiny new job. Capitalism has taught students that growth, curiosity, and ethical development aren't valuable. Only productivity and income are.
But AI isn't breaking education. Remember, the thesis of this article is that people are the problem. AI has simply revealed what we'd already lost.
Banning AI won’t fix this. Pretending students or creators won’t use it is naive.
We Need Ethics, Not Just Tools
In the 1990s, people feared the internet for its potential to spread harm and misinformation and to displace labor. Those fears weren't unfounded, but the internet didn't vanish. It evolved, and so did we. AI is in a similar moment. The concern is real, and in some ways justified.
Just as in the internet's early days, we need ethical training as we grow more entangled with this new technology. We need AI education rooted in moral philosophy, one that teaches people to evaluate technology through the lens of harm, equity, and responsibility.
Because AI isn't harmful. People are. People stole works to train AI. People built data centers that are damaging the environment. Those people need to be held accountable for real change. As with any product, the burden of positive change shouldn't fall on the consumer.
Instead of shaming students who use AI to write a paper, we should genuinely teach the pros and cons of the tool, and increase the rigor of assignments by teaching students to ask themselves hard questions.
Creative labor is not just a means to a product. Education is not just a means to a paycheck. If we treat them that way, we will lose more than jobs. We will lose the very reason we create, learn, and grow in the first place.
AI won't destroy us. But our lack of ethics surrounding the tool will.