The triple-edged sword of Generative AI
Generative AI lets me realise more of my thinking, but it also deepens my dependence on a system that devalues thought, displaces labour, and is built on the unpaid work of others.
This post is more personal than most I’ve written for Artificial Thought. It begins with a tension I’ve been sitting with: I’m thriving in a system that’s harming others, and I’m not pretending otherwise. It’s a long and potentially uncomfortable read, so grab a coffee and sit in a comfortable chair.
Over the past year, generative AI has shifted from novelty to infrastructure. It’s no longer a future threat; it’s already reshaping creative and white-collar work by hollowing out job roles, rewriting expectations, and reducing once-specialised tasks to generic outputs. This post sits inside that tension - written by someone who’s both reliant on these tools and deeply uneasy about the system they reinforce.
Many companies now act on the assumption that “good enough” can be automated. That logic naturally leads to job cuts, but it also redefines what creative work is. Content becomes supply: fast, cheap, and infinitely replaceable. And the rules of visibility and value shift accordingly.
Yet here’s the contradiction: job losses are treated as a systemic crisis, while individual AI use is still seen as a shortcut. We criticise organisations for replacing people with machines, then judge people for using the same tools to stay afloat. The tool is suspect whether it’s imposed on someone or used by them.
I’ve hesitated to talk about how I use AI professionally because of the stigma, but the reality is that these tools have fundamentally reshaped how I work, think, and create. Finding an ethical balance isn’t straightforward, and I don’t want Artificial Thought to suggest that it is. I’m not complacent about what these tools are replacing or what they’re reshaping.
It’s not just roles that are disappearing; it’s the value of slower processes, less visible expertise, and the kind of intellectual labour that doesn’t scale well. When I use AI to move faster or be more prolific, I benefit from a system that increasingly sidelines the people who don’t work that way. Ignoring that would feel like complicity.
An engine of transformation
AI has made it possible for more of my thinking and more of my creative intent to exist in the world, but the fact that I need it at all says something about the system I’m working inside.
Generative AI has been genuinely transformative for me. It’s helped me do something I’ve always struggled with: keep pace with my own brain. My ideas tend to arrive faster than I can capture them, shifting before they’re fully formed, let alone articulated. Large language models offer a strange kind of synchrony because they let me stay with an idea long enough to develop it. They also help externalise half-formed concepts before I lose interest and give shape to things that would otherwise stay as drafts.
It’s not just about writing: I often see ideas visually - in layouts, metaphors, images - but I don’t have the design training or time to execute them well. Generative tools help me realise those concepts in a way that feels close enough to what I imagined. That shift from half-expressed to fully realised is emotional, because there is a lot of joy in being able to do justice to what I see and think. In short, generative AI has helped me become, professionally, closer to the person I’ve always wanted to be.
Of course, expressing ideas is only half the story. If ideas aren’t visible, they don’t count - and if they’re not shaped to travel, they don’t move.
These tools have made me more productive, more prolific, and more visible - but they’ve also made me complicit in a system that demands all of those things just to stay afloat.
I don’t write so prolifically just because I enjoy it - I also write because I have to. I work for myself, and in this ecosystem, visibility is survival. The only way to signal expertise or relevance (especially in work that’s conceptual and hard to measure) is to share ideas. Repeatedly. Publicly. Across platforms that reward activity more than insight.
LinkedIn rewards consistency. Substack rewards rhythm. To stay present in the feed, I have to continuously produce digestible ideas that travel easily and accumulate volume. It’s not just depth that matters anymore - it’s the cadence. And sustaining that rhythm is almost impossible without help. That’s where AI becomes a lifeline. It lets me move from idea to output quickly enough to maintain presence without burning out. Without it, the return on effort would be too low to justify the frequency required.
The true cost of free knowledge
We all benefit from unpaid intellectual labour - whether we’re using LLMs or just reading carefully shaped thinking online without paying for it.
There’s a widening gap between being a knowledge worker and being a public knowledge creator, and that gap is filled with invisible labour. The work I share here and on Thinking About Behavior isn’t paid for, but it isn’t free to make. There’s the thinking itself, and then there’s the structure, tone, and rhythm. The shaping of an idea so it’s legible in a feed, clickable in a newsletter, skimmable in a browser. Visual framing, metadata, and formatting form the soft-edge work that makes something feel coherent and intentional. None of it happens by accident, and none of it is automated.
I learned how to do that work through years in branding and marketing. In salaried knowledge roles, much of the scaffolding is built around you, but when you publish for free, in public, without institutional backing, you’re building the entire frame yourself - I am my own marketing department, R&D and press office.
That difference came into focus when I described to my sister, who’s spent her career in salaried knowledge jobs, how I use AI tools to create Artificial Thought. She was impressed by what I’d made, but also quietly shocked. Why give away expertise for free? In her world, knowledge is paid for. The idea of packaging it for public consumption without being commissioned felt somewhere between baffling and absurd.
And that’s the divide: knowledge work as employment versus knowledge work as public output. One assumes value. The other has to prove it repeatedly, visibly, and with polish. It’s a performance of credibility, sustained until it sticks.
Without generative tools, I’d still write - but given that I’m not paid to write, the cost of turning ideas into finished work would often outweigh the return. Only a small fraction of readers - perhaps 1% - ever become clients, and I have to guess what that fraction might find relevant or persuasive enough to trust me with their own complex problems.
For contrast: a 1,000-word piece like this would cost a minimum of €800 to commission from a professional copywriter, or more if I engaged a technical writer. Yet I write these for free, every week.
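Run the back-of-the-envelope maths and the scale becomes obvious: at €800 a piece, a weekly cadence works out at roughly €40,000 of commissioned writing given away each year.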
That dynamic isn’t unique to AI: if you’re reading this, learning from it, using it to sharpen your thinking (and wouldn’t consider paying for it), you’re still benefiting from someone else’s unpaid intellectual labour. The only difference is that this was written for you, not scraped and repackaged by a model. Yet the expectation is similar: that high-quality, professionally relevant knowledge should be free at the point of use. That someone, somewhere, will keep producing it for visibility, goodwill, or speculative future return.
It’s easy to critique generative AI for using work without permission, but the cultural shift to consuming thoughtful work without compensation came first. Generative AI didn’t invent the logic - it just scaled it.
The glass house of capitalism
The irony is that the tools I rely on were trained, in part, on the unpaid work of others - writers, designers, and creators whose outputs were scraped, repurposed, and absorbed into systems that now replace them. That labour is invisible too, and it sits beneath everything I make with these tools.
At some point, I may put Artificial Thought behind a paywall. If that happens, would you keep reading? Would you pay for it - even if it were just a few euros a month?
If the answer is no, not even a little, then it’s worth sitting with what that implies - not as a judgment from me, but as a pattern worth noticing. Most people in knowledge work have come to expect that insight will be shared for free. That intellectual labour is a form of self-promotion. That self-promotion is now part of the job, and it’s rewarded by exposure - a dynamic those in the creative arts have long known well.
It’s a strange setup: we consume writing that looks professional, thoughtful, and intentional, often without asking what it cost to produce - or who’s covering that cost if we’re not. We expect polish and clarity from people whose income doesn’t come from writing, but from other work they hope the writing will one day bring them.
It’s a giant glass house, and most of us are in it.
We critique the changing shape of creative work and throw stones at the use of generative tools, but we rarely look at the system that makes them necessary. One that demands visibility, punishes slowness, and turns unpaid intellectual output into a baseline expectation.
It’s not just a quirk of platform culture - it’s the logic of neoliberal capitalism, where value is demonstrated through performance, attention is treated as currency, and self-promotion becomes a condition of survival. In that context, generative AI isn’t the disruptor. It’s the enabler. It makes the system more livable, even as it reinforces its shape.
I’m not outside it either. I benefit from free ideas as much as anyone. I move faster when content is clean and well-framed. I read work shaped by the same pressures that shape mine. This isn’t about hypocrisy. There are no villains here - only a system, and the incentives it creates. Even now, most generative tools are subsidised by venture capital. People aren’t paying real costs yet, but value is still being extracted from someone else, somewhere else.
Not a conclusion, but a reckoning
This is the triple-edged sword of Generative AI: augmentation, dependence, entanglement.
It has helped me think more clearly and create more fully. It’s augmented my capacity not just to produce, but to stay with ideas long enough to shape them. It’s made it possible to express things I would otherwise lose, and in doing so, has pulled more of my thinking into the world.
Yet that augmentation comes at a cost: to stay visible, to stay relevant, to stay viable, I’ve come to depend on these tools to keep pace with the rhythm that platforms and economies now expect. It’s a structural dependence - not on AI itself, but on the logic it enables: faster cycles, constant presence, uninterrupted output.
With that dependence comes entanglement, because the tools I use are built on systems I question. They’re trained on the unpaid labour of others. They reinforce the pace and shape of a culture I’m not sure I want to be part of. They make me more productive, even as they narrow the space where slower, less visible forms of work can survive.
I don’t think there’s a clean way to separate those edges because they overlap, reinforce each other, and blur at the margins. Recognising them helps because it keeps me from pretending that this is simple, neutral or free.
So yes, I’ll keep using these tools. I’ll keep writing, thinking, publishing. But I’ll also keep noticing the system I’m participating in and what it asks of those who want to stay visible inside it.
If you are interested in the economics of generative AI companies, Ed Zitron’s newsletters do a good job of explaining things:
Generative AI lacks the basic unit economics, product-market fit, or market penetration associated with any meaningful software boom, and outside of OpenAI, the industry may be pathetically, hopelessly small, all while providing few meaningful business returns and constantly losing money.
As a result of the costs of running these services, a free user of ChatGPT is a cost burden on OpenAI, as is every free customer of Google's Gemini, Anthropic's Claude, Perplexity, or any other generative AI company. Said costs are also so severe that even paying customers lose these companies money. Even the most successful company in the business appears to have no way to stop burning money.
From There Is No AI Revolution (Feb 24, 2025)
OpenAI is burning money, will only burn more money, and to continue burning more money it will have to raise money from investors that are signing a document that says "we may never make a profit."
From The Subprime AI Crisis (Sep 16, 2024)
Also see: How Does OpenAI Survive? (Jul 29, 2024) and The Generative AI Con (Feb 17, 2025)