What You're Actually Paying For
Ted Chiang said something that stuck with me: most critiques of AI are really critiques of capitalism. People aren't afraid of the technology, they're bracing for the ways it will inevitably be used against them. AI is not inherently sinister, yet every tool that enters the market gets absorbed by the same structure — funded to grow, designed to extract, priced to recoup someone else's bet. AI didn't invent that. It's just the latest thing to pass through it.
A clipboard manager costs eight dollars a month now, not because clipboard management is complicated, but because the company behind it raised money, hired a growth team, and needs you to keep paying so the math works out for "stakeholders" (a term that seems like it should be inclusive of the people who actually use the product, but isn't). We don't pay for products, we pay for the structures behind them. This is so normal we forget it's a choice.
The deflection
When people say they're against AI, I think most of them are describing something real but pointing at the wrong thing. They've watched gig platforms use algorithms to suppress wages. They've seen generated content flood the internet, not because anyone wanted it, but because it's cheap to produce and profitable to serve. They've heard companies announce AI-driven layoffs while posting record revenue.
None of that is a property of the technology. All of it is a property of the incentive structure the technology was deployed into. The tool isn't the problem. The ownership is. But "I'm angry at the economic structure that dictates how every new technology gets used" doesn't travel as easily as "Fuck AI."
Then there's the other dismissal, which is subtler and more self-congratulatory: "AI is slop." Everything it produces is derivative, soulless, a flood of mediocrity. This one gets dressed up in the language of craft and discernment — people who've never used the tools with any seriousness adopting a posture of rigor to write them off. It's the kind of confident, sweeping judgment that tells you more about the person's need to feel above something than about the thing itself. The framing is aesthetic, but the function is the same: a reason not to engage, not to reckon with what the technology actually does when someone uses it with care.

"It's all slop" is a comforting position. It means nothing has to change — not the economics of creative work, not the gatekeeping, not the assumption that value requires a certain kind of pedigree. It's what-aboutism dressed as taste.

Underneath it is a more foundational claim: that AI doesn't think like humans do, and therefore its output doesn't count. But our understanding of what human thought actually is — mechanistically, neurologically — is remarkably shallow. We don't have a working theory of consciousness. We barely understand memory. We assign a kind of magical status to human cognition and then use that mystique as the benchmark, which conveniently guarantees that nothing else clears the bar.
The pattern underneath
The logo for this suite is my dog, Hank, a.k.a. Moopy. He was a "pit bull," a term that evokes fear and judgment in people, yet is derived from two separate forms of animal torture invented for human entertainment. Hank was the most gentle, anxious creature I've ever known, a fact irrelevant to the incurious who'd already decided what he was. Conclusions come first. Evidence is tacked on later, if necessary.
I notice this pattern everywhere — the pull toward the smallest possible circle of recognition. People don't extend consideration to things they're disincentivized to consider. Philosophers see one version of this in the "other minds" problem: we have zero direct evidence of anyone's consciousness, yet we attribute rich inner lives to those inside our circle without hesitation — our families, our colleagues, our troops. For everything outside it, the burden of proof reverses. You have to demonstrate that you deserve consideration, on terms set by the people disincentivized to give it.
This doesn't require a position on whether AI is conscious, or whether animals experience suffering the way we do. The point is narrower and harder to dodge: the certainty of the denial always tracks with the interests of the denier. Animals can't really suffer, so the industry doesn't have to change. AI can't really think, so we don't have to reckon with it. Users don't really care about privacy, so we don't have to respect it. Three different kinds of denial — of consciousness, of capability, of preference — but the same convenient conclusion every time.
This isn't a digression from the capitalism point. It is the capitalism point. The whole system runs on the ability to withhold consideration. To treat people as metrics, animals as inventory, attention as a natural resource. Dismissal isn't a side effect. It's the operating principle.
What AI actually changes
Here's what I keep coming back to: AI makes this structure optional. Not theoretically, but right now. I use AI to build this suite, not as a gimmick or a talking point, but because it lets me skip the part where I need investors, a team, a growth strategy, and all the compromises that come with them. One person can now build what used to require a company — not a company's worth of quality, that's a separate conversation, but a company's worth of output. The bottleneck used to be labor. Now it's taste, and care, and whether you actually want to make something good or just something fundable.
When your entire structure is one person who gives a shit, the math is different. You don't need to track users because you don't need to sell their attention. You don't need to optimize for engagement because you don't need to report engagement. You don't need dark patterns because there's no one to impress with the numbers. No one is looking over my shoulder asking about retention metrics. No one needs this to be a unicorn. It just needs to be useful. The technology that people are afraid of is the same technology that makes it possible to opt out of the system they're actually afraid of.
The feeling
There's something underneath this that's harder to name — a kind of anger at how the default works. At the way every tool becomes a funnel. At the way convenience is always the bait for something extractive. At the way people are trained to be suspicious of technology that could genuinely help them, because they've been burned so many times by the people deploying it.
And alongside the anger, something quieter. The same feeling I got watching people flinch at my dog, or reading someone breezily dismiss a technology they've never tried to understand. A stubbornness, maybe. A refusal to accept that the way things are is the way they have to be. The structure isn't physics. People chose it. People can choose differently. That's all this is — choosing differently because it's possible now, and because charging what everyone else charges — knowing why they charge it — would be its own kind of dishonesty.