Difference Engines and Whale Song

Many people have misgivings about AI, especially the generative flavour. It’s not really intelligent, they say. It has no feelings. Fine. I’ll cede those points without so much as a flinch.

But here’s the thing: some use cases don’t require intelligence, and feelings would only get in the way.

Take one of mine. I feed my manuscripts into various AIs – is that the accepted plural? – and ask them, “What does this read like? Who does it read like?” I want to know about content, flavour, format, cadence, posture, and gait.

A human could answer that too – if that human had read my manuscript, had read a million others, and could make the connexions without confusing me with their personal taste, petty grievances, or wine intake. AI just spits out patterns. It doesn’t need a soul. It needs data and a difference engine.

Cue the ecologists, stage left, to witter on about climate change and saving the whales. Worthy topics, granted, but that’s a different issue. This is where the conversation slides from “AI is bad because…” to “Let’s move the goalposts so far they’re in another sport entirely.”

I’m not asking my AI to feel, or to virtue-signal, or to single-handedly fix the carbon cycle. I’m asking it to tell me whether my chapter reads like Woolf, Vonnegut, or the back of a cereal box. And for that, it’s already doing just fine.

Why I Create Audiobooks for All My Books

This isn’t a promotional post. I’ve recently discovered the hidden value of audiobooks—and it has nothing to do with selling them.

Back in 2024, when I released Hemo Sapiens: Awakening, I must have read the manuscript a thousand times. I even produced an audiobook, narrated by an AI voice from ElevenLabs. At the time, Audible wouldn't accept AI narration. The rules have since changed. It's now available—though still not on Audible (and therefore not on Amazon).

I’d hired a few proofreaders and beta readers. They helped. The book improved. And yet, even after all that, I still found typos. Those bastards are insidious.

The real revelation came when I started listening.

Since I’d already created the audiobook, I began proofreading by ear. That’s when it hit me: hearing the story is nothing like reading it. Sentences that looked fine on the page fell flat aloud. So I rewrote passages—not for grammar, but for cadence, clarity, flow.

Then came the second benefit: catching mistakes. Typos. Tense slips. I favour first-person, present-tense, limited point of view—it’s immersive, intimate, synchronised with the protagonist’s thoughts. But sometimes, I slip. Listening helped me catch those lapses, especially the subtle ones a skim-reading brain politely ignores.

For Sustenance, the audiobook was an afterthought. I submitted the print files, requested a proof copy, and while I waited, I rendered the audio. When the proof arrived, I listened instead of reading. I found errors. Again. Thanks to that timing, I could fix them before production. Of course, fixing the manuscript meant updating the audiobook. A pain—but worth it.

I hadn’t planned to make an audiobook for Propensity—some of the prose is too stylised, too internal—but I did anyway, because of what I’d learned from Sustenance. And again, I found too many errors. Maybe I need better proofreaders. Or maybe this is just the fallback system now.

I’ve had Temporal Babel, a novelette, on hold for months. I won’t release it until I do the same: make an audiobook, listen, reconcile with the page.

Lesson learned.

I’ve got several more manuscripts waiting in the wings—some have been loitering there for over a year. Their release has been deprioritised for various reasons, but when they go out, they’ll have audio versions too. Not for the sake of listeners. For me.

Honestly, I should do this for my blog posts as well. But editing on the web is easier. The stakes are lower. Mistakes don’t print themselves in ink.