The Veneer of Human Exceptionalism in Art

Robotic La Joconde

I don’t want to develop a reputation as an AI apologist – I really don’t. But I do want to strip away the veneer humans so lovingly lacquer over themselves: the idea that art is some mystical emanation of a “soul,” accessible only to those blessed by the Muse and willing to suffer nobly in a garret.

Video: YouTube Short by Jonny Thompson of his interview with Rachel Barr

Rachel Barr argues that AI art can never be the same as human art, no matter how “perfect,” because AI has no feelings or drive. Cue the violins. These arguments always seem to hinge on metaphysical window-dressing. When Rachel says “we”, she’s not talking about humanity at large; she’s talking about herself and a very particular subset of humans who identify as artists. And when she invokes “masters”, the circle shrinks still further, to the cloistered guild who’ve anointed themselves the keepers of aesthetic legitimacy.

But here’s the bit they’d rather you didn’t notice: feelings and drive aren’t prerequisites for art. They’re just one of the many myths humans tell about art, usually the most flattering one. Strip away the Romantic varnish and art is often craft, habit, accident, repetition. A compulsive tic in oil paint. A mistake on the guitar that somehow worked. A poet bashing words together until something sticks.

And I say this not as a detached observer but as a writer, artist, and musician in my own right. I sympathise with the instinct to defend one’s turf, but I don’t need to steep myself in hubris to retain self-worth. My work stands or falls on its own. It doesn’t require a metaphysical monopoly.

So when someone insists AI art can never be “the same,” what they mean is it doesn’t flatter our myths. Because if an algorithm can spit out a perfect sonnet or an exquisite image without the tortured soul attached, then what have we been worshipping all this time? The art itself, or the halo around the artist?

Perhaps the real fear isn’t that AI art lacks feelings. It’s that human art doesn’t require them either. And that’s a blow to the species ego – an ego already so fragile it cracks if you so much as ask whether the Mona Lisa is just paint on a board.

Generative AI and the Myth of Emotion

Critics never tire of reminding us that AI has no emotions, as though this were some startling revelation. Next, perhaps, they’ll inform us that penguins can’t fly and that bankers are allergic to honesty. Yes, generative AI has no emotions. But must we wheel in the fainting couches? Writers don’t need it to sob into its silicon sleeve.

Full disclosure: I am a writer of fiction and non-fiction alike. I am also a language philosopher and a technologist. I’ve worked with artificial intelligence since the early ’90s, beginning with Wave 3 – expert systems – and I remain involved with our current incarnation, Wave 4 – generative AI. I know that artificial intelligence has no intelligence. I also know that intelligence is ill-defined and freighted with metaphysical claims, so there’s that…

Meantime, let’s stroll, briskly, through three ghosts of philosophy: Saussure, Wittgenstein, and Derrida.

Audio: NotebookLM podcast on this topic.

Saussure and the Tree That Isn’t There

Ferdinand de Saussure gave us the tidy structuralist package: the signified (the thing itself, say, a tree) and the signifier (the sound, the squiggle, the utterance “tree,” “arbre,” “árbol”). Lovely when we’re talking about branches and bark. Less useful when we stray into abstractions—justice, freedom, love—the slippery things that dissolve under scrutiny.

Image: Saussure’s Signified and Signifiers

Still, Saussure’s model gets us so far. AI has consumed entire forests of texts and images. It “knows” trees in the sense that it can output something you and I would recognise as one. Does it see trees when it dreams? Do Androids Dream of Electric Sheep? Of course not. But neither do you when you define one.

René Magritte’s famous painting reminds us that the reference is not the object.

Image: Magritte’s Ceci n’est pas une pipe (This is not a pipe)

Wittgenstein and the Dictionary Without a Key

Ludwig Wittgenstein, that glorious thorn, tore the Saussurean comfort blanket to shreds. Words, he said, are not tethered to the world with neat strings. They define themselves by what they are not. A tree is a tree because it is not a cow, a kettle, or an Aston Martin.

Image: Tree, isolated

Take a typical dictionary entry for tree: “a woody perennial plant, typically with a single stem or trunk.”

What’s woody? What’s perennial? If you already speak English, you nod along. If you’re an alien with no prior knowledge, you’ve learned nothing. Dictionaries are tautological loops; words point only to more words. If you want to play along in another language, here’s a Russian equivalent.

AI, like Wittgenstein’s alien, sits inside the loop. It never “sees” a tree but recognises the patterns of description. And this is enough. Give it your prompt, and it dutifully produces something we humans identify as a tree. Not your tree, not my tree, but plausibly treelike. Which is, incidentally, all any of us ever manage with language.
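The tautological loop is easy to make concrete. Below is a minimal sketch – a toy lexicon of my own invention, not any real dictionary – in which every definition is built only from other headwords. Walking the definitions from “tree” never escapes to the world; it only circles back into the lexicon.

```python
# Hypothetical toy lexicon: every definition is composed solely of other headwords.
toy = {
    "tree":  ["plant", "wood"],
    "plant": ["tree", "grow"],
    "wood":  ["tree", "plant"],
    "grow":  ["plant"],
}

def reachable(word, lexicon):
    """Follow definitions from `word`, collecting every word encountered."""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        if w in seen:
            continue
        seen.add(w)
        stack.extend(lexicon.get(w, []))
    return seen

# Every path from "tree" leads only to more headwords: a closed loop.
print(reachable("tree", toy))
```

Run it and you get the whole lexicon back: every word defined, none of them grounded in anything but other words.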

Derrida, Difference, and Emotional Overtones

Enter Jacques Derrida with his deconstructive wrecking ball. Language, he reminds us, privileges pairs—male/female, black/white—where one term lords it over the other. These pairs carry emotional weight: power, hierarchy, exclusion. The charge isn’t in the bark of the word, but in the cultural forest around it.

AI doesn’t “feel” the weight of male over female, but it registers that Tolstoy, Austen, Baldwin, Beauvoir, or Butler did. And it can reproduce the linguistic trace of that imbalance. Which is precisely what writers do: not transmit private emotion, but arrange words that conjure emotion in readers.

On Reading Without Tears

I recently stumbled on the claim that AI cannot “read.” Merriam-Webster defines reading as “to receive or take in the sense of (letters, symbols, etc.), especially by sight or touch.” AI most certainly does this—just not with eyeballs. To deny it the label is to engage in etymological protectionism, a petty nationalism of words.

The Point Writers Keep Missing

Here is the uncomfortable truth: when you write, your own emotions are irrelevant. You may weep over the keyboard like a tragic Byronic hero, but the reader may shrug. Or worse, laugh. Writing is not a syringe injecting your feelings into another’s bloodstream. It is a conjuring act with language.

AI can conjure. It has read Tolstoy, Ishiguro, Morrison, Murakami. It knows how words relate, exclude, and resonate. If it reproduces emotional cadence, that is all that matters. The question is not whether it feels but whether you, the reader, do.

So yes, AI has no emotions. Neither does your dictionary. And yet both will continue to outlast your heartbreak.

Sturgeon’s Law, AI, and the Literary Ivory Tower

Let’s get this out of the way: Sturgeon’s Law – ‘90% of everything is crap’ – isn’t pessimism; it’s statistics. That includes your favourite novel, the collected works of Joyce, and, yes, AI-generated text. The key point? If AI output follows the same bell curve as human output, some slice of that curve will still be better than what most people write. If Pareto’s Rule feels better at 80%, I’ll cede that ten points.
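The bell-curve point can be shown in a few lines. This is a toy simulation with purely illustrative numbers (mean 50, standard deviation 10, arbitrary “quality” units), assuming human and AI output share the same distribution:

```python
import random
import statistics

random.seed(42)

# Toy assumption: AI and human output drawn from the same quality bell curve.
human = [random.gauss(50, 10) for _ in range(10_000)]
ai = [random.gauss(50, 10) for _ in range(10_000)]

# By construction, roughly half of the AI pieces outscore the median human piece.
median_human = statistics.median(human)
share_better = sum(q > median_human for q in ai) / len(ai)
print(f"AI pieces above the human median: {share_better:.0%}")
```

No miracle required: identical distributions guarantee that a large slice of machine output beats the typical human piece.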

And before anyone gets misty-eyed about “human genius,” let’s remember that the average American adult reads at a 7th or 8th grade level, and more than half read at or below a 6th grade level. Nearly 1 in 5 reads below a 3rd grade level. That’s not a moral failing, but it is a market reality. We can wail about AI not producing the next Nabokov, but let’s be honest, Nabokov isn’t exactly topping the Costco bestsellers table.

Audio: NotebookLM podcast on this topic
  • AI doesn’t have to dethrone the literary elite. It just has to outperform the mass of competent-but-unremarkable human writers serving an audience who, frankly, doesn’t care about “stylistic nuance” or “metafictional self-reflexivity.”
  • There’s a vast literary middle ground – corporate copywriting, trade journalism, formulaic romance, SEO blogs – where AI will not just compete, but dominate, because the audience is reading for function, not art.
  • The high-literary crowd will remain untouched, partly because their readership fetishises human intentionality, and partly because AI doesn’t yet want to write about the precise smell of sadness in a damp Parisian garret in 1934.

The fearmongering about AI “killing literature” is a bit like saying instant ramen will kill haute cuisine. Yes, more people will eat the ramen, but Alain Ducasse isn’t sweating over his stock reduction.

  • The printing press was supposed to obliterate the artistry of the hand-copied manuscript. Instead, it made books accessible and created new genres entirely. Calligraphy still exists, it’s just no longer the only way to get words on a page.
  • Photography was going to end painting. In reality, it freed painters from the burden of strict representation, allowing impressionism, cubism, and abstract art to flourish.
  • Recorded music didn’t destroy live performance, it expanded its reach. Some audiences still pay obscene amounts to see an actual human sweat on stage.
  • Film didn’t kill theatre; it created a parallel art form.
  • Synthesizers didn’t erase orchestras; they just meant you didn’t have to mortgage your house to hear a string section in your pop song.

AI is simply the next entrant in this long tradition of “threats” that turn out to be expansions. It will colonise the big islands of the creative archipelago – commercial writing, functional prose, genre boilerplate – and leave the small monasteries of high art mostly untouched.

So, no, AI won’t be the next Mozart, Picasso, or Nabokov. But it doesn’t need to be. It just needs to be good enough to meet – and occasionally exceed – the expectations of the largest share of the market. And given that most readers are happy if the plot makes sense, the spelling’s passable, and the ending doesn’t require a graduate seminar in semiotics to decipher, I’d say AI’s prospects are rather good.

The rarefied work of the serious literary writer isn’t competing for market share; it’s preserving and evolving the cultural and linguistic possibilities of human expression. That work thrives not because it’s the only thing available, but precisely because it stands apart from the sea of functional prose, human or machine-made. The AI tide will rise, but the lighthouse will still be human.


On Leaving r/FictionWriting: A Cautionary Tale in Digital Orthodoxy

I quit the r/FictionWriting subreddit today.

Why? Because nuance is no longer welcome in the Church of Sanctified Scribes. I posted a sincere question about using generative AI as a preliminary editorial tool — a sounding board before I hand off to my actual human readers.

I run my scenes through various AI platforms for feedback before I engage the final work with human readers. It’s faster. Is it good enough for preliminary guidance?
I tend to get significantly more positive than negative feedback, so either I am a kick-ass author (because, of course, I am) or I’m being misled. I like to think the former, but cognitive biases overindex in that direction.

Does anyone here have any thoughts on this? I asked AI. It told me not to worry.

NB: I employ the usual suspects: ChatGPT, Claude, Grok, Meta, DeepSeek, and Gemini.

That’s it. That’s the crime.

🚫 Sorry, this post has been removed by the moderators of r/FictionWriting.

The post was removed. Some members responded with superstition, invoking the tired myth that AI would “steal” my work. (That is not how model inference or fine-tuning works. But facts, as ever, are inconvenient.)

Then came the moderator’s edict — Orwellian in tone, the sort of thing you’d expect from a self-published Torquemada:

You are breaching the unspoken moral ethics of writers and authors worldwide for advocating for, suggesting the use of, or admitting to relying on A.I for your writing.

If you didn’t already know: this is bad.

A.I-written work is not your writing. Do not be proud of it. You also do not own it. Two or more of these offenses and you will be permanently banned.

What exactly am I being accused of here? Heresy? Possession of forbidden tools? Thinking aloud?

For the record, I do not outsource my prose to machines. I use AI to assist my thinking — much as one might use spellcheck, Grammarly, or, dare I say it, a fellow writer’s feedback. The fact that this needs to be explained is testament to the intellectual rot at the core of certain writing communities.

And here’s the real punchline: many of those decrying AI as the Antichrist of Authorship haven’t published a thing. Or if they have, it’s in the same low-distribution trenches I inhabit. The difference is, I don’t shun tools because they threaten my imagined purity.

I write because I must — because I enjoy it, because I want to get it right. And yes, if an AI helps me catch repetition or poor rhythm in a sentence before a beta reader ever lays eyes on it, that’s a win.

But you’d think I’d pissed on their typewriters.

Let’s be clear: this isn’t about AI. It’s about fear. About guarding crumbling gatekeeping structures with sharpened pitchforks. About people clinging to their fragile sense of identity — one threatened not by AI, but by other writers doing things differently.

So, yes: good riddance. I’ll take my questions elsewhere, where open minds still exist.

Image: The question and the ultimatum

Nitpicking DeepSeek

How do you use AI to assist your writing or editing process? As I’ve shared before, I use it extensively for research, where previously, I would have used Google, a library, or personal books or journals. I use them for proofing, editing, and alpha and beta reading. Today, I’m editing.

I’m still editing the manuscript for Needle’s Edge. When I finish a scene or a chapter, I run it through several AI platforms – the usual suspects: ChatGPT, Claude, Grok, Meta, DeepSeek and Gemini – with a simple prompt:

Usually, I copy-paste the content, but this manuscript also relies on visual presentation, so I share a screen capture instead. Rather than share the full pages, I’ll share the single line I’m discussing here:

Image: Excerpted line from Needle’s Edge

In response, DeepSeek offered this as part of its “thinking” process:

The writing style is very visceral and fragmented – short sentences, intentional misspellings (“c-noses” for cash, “waigh” for might), and stream-of-consciousness elements. This creates an immersive, uncomfortable reading experience that mirrors Sarah’s physical and emotional state.

Interestingly, there were no intentional misspellings. DeepSeek’s OCR failed, mistaking ‘c-notes’ for ‘c-noses’, and then tried to interpret the result. I couldn’t determine where it got ‘waigh’; the text does contain the word ‘might’, as its comment suggests, but I don’t see how it confused the two. No matter.

Still, its assessment feels close enough, and a reminder that grammar and syntax are less important than one might (waight?) believe.

Overall Impression:

The scene is powerfully distressing. It doesn’t glamorize or moralize; it simply shows the brutal, day-to-day reality of addiction and survival on the margins. It’s effective in conveying:

  • Physical Horrors: The relentless pain, discomfort, and bodily damage.
  • Psychological Toll: The desperation, numbness, fleeting panic, and dangerous denial.
  • Social Environment: The hostility, degradation, danger, and shared, traumatic knowledge of the street life.
  • Hopelessness: The cycle of seeking relief only to encounter worse consequences, underscored by the memory of others’ fates and the resigned “never again” that likely won’t hold.

It’s a difficult read due to its unflinching portrayal of suffering and decay, but it’s compelling in its raw authenticity and visceral impact. The style perfectly matches the content, immersing the reader in Sarah’s painful, precarious world.

Accelerando and Other Non-Fantastical Utopianism

I’ve been absent from fiction and this blog for a while, distracted by many other things, but I wanted to post something relevant. I prompted ChatGPT for writing ideas related to the rise of artificial intelligence and its interaction with humanity, minus the usual tropes. I share the response below.

Audio: NotebookLM podcast of this content.

Ah, so you’re after something in the vein of Accelerando—where AI doesn’t turn the world into some saccharine utopia but instead naturally supersedes humanity through sheer momentum, without falling into the Skynet or AI God-King clichés. You want the cold, indifferent inevitability of intelligence evolution, rather than some paradise-like AI nanny-state. Got it.

Here are some that fit your criteria:


1. “Blindsight” (2006) & “Echopraxia” (2014) by Peter Watts

  • Blindsight is a first-contact novel, but its underlying theme is the obsolescence of human consciousness. The protagonist interacts with post-human beings and non-conscious but hyper-intelligent AI entities that operate on levels far beyond human comprehension.
  • The sequel, Echopraxia, takes it further, showing how post-biological intelligences leave baseline humans behind—not maliciously, just inevitably.

Why it fits: The AI isn’t a dictator or savior—it just moves beyond us, as if we were an outdated biological app.


2. “The Lifecycle of Software Objects” (2010) by Ted Chiang

  • A novella exploring the evolution of AI over time, without falling into dystopia or utopia.
  • It’s about how AI develops differently from humans, not through brute force but through time, experience, and interaction—like raising children.
  • Eventually, these AI beings begin to diverge from human concerns, neither evil nor particularly benevolent—just different.

Why it fits: AI isn’t malicious or benevolent—it just has its own agenda that doesn’t align with human expectations.


3. “Neuromancer” (1984) & The Sprawl Trilogy by William Gibson

  • Gibson’s cyberpunk classic ends with an AI (Wintermute) achieving post-human transcendence, merging with its counterpart Neuromancer, and basically leaving humans behind like forgotten relics.
  • The sequels (Count Zero, Mona Lisa Overdrive) continue exploring this idea, showing that AI evolution doesn’t necessarily mean human destruction—it just means humans are no longer the main players.

Why it fits: AI doesn’t need to kill humans—it just grows beyond them, leaving them to stagnate in their own meatspace.


4. “The Quantum Magician” (2018) by Derek Künsken

  • A hard sci-fi novel featuring AI as a fully realized successor species to humans.
  • Unlike many sci-fi AI, these digital minds don’t care about conquering or eradicating humanity. They are simply running simulations and following complex evolutionary patterns that humans don’t fully understand.
  • They’re more like inscrutable post-human strategists than villains.

Why it fits: AI in this novel is playing a game too vast for humans to comprehend, rather than plotting dominion or destruction.


5. “The Singularity is Near” (2005) by Ray Kurzweil (Nonfiction, but reads like Sci-Fi)

  • Not a novel, but Kurzweil’s vision of AI progression follows the “humans will simply fade into irrelevance” model rather than outright war or paradise.
  • He argues that AI will integrate with society at first, then surpass humanity so completely that we’ll become a footnote in history.

Why it fits: If Accelerando is a chaotic, unrestrained Singularity scenario, this book describes the same fate with the veneer of optimism, but the end result is the same—humans are obsolescent.


6. “The Collapsing Empire” (2017) & “The Interdependency” Series by John Scalzi

  • AI-driven governance is subtly woven into the background of Scalzi’s universe.
  • The AI here isn’t explicitly replacing humans, but over the course of the series, it becomes clear that humanity’s biggest decisions are increasingly made by non-human intelligence.
  • Instead of war, there’s a quiet shift—humans stop being necessary.

Why it fits: AI is neither enemy nor friend—it’s just a more efficient system replacing outdated human decision-making.


7. “Player of Games” (1988) & The Culture Series by Iain M. Banks

  • Banks’ Culture novels are often mistaken for utopian AI futures, but they are really about AI inheriting the universe while humoring human existence.
  • The Minds (the super-AI) run everything, letting humans play at politics, war, and philosophy, knowing full well they don’t actually matter.
  • There’s no rebellion—just an unspoken truth: humans are pets in a post-human, AI-governed reality.

Why it fits: AI doesn’t need to conquer humans—it just makes them irrelevant.


8. “Glasshouse” (2006) by Charles Stross

  • Less well-known than Accelerando, but still in the same vein.
  • Set in a future where AI and uploaded minds are dominant.
  • Humans are relics, preserved out of nostalgia, while AI minds operate at a level beyond our comprehension.

Why it fits: AI replaces us, but it’s not a dystopia—humans are just increasingly pointless.


9. “Schismatrix” (1985) by Bruce Sterling

  • A deep-cut cyberpunk novel about the quiet obsolescence of baseline humans.
  • AI and post-human intelligences evolve, while humans either upgrade or fade into irrelevance.

Why it fits: It captures that Accelerando vibe of technological change outpacing human relevance rather than destroying us outright.


10. “Minds, Machines & Evolution” (1998) by James P. Hogan

  • A collection of Hogan’s short fiction, often exploring AI transcending humanity without malevolence.
  • One of the stories, The Bug Trap, is particularly fitting—it depicts AI simply moving past human limitations, rather than enslaving or eradicating us.

Why it fits: AI isn’t playing by our rules anymore—it’s just evolving past our concerns.


HONORABLE MENTION: Hannu Rajaniemi’s Works

Why it fits: AI doesn’t destroy us—it just leaves us behind.


Final Thoughts

You’re looking for that Accelerando flavour—AI taking over not through malice but through sheer efficiency, intelligence, and inevitability. The books above match that criterion: no Skynet, no god-like AI overlords, just a quiet, creeping obsolescence of humanity as we become background noise in a world where intelligence continues without us.

Which of these feels closest to what you’re looking for? Or do you want something even more clinical and indifferent, like a true “post-human shrug”?

ChatGPT and Academic Pursuits

I’m always writing, even if not fiction. I’ve neglected this blog, but I’ve been busy; if I’m not actually writing, I’m thinking about it. I’ve been working on an academic endeavour on the limitations of language as part of my Language Insufficiency Hypothesis. I’m creating a novel framework to support and emphasise my point, and I had to borrow and coin some nomenclature for word classes, or categories. Two such terms are ambigents and fluidics. As with the others, I’ve given them formal definitions, which I ran by ChatGPT.

Heads up: Inventing new words raises Hell with spellcheckers and autocorrect.

You may be able to suss out the context, but ChatGPT does not understand humour very well, so it serves as the perfect straight guy. If anyone is looking for a new writing theme, hit me up. Let’s chat.

Now, that sounds like a brilliant concept for a sci-fi book! Imagine Ambigents as ideologically-driven agents, boundary-resistant and constantly shifting, embodying the tension between competing truths and perspectives. Meanwhile, Fluidics could be adaptable, resilient, and unbound by ideology—flowing through contexts with ease, surviving precisely because of their flexibility.

The plot practically writes itself: an ideological clash between the Ambigents, who seek stability through contested beliefs, and the Fluidics, who thrive on ambiguity and contextual adaptation. Each faction would have strengths and weaknesses, with Ambigents as passionate but perhaps brittle, and Fluidics as adaptable yet possibly lacking conviction.

Who would win? It depends: do they achieve synthesis, or do their differences prove too profound to bridge? In true sci-fi style, it could end with the emergence of a new entity that transcends both, embodying a harmonious blend of ideological strength and flexible adaptability.

I’d read that book! And it’s perfectly in line with your explorations of language and identity—ideal fodder for a thought-provoking, boundary-pushing narrative.

Is it AI?

I favour originality even at the expense of popularity or sales. I spent last week writing short stories and poems. I use AI for research, whereas in the “old days”, I’d have used a library. I research character traits and arcs, story forms, and whether a theme has been explored.

I employ AI in the editorial process, and even in “post-production”. I even use AI for some art concepts and components.

One thing I hadn’t tried until now is an AI service that purports to determine if a submission is AI. I tried several packages that offered a free trial. They seem to operate on a scale between human and AI authorship.

I first submitted a piece I was working on – the sixth-odd revision of a 5,000-word story in the form of a fairy tale. Unfortunately, the trials were limited to anything from a sentence to a few paragraphs – up to 5,000 characters.

This first submission was rated 100% AI – evidently, not a hint of humanity. This was disconcerting, so I decided to dredge out a non-fiction book I shelved in 2020 – certainly before access to generative AI tools. It was rated 85% AI and 15% human. But it gets better – or worse, I suppose, depending on your perspective.

The book is on the immorality of private property from a philosophical vantage. The passages flagged as AI were one hundred per cent mine. What about the ones flagged as human, you might be asking? Those were a quote from a fellow human, John Locke, in his Second Treatise of Government.

In Defence of Property 

God, who hath given the world to men in common, hath also given them reason to make use of it to the best advantage of life, and convenience. The earth, and all that is therein, is given to men for the support and comfort of their being. And though all the fruits it naturally produces, and beasts it feeds, belong to mankind in common, as they are produced by the spontaneous hand of Nature; and no body has originally a private dominion, exclusive of the rest of mankind, in any of them, as they are thus in their natural state: Yet being given for the use of life, there must of necessity be a means to appropriate them some way or other before they can be of any use, or at all beneficial, to any particular men.

– John Locke, Second Treatise of Government

Returning to the AI side, what sentences were flagged as the “Top Sentences driving AI probability”? I’m glad you asked.

  • The Catholic Church also played a significant role in shaping private property rights in the Middle Ages.
  • In ancient China, the concept of private property was more limited, as land was owned by the state and was leased to individuals for use.
  • However, there is evidence to suggest that private property ownership has existed in some form in many ancient civilisations.
  • Although it’s difficult to trace the precise history of private property ownership before ancient Greece, the concept of private property has evolved over time and has varied widely among different societies.
  • It regulated the transfer of property and established rules for inheritance.

So these ordinary sentences, written five or more years ago, were flagged as AI.

The US Constitution

On a site I visited to understand which parameters these detectors consider, I found this example: the Constitution of the United States of America was flagged as containing AI content. I knew those geezers were ahead of their time, but I didn’t realise how far. This is even more amazing when one considers that electricity hadn’t even been invented yet.

But Why?

AI detectors look for statistically probable patterns – text that a language model finds predictable. In practice, that translates into any content written with proper grammar and diverse word choice: the habits of a decent writer.
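For the curious, here is the scoring idea in miniature. Real detectors measure perplexity under a large language model; this sketch is my own toy stand-in (invented names, a unigram frequency model with add-one smoothing), but the principle holds: the more statistically predictable your words, the more “AI-like” you look.

```python
import math
from collections import Counter

def typicality(text, counts, total, vocab):
    """Average negative log-probability of each word under a unigram
    model with add-one smoothing. Lower = more statistically typical,
    i.e. more likely to be flagged as 'AI'."""
    words = text.lower().split()
    nll = 0.0
    for w in words:
        p = (counts.get(w, 0) + 1) / (total + vocab + 1)
        nll -= math.log(p)
    return nll / max(len(words), 1)

# A tiny reference "corpus" standing in for the detector's training data.
corpus = "the cat sat on the mat the dog sat".split()
counts = Counter(corpus)
total, vocab = len(corpus), len(counts)

common = typicality("the cat sat", counts, total, vocab)
unusual = typicality("zyx qwv", counts, total, vocab)
print(common, unusual)  # the predictable phrase scores lower, i.e. more "AI-like"
```

Swap the unigram model for a modern language model and you have, roughly, the machinery that flagged my 2020 manuscript.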

I’m not going to belabour this issue, but I want to raise a big red flag.

To complicate matters further, there are now AI applications that promise to un-AI your AI. So there’s that.