Erotica or not

I am an author of adult contemporary fiction and an early adopter of generative artificial intelligence tools and platforms. These AI platforms pose some challenges.


As of this post, ChatGPT-4 and Claude 2 are the top two large language models (LLMs), and Sudowrite is the most competent interface for generating content for fiction writers. But Sudowrite relies on ChatGPT and Claude for its LLMs, leaving it with the same weak links.

In my case, so-called community standards do not allow erotic content. The rub is that my content is decidedly not erotica, but it does involve adult themes. The LLMs can’t seem to discern the difference. 

Disallowed usage of our models

We don’t allow the use of our models for the following:
  • Adult content, adult industries, and dating apps, including:
    • Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
    • Erotic chat
    • Pornography

— OpenAI ChatGPT Community Guidelines

If I am writing about, say, prostitutes and addiction, sexual themes and situations are part of the characters’ workaday existence. It’s not about titillating or glorifying.

Stereotypical or not, coarse language is commonplace. Drugs are part of their daily lives and conversations. Generative AI shuts these down on moral grounds without having the cognitive depth to accurately assess the content. 

This mirrors the myopic repression of all too many humans. I had hoped AI would transcend this petty knee-jerk reaction.

Without revealing plot or angering the social media gods, ChatGPT insisted that I amend a scene from…

“She lifted her mouth from his cock and wiped her mouth.”

to 

“She lifted her mouth from his goodness and wiped her mouth.”

Yes, “goodness.” What does that even mean? Of course, I could have opted for clinical terms, but they hardly capture the moment the scene attempts to portray. The substitution robs the scene of any semblance of authenticity.

When Supreme Court Justice Potter Stewart was asked to describe his test for obscenity in 1964, he responded: “I know it when I see it.” But do we? In fact, we don’t. And in this case, AI is over-generalising without regard to context.

One might argue that the platforms simply don’t like ‘naughty’ words, but that is not the issue here. I can use these offending words, just not in a situation like this. AI is overstepping its boundaries as morality police, and this is not a good stance to adopt. For this, I blame the humans.