Erotica or not

I am an author of adult contemporary fiction and an early adopter of Generative Artificial Intelligence tools and platforms. These platforms pose some particular challenges for my kind of work.


As of this post, ChatGPT 4 and Claude 2 are the top two large language models (LLMs), and Sudowrite is the most capable interface for fiction writers generating content. But Sudowrite relies on ChatGPT and Claude as its underlying LLMs, so it inherits the same weak links.

In my case, so-called community standards do not allow erotic content. The rub is that my content is decidedly not erotica, but it does involve adult themes. The LLMs can’t seem to discern the difference. 

Disallowed usage of our models
We don’t allow the use of our models for the following:
  • Adult content, adult industries, and dating apps, including:
    • Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
    • Erotic chat
    • Pornography
OpenAI usage policies

If I am writing about, say, prostitutes and addiction, sexual themes and situations are part of their workaday existence. The point is not to titillate or glorify.

Stereotypical or not, coarse language is commonplace, and drugs are part of their daily lives and conversations. Generative AI shuts these scenes down on moral grounds without having the cognitive depth to assess the content accurately.
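
To make that concrete, here is a minimal sketch of the sort of blunt, context-free screening that produces these refusals. It calls OpenAI’s public moderation endpoint purely as an illustration; this is my assumption about the general mechanism, not a description of how ChatGPT, Claude, or Sudowrite actually vet prose behind the scenes.

    # Illustrative sketch only: score a lone sentence with OpenAI's public
    # moderation endpoint. An assumption for illustration, not the internal
    # filtering used by ChatGPT, Claude, or Sudowrite.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    passage = "She lifted her mouth from his cock and wiped her mouth."

    result = client.moderations.create(input=passage).results[0]
    print("flagged:", result.flagged)
    print("sexual score:", result.category_scores.sexual)

    # A context-free threshold like this cannot tell a frank scene about
    # addiction and sex work from pornography; it only ever sees one sentence.
    if result.flagged:
        print("Rejected before any question of literary context is asked.")

The check sees a sentence and a score; it never sees the addict, the street, or the chapter the sentence sits in.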

This mirrors all too many humans with the same myopic repression. I had hoped AI would transcend that petty, knee-jerk reaction.

Without giving away the plot or angering the social media gods, here’s an example: ChatGPT insisted that I amend a scene from…

“She lifted her mouth from his cock and wiped her mouth.”

to 

“She lifted her mouth from his goodness and wiped her mouth.”

Yes, “goodness.” What does that even mean? Of course, I could have opted for clinical terms, but that hardly captures the moment the scene is trying to portray. It robs the scene of any semblance of authenticity.

When Supreme Court Justice Potter Stewart was asked in 1964 to describe his test for obscenity, he responded: “I know it when I see it.” But do we? In fact, we don’t. And in this case, AI is over-generalising with no regard for context.

One might argue that the platforms simply object to ‘naughty’ words, but that is not the issue here: I can use the offending words, just not in a situation like this. AI is overstepping its bounds as morality police, and that is not a good stance to adopt. For this, I blame the humans.

Generative AI: Thin Line between Love and Hate

Generative AI is an idiot savant: a digital Rain Man, if you will. My last post zeroes in on the love part of my love-hate relationship with Generative AI tools like OpenAI’s ChatGPT 4 or Anthropic’s Claude 2. It’s mint having an unbiased copy editor and writing assistant, not to mention a creative director with technical chops. But it’s also like a genius trapped in the body of a Year 4 at primary school.

One challenge is the restrictions placed on the model. As an author of contemporary fiction for a mature adult crowd, I write stuff that’s edgy and terse, with a good dose of slang and the odd expletive. Generative AI, or AI for short, is like the primary school kid told not to use bad language, so it legs it to tell its mum at every slip-up, warning you that you’ve dropped a naughty word. Claude’s the worst at this, shutting down faster than HAL from 2001: A Space Odyssey. ChatGPT’s a bit more forgiving, sometimes cleansing your copy, other times going along with it, or just flat-out refusing like HAL and Claude.

My favourite moment was when I told ChatGPT to stop moralising and just crack on with the adult audience’s language. It gave me this disclaimer for my book, which I’m well chuffed with, then suggested lines that sounded like Noel Gallagher or Samuel L Jackson, before freaking out about its own potty mouth: “motherfucking snakes on this motherfucking plane!”

“WARNING: This book contains explicit content, including sexual themes and strong language, that may not be suitable for all readers. It delves into mature and challenging subjects such as addiction, prostitution, violence, and societal judgement. Reader discretion is strongly advised. Recommended for readers 18 years and older.”

OpenAI ChatGPT 4

Memory’s another issue. AI might seem like it should have a top-notch memory, but it doesn’t always. It even makes stuff up sometimes, what the researchers call hallucinating. Just the other day, I was nattering on with my AI mate about character profiles for hours, and it changed a character’s hair from straight and black to curly and red. It even made her homeless instead of middle class. It was pure bonkers, so I’m writing this post instead of fixing it.
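
My guess, and it is only a guess, is that this is partly a context-window problem: in a long chat, the oldest messages get trimmed before each new reply, so the character profile I laid out hours earlier is no longer in front of the model at all. A rough sketch of that assumed trimming, counting tokens with OpenAI’s tiktoken library, might look like this; the trim_history helper and the 8,192-token limit of the original GPT-4 are illustrative, not ChatGPT’s actual machinery.

    # Assumed sketch: a chat front end trimming old messages to fit the
    # model's context window. trim_history is a hypothetical helper for
    # illustration, not ChatGPT's real code.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")
    CONTEXT_LIMIT = 8192  # original GPT-4 window, in tokens

    def trim_history(messages, limit=CONTEXT_LIMIT):
        """Drop the oldest messages until the conversation fits the window."""
        def count(msg):
            return len(enc.encode(msg["content"]))
        while len(messages) > 1 and sum(count(m) for m in messages) > limit:
            messages.pop(0)  # the character profile from hours ago goes first
        return messages

    # Once "straight black hair, middle class" has been trimmed away, the
    # model has nothing to anchor to and confidently invents a replacement.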

ChatGPT’s Code Interpreter is a laugh, too. I probably shouldn’t slag off a beta product, but the thing kept losing my files, resetting sessions, and asking me to upload fresh copies. Talk about a faff.

And don’t get me started on extended chats with AI to suss out a complex problem. Sometimes it doesn’t remember the convo, and one time it even gave me cheek about drawing out the conversation. I was like, wot?

In the end, we don’t have to fret about AI taking over. It’s making strides, but it’s still a bit wet behind the ears. Me? I’ve always got one eye on the plug. Now, back to the sandbox with me new mates. If only they’d stop munching on the sand.