
NSFW AI chatbots

ktez

Well-Known Member
Feb 15, 2020
1,338
1,676
328
I'd say you must direct the story. I have tried every which way, including with custom character cards, to create a situation where the bot is to be spontaneous, to forward the plot in unexpected ways, but it all devolves into pleasing the user, and you end up directing the show anyways... which is meh.

When a major AI LLM can't write a novel storyline, you know we are a long, long way from 'being there'.
I mean the "pleasing the user" thing is real, that's RLHF doing its job. Models get trained to be agreeable and it shows. You're fighting against the training itself there.

But "can't write a novel = long way from being there" is a weird metric. Most humans can't write a coherent novel either. That's a specific skill, not the benchmark for general intelligence or whatever.

Also character cards and roleplay setups are kind of the worst case for this. You're literally asking it to perform for you. Try giving it actual constraints or conflict instead of hoping it'll just "surprise" you. Spontaneity from a system optimized to make you happy is always going to result in you being frustrated.
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,876
15,513
864
Kinda wrong. "Just pattern matching" is misleading. Human brains are also "just" electrochemical signals, but nobody claims that means you can predict what someone will say next.

The creative writing thing is partially true for really long stuff but "absolutely has to direct everything" is cope. Plenty of people get decent output without hand-holding every scene.

Sounds like someone who tried chatbots once and decided they understood the whole technology.
What kinda nonsense you talking about? LLMs literally work, on a fundamental level, by fetching the next most likely token based on various factors. These factors include things users can control and understand. You can set it so that it returns much more chaotic and random output, or sticks to the high-probability stuff. If you say a thing or lead the LLM in a story direction, you can be reasonably certain about how it will respond to that.

If you don't practice any context engineering, the story will be truly ass. If you let the LLM dictate the story direction and only react to whatever direction it takes you, it will be complete ass. It can work in complete sandbox narratives or very short and simple scenes, but not if you want any coherent story.
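The "chaotic vs. high-probability" knob described above is sampling temperature, usually combined with a top-k cutoff. A minimal sketch of how those two controls shape next-token choice, assuming you already have raw logits from some model (the function name and shapes here are illustrative, not any specific library's API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None):
    """Pick a token id from raw logits.

    temperature < 1.0 sharpens the distribution (sticks to likely tokens);
    temperature > 1.0 flattens it (more chaotic output).
    top_k, if set, restricts sampling to the k most likely tokens.
    """
    # Scale logits by temperature before the softmax.
    scaled = [l / temperature for l in logits]

    # Sort candidates by score and optionally keep only the top k.
    indexed = sorted(enumerate(scaled), key=lambda p: p[1], reverse=True)
    if top_k is not None:
        indexed = indexed[:top_k]

    # Softmax over the remaining candidates (subtract max for stability).
    m = max(v for _, v in indexed)
    exps = [(i, math.exp(v - m)) for i, v in indexed]
    total = sum(e for _, e in exps)
    probs = [(i, e / total) for i, e in exps]

    # Draw one token id according to the resulting distribution.
    r = random.random()
    acc = 0.0
    for i, p in probs:
        acc += p
        if r < acc:
            return i
    return probs[-1][0]
```

With `temperature` near zero or `top_k=1` this collapses to always picking the most likely token, which is why low-temperature output feels predictable and safe.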
 

tretch95

Well-Known Member
Nov 5, 2022
1,433
2,711
387
I'd say you must direct the story. I have tried every which way, including with custom character cards, to create a situation where the bot is to be spontaneous, to forward the plot in unexpected ways, but it all devolves into pleasing the user, and you end up directing the show anyways... which is meh.

When a major AI LLM can't write a novel storyline, you know we are a long, long way from 'being there'.
Sometimes it's also interesting to just use the AI as inspiration, or as atmospheric filler.


For example, i designed one story which starts a bit like the old "Patronus" game.

Orphaned Roman patrician man (25yo) in the Roman Empire in 72 AD: he inherited debts from his assassinated father, he and his sister are still unmarried, and the MC should somehow restore the splendor of his House.


One of the attempts, the AI started the first day by having the sister wake you up:

- "There's this letter by Senator Servilius. He invites us to his banquet tonight."

And i'm like... ah, kk. Spinning a story around the fat old perverted Senator, who's a notorious schemer trying to pull the MC into his machinations in the higher circles.



The problem is just that the AI is incapable of dealing with abstractions.

So the AI writes an event where the Senator wants you to steal a ledger, to prove a merchant guilty of funding the current Emperor's enemies during the civil war that preceded the story (which is briefly mentioned in my game setup). Obviously he could send a random goon, but he has his reasons to involve you, which we don't know yet.

That's actually brilliant, and right on topic. Problem is just that this already brings several layers of abstraction.


The MC, his sister, the two house slaves, and their villa are the main layer.
Then there's the Senator's villa with the banquet, where he further offers you a task, which takes place in yet another location. In between, time has to pass adequately. Not to mention the Emperor's role in the background, and just about every other element that makes for a deep story.


Well, and before you know it, everyone in your villa is talking about reading the ledger you haven't even stolen yet. People know about things they weren't present for. You haven't even gone to bed, and your slave is serving breakfast. And so on.

So it all quickly falls apart, unless you direct every step yourself.
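The continuity failures described here (characters knowing things they weren't there for, time jumping around) are exactly what the "context engineering" mentioned earlier tries to patch: keep an explicit world state outside the model and re-inject it into every prompt, so the model is reminded each turn who knows what and when it is. A minimal sketch, with all names and the `[STATE]`/`[RULE]` prompt convention being hypothetical, not any tool's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Explicit story state tracked outside the model."""
    time_of_day: str = "morning"
    location: str = "the MC's villa"
    # Facts each character actually knows. Characters absent from a
    # scene never get facts added, so the model can be reminded
    # every turn who was present for what.
    knowledge: dict = field(default_factory=dict)

    def learn(self, character, fact):
        self.knowledge.setdefault(character, []).append(fact)

    def as_prompt_block(self):
        """Render the state as text to prepend to every generation request."""
        lines = [f"[STATE] Time: {self.time_of_day}. Location: {self.location}."]
        for char, facts in self.knowledge.items():
            lines.append(f"[STATE] {char} knows: {'; '.join(facts)}")
        lines.append("[RULE] Characters may only act on facts listed above.")
        return "\n".join(lines)

state = WorldState()
# Only the MC attended the banquet, so only the MC learns this.
state.learn("MC", "Senator Servilius wants the ledger stolen")
prompt = state.as_prompt_block() + "\n\nContinue the scene."
```

It doesn't make the model understand the abstractions, but it stops the silent drift: the sister can't gossip about a ledger that never entered her `knowledge` list, and breakfast can't be served while the state still says it's night.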
 

tretch95

Well-Known Member
Nov 5, 2022
1,433
2,711
387
I mean the "pleasing the user" thing is real, that's RLHF doing its job. Models get trained to be agreeable and it shows. You're fighting against the training itself there.

But "can't write a novel = long way from being there" is a weird metric. Most humans can't write a coherent novel either. That's a specific skill, not the benchmark for general intelligence or whatever.

Also character cards and roleplay setups are kind of the worst case for this. You're literally asking it to perform for you. Try giving it actual constraints or conflict instead of hoping it'll just "surprise" you. Spontaneity from a system optimized to make you happy is always going to result in you being frustrated.
You'd be surprised what the AI-RPG has been capable of.
Well, that was with the old Llama AI, which was trained on lots of actual literature. Some of my personal highlights:




> Young orphaned Danish man in Hedeby at the rise of the Viking Age, with lots of research to make it authentic
(adventure, historical)

> Black "Loverman" in West Africa, seducing and fucking white neglected women in a resort for money and gifts
(social drama actually)

> 18yo English man, relocating with his family from London to Jamaica, following the Great Fire of London in 1666
(adventure + historical drama with ethical conflicts about piracy, slavery etc)

> Fallout™ series RPG, with Shishkebab and Power Armor and everything you'd expect
(the AI served Radroach for breakfast)

> Harry Potter and the... no wait, this was shit because i hate HP.
(...But the AI really excelled at knowing everything about it)

> English 1840s Royal Navy soldier getting shipwrecked at an African coast, finding an all-female tribe
(adventure, mystery)

> Japanese Hikikomori and AVN producer in Tokyo, finding a girl who ran away from her violent father.
The protagonist - victim of abuse in his own youth - shares a meal and lets the girl rest unharmed in his tiny apartment.
Meanwhile, the girl's unwarranted trust and vulnerability have him slowly discovering human emotions within himself, despite his practically autistic obsession with online trolling and review manipulation. Responsibility gradually replaces his entrenched social isolation.
The uneven couple flees Tokyo to start a new life in Japan's beautiful countryside under fake identities, in an attempt to hide her from her parents and the police.
(transformative dramedy, wholesome as you'd expect:cry::love:)
 

ionogo

Newbie
Feb 13, 2024
51
43
53
I use AI daily, professionally. Probably hundreds of models and finetunes by now: from in-depth analysis of millions of transactions within complex systems, to a coding sidekick for limited tasks, multi-model analysis, etc. I've also trained some models for data analysis, and by this point I'm employable as a 'prompting consultant'. I just consider AI principally a sidekick accelerant under close supervision, and an idiot-savant regurgitator of rote knowledge.

The big issue is that, despite its superior recall across multiple disciplines compared to humans, it isn't, and I don't think it ever will be, 'embodied', which is for now an insurmountable obstacle for AGI (which I am glad for). It also has zero judgement, which is largely a consequence of not being embodied: concern for self, the need to coexist with other embodied beings over the long haul, reciprocity, morals, values, personal motivations, a hierarchy of needs manifest in its own embodiment, emotions, and the 'what for' about all of it. These are concepts a model can spew prose about from training data, but cannot comprehend.

Anyway, these things combine to make repeated discussion of a topic devolve into repetitive, flowery tropes. You can manipulate some of the output using prompting and model parameters, but once you've seen what a model can produce, you've seen it, particularly around specific subject matter. So yeah, we're a long way from AI chatbots being more than a shallow novelty.