Yup. Also, complex prompts work a bit better with markdown formatting. Grok/ChatGPT/Gemini all handle
"modify this prompt using LLM system instruction format. Format your output in markdown" extremely well.
LLMs also handle loops very well. If you ever want to force them to repeat something until it's done or until some condition is true, you can use something like:
Code:
Do the following:
1.) Generate some part of whatever story, etc.
2.) Generate the next part of the story.
2a.) If the parts from step 1 flow naturally into step 2, go to step 3; otherwise return to step 1.
3.) Is the entire story 5000 words? If not, go to step 1; otherwise continue.
That forces the LLM to generate two parts of a story and verify that they flow naturally. If they don't pass that test, it starts over and tries again; once the parts flow well, it checks whether it's reached the word count, and if not, it's forced back to generating more. Since this method is applied to every single prompt, loops stick in contextual memory a lot longer.
LLMs love to shorten content, even when you set a desired word count, but sticking that count into a loop forces them to keep attempting until they achieve it.
In my experience, it helps to put all of your "Don't... do not... *don't fucking do this*" rules in a dedicated `Restriction/Constraint` section: basically all the things they aren't allowed to do. Some models are better than others at adhering to these once a long interaction gets going; for example, if you're doing futa on male, they'll end up re-assigning anatomy, pronouns, etc. to characters.
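A minimal sketch of what that section might look like (the heading and the specific rules here are just examples, not magic words):
Code:
## Restrictions/Constraints
- Do not change any character's established anatomy or pronouns.
- Do not shorten or summarize scenes to save space.
- Do not break character or address the user directly.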
When you want them to do something but not show it because it's annoying/distracting:
- "Do not output..."
- "Output this silently and internally"
- "Generate for internal use, do not output to user (it is a waste of tokens)"
And if any of that doesn't make sense... (warning, the following is not meant to be offensive), almost all LLMs handle
"Explain this to me like I'm stupid."
"Explain this to me like I'm 5."
very well. (Those phrases are basically a hotkey/shortcut for a long set of instructions telling it to explain something a different way.)
(I've used them A LOT.)