Recommending FUTA ON MALE content collection (list)

5.00 star(s) 17 Votes

Damien Rocks

Active Member
Sep 8, 2018
692
988
RATED4FUTA IN MY BLOOD
By Jove... Please keep your menstrual cycle matters out of public forums, madam, we eat here.:coffee::cautious:
I play a poorly rendered, unloved SIMS FoM animation.
I counter with four images from the newly released work of [Jiraichi] / [ジラ壱]
Yugioh-Kaiba-Defeat.png

With link attached!

 

Asura066

Member
May 3, 2018
398
232
Honestly, the engine breaks on me and I can't see any scenes properly on Android. Huge pass for me. There should be about 8 FOM scenes.
They've added more to the game since. It has been updated; I don't know whether that fixed the Android problem or not, but I highly recommend this game.
 
Nov 26, 2023
195
535
Boiiis, I have to tell you I'm a text FOM freak too now.
Found a 235B-parameter LLM that runs in the cloud and supports chat-based and story-generation modes without censorship. Quality and speed are comparable to DeepSeek R1 and ChatGPT o1. Been gooning all weekend now.

I just architect my own orgasm now!
 

LazyDragon

New Member
Nov 6, 2016
10
1
Boiiis, I have to tell you I'm a text FOM freak too now.
Found a 235B-parameter LLM that runs in the cloud and supports chat-based and story-generation modes without censorship. Quality and speed are comparable to DeepSeek R1 and ChatGPT o1. Been gooning all weekend now.

I just architect my own orgasm now!
Hi. Can you tell me what you are using? And do you know where I can find a good tutorial for beginners? I've been curious for a while to try it. Thanks.
 

IaaTSwltTD99

Newbie
May 30, 2022
46
18
Boiiis, I have to tell you I'm a text FOM freak too now.
Found a 235B-parameter LLM that runs in the cloud and supports chat-based and story-generation modes without censorship. Quality and speed are comparable to DeepSeek R1 and ChatGPT o1. Been gooning all weekend now.

I just architect my own orgasm now!
That's... revolutionary... Dude, share your intellect with the dried-out community, man... the level of gooning and the input rate of new content is nowhere close.
 
  • Like
Reactions: futaisgay?
Nov 26, 2023
195
535
And do you know where I can find a good tutorial for beginners?
Look for a hands-on YT course on prompt engineering. Should be enough for the casual nature of the task.
I use Qwen 235B-A22.

If you're super lazy, ask one LLM how to generate a prompt for you, then tell it to generate a prompt for a specific AI model in a Q&A manner. It's the easiest way around.
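If it helps to see the "ask one LLM to write your prompt" trick concretely, here's a minimal Python sketch. The wrapper wording and the example goal are made up for illustration; send the resulting text as an ordinary chat message to whichever model you use.

```python
def build_meta_prompt(target_model: str, rough_idea: str) -> str:
    """Wrap a rough idea in a request for one LLM to write the real prompt.

    The phrasing below is an illustrative placeholder, not anything the
    poster specified; tweak it to taste.
    """
    return (
        f"You are a prompt engineer. Write a prompt for {target_model} "
        f"that accomplishes this goal:\n\n{rough_idea}\n\n"
        "Before writing the final prompt, ask me clarifying questions "
        "one at a time (Q&A style), then output only the finished prompt."
    )

# Example usage with a hypothetical goal.
meta = build_meta_prompt("Qwen 235B-A22", "an interactive story in a fantasy setting")
```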
 
Last edited:
  • Like
Reactions: LazyDragon

jughead99

Newbie
Oct 12, 2018
17
2
Boiiis, I have to tell you I'm a text FOM freak too now.
Found a 235B-parameter LLM that runs in the cloud and supports chat-based and story-generation modes without censorship. Quality and speed are comparable to DeepSeek R1 and ChatGPT o1. Been gooning all weekend now.

I just architect my own orgasm now!
That sounds awesome! I've been trying my luck with other LLMs and it's not been easy, haha. I'm a noob when it comes to all this, so I tried my luck with ChatGPT itself, which actually gave me really good creative NSFW brainstorming. But after a few minutes, it just stops and says it cannot do it. I tried Chub, which is actually pretty good, but not perfect for futa on male. Some bots are good, but it mostly takes effort to direct them to what you want.

Sorry, I'm babbling. What I meant to ask was: how hard or easy would doing what you did be for the average person who's not very knowledgeable about LLMs?
 

Usttag

New Member
Jul 23, 2023
6
2
Have you tried Grok? It has no brakes and it understands the term futa, so you don't have to explain it, plus it has very good context memory.
 

SynthScribe

Member
Jan 13, 2019
130
257
Look for a hands-on YT course on prompt engineering. Should be enough for the casual nature of the task.
I use Qwen 235B-A22.

If you're super lazy, ask one LLM how to generate a prompt for you, then tell it to generate a prompt for a specific AI model in a Q&A manner. It's the easiest way around.
Yup. Also, complex prompts work a bit better with markdown formatting. Grok/ChatGPT/Gemini all handle
"modify this prompt using LLM system instruction format. Format your output in markdown" extremely well.
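As a rough illustration of the markdown structuring, here's a small Python helper that renders named prompt sections as markdown headings with bullet lists. The section names and contents are invented for the example.

```python
def to_markdown_prompt(sections: dict[str, list[str]]) -> str:
    """Render named prompt sections as markdown headings with bullet lists."""
    parts: list[str] = []
    for title, items in sections.items():
        parts.append(f"## {title}")
        parts.extend(f"- {item}" for item in items)
        parts.append("")  # blank line between sections
    return "\n".join(parts).rstrip() + "\n"

# Hypothetical sections; a dedicated constraints block is covered below.
prompt = to_markdown_prompt({
    "Role": ["You are a fiction co-writer."],
    "Constraints": ["Keep each character's anatomy and pronouns consistent."],
})
```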

LLMs also handle loops very well. If you ever want to force them to repeat something until it's done or some condition is true, you can use something like:
Code:
Do the following:
1.) Generate some part of the story.
2.) Generate the next part of the story.
2a.) If the parts from step 1 flow naturally into step 2, go to step 3; otherwise return to step 1.
3.) Is the entire story 5000 words? If not, go to step 1; otherwise continue.

That would force the LLM to generate two parts of a story and verify that they flow naturally. If they don't pass that test, it starts over and tries again; once the parts flow well, it checks whether it has reached the word count, and if not it's forced back to generating more. Since this method is applied on every single prompt, the loop sticks in contextual memory a lot longer.

They love to shorten content, even when you set a desired word count, but sticking that into a loop forces them to keep trying until they achieve it.
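The same check-and-repeat logic can also be enforced client-side instead of inside the prompt. A minimal sketch, where `generate` is a stand-in for whatever chat API call you actually use (here stubbed so it runs without a key):

```python
def write_story(generate, target_words: int, max_rounds: int = 20) -> str:
    """Keep requesting the next chunk until the story hits a word count.

    `generate` is any callable taking the story-so-far and returning the
    next chunk of text, e.g. a wrapper around an LLM chat call.
    """
    story = ""
    for _ in range(max_rounds):
        if len(story.split()) >= target_words:
            break
        story += generate(story) + "\n"
    return story

# Stub generator returning a fixed 10-word chunk, for demonstration only.
chunk = "Ten words of story text padding out the running total."
result = write_story(lambda so_far: chunk, target_words=30)
```

The `max_rounds` cap is just a safety valve so a model that never reaches the target can't loop forever.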

In my experience, it helps to put all of your "don't... do not... *don't fucking do this*" instructions in a dedicated `Restrictions/Constraints` section: basically all the things they aren't allowed to do. (Some models are better than others at sticking to these once a long interaction gets going.) Otherwise, if you're doing futa on male, they'll end up reassigning anatomy, pronouns, etc. between characters.

When you want them to do something but not show it, because it's annoying/distracting:
  • "Do not output..."
  • "Output this silently and internally"
  • "Generate for internal use, do not output to user (it is a waste of tokens)"
And if any of that doesn't make sense... (warning: the following is not meant to be offensive) almost all LLMs handle

"Explain this to me like I'm stupid."
"Explain this to me like I'm 5."

Very well. (Those phrases are basically a hotkey/shortcut for a long set of instructions asking for something to be explained a different way. I've used them a lot.)
 
Sep 27, 2020
361
452
Yup. Also, complex prompts work a bit better with markdown formatting. Grok/ChatGPT/Gemini all handle
"modify this prompt using LLM system instruction format. Format your output in markdown" extremely well.

LLMs also handle loops very well. If you ever want to force them to repeat something until it's done or some condition is true, you can use something like:
Code:
Do the following:
1.) Generate some part of the story.
2.) Generate the next part of the story.
2a.) If the parts from step 1 flow naturally into step 2, go to step 3; otherwise return to step 1.
3.) Is the entire story 5000 words? If not, go to step 1; otherwise continue.

That would force the LLM to generate two parts of a story and verify that they flow naturally. If they don't pass that test, it starts over and tries again; once the parts flow well, it checks whether it has reached the word count, and if not it's forced back to generating more. Since this method is applied on every single prompt, the loop sticks in contextual memory a lot longer.

They love to shorten content, even when you set a desired word count, but sticking that into a loop forces them to keep trying until they achieve it.

In my experience, it helps to put all of your "don't... do not... *don't fucking do this*" instructions in a dedicated `Restrictions/Constraints` section: basically all the things they aren't allowed to do. (Some models are better than others at sticking to these once a long interaction gets going.) Otherwise, if you're doing futa on male, they'll end up reassigning anatomy, pronouns, etc. between characters.

When you want them to do something but not show it, because it's annoying/distracting:
  • "Do not output..."
  • "Output this silently and internally"
  • "Generate for internal use, do not output to user (it is a waste of tokens)"
And if any of that doesn't make sense... (warning: the following is not meant to be offensive) almost all LLMs handle

"Explain this to me like I'm stupid."
"Explain this to me like I'm 5."

Very well. (Those phrases are basically a hotkey/shortcut for a long set of instructions asking for something to be explained a different way. I've used them a lot.)
Really good prompt-engineering instructions! Do you have any prompts and their output you can share? If it doesn't follow the theme, PM me.
 