[Anlatan] NovelAI [all fetishes] - AI-powered storyteller

stone46

Member
Dec 2, 2016
150
390
Do any of you ever use Sudowrite? What do you guys think about it?
It's OpenAI, so it has the variety of issues that come with using any AI from them, which basically boil down to censorship of what you can write; crossing that line will result in your account being banned.

They also do, in fact, read your stories, so if privacy is an issue for you, don't use OpenAI stuff.

But if none of these are an issue for you, then it's good.
It's a strong model, but on the expensive side.
 

Virgile85

Active Member
Jan 21, 2020
679
339
I downloaded the NTR file from NTR PALADIN, and the training file from PALADIN as well. I just want text stories for passing the time. Are these two files enough for building NTR stories?
 

stone46

Member
Dec 2, 2016
150
390
I downloaded the NTR file from NTR PALADIN, and the training file from PALADIN as well. I just want text stories for passing the time. Are these two files enough for building NTR stories?
If that was a module, then yeah, you just add it to your modules on NovelAI, select it, and get to writing.

I would still recommend you use memory/author's notes along with world info to guide the AI.
 
  • Like
Reactions: BaloneyAmone

stone46

Member
Dec 2, 2016
150
390
It seems I can't access the model without having an account. My account is out of ink.
Makes sense, the free trial account is pretty limited.
And to be honest, if you plan to use this, you're gonna need a shit ton of generations, sometimes 10 or more on a single sentence.

I recommend just getting the Euterpe model sub.
Krake is decent, but a work in progress.
 
  • Like
Reactions: BaloneyAmone

naraz

New Member
Jan 8, 2023
2
1
Do any of you ever use Sudowrite? What do you guys think about it?
I'm just a beginner and bad at writing (but I can't really find good stuff for my tastes, so I'm trying this AI craze out hehe), and I've had some good success with Sudowrite. It generates really good stuff, and I think they have a deal for erotica with GPT. It's like night and day compared to NovelAI. I spend less time editing, the output stays on track for longer than a sentence, and it pulls well from your keywords and everything else you put in. The writing output is so much better too; it's like college level compared to the basic stuff. They also put in some cool things, like rough drafts and writing a whole story from an outline (I haven't tried these yet, for reasons I'll explain below). It's really, really good, but there is a huge problem with it, especially for non-professional writers who use it for text adventures rather than just as a writing aid.

The problem with the service, though, is a big one, which is why I went back to NovelAI (and I'm still debating asking for a refund). The only affordable option for non-professional writers is the middle tier. It's $20 a month if you pay annually, which isn't that bad for what it is. The problem, though, is you only get 90k words a month. That seems like a lot at first, but let me tell ya, I went through my 90k in 2 days and didn't even get close to finishing my smut sci-fi loli story lol. The next tier up only gives 300k words, which I think I'd use up in a week of good smut generation. From there you buy packs that cost about $10 per 10k words. It turns into a mobile loot-box game pretty fast imo. You could maybe get around this a little by having 3 accounts at the middle tier, which gives you more words for the money. I would actually get the top tier if they simply let you do unlimited generations and maybe knocked it down to $80-85ish a month. It's not far behind Jasper.ai in marketing to people only interested in using it to make money rather than people who just want to have fun with it.

But past the monetization, it's everything you'd want in story generation. I'm not really scared about who sees my stuff anymore, as I'm pretty sure that as long as I use Windows, nothing I do is really confidential (into conspiracy stuff btw, ADHD and all here hehe). It's really good in my experience, just limiting due to the word count thing.
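For anyone weighing the tiers, the per-word math in the post above is easy to sanity-check. The tier price and word counts below are the ones quoted in the post; the helper function is just mine for illustration:

```python
# Compare Sudowrite's subscription rate against the top-up word packs,
# using the numbers quoted above: middle tier $20/month for 90k words,
# extra packs roughly $10 per 10k words.

def cost_per_1k_words(dollars: float, words: int) -> float:
    """Dollars per 1,000 generated words."""
    return dollars / (words / 1000)

middle_tier = cost_per_1k_words(20, 90_000)  # subscription rate
word_pack = cost_per_1k_words(10, 10_000)    # top-up pack rate

print(round(middle_tier, 3))  # -> 0.222 ($/1k words)
print(word_pack)              # -> 1.0, i.e. packs cost ~4.5x the subscription rate
```

So heavy users pay roughly 4.5 times the subscription's per-word rate once they fall back on packs, which is why the word cap bites so hard.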
 

Virgile85

Active Member
Jan 21, 2020
679
339
Makes sense, the free trial account is pretty limited.
And to be honest, if you plan to use this, you're gonna need a shit ton of generations, sometimes 10 or more on a single sentence.

I recommend just getting the Euterpe model sub.
Krake is decent, but a work in progress.
Where is this sub model? Can I write NTR stories with it?
 

Otacotora

Newbie
Dec 6, 2016
85
73
I'm just a beginner and bad at writing (but I can't really find good stuff for my tastes, so I'm trying this AI craze out hehe), and I've had some good success with Sudowrite. It generates really good stuff, and I think they have a deal for erotica with GPT. It's like night and day compared to NovelAI. I spend less time editing, the output stays on track for longer than a sentence, and it pulls well from your keywords and everything else you put in. The writing output is so much better too; it's like college level compared to the basic stuff. They also put in some cool things, like rough drafts and writing a whole story from an outline (I haven't tried these yet, for reasons I'll explain below). It's really, really good, but there is a huge problem with it, especially for non-professional writers who use it for text adventures rather than just as a writing aid.

The problem with the service, though, is a big one, which is why I went back to NovelAI (and I'm still debating asking for a refund). The only affordable option for non-professional writers is the middle tier. It's $20 a month if you pay annually, which isn't that bad for what it is. The problem, though, is you only get 90k words a month. That seems like a lot at first, but let me tell ya, I went through my 90k in 2 days and didn't even get close to finishing my smut sci-fi loli story lol. The next tier up only gives 300k words, which I think I'd use up in a week of good smut generation. From there you buy packs that cost about $10 per 10k words. It turns into a mobile loot-box game pretty fast imo. You could maybe get around this a little by having 3 accounts at the middle tier, which gives you more words for the money. I would actually get the top tier if they simply let you do unlimited generations and maybe knocked it down to $80-85ish a month. It's not far behind Jasper.ai in marketing to people only interested in using it to make money rather than people who just want to have fun with it.

But past the monetization, it's everything you'd want in story generation. I'm not really scared about who sees my stuff anymore, as I'm pretty sure that as long as I use Windows, nothing I do is really confidential (into conspiracy stuff btw, ADHD and all here hehe). It's really good in my experience, just limiting due to the word count thing.
I finally tried Sudowrite myself. I chose the cheapest tier just for the sake of trying it.

So I got 30k AI words to spend. When I started writing, Sudowrite gave me 6 outcomes of 50 AI words each, and I could choose which one I wanted to put in my story.

Unfortunately, I'm a bit picky. So sometimes I would take one part from one outcome and another part from a different outcome. But most of the time I just kept re-rolling to get another 6 outcomes.

By the end of the day, out of 30k AI words, only 2.8k ended up in my story, and it's still a long way from finished.

So in my opinion, the AI outcomes weren't that good, and it was very pricey.

But then again, I probably don't know how to use Sudowrite the correct way.
 

stone46

Member
Dec 2, 2016
150
390
Where is this sub model? Can I write NTR stories with it?
Sub as in subscription.
Buy the middle-tier subscription to gain access to the Euterpe model.

But if you want a free alternative, KoboldAI exists.
It can be used either via Google Colab or on your own machine.

Running locally needs a decent machine to generate prompts at even a slow pace, and I'm unsure about the online one as I've never used it.

You won't be able to use that module you downloaded, but you can write NTR perfectly fine with KoboldAI, assuming you have moderate writing skills and a basic understanding of guiding AI.

There is also probably an NTR-focused module available in KoboldAI's wide selection.
 
  • Like
Reactions: BaloneyAmone

BaloneyAmone

Active Member
Mar 3, 2021
589
1,081
Where is this sub model? Can I write NTR stories with it?
Euterpe (available for a limited free trial to get a taste of what it can do) is the model you'll be most interested in. If you're interested in subscribing, the Scroll tier (the $15 USD/month one) is your best bet, since you get the maximum token memory for the best user experience. Krake (available in the $25 USD/month tier) isn't presently strong enough to justify coughing up the extra 10 dollars, but may be worth considering if you really want to go all the way. To compare their abilities, Euterpe will generally understand you but won't be very detailed, while Krake is vivid when it gets what you're trying to do but has a high chance of losing track (and also doesn't have modules AFAIK).
And yes, both models can do NSFW, including NTR. A shit ton of modules for Euterpe and Sigurd (a 6B model that's older and generally weaker, but has been around for a while, so it has an extensive legacy) can be found. Of interest for you: there's the specialized NTR module for Sigurd, and the C. D. E Cuckold module (basically a module trained on NTR stories from a specific web author) for Euterpe.

But if you want a free alternative, KoboldAI exists.
It can be used either via Google Colab or on your own machine.

Running locally needs a decent machine to generate prompts at even a slow pace, and I'm unsure about the online one as I've never used it.

You won't be able to use that module you downloaded, but you can write NTR perfectly fine with KoboldAI, assuming you have moderate writing skills and a basic understanding of guiding AI.

There is also probably an NTR-focused module available in KoboldAI's wide selection.
Occasional KoboldAI user here.
Currently, the TPU Colab for the 13B/20B models (equivalent to Euterpe/Krake respectively) is bugged on Google's end and nonfunctional until those nerds get around to fixing it. You can still run the GPU Colab though; the 6B models and even the 2.7B models are pretty usable, though they'll need more handholding altogether. Neither setup is as fast as servers providing a dedicated service, but they generate at a decent rate.
If you prefer a pay-as-you-go system, there's an option through RunPod that lets you rent workstation cards for running larger models. An A40/A6000 will let you run anything short of the truly gargantuan 30B+ models for less than a dollar an hour, though note that if you're doing this often enough, you may reach a point where it'd have been cheaper to just subscribe to NovelAI instead and not have to do any finagling of your own.
For proper local use, KoboldAI recommends 8GB of VRAM to run 2.7B models (most mid-range cards released in the past few years), and 16GB for the 6B models. If you don't have sufficient VRAM and can't throw in a second card to bolster it, you can move some layers onto system RAM to make up the difference, although it'll slow down generations substantially. Alternatively, there are options that let you cut down how much VRAM you need, although they will require a good bit of setup to get off the ground.
For NTR stuff, there isn't a specific model dedicated to it per se, although Erebus should have it within its repertoire of fetishes. For softprompts/modules, the KoboldAI Discord (not linked here since I feel linking a Discord that has nothing to do with the thread's topic is bad form, but it's very much within googling range) has a channel for them.
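To put rough numbers on those VRAM recommendations, here's a back-of-the-envelope sketch. The 2 bytes per parameter (fp16 weights) and the 1.5x overhead factor are my own illustrative assumptions, not KoboldAI's actual loader math, and real footprints vary by model and settings:

```python
import math

# Back-of-the-envelope VRAM estimate: fp16 weights at ~2 bytes/parameter,
# times ~1.5x for activations and framework overhead (both assumed figures).

def vram_needed_gb(params_billions: float,
                   bytes_per_param: float = 2.0,
                   overhead: float = 1.5) -> float:
    """Very rough total VRAM footprint in GB for a model of the given size."""
    return params_billions * bytes_per_param * overhead

def layers_to_offload(total_layers: int, vram_gb: float, needed_gb: float) -> int:
    """How many layers to push to system RAM when VRAM falls short,
    assuming each layer contributes evenly to the footprint."""
    if vram_gb >= needed_gb:
        return 0
    short_fraction = (needed_gb - vram_gb) / needed_gb
    return math.ceil(total_layers * short_fraction)

print(round(vram_needed_gb(2.7), 1))  # -> 8.1, close to the 8 GB guidance for 2.7B
print(vram_needed_gb(6.0))            # -> 18.0, in the ballpark of the 16 GB guidance
print(layers_to_offload(32, 8.0, vram_needed_gb(6.0)))  # -> 18 layers to RAM on an 8 GB card
```

The point of the sketch is just that the "move layers to system RAM" option scales with how far short your card falls, which is why generation slows down so much on under-spec'd machines.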
 
Last edited:

cooperdk

Engaged Member
Jul 23, 2017
3,494
5,135
The best combo for running storytellers is KoboldAI, which for chat-based storytelling can be run through TavernAI. This provides an interface much like NovelAI and a set of characters to chat with, and it is very easy to make your own. This allows for chat-based stories from SFW and mild to extreme NSFW.

I use the Pygmalion-6B model for the chat, and it also allows for regular storytelling (novel writing like NovelAI).
Another good model is koala-13B-GPTQ-4bit-128g (the koala-13B-4bit-128g.no-act-order.ooba.pt file must be used for KoboldAI).

I can write up a more specific installation guide, but basically it is just downloading KoboldAI from GitHub, then in a command prompt going to the installation directory and running install_requirements.bat, and then play.bat to run it, after copying the wanted models into the models directory (each within its own subdirectory). All .json and .model files must be downloaded, including the larger, actual model file.

The Koala model is insanely faster than e.g. Alpaca and Llama, also supports NSFW, author styles, etc., and should be run directly within KoboldAI.
KoboldAI works just like NovelAI: you set up a memory and world information, after which you start the story; you direct it and it fills in the rest in chunks.

To use TavernAI, the Pygmalion (or another) model must be loaded within KoboldAI, and in Tavern's settings the KoboldAI API should be connected.
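Since TavernAI just talks to KoboldAI's local HTTP API, you can also hit that API directly from a script. This is a hypothetical sketch: the endpoint path, default port, and payload/response fields are assumptions based on recent KoboldAI builds and may differ in your version, so check your local instance before relying on it:

```python
import json
import urllib.request

# Assumed default address of a locally running KoboldAI instance; the
# /api/v1/generate path and the payload/response shapes are assumptions too.
KOBOLD_URL = "http://127.0.0.1:5000/api/v1/generate"

def build_payload(prompt: str, max_length: int = 80) -> dict:
    """Assemble the JSON body for a single generation request."""
    return {"prompt": prompt, "max_length": max_length}

def generate(prompt: str, url: str = KOBOLD_URL) -> str:
    """POST the prompt to KoboldAI and return the generated continuation."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["results"][0]["text"]

# Usage (needs KoboldAI running with a model loaded):
#   print(generate("The tavern door creaked open, and"))
```

TavernAI does essentially this under the hood, just with character context and chat history prepended to the prompt.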

Last night, I had a chat with a submissive neko who screamed and squirted because she loved how i fisted her ass while pinching her tiny nipples, lol.

elly1.png

Elly is a chat character included with Tavern AI.

In my opinion, this setup is good enough to write stories for VNs and the like. Best of all, this is completely free (and legal).

I would never run stuff like this on rented servers or on Colab. I don't believe that they don't log what you do, and stability is also questionable.
All of the tools listed here support CPU-only generation, but you would likely need at least 32 GB of RAM on Windows or 16 GB on Linux.

The complete runtime directory after installation (for KoboldAI, TavernAI (small tool) and the two big models) is 59 GB.
For the small models, it will be 12 GB less.

KoboldAI:
TavernAI:
Koala 13B model (7.5 GB):
Koala 7B model (5.5 GB):
Pygmalion-6B model (15 GB, split in 10+5 GB files which allows it to run on 10 GB cards):
Pygmalion-2.7B model (5.5 GB for smaller GPUs):

If people want, I will write an app for automatic installation...

EDIT: I just found this addon which enables SD image creation, TTS and other things so it becomes more like NovelAI. It does require a forked, modified version of TavernAI (read the readme):
 
Last edited:

ririmudev

Member
Dec 15, 2018
304
308
Wowee, it doesn't even require a GPU? Although I recently loaded up on decent GPUs, so maybe I should toy around with this combo.
 

cooperdk

Engaged Member
Jul 23, 2017
3,494
5,135
Wowee, doesn't even require GPU? Although I recently loaded up on decent GPUs, so maybe I should toy around with this combo.
Nope, the use of CPU is configurable in settings.
You can also assign a certain amount of VRAM to the app from any of your GPUs, or none. If the model uses e.g. 32 chunks and you assign 18 chunks to one GPU, you have 14 chunks left for another card, or for system memory.

You just cannot exceed the amount of VRAM on the card, of course.

And, like all other AI projects, it prefers Nvidia, since the libraries are built for CUDA. Other GPUs need other libraries which emulate some of the AI tasks, making them a tiny bit slower.
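The chunk assignment described above is simple subtraction. As a toy illustration (the function is mine, not KoboldAI's actual loader), splitting a 32-chunk model with 18 chunks on the first GPU leaves 14 for a second card or system RAM:

```python
# Toy version of KoboldAI-style chunk assignment: whatever you give the
# first GPU, the remainder falls to the next device (or system RAM).

def split_chunks(total: int, first_gpu: int) -> tuple:
    """Return (chunks on the first GPU, chunks left for other devices)."""
    if not 0 <= first_gpu <= total:
        raise ValueError("cannot assign more chunks than the model has")
    return first_gpu, total - first_gpu

print(split_chunks(32, 18))  # -> (18, 14), the example from the post
```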
 

ririmudev

Member
Dec 15, 2018
304
308
Nope, the use of CPU is configurable in settings.
You can also assign a certain amount of VRAM to the app from any of your GPUs, or none. If the model uses e.g. 32 chunks and you assign 18 chunks to one GPU, you have 14 chunks left for another card, or for system memory.

You just cannot exceed the amount of VRAM on the card, of course.

And, like all other AI projects, it prefers Nvidia, since the libraries are built for CUDA. Other GPUs need other libraries which emulate some of the AI tasks, making them a tiny bit slower.
I see, thanks. If one had a GPU with 8GB, would it be better to use the smaller Pygmalion, or the bigger Pygmalion with chunks split into system memory? I'm guessing the first choice is faster but the model is less robust, whereas the latter is the opposite.
 

CobraPL

NTR PALADIN
Donor
Sep 3, 2016
2,013
4,021

If I understand correctly, we will have a model comparable to 12B Pythia offline, probably with CPU support. A free, open-source, offline-working model (no need for expensive 40GB Nvidia enterprise cards).
 

yadaweg

Newbie
May 9, 2018
64
30

If I understand correctly, we will have a model comparable to 12B Pythia offline, probably with CPU support. A free, open-source, offline-working model (no need for expensive 40GB Nvidia enterprise cards).
Your thoughts on the newly announced from-scratch model and the 8k memory pool?
 

SharkVampire

Active Member
Sep 12, 2018
679
1,209
Your thoughts on the newly announced from-scratch model and the 8k memory pool?
I've read the impressions of people who have used this model. Although no one can use Storywriter with all 150k tokens anyway (it needs 4 A100 video cards), they say that despite the extra context, the model is not that smart. I also tried another 13B model with 4k context myself (I don't remember the name), but I could only use 2500 tokens on my 3060 in 4-bit. I can say that it is not too smart, in some ways even worse than other models. But the very progress in this direction is very encouraging! I liked Pygmalion 7B in many ways. If the developers fix the bugs and make a 13B model with additional context, it will be very good.
 

EvolutionKills

Well-Known Member
Jan 3, 2021
1,159
3,796
I'm not gonna lie, I really don't want those 'teaching' AI to use my personal fetishes as learning material. :oops: