> Well, NAI smashes all local models easily.

Honestly, I just remembered this thread a few days ago. Currently using NAI.
> I was wondering if some of you could share your NAI stories and lorebooks? My stories always end up being quite lame; it could help my inspiration for better stories. If it's NTR oriented, all the better.

Try to continue this one.
> Guys, I'm being serious, you should try that. It gives you one of the best AIs without any censorship. (It might be slow depending on your system specifications, take that into account.)

Isn't the new Kayra model on NovelAI already uncensored? It's pretty good at writing smut, really good even.
I suppose. I paid for 2 months of NovelAI and the results were lackluster imo, even when I went deep into the configurations and scenario descriptions. That one is free at least, so there's no need to pay to try it. (Maybe NovelAI has improved a lot since I used it.)
Were you using the Kayra model? NAI has had some major updates these last few months, and I've been enjoying it. I think it's one of, if not the, best AI models for writing NSFW. It's also server-side instead of client-side (good for people who don't have powerful PCs), which could be a plus for some.
No, that one didn't exist at the time, so I might try again; hopefully it writes better now. I remember it started looping after writing for a while before.
> It loops; all text models do. The lowest tier has 4096 tokens of memory, so it remembers, but after a long time it forgets the main story (Lorebook entries help, but sometimes it needs a little extra help). And yes, Kayra is sometimes amazing. I don't know what happens with that model, because I said "sometimes": it can be really good and go with the flow of a scene, and other days (it happened to me twice) it dumbs down for some reason, and the narration and characters start to sound like a broken record. It's super random. And no, it has no censorship. I wrote some pretty hardcore stuff without any issues.

That's interesting, but there are some strategies to prevent looping (that have worked for me at least), like keeping in memory N tokens from the start of the story and N tokens from the end; that sometimes gives the model enough context to avoid looping.
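The head-and-tail trick above is easy to sketch. This is a toy illustration, not anything NovelAI ships; the function name and token budgets are made up for the example, and a real setup would count tokens with the model's actual tokenizer:

```python
def head_tail_context(tokens, n_head=512, n_tail=3072):
    """Keep the first n_head and last n_tail tokens of a story.

    The opening usually carries the setting and characters, while the
    tail carries the current scene, so dropping only the middle tends
    to preserve enough context to keep the model from looping.
    """
    # If the whole story fits in the budget, send it unchanged.
    if len(tokens) <= n_head + n_tail:
        return tokens
    # Otherwise splice the opening onto the most recent text.
    return tokens[:n_head] + tokens[-n_tail:]
```

With the defaults this fills a 4096-token context minus a small allowance for the model's own output.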
You write the end of your story before starting? I usually come up with an idea of the setting and the "goal" (e.g., my current story is a fantasy pirate setting, and I've decided that the main character's goal is to seduce and fuck the Five Pirate Queens that rule the oceans). So I just write sex encounter after sex encounter during ship travels, between battles, while sneaking into fancy Caribbean colonial manors, etc. When I get tired, I log out. And when time passes and I get bored of jerking off to custom smut of the same setting, I just start a different story. But I never know what I will write, what the end will be, or what will happen.
> I've tried [link] a couple of days ago; it's honestly pretty good, especially since it's free. Only issue for me is that I like to play from a female POV and the male always cums after two messages when shit's starting to get good lol

Ah nice, they have an option for having multiple personas.
We’ve finally received our new inference hardware! As part of this process, we’re currently migrating our operations to a brand new compute cluster. You may have noticed some speed upgrades already, but this change will improve server and network stability, as well.
Since everything is finally coming together, it is time to announce the release schedule for our upcoming 70-billion-parameter text generation model, Llama 3 Erato.
Built with Meta Llama 3: Erato
In order to add our special sauce, we continued pre-training the Llama 3 70B base model for hundreds of billions of tokens of training data, spending more compute power than even our previous text generation model, Kayra. As always, we finetuned it on our high quality literature dataset, making it our most powerful storytelling model yet.
Llama 3 Erato will be released for Opus users next week, so get ready for the release, the wait is almost over!
Until then, we are busy migrating to the new cluster and switching our text generation models, Kayra and Clio, to a new inference stack, which serves these unquantized models more efficiently. However, this stack does not play well with CFG, so we will need to say goodbye to CFG sampling.
To make up for this, we are releasing two new samplers, which will also be supported for Erato: Min P and Unified Sampling:
The popular Min P sampler sets a simple threshold at a proportion of the top token’s probability, and prunes any tokens below it.
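Min P is simple enough to sketch in a few lines. This is an illustration of the idea, not NovelAI's implementation; the function name and the example distribution are invented for the demo:

```python
def min_p_filter(probs, min_p=0.1):
    """Prune tokens whose probability is below min_p times the top
    token's probability, then renormalize the survivors."""
    threshold = max(probs.values()) * min_p
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "zebra": 0.02}
filtered = min_p_filter(probs, min_p=0.1)
# Threshold is 0.5 * 0.1 = 0.05, so "zebra" (0.02) is pruned.
```

Because the cutoff scales with the top token's probability, Min P prunes aggressively when the model is confident and permissively when many continuations are plausible.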
Unified Sampling is designed to replace your entire sampling chain, so you can use it alone without any other samplers. It’s based on new research, so the quality should be a minor improvement over your existing presets, while being much simpler.
To use it as intended, navigate to the Change Settings Order button, enable Unified and Randomness, and disable all the others. Then set Randomness to 1. Unified has three parameters: Linear, Quad, and Conf. Increasing Linear will prioritize high-probability tokens. Increasing Quad will delete the lowest-probability tokens. Increasing Conf will prioritize the highest-probability tokens, but only when the probability of the top token is low.
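The post does not publish Unified's actual formula, so the following is a purely illustrative toy that reproduces the three qualitative behaviors described for Linear, Quad, and Conf. Every name and term here is an assumption for demonstration, not NovelAI's implementation:

```python
import math

def unified_toy(logprobs, linear=1.0, quad=0.0, conf=0.0):
    """Toy score transform mimicking the described behaviors:
    - linear sharpens toward high-probability tokens,
    - quad quadratically punishes the lowest-probability tokens,
    - conf adds extra sharpening only when the top token is uncertain.
    """
    top = max(logprobs.values())
    uncertainty = 1.0 - math.exp(top)        # near 0 when the top token dominates
    sharpen = linear + conf * uncertainty
    # Logprobs are negative, so -quad * lp**2 hits low-probability tokens hardest.
    scores = {t: sharpen * lp - quad * lp * lp for t, lp in logprobs.items()}
    # Softmax the scores back into a distribution.
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}
```

With linear=1 and the other knobs at 0 the toy returns the distribution unchanged; raising either knob shifts mass toward the likely tokens, which is the qualitative behavior the post describes.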