Hey there, I'm making an AI chat

Inoi


And thanks to ChatGPT for this translated version xD

Hey f95!

I got inspired to write this post thanks to a random thread by some awesome person here working on their own game. I've been using this fantastic site for quite a while now, but only as a reader. I figured, where better to share this than here?

Ever since ChatGPT came out, I've been having fun with it for sexting, to varying degrees. I even got one of my accounts banned by OpenAI. Then I messed around with Tavern and some other niche resources, including ones advertised here, but none of them really clicked for me.

At some point, I decided this could be good practice and a way to learn some cool, modern tech. So, I dove deep into Stable Diffusion (SD) and eventually decided to create my own sexting API. I've been working on it for about three months now, pretty consistently.

Right now, I've got the server-side stuff set up. It's an ASP.NET API app where I've been honing my development skills, focusing on proper design, separating business logic, domain entities, and all that technical jazz. I'm using two databases: MSSQL via EF for core data and MongoDB for messages and SD-generated image chunks. I've refactored it a couple of times, and it's probably the best part of my app right now, though some API endpoints still need a bit of work.

The frontend is my first serious attempt at React, so it's probably a bit rough in terms of component design and overall aesthetics, as I came up with the design on the fly. AIs - I'm using SD with a model I painstakingly selected, along with a few additional Lora models for characters (just test ones for now) and a fine-tuned merge of Llama with something else I can’t remember. It's not the best I've seen – for instance, Command-R seemed cooler, but since I run everything on my home PCs, Command-R is just too slow on my 3080.

The frontend interacts with my API, which in turn connects with the AIs, DeepL API, and Google Translate – the first for translating text to and from the user's language, and the latter for translating user prompts for the SD.

Why I'm writing this here
Sadly, among my friends, I seem to be the only one using something like f95. The rest are a bit more innocent. They're curious, some even tried it out a couple of times, but it's hard to get into something when you're not the target audience, so to speak.

I thought maybe here, I could find someone interested in checking this out. Perhaps offer advice or feedback (especially on AI character description patterns – that would be super helpful). Maybe even a user review or interface suggestions.

This is my first serious attempt at working with AI at this level, so I have tons of questions about how to improve things and how they are perceived by others. My main goal is to make something better than what I've seen, as I think I have a lot of ideas for the future that could take AI sexting to a more interesting level. When it comes to text, I'd love to hear about your experiences writing prompts for characters, especially if you've done something similar for Tavern. I'm keen on making something truly great, and I feel confident about my ideas and the process, but I'm still figuring out the character prompt part.

Right now, it's still in development, very alpha, but the basic functionality works. My two home PCs are chugging along, and they can definitely handle a couple of people if anyone's interested.

I haven't set up any Boosty, Patreon, or anything like that - it doesn't feel right to me. But before writing this post, I did set up a new Discord channel. If it's okay to share the link here, I will.

I'd be thrilled to read any advice, opinions, or recommendations right here too.

Cheers!
 

Inoi

So, here's a little more info about what we have now and how it works. First, a few images:

[UI screenshots]


Of course, almost everything here is still a rough sketch, done without any designers involved, and I will likely keep redoing many elements. I've already completely refactored the design once (though the general concept stayed the same).

A bit of technical stuff:

This is my first-ever React application.




On the front end it's Vite + React, Tailwind for styling, and SWR for fetching, mutating, and revalidating the data I get from the backend. There's also i18next for multilingual support - it's more of a groundwork, but it's already fully functional, and the interface language switches completely. SWR was fine-tuned by my front-end developer when he refactored my code - initially, I wasn't revalidating data dynamically at all.

The frontend is a purely client-side application, executed entirely in the browser - everything else is done by the backend.

The backend is an ASP.NET API application (which ideally should eventually be moved to sockets), designed around services and various domain entities. All of them are isolated from each other, working only within their own structure.




What else is there? Entity Framework, AutoMapper, Mongo, and JWT for user tokens.
For data that needs constant relational links between entities - MSSQL.




The idea is that most of the data inside should literally be enums, because that's better for tag-based navigation and search. That's not the case everywhere yet, but all the relationships between tables already exist, and they're all mapped into DataAccess entities using EF. The database logic is encapsulated behind a UnitOfWork.
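Roughly, the idea looks something like this - a simplified sketch with made-up entity and property names, not the actual code from the project:

```csharp
// Hypothetical sketch of the enum-backed entities and the UnitOfWork wrapper.
using Microsoft.EntityFrameworkCore;

public enum HairColor { Blonde, Brunette, Red, Other }

public class Character
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public HairColor Hair { get; set; }   // enum-backed field, handy for tag search
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Character> Characters => Set<Character>();
}

// All relational access goes through one unit of work,
// so the services never touch the DbContext directly.
public class UnitOfWork : IDisposable
{
    private readonly AppDbContext _context;
    public UnitOfWork(AppDbContext context) => _context = context;

    public IQueryable<Character> Characters => _context.Characters;

    public Task<int> SaveChangesAsync() => _context.SaveChangesAsync();
    public void Dispose() => _context.Dispose();
}
```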

It also works with a second database, Mongo, where only messages are stored separately.

Oh, not only. Photos generated by SD are also stored there - GridFS is used for this, which splits files into chunks and stores them in one collection, keeping the metadata in another.
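To give an idea of how small that part is, here's a rough sketch of the GridFS usage with the official MongoDB C# driver; the wrapper class and the metadata field are just illustrative:

```csharp
// Sketch: store an SD-generated PNG in GridFS (chunks + metadata collections).
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.GridFS;

public class ImageStore
{
    private readonly GridFSBucket _bucket;

    public ImageStore(IMongoDatabase database)
    {
        _bucket = new GridFSBucket(database);
    }

    // Saves the PNG bytes returned by the SD server and returns the file id.
    public Task<ObjectId> SaveAsync(string chatId, byte[] pngBytes) =>
        _bucket.UploadFromBytesAsync($"{chatId}-{Guid.NewGuid()}.png", pngBytes,
            new GridFSUploadOptions
            {
                Metadata = new BsonDocument { { "chatId", chatId } }
            });

    // Loads the image back for the frontend.
    public Task<byte[]> LoadAsync(ObjectId id) => _bucket.DownloadAsBytesAsync(id);
}
```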




And the most mysterious part, Stable Diffusion: the prompt the backend sends to it contains not only what the user wrote but also a pre-set piece. Besides quality descriptions and negatives, there's a name tied to a specific LoRA - a trained SD add-on which, based on around 1,500 photos, always draws the same character. (I didn't train them myself; these are ready-made LoRAs you can pick from.) Accordingly, SD is driven by a single service that reacts to the key phrase "send a photo" in the chat. The API endpoint is the same as for messages, of course, and the backend then runs everything through the AIs.
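As a rough illustration, the picture service assembles the prompt something like this - a simplified sketch assuming an AUTOMATIC1111-style /sdapi/v1/txt2img endpoint; the quality tags, host name, and trigger-word handling here are placeholders, not the real ones:

```csharp
// Sketch: compose quality tags + character LoRA trigger + user text,
// send it to the SD server, and get back the PNG bytes.
using System.Net.Http.Json;
using System.Text.Json;

public class SdService
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("http://sd-host:7860") };

    private const string QualityTags  = "masterpiece, best quality, highly detailed";
    private const string NegativeTags = "lowres, bad anatomy, extra fingers, watermark";

    public async Task<byte[]> GeneratePhotoAsync(string characterTrigger, string userPrompt)
    {
        var payload = new
        {
            prompt = $"{QualityTags}, {characterTrigger}, {userPrompt}",
            negative_prompt = NegativeTags,
            steps = 28,
            width = 512,
            height = 768
        };

        var response = await _http.PostAsJsonAsync("/sdapi/v1/txt2img", payload);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var base64 = doc.RootElement.GetProperty("images")[0].GetString()!;
        return Convert.FromBase64String(base64);   // PNG bytes, ready for GridFS
    }
}
```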

I tried a lot of text models: started with Kobold, wasn't impressed, tried something on Colab, and finally settled on the divine LM Studio, which does a huge part of the work by itself - for example, formatting the prompt for a specific model.

A little research was done here, and I ended up highlighting two cool models: Command-R, and the LLaMA-based one from TheBloke that's currently in use.

Command-R performs better, but it's very resource-intensive: on my 3080 with 32 GB of RAM (where the model is loaded), a single response takes more than a minute to generate, while TheBloke's LLaMA-based model does the same in a few seconds. Overall, it's quite decent too.

In total, besides the hosting with the front end, there are four servers. One runs Ubuntu Server, with Mongo and MSSQL deployed in Docker (you can't install MSSQL on the latest Ubuntu without a container). The second runs SD for photos and its API. The third runs the text AI and its API. The fourth runs the backend, which talks to all of them. The last three are on Windows.
And those are just old PCs under my table and borrowed from friends.

Google Translate and DeepL are connected on the backend: the first translates prompts for SD, and the second, more involved one translates the chat itself, letting the user write in almost any language - their text is translated into English for the AI and then back into the user's language after its response. (This is still a debatable solution that I'm working on.)
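Schematically, the round trip looks something like this - a simplified sketch against the DeepL v2 REST API; the wrapper class and the LLM call are placeholders:

```csharp
// Sketch: user's language -> English for the model -> back to the user's language.
using System.Net.Http.Headers;
using System.Text.Json;

public class ChatTranslator
{
    private readonly HttpClient _http = new();
    private readonly string _authKey;

    public ChatTranslator(string deeplAuthKey) => _authKey = deeplAuthKey;

    public async Task<(string text, string detectedLang)> TranslateAsync(string text, string targetLang)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "https://api-free.deepl.com/v2/translate")
        {
            Content = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["text"] = text,
                ["target_lang"] = targetLang
            })
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("DeepL-Auth-Key", _authKey);

        var response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var t = doc.RootElement.GetProperty("translations")[0];
        return (t.GetProperty("text").GetString()!,
                t.GetProperty("detected_source_language").GetString()!);
    }

    public async Task<string> RoundTripAsync(string userMessage, Func<string, Task<string>> askLlm)
    {
        var (english, userLang) = await TranslateAsync(userMessage, "EN-US"); // detect language, go to English
        var reply = await askLlm(english);                                    // placeholder for the LLM call
        var (localized, _) = await TranslateAsync(reply, userLang);           // back to the user's language
        return localized;
    }
}
```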

Regarding AIs, the idea of integrating ChatGPT, which is obviously the best neural network for chatting, keeps nagging at me, but I'm terrified of how many tedious conditions and filters I'd have to write to shield the user from its occasional refusals to discuss certain topics - refusals that would have to be recognized literally word by word. Not impossible, of course, but it looks daunting.

That's it.



Currently, I'm working on text prompts and small extensions to the API endpoints so they properly transmit JSON with several control fields, instead of parsing system keywords out of a single string as I foolishly did initially. Overall, I plan to transition to sockets.

Since all of this is actually running in my home, it's not always very fast or stable, but most of the time, if you feel like it, you can try it out.
 

Inoi

Quick updates (actually big ones). I've been experimenting a lot with text models, and I've now settled on the relatively new and robust Llama 3. It still has some censorship, but I've worked hard on the prompts to make it respond the way we want. (I often start testing these things on the character Zara, as she is the most candid.) So far, I've managed to get only one refusal out of about a hundred iterations, which is great!

Additionally, I've rewritten all the initial prompts and descriptions for all the characters. They now respond much more naturally and, at times, more simply (which I believe is how it should be).

Moreover, the characters should now occasionally send pictures upon request, if they feel like it, without needing to press the special camera icon at the bottom. Essentially, the request for a photo is sent from one neural network, which then prompts the second neural network, effectively without direct user intervention. This feature is unfortunately somewhat unpredictable and hard to ensure works flawlessly every time, but it seems to be functioning for now.

I've also fixed a few minor UI bugs, but that's minor stuff. The major work has been on the models and prompts. I'm always ready for more improvements, so feel free to share your experiences boldly.

For a better experience, please use English. However, the built-in DeepL should automatically recognize any language and translate your messages back and forth, so they are always in English for the AI and in your language for you. But DeepL can sometimes make funny mistakes :)
 

Deleted member 7357281

Welcome to the forums! I hope the chat AI is going well! I will join the Discord and try to help in any way I can! This project looks fun and very interesting! Super impressed by all this! Good job!
 

Inoi

Thank you so much for your support! :)
Even just likes on my posts, and especially support like this, are very important, because it always motivates me to keep going :3
 

lir1234

Hi! Following this and hope to try it in the future. Also, I hope it's something you can try first without paying!

So far, every such chat I've tried was either too limited to try out, or, if I did get to try it, the AI's capabilities didn't make me want to turn it into more than a one-hour experiment.

Good luck!
 

Inoi



Hello!

(I apologize in advance for any mistakes, I’m using a neural network to translate my stream of thoughts into English - I can handle short responses myself, but longer texts are more challenging for me)

Thank you very much for your interest, and I’ll try to address your quite reasonable complaint a bit.

I've also tried a huge number of similar AI chats and was very disappointed with all of them. I did pay for them; usually the price wasn't too high (except maybe Foxy, but I tried that one too).

$10 a month seems like a small cost if you get some decent content for that money. However, I quickly realized that the dialogues I had with ChatGPT were far more interesting, richer, and more detailed.

Primarily, it was this - the desire to at least try to give users a somewhat better experience - that motivated me to start studying this whole thing.

For the most part, even the additional content that exists in the environment now - like voice messages or audio calls, which seem quite easy to implement, still doesn’t solve the main problem - the text dialogues themselves quickly become boring.

Models quickly go into loops, lose initiative, and in some cases, just start continuing the user’s narrative. It’s the desire to try and do better, rather than a passion for making money, that drives me :)

Of course, I keep in mind that a quality solution should eventually make money; besides, neural networks do require resources. Even now, twenty simultaneous users would have to wait in line for a few minutes on my 3080 to have a proper conversation with the bot. All this infrastructure doesn't come cheap. It would be great to do it well, come up with a decent subscription system, and make money.

But none of this makes any sense if you're just making another AI-chat clone without first pushing your project's functionality to the maximum of its capabilities.

So it’s this idea and desire to just make something really cool that has driven me all my life.

I’ve been doing entertaining pet projects on the internet for quite a while. I had an L2 server fifteen years ago with an average online presence of two thousand people, where the only donation functions were nickname color changes and special symbols in it. Not long ago, I ran a GTA5RP server, which I sold after finishing everything I wanted to create on it, and the only interface it didn’t have was a donation interface :)

So, of course, someday I’ll need to come up with “limits,” but right now I’m definitely not thinking about that.

Currently, as long as my home infrastructure is sufficient and I’m only developing, almost all functionality is available completely free. (Almost always because I sometimes just turn off the neural network, and there are no limits, restrictions, or even any user parameters other than the nickname on my backend yet).

That’s about it, I hope I conveyed my thoughts clearly.
Thanks again for your interest and wishes :)
I’m always happy and open to any advice and feedback!

P.S. and I’m also a graphomaniac :D

P.P.S. By the way, I've been working on the mobile version for a few days:

[screenshot of the mobile version]

To be honest, I don't always see the point of it :D especially since it's obvious I'll have to redraw the entire UI with a professional designer.
But I want even the test version to work well everywhere.
 

lir1234


Thanks for your reply!

A few remarks on my part:
- I also have some GPU power (a 4080); not sure if there is a way to decentralize the computation
- I agree that since the goal is to make money, it makes sense to pay for it. I have never paid for adult stuff as I have never had a way to pay for it anonymously (I know crypto is a thing, but that's a lot of effort :) )
- If I were to pay for it, it would help to be able to try it for free first, so that I can see it's really worth it. But then you run the risk of people creating free accounts forever, which is the issue
- One thing I think even the best AI chats lack is fetish content; there is less of it than vanilla (makes sense, there is less material for the AI to learn from). The context settings are an important part, and for my part I wasn't sure how to fill them in properly. This might be where you can make a difference: if you provide a better way to configure the setting.
- Not sure about this one, but most chats have limited memory, and I guess the longer the memory, the exponentially larger the computing capacity you need, right? Maybe the general settings could automatically update as the chat goes on, to get around this memory capacity limitation.

Good luck!
 

Inoi

Hello again :)

Regarding decentralization - technically it is possible to distribute the queue across different servers. I haven't really thought about it yet, because my initial plan was to rent servers specifically for neural networks with multiple GPUs (at least for the initial MVP stage).

Thanks for the clarification about crypto - I have dealt with crypto payments before when I ran a store for computer game keys. This is an important point that I hadn't considered because I usually paid for such things with my card without any hesitation. But it is important, of course, and I will definitely think about how to implement it. I think it's quite feasible given the modern accessibility of cryptocurrency.

Of course, you are absolutely right about having a free period. I also do not support fully paid content; I don't want to pay for something I can't try first.

Vanilla content is a terrible thing :) To properly set up the context prompts, you need to listen to feedback first and foremost. Come and try talking to Zara :) That's precisely why I started talking about my small startup: I can't imagine how to properly tune content for a neural network without testing and numerous iterations, where you listen to feedback and rewrite the prompts. I simply don't have the time to talk to everyone long enough xD

You are absolutely right in your last assumption. When the model approaches its limit, I cut out the middle part of the conversation history stored in memory, keeping the initial prompt and the user's first messages as well as the most recent ones.
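In code, the trimming is roughly this kind of thing - a simplified sketch with a deliberately crude token estimate, not the exact implementation:

```csharp
// Sketch: keep the system prompt and the opening of the conversation,
// drop the middle, keep as many of the newest messages as fit.
public record ChatMessage(string Role, string Content);

public static class HistoryTrimmer
{
    private static int EstimateTokens(ChatMessage m) => m.Content.Length / 4; // rough heuristic

    public static List<ChatMessage> Trim(IReadOnlyList<ChatMessage> history, int maxTokens, int keepHead = 4)
    {
        if (history.Sum(EstimateTokens) <= maxTokens)
            return history.ToList();

        var head = history.Take(1 + keepHead).ToList();   // system prompt + first user messages
        var budget = maxTokens - head.Sum(EstimateTokens);

        var tail = new List<ChatMessage>();
        foreach (var m in history.Skip(1 + keepHead).Reverse())
        {
            budget -= EstimateTokens(m);
            if (budget < 0) break;
            tail.Add(m);                                   // most recent messages, newest first
        }
        tail.Reverse();

        head.AddRange(tail);
        return head;
    }
}
```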

Just in case - right now all this functionality is still in a sort of dev status, and it's completely free whenever the neural network on my computer is running.

I'm currently working on the graphical models, now that I've finished writing the instructions, which I think are in a MORE-OR-LESS suitable state (as I said, further work means testing and listening to feedback), and have more or less fixed the mobile version.
 

Inoi

Sooooo
What's new

Firstly, I've reworked the visual models of some of the girls a bit.




I've been testing SillyTavern for a couple of days now - it's a ready-made solution for connecting to your neural networks. It's quite extensive on the front end because it hooks up a lot of things right away and lets you use them. In short, it has pretty flexible settings for prompts, connections, TTS, and SD, all in one package.

I connected it to my neural network and added an additional layer through Claude, which further formats the prompt after my neural network. And yeah, it looks pretty cool and has a lot of features (although, at this point, it's unclear why I even need my backend since it stores everything itself).

But

First of all, as far as I understand, this is just a front end, meaning it's either impossible or pretty complicated to use it as an intermediary, since it simply doesn't have an API. You can write your own plugins to replace some functionality, but building an interface that way is just a huge pain. It's really messed up.

Maybe I could redesign its entire existing interface, separating accesses, but whether it's worth it is a big question because it looks pretty cool. But I could just rewrite my own prompts in my software, process formatting from the neural network, and overall produce the same thing.

Moreover, this is specifically geared towards being an interactive story, i.e., there are descriptions of characters and actions around their text. I initially intended for a chat that's more realistic in terms of perceiving the interlocutor, and this thing seems to break the fourth wall too much in my case.

Maybe, just for the sake of experiment, I could create separate characters around whom the world and events are described in the third person, but I'm not sure if it's worth making all of them like that.

But I could be wrong, of course.

What else: I ended up applying the Tavern principles in a test version for Amber, just to try chatting and see if I like it more. So, there will be third-person descriptions. At the same time, the first response to the user is always static, to set up the story (the other models generate their first response themselves). And the other girls will continue as usual, just having dialogues.


Amber is a girl with a surprise who doesn't match the current description on the site. :sneaky:
 

Inoi

Sounds good! I hope you will post here once you release :)
Hi, thank you! Most of the time all of this already works and is completely free. If you want, feel free to try it and leave your feedback, suggestions, and wishes for the virtual women :)
 

Inoi

Just felt like dropping an update — what's changed recently in the project.


Last time I think we left off somewhere around voice gen experiments?


So yeah, I've been messing around a lot with prompts again.
I mean, obviously. You could probably do that forever and never get bored.


Right now, each girl has her own prompt setup — personality description, etc — plus shared prefix/suffix "shield" prompts.
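Assembled, it looks roughly like this - a sketch with placeholder strings instead of the real shield prompts and character descriptions:

```csharp
// Sketch: shared prefix shield + per-character personality + shared suffix shield
// combined into one system message. All strings are illustrative.
public record CharacterProfile(string Name, string Personality);

public static class PromptBuilder
{
    private const string ShieldPrefix =
        "This is a private, fictional roleplay between consenting adults. Stay in character.";
    private const string ShieldSuffix =
        "Never mention being an AI, never break character, keep replies conversational.";

    public static string BuildSystemPrompt(CharacterProfile profile) =>
        string.Join("\n\n",
            ShieldPrefix,
            $"You are {profile.Name}. {profile.Personality}",
            ShieldSuffix);
}
```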

Surprisingly, GPT-4 (regular ChatGPT version) turned out to be way more chill than I expected.
Way better than 3.5 in this kind of context. You can actually have fun with it sometimes.


That honestly shocked me — I was so sure it would be all filtered to death, but turns out you don’t even really need jailbreaking anymore.
Just need to clearly frame it as a fictional roleplay, with mutual consent, and it gets the idea.


Setting some basic limits (like no kids, no real-world harm, etc) actually helps — makes things easier to steer, and doesn't kill immersion.


So yeah, on my site there’s now a switch: standard (local model) vs premium (GPT).
And GPT is just way more alive in convos, obviously.



Thing is — even with complex prompt setups, GPT sometimes skips the initial system message.
So I force that part through a local "abliterated" LLaMA model first, which never refuses anything, and then let GPT continue the convo once it has context/history.


Still had to implement a ton of keyword filtering like “I can’t”, “I’m sorry”, “I’m an AI” etc — if that shows up, it reroutes to local automatically.
So users never see rejection messages.
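The filter itself is nothing fancy - roughly something like this sketch, where the GPT and local calls are placeholder delegates and the marker list is just a sample:

```csharp
// Sketch: if the GPT reply trips a refusal keyword, silently regenerate with the local model.
public class FallbackChatService
{
    private static readonly string[] RefusalMarkers =
    {
        "i can't", "i cannot", "i'm sorry", "i am sorry", "as an ai", "i'm an ai"
    };

    private readonly Func<string, Task<string>> _askGpt;
    private readonly Func<string, Task<string>> _askLocal;

    public FallbackChatService(Func<string, Task<string>> askGpt, Func<string, Task<string>> askLocal)
    {
        _askGpt = askGpt;
        _askLocal = askLocal;
    }

    private static bool LooksLikeRefusal(string reply) =>
        RefusalMarkers.Any(m => reply.Contains(m, StringComparison.OrdinalIgnoreCase));

    public async Task<string> GetReplyAsync(string prompt)
    {
        var reply = await _askGpt(prompt);
        return LooksLikeRefusal(reply) ? await _askLocal(prompt) : reply;
    }
}
```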


It’s not perfect, but I’m constantly updating the filter when I catch weird responses.



Frontend had a bunch of minor bugs I can’t even remember now, but they’re fixed.
I added a token system (temporarily disabled when I started working on the Telegram stuff).
It worked though — tokens were added daily and spent on pics, etc.



At some point I got sick of VPN issues (couldn't get traffic through), so I just slapped in a second network card.
I also switched the messaging from HTTP to WebSockets.
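The socket side is just the standard ASP.NET Core WebSocket middleware - a minimal sketch below; the real handler feeds incoming messages into the LLM pipeline instead of echoing them:

```csharp
// Sketch: a bare-bones WebSocket endpoint for chat messages.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseWebSockets();

app.Map("/ws/chat", async context =>
{
    if (!context.WebSockets.IsWebSocketRequest)
    {
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
        return;
    }

    using var socket = await context.WebSockets.AcceptWebSocketAsync();
    var buffer = new byte[4096];

    while (socket.State == System.Net.WebSockets.WebSocketState.Open)
    {
        var result = await socket.ReceiveAsync(buffer, CancellationToken.None);
        if (result.MessageType == System.Net.WebSockets.WebSocketMessageType.Close) break;

        // Here the message would go to the LLM pipeline; the sketch just echoes it back.
        await socket.SendAsync(new ArraySegment<byte>(buffer, 0, result.Count),
            result.MessageType, result.EndOfMessage, CancellationToken.None);
    }
});

app.Run();
```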


Tried Qwen 2 and 3 — didn’t like them.


Voice generation is now entirely on the backend since ElevenLabs is blocked in Russia.
Also added proper rerolling of replies on the site (rebuilds history correctly).


Then started building a second frontend — the Telegram mini web app:



Turned out pretty decent. Basically same UX but inside Telegram.



Initially I wanted girls to talk from real Telegram user accounts, not bots. It’s doable.
But: if you go too hard with messages, Telegram just nukes the accounts eventually. No go.
So I had to simulate the immersion via individual bots, minimal UI, etc.


It's not random — I actually put thought into how it should feel.


It still supports everything — photos, voice messages, “typing…”, “recording audio…” — all the fun stuff.
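The "typing…" / "recording audio…" part is just the Bot API's sendChatAction, fired before the actual reply - a rough sketch, with the bot token and the delay heuristic as placeholders:

```csharp
// Sketch: simulate presence in Telegram before sending the generated reply.
using System.Net.Http.Json;

public class TelegramPresence
{
    private readonly HttpClient _http = new();
    private readonly string _botToken;   // per-character bot token (placeholder)

    public TelegramPresence(string botToken) => _botToken = botToken;

    // action: "typing", "record_voice", "upload_photo", ...
    public Task ShowActionAsync(long chatId, string action) =>
        _http.PostAsJsonAsync($"https://api.telegram.org/bot{_botToken}/sendChatAction",
            new { chat_id = chatId, action });

    // Pretend to type for roughly as long as a human would need, then send.
    public async Task TypeThenSendAsync(long chatId, string reply, Func<Task> send)
    {
        await ShowActionAsync(chatId, "typing");
        await Task.Delay(TimeSpan.FromSeconds(Math.Min(8, reply.Length / 20.0)));
        await send();
    }
}
```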


Only real differences:

  • You can only reroll the last message (unlike the website).
  • I can’t delete chat history on the user’s side (well, technically I could track Telegram message IDs but honestly... nah).
    So if the user wants messages gone — they gotta do it manually.

Other stuff:

  • No model selector in the app yet — no user dashboard implemented, so everything runs through GPT.
  • Local LLM only kicks in when GPT’s output matches my filter.
  • And for the first system prompt, like I said earlier.

And of course — the endless cycle of prompt tweaking, jailbreak hell, trying to stay in-character, etc etc. That shit never ends.


Two recent tech headaches:

  1. Roskomnadzor randomly started blocking Cloudflare IPs.
    So I ended up staring at dicks on mobile instead of girls. Not really my thing.
    No offense to anyone — just not my vibe.
    Solution?
    • Bought another domain (clone of the current one).
    • Moved it to Cloudflare DNS like before.
    • Got a cheap VPS in Germany, hosted my own nameserver (PowerDNS).
    • Pointed old domains to my nameserver.
    • Wrote a geo-DNS logic: if the request comes from Russia — resolve to Russian subnets; otherwise — let CF handle it.
    Worked surprisingly well. If the request comes from a Russian DNS resolver, it routes through my RU infra.
    Everyone else goes through Cloudflare like normal.
  2. Realized I need proper queueing for API calls to the local LLM/SD servers.
    My rig isn’t top-tier, and once 10 users request pics at the same time, SD starts timing out.
    So now I’ve got Redis in the middle managing job queues.
    Both the site and Telegram bots send their requests through it (rough sketch below).
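The queue itself is nothing special - basically a Redis list used as a FIFO, something like this sketch (queue name and job shape are illustrative, using StackExchange.Redis):

```csharp
// Sketch: producers (site + Telegram bots) push jobs, a single worker pops them
// and talks to the SD server, so only one generation runs at a time.
using System.Text.Json;
using StackExchange.Redis;

public record ImageJob(string ChatId, string Prompt);

public class ImageJobQueue
{
    private const string QueueKey = "queue:sd";
    private readonly IDatabase _db;

    public ImageJobQueue(IConnectionMultiplexer redis) => _db = redis.GetDatabase();

    // Called by the site and the Telegram bots.
    public Task EnqueueAsync(ImageJob job) =>
        _db.ListLeftPushAsync(QueueKey, JsonSerializer.Serialize(job));

    // Called by the worker loop in front of the SD server.
    public async Task<ImageJob?> DequeueAsync()
    {
        var value = await _db.ListRightPopAsync(QueueKey);
        return value.IsNullOrEmpty ? null : JsonSerializer.Deserialize<ImageJob>(value.ToString());
    }
}
```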
Feels like I’ve done a lot — even though most of it is invisible stuff like chat scroll fixes.
But man, real-life workload has exploded this year. From actual work shit to some guys asking me to manage an RDR server.
Still, whenever I get free time — I dive right back into it.


Next steps:

  • Fix ElevenLabs voices — most of them are garbage.
    Just got a decent payment card via some RU dudes in Germany (that’s how GPT API works too now), and I’ll hook it up soon.
  • Telegram UX improvements:
    Right now Telegram accounts aren’t tied to the site ones at all. Gotta fix that — add model selector, language choice, etc.
  • Rethinking dialog design:
    Originally aimed for “looks like real chat” vibe, but judging from anonymized messages — people actually prefer roleplay.
    Especially RU users. Like set-piece scenes, tavern-style stuff.
    So I might split it:
    One mode for casual convos, and another full-on roleplay mode — with the "softly breathing on your ear" kind of thing.
    Could just expand the prompts, or maybe integrate something like TavernAI with my backend. We’ll see.

Also, Telegram's probably the easiest to monetize.
Tokens will eventually be rechargeable — via stars, TON, whatever.

But as usual, that’s all somewhere waaay at the end of the list.


That’s it for now.