AI to generate art from photographs?

bull3tg0d

New Member
Oct 21, 2017
Disclaimer: I know absolutely nothing about AI image generation.

Is it possible to feed a photograph to an AI and have it produce a 2D/drawn/anime/cartoon version of it? I took photographs of models, environments, and objects for a visual novel game; I've finished almost everything on it except the art.

I've hired two different 2D artists to work on a game I'll have finished this December, but both dropped off and didn't see the project through to completion. Even with cash incentives for milestones and profit sharing, it's been an absolute challenge to find anybody with discipline and a work ethic. I'm sitting on a nearly finished game that has only placeholder art. I desperately need a way to do the art myself, but I can't draw to save my life, and I don't want to do it in 3D.

Can AI give you back the same image in multiple styles? If so, which AI would be best for that?

I've also heard of AI being able to change details via text prompts. That would also be incredibly valuable. I don't need anyone to walk me through the entire process; just point me in the right direction and I'll learn it all on my own.

Thanks in advance for all replies.
 

jordanja

Newbie
Apr 8, 2021
You can use the img2img tool in Stable Diffusion to generate new images based on an input: just use an image as the base and add text prompts describing the new image you want.

Example:
Original image created in Daz: booba.jpg
New image created using img2img and text prompts: 1111.png
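
If you'd rather script it than click around in the webUI, here's a minimal sketch of the same idea using the Hugging Face diffusers library. The model name, prompts, file names, and strength value are just placeholders to adjust:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load any SD 1.5-style checkpoint (model name here is just an example)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The photo or render you want to restyle
init_image = Image.open("booba.jpg").convert("RGB").resize((512, 512))

# strength controls how far the output drifts from the input:
# low values keep the composition, high values let the prompt take over
result = pipe(
    prompt="anime style, flat colors, clean lineart, detailed background",
    negative_prompt="photo, photorealistic, blurry",
    image=init_image,
    strength=0.5,
    guidance_scale=7.5,
).images[0]

result.save("1111.png")
```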

There's also an add-on for Stable Diffusion called ControlNet that gives you finer control over the final image, but I've never used it.
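
If you want to experiment with it, the ControlNet route in diffusers looks roughly like this (untested sketch on my end; the canny checkpoint keeps the outlines of the source photo, and everything else is a placeholder):

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Canny-edge ControlNet: the generated image follows the edges of the source photo
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Build the edge map that guides the composition
photo = load_image("booba.jpg")
edges = cv2.Canny(np.array(photo), 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    prompt="anime style, flat colors, clean lineart",
    image=edge_image,
    num_inference_steps=30,
).images[0]
result.save("controlnet_result.png")
```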
 

bull3tg0d

New Member
Oct 21, 2017
This seems like it would work, but I'm curious: can I easily maintain the same characters across different prompts? In other words, once I generate a character, is it possible to keep that same character (hairstyle, outfit, etc.) and put them in different poses with different facial expressions?
 

jordanja

Newbie
Apr 8, 2021
You can create your own custom models for Stable Diffusion to generate specific characters or outfits, but I've never tried it before. Try searching "stable diffusion custom LoRA" on YouTube.
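
For what it's worth, once you have a LoRA trained, loading it in a script is only a couple of lines with diffusers. The folder, file name, and "mychar" trigger word below are placeholders for whatever you end up training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint the LoRA was trained against (name is just an example)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load your trained character LoRA (placeholder folder and file name)
pipe.load_lora_weights("./loras", weight_name="my_character_lora.safetensors")

# Use the trigger word from training so the LoRA kicks in
image = pipe(
    prompt="mychar, sitting in a cafe, smiling, anime style",
    negative_prompt="photo, blurry",
    num_inference_steps=30,
).images[0]
image.save("mychar_cafe.png")
```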
 

digitbrush

Member
Apr 5, 2019
I have tried it. It's still very difficult to keep faces, clothes, etc. consistent across more than one picture in Stable Diffusion, even if you train your own model (LoRA, etc.). The best I could come up with was to build my scenes like I normally do in Daz, then use a trained LoRA on the faces in the finished renders. Even then, a lot of people didn't like the results, so I stuck with Daz moving forward. Maybe one day it will reach the level you're talking about.

By the way, the term for this kind of method is style transfer: you feed it several images and it gives them back in a different art style (comic, realistic, etc.) while keeping the faces, clothes, etc. intact.
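
In script form, that workflow is roughly the sketch below (again untested as written; the checkpoint, LoRA path, trigger word, and strength are all placeholders): render the scene in Daz as usual, then run the finished render through img2img with the character LoRA loaded at low strength so the composition survives and only the style changes.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Character LoRA trained on the Daz character's face (placeholder folder and file name)
pipe.load_lora_weights("./loras", weight_name="my_character_lora.safetensors")

# Finished Daz render of the scene
render = Image.open("daz_render_scene01.png").convert("RGB").resize((768, 512))

# Low strength keeps the Daz composition; the prompt and LoRA only restyle it
out = pipe(
    prompt="mychar, anime style, clean lineart",
    image=render,
    strength=0.35,
    guidance_scale=7.0,
).images[0]
out.save("scene01_styled.png")
```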