[Stable Diffusion] Prompt Sharing and Learning Thread

me3

Member
Dec 31, 2016
316
708
i "recreated" a similar prompt (to the large image), never really got the chance to work on finding a good seed or tweak the prompt cause things broke completely for me after a few images. So i can't even load the SDXL model again now, but if anyone is interested the prompt, as far as i got, should be in the image data.

00003-924026384.png
00001-4069027772.png 00002-1520687688.png 00004-2484495013.png

(Update: running this prompt on SD1.5 models has been quite the disappointment so far; fairly bland and empty backgrounds, and the "body" is basically just RoboCop-style armor :( )
 

me3

Are there any prompts or LoRAs that can make realistic and consistent eyes?
What do you mean by "realistic eyes"?
You can get very detailed and "correct-looking" eyes just by adding (and weighting) detailed eyes; adding other tags that focus on details in the image as a whole can improve it further. Unfortunately the "quality" of the eyes will drop the further away they are, since there are fewer pixels to make those details with.
If you're thinking more of things like dead eyes, looking spaced out, uninterested etc., then adding things like looking at viewer, or a specific direction or object, fixes some of those. Don't use focus/focusing on <object>; that makes the camera/view focus on that thing. There's also the good old crosseyed in the negatives; it can often fix other eye-related issues too, as it makes eyes get more attention in general.
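For anyone unfamiliar with the weighting mentioned above: A1111 reads (tag:1.2) as "give this tag 1.2x attention". As a rough illustration only (a simplified sketch, not A1111's actual parser — the real one handles nesting, [] de-emphasis and escaping), the explicit form can be parsed like this:

```python
import re

def parse_weighted_tags(prompt: str):
    """Parse a simplified A1111-style prompt into (tag, weight) pairs.

    Handles the explicit form "(tag:1.2)" and bare tags (weight 1.0).
    Nested parentheses and the []/{} variants are not handled.
    """
    tags = []
    for part in prompt.split(","):
        part = part.strip()
        m = re.fullmatch(r"\((.+):([\d.]+)\)", part)
        if m:
            # explicit emphasis: "(detailed eyes:1.2)" -> ("detailed eyes", 1.2)
            tags.append((m.group(1).strip(), float(m.group(2))))
        elif part:
            # bare tag gets the default weight of 1.0
            tags.append((part, 1.0))
    return tags

print(parse_weighted_tags("(detailed eyes:1.2), looking at viewer, portrait"))
```

The weight ends up scaling that tag's share of cross-attention, which is why over-weighting (1.4 and up) tends to distort the rest of the image.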
 

me3


Randomly testing to get black and white and colour in certain places, and it didn't come out too bad, so I thought I would share.
I haven't looked at the prompt or model used, but I came across a model that might fit the "style" of this kind of image. No idea if the model is actually any good yet, as I've not tested it and it's still being developed, but it might be worth a try or keeping an eye on.

The description said:
Lomostyle is a nod to analog/lo-fi photography, whose most consistent commonality is the use of film over digital. This can be black and white film, reverse color, color negative, etc., combined with cross-processing techniques that involve physical manipulation of the film. Overall, you can think of the aesthetic as 'vintage'.
 

Synalon

Member
Jan 31, 2022
225
663
me3 said: "...a model that might fit the "style" of this kind of images... might be worth a try or keeping an eye on."
I'll give it a try, thanks for the heads up.
 

me3

Continuing my prompt testing of SDXL in different "tools", just to see how it works, or if I can even get it to run, I've gotten to Comfy...
If it hadn't taken 51 minutes to do the generation with upscaling, I'd try to improve things like the face; yeah, that's not happening with that sort of time.
(Without upscaling it's obviously much faster, but still...)

comfy_0001.jpg
 

me3

When the AI decides to completely ignore prompting for "real" and just does its own thing...
I guess a working title would be "Puss.....in Boots joins Star Wars" txt2img_0005.png


Continuing the long-running image generation (long-running in a literal sense).
One image takes 1.5 hours, reaching the point where an artist could probably do a very good job by hand in the same amount of time...
SDXL
phoenix.jpg
phoenix_dream.jpg
phoenix_jug.jpg
dynavisionxl.png (not upscaled)
nightvisionxl.png (not upscaled)
protovisionxl.png (not upscaled)



(Edited to add images from different models with the same prompt and seed.)
(Second edit: the last three images aren't upscaled, to drastically reduce time. I also found out that the workflow/details weren't getting saved. If anyone is interested in it, any of the last 3 images should have it; one of the advantages of only having changed the model, so it's at least accessible/recoverable.)
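On the workflow being recoverable from the images: ComfyUI embeds the workflow/prompt as JSON in PNG text chunks. A stdlib-only sketch for checking whether a PNG still carries them (chunk keys can vary by version, and the compressed iTXt/zTXt variants aren't handled here):

```python
import struct

def png_text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks from a PNG byte string.

    ComfyUI stores its workflow JSON in PNG text chunks
    (keys like "workflow" / "prompt"), so this is enough to
    check whether a saved image still carries the workflow.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out = {}
    pos = 8
    while pos < len(data):
        # each chunk: 4-byte length, 4-byte type, payload, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # payload is "key\0value", both latin-1 per the PNG spec
            key, _, value = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length
        if ctype == b"IEND":
            break
    return out
```

Usage would be png_text_chunks(open("image.png", "rb").read()) and then looking for a "workflow" or "prompt" key.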
 

me3

Just to spam more boring images people probably don't care much about. There are boobs this time though...
Did some tests to see how the number of steps affects the image in XL, plus a small comparison across models to see how a person would look.
Done in A1111... but thanks to the absolutely amazing optimizing there, I had to run it with --lowvram due to horrible memory spikes and very weird times. So yes, you can use XL in A1111 on very low specs; just use one of the models not requiring a refiner, and have something else to keep you busy while waiting.
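For reference, the low-VRAM route described above comes down to the webui's launch flags (the flag names are A1111's own; the launch command and ordering here are just one common setup, not the only way):

```shell
# A1111 webui with reduced VRAM use.
# --medvram is the milder option and worth trying first;
# --lowvram trades a lot of speed for minimal VRAM use.
python launch.py --medvram --xformers

# when even --medvram spikes out of memory (as with SDXL here):
python launch.py --lowvram --xformers
```

The same flags can instead go into the COMMANDLINE_ARGS variable in webui-user.bat / webui-user.sh.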

Images generated using dreamshaper xl:
grid_steps.jpg
No idea what's going on with the 10-step one.

Model grid using 30 steps, links to the models can be found in this post:
grid_models.jpg

If you look at the prompt you might notice there's one specific thing in there that all the images failed to include, but hilariously the cat in the linked post took that instruction very well :p
 

Artiour

Member
Sep 24, 2017
295
1,149
me3 said: "If you look at the prompt you might notice there's one specific thing in there that all the images failed to include, but hilariously the cat in the linked post took that instruction very well :p"
Try two handed sword (just a suggestion, I don't know the outcome).
As for the cat, how about this: you don't write a thing about any cat and yet there is one in the picture anyway.
The mic and the stage/lights etc. probably came from the tag platform heels, and the drooling of the cat maybe from "dripping", but the cat itself... I would so much love to make that cat.
 

me3

Artiour said: "you don't write a thing about any cat and yet there is one in the picture anyways ... I would so much love to make that cat"
You're using word(s) that are synonyms of it, and the model might be in some way biased towards that type of image. It's easy to notice background bias in models if you render a lot of images without specifying things that require a specific type of background.

And no, models generally don't understand "simple" concepts like two-handed swords, or named types of swords. You might have something close to it show up, but it's not held at all, or just held in some random way. That's one of the things you use ControlNet for.
 

me3

Since XL is meant to be "better" at some things, I wondered how it would work to create images in some XL model and then train a LoRA on those for a 1.5 model. If it worked, it would, if nothing else, be a way to get "new" faces pretty easily, as the (let's call them default) faces in XL are different from the ones we're very familiar with from 1.5.
So I generated a bunch of 1024x1024 images, since even that would be pushing my ability to train on them, dropped the ones with obvious problems/faults, and started training... and noticed the expected time (stopwatch replaced with calendar) :p
Anyway, I ran it for 8 epochs; sample images suggested the output was pretty consistent, so I stopped training and tested the LoRAs, and they showed a similarly consistent "look".
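To put the "stopwatch replaced with calendar" into numbers: trainers in the kohya family derive the total optimizer step count from images x repeats x epochs / batch size. A minimal sketch with hypothetical numbers (the post doesn't state the actual dataset size or settings):

```python
def total_training_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Optimizer steps for a kohya-style LoRA run: each epoch sees
    every image `repeats` times, grouped into batches."""
    steps_per_epoch = (num_images * repeats) // batch_size
    return steps_per_epoch * epochs

# Hypothetical numbers -- not the actual run from the post.
print(total_training_steps(num_images=40, repeats=10, epochs=8, batch_size=2))  # 1600
```

Multiplying the step count by the seconds per step your GPU manages gives a rough wall-clock estimate, which is where low-VRAM cards end up in "calendar" territory.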

These are 4 images generated on the trained model, cyber realistic v3.3:
cath.jpg

Did a test across some other models to see how much of a difference there would be:
cath_models.jpg

Is there any interest in the LoRA?
I've got no idea what kind of issues it has with regards to flexibility etc.; I just did a very basic training setup, and since it worked seemingly OK there was no point in doing anything more.

(Updated to add link etc)
00152-1040718168.png
Adding image for advertisement?

, please no sale/profit usage, no claiming credit etc., just the usual respecting of other people's work :)
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,802
me3 said: "Is there any interest in the lora?"
Yes, of course we want it. :D (y)
 