Honestly, I prefer janky homebrew art, so long as it catches certain details. Given their diet of high-quality images, diffusion models are good at generating what looks nice to the eye, but this itself becomes uncanny, like those smoothed-out "average faces" composited for different nations: attractive on one level (they certainly become more symmetrical), but at the cost of the particulars.
The struggle with using a diffusion model is always that detail is complex and fractal -- the most salient example is that people notice details in faces that they won't in arms, and perhaps even details in lips, nostrils, and eyes that they don't in foreheads and hair. Diffusion models now do a good job of capturing the gestalt of a character, but the result feels more like several different nice renditions of the same character, and getting those fine details right is demanding beyond any reasonable capacity for mass image production. Meanwhile, beyond laying down broad areas of color in somewhat consistent ways (time-consuming for an artist), even a relatively novice artist can notice and fix the details that the models mess up.
But there is also an arms race -- art acts as an often somewhat abstracted method of communication, and people oppose AI because on some basic level they're fighting back against being tricked: tricked into thinking a person is communicating with them when in fact the person responsible is avoiding doing so. They know they are being conned on some level. That level isn't initially visible, but once you see enough of these AI images you begin to get that uncanny feeling, and begin to prefer "real" art in an irrational way, like wanting scuffed jeans because the rich kids realized jeans are cool and are buying ultra-expensive designer ones.
As an aside, I love AI glitch images, the ones that are really bad. That is art! It really shows the spirit of the silly, mute machine. But I've come to prefer lower-"quality" images in terms of things like shading, background details, or details in clothing (i.e. irrelevant details that diffusion models can generate easily, often by accident) -- these show, in part, what the artist knows is important. That is also effective communication in art: animation puts hard limits on how much detail you can really put into things, so you save your hard work for what's important to you.
JSK's janky art is great; maybe it doesn't have every contour humanly possible, but it always does its job of communicating what matters. I suppose you could train a diffusion model on only his stuff, but that, as the name suggests, would simply be producing an emanation of the average expected JSK depiction, diffused out of what's already been made.
This AI adventure isn't anything but a fad -- some uses for LLMs and diffusion models are valuable and will persist, but the rest of it will end in tears (or violent revolution, unfortunately, but then also tears...) and massive lost fortunes. By some reckonings, massive fortunes have already been lost; it's just that the sunk costs have yet to be accounted for.