Is anyone here up to date on the cutting edge of machine art?
Iterating on what I said on page 2,
if (WHEN) a company like Adobe
were to fully embrace machine art, empower it with more context and awareness, and make it easy for creators to touch up and iterate on the works, I think we'll see a golden age of higher-quality sex games.
Let me give a very basic example of touching up an image with a tool built specifically to remove text overlays and watermarks.
(linked images; forum registration required to view)
Apologies if those two images don't work, maybe these will instead:
(linked images; forum registration required to view)
This is a machine learning tool that is purpose-built with contextual knowledge about what text and overlays are.
It's quite possible that such a tool could be confused if the highlighted area were on top of a billboard that also had text. But I was lucky: the text sat on essentially empty space, and the content-aware fill technology did a decent job of removing it.
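A minimal sketch of the idea behind that kind of tool, assuming a grayscale image as a NumPy array: fill the masked "watermark" pixels by repeatedly averaging their neighbors. This is a toy diffusion fill I made up for illustration, nothing like the actual algorithm inside Photoshop's content-aware fill, but it shows why the trick works over empty space and would struggle over a busy billboard.

```python
import numpy as np

def naive_inpaint(img, mask, iters=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbors.

    img:  2-D float array (grayscale image)
    mask: boolean array, True where the text/watermark should be removed
    This toy diffusion fill only works well over smooth, "empty"
    backgrounds -- exactly the billboard caveat above.
    """
    out = img.copy()
    out[mask] = out[~mask].mean()  # crude initial guess for the hole
    for _ in range(iters):
        # average of the four neighbors (image edges clamped via np.pad)
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # only the masked pixels get updated
    return out

# toy example: a smooth gradient with a "watermark" block stamped on it
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True
damaged = img.copy()
damaged[mask] = 1.0  # the overlay
restored = naive_inpaint(damaged, mask)
```

On a smooth background the filled region converges to the surrounding gradient, which is why the removal in my screenshots looked clean; over textured content this approach just smears.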
Human artists are contextually aware of so many things that we take for granted. We are evolved creatures with millions of years of instinctual knowledge, and we appreciate certain things that an AI has no chance of understanding without being taught.
Don't get me wrong, AI can learn and infer miraculous things by finding patterns and connections in data that humans miss, because finding them would be too time-consuming and unintuitive for us.
We recognize a lazy eye instinctually, but might not be able to effectively communicate it to a machine.
We know to appreciate symmetry in faces. A machine doesn't know that more symmetrical = more betterer.
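For what it's worth, symmetry is one of the few of these instincts that's easy to make machine-measurable. A toy sketch (my own illustration, not any shipping tool's method): mirror the image about its vertical midline and score how well it matches itself.

```python
import numpy as np

def symmetry_score(img):
    """Score in [0, 1]: 1.0 means perfectly left/right symmetric.

    Mirrors the image about its vertical midline and measures the mean
    absolute pixel difference. A real face-symmetry check would align
    facial landmarks first; this toy version assumes the face is
    already centered, and that pixel values are in [0, 1].
    """
    mirrored = img[:, ::-1]
    return 1.0 - float(np.abs(img - mirrored).mean())

symmetric = np.zeros((8, 8))
symmetric[:, [1, 6]] = 1.0   # two "eyes" placed mirror-symmetrically
lopsided = np.zeros((8, 8))
lopsided[:, [1, 4]] = 1.0    # one "eye" drifted off-center
```

A scorer like this doesn't know *why* symmetry matters, it just gets told "higher is better" by the humans who built it, which is exactly the kind of taught-in judgement I mean.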
View attachment 2755266
In this image we know it looks wrong in... many many ways, but let's just go with these two things. A machine doesn't know that the eyes are looking in two different directions, and even if it did, it doesn't know that that looks bad.
As for the arm, the machine doesn't know that it's drawing an arm, and even if it did, it doesn't know that it looks off.
I do think the tools to do so will exist one day, but it will take tools built by humans to teach machine artists context and judgement.
A completely separate AI could look at this image and identify it as a picture of an asian woman.
It would be able to identify the sky, the green bushes behind her, her blue shirt, all of her facial features, her hair/neck/arm/hand, etc.
That identifier AI could then chop the image up, with its associated metadata, possibly create a 3D wireframe if the image has depth, and feed it back into the AI artist's workspace for iterating.
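To make that concrete, here's a rough sketch of what that chopped-up metadata could look like. Every name and field here is invented for illustration; a real identifier pipeline would be far richer.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """One labeled chunk of the image, as a hypothetical identifier AI
    might report it. Fields are made up for this example."""
    label: str        # e.g. "sky", "face", "left_arm"
    bbox: tuple       # (x, y, width, height) in pixels
    confidence: float # identifier's certainty, 0..1

@dataclass
class ImageMetadata:
    caption: str
    regions: list = field(default_factory=list)

    def find(self, label):
        """All regions carrying a given label (may be empty)."""
        return [r for r in self.regions if r.label == label]

# what the identifier might hand back for the attachment above
meta = ImageMetadata(caption="asian woman, outdoors")
meta.regions.append(Region("sky", (0, 0, 512, 120), 0.97))
meta.regions.append(Region("face", (200, 140, 90, 110), 0.93))
meta.regions.append(Region("left_arm", (150, 260, 60, 180), 0.61))
```

The point of a structure like this is that the artist AI can query it ("where is the face? how sure are we there's an arm?") instead of re-guessing from raw pixels each iteration.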
(Edit: not all that different than myself reading my own forum submission and identifying that my grammar and wording is dogshit with many mistakes)
The AI artist could then know that, "Oh, I am drawing an asian woman with long strawberry blonde hair", and then know to automatically do some extra work before presenting a final draft to the human.
The only way for it to know whether what it has drawn is anatomically accurate would be to zoom out and build a skeleton of the entire figure. Upon doing so, the machine could look at the original draft's metadata and determine that it is horribly flawed.
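Here's a toy example of the kind of sanity check that skeleton metadata would enable. The keypoint coordinates and the 0.7-1.1 threshold are made up for illustration, but the idea is exactly the "flawed draft" detection above: once you have shoulder/elbow/wrist coordinates, bad limb proportions become a simple arithmetic test.

```python
import math

def arm_proportions_plausible(shoulder, elbow, wrist):
    """Human forearms are roughly 0.7-1.1x the upper-arm length.

    shoulder/elbow/wrist are (x, y) pixel coordinates that a pose
    estimator would supply. Returns False for drafts like the
    attachment, where limb segments are visibly off. The threshold
    range is invented for this example.
    """
    upper = math.hypot(elbow[0] - shoulder[0], elbow[1] - shoulder[1])
    fore = math.hypot(wrist[0] - elbow[0], wrist[1] - elbow[1])
    return 0.7 <= fore / upper <= 1.1

# plausible arm: forearm is 0.75x the upper arm
ok = arm_proportions_plausible((100, 80), (100, 180), (175, 180))
# broken draft: forearm is 3x the upper arm
bad = arm_proportions_plausible((100, 80), (100, 180), (400, 180))
```

None of this gives the machine taste, but it gives it a checklist, and a checklist is enough to catch the worst "wait, that's not an arm" failures before a human ever sees the draft.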
But it just feels like machine art is extremely primitive right now. Many, many tools will need to be created to support it. I don't think simply relying on advanced pattern recognition will ever fill in the blanks of the missing context.