The original Unity version of the Ferdafs game (A Way Back Home) used transform-based animation. It took static still images of individual body parts, like the one below, and stretched or rotated them according to a script ("move BOOB_LEFT up 2 pixels", "move NIPPLE_LEFT up 3 pixels") to create an animation. Basically what Flash did. The drawback is that these transforms take processing power every frame, which adds performance overhead and can cause lag or stutters. The upside is that if you edit the still images before the transform animation plays, the changes show up in the resulting animation. That's how Night Mirror was able to create non-condom versions of existing scenes: they made the condom image (red arrow pointing at it) transparent, so the animation played without it.

Regarding A), I don't see why weighting/rigging models couldn't be altered algorithmically based on which sliced image(s) the code base pulls.
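To make the idea concrete, here's a minimal sketch of that Flash-style approach. All names and the command format are made up for illustration, not taken from the actual game: the key point is that the part images themselves are never baked into the output, so blanking one layer (the condom trick) automatically changes every frame.

```python
# Sketch of transform-based (Flash-style) animation: static part images
# stay untouched; each keyframe just applies move/rotate commands to
# named parts. Part and command names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    x: float = 0.0      # current offset in pixels
    y: float = 0.0      # negative y = "up" in screen coordinates
    angle: float = 0.0  # rotation in degrees
    visible: bool = True # set False to blank a layer entirely

def apply_keyframe(parts: dict, commands: list):
    """Run script commands like ('move', 'BOOB_LEFT', 0, -2)."""
    for cmd in commands:
        op, name, *args = cmd
        part = parts[name]
        if op == "move":
            dx, dy = args
            part.x += dx
            part.y += dy
        elif op == "rotate":
            (deg,) = args
            part.angle = (part.angle + deg) % 360

parts = {p.name: p for p in (Part("BOOB_LEFT"), Part("NIPPLE_LEFT"), Part("CONDOM"))}
parts["CONDOM"].visible = False  # the layer edit: animation now plays without it

apply_keyframe(parts, [("move", "BOOB_LEFT", 0, -2), ("move", "NIPPLE_LEFT", 0, -3)])
```

Because the renderer re-reads the part list every frame, the single `visible = False` edit propagates to the whole animation for free.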
The current Unity version of the game apparently uses a new system that captures the animation frame by frame and saves only the changes between frames. This is less processing-intensive (more akin to playing back a video than mathematically transforming 2D images), but now the animation source material looks like this:
You'd need to reverse-engineer how the Unity engine animates the scene in order to recreate the source images, modify them, and then have the game play them back so you could capture the changes. I don't think anyone has the secret sauce for this currently.
EDIT: Not to mention that since the only export path is frame-capturing the final animation as it plays back, someone would have to manually edit the breast or stomach appearance on roughly 300 individual frames for a ten-second animation, and that's probably not realistic.