Unreal Engine - How can we achieve soft skin physics in Unreal Engine?

Velomous

Member
Jan 14, 2024
In Unreal, is it possible to create a SkinnedMesh at runtime? (I believe they are called Skeletal Meshes in Unreal.) Basically, creating a bone hierarchy and assigning weights to a static mesh? If so, is it feasible to create a real softbody system similar to those used in Unity, based on particles? From what I've seen, ChaosFlesh works that way, but it's too cumbersome.

I've been wanting to experiment with C++ and Unreal for a while, but for now, I have enough on my plate with Unity. We'll see during my next holidays, lol.

Softbody Particles in Unity:

[video attachment: softbody particles demo in Unity]
Wow, that looks both really bad and really good at the same time somehow. I don't know about creating a bone hierarchy at runtime (I kinda doubt that), but if you already have the bone hierarchy, you can very easily slap static meshes on the individual bones at runtime, no problem.

[image: 24-08-176.png]

You can have a skeletal mesh that is just the skeleton without the actual mesh, I believe.
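For reference, that attach step looks roughly like this in C++ (a minimal sketch; the function itself is made up, but attaching to a bone by name via AttachToComponent is standard engine behavior):

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Components/StaticMeshComponent.h"
#include "Engine/StaticMesh.h"

// Minimal sketch: spawn a static mesh component at runtime and snap it to a
// bone of an existing skeletal mesh (bone names double as socket names here).
void AttachStaticMeshToBone(USkeletalMeshComponent* SkelMesh,
                            UStaticMesh* MeshAsset, FName BoneName)
{
    if (!SkelMesh || !MeshAsset)
    {
        return;
    }

    UStaticMeshComponent* Part = NewObject<UStaticMeshComponent>(SkelMesh->GetOwner());
    Part->SetStaticMesh(MeshAsset);
    Part->RegisterComponent(); // required for components created at runtime

    // From here on the component follows the bone's animation.
    Part->AttachToComponent(SkelMesh,
        FAttachmentTransformRules::SnapToTargetNotIncludingScale, BoneName);
}
```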

I don't know anything about Unity softbody, but your effect looks to be in a similar ballpark to the Niagara (particle) softbody techniques I explored a bit earlier in this thread.
 

darkevilhum

Newbie
Sep 9, 2017
You're welcome, I'm glad you like it.

The makeup stuff seems simple enough. I take it you just open the base texture of your mesh in Photoshop and draw the makeup over the face on a new layer to get a mask, right?

The tattoo is a lot more complicated; I can barely make heads or tails of it. How are you mapping it to individual UV islands? How do you prevent it from getting distorted?

What I'd personally like to do is just use one mask for all makeup and tattoos. I'd use the render target's alpha for opacity control (although making that editable would be somewhat challenging; I have a few ideas, though).

I've been trying to work out a system that uses mesh painting to position the tattoo on the body, and it is hard.

I wouldn't mind sharing how I've done what I have so far, but it's basically just the tutorial I linked a while back (you can find it on the first page). I took the mesh painting tutorial and improved on it a little: normally it needs a scene capture to unwrap the UVs every single frame you want to paint, but I converted it all from world space to relative space, so it only has to unwrap the UVs once on BeginPlay (this makes it orders of magnitude more performant). I also made it support multiple materials; the model below, for instance, has face and body on separate materials.
[attachment: painted model with face and body on separate materials]

I haven't figured out how to fix that ugly seam on the edge of each UV island, though admittedly I haven't tried very hard yet, because I keep getting stuck on another problem: I cannot for the life of me figure out a way to draw with a texture instead of a mathematically generated shape. In other words, with what I have, I could let a user paint custom makeup and tattoos on the mesh by hand, which is very cool in and of itself.

But even though it should theoretically be easy to slap on a texture instead of a shape, it's anything but. The shape masks (sphere and box) can somehow understand a 3D position, but a texture sampler certainly cannot, so just the very first step of centering the texture on the position has me stuck :HideThePain:
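Roughly, the missing step is this math (illustrative only, nothing in the material does this yet; TangentU/TangentV are an assumed projection plane and StampSize is the brush width):

```cpp
#include "CoreMinimal.h"

// Build 0-1 stamp UVs from the offset between the unwrapped surface position
// and the brush center, projected onto an assumed plane (TangentU/TangentV).
// Sampling the stamp texture at these UVs, clamped, is the texture analogue
// of the sphere mask's distance test.
FVector2D ComputeStampUV(const FVector& UnwrappedPos, const FVector& Center,
                         const FVector& TangentU, const FVector& TangentV,
                         float StampSize)
{
    const FVector Delta = UnwrappedPos - Center;
    return FVector2D(FVector::DotProduct(Delta, TangentU),
                     FVector::DotProduct(Delta, TangentV)) / StampSize
           + FVector2D(0.5f, 0.5f);
}
```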
Do you mind explaining your method for changing the unwrapping process from world to local/relative space? I'm still unwrapping via scene capture; it would be interesting to see how much more performant it is without that janky part.
 

Velomous

Member
Jan 14, 2024
It's quite simple, actually. On BeginPlay I save the actor transform to a local variable, set the actor transform to the world origin (just 0,0,0 across the board, except scale), unwrap, set the scene capture's world rotation to 0,-90,-90, capture the scene, and then set the actor transform back to the original (and although you can't see it in the code below, I also tend to delete the scene capture after this, because it is no longer needed).
[screenshot: BeginPlay unwrap Blueprint nodes]

Then when I send the location data, I just make it relative like this before sending it to the hit mask material:

[screenshot: world-to-relative conversion nodes]

I originally wanted to transform the position from world to local inside the UV unwrap material, but it produced inaccurate results (I think the transform position node in the material graph isn't 100% reliable, or maybe just buggy sometimes), which is why I instead just move the mesh to the world origin and unwrap it there, which should produce a result identical to a local-space unwrap.
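In C++ form, the whole sequence is something like this (a sketch of what the Blueprint above does; UnwrapCapture is the scene capture component, and all names are illustrative):

```cpp
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"

// Sketch: one-time unwrap on BeginPlay. Move the actor to the world origin,
// capture the UV unwrap once, then restore the original transform and drop
// the capture component.
void UnwrapOnceAtOrigin(AActor* Actor, USceneCaptureComponent2D* UnwrapCapture)
{
    const FTransform Saved = Actor->GetActorTransform(); // remember where we were

    FTransform AtOrigin = Saved;
    AtOrigin.SetLocation(FVector::ZeroVector); // zero location and rotation,
    AtOrigin.SetRotation(FQuat::Identity);     // leave scale untouched
    Actor->SetActorTransform(AtOrigin);

    UnwrapCapture->SetWorldRotation(FRotator(0.f, -90.f, -90.f)); // as in the post
    UnwrapCapture->CaptureScene(); // unwrap lands in the capture's render target

    Actor->SetActorTransform(Saved);   // put the actor back
    UnwrapCapture->DestroyComponent(); // no longer needed after the one capture
}

// Per paint event: convert the world-space hit into the actor's space before
// sending it to the hit mask material ("just subtract", ignoring rotation).
FVector ToRelative(const AActor* Actor, const FVector& WorldHit)
{
    return WorldHit - Actor->GetActorLocation();
}
```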
 

darkevilhum

Newbie
Sep 9, 2017
It's quite simple, actually. On BeginPlay I save the actor transform to a local variable […] which should produce a result identical to a local-space unwrap.
Hmm… but doesn't the hit mask material use sphere masks, which require a world position? You're sending a relative position instead. How does that work, or am I missing something about sphere masks?
 

Velomous

Member
Jan 14, 2024
Hmm… but doesn't the hit mask material use sphere masks, which require a world position? […]
Ugh, it's hard for me to put into words because I'm not exactly an expert on this; it's stuff I've just picked up and learned by doing. You have world space, a position in the world, and relative space, a position in relation to the actor or component itself. All I'm doing is converting the world-space coordinates to relative-space ones. (If memory serves, converting a location from world space to relative space is as simple as subtracting the actor's world position from the world position you want to convert to that actor's relative space.)

Like when you create a Blueprint actor and put some components in it: that little viewport in the Blueprint editor is a visual representation of relative space, and the coordinates you fiddle with in the details panel are relative-space coordinates.

The sphere mask accepts two parameters, the 'area' and the 'center', so when we use it for mesh painting we feed it the UV unwrap as the 'area' and the positional data as the 'center'. If we unwrapped the UVs at the world origin, that's functionally the same as unwrapping in relative space, which means the sphere mask will work (only) with relative-space coordinates; if we unwrapped the UVs at the world position instead, it'll only work with world-position coordinates.
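For reference, the node computes something roughly like this (an approximation from memory, not the engine's exact shader code):

```cpp
#include "CoreMinimal.h"

// Approximate SphereMask behavior: 1 near the center, falling to 0 at Radius;
// Hardness (0-1) narrows the falloff band. A and B must be expressed in the
// same space - which is the whole point above.
float SphereMaskApprox(const FVector& A, const FVector& B,
                       float Radius, float Hardness)
{
    const float Falloff = FMath::Max(Radius * (1.f - Hardness), KINDA_SMALL_NUMBER);
    return FMath::Clamp((Radius - FVector::Dist(A, B)) / Falloff, 0.f, 1.f);
}
```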

If both of them were made using relative space, we get accurate results, just as we do if both were made using world space; it's only when one is not in the same space as the other that we get inaccurate results.

Edit: Second attempt to explain

The normal way of doing it: we unwrap the UVs in world space and use world-space coordinates with that unwrap and the sphere mask, and we always get accurate results.

What I'm doing instead is unwrapping the UVs at 0,0,0 and then sending relative-space coordinates along with that unwrap for the painting. The reason this works is that any world location data will be a perfect match for relative-space data for an actor positioned at 0,0,0, because of how relative space works (which I tried to explain above).

The sphere mask is ultimately a material node, and the material is only as aware of world space as you make it, so by unwrapping in relative space and feeding in relative-space coordinates, it just works (tm) the same as in world space; if anything, it's more reliable this way. Honestly, I was more confused that the world-space approach worked, because I didn't know you could send world-space data through a render target; I still can't wrap my head around that idea, in fact.
 

darkevilhum

Newbie
Sep 9, 2017
Ugh, it's hard for me to put into words because I'm not exactly an expert on this […] I still can't wrap my head around that idea, in fact.
Ohh right. Sorry, yeah, you explained it fine now. I forgot that with your initial one-time unwrap you've basically unwrapped it at 0,0,0, so you've just got the 0-1 UVs and can actually give a relative coordinate to the sphere mask. Gotcha! That's clever. All the methods I've tested so far rely on the logic being in separate components, which all fall back to a single Scene Capture component that ends up unwrapping repeatedly on demand. This will be an interesting performance gain as I was starting to see some losses with so many different effects on top of each other. Thank you
 

Velomous

Member
Jan 14, 2024
This will be an interesting performance gain as I was starting to see some losses with so many different effects on top of each other. Thank you
I mean, yeah, my fps was dipping to 40 when the scene capture went off, which made mesh painting prohibitively expensive; mind you, fps was already locked to 60, so the real cost could easily have been even higher than the visible drop. And from the moment I first heard about this method, my first thought was: "OK, so they're unwrapping the UVs every single frame they're painting… why? Why don't they just unwrap once?" Then I tried it and it just worked. I was fully expecting to get stuck on it, since it was such an obvious optimization that surely people would have tried it. But apparently not.

Granted, I imagine they may have tried the transform position world-to-local inside the material graph like I originally did, which for whatever reason didn't work properly. But the hacky workaround of just moving the actor to the world origin and unwrapping there worked just fine.
 

darkevilhum

Newbie
Sep 9, 2017
I mean, yeah, my fps was dipping to 40 when the scene capture went off […] the hacky workaround of just moving the actor to the world origin and unwrapping there worked just fine.
Haha, you nailed it. I had the same thought and tried exactly what you mentioned, but inside the material, and for whatever reason that just doesn't work. At least doing it this way does; running that Scene Capture all the time was just awful.
 

razfaz

Member
Mar 24, 2021
Quick UE (5.x+) dev-front update regarding Nanite skeletal mesh morphing/skinning:
- Some work has been done there in the past weeks, but it's still very early and experimental.
- It is still on the to-do list for the user-facing work pipeline.

So... I'll keep you updated.

Have fun (y)
 

darkevilhum

Newbie
Sep 9, 2017
Velomous, when you have a sec, can you try something out for me? I'm testing your depth fade discovery, but I'm hitting a wall that I feel like I shouldn't be hitting, so I might just be doing something dumb.

I've got a simple actor with a character mesh on it.
I also created a render target texture in my content (so I can open and preview it).

I've got this material applied to the character mesh (it's set to Unlit for render target compatibility and Translucent so it can read scene depth): [screenshot: depth fade mask material]


If you swap it to Lit and plug the output into Base Color, it works on the character visually: you can see white on the character where other meshes intersect it. So it's making an ideal mask for any kind of soft-body interaction we might want.

But when I try to apply this material to a render target to actually create a mask texture that we can use, like so:
[screenshot: Draw Material to Render Target nodes]

I get either all black or all white. I suspect either:
A: I'm tired.
B: There's no connection between the material and the mesh when drawing it to the render target, so the material is rendered as if applied to no mesh at all, and there's nothing for it to interact/intersect with via the depth buffer.

If you get the same results, then the only solution I can think of is to... unfortunately... unwrap a mesh that is using that material via a scene capture, every frame. Eugh. But that would potentially give us a mask that would enable fairly good soft-touching results across the board. What do you think?
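For reference, the draw step boils down to the call below. As far as I know, it renders the material as a flat quad into the target, with no mesh and no scene behind it, which would fit guess B (a sketch; everything apart from the library call is a made-up name):

```cpp
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInterface.h"
#include "Kismet/KismetRenderingLibrary.h"

// The failing step in code form: this draws the material as a flat quad into
// the render target. There is no mesh and no scene depth behind that quad, so
// a DepthFade-based mask has nothing to fade against, and the output collapses
// to a constant - consistent with guess B above.
void DrawMaskToRenderTarget(UObject* WorldContext,
                            UTextureRenderTarget2D* MaskRT,
                            UMaterialInterface* DepthFadeMaterial)
{
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, MaskRT,
                                                        DepthFadeMaterial);
}
```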
 

Velomous

Member
Jan 14, 2024
Velomous, when you have a sec, can you try something out for me? […]
Alright, so if I'm understanding you right: if you put the above material on the mesh and test with depth fade, it works normally, and if you draw that to the render target, it presumably draws properly too. You might not have tested that, but that's probably how it would be.

But when you have the regular material on the mesh and you try to use this material to generate a mask, it comes out all white, right?

I have a guess about what's happening. Depth fade detects opaque surfaces (I think). If you have the normal material on the mesh, then the mesh is an opaque surface, so the mask material is detecting the mesh's own surface and therefore turning all white.

If my guess is right, we'd have to figure out a way to exclude the mesh itself from the depth fade calculation. Since depth fade seems to detect opaque objects, it might not be affected by proximity to the transparent side of a non-two-sided object, so maybe offsetting the check inwards, so it detects collisions from just below the skin, would do the trick; otherwise it'd have to be offset further away from the skin, or even deeper inside it. (You'll have to test this yourself unless I get around to it first.)

Also, a tip: you probably want to clear the render target on each tick before drawing to it, so that only whatever is happening on the current frame gets drawn to it, as opposed to everything since BeginPlay.

An alternative if all else fails (again, assuming my guess was right) would be to quickly swap the main material out for the mask-generation material, draw the mask to the render target, and then swap back to the original material every time the mask needs to be updated, so that the mesh isn't opaque at the moment you generate the mask. (Actually, since we might have to account for clothes and such, this might be the better option anyway, since otherwise any opaque clothing would affect the mask, which isn't always desirable.) A rough sketch of that flow is below.
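Something like this, in C++ terms (all names are made up, the capture is assumed to target the mask render target, and whether the swap is cheap enough per update is untested):

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

// Per mask update: clear last frame's mask, swap in the mask-generation
// material so the mesh isn't opaque to its own depth fade, capture, restore.
void UpdateContactMask(UObject* WorldContext, USkeletalMeshComponent* Mesh,
                       int32 MaterialSlot, UMaterialInterface* MaskMaterial,
                       UMaterialInterface* NormalMaterial,
                       UTextureRenderTarget2D* MaskRT,
                       USceneCaptureComponent2D* Capture)
{
    // Only this frame's contacts should remain, not everything since BeginPlay.
    UKismetRenderingLibrary::ClearRenderTarget2D(WorldContext, MaskRT,
                                                 FLinearColor::Black);

    Mesh->SetMaterial(MaterialSlot, MaskMaterial);   // mesh is now translucent
    Capture->CaptureScene();                         // mask lands in MaskRT
    Mesh->SetMaterial(MaterialSlot, NormalMaterial); // restore the real look
}
```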
 

darkevilhum

Newbie
Sep 9, 2017
Alright, so if I'm understanding you right […] since otherwise any opaque clothing would affect the mask, which isn't always desirable.
Swapping the material out temporarily might be an idea, like we did with that unwrapper.
But the main issue I'm seeing is that it just doesn't write to a render target correctly in the first place. The mesh works correctly visually: I can see white where a sphere intersects mine, and the rest of the mesh is black.
But when I grab the material from that mesh and try to write it to a render target, it all just comes out fully white. The render target is set to the default write mode (overwrite), so it should just be replacing the state every frame. Feels like I'm missing something simple.
 

darkevilhum

Newbie
Sep 9, 2017
Did some hair related stuff.

Tested:
  1. Hair cards with bones - Visuals: decent. Physics: subpar. Performance: best.
  2. Strand-based groom - Visuals: orgasmic. Physics: subpar (really hard to get these to look realistic: you need the strands stiff enough not to deform badly near the scalp, but stiff hair may as well not have physics). Performance: really bad.
  3. Hair cards with cloth simulation - Visuals: good. Physics: best. Performance: not bad.

Hair cards with cloth sim is probably what I'd go with. On the topic of hair, I put together a quick hair material that does the job for me. The only unique thing about this material is that it includes a wet value that changes the hair visuals to look wet. My wet value comes from a material parameter collection driven by a plugin (UltraDynamicSky), but you could drive it however you like, tbh.
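If you don't use the sky plugin, driving the wet value yourself is basically a one-liner (a sketch; the collection asset and the "Wetness" parameter name are whatever you set up in your project):

```cpp
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

// Push a 0-1 wetness value into a Material Parameter Collection; every
// material reading the collection (like the hair material) updates at once.
void SetHairWetness(UObject* WorldContext,
                    UMaterialParameterCollection* WeatherParams, float Wetness)
{
    UKismetMaterialLibrary::SetScalarParameterValue(
        WorldContext, WeatherParams, TEXT("Wetness"),
        FMath::Clamp(Wetness, 0.f, 1.f));
}
```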

My final results:
[three spoilered result clips]
 

Velomous

Member
Jan 14, 2024
Did some hair related stuff. Tested: 1. Hair cards with bones […] 2. Strand-based groom […] 3. Hair cards with cloth simulation […]
That is very good info. I had some ideas about trying Niagara particle-based hair; I believe Head Game had a particle hair option, or at least something its creator called particle hair, but that's a ways away for me.

Hair cards with cloth sim give a really good result too; I think that's what Epic did with their Paragon models. I messed around a bit with the Countess one, or rather, the hair was the only thing in the 'clothing' slots of the mesh. (I also discovered that working with said clothing slots at runtime is one of the most confusing things you can attempt with the engine; there's almost no infrastructure in place for it.)
 

darkevilhum

Newbie
Sep 9, 2017
Had some fun with particles and materials. (Don't judge the ejaculation particles LOL. They're just plain white ovals atm).

Probably not the most performant way to do this, but I get lost in Niagara; it's too in-depth for my patience.

I did this using a simple Niagara particle system that "drops" a single particle, which I record via a scene capture into a render target and then use as a mask in a slightly more complex material to get the effect below. I know there's a way to write particles to a render target in Niagara itself, but that's beyond me, and the scratch pad stuff is also way out of my headspace at the moment. But the results here are pretty good, I think.

The actual triggering is just done via a Niagara particle collision callback in Blueprints. When a particle collides, it triggers a callback function in the Blueprint, and in there I just do a bit of math to trace toward the mesh along the collision normal, get the UVs via the RuntimeVertexComponent plugin, and apply the effect.
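The trace half of that looks roughly like this in C++ (a sketch: the engine's UGameplayStatics::FindCollisionUV stands in for the plugin's UV lookup here; it needs the "Support UV From Hit Results" project setting and complex collision with UV data, which not every mesh type provides, hence the plugin):

```cpp
#include "Kismet/GameplayStatics.h"
#include "Engine/World.h"

// Trace back toward the mesh along the collision normal and recover the UV at
// the hit point; the UV drives where the effect lands in the mask material.
bool GetHitUV(UWorld* World, const FVector& CollisionPos,
              const FVector& CollisionNormal, FVector2D& OutUV)
{
    FCollisionQueryParams Params;
    Params.bReturnFaceIndex = true; // required for the UV lookup

    const FVector Start = CollisionPos + CollisionNormal * 5.f;  // just off the surface
    const FVector End   = CollisionPos - CollisionNormal * 10.f; // through it

    FHitResult Hit;
    if (!World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        return false;
    }
    return UGameplayStatics::FindCollisionUV(Hit, /*UVChannel=*/0, OutUV);
}
```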


[gif: cummm.gif]
 

Velomous

Member
Jan 14, 2024
Had some fun with particles and materials. (Don't judge the ejaculation particles LOL. They're just plain white ovals atm).
That effect is really good! How do you do the semen leaking effect? Would you care to share some more in-depth details about the code you're using to do this?

Also, Niagara ain't so bad, just gotta learn some basics.

[spoilered: Niagara overview]
 

darkevilhum

Newbie
Sep 9, 2017
That effect is really good! How do you do the semen leaking effect? […]
Thanks for the overview, that's super helpful. Especially some of the events and their flow.

So I have an actor set up to function like a recorder. When I want to add some fluid, I give it the UV coordinates, and it places the Niagara emitter at those coordinates (relative to the actor and the scene capture component in that actor). Then it activates the emitter and captures the particle effect into the render target for the duration of the effect. So it's basically an animated render target, which is then just used in a material on the character.

Does that make sense? In Unity we would have done this in the past with a separate camera (before HDRP) that only captures particles and writes them to a render target, for similar effects.

I'm not doing any complicated unwrapping or such here, so if this lands close to a seam it will just get cut off. But I don't really think that's a huge issue, tbh, as this type of effect doesn't need to move much. I'll post the material here soonish; it's quite messy and needs some TLC and tidying up, but the material will make it make sense. The recorder flow is sketched below.
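In rough pseudo-C++, the recorder boils down to something like this (every name is illustrative, UCLASS/GENERATED_BODY boilerplate is omitted, and the UV-to-plane mapping assumes the capture looks straight down at a flat "UV plane"):

```cpp
#include "GameFramework/Actor.h"
#include "NiagaraComponent.h"
#include "Components/SceneCaptureComponent2D.h"

// Sketch: an ortho scene capture looks at a flat "UV plane"; the drip emitter
// is positioned on that plane at the requested UV and captured into the mask
// render target for the duration of the effect (tick assumed enabled).
class AFluidRecorder : public AActor
{
public:
    void AddFluidAtUV(const FVector2D& UV)
    {
        // Map 0-1 UVs onto the captured plane (PlaneSize world units across).
        FluidEmitter->SetRelativeLocation(
            FVector(UV.X * PlaneSize, UV.Y * PlaneSize, 0.f));
        FluidEmitter->Activate(/*bReset=*/true); // restart the drip effect
        RecordSecondsLeft = EffectDuration;      // Tick records while > 0
    }

    virtual void Tick(float Dt) override
    {
        Super::Tick(Dt);
        if (RecordSecondsLeft > 0.f)
        {
            Capture->CaptureScene(); // accumulate this frame into the mask RT
            RecordSecondsLeft -= Dt;
        }
    }

private:
    UNiagaraComponent* FluidEmitter = nullptr;
    USceneCaptureComponent2D* Capture = nullptr;
    float PlaneSize = 100.f;
    float EffectDuration = 3.f;
    float RecordSecondsLeft = 0.f;
};
```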
 

mikeblack

Newbie
Oct 10, 2017
Had some fun with particles and materials. (Don't judge the ejaculation particles LOL. They're just plain white ovals atm.) […]
Looks great. Worlds better than my failed Niagara experiments.

For Niagara there are some very basic setups for fluids, but no non-fountain versions.

[gif: test.gif]

Sadly, most of the promising plugins for the simulation route are for Unity. Like this one:
There is an Unreal version of it, but it lacks the important features (like skeletal mesh collision), and they don't seem to have much interest in adding them to Unreal.
 

Velomous

Member
Jan 14, 2024
Wow, that fluid stuff looks amazing, but it's worthless without skeletal mesh collision :(

Thanks for the overview, that's super helpful. Especially some of the events and their flow.
I see. (Except for how you place a Niagara emitter at UV coordinates; other than that I get it, it's not all that different from what I did in the semen experiments I talked about here.) There was an interesting thread about this on the Unreal forums; note particularly the posts by Raskolnikow (hGosling), who was 100% using this for his semen effect, the best I've ever seen, as he was trying to achieve the same things we are. Notably, that thread is mostly focused on C++, which we haven't touched on. I believe it has the first fully working function that can find the UV coordinate on a mesh from a hit result, and further down are various improvements and alternatives. It's a very interesting thread. You unfortunately cannot replicate this functionality without C++.

And I look forward to seeing your material, because I've wanted to create a semen material, but I'm just not much of a material expert, so it always comes out real bad for me :HideThePain:

And I could share some more info on Niagara; there seems to be a real lack of good free introductory tutorials for it online (I only learned because I paid for it, lol). The comparison between Spawn & Update versus BeginPlay & Event Tick was very helpful for me to feel like I had at least some idea of what was going on, which is why I shared that one.

This is a good one, though, if a bit dense. It's 300 seconds long, but it'll take you a bit more than 300 seconds to fully absorb the knowledge shared in it (you might need to watch it twice):

Also, as a follow-up (albeit with an outdated UI), there's one where just a single effect is created, but it shows the general Niagara effect-creation process rather well: you set a renderer and the number of particles you want and so on, then toss in new modules (often forces) and tweak them until the effect behaves the way you want. And although it didn't show Float from Curve, it showed Vector from Curve, which is the same thing but with three curves instead of one.

As a general VFX creation tip, use random range values a lot; they make your effects look more organic. Say you want to scale your sprites: instead of just setting the size from 50 units down to 10 units, set it to a range between 8 and 15 or something along those lines, so each particle can differ a bit in size. The same applies to forces: instead of static values, use a random range for basically as many things as you can, and only avoid it when you have a particular reason to.

And if you ever want to try your hand at the scratch pad (save this for when you already have a fairly decent understanding of how to create VFX; scratch pad is really only for very advanced particle effects), the thing most likely to trip you up is namespaces: you've got to put everything in the correct namespace or you'll get errors and nothing will work.
 

darkevilhum

Newbie
Sep 9, 2017
Wow, that fluid stuff looks amazing, but it's worthless without skeletal mesh collision :( […] the thing most likely to trip you up is namespaces: you've got to put everything in the correct namespace or you'll get errors and nothing will work.
I tried something vaguely like what's shown in Head Game, but it's too particle-heavy for me at the moment. If I understand it correctly, it basically uses ribbon particles that sort of stick to the hit skeletal mesh, plus a material to blend them? I'd love to try a system like that, but again, particles in UE will take me a while to get to grips with. Thanks for the resources; I'll have a crack at them soon, hopefully.

Actually, that reminds me: the semen particle test you did a while back looked pretty good too, in the sense that that method would follow gravity no matter what position the character is in, and it would have no problems with UV seams. How complex was that method? I remember you mentioning needing to write particles to every vertex of the mesh (couldn't that get expensive with a high-density mesh?). Could it be optimized to place particles only where needed? I suspect Head Game works similarly on that front.