Personally, I think hair strand effects should offload more to an AI-based computational implementation (via CPU processing), but I've felt that way ever since tessellation was introduced.
I.e. just a general comment about focused use of resources, although DLSS has come a long way.
I'm not sure what you mean by an AI computational implementation on the CPU, but effects like these are really only feasible through GPU compute like CUDA. Doing this on the CPU rather than via compute shaders would turn it into frames per minute instead of frames per second.
Compute shaders are generally the right choice when doing thousands or millions of small tasks, since they all run in parallel, especially when branching in the code can be minimized. They're also especially convenient here because the data is already on the graphics card for rendering, which uses a specialized line rasterizer to get a high-quality strand display.
On the CPU, the Jobs system and Burst compilation have given a big boost to parallelization, and I use them whenever applicable, but it's not even a competition when it comes to tasks like these.