Rasterization vs ray tracing vs path tracing

khumak

Engaged Member
Oct 2, 2017
3,432
3,474
I watched a fascinating video recently about the differences between the various types of rendering: why we're still using rasterization, why we would want to move to ray tracing or path tracing, etc. It's an old video but I didn't know any of this stuff.

It gives you a decent feel for how much more power we need before we can really get the full benefits of ray tracing or path tracing (hint: the 2080Ti isn't even remotely close). Here's a link to the video. I'll include a few timestamps for people who might want to skip ahead to certain sections, since it starts off with a history of rasterization before it gets to ray tracing.



00:00 - history of rasterization leading up to current generation rendering
10:00 - explanation of how ray tracing works and a bit of info about what it's good for (mostly shadows, and potentially somewhat better reflections than rasterization)
12:00 - explanation of what hybrid rendering is (this is what RTX does); basically it uses ray tracing only for shadows and reflections, and rasterization for everything else
- this is heavily dependent on a denoiser: because it only uses 1 or 2 rays per pixel, the initial image quality is absolutely horrible, and getting a good-looking image without the denoiser would take hundreds or even thousands of rays per pixel (see the toy sketch after this list for why)
15:00 - talks a bit about what we really want from full scene ray tracing (which RTX cannot do): things like soft shadows, color bleeding, caustics, reflections, etc.
16:00 - explains how path tracing works (RTX does not use this, it uses ray tracing) and talks a bit about why path tracing is superior to ray tracing if you have the horsepower for it
20:00 - shows an example of the Octane renderer on a couple of Voltas doing full scene path tracing plus denoising at less than 1 FPS, but with image quality far superior to anything we can get now
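
To make the rays-per-pixel point concrete, here's a toy Monte Carlo estimate of a single pixel: one diffuse surface lit by a made-up sky with a small, very bright sun patch. The scene, numbers, and function names are all invented for illustration (this is not the video's code); the point is that the noise only shrinks with the square root of the ray count, which is why a denoiser is so valuable at 1-2 rays per pixel.

Code:
import math, random

def sample_cosine_hemisphere():
    # Cosine-weighted direction about the surface normal (0, 0, 1)
    r1, r2 = random.random(), random.random()
    phi, sin_t = 2 * math.pi * r1, math.sqrt(r2)
    return (math.cos(phi) * sin_t, math.sin(phi) * sin_t, math.sqrt(1 - r2))

def sky(d):
    # Made-up environment light: a small, very bright sun patch near the zenith
    return 50.0 if d[2] > 0.95 else 0.2

def render_pixel(spp, albedo=0.8):
    # With cosine-weighted sampling, the Monte Carlo estimator for a diffuse
    # surface reduces to albedo * average(sky over the sampled ray directions)
    return albedo * sum(sky(sample_cosine_hemisphere()) for _ in range(spp)) / spp

for spp in (1, 2, 16, 256, 4096):
    # Render the SAME pixel five times: at low spp the values scatter wildly
    # (that scatter is the noise); at high spp they converge
    print(f"{spp:5d} spp:", ["%6.2f" % render_pixel(spp) for _ in range(5)])

At 1 spp the pixel comes out either blown out (~40) or nearly black (~0.16) depending on whether its one ray happened to hit the sun; that speckle is exactly what the denoiser has to clean up.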

So I can see why the hype is starting for ray tracing. It shows the promise of eventually getting a massive improvement in image quality over the best we have now. The problem is we just don't have the horsepower to do it in real time on current hardware. The 2080Ti is not even close to powerful enough. Something like 100x the power of the 2080Ti might be enough to do full scene path tracing with only a few samples per pixel plus denoising; doing it without a denoiser would need hardware thousands of times faster than the 2080Ti.

Discuss.
 

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
992
Just to clarify:
Path tracing is a form of ray tracing.
Rasterization is the step that turns projected 3D geometry into a 2D image of pixels.
What the video shows is called branched path tracing, and it covers its advantage over conventional single-path tracing, as well as the general benefits and downsides of ray tracing itself.
The only program I know of that does branched path tracing is Blender's Cycles engine. Branched path tracing makes glass look amazing and makes shadows much more realistic and vibrant/colorful. Not sure if Iray can achieve this.
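
To illustrate the branching idea with a toy example (the lobes, weights, and numbers here are invented for illustration, not Cycles code): a glass-like surface has both a reflection lobe and a transmission lobe. An ordinary path tracer randomly picks ONE lobe per camera ray, so neighboring pixels land on ~10 or ~1 and the glass speckles; a branched path tracer follows BOTH lobes from the first hit.

Code:
import random, statistics

def reflection_lobe():
    # Sharp reflection of a bright highlight: high value, little spread
    return 10.0 + random.uniform(-0.5, 0.5)

def transmission_lobe():
    # Light seen through the glass: low value, little spread
    return 1.0 + random.uniform(-0.2, 0.2)

def single_path():
    # Ordinary path tracing: pick ONE lobe at random per camera ray.
    # With 50/50 lobe weights and a 50/50 pick, the weights cancel, so the
    # estimator is just the sampled lobe's value: ~10 or ~1, i.e. speckle
    return reflection_lobe() if random.random() < 0.5 else transmission_lobe()

def branched():
    # Branched path tracing: follow BOTH lobes and combine them by weight,
    # which removes the lobe-selection noise entirely
    return 0.5 * reflection_lobe() + 0.5 * transmission_lobe()

# Equal secondary-ray budget per pixel: 128 single-path rays vs 64 branched pairs
pixels_sp = [statistics.mean(single_path() for _ in range(128)) for _ in range(500)]
pixels_br = [statistics.mean(branched() for _ in range(64)) for _ in range(500)]
print("true pixel value: 5.5")
print(f"single path spread: {statistics.stdev(pixels_sp):.3f}")  # roughly 0.4
print(f"branched spread:    {statistics.stdev(pixels_br):.3f}")  # roughly 0.02

That much lower spread per camera ray is why glass and shadows clean up so much faster with branching; as I understand it, Cycles branches per BSDF component and per light at the first bounce, but the principle is the same.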
 

camshaftjim

New Member
Jun 14, 2017
2
0
Appreciate that you're getting into these topics, but you're getting a bit mixed up on the point of both types of rendering, and it's very unlikely to change anytime soon. Graphics cards and CPUs are VERY good at specific types of math, and unfortunately ray tracing is a million miles away from what they're really good at. It's also a technology based entirely on professional use, where you can afford to have millions of dollars in hardware attacking the renders. That means there's zero incentive for anybody to optimize for speed at the cost of accuracy: do you think movie studios, or even small design studios (think of the guys doing concept car renders), would accept even a 1-2% dip in quality to halve their render time? Even the people selling the hardware don't want that to happen, because customers won't halve their render times, they'll just buy half the gear.

The bottom line is ray tracing will never be an end-user solution. It's good when used by talented people, but that also applies to the vast majority of real-time 3D now. Graphics cards just happen to be REALLY good at calculating triangles and that's about it; ask them to venture outside of that and they really struggle (there's a toy sketch of that triangle math below).
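
To give a feel for what "calculating triangles" means: the heart of rasterization is a dead-simple inside/outside test repeated for millions of pixels, which is exactly the kind of uniform arithmetic GPUs parallelize well. A minimal sketch (the triangle coordinates and ASCII output are made up for illustration):

Code:
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: positive if point P is on the inner side of edge A->B
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width=20, height=10):
    # Mark every pixel whose center lies on the inner side of all three edges
    (ax, ay), (bx, by), (cx, cy) = tri
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            inside = (edge(ax, ay, bx, by, px, py) >= 0 and
                      edge(bx, by, cx, cy, px, py) >= 0 and
                      edge(cx, cy, ax, ay, px, py) >= 0)
            row += "#" if inside else "."
        rows.append(row)
    return "\n".join(rows)

print(rasterize([(2, 1), (18, 4), (6, 9)]))

A real GPU runs that coverage test (plus depth and shading) in hardware across thousands of pixels at once, which is why triangles are cheap and everything else is a struggle.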

Edit: I was half asleep, all of the math below is wrong, but multiple PS3s running Linux were one of the first instances of real-time ray tracing. It's not a realistic way to do it either, just a cool thing to look up in your free time.

For whatever it's worth, PS3s can do real-time ray tracing and they came out in 2006. It took big-budget developers 2-3 years to even get PS3 games on par with 360 ones in most cases when they hit both platforms. The Cell processor in the PS3 is theoretically capable of 179 GFLOPS (25.6 more if you activate the 8th SPE, which is disabled by default and usually doesn't work). That's MORE power than a 1080 Ti has 12 years later. The PS3 had monster theoretical numbers, but you could never come close to them in the real world. They do work REALLY well for ray tracing though, because it's a very consistent data stream, which few things are. There are vids out there on it; it's really cool if you're into nerdy and useless things.
 
Last edited:

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
992
camshaftjim said:
Appreciate that you're getting into these topics, but you're getting a bit mixed up on the point of both types of rendering [...] For whatever it's worth, PS3s can do real-time ray tracing and they came out in 2006. [...] The Cell processor in the PS3 is theoretically capable of 179 GFLOPS (25.6 more if you activate the 8th SPE, which is disabled by default and usually doesn't work). That's MORE power than a 1080 Ti has 12 years later. [...]
I wasn't going to say too much about the first two paragraphs; you're not entirely wrong, but you certainly aren't an engineer or an academic in the field such as myself (or at least I have been for the past 5 years officially; I'm still pretty new). You've missed some key points, which isn't too much of a concern for me, but with that third paragraph you seriously misheard or misunderstood some of your reading over the years, like trying to do a report in only 3 nights.

The PS3 Cell/BE's theoretical 180 GFLOPS is not greater than, or even near, the 1080 Ti's tested and measured 10 to 11 TFLOPS (that is to say, 10,000 to 11,000 GFLOPS): over 50x the performance, at least at single-precision float (there is also double and half precision).

The Cell/BE was ahead of its time in terms of chiplet design; it was basically a Ryzen chip using APU chiplets instead of AMD's simpler CPU chiplets. AMD made the APU a commercial success, or at least commercially viable, and now they're doing the same thing with chiplet design, so the industry is starting to dip its toes into chiplets as an affordable answer to our growing need for more powerful computers. The Cell/BE also failed in terms of accessibility: it was a pain in the ass to code for. At best, its great floating-point ability would have made it useful to academics and researchers, until we started tinkering with GPUs a year later in 2007, which stomped all over the Cell/BE's one advantage.

I'm sure something like the Cell/BE, with everything we want or need in one package, will come back to market, but for now things are 'good enough' and affordable. Hardware is limited by its supporters, and with the way the coding world is evolving, we're going to have to rely on the industry slowly moving to chiplet designs as programmers get used to what's new and slowly adopt it. If programmers adopt, the apps get supported; then users move to the new hardware; then the new hardware makes money. We get stuck there for a while and then inch forward again sometime later.
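
Quick sanity check on that comparison, using only the figures quoted in this thread (the Cell figure assumes 7 usable SPEs at 25.6 GFLOPS each; the 1080 Ti figure is the commonly cited ~11.3 TFLOPS FP32, so treat both as ballpark numbers):

Code:
cell_spe_gflops = 25.6            # per SPE at 3.2 GHz, as quoted above
cell_total = 7 * cell_spe_gflops  # 7 usable SPEs -> 179.2 GFLOPS
gtx_1080_ti = 11_340.0            # ~11.3 TFLOPS FP32, expressed in GFLOPS

print(f"Cell/BE: {cell_total:.1f} GFLOPS")
print(f"1080 Ti: {gtx_1080_ti:.0f} GFLOPS")
print(f"ratio:   {gtx_1080_ti / cell_total:.0f}x")  # ~63x, so "over 50x" holds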

As for ray tracing, the RTX series does have dedicated ray-tracing cores alongside its tensor cores, all laid out together on a single die; not even a chiplet design could compete with that. But what I think would help, and what I am excited for, is the recent development of reliable memristor technology and gate-all-around transistors. The only issue is getting new manufacturing techniques to market: Intel has run aground, AMD is doing phenomenally, and Samsung at least has a 7-year plan to move to gate-all-around, but so far memristor chips are just proofs of concept that don't leave the lab.
 

camshaftjim

New Member
Jun 14, 2017
2
0
I slapped an edit on the PS3 paragraph; I was half asleep when looking up the facts on that part, which is why it's so far off. I just wanted to mention the PS3s doing ray tracing because at the time it was insane to see it done in real time, even if it had zero realistic potential (to the point where a single PS3 would make it impossible to do at any scale in a regular game). It was just a bit of flavour for a subject that doesn't have much of that, really.

Ray tracing is good because it's a fantastic solution to a nearly impossible problem; the problem is there are already amazing solutions once speed becomes a factor. I'm sure a huge part of why Nvidia's working on it is for their workstation cards, and it will take years upon years to properly hit consumer cards. You're obviously well versed on the hardware side, much more than me. It's also very clear why they need to do that, with their current consumer cards being basically the same as their professional cards, so John Doe and the smaller guys that used to HAVE to buy professional cards are now sticking to the much cheaper consumer stuff. Huge companies are still going to be using the Quadro stuff; it's still slightly better and, more importantly, has much better support from Nvidia behind it, but that's it.

Ray tracing will always have its niche, and it's incredibly popular/well known because of the quality it brings to the table, but I highly doubt it'll ever take over as the dominant technique for real time. If it does take over, it won't actually be ray tracing as we currently know it imo; it'll come under the name but will be cutting some major corners and will probably share more similarity with other techniques. Hell, the original DOOM does a lot of raycasting, but for deciding what the player can see, not for lighting, so there's plenty of room for technicalities sadly (toy example below).
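
For the technicality-minded, that kind of visibility raycasting is just marching a ray across a 2D tile map until it hits a wall. Here's a toy sketch of the classic grid-stepping (DDA) approach; the map and numbers are invented, not actual id Software code (and strictly speaking, per-column raycasting was more Wolfenstein 3D's trick, while DOOM leaned on BSP trees plus sight-line checks):

Code:
import math

MAP = [
    "#########",
    "#.......#",
    "#..##...#",
    "#.......#",
    "#########",
]

def cast(px, py, angle):
    # March from (px, py) along `angle`, stepping grid line to grid line (DDA),
    # and return the distance to the first wall tile hit
    dx, dy = math.cos(angle), math.sin(angle)
    map_x, map_y = int(px), int(py)
    delta_x = abs(1 / dx) if dx else float("inf")  # ray length per x-grid step
    delta_y = abs(1 / dy) if dy else float("inf")  # ray length per y-grid step
    step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
    side_x = (map_x + 1 - px) * delta_x if dx > 0 else (px - map_x) * delta_x
    side_y = (map_y + 1 - py) * delta_y if dy > 0 else (py - map_y) * delta_y
    while True:
        if side_x < side_y:  # next grid crossing is a vertical line
            dist, side_x, map_x = side_x, side_x + delta_x, map_x + step_x
        else:                # next grid crossing is a horizontal line
            dist, side_y, map_y = side_y, side_y + delta_y, map_y + step_y
        if MAP[map_y][map_x] == "#":
            return dist

for deg in range(0, 360, 45):
    print(f"{deg:3d} deg -> wall at {cast(4.5, 1.5, math.radians(deg)):.2f} tiles")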
 

Saki_Sliz

Well-Known Member
May 3, 2018
1,403
992
Ah ok, I was wondering about that.

Yeah, I don't know if we'll see raytracing become practical much less affordable in our lifetime, when there are so many other techniques that offer greater speed for a little less quality. Such as how Nvidea uses AI trained tensor cores to filter out their really rough shadow only raytrace stuff. I know daz is also using AI in their beta build to help with filtering and Blender 2.8.1 has an intel implementation of a more modern AI that does the same thing. But it just maybe a matter of how much computational power we can throw at the problem. We've been stuck at the 5Ghz limit for almost a decade now, morre's law is dead, and we are starting to have to innovate again to get better technology, so we just need to get over the development hump. That is why I'm excited to see AMD making a recovering and pushing out new competitive products, it looks like Intel's development team is going to be behind AMD for almost a decade until they can work out the issues they have been having with 7nm production. The only other thing I worry about is that we are due to have another recession in a year or to, just based on patterns often displayed with the federal reserves, so that will likely slow things down for another 5 or 7 years.