How does selective focus get simulated in 3D rendering software?
Here is an example of a 3D rendering using selective focus: https://www.shutterstock.com/es/image-illustration/team-building-text-on-blue-background-149323673 Selective focus based on depth can be done with some 3D rendering software, but I want to know what underlying algorithms do the trick. A brute-force approach would be to render the scene from numerous camera positions, each aimed at a common focal point, and average the results (see the sketch below). This is easy to understand, but 3D rendering is often slow enough already without looping it 100 times, or however many passes are needed to hide artifacts. What more efficient ways are there?
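To make the brute-force idea concrete, here is a rough sketch of what I mean. `render_scene` is a placeholder for whatever renderer is being used, and for simplicity the lens offsets are applied directly in the camera's x/y plane:

```python
# Sketch of the brute-force "accumulation" approach: render the scene many
# times with the camera jittered across a lens-sized disk, all renders aimed
# at the same focal point, then average the frames.
import numpy as np

def jitter_on_disk(radius, rng):
    """Pick a uniform random point on a disk of the given radius (the lens aperture)."""
    r = radius * np.sqrt(rng.random())
    theta = 2.0 * np.pi * rng.random()
    return np.array([r * np.cos(theta), r * np.sin(theta), 0.0])

def brute_force_dof(render_scene, camera_pos, focal_point, aperture_radius,
                    num_passes=100, seed=0):
    """Average many renders whose cameras are offset on the aperture disk
    but all look at the same focal point.

    render_scene is a hypothetical callback standing in for the renderer;
    it is assumed to return an image as a NumPy array."""
    rng = np.random.default_rng(seed)
    accum = None
    for _ in range(num_passes):
        # Assumes the camera looks roughly along -z, so the disk lies in the lens plane.
        offset = jitter_on_disk(aperture_radius, rng)
        image = render_scene(eye=camera_pos + offset, look_at=focal_point)
        accum = image.astype(np.float64) if accum is None else accum + image
    return accum / num_passes
```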
2 Answers
I eventually found this page which answers my question with a few solutions:
https://developer.nvidia.com/sites/all/modules/custom/gpugems/books/GPUGems/gpugems_ch23.html
The "Layered Depth of Field" is what I used. Stacked layers could be blurred in real time at full screen when I tested on a couple laptops and smart phones. This looked like one of the best real-time solutions.
"Ray-Traced Depth of Field" looks like a great solution non-real-time 3D rendering. Tracing rays from uniformly distributed random locations on a simulated lens also mixes nicely with how motion blur can be achieved in a 3D rendering algorithm. To include motion blur, just sample the rays around space on the lens and the time interval for the simulated shutter opening. I will likely experiment with this technique for depth of field and motion blur soon.
That page listed several other techniques but the "Ray-Traced Depth of Field" looks the most interesting.
Just a little update. I have used "Ray-Traced Depth of Field" and it works very well.
It creates noise, but sampling 16 or more rays per pixel roughly brings the colour variation to under 1/16th of the colour range. Sampling multiple rays per pixel with slightly different directions helps with anti-aliasing too. The difference between a colour like #000000 and #010101 is subtle to the eye but noticeable. You could sample over 100 rays per pixel to get an essentially noiseless image, or apply some 2D denoising as a post-process; the 2D denoising would be faster but sacrifice some quality.
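As a quick numeric illustration of the noise-versus-sample-count trade-off, the snippet below averages random stand-in "ray colours" per pixel and shows how the spread of the averaged result shrinks as the ray count grows. The uniform random samples are only a stand-in for whatever a real ray tracer would return:

```python
# Each "pixel" averages n_rays noisy samples; the standard deviation of the
# averaged pixel values shrinks as the number of rays per pixel increases.
import numpy as np

rng = np.random.default_rng(0)

for n_rays in (1, 16, 100):
    # Simulate many pixels, each averaging n_rays random samples in [0, 1].
    samples = rng.uniform(0.0, 1.0, size=(10000, n_rays))
    pixel_values = samples.mean(axis=1)
    print(f"{n_rays:4d} rays/pixel -> spread (std dev) {pixel_values.std():.3f} of the colour range")
```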