Managing Poser Scenes (07. Rendering Quality)

Since the early days of cinematography, it has been well understood that the human eye cannot discriminate individual images once they pass by at a rate of more than about 18 per second. So by settling on a 24 frames per second rate for film (or 25 (PAL) / 30 (NTSC) for video and television), resources were saved without any meaningful loss of quality, as far as the visual experience of the spectators is concerned.

Next to chopping a continuous stream of light over time, we can chop it over space and divide an image into pixels. Again, this can be done without any meaningful loss of quality if we take the limitations of our eyes into consideration.

Very good and very healthy eyes have a resolving ability of about 1 arc-second; one can’t do better than that. A full circle makes 360 degrees, 1/60th of a degree makes an arc-minute, and 1/60th of an arc-minute makes an arc-second. When I’m looking forward in a relaxed way, I can take in about 70° really well, while about 150% of that (105°) covers the entire range from left to right. This 70° makes 70 × 60 × 60 = 252,000 arc-seconds (378,000 for the 105° field), so it makes no sense to produce more pixels than that across an image which spans our visual range from left to right.
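For those who want to check these sums, here is a minimal Python sketch (purely illustrative; the 70° field and the 150% factor are the assumptions stated above):

```python
# Pixel budget of a (very good) human eye resolving 1 arc-second.
ARCSEC_PER_DEGREE = 60 * 60            # 60 arc-minutes of 60 arc-seconds each

relaxed_fov_deg = 70                   # comfortable forward field of view
full_fov_deg = relaxed_fov_deg * 1.5   # about 105 degrees, full left-to-right range

print(relaxed_fov_deg * ARCSEC_PER_DEGREE)    # 252000 arc-seconds (pixels)
print(int(full_fov_deg * ARCSEC_PER_DEGREE))  # 378000 arc-seconds (pixels)
```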

Unfortunately, this is hardly a relief, as I do not intend to render images with a width or height of that many pixels. Fortunately, our eyes and brain come to our aid. In the first place, we don’t all have “very good and very healthy” eyes; they have aged over time, just as we have. In the second place, the extremes occur with our pupils wide open, which is not the case when viewing images under normal lighting conditions. In cinema and in front of a monitor (or television) it’s even worse: the image radiates light in a darker surrounding, closing our pupils even further.

As a result, a standard commonly used in research on visual quality takes 1 arc-minute (and not the second) as a defensible standard for looking at images on TV or in cinema. Then the relaxed viewing range of 70° requires just 70 × 60 = 4200 pixels, while the full range (for surround and IMAX, say) requires 150% of that: a 6300 pixel wide image.

This can be compared with analog film. IMAX is shot on 70 mm (2.74″) film, and a film scan can produce about 3000 pixels/inch before hitting the film grain limits, so an IMAX frame can be sampled to at most 2.74 × 3000 = 8220 pixels and fills our visual range completely. In other words, for a normal relaxed view 4000 × 3000 does the job, while say 6000 × 3000 does the job for the full left-to-right range, for anything monitor, TV or cinema.
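The same sums at the 1 arc-minute standard, together with the IMAX film-scan limit, again as a small illustrative Python sketch:

```python
# Pixel requirements at the 1 arc-minute viewing standard,
# plus the sampling limit of a 70 mm (2.74 inch) IMAX film frame.
ARCMIN_PER_DEGREE = 60

relaxed_fov_deg = 70
full_fov_deg = relaxed_fov_deg * 1.5             # 105 degrees

print(relaxed_fov_deg * ARCMIN_PER_DEGREE)       # 4200 pixels, relaxed view
print(int(full_fov_deg * ARCMIN_PER_DEGREE))     # 6300 pixels, full range

film_width_inch = 2.74                           # 70 mm IMAX frame
scan_limit_ppi = 3000                            # pixels/inch before film grain
print(round(film_width_inch * scan_limit_ppi))   # 8220 pixels at most
```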

This is reflected in current standards for pro cameras:

Standard             Resolution    Aspect Ratio   Pixels
Academy 4K           3656 × 2664   1.37:1         9,739,584
Digital Cinema 4K    4096 × 1714   2.39:1         7,020,544
                     3996 × 2160   1.85:1         8,631,360
Academy 2K           1828 × 1332   1.37:1         2,434,896
Digital Cinema 2K    2048 × 858    2.39:1         1,757,184
                     1998 × 1080   1.85:1         2,157,840

For print, things are not that different. An opened high-quality art magazine with a size of 16″ × 11″ (two A4 pages) printed at 300 dpi requires a 4800 × 3300 pixel image, which brings us into the same range as the normal and full views on a monitor, with our eyes as the limiting factor.

Of course one can argue that a print at A0 poster format (44″ × 32″) might require 13,200 × 9,600 pixels for the same quality, but that assumes people look at it from the same 8″ distance they use to read the magazine. From that distance, they can never see the poster as a whole. Hence the question is: what quality do they want, what quality do you want, what quality does your client want?

I can also reverse the argument: in order to view the poster in the same full, relaxed way we view an opened magazine, this A0 poster, which is 2.75 times as long and as wide, should be viewed from a 2.75 times larger distance, hence from 22″ away. In that case, a printing resolution of 300/2.75 ≈ 110 (say 100) dpi will do perfectly.
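This reciprocal relation between viewing distance and required print resolution is easy to formalize. The helper below is a hypothetical sketch of the reasoning above, not a formula from any print standard; the function and parameter names are mine:

```python
def required_dpi(reference_dpi=300.0, reference_distance_inch=8.0,
                 viewing_distance_inch=8.0):
    """Print resolution needed for the same perceived quality,
    scaling down linearly as the viewing distance grows."""
    return reference_dpi * reference_distance_inch / viewing_distance_inch

# A magazine read from 8 inches needs the full 300 dpi:
print(required_dpi(viewing_distance_inch=8))    # 300.0
# The same content as an A0 poster, viewed from 2.75x farther (22 inches):
print(required_dpi(viewing_distance_inch=22))   # ~109 dpi, say 100
```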

Thus far, I’ve considered our eyes as the limiting factor. Of course, they are the ultimate receptors and there is no need to do any better, so this presented an upper limit.

On top of that, I can consider additional information about the presentation of our results. For instance, I might know beforehand that the images are never going to be printed at any larger format than A4, which halves the required image size (in pixels, compared to the magazine centerfold) to 3300 × 2400 without reducing its visual quality.

I also might know beforehand that the images are never going to be viewed on TV sets or monitors with a display larger than say 2000 × 1000 (wide view), which reduces the required image size to 1/3rd in width and 1/3rd in height, hence 1/9th of the pixels to be rendered, compared to the full wide view of 6000 × 3000 mentioned above for anything monitor or cinema. That might bring me the required quality in just 1/9th of the time as well, but at a reduced visual quality: the resolution of our eyes is simply better than that of the output device, and I might want to compensate for that.
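A short sketch of that reduction, using the figures above (the variable names are mine, for illustration only):

```python
# Render-size reduction when the output device is the bottleneck.
full_view = (6000, 3000)   # eye-limited wide view, from above
device = (2000, 1000)      # largest display the image will be shown on

w_ratio = device[0] / full_view[0]   # 1/3 of the width
h_ratio = device[1] / full_view[1]   # 1/3 of the height
pixel_ratio = w_ratio * h_ratio      # 1/9 of the pixels to render

print(f"pixels to render: {pixel_ratio:.2%} of the eye-limited image")
```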

Quality Reduction Strategies

Render size

The main road to getting quality results efficiently is not to render at a larger size than needed. An image twice as large, meaning twice as wide and twice as high, takes four times longer to render, or even more if other resources (like memory) become a new bottleneck during the process.
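That quadratic relation between linear size and render time, as a one-line rule of thumb (a rough model that ignores the memory bottleneck just mentioned):

```python
def relative_render_time(scale):
    """Rough render-time multiplier when both width and height
    are scaled by `scale`; time tracks the pixel count."""
    return scale ** 2

print(relative_render_time(2))   # 4x: twice as wide and twice as high
```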

Render time

As said at the beginning of this chapter, not much more can be done when the render time depends on algorithm, hardware and image size alone. This, for instance, is the case with the so-called “unbiased” renderers like LuxRender, which mimic the natural behavior of light in a scene as closely as possible. More and faster CPU cores, and sometimes more and faster GPU cores as well, will speed up the result, but that’s about it.

Let me take the watch scene (luxtime.lxs demo file) on my machine, at 800 × 600 size. The generic quality measure (say Q) is calculated by multiplying the S/p number (samples per pixel, gradually increasing over time) by the (more or less constant) Efficiency percentage, which refers to the amount of lighting available.

  • Draft quality, Q=500, after 4 mins: gives a nice impression of things to come, still noisy overall.
  • Basic quality, Q=1000, after 8 mins: well-lit areas look good already; shadows and reflections are still quite noisy.
  • Decent quality, Q=1500, after 12 mins: well-lit areas are fine now.
  • Good quality, Q=2000, after 16 mins: shadows are noisy, but the reflections of them look nice already.
  • Very good quality, Q=3000, after 24 mins: good details in shadow but still a bit noisy, like a medium-res digital camera; shadows in reflections look fine now.
  • Sublime quality, Q=4000, after 32 mins: all looks fine, hi-res DSLR camera quality (at 800 × 600, that is).

From the rendering statistics, render times can at least be roughly predicted for larger render results. LuxRender reports for the watch scene, given my machine and the selected (bidirectional) algorithm, a lighting efficiency E of 1100% and a speed X of 85 kSamples/second. These parameters can be derived quickly from the statistics of a small-sized render (200 × 150 will do, actually, but I used 800 × 600 instead).

From the formula

 Q = 1000 × X × E × T / (W × H)

for image width W and height H (in pixels), render time T (in seconds), speed X (in kSamples/second) and efficiency E (1100% entering as 11.00), I get

 4000 = 1000 × 85 × 11.00 × T / (800 × 600), so T ≈ 2053 sec = 34 min 13 sec

And from that I can infer that a 4000 × 3000 result, 5 times wider and 5 times higher, will take 5 × 5 = 25 times as long, that is: half a day, as long as memory handling does not hog the speed. Furthermore, quality just depends on lighting. Lighting levels that are too low or too unbalanced cause noise, as on analog film or digital captures, and removing that noise requires more rendering time. Specular levels that are too high (unnatural) cause ‘fireflies’ which don’t go away with longer rendering. I just have to set my lighting properly, and test for it in small-sized, brief test runs.
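Based on the Q formula above, render times can be predicted for any target quality and image size. The Python sketch below simply restates that formula solved for T, with the watch-scene numbers filled in as defaults (the function and parameter names are mine, for illustration):

```python
def render_time_sec(q, width, height, x_ksamples=85.0, efficiency=11.0):
    """Solve Q = 1000 * X * E * T / (W * H) for T, in seconds.

    q           : target quality (samples/pixel times efficiency)
    x_ksamples  : render speed X in kSamples/second
    efficiency  : lighting efficiency E, with 1100% entered as 11.0
    """
    return q * width * height / (1000.0 * x_ksamples * efficiency)

# Sublime quality (Q=4000) at 800 x 600: about 2053 sec, 34 minutes.
print(render_time_sec(4000, 800, 600))
# The same quality at 4000 x 3000: 25 times as long, about half a day.
print(render_time_sec(4000, 4000, 3000) / 3600, "hours")
```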
