Canon EOS R3’s OVF Simulation: What it is and why it matters

Canon is the first company we’ve seen to take advantage of the greater dynamic range of the latest OLED viewfinders. Almost paradoxically, OVF Simulation mode, a new feature on the recently announced EOS R3, delivers a more advanced, more naturalistic viewfinder experience than previous mirrorless cameras have offered, and it does so as a result of Canon trying to make users of an older technology (the optical finder) feel more at home. Let’s take a closer look.

Canon essentially said ‘We have an HDR display that can better represent the world… let’s make use of it!’

Canon did what, arguably, the first manufacturer to put a 1000-nit-capable OLED viewfinder in its camera should’ve done: gone back to the drawing board and said ‘we have a high dynamic range display that can better represent the world the photographer is trying to capture: let’s make use of it!’

Essentially, OVF Simulation Mode is a tool that exploits the improved dynamic range offered by modern OLED displays to allow photographers to better visualize the scene they’re about to shoot.

This is how the electronic viewfinder on the EOS R3 looks with OVF Simulation Mode on vs. off (shot with an iPhone):

It may not look like much at first, but in practice the distinction can be very useful. Keep in mind, too, that the difference is more dramatic in person: the iPhone capture, tone-mapped to the standard dynamic range output you’re viewing it on, necessarily limits the comparison. With OVF Simulation on, the image through the EVF appears closer to what you might see through an optical finder, for a few reasons:

  1. Exposure simulation is disabled, with the preview based instead on what the camera considers a reasonable representation of the scene.
  2. Standard image processing, including the S-shaped contrast curve associated with camera JPEGs, is bypassed.
  3. Shadows appear a bit brighter.
  4. Highlights also appear brighter than other tones in the scene, well separated from midtones and shadows, just as they do in reality.

The description Canon provides when enabling ‘OVF simulation view assist’ in the menus is telling: ‘Natural view at viewfinder or screen display when photo shooting. Display appearance will differ from your shots.’

In OVF Simulation Mode, the camera constantly – and automatically – adjusts the overall brightness of the scene, so you always have a usable preview. It’s comfortingly familiar to anyone coming from a DSLR, and not totally unlike the EVF experience in Program AE modes, where autoexposure constantly adjusts to match the scene. In other words, even with exposure simulation you usually get a reasonable preview of your scene, as long as you don’t have dramatic exposure compensation values dialed in.

What really sets OVF Simulation Mode apart from a standard EVF experience, then, is the gamma curve, or tone curve, employed to translate scene brightness to display brightness. Below you’ll see – in red – an S-shaped gamma curve applied by a typical JPEG processing engine to boost global contrast. The same sort of curve is applied to the standard EVF preview on most cameras (the exact shape will differ, and will also be affected by any dynamic range compensation or gamma modes), because the camera is trying to show you what its JPEG will look like.

And that means somewhat darkened shadows and brightened light tones. A roll-off at the brightest tones allows for smoother transitions from bright to clipped pixels, but also comes at the cost of tones being ‘pushed’ into one another: if you brighten one section of a cloud, you run the risk of it becoming just as bright as the adjacent section lit just a bit brighter by the sun. The end effect is less separation of tones in bright regions of the scene, something addressed by OVF Simulation Mode and by HDR display in general. The take-home here, though, is that standard EVFs present a punchier, more contrasty representation of the scene in front of you, due to image processing meant to replicate the output you’ll get when reviewing the JPEGs.

OVF Simulation Mode uses a more linear curve to relate scene brightness to display brightness, indicated in blue, below. It’s closer to what a Raw file might look like when you first pull it into Adobe Camera Raw and select the lower-contrast ‘Adobe Standard’ profile and a ‘linear’ tone curve. Our rollover above compares a camera JPEG vs. an Adobe Standard rendering of the same image from Raw, so you can appreciate the difference between the contrasty S-curve and the more linear output. For the geeks amongst you, hover your mouse over ‘Shadows’ below the graph, and you’ll see that OVF Simulation mode renders scene shadows brighter on the display than standard EVF mode does. This makes it easier to visualize dark portions of your scene, much as your eyes naturally discern detail in the darks when looking through an optical finder, thanks to the expansive dynamic range of the human visual system.
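For the even geekier, here’s a minimal numerical sketch of the same idea, in Python. The two functions are illustrative stand-ins (a smoothstep for the contrasty S-curve, a simple identity mapping for the more linear curve), not Canon’s actual processing, and the tone values live on the normalized, perceptually scaled axes of the graph:

    # Illustrative stand-in curves, not Canon's actual processing.
    # 'x' is a scene tone on a normalized, perceptually scaled 0-1 axis;
    # each function returns the display brightness that tone is shown at.

    def standard_evf(x):
        # S-shaped contrast curve (a smoothstep): darkens shadows, brightens
        # light tones, and matches the linear curve at black, at the midtone
        # crossover point and at peak white.
        return x * x * (3.0 - 2.0 * x)

    def ovf_simulation(x):
        # More linear rendering: tones keep their relative placement.
        return x

    for label, x in [('shadow', 0.2), ('midtone', 0.5),
                     ('light tone', 0.8), ('peak highlight', 1.0)]:
        print(f"{label:14s} scene {x:.2f} -> "
              f"standard EVF {standard_evf(x):.2f}, OVF Sim {ovf_simulation(x):.2f}")

Run it and you’ll see the shadow rendered brighter and the light tone rendered darker under the more linear mapping, with the midtone and peak highlight unchanged: the same pattern the graph’s rollovers illustrate.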

On mobile, tap your finger on ‘Shadows’ or ‘Highlights’ above to see how scene shadows and highlights are rendered differently on the display in standard EVF mode vs. OVF Simulation mode.

Now, hover your mouse over the ‘Highlights’ text below the graph, and you’ll note something quite interesting: in OVF simulation mode, while strict midtones are rendered similarly (the point at which the red and blue curves cross), brighter midtones and light tones are rendered darker compared to the standard EVF mode. Go back to our slider earlier and you’ll see this – the blue skies around the central building get darker. However, the brightest scene highlights are still rendered at the same display brightness.

This is what creates the impression that very bright objects, like white clouds, are in fact much brighter than midtones – like the blue skies surrounding them. It comes down to the simple fact that the more linear processing better preserves the relationships between tones: rather than brightening the blue skies to the point where they’re not much dimmer than the white cloud pictured, it keeps them roughly as dark, relative to that cloud, as they were in the scene. It’s not the best example we could imagine (we had very limited time with the R3), but hopefully you get the idea.
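To put some hypothetical numbers on that (these are assumed, illustrative figures, not measurements from the R3): suppose the cloud is two stops, or 4x, brighter than the sky in the actual scene.

    # Assumed figures, purely to illustrate the 'separation' point above.
    cloud_scene, sky_scene = 1.0, 0.25   # cloud is 2 stops (4x) brighter than sky

    # Punchy SDR-style rendering: the S-curve lifts the sky and rolls the cloud
    # off toward peak white, so on a ~100-nit-class preview both might land at,
    # say, ~100 and ~70 nits (assumed values).
    sdr_cloud_nits, sdr_sky_nits = 100.0, 70.0
    print(f"S-curve preview: cloud ~{sdr_cloud_nits / sdr_sky_nits:.1f}x brighter than sky")

    # More linear rendering on a bright OLED: the 4x scene ratio can be carried
    # through to the display, e.g. cloud at ~1000 nits, sky at ~250 nits.
    oled_peak_nits = 1000.0
    print(f"More linear preview: cloud ~{cloud_scene / sky_scene:.1f}x brighter than sky "
          f"({oled_peak_nits * cloud_scene:.0f} vs {oled_peak_nits * sky_scene:.0f} nits)")

A roughly 1.4x brightness difference reads as ‘slightly brighter’; a 4x difference reads as a genuinely bright cloud, which is much closer to how the scene itself looks.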

This is only made possible due to the large output dynamic range of the EVF itself, thanks to the high brightness and contrast capability of the OLED panel. Were the panel not this bright, darkening those midtones and lights to separate them from the brightest highlights would not have been feasible, as it would’ve rendered the overall image too dark. To put this in context, though, many high-end cameras have such capable OLED EVFs, but that doesn’t mean they’re utilizing them intelligently in this manner.
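A quick back-of-the-envelope calculation shows how much room that extra brightness buys. Taking the 1000-nit figure mentioned earlier and a nominal 100-nit SDR reference white:

    import math

    # Assumed figures: the ~1000-nit-capable OLED panel mentioned above,
    # vs. a nominal ~100-nit SDR reference white.
    oled_peak_nits = 1000
    sdr_reference_white_nits = 100

    extra_stops = math.log2(oled_peak_nits / sdr_reference_white_nits)
    print(f"~{extra_stops:.1f} extra stops of highlight headroom")   # ~3.3 stops

That’s roughly three and a third stops of headroom above a ‘normal’ white in which to place clouds, speculars and other bright highlights, without having to drag the midtones up alongside them.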

Why is this important?

What Canon has done here is smart, and it’s a first in the dedicated camera industry. The company has essentially said: ‘we’ve got a high dynamic range, wide color gamut display in our camera that gets closer to representing the luminance and color range of the real world; why not let it do precisely that, rather than limit it to representing only the limited dynamic range our JPEGs reproduce?’ Every other manufacturer has simply been feeding its typical JPEG output (intended for standard dynamic range, or SDR, displays and print) to the finder and stretching it across the HDR display. That may make things look punchy, but it doesn’t move the needle in terms of utilizing new technologies to better represent what we see with our own eyes, in an end-to-end imaging system. Cinema and broadcast have been thinking about this for years, and the best Dolby Vision HDR movies and nature documentaries, viewed on OLED displays capable of luminosities ten times higher than those of TVs of yesteryear, are a sight to behold, appearing lifelike. No doubt Canon relied on its expertise in the cinema world and its own HDR display technology to bring this implementation to the EOS R3.

To put it simply, OVF Simulation Mode on the R3 gives you a better experience than your typical EVF because the output looks more realistic. HDR displays like OLED EVFs are able to reproduce scenes in a manner that better preserves the tonal relationships we see in the real world. That means less of a need to either (1) compress large scene dynamic ranges into a small, standard dynamic range (SDR) space, leading to a flat image, or (2) discard a lot of that dynamic range in order to yield a punchy image, as most JPEG processing does.

OLED displays are capable of brighter whites and darker blacks, so they’re better able to approximate the tonal relationships between objects in the real world. Brights appear brighter than lights and far brighter than midtones, whereas in SDR they may not appear all that much brighter than midtones and lights, due to the restricted brightness and dynamic range of SDR displays. Furthermore, HDR displays and formats are happy to render shadows and midtones relatively dark, since (1) they’re capable of nuanced shadow detail thanks to higher bit-depth and (2) a higher overall display brightness means that dark shadow detail tends to remain visible.
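As a rough, hypothetical illustration of point (1), you can count how many code values each type of signal devotes to deep shadows, say everything below 1 nit. The sketch below assumes an 8-bit, gamma 2.2 SDR signal driving a 100-nit display versus a 10-bit PQ (SMPTE ST 2084) signal, with full-range code values in both cases:

    # Rough shadow-quantization comparison under the stated assumptions.

    def sdr_nits(code, peak=100.0, bits=8, gamma=2.2):
        # 8-bit gamma-2.2 signal on a 100-nit display: code value -> nits.
        return peak * (code / (2 ** bits - 1)) ** gamma

    def pq_code(nits, bits=10):
        # Inverse PQ EOTF (SMPTE ST 2084): absolute luminance in nits -> code value.
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        y = (nits / 10000.0) ** m1
        return ((c1 + c2 * y) / (1 + c3 * y)) ** m2 * (2 ** bits - 1)

    # Code values devoted to tones below 1 nit (deep shadow):
    sdr_steps = sum(1 for c in range(256) if sdr_nits(c) < 1.0)
    pq_steps = int(pq_code(1.0))
    print(sdr_steps, "of 256 SDR codes vs roughly", pq_steps, "of 1024 PQ codes")
    # -> about 32 of 256 vs about 153 of 1024: several times more gradations
    #    available for the deepest shadows.

Under those assumptions the 10-bit PQ signal has several times as many steps available below 1 nit as the 8-bit SDR signal, which is why dark tones can be held dark without falling apart into banding.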

In this mode, Canon decided to forgo the traditional route of presenting the (simulated) JPEG output, rife with decades of limitations, and instead chose to present a more linear processing of scene light, and possibly a slightly larger range of tones as well. It realized that in doing so, thanks to the high peak luminance and large output dynamic range of OLED, it didn’t run the risk of creating a dark and flat preview. It’s great to see a company thinking a little outside the box and putting new technology and standards to good use. But… psst… Canon… now that we have HDR preview when shooting, how about HDR playback of your very own HDR PQ files in the OLED viewfinder?
