Dynamic Range (I think)

Neil

A big challenge for me in taking pictures is avoiding blown-out highlights and underexposed portions in the same picture, as in a sun-dappled forest at noon.

Or from a peak overlooking several mtn. ranges and valleys on a bright day.

My understanding is that dynamic range is a measure of a given camera's ability to properly expose extremes of lighting.

I want to make sure I have it right, and if I do, how do I go about looking up and comparing cameras' DRs?
 
You've got it. However, I don't think I've ever seen a manufacturer or reviewer actually provide numbers for dynamic range.

Rule of thumb is larger pixels can capture higher dynamic range, so digital cameras with large sensors should do better than smaller ones with a similar pixel count.

If you want to start an argument, go on a photo website and ask whether you get better dynamic range from film or from your digital camera. Last I checked, black-and-white negative (not slide) film was thought to be slightly better in terms of *useful* dynamic range (ie, with "acceptable" noise).

Bottom line, though, is that the differences are small.

You are probably better off looking into graduated neutral-density filters...
Also, try using a tripod and taking multiple exposures, then combining them digitally. A lazy-man's version of this is to intentionally underexpose (to avoid "clipping" the highlights) then selectively brighten the shadows.
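For what it's worth, here is a rough Python/numpy sketch of those two ideas (the function names and the 0..1 float scaling are just for illustration, not anyone's actual workflow; it assumes the frames are already aligned, e.g. shot from a tripod):

```python
import numpy as np

def blend_exposures(dark, bright, threshold=0.9):
    """Very naive exposure blend for two aligned frames, both scaled to
    0..1 floats: use the darker frame wherever the brighter frame
    approaches clipping, and the brighter frame in the shadows."""
    w = np.clip((threshold - bright) / threshold, 0.0, 1.0)  # 1 in shadows, 0 near clipping
    return w * bright + (1.0 - w) * dark

def lift_shadows(img, amount=0.5):
    """The 'lazy-man' version: a gamma-like lift that brightens the dark
    tones of a single underexposed frame much more than the highlights."""
    return np.clip(img, 0.0, 1.0) ** (1.0 - amount)
```

Dedicated HDR or exposure-fusion tools do this far more carefully, but the idea is the same: keep highlight detail from the darker frame and shadow detail from the brighter one.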
 
For film, negatives (B&W) have more dynamic range than positives (prints or slides). Transmissive media (film, positive or negative) tend to have more dynamic range than reflective media (prints).

I think digital tends to have a little less dynamic range than film. Film tends to have soft cutoffs; that is, you tend to get some detail (at reduced contrast) at the edge of saturation or full black. Digital tends to have a hard saturation (i.e., a pixel is either saturated or not, and if saturated it has no detail), and at the black end the image becomes noisy or the quantization levels (posterization) become visible. Ultimately, if the charge on a sensor pixel is below the first quantization level (even if there is some useful info in the pixel), it will be recorded as full black. Some of the more recent DSLRs are going from 12-bit to 14-bit A/D converters to reduce the quantization effects.
http://www.kenrockwell.com/tech/dynamic-range.htm
http://www.clarkvision.com/imagedetail/film.vs.digital.summary1.html
http://www.clarkvision.com/imagedetail/dynamicrange2/
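To make the quantization point concrete, here is a small hypothetical Python model of an N-bit linear A/D converter; the numbers are made up, but they show why deep-shadow detail below the first quantization step comes out as full black and why 14 bits helps a little over 12:

```python
import numpy as np

def quantize(signal, bits):
    """Model an N-bit linear A/D converter on a 0..1 'sensor charge'.
    Anything below the first step (1 / 2**bits) is recorded as full black."""
    levels = 2 ** bits
    return np.floor(signal * levels) / levels

shadow = np.linspace(0.0, 0.001, 9)   # a faint gradient, ~0.1% of full scale
print(quantize(shadow, 12))           # coarse steps; the lowest values all read 0
print(quantize(shadow, 14))           # finer steps keep a little more shadow detail
```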

The in-camera histogram is a big help here--a spike on the right edge indicates saturated pixels and a spike on the left edge indicates full-black pixels.
http://www.kenrockwell.com/tech/histograms.htm
http://www.kenrockwell.com/tech/yrgb.htm
http://www.luminous-landscape.com/tutorials/understanding-series/understanding-histograms.shtml
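If you'd rather check clipping numerically than by eyeballing the histogram, a hedged little sketch like this (assuming an ordinary 8-bit image already loaded as a numpy array) counts the pixels piled up at either edge:

```python
import numpy as np

def clipping_report(img8, tol=0.01):
    """Report the fraction of pixels at the edges of an 8-bit image's
    histogram.  A large fraction at 255 means blown highlights;
    a large fraction at 0 means blocked-up shadows."""
    total = img8.size
    blacks = np.count_nonzero(img8 == 0) / total
    whites = np.count_nonzero(img8 == 255) / total
    return {
        "black_fraction": blacks,
        "white_fraction": whites,
        "shadow_clipping": blacks > tol,
        "highlight_clipping": whites > tol,
    }
```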

Monitors and prints have the least amount of dynamic range. (A monitor should be viewed in the dark to prevent reflected light from washing out the blacks.) One can process an image on a computer to enhance the apparent contrast while reducing the dynamic range to improve the ability of a display or print to display it.

One technique for taking high-dynamic-range images with a "normal" camera is to take several shots of the exact same scene (tripod required) at different exposures and digitally combine them after the fact.

FWIW, medical X-rays are high-dynamic-range images, which is one reason traditional film X-rays are viewed by transmission through the negative.

Doug
 
Some of the latest generation digital cameras have a high dynamic range mode. That is, individual pixels stop exposing as they approach being blown out. Not sure how good it is.
 
the_swede said:
Some of the latest generation digital cameras have a high dynamic range mode. That is, individual pixels stop exposing as they approach being blown out. Not sure how good it is.
I think that is just a remapping of the response curve to soften the limits. It uses some of the available dynamic range to do this. (As I understand it, it happens in the digital processing of the image, not the sensor so it does not fundamentally give you info that was not there.)
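Purely as an illustration of what "remapping the response curve to soften the limits" might look like (a made-up curve, not the actual in-camera algorithm), here is a sketch of a soft highlight shoulder; note that it never reaches full white, which is the sense in which it spends some of the available dynamic range:

```python
import numpy as np

def soft_shoulder(x, knee=0.8):
    """Roll highlights off gently instead of clipping hard.
    Below 'knee' the curve is the identity; above it, values are
    compressed smoothly toward (but never reaching) 1.0.
    Input and output are 0..1 floats."""
    return np.where(
        x <= knee,
        x,
        knee + (1.0 - knee) * (1.0 - np.exp(-(x - knee) / (1.0 - knee))),
    )

x = np.linspace(0.0, 1.0, 6)
print(soft_shoulder(x))   # identical below the knee, compressed above it
```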

Not sure how good it is either.

I think I read a review somewhere, but I can't find it.

Doug
 
Neil said:

Don't have time to detail my objections, but I'm not comfortable with their method.

http://www.clarkvision.com/ has some good stuff, but it is rather technical and may be hard to understand.


Don't forget that when looking at a scene by eye, you can change the sensitivity of your eye as you look at different parts of the scene. (In fact, I believe the different parts of the retina can operate at different sensitivities.) The camera is being constrained to a single sensitivity for the entire scene. Not quite a fair comparison. (And, of course, scenes can also exceed the dynamic range of your eye, too.)

Doug
 
DougPaul said:
Monitors and prints have the least amount of dynamic range. (A monitor should be viewed in the dark to prevent reflected light from washing out the blacks.) One can process an image on a computer to enhance the apparent contrast while reducing the dynamic range to improve the ability of a display or print to display it.



Doug

Proper calibration of the monitor beforehand is critical for this process to work.
 
skiguy said:
Proper calibration of the monitor beforehand is critical for this process to work.
I had a photobook printed up and the results (colors and exposures) were pretty much identical to what I had on the monitor, so I think I'm safe from having to go that route.

As for increasing DR on the computer, I use PS Elements 6.0, and it's very easy to brighten the shadows and darken the highlights, which is the first thing I do if I like a picture enough to spend time editing it in PS. Is that what you meant, DougPaul?
 
Neil -

One other useful image correction you can make (post-capture) is gamma correction. You can't really change the dynamic range after you capture the image, and I'd guess most modern cameras are 8, 10, or 12 bits. While they may save the image as a 16-bit raw format, it's doubtful they have the capability of really capturing that much (you generally need to cool the sensor to reach 16-bit DR; anything else is generally artificially achieved).

But with a gamma correction, you can adjust the lookup table (how the saved pixel values are displayed) in a non-linear fashion. Brightness and Contrast adjust these things linearly, but gamma correction allows you to also adjust the mid-tones.
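Here is a rough Python sketch of that kind of lookup-table gamma adjustment (the function and values are illustrative only; note too that conventions differ on whether "gamma 2.2" means applying the exponent 2.2 or its reciprocal):

```python
import numpy as np

def gamma_lut(gamma, bits=8):
    """Build a lookup table mapping stored 8-bit values to display values
    through a power-law (gamma) curve.  With this convention, an exponent
    below 1 brightens the mid-tones and above 1 darkens them; the end
    points 0 and 255 stay fixed."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)            # normalise to 0..1
    return np.round((x ** gamma) * (levels - 1)).astype(np.uint8)

lut = gamma_lut(1 / 2.2)      # a typical brightening correction
# corrected = lut[image]      # apply to a uint8 image by fancy indexing
```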

With earlier versions of MS Office came a program called Microsoft Photo Editor. If you check Help/About and confirm it's the one shipped from Media Cybernetics, it is based on a very capable scientific imaging package called Image Pro Plus, and has some of the same features, including a very handy tool they called BCG (Brightness/Contrast/Gamma).

BTW, one other thing...the human eye is not capable of distinguishing nearly that many grey levels...maybe only 4 or 5 bits for someone with a lot of training. That's why we prefer color, and when looking at critical scientific or medical images in B&W, it's preferable to pseudo-color them.

Scott
 
skiguy said:
Proper calibration of the monitor beforehand is critical for this process to work.
Agreed--calibration is helpful (close to necessary) to get the best image out of monitors and prints. However, it is always possible to get a nice image by accident.

Doug


BTW, I posted a procedure by which one can do an approximate calibration using only network resources. It was difficult to find ( http://www.vftt.org/forums/showthread.php?t=18939 ) so I will reproduce it here:

I suspect that very few of us have calibrated our monitors and the color temps (should be ~6500K), gammas (net gamma should be 2.2), brightness, and contrast vary considerably. Thus we are likely viewing different images.

One can get formal calibration gear which uses a small camera-like device to measure the output of your screen and calibrate it (ICC profiles and all that stuff). One can also use some network facilities to do a cheap and easy approximate calibration.
One simple procedure:
1. If your monitor has the appropriate adjustments, set the color temp to 6500K.
2. For the rest of the procedure, the room should be dark enough to prevent reflected light from altering the images. The monitor should be on for at least 15 min to stabilize.
3. Go to http://www.normankoren.com/makingfineprints1A.html#gammachart. This chart will enable you to check the gamma of your system (graphics software, graphics card, and monitor combined). If you have some method of adjusting the gamma, adjust it until the gamma reads out at 2.2. (You might also want to read this page--there is lots of good info on the issue.)
4. Go to http://www.pcbypaul.com/software/monica.html and look at the grey scale just above the colored squares on the screenshot. Adjust your monitor brightness (which actually adjusts the black level) so that you can see the entire greyscale with the black block truly black. Adjust the monitor contrast (which actually adjusts the max intensity) to your preference.
5. Repeat 3 and 4 several times as they may interact.

Notes:
* I use Linux so I can't tell you how to adjust the gamma on MS OSes or Macs. ("Xgamma" will do it under X-windows in Linux/Unix.)
* For casual viewing with too much room light, I temporarily increase the gamma and wait until the room can be darkened for critical viewing.

Happy screwing up your monitors...
 
He sure did, but I did once read him saying something about the fact that he'd shoot 20 or 30 rolls to get one good picture.
 
WinterWarlock said:
One other useful image correction you can make (post-capture) is gamma correction. You can't really change the dynamic range after you capture the image, and I'd guess most modern cameras are 8, 10, or 12 bits. While they may save the image as a 16-bit raw format, it's doubtful they have the capability of really capturing that much (you generally need to cool the sensor to reach 16-bit DR; anything else is generally artificially achieved).
The formal definition of dynamic range is the ratio of the maximum pixel value to the minimum pixel value. Thus you can increase the dynamic range of any image which does not span the entire range of its data format, or you can change to a format which can represent a greater range and then increase the dynamic range. But you cannot increase the amount of information: details once lost to total blackness or saturation can never be recovered (which is what I think you meant). Processing an image cannot recover lost information, but it can make what is there more visible or more pleasing to a viewer.

Raw formats for digital cameras are typically linear with 12-14 bits; JPEG is an 8-bit, roughly logarithmic (gamma-encoded) format. Thus JPEG can represent more dynamic range than raw.

But with a gamma correction, you can adjust the lookup table (how the saved pixel values are displayed) in a non-linear fashion. Brightness and Contrast adjust these things linearly, but gamma correction allows you to also adjust the mid-tones.
Correct. The gamma correction is a nonlinear function of only one variable (gamma) which maps a range of intensities from 0-max to 0-max. See http://www.normankoren.com/makingfineprints1A.html#Gammabox for some sample gamma function plots. Adjusting the gamma of an image typically makes it look brighter or darker.

Brightness typically adds a fixed value to all pixels (or on a monitor, effectively sets the level of black pixels) and contrast just multiplies the value of the pixels by some constant (or on a monitor effectively sets the level of the 100% white pixels). (Ideally a monitor should show all the levels of a grayscale step chart, such as the one shown in http://www.normankoren.com/makingfineprints1A.html#TestPrint. You may need to download the full size file to see it properly.)

BTW, one other thing...the human eye is not capable of distinguishing nearly that many grey levels...maybe only 4 or 5 bits for someone with a lot of training. That's why we prefer color, and when looking at critical scientific or medical images in B&W, it's preferable to pseudo-color them.
However, the human eye does something that our cameras do not--it can adjust its settings depending on what portion of an image we are looking at. This is analogous to doing different processing on different parts of an image in a photo processing program.

Doug
 
Neil said:
As for increasing DR on the computer, I use PS Elements 6.0, and it's very easy to brighten the shadows and darken the highlights, which is the first thing I do if I like a picture enough to spend time editing it in PS. Is that what you meant, DougPaul?
Don't think so. I was referring to the dynamic range capabilities of the display devices, not the image.

One can define a DR for:
* image capture devices (eg cameras)
* image data formats (eg 14-bit linear)
* display devices (monitors, printers)
* images (which in practice is limited by the image data format used to store and process the image)

DR is generally stated in stops (log base 2) or D (log base 10). Most images have a DR of less than 10 stops or D=3.0. Most monitors are driven by 8-bit linear DACs with a DR of 8 stops or D=2.4.
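A quick worked version of that arithmetic (illustrative Python, nothing camera-specific):

```python
import math

def dynamic_range(max_val, min_val):
    """Dynamic range of a device or format expressed as stops (log base 2)
    and as density D (log base 10) of the max/min ratio."""
    ratio = max_val / min_val
    return math.log2(ratio), math.log10(ratio)

print(dynamic_range(2**8, 1))    # 8-bit linear display path: 8.0 stops, D ~ 2.4
print(dynamic_range(2**14, 1))   # 14-bit linear raw file:   14.0 stops, D ~ 4.2
```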

What you are doing in PS is altering the DR of the image, presumably because it will look better to your eye on your available display devices.

Doug
 
WinterWarlock said:
He sure did, but I did once read him saying something about the fact that he'd shoot 20 or 30 rolls to get one good picture.
Shooting a large number of pics and getting only a small number of good ones is typical behavior for a pro.

I can shoot in two modes--trip recording or artistic. If I'm in trip recording mode, I may record lots of nice memories but only very few of them would be worth printing and putting up on a wall somewhere for others to view. If I am in artistic mode, I can do a number of hikes without pulling my camera out once.

Doug
 
Adams' most famous work was done with a large view camera, often an 8x10. He did not just blast away with a roll film camera and hope for the best. One of his most famous images, Moonrise Over Hernandez, New Mexico, was a single exposure.

My knowledge of photography precedes the digital age, but as far as range is concerned, black and white film has a range of ten F stops. Adams developed what is known as the Zone System for determining exposure and development time for the negative. This involves time consuming testing of film and chemistry. Light meters such as the Weston V and the Ranger 9 had zone markings on them.

Quick explanation: middle gray (18%) on a gray scale is Zone 5. Zone 1 is black and 10 is bright white. However, on a bright, contrasty day, the zones in a given scene might range up to 14 or 15 stops apart, or on a gray day perhaps only 8. Using your meter, you choose what part of the scene you want to appear as Zone 5, measure the other areas where you want detail, then set your exposure. You then know which areas of the negative will be black, which will be white, and which will be somewhere in between. You can compress the zones down to 10 by underdeveloping the negative or expand them by overdeveloping. You can also control the gray scale in the printing using a similar method.
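For readers more comfortable with arithmetic than meters, here is a tiny illustrative sketch of the bookkeeping (hypothetical EV numbers; one zone equals one stop):

```python
def zone_of(metered_ev, placed_ev, placed_zone=5):
    """Zone system bookkeeping: one zone is one stop (1 EV).
    'placed_ev' is the meter reading of the area you chose to render as
    middle grey (Zone 5); any other metered area lands one zone higher
    per stop brighter and one zone lower per stop darker."""
    return placed_zone + (metered_ev - placed_ev)

# Example: a grey rock metered at EV 12 is placed on Zone 5;
# a sunlit snowfield metering EV 15 then falls on Zone 8,
# and a shaded tree trunk at EV 9 falls on Zone 2.
print(zone_of(15, 12))   # 8
print(zone_of(9, 12))    # 2
```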

I assume you can do the same thing digitally, but it would seem to me that if you don't have a proper exposure to begin with, you would have to "manufacture" the image using a program such as Photoshop. This to me is where photography stops and computer graphic design begins. I've seen Photoshop tutorials showing all kinds of image manipulation that turn out a product that really isn't a photo; it's a digital image that isn't a representation of reality.

btw, watch the videos on You Suck At Photoshop sometime, they are pretty entertaining.
 
Thanks Tom for posting. I had been meaning to post something in Ansel's defense and concerning his 8x10 B&W work. You said it far more succinctly than I could have.

One cannot afford the time nor the film stock when working in that laborious format. I do remember seeing filmed interviews of Adams describing the Moonrise Over Hernandez photo. He was driving when he came upon the sunset lit scene. He only had time to set the camera up for one photograph. By the time he had made the exposure, and wanted to set up for a second - the light had faded from the mission and the crosses in the cemetery. The possibility for another image was gone. With B&W you do have the ability to correct the exposure during printing with more latitude than with color. But from what I have heard from several sources, he got the first negative exposure right on the money.

Ansel could read most scenes merely with his eyes, and know which exposure to use without a meter. That is something that can be done with some practice. I recall my first film SLR. The batteries would generally go dead in very cold weather, but it fortunately had a mechanical X-sync intended primarily for flash. I successfully learned to read scenes so that I could photograph in cold weather without any battery in the camera. The X-sync dictated the shutter speed, and I set the lens aperture to get the right exposure that I read with my eyes. It is often called the Sunny 16 rule for ISO 100 and sunny conditions. You adjust from there for a different ISO or different scene lighting. The basic exposures were printed on film boxes or on a sheet of paper in the film box. It still helps in general photography today in knowing when to override the digital camera's meter reading. Although you can review digital images in the field and reshoot -- not every image will be there for a second try.
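As a rough illustration of the Sunny 16 arithmetic described above (the function and numbers are just a sketch, not a substitute for a meter):

```python
import math

def sunny_16_shutter(iso, aperture=16.0, scene_stops=0):
    """Sunny 16 rule of thumb: in full sun at f/16 the shutter speed is
    roughly 1/ISO seconds.  Each stop the aperture opens up (f/11, f/8, ...)
    halves the required time; set scene_stops = -1 for a scene one stop
    dimmer than full sun (hazy), -2 for open shade, and so on."""
    base = 1.0 / iso                                    # seconds at f/16, full sun
    aperture_stops = 2 * math.log2(16.0 / aperture)     # +1 per full stop wider than f/16
    return base / (2 ** (aperture_stops + scene_stops))

print(sunny_16_shutter(100))          # 0.01   -> about 1/100 s at f/16
print(sunny_16_shutter(100, 11))      # ~0.005 -> about 1/200 s at f/11
print(sunny_16_shutter(100, 16, -2))  # 0.04   -> about 1/25 s in open shade
```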

In Adams' later life he did begin to use a 35mm SLR and color films. Perhaps he began to shoot more images with the easier format. But there was no doubt that he was the master of the zone system which he pioneered. It is still useful in the digital world today. A few zone system references:
http://www.kenrockwell.com/tech/zone.htm
http://en.wikipedia.org/wiki/Zone_system
http://www.luminous-landscape.com/tutorials/zone_system.shtml
 
TomD said:
My knowledge of photography precedes the digital age, but as far as range is concerned, black and white film has a range of ten F stops. Adams developed what is known as the Zone System for determining exposure and development time for the negative. This involves time consuming testing of film and chemistry.
You could also alter the range and ISO of B&W negative film by how you develop it. And since the film had more range than a print, you could select what part of the film range was displayed on the print, or if you prefer, the final exposure compensation could be done in the darkroom. Slides and digital cameras have less range than B&W film so you have to be more careful in your original exposure.

<description of zone system snipped>

I assume you can do the same thing digitally, but it would seem to me that if you don't have a proper exposure to begin with, you would have to "manufacture" the image using a program such as Photoshop.
Modern digital cameras have a metering mode (Nikon Matrix metering, Canon evaluative metering, etc) which implements a zone system. Ken Rockwell advocates just using the matrix/evaluative metering and compensating where necessary. It generally works for me.

Don't forget that back in the darkroom days, we did such things as burning-in or dodging to alter the exposure of parts of prints. And used variable contrast papers and a variety of other techniques to alter the printed image. Image processing software (eg Photoshop) is just an extension of this with far more capability. In either case, it depends on the judgment of the photographer to use it wisely.
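As a hedged sketch of the digital analogue of dodging and burning (illustrative names only; a real editor would use a feathered selection or an adjustment layer), the idea is just a smooth mask and a per-region exposure shift:

```python
import numpy as np

def dodge_burn(img, mask, stops):
    """Digital dodge/burn: 'img' is a float image scaled 0..1, 'mask' is a
    0..1 array (1 where the adjustment applies, with feathered edges), and
    'stops' is the exposure change (+ to dodge/lighten, - to burn/darken)."""
    gain = 2.0 ** stops
    adjusted = np.clip(img * gain, 0.0, 1.0)
    return mask * adjusted + (1.0 - mask) * img
```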

Doug
 