Here's an interesting comparison of the same scene shot with two photographic methods. The first was shot traditionally, metering for the sky. The second was shot using HDR (previous experiments here, here, here, here, here, and here), a photographic technique that involves putting the camera on a tripod and photographing the same scene repeatedly at different shutter speeds (to capture as much dynamic range -- range of light values -- as possible), then combining the exposures with an algorithm (I use Photoshop's). Once they're combined, you tone the photo as you normally would. The strength of HDR is that it lets the photographer capture a wide range of light values without using graduated filters. It actually beats filters handily: filters are nowhere near as precise in their masking as an algorithm can be. So, #1 normal, #2 HDR, click to enlarge:
#1
#2
This was a shot of the NYC skyline at sunrise. Sunrise behind buildings is a great candidate for HDR, because the sky is so much brighter than the faces of the buildings -- the only light hitting them is what scatters through the atmosphere and reflects off the surrounding terrain. Our eyes and brains are sensitive and smart enough to resolve color and detail on the buildings while also resolving the detail in the sky -- but our cameras are not. Well, not without a little help. :)
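For the curious, the bracket-and-merge idea can be sketched in a few lines of code. This is a toy exposure-fusion demo, not Photoshop's actual algorithm: it weights each bracketed shot by how well-exposed each pixel is (close to mid-gray) and blends them. The synthetic "scene" arrays below are stand-ins for real photos.

```python
import numpy as np

def fuse_exposures(exposures):
    """Naive exposure fusion: weight each bracketed shot by how
    well-exposed each pixel is (near mid-gray), then blend.
    A toy stand-in for a real HDR merge algorithm."""
    stack = np.stack([e.astype(np.float64) for e in exposures])  # (N, H, W)
    # Well-exposedness weight: Gaussian centered on mid-gray (0.5)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0) + 1e-12  # normalize across the bracket
    return (stack * weights).sum(axis=0)

# Three fake bracketed "shots" of the same scene, luminance in [0, 1]
scene = np.linspace(0.0, 1.0, 8).reshape(2, 4)   # true scene luminance
under = np.clip(scene * 0.4, 0.0, 1.0)           # metered for the sky
normal = np.clip(scene, 0.0, 1.0)
over = np.clip(scene * 2.5, 0.0, 1.0)            # metered for the shadows

fused = fuse_exposures([under, normal, over])
```

The fused result favors the underexposed frame where the scene is bright (the sky) and the overexposed frame where it's dark (the building faces), which is exactly the trade a graduated filter tries to make with far less precision.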