1. This is a follow-up to my previous post on vignetting corrections with the M9, here. In that post I said that I was working on a new version of CornerFix that would address the issues that I had identified. That new version is out now, as version 1.3.0.0, and can be downloaded from Sourceforge here.

    To recap, the previous post showed images of Tim Ashley's 18mm f/3.8 as corrected by the M9's in-camera correction firmware. The result was a red tint on one edge. I also mentioned that the then-current version of CornerFix couldn't do much with the problem, because V1.2 of CornerFix couldn't deal with optical decentering.

    However, V1.3.0.0 can. Here's the result, first correcting an original 18mm f/3.8 image shot with coding disabled (so no in-camera corrections):




    18mm f/3.8 uncoded with CornerFix 1.3.0.0 correction


    And now if we use CornerFix to correct the nasty in-camera corrections that gave a red edge:





    18mm f/3.8 coded as "Auto" with additional CornerFix 1.3.0.0 correction

    Both of those are (IMHO) pretty good - they show no visible signs of red edges anywhere. However, I do recommend that if you are going to use CornerFix, you shoot with lens detection disabled, aka with the lens uncoded. The reason is that if you have a camera/lens combination that shows the decentering issue (and every M9 image I've analyzed, regardless of lens, shows it) and use your lens coded, an interference pattern can form between the actual vignetting and the in-camera correction. Think of two circles of the same size, on top of each other, but with their centers slightly offset, as shown below:


    Interference between offset circles

    Where the circles come together you get, say, over-correction, and where they are far apart, under-correction, in a complex two-dimensional pattern. And that pattern is unfortunately nearly impossible to correct; there is no way to tell the difference between what should be there and what shouldn't. That may or may not occur in your case, but the easy way to avoid it is not to use in-camera corrections when you're planning to use CornerFix.
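    Just to make that concrete, here's a little C++ sketch - illustrative only, not CornerFix code, and the falloff model, offset and numbers are made up for the example - that applies a symmetric, centered correction to a decentered vignetting pattern and prints the residual across the image width:

        // Sketch: residual error when a symmetric, centered correction is
        // applied to a decentered vignetting pattern. Illustrative only.
        #include <cmath>
        #include <cstdio>

        // Toy falloff model: relative brightness at distance r (pixels)
        // from the optical center; tuned for roughly 1.3 stops at the edge.
        static double falloff(double r) {
            const double strength = 3.2e-14;
            return 1.0 / (1.0 + strength * r * r * r * r);
        }

        int main() {
            const double width  = 5212.0;   // M9 image width in pixels
            const double center = width / 2.0;
            const double offset = 40.0;     // assumed decentering, in pixels

            // Walk along the horizontal center line of the image.
            for (double x = 0.0; x <= width; x += width / 8.0) {
                double actual     = falloff(std::fabs(x - (center + offset)));
                double correction = 1.0 / falloff(std::fabs(x - center));
                double residual   = std::log2(actual * correction);
                // residual > 0 means over-correction, < 0 under-correction.
                std::printf("x = %4.0f  residual = %+.3f stops\n", x, residual);
            }
            return 0;
        }

    Run that, and you'll see the residual change sign from one side of the frame to the other - exactly the kind of pattern that can't be undone once the in-camera correction has been baked into the raw data.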

Thomas Lester has posted on how he's using dcpTool here, describing it as a "fantastic tool". He shows some great example images of how the hue twists in some of the Adobe camera profiles can result in really unnatural skin tones, and of how to avoid the problem by using dcpTool.

    As a pro lifestyle photographer, he's unsurprisingly serious about good skin tone; what is surprising is that the camera in question is a Nikon D3. That's surprising because the D3 is generally considered to have pretty good color right out of the box, and to need little work in post, unlike some of the Canons.

There's been a lot of discussion on the Leica User Forum about problems with vignetting correction on the new M9. These have shown up in a number of different images, but the image below of a white diffuser, shot with a coded Leica 18mm f/3.8, which should be entirely uniform edge to edge, shows the problem well:


    18mm f/3.8 with "Auto" in-camera correction

    Basically, there is a red tint on the side - mostly the left hand side - of the image, despite, or perhaps because of, the in-camera correction. As CornerFix hasn't been able to correct some of these images, while it was usually able to correct M8 images with red corners, I've been trying to work out what the issue is, with the help of several people on the forum who have sent me images - Carl Bretteville, Eric Calderwood, Jonothan Slack, Tim Ashley and others - thanks to all of them!

    Background

    The background to this is that on any M-mount digital camera, you have a fundamental optical problem. On any digital camera you have to have an IR (infrared) filter to prevent color shifts due to IR contamination - all modern sensors are very sensitive to IR. However, because the M-mount has such a short lens-to-sensor distance, the light has to travel at a considerable angle to reach the corners of the sensor. In turn, that means that light heading for the corners of the sensor passes through the IR filter at a sharp angle. It therefore passes through "more" of the filter, and reds get cut, resulting in cyan drift - blue corners. The M8 and M9 compensate for this by coding lenses, allowing the camera to boost reds (and greens, slightly) to compensate for the impact of the IR filter on visible light. CornerFix does the same thing, but as a post processor.

    What's happening with the red edges above is that the cyan drift is somehow being over-compensated, red being boosted too much, and as a result you get red edges.

    There are a few questions here - why exactly is this happening, what can be done about it, and most puzzling - why does this seem to happen on the left edge of images only?

    The 18mm lens uncoded

    To find out what is happening with the in-camera correction, I did some analysis on the raw data in images shot of either a white diffuser or white walls. All of the images and charts in this post are for a Leica 18mm f/3.8, but I've also looked at a number of other lenses, including a Zeiss 25mm, and some CV12's. All of the 18mm images are from Tim Ashley, btw - Tim shot these through a high-end diffuser, so the illumination is very even, which makes the charts clearer.

    The first image is from the 18mm with lens detection disabled, so no lens specific in-camera correction. This is the image with the lens uncoded:


    18mm f/3.8 with no in-camera correction

    The chart below shows the vignetting of the different colors (red, green1, green2, blue, because this is raw Bayer matrix data) across the horizontal center of the image, with the horizontal axis in pixels, and the vertical axis in stops of vignetting. The chart shows dots for pixels of the Bayer array, and best-fit polynomials for each Bayer color to make things clearer:



    18mm f/3.8 with no in-camera correction


    You can tell a few things from this plot:
    1. The luminance vignetting (effectively the blue curve) is about 1.3 stops at the edges. Note that because this cut is across the center of the image, corner vignetting is more than you see here; if the cut were along the diagonal, you would see over two stops.
    2. The chroma vignetting (the difference between the blue and red curves), the thing that gives the cyan corners, is about 0.2 of a stop. It's cyan because green is also impacted by the IR filter.
    3. Most important, the lens is somewhat asymmetrical - vignetting on the left is greater than on the right. Note that both the luma AND chroma vignetting are greater. This is probably caused by the optical center of the lens not being aligned with the optical center of the sensor. Note that some asymmetry exists in almost all lenses; I've routinely seen similar (and greater) asymmetry in other lenses from Leica, as well as Zeiss and Nikon lenses. (A sketch of how these per-channel stop values are measured follows this list.)
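    For those who want to reproduce these measurements, the stop values are easy to compute once the Bayer channels are separated out of the raw data. A minimal C++ sketch, assuming the red and blue samples have already been extracted and black-level subtracted:

        // Sketch: per-channel vignetting in stops along the horizontal
        // center line, using the image center as the 0-stop reference.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        void vignettingInStops(const std::vector<double>& red,
                               const std::vector<double>& blue) {
            double redRef  = red[red.size() / 2];
            double blueRef = blue[blue.size() / 2];
            for (size_t x = 0; x < red.size(); ++x) {
                double redStops  = std::log2(red[x] / redRef);
                double blueStops = std::log2(blue[x] / blueRef);
                // Chroma vignetting is the difference between channels -
                // that difference is what produces cyan (or red) edges.
                std::printf("x=%zu red=%+.2f blue=%+.2f chroma=%+.2f\n",
                            x, redStops, blueStops, blueStops - redStops);
            }
        }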
    To correct for vignetting, what we would want is:
    • To correct chroma vignetting only, we'd want the red, green and blue lines to coincide. That would mean there would still be "normal" luma vignetting, but no color drift.
    • To correct luma vignetting as well, we would want all the lines to be flat.
    One thing to be aware of is that even with an uncoded lens the M9 may apply some degree of luma vignetting correction, although there's no way to tell for sure. However, this isn't relevant to our discussion here.

    The 18mm with in-camera correction

    The second chart shows what happens with the Auto lens detection (the corresponding actual image is the one at the start of this post):



    18mm f/3.8 with "Auto" in-camera correction


    What it shows is that on the right hand side, the correction of the color vignetting is near perfect (all the lines coincide), but on the left, the in-camera algorithm is over-correcting red by about 0.1 of a stop. That's what gives the left edge of the actual image its red tint, so the red edge is not a figment of anyone's imagination. It's also interesting to note that Leica is not correcting much of the luma vignetting at all.

    Coding as a WATE at 16mm

    It's also instructive to look at what happens if you code the 18mm as a 16mm WATE. This is the image you get:



    18mm f/3.8 with WATE 16mm in-camera correction


    The chart is as follows:


    18mm f/3.8 with WATE 16mm in-camera correction

    Two things are interesting here:
    • Although this image is technically not as well corrected as with the correct 18mm "Auto" coding, the WATE coding looks better than the 18mm coding. The reason is that while the WATE-coded version is less well corrected, it's not over-corrected, and it's the red edges and corners that make the 18mm correction look so bad.
    • The second thing to note is that, if you look closely at the left hand side of the chart, you'll see that the red and green curves actually cross. Although it's difficult to see in the image, that actually results in a sort of rainbow effect, with the image alternating from cyan under-correction to red over-correction. This is because the shapes of the correction curves for the WATE and the 18mm are actually different. This is important because many Leica users tend to assume that the difference between in-camera corrections for different lenses is just the "strength" of the correction. That's not actually true; the entire shape of the curve can be different for different lenses. The result is that using an in-camera correction for a lens it wasn't designed for is more difficult than many assume. Given that the M9 needs considerably more correction than the M8 did, this is likely to be a significant issue for users who have become accustomed to being able to use in-camera corrections for non-Leica lenses; that approach may no longer be nearly as effective.
    So where does the asymmetry come from?

    Bottom line is that I would say that what's happening here is that (as I had suggested in previous posts on the forum) asymmetry is going to be a real problem for correcting cyan drift on the M9. It's not really clear what the root cause of the asymmetry is - there are a number of possibilities:
    1. Lens asymmetry. Asymmetry in the lens itself (simplistically, the mechanical and optical centers being different) is by far the most likely reason for what we're seeing, and is certainly a large part of the problem. However, that doesn't explain why all the over-correction examples I've seen to date have been red on the left, not the right. You would expect the direction of lens asymmetry to be random, not all in one direction. It is possible that this is some kind of an artifact of the way lenses are calibrated in the factory, but it is strange, and the bias to the left wasn't apparent on M8s.
    2. Camera asymmetry. Inevitably, the sensor is not always exactly aligned with the mechanical center of the lens mount. If this was the case for the M9 as a matter of design, or due to some quirk of the manufacturing process, it would explain the left bias. However, sensor to lens mount alignment is usually trivially easy to get right compared to the optical alignment of lenses, and in order to account for the degree of left bias, a substantial asymmetry would be required. It is thus unlikely that camera asymmetry would account for the variation.
    3. Asymmetry in the in-camera correction process. I did perform a test by subtracting the corrected image from the uncorrected image, which gives an estimate of the Leica correction curve, as shown below (a sketch of the calculation follows the chart). Surprisingly, the Leica in-camera correction does appear to be very slightly asymmetric, but in the wrong direction! However, it is only asymmetric to a very small extent, and given that the measurement, made by subtracting two different images, is inherently imprecise, it's likely that the correction is actually symmetrical. The "lines of dots", btw, are a result of compression.

    Leica in-camera corrections for 18mm f/3.8 (see text)
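    The calculation behind the chart above is simple - "subtracting" two images in stops is just a per-pixel log ratio. A sketch, assuming the two frames are identically exposed and aligned:

        // Sketch: estimate the in-camera correction, in stops, from a
        // corrected and an uncorrected frame of the same evenly lit target.
        #include <cmath>
        #include <vector>

        std::vector<double> correctionInStops(
                const std::vector<double>& corrected,
                const std::vector<double>& uncorrected) {
            std::vector<double> stops(corrected.size());
            for (size_t i = 0; i < corrected.size(); ++i) {
                // Subtraction in stops == ratio in linear raw values.
                stops[i] = std::log2(corrected[i] / uncorrected[i]);
            }
            return stops;
        }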


    Can CornerFix fix this?

    In a word, no, not using the current version. The current version of CornerFix will do as well as the Leica in-camera corrections, but no better.

    The reason for this is that the current version of CornerFix is explicitly designed to correct symmetric vignetting. Substantial change - already underway - is going to be required to allow it to correct for the kind of asymmetry found here.

    <UPDATE: The latest versions of CornerFix do correct the problem>


    Conclusion

    As a result of these tests, it is fairly clear that the "red edges" are the result of variations in centering between the lens, the sensor, and the in-camera correction algorithm. This issue showed itself on the M8, but wasn't significant enough to cause major problems. On the M9, however, it can clearly cause visible image degradation. Based on the available evidence, we can't say what the root cause of this de-centering is. However, we can reach a number of conclusions:

    1. What I think Leica probably needs to do to fix the immediate in-camera correction problem with the 18mm is just to tune back the 18mm's red correction by 0.1 of a stop, or some value based on production variation in the lens. That will leave the right side a bit under-compensated, but under-compensation is a lot less visually disturbing than over-compensation, as shown by the WATE example above.
    2. CornerFix will need to be enhanced to be able to deal with asymmetric vignetting - as mentioned above, I've started that process.

    There are still a few things I don't understand, however. The major one is why it's always "red on the left", as discussed above. The second is to what extent this issue varies with aperture setting. There is evidence that it does change significantly, getting worse with increasing aperture, but I don't as yet have enough data across enough lenses to be sure. The variation might be with real aperture, because the aperture blades themselves contribute to the vignetting, or at least change the effective optical center of the lens; or with the M9's estimated aperture, because the correction that Leica applies varies with estimated aperture - although Stephan Daniel supposedly said in an interview that the M9 correction does not vary with aperture.

  4. I've posted a camera profile and some reference images for the M9 on the ChromaSoft website. They're at the bottom of this page.

    The most important file is the DNG camera profile, generated from a real image of a real GM24 chart using Adobe's Profile Editor. It can be used in Lightroom and Adobe Camera Raw to improve color rendering of M9 images. It's actually quite close to the embedded Leica profile, but tames reds a bit.

    There are also two synthetic calibration images. The first of these is a synthetic GretagMacBeth chart, created by using an existing image as a template, but replacing all of the image data with new synthetic data calculated from the L*a*b* data of the GretagMacBeth chart, using the color profile embedded into the original DNG file by Leica. Thus the file represents what a GretagMacBeth 24-patch chart would look like if photographed by a "perfect" M9. The second image is a grey-level step wedge chart.
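    For the curious, the first step in generating a chart like that is converting each patch's published L*a*b* value to XYZ via the standard CIE formulas. A sketch of that step in C++ (the subsequent mapping from XYZ through the embedded profile to camera RGB is omitted, and the D50 white point shown is an assumption):

        // Sketch: CIE L*a*b* -> XYZ for a chart patch value, relative to
        // a given white point (D50 shown). Mapping the XYZ result through
        // the DNG's embedded profile to camera RGB is a separate step.
        #include <cmath>

        struct XYZ { double X, Y, Z; };

        XYZ labToXYZ(double L, double a, double b,
                     double Xn = 0.9642, double Yn = 1.0, double Zn = 0.8249) {
            auto finv = [](double t) {
                const double d = 6.0 / 29.0;
                return (t > d) ? t * t * t : 3.0 * d * d * (t - 4.0 / 29.0);
            };
            double fy = (L + 16.0) / 116.0;
            double fx = fy + a / 500.0;
            double fz = fy - b / 200.0;
            return { Xn * finv(fx), Yn * finv(fy), Zn * finv(fz) };
        }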

    Please note that you cannot use the GM24 synthetic image to create a camera profile - as the synthetic image was created from the generic embedded profile, you will just get back the same generic profile; the synthetic images are intended to calibrate your workflow, not your camera!

  5. A recent discussion over on the Leica User Forum got me to finally finish writing this. The discussion in question was largely around how to best use Capture One, but it demonstrated (again!) the almost unthinking acceptance in various parts of the photographic community that "expose to the right" (ETTR for short) is the right way to set exposure on digital cameras. In fact in some corners of the web - and I won't point to them here - ETTR is practically religion.

    What I'll be showing in this post is that ETTR is at best wildly over-hyped, and at worst will give you a less satisfactory end result than just exposing normally. I'll be doing that in two ways: firstly, by showing some practical examples of images exposed using ETTR versus images exposed normally, and secondly, by showing that even in theory, ETTR is flawed under most conditions.

    What is ETTR?

    What ETTR says is in essence that the best way to set exposure on a digital camera is to place the highlights on the right-hand side of the histogram, hence the "expose to the right" name. This is in contrast to the "classic" exposure techniques which involve either an average exposure, which effectively places the mid-tones of the scene in the middle of the camera's range, or variants of the zone system, which says that you as the photographer need to take a conscious decision about where to expose particular tones in the scene.

    ETTR was popularized by (and maybe invented by, depending on who you believe) Michael Reichmann, the publisher of The Luminous Landscape, in this article. Michael credits the original idea to a comment by Thomas Knoll, the chief architect of Adobe's Camera Raw. What the comment amounted to is that because camera sensors work in a linear space, while we see images in a gamma space, you maximize the signal to noise ratio of the sensor by exposing as much as possible "to the right", where there are more A/D converter codes available. (Read the LL article if you want more detail.) As you would expect from Thomas, that's 100% right. And thus was ETTR born.
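    The "more codes to the right" point is easy to see with a little arithmetic. A sketch for a hypothetical 12-bit linear sensor:

        // Sketch: how many A/D codes each stop of a linear 12-bit encoding
        // gets. The top stop has half of all codes; each stop below it
        // gets half as many again.
        #include <cstdio>

        int main() {
            int top = 4096;                 // 2^12 codes in total
            for (int stop = 1; stop <= 6; ++stop) {
                int codes = top / 2;        // codes in this one-stop band
                std::printf("stop %d below saturation: %d codes\n",
                            stop, codes);
                top /= 2;
            }
            // Prints 2048, 1024, 512, 256, 128, 64 - hence "expose right".
            return 0;
        }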

    Note that there are a bunch of different variations on what ETTR really is, and religious wars to go with those variations - some variations focus on exposure adjustments upwards, some downwards, and some in both directions:
    1. "Overexposure ETTR" works by overexposing low contrast scenes. What this means is that noise is reduced, and you get the benefit of the greater number of A/D codes to the right. This is the "classic" version of ETTR that the Reichmann article focuses on.
    2. "Underexposure ETTR" works by underexposing on high contrast scenes, placing the highlights on the right, but driving shadows down to the left. This preserves the highlights, but at cost of generating greater noise in the shadows.
    What I'll show here applies to ETTR adjustments in either direction, although I'll focus on the "classic ETTR" as described in the Reichmann article referenced above.

    So, given I just said "100% right" to the proposition that ETTR maximizes sensor signal to noise ratio, why do I say ETTR is plain wrong? Simple. What most of the proponents of ETTR forget, or perhaps don't understand, is that maximizing the signal to noise ratio of the sensor is absolutely a good thing, but the sensor is only a small part of the image processing chain that gets you from pressing the shutter button to a print. What I'll show below is that ETTR's benefits are actually minimal except in one very specific situation, but that ETTR is actively dangerous to the rest of the image processing chain pretty much all of the time.

    The test conditions

    Those of you who took a look at the LL article may have noticed a critical point: no images demonstrating the improved end results from ETTR. Just theory, without any practice. That should have set alarm bells ringing right there. So, what we're going to do is look at some images. First, here are the test conditions:
    1. I've used my Canon G10. The G10 is useful in this situation because, being a 14 Mpix small-sensor camera, it delivers fairly clean images at low ISO, but really noisy images at high ISO, so we'll be able to look at images at both ends of the spectrum. If ETTR is going to work, it should work at one or the other end of the G10's spectrum of sensitivity.
    2. To ensure consistency, I've used images of a Gretag Macbeth 24-patch chart, shot on a tripod. I've masked the brightest neutral patches off to give a low contrast test image. Shots were in diffuse daylight.
    3. Exposure values were determined so as to ensure that no channel went into saturation - one of the practical problems with ETTR is that it's very easy to blow the red channel, and end up with color shifts because of that alone. However, that's not what we're investigating today, so histograms were carefully monitored.
    4. I've used Capture One 4.8.3 and Lightroom 2.4 for these tests. All settings were defaults, other than white balance, which I set to Daylight for all images (as shot was 5000K), and whatever adjustment to the exposure slider was required to compensate for the degree of ETTR push or pull I applied. In all cases, "correct post-ETTR exposure" was defined as being close to the theoretical L value of 50.867 (in L*a*b* values) for the CC22 patch - the lightest patch I left uncovered. I made no changes to any other setting (contrast, black point, brightness, curves, etc) - all were left at their defaults for the particular program.
    A typical test image looked like this:


    Sample test image full size

    Testing at low ISO

    So, let's get to the images. As a reference point for some low ISO tests, we'll use an ISO 200 image. I've selected 200 so that we can compare to an ISO 100 image later, and we will be using Capture One. Further, we'll be looking just at the intersection of the CC4, CC5, CC10 and CC11 patches (the "foliage", "blue flower", "purple" and "blue green" patches) at 100% scale, as shown in the next image.

    ISO 200, 1/15 sec, f/4.5 crop - normal exposure
    Capture One V4.8.3, G10 Generic profile

    So what happens if we "expose to the right" by one stop - aka overexpose the image by one stop (+1 stop) - and then bring it back to the same exposure as the first image by adjusting exposure by -1 in Capture One, getting us back to 0:

    ISO 200, 1/8 sec, f/4.5 crop, +1 stop ETTR exposure
    Capture One V4.8.3, G10 Generic profile

    So, at an immediate glance - well, not much. Let's overlay the two images to get a better idea of the differences. Here what I've done is to overlay the center part of the reference image with the "ETTR image" - I've offset them slightly so that it's easier to see where the ETTR image starts and stops:


    Outer: ISO 200, 1/15 sec, f/4.5 crop - normal exposure
    Inner: ISO 200, 1/8 sec, f/4.5 crop, +1 stop ETTR exposure
    Capture One V4.8.3, G10 Generic profile

    Taking a look at the images overlaid, you can see that the inner, ETTR exposure really is just a little better; there is less noise, more or less as ETTR promised. (ETTR fanatics should stop reading now.) But that's not the end of this. I chose ISO 200 for a reason; let's take a look at an ISO 100 image, superimposed on our better quality ISO 200 ETTR image:


    Outer: ISO 200, 1/8sec, f/4.5 crop, +1 stop ETTR exposure
    Inner: ISO 100, 1/8sec, f/4.5 crop - normal exposure
    Capture One V4.8.3, G10 Generic profile

    As you can see, there's no real difference. So, in this example, all the benefits that you can get from one stop of ETTR can also be obtained by just adjusting the ISO setting down a notch.

    This isn't some kind of aberration that's specific to the G10 or these ISO settings either - there's a good theoretical reason. Take a look at the actual exposure values of each image - they're the same: 1/8sec, f/4.5. Now sensors, whether CCD or CMOS, essentially work by accumulating electrons in the sensor well; the more electrons, the higher the brightness of that pixel. In this case, because the exposure was the same, the number of electrons was the same. But noise in sensors, counted in terms of numbers of electrons, is relatively fixed. So for the same exposure, regardless of ISO setting, you will tend to get roughly the same amount of visible noise. The reason why noise increases with ISO setting isn't actually that the amount of noise increases on an absolute scale; it's that as you increase the ISO setting, the white point goes down, so the same number of noisy electrons becomes a larger and larger part of what you see. For example, at ISO 200, you might have 10000 image electrons and 100 noise electrons, while at ISO 100 you would have 20000 image electrons, but still the same 100 noise electrons. Overexpose the ISO 200 image by 1 stop, and you double the electrons to 20000. So you're right back to the noise level of your ISO 100 image. Only you have to adjust the camera, and adjust back in post, all to get exactly the same result as just changing the ISO setting.
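    To put numbers on that (using the illustrative electron counts from the paragraph above, not measured G10 values):

        // Sketch: same exposure => same electrons => same noise, whatever
        // the ISO setting. Electron counts are illustrative only.
        #include <cstdio>

        int main() {
            const double noiseElectrons = 100.0;    // roughly fixed floor

            double iso200Normal = 10000.0;          // ISO 200, normal
            double iso200Ettr   = iso200Normal * 2; // ISO 200, +1 stop ETTR
            double iso100Normal = 20000.0;          // ISO 100, normal

            std::printf("ISO 200 normal: SNR = %.0f\n",
                        iso200Normal / noiseElectrons);
            std::printf("ISO 200 ETTR+1: SNR = %.0f\n",
                        iso200Ettr / noiseElectrons);
            std::printf("ISO 100 normal: SNR = %.0f\n",
                        iso100Normal / noiseElectrons);
            // ETTR at ISO 200 and a normal exposure at ISO 100 land on
            // exactly the same SNR - ETTR is just a lower effective ISO.
            return 0;
        }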

    High ISO Test

    Ok, the low ISO tests suggest we're wasting our time with ETTR. But let's try it at high ISO, on a really noisy image. First, let's compare a normally exposed ISO 1600 image (outer) to an ETTR-exposed ISO 1600 image:


    Outer: ISO 1600, 1/125sec, f/4.5 crop - normal exposure
    Inner: ISO 1600, 1/60sec, f/4.5 crop, +1 stop ETTR exposure
    Capture One V4.8.3, G10 Generic profile

    Again, as we'd expect, the ETTR exposure shows less noise. However, again, let's also take a look at the ETTR exposure versus just reducing ISO by a notch and exposing normally:


    Outer: ISO 1600, 1/60sec, f/4.5 crop, +1 stop ETTR exposure
    Inner: ISO 800, 1/60sec, f/4.5 crop - normal exposure
    Capture One V4.8.3, G10 Generic profile

    Interesting - the normally exposed inner is actually a bit better than the ETTR equivalent. Why? Because the G10 applies its own internal noise reduction algorithms, based on the ISO setting, and Canon's engineers, as you might expect, know a few things about their sensors. So here the results as delivered by the camera, exposed normally at a lower ISO, are better than using ETTR. In other words, here ETTR gave a worse image than just exposing normally and letting the camera do its stuff.

    The One Situation where ETTR does work

    What the tests above establish is that as a general rule, ETTR is no better than just adjusting the ISO setting down, and in some situations, e.g., where the camera does its own noise reduction, is worse than just exposing normally.

    But there is one situation where ETTR can help - when you're already at the lowest ISO setting your camera offers. Take a look at the next image overlay:


    Outer: ISO 100, 1/8sec, f/4.5 crop - normal exposure
    Inner: ISO 100, 1/4sec, f/4.5 crop, +1 stop ETTR exposure
    Capture One V4.8.3, G10 Generic profile

    While the G10 has an ISO 100 setting, it doesn't have an ISO 50 setting. (It does have an ISO 80 setting, but I'm ignoring that as it's too close to 100 to make much of a difference.) So what ETTR is doing here is allowing us to synthesize a lower ISO setting, and hence better noise performance, than the camera actually has. The disadvantage, of course, is that the camera's dynamic range is reduced by one stop, but if you have a low contrast image, that might be a price worth paying. But that's not the only disadvantage of ETTR:

    The real problem with ETTR - color and tone curve shifts

    We've established that outside of one situation - where you're already at the lowest ISO setting your camera has - ETTR offers no practical advantage, and in some situations, such as high ISO, may be an active disadvantage as regards noise performance. But now we get to the unpleasant part - color shifts. I already mentioned in the "test conditions" section that when using ETTR it is easy to blow the red channel, and get color shifts as a result. However, what I'll show in this section is that even without blowing a channel, you can still get color and tone curve shifts. Just to emphasize again - none of the images in this article have blown channels.

    Firstly, take a look over the previous sets of overlaid images. If you take a close look, and some of you with sharp eyes may have noticed this before, there are some subtle color shifts, especially in the lower right green-blue patch.

    However, the issue becomes a lot clearer when you look at how Lightroom responds to ETTR. In this case, we'll use a 2 stop ETTR to make the difference clearer:


    Outer: ISO 100, 1/15sec, f/4.5 crop, normal exposure
    Inner: ISO 100, 1/4sec, f/4.5 crop, +2 stops ETTR exposure
    Lightroom 2.4, Adobe Standard profile

    Notice how much more difference there is between the inner and outer parts of the image. To some extent, that's because of the 2 stop difference in noise. But also take a look at two things:
    • The color difference in the lower right green patch;
    • The difference in brightness between the border areas - the normal exposure is darker than the ETTR exposure; in fact, of all the overlay images, this is the only one where you can see a distinct difference in the border.
    And the difference isn't just random - when you look at the numbers, a pattern emerges:

    What you can see is that for each individual ETTR value (each setting of the exposure slider: -1, 0 and +1), the values are consistent, but the L value for the border area and the b value for the green patch change as ETTR changes between those values. In other words, the color and tone curve are consistent between the ISO 100 and ISO 200 images, but inconsistent between the images with different ETTR values.

    So what's happening here? Two things. Let's go back to the theory. Firstly, all raw developer programs apply a tone curve to raw images. In Lightroom, this is just called "Brightness", in Capture One it is explicitly called a "Film Standard" curve, and in Aperture it is called "Raw Boost". However, in Capture One the exposure adjustment is applied before the tone curve, while in Adobe products such as Lightroom it is applied after. So Lightroom shows a shift in brightness with changes in the Exposure setting, and Capture One doesn't.
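    You can see why the order matters with a toy model - a plain 2.2 gamma stands in for the developer's tone curve here, purely for illustration:

        // Sketch: compensating a +1 stop ETTR exposure before vs. after
        // the tone curve. A plain gamma stands in for the developer's
        // curve; the numbers are illustrative only.
        #include <cmath>
        #include <cstdio>

        double toneCurve(double x) { return std::pow(x, 1.0 / 2.2); }

        int main() {
            double scene = 0.18;          // mid-grey, linear raw value
            double ettr  = scene * 2.0;   // +1 stop ETTR in the raw data

            double normal = toneCurve(scene);
            double before = toneCurve(ettr * 0.5);  // compensate, then curve
            double after  = toneCurve(ettr) * 0.5;  // curve, then compensate

            std::printf("normal exposure:          %.3f\n", normal); // 0.459
            std::printf("compensated before curve: %.3f\n", before); // 0.459
            std::printf("compensated after curve:  %.3f\n", after);  // 0.314
            // Identical raw data, different rendered brightness - the
            // compensation only undoes ETTR exactly when it is applied
            // before the curve.
            return 0;
        }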

    The second thing that's happening is "hue twists". Hue twists are deliberate changes to colors in an image to give a more pleasing result. So in current versions of Lightroom, you can choose between a number of different profiles, e.g., "Adobe Standard", "Camera Neutral", "Camera Portrait", etc. Each of these is designed to give better colors under specific circumstances. So, for example, a landscape profile will take a light sky blue, and make it a darker "more blue sky" sort of a blue.

    So, when you offset exposures by using ETTR, what you are doing at the same time is completely upsetting a whole bunch of carefully calculated tweaks designed to make your images look better. For example, that slightly overexposed blue sky is now way overexposed, and as a result the profile will tweak it to somewhere entirely unlike a blue sky. The result is an image with a sky that looks just slightly not right. And you get to spend a lot of time in post trying to sort out subtle color and tone shifts that aren't obvious, but just make your image look slightly wrong.

    For those interested in more details on tone curves and hue twists, I've blogged on them in more detail here and here.

    Making ETTR work

    So, ok, in practice ETTR is only useful under one very specific circumstance - when you want a lower ISO than your camera has, and you're willing to sacrifice both dynamic range and color reproduction to get improved noise performance. But ETTR does have theoretical advantages. Is there any way to get the advantages without also getting the disadvantages?

    The answer is yes, but only partially, and at a price. The price is the cost of a "Gen-2" Nikon DSLR, and Nikon's NX2 software. The way this works is as follows. Newer Nikons, e.g., the D3x, have something called Active D-Lighting. In its normal variants, D-Lighting is just some optimization in software, but when set to "Active", D-Lighting actually does ETTR for you automatically. However, I say partially because it only does "under-exposure" ETTR; in other words, it preserves highlights in high contrast scenes, but doesn't increase exposure in low contrast scenes. The clever bit, however, is that the camera then encodes the D-Lighting data into the raw image. NX2, Nikon's raw developer, can then correctly adjust exposure prior to applying any tone curves or hue twists, and without you having to play with sliders. So no tone curve or color shifts. Magic. There's only one problem - this works only with NX2; Nikon have not disclosed to any other raw developer producer how the D-Lighting data in the raw file is encoded, so you only get the magic if you use a Nikon camera and NX2.

    All those extra A/D converter codes

    I've given a bunch of examples showing that the only visible improvement ETTR gives is a result of lowered noise, but what about the argument in the original Reichmann article that the advantage lies in more A/D codes? Well, I can't prove a negative. But the evidence on this is pretty overwhelming:
    1. There is no sign of any visible improvement from additional codes in any of the test images shown here.
    2. In the Nikon community, many DSLRs can shoot either compressed or uncompressed raws. The compressed raws have 683 codes versus the 4096 to 16384 that uncompressed Nikon raws have. Ever since Nikon cameras came out that could shoot both compressed and uncompressed, people have been trying to show a visible difference. They never have, at least without doing really heroic post processing, things like 4 stops of shadow recovery. And bear in mind, ETTR requires careful exposure; it's easier to get the exposure right in the conventional sense than it is to apply ETTR without blowing channels.
    3. In the Leica community the M8 compresses down from 16384 codes to only 256 codes. Similarly, there has been a lot of testing done to try to demonstrate a loss in image quality from that compression. Including by me - see here. Likewise, nobody has succeeded in showing any difference under normal conditions.
    So, unless and until someone can show me an image, normally processed from a real camera that shows a visible advantage that can't be duplicated by switching to a lower ISO, I don't see any evidence to suggest that the theoretical "more levels" advantage translates into any kind of a practical image quality advantage.

    And in conclusion.......

    Here are my conclusions:
    1. There is no advantage to image quality from ETTR that can't be duplicated by selecting a lower ISO, if a lower ISO setting is available. In some situations, such as where there is in-camera noise reduction, ETTR actually increases noise. That's what the practical tests show, and the theory of the case confirms the practical results to be correct.
    2. The only situation where there is an advantage to ETTR is if you're already at the lowest ISO setting your camera offers, and you use ETTR to synthesize a lower ISO. However, given the noise performance of most modern cameras, that advantage is often very small. The test I did here - a small sensor, high pixel count camera - is the best possible scenario for seeing an improvement. Using a modern DSLR, the improvement would be marginal at best.
    3. Any kind of ETTR brings significant disadvantages in the shape of color and tone curve shifts that will have to be repaired in post processing. While these shifts are small, they are easily the equivalent in effect of changing profiles. So, in effect, ETTR negates the advantages that modern raw developers such as Lightroom bring with them.
    Bottom line - ETTR offers improved image quality in only one specific situation: where you use it to synthesize a lower ISO setting than your camera offers. In all other situations, ETTR will only ever decrease image quality.

    Update: see my later post here, as well as the subsequent two ETTR posts, the last of which adds another situation in which ETTR may be useful - for some cameras, if you're willing to overexpose by four (yes, four!) stops.

dcpTool, and the whole Adobe hue twist story, have been getting some attention on various forums, primarily with respect to skin tone rendition on the Canon 5DII - at least some people have found that untwisted and/or invariate profiles are giving them better colors than the usual Adobe profiles.

    There's a long thread on dpreview here.

    And there's also a thread on DCHome here.

    The DCHome thread has some example images showing different renderings by TK Chan, who, judging by his gallery, is a seriously talented photographer.

    Unfortunately, Sourceforge's statistics system is bust (again), otherwise I'd be able to quote some statistics as to how many hits/attention resulted.

  7. One of the pleasures of writing imaging software is that you occasionally get to see some really stunning images that were processed through your software.

    David Ryan has just published some images that I think are just great. According to David, they were digitized back in the early 90's and have been languishing as a pile of CD's ever since. David has now used pcdtojpeg to convert them from Kodak Photo CD format, and put them up as part of his gallery - you can see the collection on David's site here: http://www.davidryanphotography.com. In particular, I think that David's "San Francisco Doors" collection is just great - the use of color and light to turn quite mundane objects into something extraordinary is just amazing. And also a pretty good testimonial to the quality levels that pcdtojpeg can deliver(!)

    You can also see what David has to say about pcdtojpeg on his blog.

  8. So I tested V1.0.0.1 of CornerFix on Windows 7, and guess what - it broke. Specifically, while images were converted, corrected, etc, they weren't displayed.

    A little digging around showed that Microsoft, in their wisdom, have changed the behavior of the .Net 2 Picturebox control. Previously, under both XP and Vista, if the Picturebox control came across a file that it couldn't decode, but could extract a thumbnail from, it would display the thumbnail. Entirely logical and useful behavior, and what CornerFix depended on. That has changed in Windows 7 - all you get now is that Picturebox throws an exception, and displays a little cross icon.

    I posted a query on the .Net part of the MSDN forum, and of course got the helpful suggestion (from a Microsoft employee) that as this was "a Windows 7 problem", I should post on the general Windows 7 forum. Very useful. Let's do anything we can to avoid actually solving the problem.

    So given that Microsoft weren't going to offer any work-around, I got to thinking about how to solve this one myself. My first thought was to generate a JPEG from the thumbnail image data in the DNG file. A bit of work showed that while that was possible, it was going to be pretty messy. However, a better thought occurred to me. A DNG file is actually just a TIFF with additional tags and a sub-IFD structure. In fact, what a DNG consists of is a single IFD which contains the thumbnail, with the main image in raw form as a sub-IFD. Now the thumbnail is actually a perfectly valid TIFF image; it's just tagged via the kcNewSubFileType tag as "1" to show it's a thumbnail. In fact, the TIFF/EP spec requires that "In TIFF/EP files, the 0th IFD should be an image that can be read by a baseline TIFF 6.0 reader." So to get a valid TIFF file, which Picturebox can display, all that's required is to extract the thumbnail in the 0th IFD and relabel it as the main image.
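    In outline, the relabeling amounts to finding the NewSubfileType tag (254) in the 0th IFD and zeroing it. A much-simplified sketch - little-endian ("II") files only, no validation, and the real tiffThumb code does considerably more than this:

        // Sketch: mark the thumbnail in a TIFF/DNG's 0th IFD as the main
        // image by zeroing its NewSubfileType (tag 254) value.
        // Little-endian ("II") files only; no validation.
        #include <cstdint>
        #include <cstring>
        #include <vector>

        static uint32_t rd32(const uint8_t* p) {
            return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
        }
        static uint16_t rd16(const uint8_t* p) { return p[0] | (p[1] << 8); }

        bool promoteThumbnail(std::vector<uint8_t>& tiff) {
            if (tiff.size() < 8 || tiff[0] != 'I' || tiff[1] != 'I')
                return false;
            uint32_t ifd0 = rd32(&tiff[4]);     // offset of the 0th IFD
            uint16_t entries = rd16(&tiff[ifd0]);
            for (uint16_t i = 0; i < entries; ++i) {
                uint8_t* entry = &tiff[ifd0 + 2 + 12 * i];
                if (rd16(entry) == 254) {       // NewSubfileType tag
                    std::memset(entry + 8, 0, 4);  // 1 (thumbnail) -> 0 (main)
                    return true;
                }
            }
            return false;
        }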

    So, that's what I did. Now CornerFix uses Adobe's DNG SDK, so in theory I could have used that, but a quick look showed that the SDK isn't designed to extract thumbnails. In fact, it pretty much ignores them. So I decided to code a completely separate thumbnail extractor in C++, on the basis that it would be useful as a standalone product.

    Because TIFF is a complex format, it turned out to be a more complex piece of code than I'd hoped, but I got it done, and it's now part of CornerFix V1.1.0.0, which shipped a few days ago.

    For those interested in using it, it's just a single C++ file - tiffThumb.cpp - and its associated tiffThumb.h file, with no dependencies on anything else. The API is simple - it takes a file name, and returns a memory buffer with the TIFF thumbnail file in it. I've tested it on both Windows and OS X. The documentation is in the tiffThumb.h file.
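    Usage ends up looking something like the following - note that the class and method names here are hypothetical stand-ins; check tiffThumb.h for the real API:

        // Hypothetical usage shape only - the actual names live in
        // tiffThumb.h; this is not the real API.
        #include "tiffThumb.h"
        #include <cstdio>

        int main() {
            tiffThumb extractor;                   // hypothetical class name
            if (extractor.extract("image.dng")) {  // hypothetical method
                // Write the returned buffer out as a plain TIFF file.
                FILE* f = std::fopen("thumb.tif", "wb");
                if (f) {
                    std::fwrite(extractor.buffer(), 1, extractor.size(), f);
                    std::fclose(f);
                }
            }
            return 0;
        }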

    The file can be downloaded as part of the CornerFix source code, here.


  9. So I've been lazy about blogging, and productive about writing software. Since I last blogged about software releases, I've released not one, but two new open source applications, keychainDD and pcdtojpeg.



    keychainDD
    keychainDD is an OS X keychain manager that delivers very secure drag&drop capability for the OS X Keychain management system. Why I wrote it: OS X has a very robust password management system in Keychain, but in my view, there were two deficiencies:
    1. No drag&drop - you can have Safari (or whatever else) autofill your passwords on web pages, but I've never been very comfortable with that - I'm always suspicious that even a badly spoofed website may be good enough to get an autofill program to cough up my passwords. And copying via cut&paste is vulnerable to keyloggers (yes, there are keyloggers for OS X). So keychainDD only allows you to explicitly drag&drop passwords. Not only is drag&drop a lot more secure on OS X than cut&paste, but keychainDD implements drag&drop in a particularly secure way.
    2. No support for "Memorable Information". A lot of financial sites that I use now require what is effectively a second password, usually one where you have to enter a few selected characters via an on-screen menu or keyboard. This is a protection against keyloggers, and in my view a very good thing. But you have to remember that information, and be able to count characters to get the 5th character, the 3rd, etc. keychainDD supports memorable information, and has a neat character-by-character tray-type display that means you don't have to write down your memorable information to count characters.
    So, anyway, that's keychainDD. Its website is here, and you can download it here.

    pcdtojpeg
    pcdtojpeg converts Kodak Photo CD (PCD) files into JPEG files. I wrote it because basically every other solution out there for doing any kind of PCD conversion just sucks. Either they blow highlights, get colors wrong, only convert at very low resolution, or just don't work at all. I won't mention any names here, but pretty much every other solution out there that I tried doesn't work. And don't just believe me - take a look at Ted Felix's PCD site.

    pcdtojpeg gets the color right, won't blow highlights, and runs under any of OS X, Windows or Linux.

    pcdtojpeg also isn't just a monolithic application; its PCD decoder comes in the form of a proper C++ library that other programs can use.

    Acknowledgments: Although pcdtojpeg shares no code with Hadmut Danisch's hpcdtoppm, and can decode image information that hpcdtoppm can't, pcdtojpeg would not have been possible without the work that Hadmut did in reverse engineering the format in the early 1990's in order to write hpcdtoppm. For those interested, hpcdtoppm converts PCD images into ppm format images.

    pcdtojpeg's website is here, and you can download it here.

  10. So, after a long break due to work pressures, back to the CG Pipelines and CIContext Bugs issue.

    What I ended up doing was to modify my application to allow me to build a test image, and then assign any profile that I wanted to that image, and to any of the subsequent stages. So the points at which I could set a profile were:
    1. The test image - the input image for the pipeline;
    2. The working space of the CIContext that I use for processing;
    3. The output space of the CIContext that I use for processing;
    4. The rendering space that I render the final image into.
    I also provided the ability to either create images via createCGImage or by a drawImage/CGBitmapContextCreateImage sequence. Finally, I built a CIKernel that either clips input RGB values to a predetermined value, or sets RGB values to a predetermined value. This CIKernel allowed me to find out what the actual working space of the filter was.

    Armed with this collection of things to test, I went through pretty much every possible combination of settings that I could find. The results of this were interesting (and, as you will see, somewhat confusing):

    1. On entry to the processing pipeline, RGB values are clipped to the working space of the pipeline, if you set a working space. In other words, the maximum values for R, G and B are set to 1.0 in whatever the working space of the pipeline is. The working space is set by the kCIContextWorkingColorSpace option of the contextWithCGContext method. 
    2. As is claimed by several references on the web, the default working space for a CIFilter is kCGColorSpaceGenericRGBLinear - default in the sense of what you get if you don't specify a working space via kCIContextWorkingColorSpace. That is, Apple's Generic RGB, set to a gamma of 1. However, there's a twist: there is no clipping of the input image on entry to the processing pipeline if you don't set a working space.
    3. So far as I could tell, the output space, as set by the kCIContextOutputColorSpace option, has no effect at all.
    4. Contrary to what has been stated on the Web in a few places, you can set the working space to a space with a gamma not equal to one, and the pipeline and your CIFilter will still operate correctly. Mostly. The "mostly" is that if you try to have two different pipelines in the same program, with working spaces set to different gammas, Core Image will get confused, and sometimes give you the right primaries, but the wrong gamma.
    5. There is, as far as I can tell, never any clipping other than on input to the pipeline - so it's entirely possible, if you're using float values, to get outputs that are well out of the gamut of the working space. You need to clip those, or use integer values, which will be automatically clipped.
    6. Using a drawImage/CGBitmapContextCreateImage sequence to generate an image rather than createCGImage is not reliable for large images - at some point, drawImage/CGBitmapContextCreateImage just seems to run out of memory (or something). Worse, it fails silently - you just get a blank image.
    7. Rendering intents are honored, but only on the input side. In other words, if your input image has a profile with a rendering intent, that rendering intent will be used when converting to the processing pipeline's working space. However, setting the working space (or output space or rendering space) to have an intent doesn't work.
    The upshot of all of this: if your needs for a CIFilter processing pipeline are simple, then you're probably better off just NOT setting the working space. That way, there's no clipping anywhere in the pipeline, and as long as you render the image correctly, all will be well. It's very probable that Apple designed the "default color space doesn't clip" behavior quite deliberately, to allow reasonably simple image processing applications to just do their stuff without worrying about color spaces.
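    To make the clipping behavior concrete, here's a conceptual C++ sketch - not Core Image code - of points 1 and 5 above:

        // Conceptual sketch (not Core Image code): clipping on entry to a
        // pipeline's working space, and why float pipelines can otherwise
        // carry out-of-gamut values all the way through to the output.
        #include <algorithm>

        struct RGB { float r, g, b; };

        // Point 1: with a working space set, input channel values are
        // limited to [0, 1] in that space before any filter sees them.
        RGB clipToWorkingSpace(RGB c) {
            return { std::clamp(c.r, 0.0f, 1.0f),
                     std::clamp(c.g, 0.0f, 1.0f),
                     std::clamp(c.b, 0.0f, 1.0f) };
        }

        // Point 5: a filter working in floats can produce values outside
        // [0, 1]; nothing downstream clips them, so clip explicitly before
        // float output (integer buffers clip implicitly).
        RGB filterOutput(RGB c, float gain) {
            RGB out = { c.r * gain, c.g * gain, c.b * gain };
            return clipToWorkingSpace(out);
        }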

    If however you are in the kind of world where your CIFilter actually needs to be color space aware, then you should set the space to something with a gamma of 1, and it should not have rendering intents (BToA tags or AToB tags). Based on the results above, while non-1.0 gammas work under some conditions, they don't always work.

    A final note: the ICC (International Color Consortium) has some very useful profiles, the "Probe Profiles", on its website. These allow you to unambiguously find out whether rendering intents are being honored.
