1. In part 1 of this series, I promised to show the tone curves for the various raw developers that I'm looking at. Here they are:


    The Lightroom curves, for various settings of brightness and contrast - brightness has by far the most pronounced impact on the image. In Lightroom, to get a linear curve, you need to do three things - set brightness to zero, set contrast to zero, and select "Tone Curve - Flat" from the presets.


    Then the Aperture curve, showing the effect of the Aperture boost setting; the effect is essentially the same as the Lightroom brightness setting. To get a linear curve from Aperture, all you have to do is set boost to zero.


    Finally, the Capture One V4 setting; Capture One is a bit different from Lightroom and Aperture. Where Lightroom and Aperture have slider settings that are non-zero by default (brightness and boost respectively), on Capture One all settings default to zero. However, also by default, Capture One loads a "Film Standard" tone curve, which has a very similar effect to the other two programs' non-zero settings. To get rid of the curve, all you need to do is select the "Linear Response" Curve setting.


    The last set of curves shows a comparison of the default curve for each program, all referred back to an sRGB/2.2 gamma curve to make them comparable. While all the curves are about the same shape, there's a distinct difference in the "aggressiveness" of each curve. Lightroom/ACR adds the most brightness in the mid tones, Capture One the least, and Aperture is about in the middle. We'll come back to this issue later in this series, but the next step is to use these tone curves to calibrate colors from the GretagMacbeth test chart.

    And actually using the tone curves to calibrate colors is quite easy. All that's involved is the following two steps:
    1. Convert the L*a*b* color values for the GretagMacbeth patches to RGB in the color space of the program in question
    2. Use the tone curve to adjust the RGB values in accordance with the curve
    Once we've done this, we have RGB values that are what should be displayed for each program, if the color calibration is correct. I've done this for the three programs I'm testing - a spreadsheet with the values is posted here: http://chromasoft.googlepages.com/calibrationspreadsheets.
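
    The two steps above can be sketched in code. This is a minimal sketch, assuming the tone curve is available as a sampled 1D lookup table - the curve samples below are purely illustrative, not measured from any of the three programs:

```python
# Step 2 of the calibration: push normalized RGB patch values through
# a tone curve stored as a sampled 1D lookup table. The curve samples
# here are invented for illustration only.

def apply_curve(value, curve):
    """Linearly interpolate a normalized value in [0,1] through a
    sampled tone curve (output samples at equal input steps)."""
    n = len(curve) - 1
    pos = value * n
    lo = int(pos)
    hi = min(lo + 1, n)
    frac = pos - lo
    return curve[lo] * (1 - frac) + curve[hi] * frac

# Illustrative "brightening" curve, sampled at 5 points
example_curve = [0.0, 0.35, 0.62, 0.83, 1.0]

rgb = (0.25, 0.50, 0.75)   # RGB for a patch, after step 1
adjusted = tuple(apply_curve(c, example_curve) for c in rgb)
```

    With a real curve recovered from the step wedge measurements, the adjusted values are what the program should display if its color calibration is correct.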

    In the next post in this series, I'll take a look at how each program compares.

  2. Just before the Christmas break, I decided to spend some time over the holidays comparing Adobe's Photoshop Lightroom, Apple's Aperture and Phase One's Capture One raw developer/digital asset management products. By way of background, I've been a Lightroom user since the early betas, and really like Lightroom's workflow management. But I've never been happy with Lightroom's color rendition, and have never had the time to really dig into why I wasn't getting what I wanted. Just before Christmas, I was sufficiently frustrated that, despite what I like about Lightroom, I was in "there's got to be a better way than this" mode - ready for change.

    Now one person's great color rendition is another person's nightmare, so there isn't any point in trying to just play with the sliders till something nice comes out - at least not for me. Also, on some products there are eighteen sliders, all of which interact. I accept that there may be people out there who can look at an image, move a few sliders, and get what they want, but that isn't me. So what I've chosen to do is to look at color rendition in a more "scientific" way, and ask three questions:

    1. How close is the default rendition of each product to a 24-patch GretagMacbeth color checker?
    2. How easy is it, using each product, to calibrate the rendition to match as exactly as possible the theoretical values of the GretagMacbeth test chart, as printed on the instruction sheet that comes with the chart?
    3. How usable is the calibration that I've created - is it easy to transfer to other images, how sensitive is it to changes in exposure settings, etc.?

    While calibrating to a GretagMacbeth chart doesn't mean I'll have "good" color, at least it's a consistent starting point. And yes, I accept that questions 1 and 2 are fairly simple and objective, while question 3 starts to get more subjective.

    As perhaps I should have expected, this turned out to be a far more complex process than I'd thought, and eventually pulled me into comparing the three products quite a bit more broadly - for example, as regards the performance of their Bayer interpolation engines.

    For the record, the software versions used for this comparison were:

    • Lightroom 1.3.1, with Camera Raw 4.3.1
    • Aperture 1.5.4 and 2.0
    • Capture One 4.0.14154.14152 and 4.0.1.14900.14887

    Now the first issue that comes up when trying to do this is a simple one - given that, for example, the skin patch on a GretagMacbeth chart has the L*a*b* color values (65.711, 18.13, 17.81), what does that actually mean in terms of the RGB values we should expect to get from the cursor readout in each program? A little experimentation will show that this isn't a simple question.
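
    For reference, the basic conversion from L*a*b* to sRGB values can be sketched as follows. This assumes a D65 white point throughout for simplicity; chart values are normally quoted relative to D50, so a strictly correct conversion would add a chromatic adaptation step:

```python
# L*a*b* -> sRGB, as a rough guide to what the cursor readout "should"
# show. Simplification: D65 white is assumed at every stage; a fully
# correct conversion would chromatically adapt from the chart's D50.

def lab_to_srgb(L, a, b):
    # L*a*b* -> XYZ (D65 reference white)
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16 / 116) / 7.787
    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # sRGB transfer function, then scale to 8 bits
    def encode(c):
        c = min(max(c, 0.0), 1.0)
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return round(c * 255)
    return encode(r), encode(g), encode(bl)

# The skin patch from above - comes out a warm tone, red > green > blue
print(lab_to_srgb(65.711, 18.13, 17.81))
```

    Of course, this only answers what the values mean in sRGB - each program's actual readout space is different, which is exactly the issue discussed below.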

    Actually, there are two issues with trying to calibrate imaging software - first, in what units is the readout, and second, what adjustments are being applied to the image. Typically, when a raw developer loads a TIFF file, it does so without adjustment, but it usually applies some kind of tone curve when loading a raw file.

    The readout units for the programs that I've calibrated are:

    1. Lightroom: Melissa RGB. Melissa RGB is the combination of the ProPhoto primaries and the sRGB gamma curve. It's also known as "bastard RGB", as it's the bastard child of ProPhoto and sRGB.
    2. Aperture: Wide Gamut. (Note: this was correct at the time this post was originally written, but see the comments below - it's now Adobe RGB. However, the color rendition information is still correct.)
    3. Capture One: Capture One uses whatever color space is set as its output space, so you can set it to any ICM profile you have. For a set of ICM profiles you can use, see my ICM Profiles page.
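
    Since Melissa RGB uses the sRGB gamma curve, the encoding behind Lightroom's readout values can be sketched as the standard sRGB transfer function (the ProPhoto primaries only affect the matrix stage of the conversion, not this curve):

```python
# The piecewise sRGB transfer function - the "gamma curve" half of
# Melissa RGB. Linear light in, encoded value out, both in [0,1].

def srgb_encode(c):
    """Forward sRGB transfer function: linear -> encoded."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """Inverse transfer function: encoded -> linear."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# 18% linear grey reads out at about 46% in a gamma-encoded space
print(round(srgb_encode(0.18) * 100))
```

    This is why readout values from a gamma-encoded space look so much higher than the linear values the sensor actually recorded.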

    The adjustments made by default are more complex - generally, each raw developer has its own tone curve, and also its own default brightness settings.



    This is the graph of the ACR 3 default tone curve, as extracted from the Adobe DNG toolkit - it shows the flattening at the top and bottom of the curve, and also the default brightness setting.



    To get real tone curves from the packages I'm looking at, I've used the monochrome step wedge reference image (shown above, and available in DNG format on my Reference images page) to work out what each tone curve is.
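
    The basic approach can be sketched as follows - the wedge's patches have known linear values, and reading out what the program shows for each patch gives points that sample the curve. The "measured" values below are invented for illustration:

```python
# Recovering a tone curve from a step wedge: each patch has a known
# linear input value; the program's readout for that patch gives the
# corresponding output. The (input, output) pairs sample the curve,
# and interpolation fills in between them. Outputs here are made up.

wedge_inputs = [0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]
measured_outputs = [0.0, 0.28, 0.45, 0.58, 0.69, 0.78, 0.86, 0.93, 1.0]

def curve_lookup(x, xs, ys):
    """Piecewise-linear interpolation through the measured samples."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            frac = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + frac * (ys[i + 1] - ys[i])
    return ys[-1]

# A brightening curve: linear 0.5 comes out lifted, at about 0.69
print(curve_lookup(0.5, wedge_inputs, measured_outputs))
```

    More patches on the wedge give a finer sampling of the curve, which matters most in the shadows where the curves differ sharply.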

    In part 2, I'll show those curves.


  3. At long last, I've done something I've had on my to-do list for a long time now, which is to create a web space where I can put various files that might be useful to other people. It's at http://chromasoft.googlepages.com/. The first things that I've posted there are two papers I wrote several years ago. The first goes into the mathematics of color spaces as used by ICC profiles, and color conversion between color spaces. Pretty much any serious imaging software package will use color space descriptions like this, either implicitly, or explicitly, as is the case for Phase One's Capture One product.

    Although the papers are largely written from the point of view of color conversion on monitors, for display purposes, all the maths is exactly the same as for cameras. The second paper shows how the shape of the CIE color space can be modeled in three dimensions. These two documents were originally created in MathCad, which is a mathematical modeling package from PTC. It allows the document to contain live mathematical equations, so that you can check that your maths actually works, and foots to real answers.

    Both of these papers are available either in Adobe PDF form or as MathCad documents. The MathCad documents are live, so real calculations can be made with them. However, they require that you have MathCad.

  4. Well, I learned something new this morning, thanks to a question that Baxter Bradford asked over on the LUF. What he asked was (in effect) whether the various Adobe raw products (Adobe Camera Raw, Lightroom) would pick up on a changed camera profile in a DNG file. This was in regard to the Leica M8, whose camera calibration data changed after its IR sensitivity problems were discovered.

    The way color works in a DNG file is that there are two pairs of color matrices:
    1. ColorMatrix1 and ColorMatrix2. These two provide color calibration at two different color temperatures; to set an intermediate temperature, a linear interpolation is used.
    2. CameraCalibration1 and CameraCalibration2. These are used to provide color calibration that is specific to the individual camera, rather than to the camera model.
    The color temperature adjusted ColorMatrix and CameraCalibration matrices are multiplied together to get an overall color conversion matrix. In most DNGs the CameraCalibration matrices are not used (set to an identity matrix, technically) - the only DNGs that I've seen using them are from an Olympus E-3.
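
    As a sketch of how a DNG reader combines these matrices (simplified - per the DNG specification the interpolation is linear in reciprocal color temperature, and the full pipeline has more stages; the matrices below are invented, not real camera data):

```python
# Sketch of the DNG color matrix handling: blend ColorMatrix1 and
# ColorMatrix2 for the chosen white balance temperature, then multiply
# in the per-camera CameraCalibration matrix. Matrices are 3x3 lists.

def interpolate_matrix(m1, m2, t1, t2, t):
    """Blend the two ColorMatrix tags for color temperature t,
    linearly in reciprocal temperature, clamping at the endpoints."""
    w = (1 / t - 1 / t2) / (1 / t1 - 1 / t2)
    w = max(0.0, min(1.0, w))
    return [[w * m1[i][j] + (1 - w) * m2[i][j] for j in range(3)]
            for i in range(3)]

def matmul(a, b):
    """Plain 3x3 matrix multiply."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Invented example matrices - not from any real camera
cm1 = [[0.9, 0.0, 0.1], [0.0, 1.0, 0.0], [0.1, 0.0, 0.9]]    # e.g. 2850K
cm2 = [[1.1, 0.0, -0.1], [0.0, 1.0, 0.0], [-0.1, 0.0, 1.1]]  # e.g. 6500K
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # typical CameraCalibration

cm = interpolate_matrix(cm1, cm2, 2850, 6500, 5000)
xyz_to_camera = matmul(identity, cm)  # CameraCalibration * ColorMatrix
```

    With the identity CameraCalibration that most DNGs carry, the combined matrix is just the interpolated ColorMatrix - which is why overriding the ColorMatrix tags was a reasonable way to test what ACR/LR actually read.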

    Up till this morning, I would have automatically responded to Baxter's question to the effect that ACR and LR read the color matrices in the DNG, and since Leica modified those after the IR filter problems, ACR/LR's color calibration will in effect have changed to match the IR filter adjusted sensitivity. However, I'd never actually checked that. So today I did, by overriding the ColorMatrix tags in a test DNG, which should give weird color. And it made no difference, which was quite unexpected. Then I also overrode the EXIF camera name to "unknown", and then, guess what, weird color, as expected. After a bit more digging, what I found is:
    1. If ACR/Lightroom recognizes the camera name in the DNG, it ignores the ColorMatrix matrices, but still uses the CameraCalibration matrices.
    2. If ACR/Lightroom does not recognize the camera name, it uses both the ColorMatrix matrices and the CameraCalibration matrices as contained in the DNG.
    So the bottom line is, even if ACR/LR are reading a DNG, if either program sees a camera name it recognizes, you will get an Adobe Camera Raw color calibration, not what's in the DNG. Only if they don't recognize the camera name will they use the DNG values. However, ACR/Lightroom always honor the CameraCalibration matrices.

    To confirm this, I took a look inside the LR/ACR code, and "Leica M8 digital Camera" is indeed listed in there.


  5. As part of my journey into digital imaging, I found myself writing CornerFix, which can be found at http://sourceforge.net/projects/cornerfix/. The image is a screen shot of the Mac version.

    CornerFix corrects for color dependent vignetting in digital images, which shows as cyan colored corners, as in the image on the left hand side of CornerFix's main window in the screen shot. The image in the screen shot comes from an M8 with a CV12 lens and IR filter on it. All digital cameras are to some extent subject to this; current generation sensors are highly IR sensitive, so there needs to be an IR filter somewhere. But the combination of sensor and IR filter also cuts into the red part of the spectrum, and does so in a way that depends on the angle through which the light bends as it travels to the sensor. So red gets cut more in the corners. Most DSLRs do this a bit - take a picture of a white wall and you will probably see it, although many cameras correct internally to a greater or lesser extent.
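
    Conceptually, the correction is a radial gain applied mostly to the red channel. This sketch is just the basic idea, not CornerFix's actual algorithm, which builds its correction profile from a reference image:

```python
import math

# Cyan corners are red falling off towards the edges, so the simplest
# conceptual fix is a gain on the red channel that rises with distance
# from the image center. CornerFix's real correction is derived from a
# reference image of an evenly lit white surface; this is illustrative.

def corner_gain(x, y, width, height, strength):
    """Gain rising with the square of the distance from center,
    reaching 1 + strength at the extreme corners."""
    cx, cy = width / 2, height / 2
    r_max = math.hypot(cx, cy)
    r = math.hypot(x - cx, y - cy)
    return 1.0 + strength * (r / r_max) ** 2

def correct_pixel(rgb, x, y, width, height, strength=0.25):
    r, g, b = rgb
    gain = corner_gain(x, y, width, height, strength)
    return (min(r * gain, 1.0), g, b)  # boost red only, clip at white

# A cyan-ish pixel in the extreme corner gets its red channel lifted
print(correct_pixel((0.5, 0.6, 0.6), 0, 0, 4000, 3000))
```

    The hard part in practice is that the right gain profile depends on the lens, the aperture, and the light - which is why a one-size-fits-all in-camera correction falls short.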

    Leica's M8 has a particular problem in this regard. Historically, one of the advantages that rangefinder cameras had was that the back of a rangefinder lens can be a lot closer to the film surface than is the case for a film SLR - the SLR needs space for the mirror, which a rangefinder doesn't have. So in the film world, rangefinder lenses had a lot more design freedom than SLR lenses - wide-angle lenses that on an SLR had to be reverse telephoto designs could be normal designs on a rangefinder. Ironically, in the digital world, this turns into a disadvantage. Because the back of the lens can be a lot closer to the film plane, the angle is more acute, and you get a worse cyan corner problem than a DSLR would.

    The M8 can correct for this problem itself, but there are two issues for users - firstly, your lens must be coded, which means that it must be a Leica lens, either new enough to have been coded when it was manufactured, or one that has been sent to the factory to be upgraded. So anyone with a non-Leica lens, or one that can't be upgraded, is out of luck, unless they somehow code the lens themselves. The second problem is that the M8's correction is "one size fits all"; it's designed for average situations, and sometimes doesn't do well in unusual lighting.

    CornerFix allows the cyan corners problem to be corrected for any lens, in more or less any lighting situation, by post processing the camera's image file. The right hand side of the main window in the screen shot shows the corrected image. CornerFix is available for the Mac and for Windows. It's free and open source, released under the GPL. Note however that the image must be a DNG formatted file - CornerFix doesn't work with TIFF or JPEG files.

    For those interested in the technical details, the core of Cornerfix, which is common to both the Mac and PC versions, is written in "pure" C++, and uses the Adobe DNG toolkit to decode files. The GUI of the Windows version is written in C++/.Net, and the GUI of the Mac version in Cocoa. For those REALLY interested in the technical details, as CornerFix is GPL, you can download all the source code from site above.

  6. Over the past year or so, I've been involving myself more and more in the world of digital imaging. Photography isn't new to me - I was using a rangefinder while in school, developing and printing my own work. Neither is technology new to me - once upon a time, I co-founded a start-up that built data acquisition systems, and ran all its R&D and engineering functions. Which is essentially what a digital camera is - a sensor, analog to digital converters, and a way of storing and displaying what you get. And I've been programming on and off since then - not as a core part of what I do, but to support what I do. Things like simulations of various parts of the global financial system.

    What I've found is that digital imaging is in a fascinating space; it's only just - say over the past 2-3 years - got out of what I call the "Bear" phase. Not bear in the financial markets sense, but in the sense of "people look at a dancing bear not because of how well it dances, but because it dances at all". The technology has now reached the point that it's mostly better than film. But there's still lots of innovation coming down the track. Up until recently, digital cameras concentrated on just being more convenient than, and as good as, film. So digital imaging was focussed on being "just like film", only better. That's what's changing now. Things like sensor resolution - the "more megapixels race" - are close to done; in the high end DSLR format, we're up against the limits of lenses and basic physics. So now what's starting to happen is things like the "d-lighting" on Nikon's new generation DSLRs. Something that has no analog in the film world. Similarly, some of the capabilities being built into raw developers have no analog in the darkroom - e.g., Lightroom's "smart vibrance", a vibrance control that can recognize skin tones, and leave them untouched while the colors around the skin are made brighter.

    Along the way, I also wrote CornerFix, which I talk about more in another post. That re-involved me in Windows C++ programming, as well as involving me for the first time in programming for the Mac, which was a journey in and of itself. So, while I'm active on some of the photography forums - e.g., the LUF (www.l-camera-forum.com) - this is where I talk about some of the deep technology issues, some of the really broad "where are we going" stuff, or about some of the programming that I do. So that's what this blog is about - a mix of deep imaging technology, what's happening to the photography market, and stuff about Windows and Mac software development.

About Me
Author of AccuRaw, PhotoRaw, CornerFix, pcdMagic, pcdtojpeg, dcpTool, WinDat Opener and occasional photographer.