
CMYK editing

Jeremy Jachym

pro member
I've been doing my color editing in the LAB space, but have recently been experimenting with CMYK and like what I've found. I've read some people's thoughts on CMYK editing, and it seems that fewer channels to edit is seen as more appealing than more. Personally, I like having a fourth curve option.

I know there have been discussions about whether converting from RGB to LAB is destructive, and as far as I'm concerned the answer is a quasi "yes", but it's so minor that it's not an issue. As I understand it, LAB is the subset of RGB, so all the colors in the RGB gamut will be found in the LAB gamut as well; but CMYK's gamut is smaller than both, so in theory converting an RGB or LAB file to a CMYK color space would clip quite a bit more color?

I've been doing some edits and comparing the histograms afterwards, and from what I can tell it seems that, rather than being destructive, it's the opposite. What are the known pitfalls of CMYK editing, and how can I see the proof for myself? Thank you.
 
What are the known pitfalls of CMYK editing, and how can I see the proof for myself?

Unless you are processing for a specific CMYK output process, you will lose gamut accuracy when converting to, and again when converting from, CMYK. The more saturated colors in either colorspace cannot be mapped into the other colorspace without either losing colors or mapping all the other colors into a narrower space, with a loss of color differentiation.

It is possible to use an artificial 'wide gamut' CMYK colorspace for editing purposes, but that is IMHO a bit of a strange construct since in many cases the file will need to be converted back to RGB for output anyway (unless process printing is the goal).

How badly you will be hurt, and how visible it will be, depends on the image/subject matter, but you typically need to watch for the pure primary (RGB) or secondary (CMY) spectral colors, and for posterization in smooth gradients. You can also try this target: assign an input colorspace, convert to a different (editing) colorspace, then convert to an output colorspace, and see if you have issues.

Bart


P.S. Also try this target with the conversion from RGB to LAB, and back to RGB. It's not a lossless (double) conversion.
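For anyone who wants to see the clipping numerically rather than with a test target, here is a minimal sketch of the kind of round-trip test described above. It assumes Pillow's ImageCms bindings and a press CMYK profile on disk; the profile filename is only a placeholder, substitute whichever press profile you actually target.

```python
# Hedged sketch: round-trip a saturated sRGB color through a press CMYK profile
# and back, then compare. Requires Pillow built with LittleCMS support; the CMYK
# profile path below is a placeholder for whatever profile you really use.
from PIL import Image, ImageCms

srgb_profile = ImageCms.createProfile("sRGB")
cmyk_profile = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")  # placeholder path

im = Image.new("RGB", (1, 1), (0, 255, 0))  # saturated green, outside most press gamuts
as_cmyk = ImageCms.profileToProfile(im, srgb_profile, cmyk_profile, outputMode="CMYK")
back = ImageCms.profileToProfile(as_cmyk, cmyk_profile, srgb_profile, outputMode="RGB")

print("original RGB:", (0, 255, 0))
print("after CMYK round trip:", back.getpixel((0, 0)))
```

With the default (perceptual) intent, an out-of-gamut green like this typically comes back noticeably desaturated, which is exactly the loss of color differentiation described above.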
 

Andrew Rodney

New member
Unless you are processing for a specific CMYK output process, you will lose gamut accuracy when converting to, and again when converting from, CMYK. The more saturated colors in either colorspace cannot be mapped into the other colorspace without either losing colors or mapping all the other colors into a narrower space, with a loss of color differentiation.

Exactly. CMYK is an output color space. It's based on a specific CMYK printing device and should only be used when you're done with the image and wish to print it. I can see no reason why anyone would want to convert to an output-specific space unless they intend to send those values to the device for printing. It only adds more steps to the process and hoses a lot of useful data.

RGB working spaces are totally different. They are not based on any specific RGB output device; they are designed for editing our images PRIOR to conversion to an output space. As for Lab, I've yet to be convinced that it too is useful or necessary, especially when we have the Luminosity blend mode in Photoshop to produce roughly the same effects without converting into and out of a huge color space based on nothing more than human vision. Lab is not a subset of RGB. It's a totally synthetic color space, based on math used to define human vision. You can hose a great deal of data in 8-bit, so this should be avoided.
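To make the "roughly the same effects" point concrete, here is a rough illustration of what a Luminosity-style result looks like. This is not Photoshop's actual blend math, and Photoshop achieves it without leaving RGB; the sketch approximates the end result with an L* swap purely because that is easy to show in code. File names and the use of scikit-image are assumptions.

```python
# Rough illustration (not Photoshop's blend formula) of a Luminosity-style
# result: keep the color of the original, take the lightness from the edited
# copy. Approximated here with an L* swap for clarity only.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, lab2rgb

base = io.imread("original.jpg") / 255.0            # hypothetical source image
edited = io.imread("curves_tweaked.jpg") / 255.0    # hypothetical tonal edit of it

lab = rgb2lab(base)
lab[..., 0] = rgb2lab(edited)[..., 0]               # lightness from the edit, color from the base

result = (np.clip(lab2rgb(lab), 0.0, 1.0) * 255.0).astype(np.uint8)
io.imsave("luminosity_blend_approx.png", result)
```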

In reality, based on modern workflows (which progressively start out as Raw captures), do all the heavy lifting of global tone and color rendering work in a Raw converter. There should be little if any need to go into CMYK or Lab. You should attempt to produce the best initial RGB from this rendering of the Raw data. The idea that you should somehow 'fix' this in Photoshop using Lab or CMYK is 20th-century thinking and just not necessary these days IF you start with Raw data. Nor should you have to do this if you're starting out with good RGB data from a scanner. Do all the global tone and color corrections at the scan stage. Feed Photoshop the best RGB data and forget all this turd polishing using Lab or CMYK. It's old-school thinking.
 

nicolas claris

OPF Co-founder/Administrator
In reality, based on modern workflows (which progressively start out as Raw captures), do all the heavy lifting of global tone and color rendering work in a Raw converter. There should be little if any need to go into CMYK or Lab. You should attempt to produce the best initial RGB from this rendering of the Raw data. The idea that you should somehow 'fix' this in Photoshop using Lab or CMYK is 20th-century thinking and just not necessary these days IF you start with Raw data. Nor should you have to do this if you're starting out with good RGB data from a scanner. Do all the global tone and color corrections at the scan stage. Feed Photoshop the best RGB data and forget all this turd polishing using Lab or CMYK. It's old-school thinking.

Yup!

I stopped using LAB a few months ago… pppffffffeeeeeeeeww! I'm modern ;-()
lol, but very true (as experienced: the more done in RAW = the less needed in PS/CS).

We have to recognize that RAW converters (and our ability to use them as well) are doing better and better.

Now, when I have to use an old file, I prefer to reopen the RAW rather than the saved 16-bit PS file; it's incredible how much more DR and detail I get…

Save and back up your RAWs (or DNGs if you wish) and trash the PS files! (Kidding, but not by much.)

Another reason, my dear friend Asher, to do your photography in camera (framing, etc.…): otherwise you'll have to redo all the creative post-processing one day, and everyone knows that creative post-processing cannot be redone,
because one's mind has changed in the meantime ;-)
 

Jeremy Jachym

pro member
Excellent example, Bart. I used your target and the printer evaluation image from outbackphoto and saw how ugly things can get, but as for the LAB conversion I could see no visible evidence of color degradation. *Update*: viewing the image at 400% magnification I could see slight noise introduced as a result of the double conversion, which has given me food for thought.

Thank you Bart, Andrew & Nicolas. Kind regards,

JJ
 

Andrew Rodney

New member
Editing in LAB: I have nothing against the LAB color model. However, there is a group of people who feel that editing in LAB is the only way to accomplish specific corrections, making it sound like a macho editing space. It is true that there are a few correction techniques that rely on a document being in the LAB color space. The question becomes whether it's worth taking the time, or worse, producing image degradation, to convert from a working space to LAB and back. Every time a conversion to LAB is made, the rounding errors and severe gamut mismatch between the two spaces can account for data loss, known as quantization errors. The amount of data loss depends on the original gamut size and gamma of the working space. For example, if the working space is Adobe RGB, which has 256 values available, converting to 8-bit LAB reduces the data down to 234 values. The net result is a loss of 22 levels. Doing the same conversion from ProPhoto RGB reduces the data to only 225 values, producing a loss of 31 levels.

Bruce Lindbloom, a well-respected color geek and scientist, has a very useful Levels Calculator, which allows you to enter values to determine the actual number of levels lost to quantization (see the “Calc page” at http://www.brucelindbloom.com). If you do decide to convert into and out of LAB, do so on a high-bit (16-bit per channel) document.
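For anyone who wants to reproduce the idea behind the calculator, here is a minimal sketch assuming scikit-image. It uses sRGB rather than Adobe RGB or ProPhoto RGB, so the count it prints will differ from the 234/225 figures above; the method, not the exact number, is the point.

```python
# Minimal sketch of the quantization-loss idea: run the 256 gray levels of an
# 8-bit ramp through an 8-bit LAB intermediate and count how many distinct
# levels survive. scikit-image (sRGB <-> LAB) is an assumption; the exact count
# depends on the working space, which is what Lindbloom's calculator formalizes.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

ramp = np.arange(256, dtype=np.float64) / 255.0
rgb = np.stack([ramp, ramp, ramp], axis=-1)[None, :, :]   # a 1 x 256 gray ramp

lab = rgb2lab(rgb)
# Store L the way an 8-bit LAB document does (0..100 mapped onto 0..255), then restore.
lab[..., 0] = np.round(lab[..., 0] / 100.0 * 255.0) / 255.0 * 100.0
# (a* and b* stay near 0 for a neutral ramp, so their quantization is ignored here.)

back = np.round(lab2rgb(lab)[..., 0] * 255.0)
print("distinct levels after the LAB round trip:", np.unique(back).size, "of 256")
```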

Another problem with LAB is that it has a huge color gamut, and if you're working in 24-bit images, you have three channels, each containing 256 possible values. With large-gamut color spaces, these 256 data points are spread further apart than in a smaller-gamut space, which can result in banding in certain kinds of imagery, like smooth blue skies. LAB is derived from CIE XYZ (1931), which defines human perception; it represents all the colors we can see, and its gamut is huge. Not as many tools or operations in Photoshop can operate on images in LAB, and the numeric scale isn't very intuitive for users to work with. Some applications we might want to work with can't accept a LAB file, so an additional conversion is usually necessary.
 

Jeremy Jachym

pro member
"It is true, there are a few correction techniques that rely on a document being in LAB color space."

Andrew, out of curiosity, would you please cite an example? Thanks.

JJ
 

Andrew Rodney

New member
An old, and sometimes useful one is to get just the luminosity channel for other kinds of work. For example, photographer Greg Gorman has a really nice color-to-B&W conversion which requires that you get just the L* channel from a color doc to use for the conversion.
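As a rough sketch of just that channel-extraction step (not Gorman's full recipe), assuming scikit-image and hypothetical file names:

```python
# Pull only the L* (lightness) channel out of a color image as the starting
# point for a B&W conversion. File names and scikit-image are assumptions;
# this is only the channel-extraction step, not a full conversion recipe.
import numpy as np
from skimage import io
from skimage.color import rgb2lab

img = io.imread("portrait.jpg") / 255.0            # hypothetical color source
L = rgb2lab(img)[..., 0]                           # L* runs 0..100
bw = np.round(L / 100.0 * 255.0).astype(np.uint8)  # rescale to an 8-bit grayscale image
io.imsave("portrait_L_only.png", bw)
```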
 

Jeremy Jachym

pro member
"An old, and sometimes useful one is to get just the luminosity channel for other kinds of work."

I'm a huge fan of being able to tweak the black channel. Would you or anyone else know how to incorporate black-channel adjustment into an RGB file, or how to mimic the effect in an RGB color space?

JJ
 

Andrew Rodney

New member
I'd think a 'Selective Color' adjustment layer would do the trick...

Plus, black channel generation is so specific to the CMYK profile. When you hear folks recommend this, they often do not define the CMYK conversion, which plays a huge role in that black generation (after a huge gamut clipping of the rest of the image). Lest we forget why we need black in CMYK! It's all about the impurity of the inks. In a perfect world, we'd only need CMY to make a nice black, and in fact many printers can do this. Just not those we call presses.
 

Jeremy Jachym

pro member
Thanks for the suggestion Rene. The 'Selective Color' tool is good, but it doesn't offer the control I'd like. The 'color range' option appears to be what I was looking for. I can select the shadows and then use a curve on the selection.

JJ
 

Asher Kelman

OPF Owner/Editor-in-Chief
Another problem with LAB is that it has a huge color gamut, and if you're working in 24-bit images, you have three channels, each containing 256 possible values. With large-gamut color spaces, these 256 data points are spread further apart than in a smaller-gamut space, which can result in banding in certain kinds of imagery, like smooth blue skies. LAB is derived from CIE XYZ (1931), which defines human perception; it represents all the colors we can see, and its gamut is huge. Not as many tools or operations in Photoshop can operate on images in LAB, and the numeric scale isn't very intuitive for users to work with. Some applications we might want to work with can't accept a LAB file, so an additional conversion is usually necessary.

Wouldn't a 3D adjustment limiter, like the "curves" adjustment limits in Lightroom, protect the image from accidental damage in LAB? It would mean that the 16- or 64-bit files would come into LAB from the camera, perhaps, and only from the very best cameras.

I have seen banding even in Phase One images of blue sky and of non-textured aluminum or steel sheet-covered buildings. I wonder whether that might arise from the sort of stretching of limited data across a large gamut that you refer to?

So why are we in a position where use of LAB is so risky? Is it more that programmers have gotten too comfortable with Adobe RGB?

Asher
 

Andrew Rodney

New member
Wouldn't a 3D adjustment limiter, like the "curves" adjustment limits in Lightroom, protect the image from accidental damage in LAB? It would mean that the 16- or 64-bit files would come into LAB from the camera, perhaps, and only from the very best cameras.

Not sure I understand. You're saying parametric curves that don't let you move them such that you posterize the image? Yes, that's helpful, but the damage, in 8-bit, is simply in the conversion, even before you edit the data.

So why are we in a position where use of LAB is so risky? Is it more that programmers have gotten too comfortable with Adobe RGB?

Lab is a huge color space based on nothing we can use to capture or output the file. Adobe RGB (1998) is much smaller; plus, you can encode directly into that space from a Raw file (or into an even larger one, like ProPhoto RGB). I don't think it's possible to go directly to Lab from Raw. Not sure; I suspect you have to start in some RGB color space. Then there's the question of why even mess with Lab when you're doing the big edits globally in a Raw converter. In fact, one could question why anyone would need to do any harsh edits on rendered pixels these days, considering the controls we have with either good scanning software or Raw converters. There's a mindset among some (which I find silly and counterproductive) that says: don't mess with the scanner settings or Raw converter, and fix (color correct) the data in Photoshop. This is usually proposed by those who like to make complex correction tutorials using Photoshop. GIGO: Garbage In, Garbage Out applies. When you have good scanning software, you can correct at the capture stage, so to speak, in high bit, using (hopefully) good tools, and do all this faster with far less data loss.

With Raw, it's absolutely silly to fix global color and tone anywhere but in the converter. You're dealing with linear, high-bit data, which has some advantages. You're not opening 12 million (or more) pixels, which is really slow. You're using instructions to define how you'd like these new, rendered pixels to appear, and if you change your mind, you simply change the instructions, which are really just text files. You take the Raw data source and render new pixels from scratch. This truly is non-destructive image editing, because it's not really editing, it's creation. The opposite is taking pre-baked pixels that have defined numeric values and altering them to fix the issues. That's slow. It always introduces data loss, hence why many recommend doing so in high bit.

Rendering is pixel creation. Color correction is taking existing pixels and fixing them by altering their values. We've been doing the latter for a good 17 years and, unfortunately, many people see this as the only means of producing pleasing and acceptable data. But with Raw and metadata editing, it's a new ball game.

I fail to see why anyone would want to screw around with CMYK or Lab to globally fix an image unless the Raw is gone and can't be used.

For localized corrections, OK, there's Photoshop. Of course, I guarantee you WILL see local corrections in Raw processors. LightZone does it now. It may not be as refined as working at a pixel level in Photoshop, and may never be. But we're getting closer to doing as much work at the rendering stage as we can, leaving Photoshop for the job it's best suited for.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Andrew,

Should we be able to go back even further in this generation of pixels? Global changes have already been made in the camera, is that not true?

One thing we are still missing is open access to the firmware in our cameras, so smart people can start the process right where the data chain comes from the sensor. After all, Hasselblad and Leica already tweak the files for their particular lenses, even on the RAW images!

Asher
 

Jeremy Jachym

pro member
Although I agree in theory with Andrew that it makes most sense to do global editing on an image in the RAW converter, I've mainly been using it as a loose starting point, because I haven't been impressed with the tools available.

From time to time I poke my head into the world of RAW converters. I'm using a 1Ds and have made comparison tests between Capture One (using Magne Nilsen's low-sat profile), DPP, ACR, and RAW Developer. There's something to be said for every one of them, but I've found myself relying on DPP most of the time. DPP's tools are 'crap', for lack of a better word, but I find the color and contrast to be the most pleasing right out of the box.

I've been hearing the buzz around Adobe's Lightroom (LR) for a while now, but figured, considering my take on RAW converters, that I was happy with what I had. I've always been attracted to the idea of being able to work on images in a non-destructive environment, but without tools that make sense to me I've relied on PS for PP work. From what I can tell, LR looks like a new generation of RAW converter. It seems theory has made itself available to practice.

JJ
 

Andrew Rodney

New member
Andrew,

Should we be able to go back even further in this generation of pixels? Global changes have already been made in the camera, is that not true?

Well, there are pixels. They are grayscale and don't resemble an image or anything we'd look at and say is an image. This is pre-demosaiced data. It's about as Raw as I want to mess with.
 

Andrew Rodney

New member
RAW Developer is a very, very nice package for the quality of its Raw conversions. The UI isn't all that slick, nor is it a full workflow product like Lightroom, which is my converter of choice. But it (LR) shares the same processing engine as Camera Raw, so if you're not a fan of that processing engine, you may not love Lightroom. That said, I'm continuously impressed with Lightroom.
 

nicolas claris

OPF Co-founder/Administrator
I completely agree with, and subscribe to, Andrew's assessments above.

I would like to add that the UI, and the experience each one gets with one's preferred raw converter, are really important.
Some years ago (can't remember which version), when I adopted C1, I found the UI quite unintuitive, but I did like the results of the conversion. Now the UI of 3.7 is a piece of cake to me :)

Each time I try a new converter (including C1 Vß4), such as LR or ACR (post V4), I find some interesting features.
Then I discover that, if I dig a bit more into C1, I can get almost the same pleasing results (OK, not all, e.g. anti-vignetting).

This is not to say that C1 is any better than the others; it is to say that, once you've made your choice, dig, dig, learn, and push the limits of your RC, and you'll be amazed at how brilliant they all are.

In fact, knowing a lot about your RC is a time saver! If you know how to get what you want from your RC, you won't have to do it for EACH file in PS…

I guess that now the real difference between RCs will be their ability to incorporate:

• Anti-vignetting - done in LR and ACR
• Better CA treatment - almost done in LR and ACR
• Lens corrections (barrelling etc., à la PTLens) - done AFAIK in Bibble
• Shadow/lightness control à la PS
• Dust cloning - done in LR
• Midtone contrast - done in LR and ACR (Clarity)
• Better USM - not that good in LR and ACR, not bad in C1 Vß4
• Good enlargement possibilities up to 400% - not bad in C1 Vß4

• And local treatment, as pointed out by Andrew - maybe in the next generation?

Then we'll do 99% of the job PRIOR to converting the pixels and will produce neat files…

PS/CS will be used for different aspects such as photomontage or purely artistic interpretations, and that is a huge space, a neighbor to photography…
 

Michael Fontana

pro member
I second Nicolas's findings, using 3 different RCs depending on the light source and its contrast:

For shots in sunshine, I prefer C1 (LE, with Magne's profile though), as the shadows and 3/4 tones show up fine with just the curve, without 29 additional slider adjustments.

RAW Developer is used for delicate tasks, image reproductions in the studio, etc. The other day, they printed a painting catalog kinda straight out of the converter: one proof print, small CMYK corrections afterwards, and that was it!

LR for the tons of images, or when a specific tool, à la CA correction or highlight recovery (cloudy sky with some "exploding" clouds), is required.
 

Jeremy Jachym

pro member
It's amazing. Little did I know when I started this thread where it would lead. I was looking for another element to add to my workflow and instead found an entirely new workflow.

I've done my best to keep my finger on the pulse of digital photography, but for the past year I've been focusing my attention elsewhere. I had been pleased with the techniques I discovered, but also recognized that the amount of PP time I was investing in each project was getting a bit ridiculous.

Theory has definitely developed into practice. I'm amazed at how Adobe recognized the future of digital photography and acted on it. It's an absolute delight to work on a RAW file and have a history palette. I find the entire UI intuitive and sleek. Bravo Adobe.

Many thanks to all of you who've contributed to this thread and the new perspective ;-) Cheers,

JJ

PS: A couple of years ago I recall reading an article where someone posted instructions on how to create different profiles for ACR that would mimic the colors of C1, DPP, and so on and so forth. I remember it involved using a GretagMacbeth ColorChecker...? Has anyone experimented with this? Thank you.
 
A fascinating thread about the potential pitfalls of mode conversion, which prompted me to explore further. Starting with an image in sRGB, I applied an average blur three times to selected portions to produce consistent RGB matrix numbers within those selections. They were part of a blue background {160,210,240}, the gold top of a Pepsi can {241,228,180}, and skin {192,149,138}. The corresponding LAB matrix numbers accord well with Bruce Lindbloom's tables mentioned earlier in the thread (the “Calc page” at http://www.brucelindbloom.com). Then followed a conversion to LAB, a conversion to Adobe RGB, and a final conversion to LAB.

The first conversion to LAB resulted in changes in the RGB matrices, mainly in the R channel: {176,209,238}, {238,227,181}, and {181,147,137}. The changes in the R channel were +16, -3, and -11, with changes in the G and B channels within 2 points of the levels before conversion. What surprised me was that the LAB matrix numbers were within 1 point of the levels before conversion and no longer accorded so closely with Lindbloom's tables.

Subsequent conversions to Adobe RGB and back to LAB showed only small changes to the RGB and LAB matrix numbers (1-point or no change) that might reflect measurement error.

Because the selections described were at the lighter end of the luminosity scale, I repeated the procedure with an average blurred selection from the darker end. The findings were similar: greatest change in the R Channel on initial conversion; minimal changes in the RGB matrices in subsequent conversions; minimal change in the LAB matrices with any conversion.

These findings, from a non-comprehensive range of selections, seem to suggest that the initial conversion from RGB to LAB affects mainly the R channel of RGB but not the LAB values, and I have to admit that visual inspection failed to detect any differences. Subsequent conversions made no difference to either the RGB or the LAB values. Can someone more knowledgeable elaborate on these findings?
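For anyone who wants to repeat the check numerically, here is a minimal sketch run on the blue background patch. It assumes scikit-image, which handles only the sRGB↔LAB leg (no Adobe RGB step), so it isolates what an 8-bit LAB intermediate alone does to the RGB numbers.

```python
# Minimal numerical re-run of the round trip on the blue background patch
# (sRGB {160, 210, 240}). scikit-image is an assumption, and only the
# sRGB <-> LAB leg is covered here (the Adobe RGB step is omitted).
import numpy as np
from skimage.color import rgb2lab, lab2rgb

patch = np.array([[[160, 210, 240]]], dtype=np.float64) / 255.0
lab = rgb2lab(patch)

# Quantize to the usual 8-bit LAB encoding: L 0..100 -> 0..255, a*/b* offset by 128.
lab[..., 0] = np.round(lab[..., 0] / 100.0 * 255.0) / 255.0 * 100.0
lab[..., 1:] = np.round(lab[..., 1:] + 128.0) - 128.0

back = np.round(lab2rgb(lab) * 255.0).astype(int).ravel().tolist()
print("before:", [160, 210, 240], " after LAB round trip:", back)
```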
 

DanielSmith

New member
"An old, and sometimes useful one is to get just the luminosity channel for other kinds of work."

I'm a huge fan of being able to tweak the black channel. Would you or anyone else know how to incorporate black-channel adjustment into an RGB file, or how to mimic the effect in an RGB color space?

JJ
The CMY method of reconstituting colors, well known to photographers, exploits a set of three filters known as subtractive: a cyan filter (C) which transmits blue and green, a magenta filter (M) which transmits blue and red, and a yellow filter (Y) which transmits green and red. The method of restoring a true-color image is more complex than in the case of traditional RGB imagery: it is necessary to calculate differences in intensities to obtain the R, G, and B components essential for a true-color display on the computer screen (this is the reason the CMY combination is called subtractive).

You can find more detail in this article:
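A minimal sketch of the relationship described above, using the naive textbook formulas with the black component pulled out as well, since the thread is about the K channel. No ink impurity or ICC profile is involved, so this is not what a real press separation does.

```python
# Naive (profile-free) CMYK <-> RGB formulas: subtractive CMY as the complement
# of additive RGB, with K pulled out as the shared dark component. Real press
# separations go through an ICC profile instead; this is only the textbook math.
def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..1 -> naive c, m, y, k in 0..1."""
    k = 1.0 - max(r, g, b)
    if k >= 1.0:                       # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyk_to_rgb(c, m, y, k):
    """Invert the naive separation back to r, g, b in 0..1."""
    r = (1.0 - c) * (1.0 - k)
    g = (1.0 - m) * (1.0 - k)
    b = (1.0 - y) * (1.0 - k)
    return r, g, b

print(rgb_to_cmyk(0.75, 0.5, 0.25))    # a warm mid-tone as a worked example
```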
 
I fail to see why anyone would want to screw around with CMYK or Lab to globally fix an image unless the Raw is gone and can't be used.

Because sometimes years of experience, and the ability to make a correction in minutes rather than tweaking bizarre, ill-defined controls in a RAW converter, save time. And that is money in business.


Plus, black channel generation is so specific to the CMYK profile. When you hear folks recommend this, they often do not define the CMYK conversion, which plays a huge role in that black generation

Duh!!!!

The reason for this is that CMYK is an ill-defined (as opposed to well-defined/unambiguous) description of color. Hence the undefined black channel separation setting is typically tweaked to the image data at hand, with ink limits set to max (i.e., minimizing clipping). This extra level of control can be very powerful and gives an alternate luminosity-like channel (K) that can at times extract details that would otherwise be difficult to isolate.

Hence, if you restrict the conversion to only a single black channel separation setting, then you lose the entire point of using CMYK to work with images. Anyone who understands linear algebra would know this (please note that undergraduate courses in linear algebra do not teach linear algebra; they teach elements of linear algebra over the field of real numbers, which is a severely specialized case that does not apply well to integer* arithmetic on a computer).

I'm a huge fan of being able to tweak the black channel. Would you or anyone else know how to incorporate black-channel adjustment into an RGB file, or how to mimic the effect in an RGB color space?

You can duplicate an image, convert the duplicate to CMYK, and then use the (possibly degraded) duplicate to generate masks for RGB corrections. Or you could overlay the CMYK changes over the original in Luminosity blending mode. One can also use CMYK separations to generate layers for color-isolated high-pass sharpening.
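A hedged sketch of the first suggestion (separate, then use K as a mask), substituting the naive profile-free K = 1 - max(R, G, B) from the formulas earlier in the thread in place of a real ICC separation; as noted above, a profile-driven separation with its own black-generation settings will give a different (usually better) K. File names are placeholders.

```python
# Use a naive black channel as a shadow mask for an RGB correction.
# The K here is the profile-free approximation, not a press separation.
import numpy as np
from skimage import io

rgb = io.imread("image.jpg")[..., :3] / 255.0        # hypothetical input, drop any alpha
k = 1.0 - rgb.max(axis=-1)                            # naive black channel, 0..1
mask = np.clip((k - 0.5) * 2.0, 0.0, 1.0)             # keep only the darker half, feathered

# Example use: darken the shadows slightly in RGB, weighted by the K-derived mask.
corrected = rgb * (1.0 - 0.15 * mask[..., None])
io.imsave("image_shadow_tweak.png", (corrected * 255.0).astype(np.uint8))
```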

But in the end, all theory aside, it is the method that generates the best image according to your tastes that is the best. But that is focusing on the print rather than the process which was not the point of this thread.

some thoughts,

Sean


* The numbers computers use are not truly integers, but a subset of the integers, and many of the integers' more vital algebraic properties fail to hold for them.
 

Andrew Rodney

New member
Because sometimes years of experience, and the ability to make a correction in minutes rather than tweaking bizarre, ill-defined controls in a RAW converter, save time. And that is money in business.

When all you know is a hammer, everything looks like a nail. Years of experience indeed; I've been driving Photoshop since 1.0.7 in 1990, and I can drive it fast. I can also render Raw data better and faster, because I've attempted to learn a newer, better-designed, more modern tool instead of just relying on something older that I do have more experience with. And I'd be happy to take up the challenge, in front of an audience, using my newer tool next to the older one (something I've challenged Mr. Margulis to do and something he's totally dismissed and ignored).

CMYK is an output color space based on a specific paper, press, and inks. Raw is scene-referred data waiting to be made output-referred. Unfortunately, one tool (and one group of users) is stuck in the output-referred, output-designed color space and needs to look at the origin of the data rendering and the tools to do this, experience notwithstanding.
 