Preface
As we struggle to learn about a new field by reading the technical literature, we are often bedeviled by (at least) these two factors:
• Either because the actual work was evolving as the "definitive paper" was written, or because the author's outlook on terminology was evolving, we find different terms for the same thing used almost at random in the paper.
It happens to me all the time. Fortunately, my ace copy editor often catches it. And you often suffer when she does not get a crack at the paper before it is "published" (she's usually making us breakfast then).
• When the author coins a term for some new metric or phenomenon, he may choose the name based on just what he was preoccupied with when he coined it. The name may in fact not be a good fit to the truly-general understanding of the "thing".
Sometimes a certain measurement concept may be named after the instrument used to measure it.
For example, if I had just devised a scheme for determining what we today call the "RMS value" of an electrical wave, an important attribute, with an instrument that depended on heating a resistor with the waveform, I might decide to call that value the "heating value" of the waveform.
These things are rampant in the area of the "acutance" metrics.
But we "triangulate". You can run, but you can't hide.
The "edge acutance", "acutance", and "texture acceptance" metrics
You can get an idea of what I am talking about from a peek at a paper by Baxter, Cao, Eliasson, and Phillips entitled "Development of the I3A CPIQ spatial metrics". It is the paper by which certain CPIQ metrics were first generally introduced. (It was issued when the CPIQ Initiative was still in the hands of I3A.)
I will quote from the Abstract of the paper. I give a fairly long quote as I wanted to properly establish the context; I have highlighted in blue the specific passage of interest.
The I3A Camera Phone Image Quality (CPIQ) initiative aims to provide a consumer-oriented overall image quality metric for mobile phone cameras. In order to achieve this goal, a set of subjectively correlated image quality metrics has been developed. This paper describes the development of a specific group within this set of metrics, the spatial metrics. Contained in this group are the edge acutance, visual noise and texture acutance metrics.
Now, in the Introduction, we find (again, I have highlighted the critical passage in blue):
This paper describes a subset of these metrics, referred to as the spatial metrics. This encompasses metrics for sharpness, SNR, and texture blur.
I can just hear Carla calling from the editing table (in front of our sofa) into my office: "Honey, is 'texture acutance' the same thing as 'texture blur'? It sounds as though they are maybe the opposite."
In fact, it turns out that when the authors wish to speak of a metric for "sharpness", they often speak of a metric for "blur" (because of course blur is anti-sharpness).
Next, in the section "Spatial Metrics", in the subsection "Edge acutance", we start with this:
The ISO 12233 standard describes several methods to measure and calculate the spatial frequency response (SFR) of an imaging system. For the CPIQ sharpness metric, the edge SFR was found to be most appropriate.
It turns out that what this means is this, which is the crux of this whole mystery.
The "sharpness metric" (which seems to be what is identified as "acutance" in the Phase 2 CPIQ specification) is extracted from the system MTF (or SFR - spatial frequency response - as it is sometimes, but not always, called in this paper).
But it turns out that in digital cameras, the system SFR is not a clearly measurable curve. The intrusion of various processing algorithms at different stages (potentially quite sophisticated these days) makes the SFR curve depend on the kind of pattern used to determine it. The impact is greatest on "more complicated" patterns (more on that presently).
The classical (and actually simplest) way to determine the SFR (MTF) is to determine the edge spread function (the variation in image illuminance as we cross a sharp edge transition in the "test target"), differentiate that to get the line spread function, and take the Fourier transform of the result.
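As a rough illustration of that classical chain (a minimal sketch only; the ISO 12233 slanted-edge procedure adds oversampling of the edge, specific windowing, and other refinements not shown here):

    import numpy as np

    def sfr_from_edge(edge_profile, sample_pitch=1.0):
        # edge_profile: samples of image illuminance taken across a sharp
        # edge transition in the test target (the edge spread function).
        esf = np.asarray(edge_profile, dtype=float)

        # Differentiate the ESF to get the line spread function (LSF).
        lsf = np.diff(esf)

        # A window tames truncation effects at the ends of the record.
        lsf = lsf * np.hamming(lsf.size)

        # The SFR (MTF) is the magnitude of the Fourier transform of the
        # LSF, normalized to 1.0 at zero frequency.
        spectrum = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
        return freqs, spectrum / spectrum[0]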
But that does not take into account the complications introduced by in-camera processing.
Those can be taken into account by a much more complicated measurement process. It uses a special class of "texture" test pattern consisting of overlapping circles of varying diameter in a quasi-random pattern. It is called a "dead leaves" pattern because apparently when this type of measurement was first devised, it was suggested that a shot of a pile of dead leaves would have about the statistical properties needed.
The distribution of frequencies in such a pattern can be mathematically determined.
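For the curious, here is a minimal sketch of how such a target might be synthesized; the disk-size distribution and gray levels are arbitrary placeholders of mine, not those prescribed by any CPIQ or ISO document.

    import numpy as np

    def dead_leaves(size=512, n_disks=4000, rmin=2.0, rmax=60.0, seed=0):
        # A crude dead-leaves chart: opaque disks of random gray level and
        # quasi-random radius, each painted over whatever came before.
        rng = np.random.default_rng(seed)
        img = np.full((size, size), 0.5)
        yy, xx = np.mgrid[0:size, 0:size]
        for _ in range(n_disks):
            # Radii drawn so that small disks greatly outnumber large ones,
            # which is what gives the pattern its broad frequency content.
            r = rmin * (rmax / rmin) ** rng.random()
            cx, cy = rng.uniform(0, size, 2)
            img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = rng.uniform(0.2, 0.8)
        return img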
Simplistically, to determine the system SFR (MTF) on this basis, we let the camera capture an image of that target and then, from the overall camera digital output, determine by digital signal processing the frequency content of the image.
Then, the SFR is the ratio, at each frequency, of the content at that frequency in the image to the content at that frequency in the target pattern.
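A minimal sketch of that ratio computation follows; a real measurement would also subtract the noise power and radially average the two-dimensional ratio down to the usual one-dimensional SFR curve, neither of which is shown here.

    import numpy as np

    def texture_sfr_ratio(captured, reference):
        # At each spatial frequency, the amplitude content of the captured
        # image divided by that of the reference (target) pattern.
        cap = np.abs(np.fft.fft2(np.asarray(captured, dtype=float)))
        ref = np.abs(np.fft.fft2(np.asarray(reference, dtype=float)))
        # Guard against division by zero where the target has no content.
        return cap / np.maximum(ref, 1e-12)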
Note that the SFRs determined in these two different ways are conceptually different. They may well be different curves, depending on which technique is used to measure them in a real digital camera: the one that uses an "edge" test pattern or the one that uses a "texture" test pattern.
The edge acutance metric
Evidently, it was ascertained during the research that the basic "sharpness quality" metric in the CPIQ doctrine (the one that closely parallels the Gardner-Cupery SQF) bears the most consistent relationship to subjective assessments of sharpness quality when the formula is used on an SFR curve determined by measurement of the edge spread function.
And so, the authors speak of this metric as the "edge acutance" metric.
"This style is what we call a Delta coffee table", says the furniture artisan. "Why do you call it that". "Well, I build them with my Delta table saw."
Aargh!
The texture acutance metric
An assessment of camera performance that correlates differently with subjective assessments of sharpness quality is derived by applying the same equation, but to an SFR determined using the "texture target" technique.
And so this metric is called the "texture acutance" metric.
"This style is what we call a Rockwell coffee table", says the furniture artisan. You know the rest.
So which is which?
Sadly, the authors neglect to tell us which of these two metrics has what significance to the user.
We see it said by DxOMark that the terms edge acutance and texture acutance (not necessarily relating to proposed CPIQ metrics) refer to indicators of:
• How the rendering of a sharp edge is degraded from the ideal
and
• How the rendering of detail in a textured area is degraded from the ideal
respectively.
But I do not believe that in fact the CPIQ "edge acutance" metric, by its nature, focuses on the faithful rendering of a sharp edge.
It is entirely possible that these DxO definitions, while certainly reasonable, do not relate to the two CPIQ metrics discussed by Baxter et al. Perhaps that is what the two terms "should mean"!
Co-author Cao was in fact with DxO laboratories (France).
In the CPIQ Phase 2 specification itself
In the current CPIQ specification, which defines the metric acutance, the determination of acutance is to be based on a system SFR determined by the ISO slanted edge technique, which is of course an "edge spread function based" means of determining the SFR.
The term edge acutance does not appear in that specification.
What does all this mean?
My take on this is:
• The notion of two different metrics for sharpness, based on SFRs determined in two different ways, as discussed by Baxter et al, is an incompletely-hatched egg, and did not make it into the Phase 2 CPIQ specification. There are hints that something like this may appear in the Phase 3 work.
• For now, we should not take the fact that the moniker "edge acutance" has been attached to the acutance defined in the CPIQ Phase 2 specification to mean that this metric is intended to be an indicator of how well the camera deals with rendering sharp edges between areas of differing luminance.
Rather, it is a measure of projected overall "user satisfaction" with the images the system can produce (in the "sharpness" sense).
• The notion of "texture acutance" may be a measure of how the rendering of detail is degraded, not only by SFR considerations, but as well by image processing mischief.
Maybe.
Best regards,
Doug