Let me review a related matter.
The AF operation revolves around the camera being able to measure a "focus error" by "phase comparison". Basically, that part of the system examines the relative positions of two copies of a part of the image as observed on two "AF subdetectors". It is quite equivalent to the comparison of the alignment of the two images on the two halves of a split-image focusing aid.
But here, the ideal result is not actually perfect alignment of the two images, mainly because the two subdetectors are offset from each other (they can't be in the same place!).
Ideally, the two little optical systems work such that the perfect result occurs when each image is the same distance from a reference point on its subdetector. But of course that doesn't always happen.
Thus, the ideal result is defined in terms of a certain difference between the locations of the two images from the reference points on their respective subdetectors. That reference difference is determined by precise measurement at the factory (or later at a Service Center) and stored in non-volatile memory in the camera.
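For those who like to see that concretely, here is a minimal sketch (in Python; all names and numbers are mine, purely illustrative) of the comparison just described: the reported focus error is simply the measured difference between the two image positions, less the reference difference stored in the camera's memory:

```python
def focus_error(pos_a_mm, pos_b_mm, reference_diff_mm):
    """Signed focus error from the phase comparison (illustrative).

    pos_a_mm, pos_b_mm -- position of each image, measured from the
                          reference point on its own AF subdetector
    reference_diff_mm  -- the calibrated difference determined at the
                          factory and stored in non-volatile memory
    """
    measured_diff = pos_a_mm - pos_b_mm
    # Zero means the measured separation matches the stored reference:
    # the system regards the lens as being in focus.
    return measured_diff - reference_diff_mm

print(focus_error(1.5, 1.0, 0.5))  # 0.0 -- "in focus"
```

A nonzero result (positive or negative) tells the body which way, and roughly how far, to drive the focus.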
If in fact life at the AF subdetectors were a precise analog of life at the image detector, that would be the end of the story. But it isn't.
Recall that each of the subdetectors works from rays through a little aperture, one near each edge of the actual aperture.
Various phenomena in the lens, most prominently spherical aberration, mean that rays passing through the center of the "real" aperture do not focus at the same point as rays near its outer edge. This means that the "best focus" setting for the actual shot (involving all the rays through whatever aperture we have chosen) will not necessarily be the same as the focus that will seem ideal to the AF detector system (working only with rays through two outboard parts of the aperture).
The difference is small, but not negligible.
Note that this phenomenon will be different for the use of a "standard" AF detector or an "enlarged baseline" AF detector (the ones that require an aperture of f/2.8 or better), since their little apertures take rays from different places across the whole (possible) aperture.
To help overcome this matter, each lens transmits to the body, at the commencement of an AF operation, a best focus correction value (BFCV). The body applies this to the focus error reported by the phase comparison itself. The objective is that the corrected reported focus error tracks with the effect of the focus setting on the actual image.
The BFCV depends on:
• The lens model
• Small peculiarities of the individual copy of the lens
• The current zoom setting (for a zoom lens)
• The current focus position of the lens optics
• Whether AF is planned to be with a "standard" AF detector or an "enlarged baseline" AF detector
The value given to the body will typically be based on all of those.
The value is stated as an offset to the focus error, expressed as the displacement of the image with respect to the focal plane (the "image space" focus error).
The unit of this value is some fraction of the single-sided depth of focus, always defined by Canon as 0.035 × N mm, where N is the f-number.
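As a quick worked example of that definition (the 0.035 constant and the arithmetic are just as stated above; the function name is mine):

```python
def single_sided_depth_of_focus_mm(f_number):
    # Canon's convention as described above: 0.035 * N mm,
    # where N is the f-number.
    return 0.035 * f_number

# A few sample apertures:
for n in (1.4, 2.8, 5.6):
    print(f"f/{n}: {single_sided_depth_of_focus_mm(n):.4f} mm")
```

So at f/2.8, for instance, the single-sided depth of focus works out to about 0.098 mm, and the BFCV would be stated as some fraction of that figure.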
Now, my understanding is that the MA value (as stored for the particular lens and its current focus position) is just added to the BFCV received from the lens before being used by the body to adjust the interpretation of the focus error as indicated by the phase comparison operation. Thus, it works in just the same way.
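Under that understanding, the arithmetic would amount to something like this little sketch (the names and sign conventions here are my own assumptions, not anything from Canon documentation):

```python
def corrected_focus_error(raw_error, bfcv, ma):
    """Hypothetical sketch of the correction chain described above.

    raw_error -- "image space" focus error from the phase comparison
    bfcv      -- best focus correction value reported by the lens
    ma        -- stored microadjustment value for this lens and its
                 current focus position
    """
    # The MA value is simply added to the BFCV, and the combined
    # offset is then applied to the raw phase-comparison result.
    return raw_error + (bfcv + ma)
```

That is, the MA value works in just the same way as the BFCV itself: one more additive offset to the reported focus error.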
My discussions earlier in this thread are based on that understanding (which of course could be incorrect).
My thanks to my colleague "Wilba" from ProPhoto Home forum (and elsewhere) for first calling clearly to my attention the role of spherical aberration in this whole chain.
Best regards,
Doug