Doug Kerr
Well-known member
My understanding of the basic principles of the Canon AF system (which of course change in many unknown details over time) is this:
1. The focus error is measured when the process begins and the amount the lens focusing mechanism should have to move to attain ideal focus is reckoned, based on parameters provided by the lens.
2. The lens focusing mechanism is commanded to move that far, and the error at that point is measured.
3. If the error is within some small band (the "tolerance" on final focus error), the job is considered done, and the shutter is allowed to fire.
4. If the error is not within that small band, but within a certain slightly larger band, the amount the lens should have to move to attain ideal focus is reckoned, based on parameters provided by the lens. The lens focusing mechanism is commanded to move that amount, and when that movement is done, the shutter is allowed to fire (the assumption being that, from "not very far from ideal focus", that amount of movement will surely put focus within the acceptable tolerance).
5. If the error is not within this larger error band, the process repeats from step 1.
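The steps above can be sketched as a simple control loop. This is purely illustrative pseudocode of my description, not Canon's actual firmware; the tolerance values, the retry limit, and the toy lens-drive model are all invented for the example:

```python
def autofocus(measure_error, move_lens, small_tol=0.01, large_tol=0.05,
              max_attempts=5):
    """Sketch of the iterative AF process described in steps 1-5 above.

    measure_error: returns the current (signed) focus error, arbitrary units
    move_lens: commands the focusing mechanism to move by a given amount
    """
    for _ in range(max_attempts):
        # Step 1: measure the error (the required movement is reckoned
        # from it, here simply as its negation).
        error = measure_error()
        # Step 2: command the lens to move that far, then re-measure.
        move_lens(-error)
        error = measure_error()
        # Step 3: within the small band -- done, shutter may fire.
        if abs(error) <= small_tol:
            return True
        # Step 4: within the larger band -- one corrective move, then
        # fire WITHOUT re-measuring (the open-loop final step).
        if abs(error) <= large_tol:
            move_lens(-error)
            return True
        # Step 5: otherwise repeat from step 1.
    return False  # gave up without confirmed focus

# Toy usage: a lens whose drive achieves only 80% of the commanded
# movement (an invented imperfection), starting 0.3 units out of focus.
position = [0.3]
def measure():
    return position[0]
def move(amount):
    position[0] += 0.8 * amount
ok = autofocus(measure, move)
```

Note how the loop converges even with an imperfect drive, which is the point of the closed-loop arrangement: the residual error shrinks on each pass regardless of the drive's calibration.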
An important complication is that the "ideal focus" as indicated by the focus detector is not necessarily the "best focus" in terms of the image. The reason is the impact of spherical aberration, the lens aberration in which rays from a point on the subject, passing through different parts of the lens aperture, do not converge at the same spot. Thus no lens focus setting will produce "perfect" focus.
Thus we wish to set the lens at the place that will make the image of that point seem "sharpest" to the ultimate viewer (and there are empirical rules that guide us in determining that).
But that "best focus" position is not the same position that would be adjudged "perfect" by the focus detector (which is after all operating on essentially a "rangefinder" basis, and cannot take into account this matter).
Thus, the finding of the AF detector of "how far, if at all, is focus away from 'perfect' " is adjusted by a value given by the lens (the "best focus adjustment", or BFA, value) before being considered to see, for example, if focus is now "ideal".
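In other words, the raw "rangefinder" reading is offset before any tolerance test is made. A minimal sketch (names and sign convention are my own assumptions, not Canon's):

```python
def adjusted_error(detector_error, bfa):
    """Offset the raw AF-detector error by the lens-supplied best-focus
    adjustment (BFA) before any tolerance comparison is made."""
    return detector_error - bfa

# Consequence: a detector reading of exactly zero is not necessarily
# "in focus"; with bfa = 0.02 the system instead drives the lens until
# the detector reads 0.02, where the image is actually sharpest.
```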
Now we know by examination of the service manuals for some very old EF lenses that this value may differ from copy to copy of the same lens model. We know this because there are instructions for setting that value on the specific lens copy (on those old lenses, it is set as a two-bit value by placing solder bridges across two pairs of pads on the circuit board in the lens!).
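The solder-bridge scheme amounts to encoding a two-bit index, selecting one of four BFA steps. A hypothetical decoding (the actual pad-to-bit assignment and the code-to-offset table would come from the service manual, not from me):

```python
def bfa_code(bridge_a_closed, bridge_b_closed):
    """Decode two solder-bridge pads into a two-bit value (0..3),
    selecting one of four possible BFA settings for this lens copy."""
    return (int(bridge_b_closed) << 1) | int(bridge_a_closed)
```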
I have no idea what variation in the physical construction of the lens can lead to a significant variation in the BFA value.
Now, a question is:
Considering that the overall system is essentially "closed loop"* (that is, the body ultimately manipulates the lens until the focus error is "negligible"), what property(ies) of an individual copy of the lens is there that can cause the focus result to not be ideal?
* Note that this is not really true when the scenario for an AF job turns out to be (in terms of the steps above) 1-2-4.
This question is related to:
• What property(ies) of an individual lens copy that make autofocus with that copy be "not ideal" are adjusted at a service center when we send a lend in for calibration?
• What property(ies) of an individual lens copy that make autofocus with that copy be "not ideal" are compensated for by the micro focus adjustment set into a body for that lens?
It might be as simple as the matter of the BFA not being "proper" (at the present time) in the copy of the lens we have in hand.
But other than that, I have trouble coming up with a credible (to me) scenario for this matter.
I would appreciate any input on this matter.
Thanks.
Best regards,
Doug