
Best focus adjustment for Canon EF lenses

Doug Kerr

Well-known member
In another thread, I introduced the notion of "best focus adjustment" as it applies to the Canon AF system. I thought I would expound here a little further on the matter.

Sadly, most of the information I have on the subject is gleaned from a rather old publication, the Canon service manual for the Canon EF 50 mm f/1.8, 28 mm f/2.8, and 15 mm f/2.8 lenses (1987). Certainly the implementation of the concepts I discuss here may be much different for newer lenses, and in addition there are further complexities in the case of zoom lenses.

But the underlying concepts comport well with discussion of the theoretical underpinnings in various optical textbooks.

I cite here a passage that recurs for all three lenses:

There is bound to be some discrepancy between the focus point determined by
the autofocus system and the actual best focus point of the interchangeable lenses
due to the inherent differences between the different lens types.

In the EOS system, the difference between the AF focus and the optical best
focus has been determined for each lens type and the information written into
the lens's Read Only Memory (ROM) so that correction for the difference at
maximum aperture is made electronically.

In actuality, in addition to this type difference, there is a difference between
individual lenses within each type, which can be noticeable if not corrected. At
the factory, correction is written into the individual lens's ROM with an expensive,
special tool. This is called the "Best Focus Adjustment". Because of the
tooling cost involved, this adjustment will not be a part of the service procedure.
In its stead, the following actions will be taken.

Next I will present a figure that accompanies this discussion:


The curve is a plot (typical) of the effect of spherical aberration as it pertains to rays at varying distances from the optical center (this being the vertical axis of the graph). The horizontal axis represents the amount by which the point of convergence of the rays at any distance differs from the point of convergence for rays that are infinitesimally away from the axis (the "paraxial" rays from which basic optical theory proceeds).

This curve is for a lens that has been "corrected" for spherical aberration, and in effect shows the "residual" aberration. For an uncorrected lens, the aberration continues to increase (in an "accelerating" way, in fact) as we consider rays further and further from the axis.

We note that the discrepancy increases as we consider rays further and further from the axis up to a point, and then declines as we go further yet, until at a certain distance from the axis there is no discrepancy. (If we go beyond that point, the discrepancy goes in the opposite direction, but this curve is cut off before we see much of that.)

Now, recognizing that the rays, from a given point on an object, that pass through various parts of our lens (that is, at different distances from the axis) do not all converge at the same place, how should we set the lens position (thinking in terms of a simple lens, focused by moving it, as in a view camera) to give the best image?

The answer is not simple. As we consider different candidate positions, we find that the "blur figure" caused by the lack of singular convergence changes. Its overall diameter changes, and we might think that the best focus setting would be when the overall diameter of the blur figure is the least.

But the distribution of brightness within the blur figure (how brightness falls off from its center outward) also changes as we use different focus settings. And when the overall diameter of the blur figure is the least, the "brighter" portion in its center is actually a bit larger, and so the blur figure appears relatively larger to the viewer.

Thus there is a "sweet spot" in which the apparent size of the blur figure to the viewer (and thus the degradation in sharpness) is the least. This can only be determined by subjective experimentation.

But commonly this "best focus setting" is one that moves the "paraxial" focus point toward the film or sensor by about 70% of the amount of the maximum spherical aberration.

And that is the significance of the numbers "3" and "7" on the figure (the relative lengths of certain distances), suggesting that the best focus is offset from the paraxial focus (0 on the horizontal axis) by 70% of the maximum spherical aberration.

This offset of the "target" focus from the paraxial focus is called the best focus adjustment (BFA).


Now back to the discussion in the manual text. The passage I cited means that for any given lens design, the BFA value (possibly actually a table working from the focus setting for different distances) is written into the lens ROM. And further, that adjustments to this value (or perhaps to the entries in the table) for the specific lens "copy" are written into the lens ROM.

But what about in the field? At this time (1987) it was apparently not economical to equip field service centers with the gear to determine the best value (or table values), say, after a lens had been reassembled after replacing an element, and to write those new values into ROM.

So there is a much more agricultural provision for field adjustment.

A "pile-on" adjustment to the BFA value(s) can be made in the field by way of a two-bit number (four possible values) that is set by placing or removing solder bridges between two pairs of pads on the lens flexible circuit (one pair for each "bit").

Now, how was this "pile-on" adjustment to be determined?

Well, if the flexible circuit is being replaced, then the setting from the original circuit should be reconstructed. (This suggests that a setting of the "pile-on" adjustment was made at the factory, which does not match the overall story in the text, which suggests that everything at the factory is put into the lens ROM.)

If a certain element group has been replaced, then open both bridges (resulting in a certain one of the four possible values - apparently the default).

If other work is being done, do nothing (leave the setting as found).

All amazing!

As you can tell, I can't get this whole story to fit together. But still, what we know gives us some insight into this matter.

Now, how this works for more modern lenses I have no idea.

Now, on to the use of the BFA value in the operation of the AF system.

The working of the AF detector essentially tells us how far the current focus setting of the lens differs from ideal focus in the geometric sense. That is, if the two subimages are aligned, then focus would be ideal on a paraxial basis (ignoring the impact of spherical aberration on the truly ideal focus setting).

So before the findings of the AF detector are utilized, the lens is asked to provide the BFA value applicable to the current lens focus setting and that is used to adjust the findings of the AF detector before that result is used to guide the AF process.
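That correction step can be sketched in a few lines. This is an illustration only: the function and variable names are mine, not Canon's, and the units are arbitrary defocus units (here, micrometres).

```python
# Illustrative sketch only: names and units are hypothetical, not Canon's.
def corrected_defocus(af_detector_error_um, bfa_value_um):
    """Combine the AF detector's paraxial focus error with the
    lens-supplied best focus adjustment (BFA) for the current
    focus setting, giving the defocus the body should drive out."""
    return af_detector_error_um + bfa_value_um

# If the detector reports the focus is 100 um short of paraxial focus,
# and the lens reports that best focus lies a further 30 um toward the
# sensor, the body drives the lens as if the error were 130 um.
print(corrected_defocus(100, 30))  # 130
```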

Best regards,


Doug Kerr

Well-known member
Here is some more on spherical aberration, which of course figures prominently in the matter of best focus adjustment.

If we make a lens with spherical surfaces (the kind that has been traditionally easy to make), the rays that pass through the lens at different distances from the axis do not all converge at a single point. The term "spherical aberration" is given to this shortcoming, and comes from the fact that it is produced by lenses with spherical surfaces. (Lenses with non-spherical surfaces can have less of this phenomenon - conceptually even none.)

Here is an interesting figure taken from Modern Optical Engineering (Second Edition) by Warren J. Smith (used here under the doctrine of fair use):


This shows the mechanism of spherical aberration for a lens which we assume exhibits only what is called "third order" spherical aberration (a simplification from the typical real case) and of course is not corrected for it. When we consider the result of this phenomenon in terms of failure of all the rays to converge at the same distance from the lens, we speak of longitudinal spherical aberration.

In this figure, the term "focus" refers to a convergence of rays, not a setting of the lens in the context of a camera.

We see a number of rays, all emanating from a single point on a scene object, passing through the lens at different distances from its center (the optical axis). We see them as (from the left) they have just left the lens.

Here, the outermost rays (the ones at the steepest angles) are assumed to come from the edge of the lens (or at least the edge of the aperture in use). These are called "marginal rays", a term that is of little importance to us other than that it lies behind some of the notation on the figure.

The point marked "paraxial focus" is where rays an infinitesimal distance from the axis converge. (Those that actually lie along the axis don't converge.) This is what we think of as where all the rays would converge if the lens did not exhibit any spherical aberration. Thus, we describe the degree of (longitudinal) spherical aberration afflicting any other rays in terms of the distance from that point to where they converge.

Thus the degree of longitudinal spherical aberration that afflicts the marginal rays (the greatest aberration in this situation) is called LA<sub>M</sub> ("M" for marginal).

Note that there is no place we could put the film or sensor such that the image of the object point would be a point (resulting in ideal sharpness of the image). At any position, the spreading of the rays caused by the spherical aberration would result in a blur figure.

Now we might think that, for this situation, best image sharpness (for an object all of whose points were in the plane of the point whose rays are shown here) would be attained if we were to place the film or sensor at the point where the overall diameter of the bundle of rays was the smallest (leading to the smallest blur figure).

In this case, that predictably falls at 3/4 of the marginal aberration distance, reckoned from the paraxial focus.
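That 3/4 figure can be checked numerically with a toy ray model. The aberration law used here, LA(u) = LA<sub>M</sub>·u² for fractional aperture height u, is the standard third-order form; all the specific numbers and names are illustrative, not from the manuals.

```python
import numpy as np

# Toy model: thin lens at z = 0, paraxial focus at z = f.
# Third-order spherical aberration: a ray entering at fractional
# aperture height u = h/h_max crosses the axis at
#     z_cross(u) = f - LA_M * u**2
# (undercorrected: marginal rays focus closer to the lens).
f = 100.0      # paraxial focal distance (arbitrary units)
LA_M = 4.0     # marginal longitudinal spherical aberration
h_max = 10.0   # aperture radius

u = np.linspace(1e-3, 1.0, 2000)   # fractional ray heights
z_cross = f - LA_M * u**2          # axis crossing per ray

def bundle_radius(z):
    """Radius of the ray bundle at plane z (max |height| over rays)."""
    y = (h_max * u) * (z_cross - z) / z_cross
    return np.abs(y).max()

z_planes = np.linspace(f - LA_M, f, 4000)   # search between the extremes
radii = np.array([bundle_radius(z) for z in z_planes])
z_best = z_planes[radii.argmin()]

# The smallest geometric blur circle sits about 3/4 of LA_M
# in front of the paraxial focus:
print(z_best)   # close to 97, i.e. about f - 0.75 * LA_M
```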
But not so. Consider now this figure:


At the plane shown by the red line, the overall diameter of the ray bundle is greater. But note that there are not a lot of the rays in its outermost region. As a result, the diameter of its "bright core" is in fact smaller than at the other plane. Thus the image of the blur figure with the film or sensor at the red line may appear visually smaller - thus the image will appear sharper. And there is some such location in which it has been found that the resulting image would seem sharpest to typical observers.

So the purpose of the "best focus adjustment" is to move the film or sensor from the "geometric" ideal spot (at the paraxial focus) - which is what the indication of the AF detector would guide us to do - to the (empirically-determined) "red line" position.

Now of course in our case the lens is usually a complex one, which inherently exhibits both third and fifth order spherical aberration, but is corrected for it (which does not remove the aberration - merely reduces it and limits its maximum, which no longer occurs for the marginal rays). So the details of the matter are somewhat different from what we see in the figure. But the concepts are the same.

Best regards,


Doug Kerr

Well-known member
It turns out that I have excerpts (pertaining to this matter) from the service manuals for the Canon EF 50 mm f/1.4 USM and EF 35 mm f/1.4L USM lenses. (I have no idea where I got these or where the entire manuals are!)

The sections that I have discuss (for each lens) the adjustment of the best focus adjustment by placing solder bridges on pad pairs on the flex circuit board based on an actual shooting test of a test target (examining the results on the film with a microscope).

I do not have the discussion, if any, of the concept behind this: the matter of spherical aberration. What I do have is, in the introduction to this section of the manual:

Purpose - Make the AF focus position closer to the best focus position

That may be the whole discussion! (Canon of course tells you less and less as time goes by!)

There are separate adjustment settings (sets of solder bridge pads) for the vertical and horizontal aspects of the AF detectors (where both are applicable).

For the 50 mm non-L lens, these settings are on the basis of two bits (two pairs of pads, 4 possible values) for each direction.

For the 35 mm L lens, these settings are on the basis of three bits (three pairs of pads, 8 possible values) for each direction.

Here (as for the older lenses I spoke of earlier, although I did not mention this) the "unit" of the amount of correction is the depth of focus limit for the lens. This is the amount of displacement of the focal plane from the position of ideal focus that will result in a blur figure (from imperfect focus) whose diameter is the circle of confusion diameter limit (COCDL) that Canon uses for all such matters (0.035 mm). (They of course just call the COCDL the "circle of confusion".)

For the older lenses and the EF 50 mm f/1.4 lens (non L), the available adjustments values (set with the solder bridges) are, as a fraction of this reference displacement:

-3/4, -1/4, +1/4, +3/4

For the EF 35 mm f/1.4L USM lens, the available adjustment values are, on this same basis:

-8/5, -6/5, -4/5, -2/5, 0, +2/5, +4/5, +6/5
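Both value sets follow a simple linear pattern in the bridge code, which makes them easy to tabulate. In this sketch the value lists and the COCDL come from the manuals as quoted above, but the mapping of bit patterns to list positions, and all function names, are my own assumptions.

```python
# Value sets per the service manuals; the assignment of solder-bridge
# bit patterns to positions in these lists is my assumption.
STEPS_2BIT = [(2 * c - 3) / 4 for c in range(4)]   # -3/4, -1/4, +1/4, +3/4
STEPS_3BIT = [(2 * c - 8) / 5 for c in range(8)]   # -8/5, -6/5, ... 0 ... +6/5

def depth_of_focus_unit_mm(f_number, cocdl_mm=0.035):
    """One 'unit' of adjustment: the focal-plane shift that yields a
    blur circle equal to Canon's COCDL (blur diameter = shift / N)."""
    return f_number * cocdl_mm

# e.g. a +3/4 step on the f/1.4 lens corresponds to a target-focus
# shift of roughly 0.75 * 1.4 * 0.035 mm (about 0.037 mm).
```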

I have a feeling that the best focus adjustment for a particular copy of the lens at the factory is done with this "solder bridge" facility rather than an overwriting of the "stock ROM". (I think that is even true for the "older lenses" despite the language in the manual there.)

Best regards,


Doug Kerr

Well-known member
I find that "WilbaW", with whom I collaborated intensely a number of years ago on the reverse engineering of the Canon EOS AF system, but with whom I have not corresponded for a number of years, writes in this dpr post as follows:

Lens calibration is basically updating the BFCV table in the lens, and you can think of AF microadjustment as an offset applied by a particular body to the BFCVs received from a particular lens.

BFCV (best focus correction value) refers to the correction applied to the determination of the AF detector to compensate for the fact that, owing to spherical aberration effects, the "ideal focus" indication of the detector may not correspond to the focus setting that yields a visual assessment of "best focus" for the image.

This is of course consistent with a conjecture I had made earlier, although without much optimism that it was correct.

I have written privately to WilbaW to ask him how he knows that. (My therapist will not allow me to go onto dpr.)

Best regards,