Explore the captivating world of large format lenses, where iconic names like Super Angulons, Symmars, and Grandagons embody decades of optical innovation. Discover how these masterpieces from brands such as Schneider-Kreuznach, Rodenstock, Nikon, and Fujifilm contribute to the distinctive, breathtaking beauty that defines large format photography.
Introduction
There is extensive literature on large format lenses, covering the history of their designs, the types of lenses and shutters, the purpose of the most common focal lengths, and many technical specifications.
At the beginning of this adventure, it was a daunting matter, populated by mysterious and ghostly denizens called Super Angulons, Symmars, Sironars, Grandagons, Fujinons, Nikkors, Super Symmars, Clarons, Ronars, Tele-Xenars and so on… I felt like I was lost in a dark and deep forest, but I was very motivated to learn more about them. After all, they are one of the main reasons why large-format photographs are unique and so striking. These marvels of optics were the result of decades of experimentation, optical design and manufacturing, essentially by the big four: Schneider-Kreuznach, Rodenstock, Nikon and Fujifilm.
The following resources have been extremely useful to me in my learning, for consultation and to gain new insights:
- Using the View Camera (Rev. Ed.) by Steve Simmons. Introductory book.
- The famous “Future Classics” by Kerry Thalmann
- On Landscape: Large Format Lenses – The Standards by T. Parkin and R. Childs
- View Camera Technique (7th Ed.) by Leslie Stroebel
- Applied Photographic Optics by Sidney E. Ray
Statistical Insights
My analysis is based on the dataset compiled by Michael K. Davis, hereafter denoted as MKD45. This dataset was instrumental during my own lens selection process. I focused on the variables most relevant to practical and technical decision-making: focal length [mm], image circle [mm], coating type (i.e., none, multicoating, or EBC), apochromatic correction (APO, yes/no), weight [g], need for a centre filter, and price [€]. Visualizing these parameters against one another, as in Fig. 1, reveals several expected correlations. However, additional statistical analyses are required to extract more nuanced patterns. It should be noted that the price data from MKD45 is historical and included primarily for comparative purposes within the dataset. When considering a specific lens for purchase, I always cross-checked current market prices using platforms such as eBay, photography forums, and specialized dealer listings.

When I first saw the MKD45 dataset, I wondered whether anyone had ever applied statistical analysis to it. Simple visualizations—such as histograms showing the distribution of focal lengths or charts identifying the top large format lens manufacturers—seemed like a natural first step. As I explored further, more complex questions began to emerge:
- Who was the most prolific manufacturer of 4×5 Large-Format lenses?
- What is the distribution of focal lengths across the dataset?
- Can the specifications in MKD45 (which function as predictors in statistical terms) reveal clear relationships that help explain both quoted—historical—prices and image circle sizes?
- What distinctive characteristics define the lenses in Thalmann’s “Future Classics” list?
- How would a multivariate classification—combining several lens attributes from MKD45—help visualize or distinguish different lens types?
So far, I haven’t found any prior analysis or synthesis specifically focused on the MKD45 dataset. Driven by curiosity—and a desire to explore whether meaningful answers to my initial questions could be found—I decided to analyze the dataset myself. I hope these insights prove useful or at least thought-provoking for other large-format photographers as well. To ensure that my work is reproducible, I used a variety of R packages, including: Hmisc, ggplot2, ggrepel, visreg, hier.part, dplyr, ggtext, randomForest, quantregForest, neuralnet, np, plotly, combinat, k-means, boot, and bootstrap. The scripts and an ASCII file with the curated MKD45.csv dataset (corrections of some values, filling of missing data, etc.) can be found in my Git repository.
Although MKD45 is the primary dataset used in this analysis, it is not the only one available. An expanded and actively maintained alternative is the database curated by Alexander Rutz. Rutz’s dataset builds on MKD45 but extends coverage to additional formats such as 5×7, 8×10, 11×14, and 14×17. Since my focus is specifically on the 4×5 inch format, MKD45 was better aligned with the scope of this study.
Some members of the Grossformatfotografie forum didn’t appear particularly impressed by this approach, but I believe that data-driven analysis can offer fresh perspectives on 4×5 large format optics—especially for those navigating the field today without decades of prior experience.
Manufacturers, Focal length and Image Circle
Fig. 2 shows that 60% of 4×5 lenses were produced by German manufacturers, with Schneider-Kreuznach holding the largest share (33%). The two Japanese manufacturers had equal shares of 20% each.
The focal lengths are not evenly distributed either (Fig. 3). If we divide the lenses into four classes (based on Simmons): Wide angle (45-125 mm), standard (125-180 mm), long (180-300 mm) and very long (300+ mm), then the frequencies of these classes are 34%, 20%, 30% and 36%, respectively.
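As a sketch, these Simmons classes can be applied programmatically; the binning function below follows the boundaries quoted above (the 125, 180, and 300 mm boundary values are assigned to the longer class by convention here, and the sample focal lengths are hypothetical):

```python
from collections import Counter

def simmons_class(focal_mm):
    """Bin a 4x5 focal length into Simmons' four classes.
    Boundary values (125, 180, 300 mm) go to the longer class."""
    if focal_mm < 125:
        return "wide"
    if focal_mm < 180:
        return "standard"
    if focal_mm < 300:
        return "long"
    return "very long"

# Hypothetical sample of focal lengths [mm]
lenses = [75, 90, 150, 210, 300, 360, 500]
counts = Counter(simmons_class(f) for f in lenses)
print(counts)
```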
Consequently, standard lenses were produced less frequently than the other classes. Wide-angle and long lenses were purchased very frequently: wide-angle lenses were usually considered suitable for architectural shots, while long lenses served general work, tabletop studio work, studio portraits, and landscapes. Very long lenses and telephoto lenses were used for art, nature, and wildlife photography.
In optics, the image circle is the diameter of the circle of light projected by a lens within which a distant object is rendered in focus. The size of the image circle depends on several factors, including the focal length of the lens, the aperture, and the particular design related to the intended use (e.g. portraiture, landscape, architecture, etc.).
The distribution of the image circle of modern 4×5 lenses is also quite skewed (Fig. 4). The most common value is around 210 mm, while the maximum value is over 600 mm, corresponding to the amazing Fujinon C. The smallest image circle, 145 mm, belongs to the Schneider-Kreuznach APO-Symmar 100 mm. This lens does not even cover 4×5 film; I wonder what kinds of films and cameras it was made for? It likely barely covers 9×12 cm film in a 4×5 view camera. This illustration also shows that Fuji achieved excellent results with lenses that have very large image circles but weigh much less than their counterparts. This will become even clearer in the next illustration.
Note: After reading some of the recent comments on the Grossformatfotografie forum, I’d like to clarify: the focal length categories in my analysis are based on Simmons’ definitions for the 4×5 inch format. These categories are used strictly as a statistical tool to examine documented lens distributions—not to reflect current market availability (e.g., on eBay), nor to suggest fixed standards across other formats like 5×7 or 8×10.
Factors Explaining the Image Circle
Figure 5 illustrates the relationship between focal length [mm] and image circle [mm] for all lenses included in the MKD45 dataset. To accommodate telephoto designs—whose physical length is shorter than their stated focal length—I substituted bellows draw for focal length in those cases. Following Simmons’ guideline, I estimated the bellows draw as approximately two-thirds of the focal length, using this rule of thumb where exact data was unavailable. A theoretical reference line is included in the plot as a dotted orange line, based on the approximation that the image circle is roughly 1.5× the focal length—a simplification that provides a baseline for comparison.
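As a minimal sketch of these two rules of thumb (the 2/3 bellows-draw estimate and the 1.5× baseline; the 400 mm telephoto is a hypothetical example):

```python
def effective_focal(focal_mm, is_telephoto):
    """For telephoto designs, substitute the bellows draw, estimated
    as roughly two-thirds of the stated focal length (Simmons' rule)."""
    return focal_mm * 2.0 / 3.0 if is_telephoto else focal_mm

def baseline_image_circle(focal_mm):
    """Baseline approximation: image circle ~ 1.5x the focal length."""
    return 1.5 * focal_mm

print(round(effective_focal(400, True), 1))   # -> 266.7 (hypothetical 400 mm telephoto)
print(baseline_image_circle(150))             # -> 225.0 (baseline for a 150 mm lens)
```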
The plot legend encodes lens weight using the natural logarithm of the weight, with green tones indicating lighter lenses and blue tones indicating heavier ones. This visualization highlights the substantial variability in image circle size, driven by optical design. As expected, this variation is strongly linked to lens weight, reflecting the trade-offs between coverage, portability, and construction complexity.
Figure 5 also reveals how certain manufacturers have carved out specialized niches by producing lenses with unique design characteristics. One notable example is the Fujinon SW 105 mm f/8 (data from its NSW variant), represented by a greenish dot outlined in red. This lens stands out as one of the few wide-angle options at 105 mm that delivers an exceptionally large image circle—a rare combination in large-format optics. The additional red squares in the figure highlight the focal lengths I selected for my own system: 75 mm, 150 mm, 210 mm, 300 mm, and 500 mm. This sequence offers a smooth progression and has proven well-suited to my photographic needs.
The next logical step is to explore whether other specifications in the MKD45 dataset can be used to predict the image circle. Searching for a robust empirical relationship may help answer one of my central questions—and uncover new insights into these remarkable optical instruments.
The Theoretical Relationship of the Image Circle
Theoretically, the diameter of the image circle at f/22 (\(d\) [mm]) is linked to the focal length (\(f\) [mm]) and the angle of coverage (\(\alpha\) [rad]) by the following equation: \(d = 2 f \tan(\frac{\alpha}{2})\). Applying this equation to the data provided in MKD45 yields the green triangles depicted in Fig. 6. These theoretical values show excellent agreement (\(r^2=0.9988\)) with the actual image-circle data in the dataset (blue dots). The total image circle is known to also depend on the construction of the lens, so the small deviations between the actual and the theoretical values can be attributed to the respective designs. The theoretical values can be corrected with a lens-dependent factor \( k \), namely \( d = 2 k f \tan( \frac{\alpha}{2} ) \). \( k \) can be modelled as a power law that can be estimated as follows:
\[ k = x_1^{b_1} x_2^{b_2}\ldots x_n^{b_n} . \]
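As an illustration, the theoretical equation with its optional correction factor can be written in a few lines (Python here for brevity, although my scripts are in R; the 150 mm / 72° example lens is hypothetical):

```python
import math

def image_circle(focal_mm, coverage_deg, k=1.0):
    """Theoretical image circle d = 2 * k * f * tan(alpha / 2),
    with an optional lens-dependent correction factor k."""
    alpha = math.radians(coverage_deg)
    return 2.0 * k * focal_mm * math.tan(alpha / 2.0)

# Hypothetical example: a 150 mm lens with a 72-degree angle of coverage
print(round(image_circle(150, 72), 1))  # -> 218.0
```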
In a first step, the most likely predictors for \( k \) (those with the lowest cross-correlation to each other) were selected from the MKD45 dataset. The selected predictors for the correction factor are the following specifications:
- Maximum aperture [mm]: \( maxF = x_1 \)
- Minimum aperture [mm]: \( minF = x_2 \)
- Tilt degree landscape [˚]: \(tilDl = x_3\)
- Weight [g]: \(W = x_4\)
- Filter size [mm] \(filS = x_5 \)
- Apochromatic ( \( 1=T \; 0.01=F \) ) \(APO = x_6 \).
With 6 potential predictors, there are 63 possible predictor combinations. All possible models were generated automatically, and their efficiency was assessed using the AIC metric; the lowest AIC value indicates the best model.
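The exhaustive subset search can be sketched as follows. My actual scripts are in R; this is a minimal Python illustration with an ordinary-least-squares fit standing in for the actual model structure:

```python
import itertools
import math
import numpy as np

def aic_least_squares(X, y):
    """Fit y ~ X by ordinary least squares and return the AIC
    (Gaussian likelihood; parameters = coefficients + intercept + sigma)."""
    A = np.column_stack([np.ones(len(y)), X])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ coef) ** 2))
    n, k = len(y), A.shape[1] + 1
    return n * math.log(rss / n) + 2 * k

def best_subset(X, y, names):
    """Evaluate every non-empty predictor subset; lowest AIC wins."""
    best_aic, best_names = math.inf, None
    for r in range(1, len(names) + 1):
        for cols in itertools.combinations(range(len(names)), r):
            aic = aic_least_squares(X[:, list(cols)], y)
            if aic < best_aic:
                best_aic, best_names = aic, [names[c] for c in cols]
    return best_aic, best_names
```

With 6 candidate predictors the inner loop evaluates exactly the 63 subsets mentioned above.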
The corrected theoretical model is shown in Fig. 6 (best fit). The selected predictors were: maxF, minF, aCur, W, and APO. This model has a very high coefficient of determination (\( r^2=0.9991 \)), marginally better than the theoretical equation; for practical applications, the theoretical equation is therefore sufficient. The high level of agreement indicates that the MKD45 dataset is fully consistent with the theory. The remaining errors could be due to the truncation of decimals in the data supplied by the manufacturers. The yellow points obtained with the corrected theoretical model lie almost exactly on the orange line, which indicates a near-perfect match between the actual and predicted image circles.
This result answers my third question.
Factors Determining the Historical Quoted Price
The first step was to select the most likely predictors, i.e. those with the least cross-correlation with the others. They were then normalized to the interval (0,1], which facilitates the convergence of the parameter-optimization algorithms in R. Initially, three types of models were considered plausible:
1) multilinear
\[ y = b_0 + b_1 x_1 +\ldots + b_n x_n \;, \]
2) exponential with a multilinear exponent
\[ y = e^{ b_0 + b_1 x_1 +\ldots + b_n x_n } \;, \]
3) power law
\[ y = b_0 x_1^{b_1} x_2^{b_2} \ldots x_n^{b_n} \;. \]
The selected predictors to model the quoted Price [US$ 2002] = y are the following specifications:
- Focal length [mm] (fLen = x1)
- Maximum aperture [f-stop] (maxF = x2)
- Minimum aperture [f-stop] (minF = x3)
- Image Circle [mm] (iCirc = x4)
- Tilt degree landscape (tilDl = x5)
- Angle of coverage [deg] (aCvr = x6)
- Weight [g] (W = x7)
- Filter size [mm] (filS = x8)
- Apochromatic [-] (1=T/0.01=F) (APO = x9)
With 9 potential predictors, there are 511 combinations of variables. All possible models for the three model structures were generated automatically, and their efficiency was evaluated using the AIC metric; the lowest AIC value indicates the best model. This procedure was repeated for all model structures. The best models for the three selected structures are shown in Fig. 7. A jackknife cross-validation technique was used to test the robustness of the best model of each structure. The jackknife root mean square errors (jRMSE) for these models were 607, 400, and 486 [US$ 2002], respectively; the lowest jRMSE corresponds to the exponential-multilinear model.
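The jackknife (leave-one-out) cross-validation step can be sketched as follows; again a Python illustration, with a generic fit/predict pair standing in for the actual model structures and synthetic data in place of MKD45:

```python
import math
import numpy as np

def jackknife_rmse(x, y, fit, predict):
    """Leave-one-out cross-validation: refit the model n times,
    each time predicting the single held-out observation."""
    n = len(y)
    errors = []
    for i in range(n):
        keep = np.arange(n) != i
        params = fit(x[keep], y[keep])
        errors.append(y[i] - predict(x[i], params))
    return math.sqrt(float(np.mean(np.square(errors))))

# Synthetic example with a simple linear model y ~ b0 + b1 * x
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 50)
y = 3.0 + 2.0 * x + rng.normal(0.0, 0.1, 50)
fit = lambda xs, ys: np.polyfit(xs, ys, 1)
predict = lambda xi, p: np.polyval(p, xi)
print(round(jackknife_rmse(x, y, fit, predict), 3))
```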
There is large variability for lenses priced below US$ 3000 (2002 prices). These prices do not reflect current retail prices, which are driven mainly by “recommendations” that increase demand against short supply (the market is almost entirely second-hand). It would be interesting to survey current prices for these lenses (e.g. on eBay) and see how these models have held up. The 2002 US dollar figures can be adjusted for inflation to get an idea of how much these lenses might cost in current US$.
My interest in this empirical analysis was to gain insight into which factors were driving the 2002 prices. In the case of the exponential model, the selected predictors are the usual suspects: fLen, minF, iCirc, W, and filS. Interestingly, APO did not appear among the best predictors. For anyone who would like to try it, here are the coefficients of the model: b0,…,b5 are 2.59e2, 3.05e-4, 8.76e-3, -1.62e-3, 2.51e-4, and 1.74e-2, respectively. The exponential model has a very high coefficient of determination (r2 = 0.89). The cross-validation results show that most of the coefficients are significant. The unexplained variability (deviation from the mean) indicates advantages or disadvantages of the respective lenses.
How Unique Are Thalmann’s Future Classics?
Tim Parkin and Richard Childs, when discussing Kerry Thalmann’s well-known “Future Classics”, articulated a dilemma that I often face when selecting large format lenses. They raised a fair question:
“These lenses are all very, very good, but do they deserve the price premium they have inevitably gained?”
Their answer? They’re not sure. And frankly, neither am I. Thalmann’s selections—based on personal experience, aesthetics, and specific needs—are valuable but inherently subjective. As Parkin and Childs rightly observed, the only certainty is that the list has “probably single-handedly doubled the price of every lens listed.”
Motivated by this, I set out to explore whether these so-called Future Classics exhibit objectively measurable traits that set them apart. To do so, I conducted a simple but informative statistical test. In the histograms shown in Figure 8, I plotted the deviations from the mean for several features across all lenses in the MKD45 dataset. If Thalmann’s selections consistently appear at the extremes of these distributions, that would suggest distinctive characteristics. If not, their “specialness” may be more a product of perception than measurable distinction. For this analysis, I selected five features: weight, historical quoted price, image circle, and two efficiency metrics—image circle per focal length and weight per focal length.
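The core of this test, expressed as a sketch (Python illustration; the feature values and the marked subset are hypothetical):

```python
import numpy as np

def standardized_deviation(values, subset_idx):
    """Deviation of each selected lens from the dataset mean,
    in units of the standard deviation of the whole dataset."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return z[subset_idx]

# Hypothetical weight-per-focal-length values [g/mm] for ten lenses;
# indices 1 and 4 stand in for two "Future Classics" picks
wpf = [2.1, 1.2, 2.8, 3.0, 1.1, 2.4, 2.6, 2.2, 2.9, 2.5]
print(standardized_deviation(wpf, [1, 4]))
```

Selections that consistently land in the tails of these standardized distributions would qualify as objectively distinctive.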
The results of the analysis are presented in Figure 8. Without delving into formal statistical testing, a clear pattern emerges: the primary distinguishing factor for lenses in Thalmann’s Future Classics list appears to be weight. Most of the lenses in this group rank among the lightest per focal length in the entire MKD45 dataset. In contrast, the selection shows no notable advantages in other features—such as image circle size, quoted price, or optical efficiency—when compared to the broader lens pool.
Based on available documentation, it seems no advanced metrics—like those derived from MTF curves—played a role in defining the Future Classics list. (And indeed, MKD45 lacks this level of optical data.) Seeing these results, I was genuinely surprised by how much influence and market impact this selection has had over the years.
If you have the budget, owning a Future Classic may be a solid investment, as Parkin and Childs suggest—largely because of the speculative value this list has created over the past two decades. But if you’re like me, you’ll likely prefer to make your own decisions. That’s perfectly reasonable: all of these lenses are already excellent in terms of optical quality. What truly matters is selecting the right tool for your needs—based on specifications, condition, and price—not a badge of collector prestige.
Classifying 4×5 Lenses: A Detailed Exploration
There are numerous ways to classify large-format lenses—but the approach presented here is different in a key respect: it is data-driven and multivariate, based on the full range of features available in the MKD45 dataset. Rather than relying solely on traditional optical categories or anecdotal groupings, this classification leverages objective, quantitative variables.
The terms I use—wide, normal, long, and very long—are explicitly defined for the 4×5 inch format. For instance, a 150 mm lens is considered “normal” on 4×5, but it would function as a wide-angle lens on 8×10, or a short telephoto on 6×7. These definitions are format-relative, and the present analysis is strictly framed within the 4×5 context. Misinterpretations often arise when such labels are applied without reference to format.
To develop a more nuanced classification, I applied the k-means clustering algorithm using all (normalized) variables in the dataset, with equal weight assigned to each. The model was configured to form ten clusters, allowing for the emergence of subclasses within conventional categories like wide, standard, and long lenses. Interestingly, one lens emerged as an outlier, forming a standalone class: the APO-Tele Xenar TM.
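The clustering step can be sketched as follows: a minimal NumPy implementation of Lloyd’s k-means algorithm. My actual analysis used R’s k-means on all normalized MKD45 variables with ten clusters, while the feature matrix here is synthetic:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm on a normalized feature matrix X (n x p);
    returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute centroids (keep the old centroid if a cluster empties)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic stand-in for the normalized MKD45 feature matrix
X = np.random.default_rng(2).uniform(0.1, 1.0, size=(40, 5))
labels, centroids = kmeans(X / X.max(axis=0), k=4)
```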
For visualization, I chose two metrics—image circle per focal length and focal length—to represent the clusters in a 2D plot. While this selection is admittedly subjective, it provides a clear and well-dispersed graphical overview of the lens groupings.
The results are presented in Figure 10. It’s important to emphasize that this is just one possible outcome of an unsupervised classification approach. It is not intended to replace traditional frameworks—such as the well-established scheme by Simmons—but rather to complement them. By applying statistical tools to the MKD45 dataset, this analysis offers a fresh perspective on the characteristics of lenses used with 4×5 view cameras, helping to uncover patterns that might otherwise go unnoticed.
Challenging Dogmas in Large Format Photography
In my professional life as a scientist, I’ve learned to accept—even expect—criticism when proposing new ways of thinking about data or solving problems. I also know how tough it can be, especially for younger or less-established voices, to absorb feedback when it comes laced with arrogance or harsh words from those who claim to know “too much.” Science has taught me to keep my mind open—and to pursue ideas that make sense to me, regardless of how they fit into existing dogmas. The success of my research—measured by citations and the practical applications of my models in building effective monitoring and forecasting systems—is proof that my original “crazy” ideas have turned out to be useful.
I bring the same mindset to large format photography. I explore what I find useful and meaningful, driven by curiosity. I read forums, yes—but I rarely give much weight to the opinions of self-proclaimed “experts” who speak more to enforce orthodoxy than to share insight. There are valuable exceptions—photographers like Mat Marrash are an example of those who generously share knowledge and encourage experimentation.
In this case, I applied statistical reasoning to the classification of lenses and proposed a structure that someone in a forum dismissed as “denkwürdig” (peculiar). But this kind of response overlooks a fundamental truth: unsupervised classifications are never perfect. They are always shaped by assumptions—choices about which features matter, which dimensions to emphasize, and how to group or differentiate. This is not a flaw; it’s a reality of exploratory data analysis. The goal isn’t to create a final taxonomy—it’s to illuminate patterns that might help users, like myself, make more informed decisions.
