Commentaries on Stanley Klein's Research Articles
A. Spatial vision modeling
A1. Threshold vision: The Modelfest Project. The Modelfest project that I helped organize has finally put the filter model to a rigorous test.
Several groups, including ours, have been developing quantitative detection/discrimination models of human vision. One problem with this effort is that researchers have rarely
applied one another's models to a common dataset. One is thus left with the worry that each particular model has only a very limited range of applicability. Two years ago, at the 1997 OSA (Optical Society) meeting, I proposed
a remedy: a competition in which different models would be applied to the same dataset. At each of the subsequent annual ARVO, OSA, SPIE (basic and applied vision
researchers) and ARVOday (Bay Area vision researchers) meetings, the evolving Modelfest group met and had discussions. Thom Carney has organized the recent ARVO and SPIE meetings and I have organized
the other ones. Beginning at the 1998 ARVOday meeting the group decided to focus on collecting a large dataset from many labs. For Year 1 we decided to gather detection thresholds on a very diverse
set of 44 stimuli (see Thom Carney's web site www.neurometrics.com for details). At present nine different labs are each contributing one or more observers to the Year 1 data collection.
130. Carney T., Klein S.A., Tyler, C.W., et al.(1999). "The development of an image/threshold database for designing and testing human vision models". Human Vision and Electronic Imaging IV. Rogowitz & Pappas, Eds, Proc. SPIE 3644, 542-551.
This is the first group Modelfest paper. It presents the rationale for the Modelfest project, gives the display specifications and viewing conditions, and presents the Year 1 battery of 44 stimuli with their rationale.
Silverstein, D.A. & Klein, S.A.(1999). "A regression-tuned, multi-resolution visual detection model".
This paper is a chapter from Amnon Silverstein's 1999 PhD thesis. The goal of this paper was to see whether the standard model with oriented filters could predict detection thresholds for a battery of 17 widely diverse stimuli. In addition to Gabor-like patches of different lengths and spatial frequencies, we also included localized dots, blurred patterns and several types of high spatial frequency textures that are used for half-toning uniform fields. A special feature of our model was that it enabled us to do a nonlinear search for the optimal parameters in a manner that kept the number of free parameters constant as we altered the number and characteristics of the underlying filters. Thresholds were measured on three observers. We were pleased to discover that elongated mechanisms with reasonable orientation and spatial frequency bandwidths were able to fit the large dataset very well. For the tiny dot stimulus the human observers were more sensitive than the model prediction. This could imply that there are tiny circular mechanisms in addition to the orientation-tuned mechanisms. An extended version of this model has been successfully used to fit the Modelfest data.
- close section -
A2. Suprathreshold vision, successes. The test-pedestal approach
Starting with my first papers with Charles Stromeyer (Klein, Stromeyer & Ganz, 1974; Stromeyer & Klein, 1974), I have been developing filter models of suprathreshold spatial
vision. However, about eight
years ago I began to suspect that the success of the diversity of models was due more to the nature of the stimuli than to the cleverness of the modelers. The "Template Observer", or as we now call it,
the test-pedestal approach, offers a method of predicting discrimination thresholds on a variety of tasks without making assumptions about model mechanisms or filters. This approach is similar to the
original argument by Campbell and Robson that a square wave grating can be distinguished from a sinusoid when the difference pattern (mainly the third harmonic in this example) is at its own
threshold. The next three papers show that this approach does a good job in predicting hyperacuity thresholds, blur thresholds and motion thresholds without requiring a parameter-laden model. The
enterprise needs to focus on deviations from this test-pedestal prediction. In each of the following three papers we discuss how the deviations from the first order test-pedestal predictions might
be understood in terms of a mechanistic filter model.
106. Carney, T. & Klein, S.A. (1997). "Resolution acuity is better than vernier acuity". Vision Research, 36, 525-539.
128. Carney, T. & Klein, S.A. (1999). "Optimal spatial localization is limited by contrast sensitivity". Vision Res. 39, 503-511.
These two studies compare vernier and bisection hyperacuity, contrast discrimination and blur/resolution thresholds by using the same test stimulus for each task. Consider, for example, vernier acuity of a thin line. A vernier line target can be decomposed into a straight line pedestal plus a dipole test stimulus added to half the line. Instead of specifying the vernier threshold in units of min of arc it can be specified in dipole strength units (%min2). The connection between the two units is threshold (min) = Test (%min2) / Pedestal (%min). A similar decomposition can be made for edge blur discrimination (in dipole units): edge blur can be decomposed into a sharp edge pedestal and a dipole as the test pattern which blurs the edge. Carney and Klein (1997, item #106) showed that thresholds for resolution or edge blur are lower than those for vernier. The same situation is apparent for two-line resolution and dipole vernier acuity, where the test is a quadrupole in both cases (Carney & Klein, 1997). The realization that hyperacuity (position acuity) is worse than blur discrimination may be surprising at first, but it becomes reasonable when one takes the spatial frequency content of the pedestal and test into account. A similar result is shown for a three-line bisection task (Carney & Klein, 1999). The differences in thresholds across tasks are why spatial filter based models are needed. Our results are being used to guide the development of a vision model that can predict these task dependent thresholds by accounting for the multitude of visual masking effects.
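The unit conversion at the heart of this decomposition is simple enough to state in a few lines; a minimal sketch with hypothetical numbers (the function name and values are illustrative, not data from the papers):

```python
# A minimal sketch of the test-pedestal unit conversion described above,
# with hypothetical numbers: a vernier offset threshold in min of arc
# equals the dipole test threshold (%*min^2) divided by the line
# pedestal strength (%*min).

def vernier_offset_min(test_pct_min2, pedestal_pct_min):
    return test_pct_min2 / pedestal_pct_min

# E.g. a 0.5 %*min^2 dipole threshold on a 50 %*min line:
offset_min = vernier_offset_min(0.5, 50.0)   # 0.01 min of arc
offset_sec = offset_min * 60.0               # 0.6 sec of arc
```

In these difference-pattern units, the apparently extraordinary hyperacuity figure is just the visual system's ordinary sensitivity to the dipole test.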
The test-pedestal approach has radically changed our understanding of many hyperacuity and blur detection tasks. In the past, hyperacuity was measured in spatial units (sec of arc) and special devices or filter models were invoked to explain the very low thresholds (0.6 sec of arc in Carney & Klein, 1999). In the test-pedestal approach the thresholds are instead measured in the units of the difference pattern involved in the discrimination task. In these units the hyperacuity thresholds are easily predicted within a factor of two, based on the visual system's sensitivity to that difference pattern. What remains is to explain the factor-of-two differences and the nature of the masking function under different test conditions.
116. Beard, B.L., Klein, S.A. & Carney, T. (1997). "Motion thresholds can be predicted from contrast discrimination". J. Opt. Soc. Am. A. 14, 2449 - 2470.
This article uses the test-pedestal (Template Observer) approach to relate motion thresholds to contrast discrimination and contrast detection thresholds. The observer's task is to detect a counterphase test grating added to a static pedestal grating of the same spatial frequency. When the test is added in-phase the task is contrast discrimination: the grating's contrast flickers but the grating is stationary. When it is added with a 90 deg spatial phase shift the task becomes one of motion discrimination: the pedestal grating shifts back and forth with nearly constant contrast. We explored a very wide range of spatial and temporal frequencies. We offer two models for explaining the effects observed: one involving motion mechanisms, the other involving counterphase mechanisms. The former does better. This paper gives a wonderful overview of the power of our methodology in predicting a wide variety of thresholds in terms of contrast discrimination.
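The in-phase versus quadrature distinction is just trigonometry; a sketch of the algebra, with P the pedestal contrast, T the test contrast, k the spatial frequency and ω the temporal frequency (symbols introduced here for illustration):

```latex
% In-phase test: contrast flicker of a stationary grating
P\cos(kx) + T\cos(\omega t)\cos(kx) = \bigl[P + T\cos(\omega t)\bigr]\cos(kx)
% Quadrature (90 deg spatial shift) test: for T \ll P, a grating of
% nearly constant contrast whose position oscillates
P\cos(kx) + T\cos(\omega t)\sin(kx) \approx P\cos\!\Bigl(kx - \tfrac{T}{P}\cos(\omega t)\Bigr)
```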
- close section -
A3. Suprathreshold vision, failures. The role of noise.
The next three articles present an overview of our work on spatial vision. The theme here is that there are many conditions in which there is more masking than predicted by
present masking models.
113. Klein, S.A., Carney, T, Barghout-Stein, L. & Tyler, C.W. (1997). "Seven types of visual masking". Human Vision and Electronic Imaging II. BE Rogowitz & TN Pappas, Eds, Proc. SPIE 3016,13-24.
114. Barghout-Stein, L., Tyler, C.W. & Klein, S.A. (1997). "Partitioning mechanisms of masking: contrast transducer versus divisive inhibition". Human Vision and Electronic Imaging. BE Rogowitz & TN Pappas, Eds, Proc. SPIE 3016, 25 - 33.
The first of this pair presents an overview of the multiple dimensions of masking. We feel that an important hindrance to progress in understanding masking is that researchers are trying to shoehorn a number of very different types of masking into the narrow 'contrast-gain' formalism. We point out that there are many examples (e.g. hyperacuity tasks) where there is substantially less masking than is predicted by contrast gain control and there are even more examples where there is substantially more masking than is predicted (e.g. when the observer misrepresents the background). An abbreviated classification of the types of masking considered is: 1) an intrinsic contrast nonlinearity within a mechanism, 2) a divisive gain control operator based on surrounding mechanism activation, 3) masking by multiplicative noise, 4) decision stage noise that intrudes from neighboring mechanisms.
The second article (#114) includes experiments designed to isolate lateral masking (pooled gain control) from local masking. This study investigates the visibility of a tiny Gabor patch (one cycle at 8 c/deg) in the fovea and a scaled patch in the periphery, under three types of masking: matched pedestal, annular mask, and disk mask. Each masker was matched in spatial frequency and orientation to the test pattern. Threshold vs. pedestal contrast curves were measured. Several striking features of the data are: a) In the periphery all three maskers produce substantial masking, with the matched pedestal producing the most masking. b) In the fovea there was greatly reduced masking for all maskers with no masking for the annulus. One conclusion is that there is minimal lateral gain control in the fovea. The periphery shows lateral masking and further experiments are needed to determine whether the lateral masking in the periphery is due to lateral gain control or to attentional problems with the observer getting the test and pedestal mixed up.
135. Levi D.M., Klein S.A. & Carney T. (1999). "Unmasking the mechanisms for vernier acuity". In Press Vision Research
The experiments reported here are the first of a new broad range of experiments we will be working on for the next few years. Our narrow goal is to discover the mechanisms underlying vernier acuity by using a masking technique. Our broader goal is to gain insight into the mechanisms and decision strategies of suprathreshold pattern discrimination. The test target was a pair of thin, vertical, sinusoidal ribbons 3 min wide with a 3 min separation; the vertical spatial frequencies ranged from 1 c/deg to 24 c/deg. We used thin vertical ribbons to isolate the vernier cue. The masker was a full field sinusoidal grating with a wide variety of spatial frequencies and orientations. For 1 c/deg test ribbons we found the strongest masking (12-fold threshold elevation) with a 5 c/deg masker tilted so that its vertical spatial frequency was 1 c/deg and its horizontal spatial frequency was around 5 c/deg. The masking had broad tuning horizontally and narrow tuning vertically. We argue that the pattern and magnitude of the masking eliminates a broad class of filter models as being responsible for the vernier judgment. We show that a template model, in which the underlying mechanisms are tuned to the discrimination cue (very 'stubby' mechanisms rather than the circular or elongated filters found in physiological studies), does an excellent job in accounting for the data.
- close section -
118. Haddad Jr., H., Klein, S.A. and Baldo, M.V. C.(1997). "The contribution of attentional and pre-attentional mechanisms on the perception of temporal order". In: Neuronal bases and
psychological aspects of consciousness. World Scientific, C Taddei-Feretti, C. Musio eds. 339-342.
I am continuing a collaboration with Marcus Baldo, who was a postdoc in my lab five years ago. We are investigating the flash-lag effect, in which the location of a moving dot is compared to the location of a flashed dot. Romi Nijhawan showed that the moving dot was perceived to be ahead of its true location at the instant of the flash. Nijhawan explains the effect in terms of a prediction mechanism. We have recently submitted a paper to Nature Neuroscience demonstrating a strong attentional basis for the lag. The attached, older paper (item #118) investigated stimulus manipulations and controls on a slightly different task (no motion) that indicate there are strong attentional effects controlling the delay.
120. Klein, S.A. (1998). "Double-judgment psychophysics for research on consciousness: Application to blindsight". in Toward a Science of Consciousness: The Second Tucson Discussions and Debates. Hameroff, Kaszniak, Scott, eds. 361-369.
133. Klein S.A. (1999). "Do apparent temporal anomalies require non-classical explanation?" In Press, Toward a Science of Consciousness: The Third Tucson Discussions and Debates. Hameroff, Kaszniak, Chalmers, eds. 343-357.
The past 10 years have seen a transformation in the legitimacy of the scientific study of consciousness. Newsome, Logothetis, Maunsell and many others are searching for the neural correlates of visual awareness. The media and the public have become captivated by the topic, as seen in the numerous popular books coming out each year. The biennial Tucson conference titled "Toward a Science of Consciousness" brings together a broad range of people, including many well-known scientists and philosophers, examining all facets of the subject. I have been bothered by the general ignorance of the "consciousness community" regarding psychophysical methodologies that are relevant to many of the questions being asked. The two papers listed above were written following the 1996 and 1998 conferences, to be included in the conference proceedings. The goal of these papers is to introduce people from other fields to the methodology of visual psychophysics. In the first paper (#120) I examine the issue of blindsight from the point of view of discriminating a response bias effect from a true sensitivity effect. In the second paper (#133) I reanalyze a set of famous data collected by Ben Libet (from UCSF) comparing subjective asynchronies of cortical vs. peripheral stimulation of the somatosensory system. In both papers I point out that there are relatively simple, plausible mechanisms that can account for the data, rather than the exotic mechanisms that have been proposed by others.
- close section -
As can be seen from the next group of papers, the successful collaboration between Dennis Levi and myself that started about 20 years ago is still going strong. Several of
these papers have nothing to do with amblyopia or peripheral vision, but are included here because they are closely associated with an adjoining paper applying the methods to amblyopes.
119. Wang, H., Levi, D.M. & Klein, S.A.(1998). "Spatial uncertainty and sampling efficiency in amblyopic position acuity". Vision Research., 38, 1239-1251.
This paper applies the equivalent input noise model of Pelli (1990) to quantifying the equivalent position uncertainty and sampling efficiency of normal and amblyopic vision. The results show that both anisometropic and strabismic amblyopes have increased equivalent positional uncertainty, whereas strabismics (but not anisometropes) show a dramatic reduction in sampling efficiency - consistent with the notion that strabismic amblyopia is accompanied by cortical undersampling. In unpublished studies, we have made similar measurements in normal peripheral vision, with very high contrast samples, and find that peripheral sampling efficiency is normal. We believe it is critically important to make similar high contrast measurements in strabismic amblyopes to determine the nature of their undersampling.
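The logic of the equivalent input noise method can be sketched in a few lines. The following is an illustration of the general idea (after Pelli, 1990) rather than the paper's exact fitting procedure; the function name and the synthetic numbers are my own:

```python
# Sketch of the equivalent-input-noise logic: squared threshold grows
# linearly with the variance of externally added positional noise,
#     T^2 = k * (sigma_eq^2 + sigma_ext^2),
# so a straight-line fit of T^2 against sigma_ext^2 recovers the
# equivalent internal uncertainty sigma_eq from intercept/slope.

def fit_equivalent_noise(sigma_ext, thresholds):
    xs = [s * s for s in sigma_ext]      # external noise variance
    ys = [t * t for t in thresholds]     # squared thresholds
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return (intercept / slope) ** 0.5    # sigma_eq

# Synthetic observer with sigma_eq = 2.0 (hypothetical units), k = 1.5:
ext = [0.0, 1.0, 2.0, 4.0, 8.0]
thr = [(1.5 * (2.0 ** 2 + s * s)) ** 0.5 for s in ext]
sigma_eq = fit_equivalent_noise(ext, thr)   # recovers ~2.0
```

Sampling efficiency enters through the constant k, by comparing the human slope to that of an ideal observer given the same samples.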
107. Beard, B.L., Levi, D.M. & Klein, S.A.(1997). "Vernier acuity with non-simultaneous targets: The cortical magnification factor measured by psychophysics". Vision Research, 37, 325-346.
In 1984, Levi and I proposed a set of psychophysical tasks that could reveal the cortical magnification scaling factor. In subsequent years other researchers have introduced tasks that muddy the waters. This paper represents a thorough review of the literature, and a new approach to measuring the variation in the precision of "local sign" information with eccentricity. In the present paper we suppress the responses of intruding contrast sensitive spatial filters by introducing a temporal asynchrony between the paired stimuli. Our results suggest that the precision of local sign information is degraded by a factor of two at an eccentricity of approximately 0.8 deg, consistent with recent estimates of both human and monkey cortical magnification.
117. Levi, D.M., Sharma, V. & Klein, S.A.(1997). "Feature integration in pattern perception". Proc. Nat. Acad. Sci., 12742-12746.
127. Levi, D.M., Klein, S.A. & Sharma, V.(1999). "Position jitter and undersampling in pattern perception". Vision Res. 39, 445-465.
This pair of papers represents an attempt to study how features combine to produce a recognized object (the letter E in this case) in normal (#117) and amblyopic (#127) vision. Our initial results show that recognition of simple objects is very robust to jittering of the feature positions. The orientation of the pattern can be reliably identified when the jitter standard deviation is as large as about 0.5 times the feature separation. Normal observers can also reliably identify the pattern orientation with just 40-50% of the samples. Interestingly, strabismic amblyopes and the normal periphery show a jitter tolerance essentially identical to that of the normal fovea; however, they need more samples (70-80%) to correctly identify the target orientation. One of the interesting features of these papers is the development of an ideal observer for the undersampling manipulation. In comparing human to ideal observers we found that our observers had efficiencies greater than 50%.
139. Levi D.M., Klein S.A., Sharma V. & Nguyen L.(1999). "Detecting disorder in spatial vision". Submitted to Vision Res.
140. Levi D.M. & Klein S.A.(1999). "Seeing circles: What limits shape perception?" Submitted to Vision Res.
One of the proposals for the extra loss in strabismic amblyopes is that there is an increased "intrinsic" topographic disorder. We tested this hypothesis by measuring thresholds for detecting disorder either in a horizontal string or a circle of N Gabor samples in normals and amblyopes over a wide range of feature separations. We also estimated the equivalent intrinsic spatial disorder using an equivalent noise approach. Our results indicate that equivalent intrinsic disorder depends strongly on separation even for fixed Gabor scale, violating the hypothesis being tested.
136. Levi D.M., McGraw P.V. & Klein S.A.(1999). "Vernier and contrast discrimination in central and peripheral vision". In Press Vision Res.
The test pattern, a pair of thin vertical ribbons each 3 min wide, is the one discussed in the Levi, Klein & Carney (1999) paper (#135 above). Each ribbon is a long cosine grating. This paper is the latest in our 'test-pedestal' series in that contrast discrimination is directly compared to vernier acuity. We explore how ribbon width affects the two tasks differently. We also compare foveal and peripheral processing. We found a number of surprises. For standard vernier targets (long gratings) we had previously reported that strabismic amblyopes had an extra loss in vernier acuity as compared to contrast discrimination, even though the two tasks differ only in a 90 deg relative phase shift of the test and pedestal. For the thin ribbons of the present experiments there was no extra loss in vernier acuity. We explored the conditions that produced the extra loss found in strabismic amblyopes and in peripheral vision.
137. Sharma V., Levi D.M. & Klein S.A.(1999). "Under-counting features and missing features: Evidence for an attentional deficit in strabismic amblyopia". Submitted to Nature Neuroscience.
Abnormal visual development in strabismic amblyopia is known to have drastic effects on visual perception and on the properties of neurons in primary visual cortex (V1). We set out to test the notion that amblyopia may also have consequences for higher visual areas by asking humans with amblyopia to count briefly presented features. Our results show that strabismic amblyopes cannot count accurately with their amblyopic eye: they markedly underestimate the number of features. One of my contributions to this paper was the suggestion to count missing features. This is accomplished by starting with a 7 x 7 array of features and removing several of them. We found that the amblyopic eye exhibits much stronger undercounting errors than the non-amblyopic eye. This underestimate of missing features is evidence that the amblyopic loss is not due to low level factors (blur, visibility, crowding). Rather, the counting deficits of strabismic amblyopes reflect a higher level limitation in the capacity of information that the amblyopic visual system can attend to and individuate. We also carried out a cueing experiment to determine whether amblyopes had a relatively normal ability to switch attention based on a cue. They had that ability.
- close section -
The publications in this category have been directed at the image processing community. Our goal has been to bring knowledge from the vision research community to the
engineers who are developing algorithms for image enhancement, image compression and image quality. We have discovered that there is a lack of data that is relevant to the engineers, and we need to gain a much
better understanding of the different types of masking.
104. Klein, S.A., Hu, Q.J. & Carney, T.(1996). The adjacent pixel nonlinearity: Problems and solutions. Vision Res., 36, 3167-3181.
115. Carney, T. & Klein, S.A.(1997). Gray scale adjacent pixel luminance nonlinearity compensation with three color guns. Color Imaging: Device-Independent Color, Color Hard Copy, and Graphic Arts II. G.B. Beretta & R. Eschbach, Eds. Proc. SPIE 3018, 188-197.
This pair of articles addresses how to deal with the striking nonlinearity between adjacent pixels in the raster direction of cathode ray tube displays. This issue is very important for researchers who want to combine adjacent pixels to achieve visual effects such as sub-pixel position shifting or halftoning to increase the number of gray levels, or simply to display high spatial frequency information. We developed a novel two-dimensional lookup table for minimizing the error caused by this nonlinearity, together with a number of novel ideas for avoiding the nonlinearity while maintaining a large gamut of luminances.
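To make the two-dimensional lookup table idea concrete, here is a hedged toy sketch. The CRT model (`crt_luminance`, a luminance pulled toward the preceding pixel) and all numbers are invented for illustration; the published table was built from measurements of a real display, not from a formula:

```python
# Hypothetical sketch of a 2-D compensation table: the luminance of a
# pixel is assumed to be pulled toward the preceding pixel along the
# raster, and the table maps (previous gun value, desired luminance)
# to the gun value to send.

LEVELS = 32   # coarse grid for the sketch

def crt_luminance(prev_g, cur_g, pull=0.2):
    """Toy nonlinearity: rising transitions undershoot the target."""
    return (1 - pull) * cur_g + pull * min(prev_g, cur_g)

# Inverse table: for each previous gun value, the gun value whose
# modeled luminance best matches each desired level.
inverse = [[min(range(LEVELS),
                key=lambda g: abs(crt_luminance(p, g) - want))
            for want in range(LEVELS)]
           for p in range(LEVELS)]

def compensate(row):
    """Walk along the raster, correcting each pixel given its
    (already corrected) predecessor."""
    out, prev = [], 0
    for want in row:
        g = inverse[prev][want]
        out.append(g)
        prev = g
    return out
```

Note that the table must be indexed by the *corrected* previous value, since that is what the display actually receives.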
Silverstein, D.A. & Klein, S.A.(2000). Precomputing and encoding compressed image enhancement instructions. PhD dissertation and submitted to IEEE Image Processing.
This paper and its associated patent are concerned with a new method for improving the results of image compression. The problem being tackled is that image compression reduces the number of bits available to specify the image and a loss in image quality is possible. For example, if one throws away some of the high spatial frequency information, the image might appear blurred. In the decoding stage (decompression) one could use deblurring algorithms to sharpen the image. In some circumstances the deblurring might do a wonderful job of enhancing the image, but in other circumstances it might introduce unwanted ringing. Our articles and patent present a scheme that allows one to specify the regions where enhancement is useful. At the encoding stage one can have a fidelity metric compare the original image to the image that had been compressed, decompressed and then enhanced by one or more algorithms. The information about which enhancement algorithm works best for each region of the image can be hidden as a marking map inside the image. We devised a hiding scheme for the marking map based on using round-off errors.
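The round-off hiding idea can be illustrated with a generic parity scheme (my own sketch, not necessarily the published method): round each coefficient up or down so that the parity of the quantized value carries one bit of the marking map, at a cost of at most one quantization step of extra error.

```python
import math

# Hedged illustration of hiding a marking map in round-off errors:
# each coefficient is rounded toward whichever integer has the parity
# of the bit to be hidden.

def embed(values, bits):
    out = []
    for v, b in zip(values, bits):
        lo = math.floor(v)
        out.append(lo if lo % 2 == b else lo + 1)
    return out

def extract(quantized):
    return [q % 2 for q in quantized]

coeffs = [3.2, 4.7, 5.0, 9.9]   # hypothetical transform coefficients
marks  = [1, 1, 0, 0]           # which enhancement to apply per region
recovered = extract(embed(coeffs, marks))   # == marks
```

The decoder needs no side channel: it reads the marking map straight out of the parities and then applies the indicated enhancement to each region.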
122. Chang, Y.C., Carney, T, Klein, S.A., Messerschmidt, D, & Zakhor, A.(1998). Effects of temporal jitter on video quality: Assessment using psychophysical and computational modeling methods. Human Vision and Electronic Imaging. BE Rogowitz & TN Pappas, Eds, Proc. SPIE 3299, 173 - 179.
129. Carney, T., Chang, Y.C., Klein, S.A., & Messerschmidt, D.(1999). Effects of dynamic quantization noise on video quality. Human Vision and Electronic Imaging IV. Rogowitz & Pappas, Eds, Proc. SPIE 3644, 141-151.
This pair of papers presents a method of increasing the compression of low-bandwidth interactive video while actually improving image quality. Existing vision models predict reduced video quality with our method, yet our psychophysical results indicate the opposite. An improved vision model is clearly needed.
Silverstein, D.A. & Klein, S.A.(1998). "Precomputing and Encoding Compressed Image Enhancement Instructions". Patent for the technique discussed in item #22.
105. Halstead, M.A., Barsky, B.A., Klein, S.A. & Mandell, R.B.(1996). Reconstructing curved surfaces from specular reflection patterns using spline surface fitting of normals. ACM/Siggraph '96,
New Orleans, 335-342.
We developed a spline based method for calculating corneal shape from the Placido ring images. This paper presents a new method that greatly reduces the number of iterations needed to calculate corneal shape. The algorithm has advantages over the algorithms used in present machines in that it doesn't suffer from the skew ray error (see articles #111 and #112). It also has the advantage that it is an analytic expression so that we are able to calculate local curvature and other surface properties in a smooth manner.
110. Barsky, B.A., Klein, S.A. & Garcia, D.D.(1997). Gaussian power with cylinder vector field representation for corneal topography maps. Optometry and Vision Science, 74, 917-925.
121. Garcia, D.D., Barsky, B.A., & Klein, S.A.(1998). CWhatUC: A visual acuity simulator. in Ophthalmic Technologies VIII, PO Rol, KM Joos, & F Manns, Eds. Proc. SPIE 3246, 290-298.
This pair of papers involves the development of a variety of new methods for visualizing corneal topography. The first paper (#110) showed that the displays currently in use provided a distorted representation of keratoconus (a high curvature anomaly on the cornea). The previous methods had a singularity at the center that interfered with visualizing the keratoconic anomaly. Our new visualizations fared much better by eliminating the central singularity. The second paper (#121) developed a number of methods for representing refractive aspects of corneal shape. The insights developed in this paper have limited relevance when just based on corneal shape, but as will become clear in the Section B2 group of papers, they become quite important when applied to refractive surgery and the total aberrations of the eye.
111. Klein, S.A.(1997). Axial curvature and the skew ray error in corneal topography. Optometry and Vision Science, 74, 931-944.
112. Klein, S.A.(1997). Corneal topography reconstruction algorithm that avoids the skew ray ambiguity and the skew ray error. Optometry and Vision Science, 74, 945-962.
This pair of papers grew out of my worrying about a flaw in all the reconstruction algorithms in present corneal topography instruments. The present reconstruction algorithms make the assumption that all rays from the target to the camera lie in the meridional plane (the plane containing the axis of the corneal topographer). When cylinder or keratoconus is present this assumption is incorrect and a skew ray error is introduced. The first paper of this pair goes into great detail in clarifying the geometry and in developing a mathematical framework for calculating the skew ray error. The error is calculated for a number of simulated corneas. The second paper develops and demonstrates a detailed algorithm for calculating corneal shape in a manner that correctly handles skew rays. I am presently working on improving the reconstruction algorithm so that it can handle the full cornea. Present algorithms fail near the limbus where the corneal curvature can become zero and even negative.
- close section -
123. Klein, S.A.(1998). Optimal corneal ablation for eyes with arbitrary Hartmann-Shack aberrations. J. Opt. Soc. Am. A. 15, 2580 - 2588.
This is the first of many forthcoming papers on refractive surgery. There is a growing interest in improving the outcomes of the surgery. The present paper provides a detailed analysis of the optimal ablation given perfect information about corneal shape and the refractive aberrations; it sets aside issues of epithelial regrowth and corneal plasticity. I showed that the Hartmann-Shack aberrations are nearly sufficient for predicting the appropriate ablation. However, depending on the Hartmann-Shack geometry there is a small correction term based on corneal shape. I also examine the benefit of introducing a small amount of spherical aberration.
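As a first-order illustration of why wavefront data nearly suffice: removing corneal tissue of thickness t shortens the optical path by roughly (n − 1)t. The sketch below is a simplification of my own, ignoring the corneal-shape correction term the paper derives:

```python
# First-order sketch (not the paper's full analysis): a measured
# wavefront error W(x, y) suggests an ablation profile
#     t = (W - min W) / (n_cornea - 1),
# shifted so the removed depth is everywhere non-negative.

N_CORNEA = 1.376  # standard corneal refractive index

def ablation_profile(wavefront_um):
    w_min = min(min(row) for row in wavefront_um)
    return [[(w - w_min) / (N_CORNEA - 1.0) for w in row]
            for row in wavefront_um]

# A hypothetical 3 x 3 wavefront error map in microns:
W = [[0.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 0.0]]
depth = ablation_profile(W)   # central depth = 1.0/0.376, about 2.7 um
```

The small denominator (n − 1 ≈ 0.376) is why modest wavefront errors translate into several-fold deeper tissue removal.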
124. vandePol, C, Tran, H.H., Garcia, D.D., & Klein, S.A.(1998). Three-dimensional analysis of corneal image forming properties: A monocular diplopia example. OSA Technical Digest, Vision Science and its Applications. OSA Technical Digest Series, vol. 1. 219-222.
This brief conference proceeding gives a glimpse into how one can use corneal information to reveal refractive properties of the eye. We have introduced a new map to display refractive information (the point spread function) in the corneal plane (see further details in item #138). This paper shows a dramatic example of how monocular diplopia can be accurately predicted from corneal information. The eye examined has corneal warpage, probably due to eyelid pressure. The subjective measurement of the distance between the diplopic images agreed well with the estimate from ray tracing through the corneal topography. In the past year this research direction has been taken over by Dan Garcia (Computer Science) as part of his PhD dissertation. He has made a number of improvements in the accuracy of the calculations and in the display of the information.
132. Corbin J.A., Klein, S. & vandePol, C.(1999). Measuring effects of refractive surgery on corneas using Taylor series polynomials. Ophthalmic Technologies IX, PO Rol, KM Joos, F Manns, BE Stuck & M Belkin, Eds. Proc. SPIE 3591, 46-52.
131. Garcia D.D., vandePol C, Barsky B.A. & Klein S.A.(1999). Wavefront coherence area for predicting visual acuity of post-PRK and post-PARK refractive surgery patients. Ophthalmic Technologies IX, PO Rol, KM Joos, F Manns, BE Stuck & M Belkin, Eds. Proc. SPIE 3591, 303-310.
138. Klein S.A. & Garcia D.D.(2000). Alternative representations of aberrations of the eye. In Press, OSA Technical Digest, Vision Science and its Applications. OSA Technical Digest Series.
This trio of papers develops new methods for quantifying and displaying the aberrations of the eye. In the past, the aberrations were displayed in terms of the retinal point spread function or optical transfer function (like the contrast sensitivity function). With the increasing popularity of refractive surgery, there is a growing need to be able to specify the aberrations on the cornea. At present the main method for illustrating corneal aberrations is to display the wavefront height. These three articles illustrate several new methods for displaying refractive information. The first two develop new methods for displaying and quantifying the region of coherent optics in the corneal plane. The third paper is meant to be controversial. There is presently an effort to develop standards for specifying corneal aberrations. I believe that a number of errors are being made in the choice of reference axis and in the types of displays being advocated by the majority of the standards committee. In this paper we point out some of the problems with present methods of displaying the aberrations and advocate methods that reduce those problems.
- close section -
108. Klein, S.A.(1997). Photoreceptor waveguides - A simple approach. In Basic and Clinical Applications of Vision Science: The Professor Jay M. Enoch Festschrift Volume. Ed. V.
Lakshminarayanan. Kluwer Academic Pub., Dordrecht. Documenta Ophthalmol. Proc. 60, 37-41.
This brief paper was written as a tribute to Jay Enoch and his contributions to our understanding of photoreceptor waveguides. One of the problems with analyzing waveguide properties is that the calculations require delicate assumptions and advanced mathematics. In this paper I present a fairly simple Matlab program for analyzing the modes of waveguide action. The program is shown to produce reasonable results.
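In the same spirit as that Matlab program, here is a pared-down mode calculation of my own (for a symmetric slab waveguide rather than a cylindrical photoreceptor): the fundamental even TE mode satisfies u·tan(u) = w with u² + w² = V², which a simple bisection solves.

```python
import math

# Sketch: lowest even TE mode of a symmetric slab waveguide. V is the
# normalized frequency; u and w are the normalized transverse wave
# numbers in the core and cladding. The characteristic equation
# u*tan(u) = sqrt(V^2 - u^2) has its first root in (0, pi/2).

def fundamental_mode_u(V):
    f = lambda u: u * math.tan(u) - math.sqrt(V * V - u * u)
    lo, hi = 1e-9, min(V, math.pi / 2) - 1e-9   # f(lo) < 0 < f(hi)
    for _ in range(100):                         # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u = fundamental_mode_u(2.0)
w = math.sqrt(4.0 - u * u)
# The mode shape is then cos(u*x/a) inside the core and an
# exponential tail exp(-w*|x|/a) in the cladding.
```

The same fixed-point structure carries over to the cylindrical (Bessel function) case that real photoreceptors require.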
109. Corzine, J.C. & Klein, S.A.(1997). Factors determining rigid contact lens flexure. Optometry and Vision Science, 74, 639-645.
In this paper we use a device developed by Irv Fatt for measuring the flexure of rigid contact lenses. We had been puzzled by the very strong amounts of flexure found by previous authors. We found that flexure was not caused by an intrinsic surface tension, but rather it is caused by tear insufficiency. When sufficient tears are present, there is minimal flexure. When the eyelid action squeezes out the tears, so there is fluid insufficiency, flexure will occur if there is an imbalance in the toricity of the cornea and contact lens. We develop a simple mathematical analysis of how the tear volume depends on corneal and contact lens spherical and cylindrical curvatures.
125. Klein, S.A., Corzine, J, & Kung, J.(1998). Corneal topography volume maps for predicting rigid contact lens centration and motion. OSA Technical Digest, Vision Science and its Applications. OSA Technical Digest Series, vol. 1. 216-218.
One of the driving forces for my involvement with corneal topography is my interest in using corneal topography for improving contact lens fitting. This conference report is the first of our papers on this topic. We have developed a new corneal map that shows the tear volume for different contact lens locations. We hypothesize that the contact lens will move in the channels of minimal tear volume, since energy will need to be expended to flex the lens in regions where the needed tear volume is greater than the tear availability. We are continuing to work on this topic to test the hypothesis and to extend the concept to soft lenses where stretching occurs in addition to flexing.
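The tear-volume map can be sketched with toy surfaces (hypothetical radii, not clinical data): for each candidate lens centre, sum the gap between a spherical rigid lens back surface and a mildly toric cornea over the lens area, after shifting the lens so it just touches the cornea.

```python
import math

# Toy sketch of a tear-volume map. All surfaces and radii (mm) are
# invented for illustration; the real map is computed from measured
# corneal topography.

def sag(r2, R):
    """Sagittal depth of a sphere of radius R at squared radius r2."""
    return R - math.sqrt(R * R - r2)

def cornea_depth(x, y):
    """Toy toric cornea: different radii along the two meridians."""
    return sag(x * x, 7.8) + sag(y * y, 7.6)

def tear_volume(cx, cy, lens_R=7.9, lens_rad=3.0, step=0.25):
    gaps = []
    n = int(lens_rad / step)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dx, dy = i * step, j * step
            if dx * dx + dy * dy <= lens_rad ** 2:
                gaps.append(cornea_depth(cx + dx, cy + dy)
                            - sag(dx * dx + dy * dy, lens_R))
    touch = min(gaps)   # shift so the lens just touches the cornea
    return sum(g - touch for g in gaps) * step * step

center_vol = tear_volume(0.0, 0.0)
```

Evaluating `tear_volume` over a grid of centres gives the map; the hypothesis in the text is that the lens rides along the valleys of this map, where flexing the lens costs the least.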
- close section -