Commentaries on Stanley Klein's Research Articles
148. Klein, S.A. (2001). Measuring, estimating and understanding the psychometric function: A commentary. Perception & Psychophysics. 63, 1421-1455.
The psychometric function is a plot with stimulus strength on the abscissa and percent correct on the ordinate. Even though this plot of psyche vs. physics is central to the field of psychophysics, there are many misconceptions about how it can be efficiently used to assess human performance. I was co-editor, with Neil Macmillan, of a special issue of Perception & Psychophysics devoted to this topic. In my article I take the 'underdog' side on several topics and use simulations and calculations to show that: i) 'dumb' staircase methods for estimating thresholds can be as good as fancy adaptive procedures, and the 'dumb' methods even have advantages; ii) efficient adaptive methods are available for yes/no tasks; iii) some of the problems associated with 2AFC methods can be fixed by adding confidence ratings to the 2AFC judgments; iv) surprisingly strong biases can affect 2AFC staircases; v) non-parametric methods for estimating properties of psychometric functions have both advantages and disadvantages; vi) there is a surprising connection between the Weibull function and the d' function. I devoted nearly half a year of effort to this article and am quite pleased by the many areas that were covered and uncovered.
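To make point vi) concrete for readers unfamiliar with the functional forms at issue, the Weibull function is the standard parametric form fit to 2AFC psychometric data. A minimal sketch follows; the parameter values are illustrative, not taken from the article:

```python
import numpy as np

def weibull(c, alpha, beta, gamma=0.5, lam=0.0):
    """Weibull psychometric function: probability correct at stimulus strength c.
    alpha is the threshold, beta the slope, gamma the guess rate (0.5 for 2AFC),
    and lam an optional lapse rate."""
    return gamma + (1.0 - gamma - lam) * (1.0 - np.exp(-(c / alpha) ** beta))

contrasts = np.logspace(-2, 0, 9)                     # stimulus strengths
percent_correct = 100 * weibull(contrasts, alpha=0.1, beta=2.0)
```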
- close section -
150. Silverstein, D.A., Carney, T., & Klein, S.A. (2001). Modeling contrast thresholds. In Vision Models and Applications to Image and Video Processing, C.J. van den Branden Lambrecht, Ed., 53-68.
144. Carney, T., Tyler, C.W., Watson, A.B., Makous, W., Beutter, B., Chen, C.C., Norcia, A.M., & Klein, S.A. (2000). Modelfest: year one results and plans for future years. Human Vision and Electronic Imaging V, Rogowitz & Pappas, Eds., Proc. SPIE 3959, 140-151.
156. Carney, T., Klein, S.A., et al. (2002). Extending the Modelfest image/threshold database into the spatio-temporal domain. Human Vision and Electronic Imaging VII, Rogowitz & Pappas, Eds., Proc. SPIE 4662, 138-148.
My early research in vision was devoted to the development of a multichannel filter model for detection. Although the vision community felt that the 'standard' filter model did a good job, it was never adequately tested against a broad range of detection stimuli. Several years ago, as part of his PhD dissertation, my graduate student Amnon Silverstein took a major step forward in testing the adequacy of the standard filter model. He included a broader variety of stimuli than had been previously used and then used the data to constrain a multichannel filter model. Some of his results are reported in a book chapter (#150). He found that although the model worked very well, there were tantalizing hints that very tiny stimuli were more visible than the model predicted. I felt that a dataset with more stimuli and more subjects was needed to place tighter constraints on the standard model, so about four years ago my colleague Thom Carney and I organized the Modelfest project, an international collaborative effort to gather a large database of thresholds for testing vision models. In choosing the 45 test stimuli the Modelfest group included several that preliminary data had shown to be incompatible with the standard filter model (the detection of a string of same-polarity vs. opposite-polarity Gabor patches is one example). An overview of the data from the first phase of this collaboration is reported in Appendix A3. The data showed that, contrary to the reports of previous literature, none of the stimuli violated the predictions of the standard filter model. Carney and I are presently working on a detailed analysis of the data with the goal of placing constraints on the filter properties and pooling properties of the model. Appendix A4 describes a preliminary overview of the next Modelfest data acquisition effort, in which we extend detection thresholds into the spatio-temporal domain; the previous Modelfest effort was for static images. Further details are available at Thom Carney's web site, www.neurometrics.com.
- close section -
135. Levi, D.M., Klein, S.A. & Carney, T. (2000). Unmasking the mechanisms for Vernier acuity: evidence for a template model for Vernier acuity. Vision Res. 40, 951-972.
136. Levi D.M., McGraw P.V. & Klein S.A. (2000). Vernier and contrast discrimination in central and peripheral vision. Vision Res. 40, 973-988.
151. Levi, D.M. & Klein, S.A. (2002). Classification images for detection and position discrimination in the fovea and parafovea. Journal of Vision, 2, 46-65.
As discussed in #150, the multichannel filter model developed in the period from 1968-1974 does an excellent job of predicting detection thresholds for a wide class of stimuli. Several researchers (Campbell & Robson (1968), Findlay (1973), Wilson (1986), Klein & Levi (1985)) showed that the same type of multichannel model could be used to predict discrimination thresholds for suprathreshold vision, including hyperacuity. Findlay's research was particularly strong. He measured Vernier acuity of two abutting lines in the presence of an overlapping sinusoidal mask, and found fairly sharp tuning as a function of mask spatial frequency and orientation, consistent with the filter model developed for detection. The research reported in #135 was similar to Findlay's, but rather than using abutting thin lines as the Vernier target we used very short, broad sinusoidal ribbons of many cycles. The tuning of the masking function revealed that the template being used for the Vernier discrimination had minimal resemblance to known spatial frequency mechanisms. Rather, the template resembled an enveloped version of the test stimulus, similar to what a local, ideal observer would use. #136 extends the #135 work to peripheral vision and also to contrast discrimination. We found differences between Vernier acuity and contrast discrimination that are not compatible with the standard filter models, and we again demonstrated the failure of the ideal observer approach to predicting Vernier thresholds.
#151 makes use of a new technique for determining the underlying mechanisms of suprathreshold vision. Dennis Levi and I embedded detection and Vernier acuity tasks in background noise. The presence of noise offers two major advantages: i) detection and discrimination thresholds are elevated so that thresholds are in the suprathreshold regime, thus bypassing the system's intrinsic noise; ii) the external noise biases the subject's response, so that by doing a linear regression of the rating responses on the noise we were able to determine the templates (classification images) that observers used for their judgments. We used an innovative stimulus in which the test pattern and the noise each consisted of only 22 Fourier components. The sparse number of components allowed accurate templates to be calculated with about 1/3 the number of trials that would be needed with the more standard cross-correlation methods. We found that for the detection task, and more strongly for the Vernier task, the subjects' templates had greater weighting at high spatial frequencies than the ideal observer's matched template. We argue that this non-ideal template reflects the observer focusing attention more locally than the ideal case. We are presently applying the methods developed in #151 to the amblyopic visual system and to other stimuli in order to determine the nature of the underlying mechanisms and interactions.
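The regression idea can be conveyed with a toy simulation. This is a minimal sketch of the logic, assuming a hypothetical linear observer; it is not the article's actual stimuli or analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 3000, 22    # noise built from only 22 Fourier components

# Random component amplitudes on each trial, and an (unknown) observer template.
noise = rng.normal(size=(n_trials, n_components))
true_template = rng.normal(size=n_components)

# Hypothetical observer: the rating is a linear readout through the template
# plus internal noise.
ratings = noise @ true_template + rng.normal(scale=2.0, size=n_trials)

# Classification image: regress the ratings on the noise component amplitudes.
est_template, *_ = np.linalg.lstsq(noise, ratings - ratings.mean(), rcond=None)
print(np.corrcoef(true_template, est_template)[0, 1])   # close to 1 with enough trials
```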
- close section -
147. Yu, C., Klein, S.A., & Levi, D.M. (2001). Surround modulation of perceived contrast and the role of brightness induction. Journal of Vision, 1, 18-31.
A9. Yu, C., Klein, S.A., & Levi, D.M. (2002). Facilitation of contrast detection by cross-oriented surround stimuli and its psychological mechanisms. Journal of Vision, 2, 243-255.
152. Levi, D.M., Klein, S.A. & Hariharan, S. (2002). Suppressive and facilitatory spatial interactions in foveal vision: Foveal crowding is simple contrast masking. Journal of Vision, 2, 140-166.
153. Levi, D.M., Hariharan, S. & Klein, S.A. (2002). Suppressive and facilitatory interactions in peripheral vision: Peripheral crowding is neither size invariant nor simple contrast masking. Journal of Vision, 2, 167-177.
154. Levi, D.M., Hariharan, S. & Klein, S.A. (2002). Suppressive and facilitatory spatial interactions in amblyopic vision. Vision Res. 42, 1379-1394.
At present one of the hottest areas in both the psychophysics and the neurophysiology of spatial vision is the issue of surround modulation of perception. This topic is the research focus of my postdoctoral fellow, Cong Yu. In a typical task a small grating patch is presented to the fovea and its detection threshold (A9), its contrast discrimination threshold, and its perceived contrast (#147) are measured. These quantities are remeasured when the patch is surrounded by an annulus, or by other patches, whose spatial frequency, orientation, contrast and gap are varied. We found many interesting results. One of the most surprising findings, reported in A9, is the strong threshold reduction (facilitation) produced by a surround whose orientation is orthogonal to the test orientation (the 'cross' condition). It was surprising because several previous psychophysical studies, and about half of the previous single-cell studies, reported that the cross condition resulted in either no effect or inhibition rather than facilitation. We found strong facilitation when our surrounds had low contrast; the previous studies that showed no facilitation used high-contrast surrounds. Our results clarify what had previously been a confusing situation with inconsistent neurophysiological data. Our study of perceived contrast (#147) also resolves a number of conflicting reports in the previous literature. An iso (aligned orientation) surround reduces perceived contrast over a large range of conditions. Surprisingly, the cross surround switches from enhancing perceived contrast when it is very close to the test patch to decreasing perceived contrast when a gap is introduced. Another surprise is that the iso surround produces contrast enhancement when its spatial frequency is below that of the test. The pattern of enhancement or decrement in perceived contrast is decoupled from what we found in detection and discrimination studies, contrary to the claims of others whose data were more limited than ours. We are actively continuing this research area. We have come across a number of peculiar effects that will undoubtedly be part of next year's grant renewal.
Articles 152-154 examine the role of flanking stimuli in pattern detection in order to probe 'crowding' more carefully than previous studies. Most of the studies in these articles use a test target consisting of 17 Gabor patches of different sizes, arranged in the shape of a Snellen E: three rows of five patches plus two patches to fill in the side bar of the E. Targets in a Landolt C shape were also used. The target is flanked by a surrounding group of patches at different orientations and gap sizes. The target is randomly rotated to one of two or four orientations and the observer's task is to identify the orientation. One of the goals of these studies is to determine the source of the masking/crowding when the flanks are close. Hypothesis 1 is that a fixed gap size controls the scale of the crowding. Hypothesis 2 is that stimulus size sets the scale of the crowding. We found that in the fovea hypothesis 2 is valid over a 60-fold range of stimulus sizes (#152). However, in both peripheral vision and amblyopic vision hypothesis 1 is valid (#153 and #154). In peripheral vision the gap size (distance from flank to target) at which crowding occurs is about 1/10 of the eccentricity of the stimulus, a spacing that corresponds to about 1 mm of cortex, which is about a hypercolumn. At gap sizes with strong crowding the stimulus features are still quite visible. There were a number of other important findings in these studies. We examined the pattern of errors and showed that for small gaps 180 deg errors were most common. In order to explain this pattern of results we examined two models: an ideal observer model of crowding based on the detection of the difference pattern, and a model based on Fourier analysis of the stimuli (#152). The ideal observer model worked quite well, whereas the Fourier model failed, contrary to the claims of previous researchers. One of the useful aspects of the modeling done in Appendix A10 is that we developed a formalism based on an analytic Fourier transform rather than the fast Fourier transform used by previous researchers; the analytic formalism allowed us to analyze the location of the various cues more precisely than had been done previously (see the sketch following this paragraph). Another important finding is that foveal crowding, but not peripheral or amblyopic crowding, could be predicted from the facilitation/inhibition found for the detection of a single Gabor patch in the presence of flanks. As part of this study of the detection of single Gabor patches we compared two hypotheses (long-range facilitation vs. uncertainty reduction) of what causes the facilitation when a small gap is present (the hot topic explored in #147 and A9). We measured the shape of the psychometric function and found that in the presence of flanks the accelerating transducer exponent was linearized, precisely as would be expected from uncertainty reduction rather than the more popular, but unsupported, long-range interaction explanation.
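The analytic-transform formalism rests on the fact that a Gaussian-windowed sinusoid has a closed-form spectrum, so the spectrum of an entire patch array can be written as a sum of Gaussians rather than sampled with an FFT. Below is a one-dimensional sketch of the idea; the article's stimuli were two-dimensional and the parameters here are invented:

```python
import numpy as np

def gabor_spectrum(freqs, x0, f0, sigma):
    """Closed-form 1-D Fourier transform of a cosine Gabor centered at x0:
    a pair of Gaussians in frequency at +/- f0, with a phase term for position."""
    gauss = lambda f: sigma * np.sqrt(2 * np.pi) * np.exp(-2 * (np.pi * sigma * (freqs - f)) ** 2)
    return 0.5 * (gauss(f0) + gauss(-f0)) * np.exp(-2j * np.pi * freqs * x0)

freqs = np.linspace(-10, 10, 2001)
# Spectrum of a row of three patches: a sum of closed-form terms, exact at any
# frequency, with no sampling or windowing artifacts.
spectrum = sum(gabor_spectrum(freqs, x0, f0=4.0, sigma=0.25) for x0 in (-1.0, 0.0, 1.0))
```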
- close section -
139. Levi, D.M., Klein, S.A., Sharma, V. & Nguyen, L. (2000). Detecting disorder in spatial vision. Vision Res. 40, 75-95.
140. Levi, D.M. & Klein, S.A. (2000). Seeing circles: What limits shape perception? Vision Res. 40, 97-107.
137. Sharma, V., Levi, D.M. & Klein, S.A. (2000). Under-counting features and missing features: Evidence for an attentional deficit in strabismic amblyopia. Nature Neuroscience, 3, 496-501.
Articles 139 and 140 explore higher-level perceptual organization in normal, peripheral and amblyopic vision. We examined subjects' ability to detect disorder in strings or circles of Gabor patches as a function of the orientation, spatial frequency, separation, number and amount of jitter of the Gabor patches. We compared our observers' thresholds to what was expected of ideal observers with simple degradations. One of our interesting findings was that strabismic amblyopes had a marked reduction of sampling efficiency. That is, they appeared to make use of far fewer samples than the normal observers, even though individual samples were highly visible. In order to directly test this notion of reduced sampling efficiency we asked observers to count the number of briefly presented samples when the number of available samples was randomly varied from trial to trial (#137). Strabismic amblyopes undercounted the samples, even though individual samples were very visible. Many low-level explanations of the undercounting are possible: blur, visibility reduction by flanking stimuli, crowding, undersampling, or topographic jitter. We eliminated all these low-level explanations by a direct, simple experiment (#137). We had subjects count the number of missing elements from a 7 x 7 square matrix of 49 Gabor patches that had some positional jitter. From 1 to 10 samples were removed. The amblyopic eye exhibited a much larger undercounting of the missing elements than was obtained with the non-amblyopic eye. The low-level explanations of undercounting would predict that crowding or undersampling in the amblyopic eye would have produced more dropout of perceived samples, thereby increasing the number of missing samples counted. We interpret our results (an undercounting of missing samples) as reflecting a higher-level limitation on the number of features the amblyopic visual system can individuate. Based on the high level of object definition shown by our study, we speculate that this amblyopic loss may be similar to the deficit found in 'neglect' patients with lesions of parietal cortex (e.g., Balint's syndrome).
- close section -
141. Klein, S.A. (2000). Corneal topography: A review, new ANSI standards and problems to solve. OSA Trends in Optics and Photonics Vol. 35. Vision Science and its Applications, Vasudevan Lakshminarayanan, ed (Optical Society of America, Washington DC, 2000), pp. 174-177.
157. Klein, S.A., Corzine, J., Corbin, J., Wechsler, S. & Carney, T. (2002). Wide angle cornea-sclera (ocular) topography. Ophthalmic Technologies XII, F. Manns, P.G. Soderberg, & A. Ho, Eds., Proc. SPIE 4611, 149-158.
#141 is partly a review of progress in corneal topography and partly a summary of the recent ANSI standards on corneal topography. I participated in a year-long e-mail effort, plus a national ANSI (American National Standards Institute) meeting, to develop standards for corneal topography instruments. Before these standards were developed there was great confusion over the meaning of simple terms such as corneal apex and corneal power. The ANSI standards clarified many such items and also set standards on how to measure and specify the precision and accuracy of corneal topography instruments. One problem with the standards is that the ophthalmic and vision communities are largely unaware of them. When I was asked to write a review article on corneal topography (#141) for the Optical Society of America TOPS (Trends in Optics and Photonics) publication, I decided to include a section on what the ANSI committee accomplished. I also presented an overview of my opinions on the problems standing in the way of corneal topography becoming a useful tool for eye care providers. My review of the ANSI standards becomes especially interesting in Section 2e, where I suggest future modifications of the standard. I was especially bothered that insufficient thought was given to how to report the reliability and accuracy measurements of topographers.
#141 has a number of novel items not found elsewhere. For example, I did a very simple, rough calculation of the potential accuracy of height-based topographers vs. slope-based (Placido ring) topographers. The calculation assumes that image processing of the CCD images gives the same subpixel resolution for both types of topographers. I found that for the detection of a small anomaly, the height-based instruments were about a fifth as sensitive as Placido instruments. The section on unsolved problems facing corneal topography was subdivided into four categories: i) Improvements needed before corneal topographers could have a major impact on contact lens fitting. #157 provides further details on this topic. ii) Diagnosing corneal anomalies. I emphasized the need for specific shape descriptors relevant to particular anomalies rather than the generic Zernike expansion. iii) Improving refractive surgery outcomes. For example, minimal work has gone into the study of how peripheral topography might influence the cornea's biomechanical response to refractive surgery. iv) Predicting optical limitations to visual acuity. Nowadays, this topic is less relevant to corneal topography since it is in the realm of wavefront aberrations and it will be taken up in #146.
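The flavor of that rough calculation can be conveyed with a back-of-envelope sketch. The numbers below are invented purely for illustration (the article's calculation used the actual instrument geometries); the point is only that a narrow anomaly produces a slope excursion of order height/width, which favors slope-measuring instruments:

```python
# Assumed: both instrument types resolve their raw CCD images to the same
# subpixel accuracy, giving a fixed noise floor on the measured quantity
# (height for height-based devices, slope for Placido devices).
bump_height = 0.002   # mm, a small corneal anomaly
bump_width = 0.5      # mm, its lateral extent

height_signal = bump_height                # what a height-based device must detect
slope_signal = bump_height / bump_width    # peak slope excursion seen by Placido

height_noise = 0.001  # mm, hypothetical height noise floor
slope_noise = 0.001   # mm/mm, hypothetical slope noise floor

print(height_signal / height_noise)   # height-based signal/noise: 2.0
print(slope_signal / slope_noise)     # slope-based signal/noise: 4.0
```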
About 5 years ago there was great hope that every optometrist and ophthalmologist would need a corneal topographer for their practice. #157 first explores the three general needs that these instruments would satisfy: 1) Knowledge of corneal thickness is important for refractive surgery. The Orbscan topographer is the only one that measures corneal thickness as well as corneal shape, so it has become the instrument of choice for refractive surgery. Unfortunately, its cost is far above the budget of most eye care providers, so it has limited distribution. 2) One would like to know the optical properties of the eye to determine whether a person with degraded visual acuity is limited by optical aberrations or by neural factors. A few years ago corneal topographers were used for this purpose, but it now looks like instruments that measure wavefront aberrations will do a better job. 3) Contact lens fitting. This is the main item that much of the corneal topography hype of the past decade was about. It was felt that corneal topographers would usher in a new era of fitting contact lenses for the difficult-to-fit patient. In recent years this dream of a new era in contact lens fitting has faded. #157 argues that the reason present corneal topographers have not succeeded in revolutionizing contact lens fitting is that their coverage of the corneal surface, less than 9 mm, is too small. For rigid lens fitting one needs to span the full 11 mm of the cornea. For soft lens fitting (especially the new silicone hydrogel extended-wear lenses) one needs coverage of about 15 mm, including the near sclera. The peripheral regions of cornea and sclera play a dominant role in lens centration, motion and comfort, and the large intersubject variability of peripheral corneal shape makes it important for topographers to use new technology to achieve broader coverage. #157 provides the first analysis of the Euclid topographer, which has this broad coverage capability. We developed several image processing techniques to augment the Euclid's output in order to improve its accuracy.
- close section -
146. Klein, S.A. (2001). Problems with wavefront aberrations applied to refractive surgery: Developing standards. Ophthalmic Technologies XI, F. Manns, P.G. Soderberg, & A. Ho, Eds., Proc. SPIE 4245, 47-56.
143. Klein, S.A. & Garcia, D.D. (2000). Line of sight and alternative representations of aberrations of the eye. J. Refractive Surgery. 16, 630-635.
#146 is similar to #157 except that it is applied to the measurement of wavefront aberrations rather than to corneal topography. At the 1999 Optical Society's Vision Science and Its Applications (VSIA) meeting in Santa Fe a taskforce was organized to develop standards for specifying wavefront aberrations. They issued an informal (not ANSI sponsored) report in 2000. This is an important effort, especially given that a large number of people are receiving refractive surgery in which the ablation pattern is specified by wavefront aberration measurements.
The first part of #146 is a critique of the OSA taskforce report. The taskforce had three subgroups: 1) Standardize the reference origin and reference axis direction. The taskforce recommended, appropriately, that the line of sight be used as the reference axis. I point out, however, that the official definition of the line of sight as the ray going through the center of the pupil is an inappropriate choice. I demonstrate that a better choice would be the average of the rays going through a small (3 mm diameter) area around the pupil center (the Zernike small-pupil prism term); a numerical sketch follows this paragraph. 2) Standardize the Zernike notation. The taskforce emphasized the Zernike expansion. I provide an example showing that the Zernike representation is inadequate for capturing the consequences of refractive surgery, and I argue that shape descriptors specific to refractive surgery are needed to augment the Zernike representation. 3) Develop test shapes for calibrating wavefront systems. Here I argue for the need to add a test shape that mimics the type of aberrations found after refractive surgery. The second part of #146 is devoted to night vision problems caused by the transition zone following refractive surgery. For example, it is shocking that the vision community has not yet developed a satisfactory test for assessing night vision when the pupils are naturally large; I discuss the characteristics of such a future test. In the last part I present a brief overview of new ways to display the aberrations, a topic presented in much greater detail in #143.
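A minimal numerical sketch of the averaging idea from item 1. The coma-like wavefront and its parameters are hypothetical; only the small-area-average definition comes from the article:

```python
import numpy as np

def average_prism(grad_w, pupil_xy, radius=1.5, n=201):
    """Average wavefront slope (prism) over a small disk around the pupil
    center; radius = 1.5 mm corresponds to the 3 mm diameter area.
    grad_w: function (x, y) -> (dW/dx, dW/dy)."""
    xs = np.linspace(-radius, radius, n)
    x, y = np.meshgrid(xs, xs)
    inside = x**2 + y**2 <= radius**2
    gx, gy = grad_w(x + pupil_xy[0], y + pupil_xy[1])
    return gx[inside].mean(), gy[inside].mean()

# Hypothetical coma-like aberration: W = a * (x**2 + y**2) * x
a = 0.01
grad = lambda x, y: (a * (3 * x**2 + y**2), 2 * a * x * y)

# The center ray has zero slope, but the small-area average does not,
# so the two definitions pick different reference axes.
print(average_prism(grad, pupil_xy=(0.0, 0.0)))
```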
#143 introduces four new methods for representing the aberrations of the eye. Presently aberrations are represented by the wavefront height, the point spread function or the modulation transfer function, with the latter two losing the connection with pupil location. #143 illustrates two new representations based on wavefront slope and two based on wavefront curvature. A model cornea with a small keratoconic bump near the center is used to demonstrate the various representations. This model cornea was chosen to provide a critique of the line of sight standard chosen by the OSA taskforce discussed in #146. It is gratifying to me that the leading manufacturer of wavefront instruments (Wavefront Sciences of Albuquerque) is implementing all of the new representations. I am looking forward to seeing them in a future version of their software. Too bad I didn't patent the methods for making those pretty pictures.
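A sketch of how slope- and curvature-based displays can be derived from a sampled wavefront height map. The defocus-plus-bump wavefront below is invented, loosely mimicking the keratoconic model cornea; the article's actual representations are more refined:

```python
import numpy as np

xs = np.linspace(-3, 3, 301)        # pupil coordinates, mm
x, y = np.meshgrid(xs, xs)
dx = xs[1] - xs[0]

# Hypothetical wavefront: defocus plus a small decentered bump.
W = 0.05 * (x**2 + y**2) + 0.2 * np.exp(-((x - 0.5)**2 + y**2) / 0.1)

# Slope representations: the two components of the wavefront gradient.
Wy, Wx = np.gradient(W, dx)
slope_magnitude = np.hypot(Wx, Wy)

# Curvature representations: second derivatives of the height map
# (mean curvature in the small-slope approximation).
Wyy, _ = np.gradient(Wy, dx)
_, Wxx = np.gradient(Wx, dx)
mean_curvature = 0.5 * (Wxx + Wyy)
```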
- close section -
Visual Evoked Potentials
My most ambitious project involves my desire and commitment to developing a methodology for using visual evoked scalp potentials (VEPs) to localize sources of brain activity. My Los Alamos grant extends the techniques to magnetoencephalography (MEG). A successful analysis method would provide exquisite temporal resolution of brain activity that would mesh well with the excellent spatial resolution provided by fMRI. A major problem facing all previous attempts at VEP and MEG source localization is that when sources are close together, as they are in visual cortex, the estimated time functions can be incorrect (the 'rotation' problem). We are developing methods for overcoming this problem. Our novel approach is to use not only multiple electrodes, but also a multifocal array of simultaneously modulated tiny stimulus patches (60 in the present studies). We use the expected retinotopic continuity of the time functions, locations and magnitudes of the sources across cortex to remove ambiguities in the source localization.
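The rotation problem can be illustrated with a toy simulation: when the lead-field columns of two nearby sources are nearly collinear, a least-squares inverse returns mixtures of the true source time functions. This is a sketch of the difficulty, assuming an invented two-source forward model, not our actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: 32 electrodes, 2 nearby cortical sources, 100 time points.
# Nearly collinear lead-field columns stand in for sources close together in V1.
base = rng.normal(size=(32, 1))
L = np.hstack([base + 0.02 * rng.normal(size=(32, 1)),
               base - 0.02 * rng.normal(size=(32, 1))])

t = np.linspace(0.0, 1.0, 100)
S_true = np.vstack([np.sin(2 * np.pi * 3 * t),     # source 1 time function
                    np.sin(2 * np.pi * 5 * t)])    # source 2 time function

B = L @ S_true + 0.1 * rng.normal(size=(32, 100))  # simulated scalp recordings

# Least-squares inverse: ill-conditioned, so the recovered time functions
# are noisy mixtures ('rotations') of the true ones.
S_est, *_ = np.linalg.lstsq(L, B, rcond=None)
print(np.corrcoef(S_true[0], S_est[0])[0, 1])      # typically well below 1
```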
145. Slotnick, S.D., Klein, S.A., Carney, T. & Sutter, E. (2001). Electrophysiological estimate of human cortical magnification. Clin Neurophysiology, 112: 1349-1356.
159. Slotnick, S.D., Hopfinger, J.B., Klein, S.A. & Sutter, E.E. (2002). Darkness beyond the light: attentional inhibition surrounding the classic spotlight. Neuroreport, 13, 773-778.
#145 is our latest study on human cortical magnification. See Beard, Levi & Klein (1997) for a summary of previous work, including a critique of fMRI attempts to estimate cortical magnification in humans. Foveal magnification is difficult to assess with fMRI because the fovea is difficult to isolate. On the other hand, it is easy to isolate the foveal evoked potential response to tiny stimuli because of the large signal/noise ratio of evoked potentials. We measured foveal and peripheral magnification by two methods based on dipole source localization of V1 activity: 1) based on the spacing of the dipole cortical sources, and 2) based on the assumption that the magnitudes of the dipole sources are proportional to the cortical area stimulated. Our values for peripheral magnification agree with previous estimates. Our values for foveal magnification agree with psychophysical estimates and are more stable than fMRI estimates. This agreement provides an important validation of both our electrophysiological and our psychophysical approaches.
#159 shows how our techniques can be applied to understanding high level attention effects. During one group of runs of the multifocal stimulation, the subject attended to a task in the right visual field. During separate runs the subject attended to a mirror image task in the left visual field. We compared the magnitudes of the V1 sources in the two cases. We found that the responses were increased in a region that extended from the attended region almost to the fovea. Surrounding that region of increased response there was a region of inhibited response. Recent psychophysical findings by others are in agreement with these results. As we continue our development of isolating the responses from different visual areas, this type of attentional manipulation should allow improved localization of the origins of attentional effects.
- close section -
Higher Order Effects
The perceived time of a mental event has a long and controversial history. I have recently become interested in two aspects of this topic: the flash-lag effect, currently a hot topic among vision scientists, and the temporal anomalies found by Benjamin Libet, which remain a hot topic among philosophers.
155. Baldo, M.V., Kihara, A.H., Namba, J. & Klein, S.A. (2002). Evidence for an attentional component of the perceptual misalignment between moving and flashing stimuli. Perception, 31, 17-30.
160. Klein, S.A. (2002). Libet's temporal anomalies: A reassessment of the data. Consciousness and Cognition. 11, 198-214.
161. Klein, S.A. (2002). Libet's research on the timing of conscious intention to act: A commentary. Consciousness and Cognition. 11, 273-279.
162. Klein, S.A. (2002). Libet's timing of mental events: Commentary on the commentaries. Consciousness and Cognition. 11, 326-333.
#155 offers our latest thinking and data on the flash-lag effect: the finding that when an observer is asked to localize a moving object at the time of a flash, the object is localized ahead of where it actually was. There are multiple explanations for this effect. Several years ago Marcus Baldo and I proposed an attentional component, and we have recently carried out experiments to provide further evidence for our hypothesis. In this paper we also comment on alternative, not mutually exclusive, hypotheses. The flash-lag effect is important because it has inspired novel experiments useful for probing the multitude of factors controlling the perceived timing of events.
Articles 160-162 were written for a special issue of Consciousness and Cognition: #160 was a target article, #161 was a commentary on several other target articles, and #162 was a commentary on a number of the commentaries. In order to minimize overlap, the articles focused on different aspects of Libet's work. #160 is about Libet's data on the perceived timing of cortical and thalamic stimulation. #161 is about the timing of an act of volition relative to the associated brain waves; the flash-lag effect may play an important role in the perceived temporal judgments being discussed. #162 covers several topics not treated in the first two articles.
#160 shows how standard psychophysical ways of analyzing data can make a useful contribution to philosophical discussions. Libet's experiments measured the perceived asynchrony of skin stimulation vs. direct brain stimulation during epilepsy operations. The methodology was a rating-scale method of constant stimuli, precisely the method that Dennis Levi and I have been using for twenty years. Libet's original data were presented in tables, a presentation that made the results difficult to interpret. I carried out a signal detection analysis of the data and plotted the results as psychometric functions. I showed that the psychometric functions were very shallow and that the subjects' criteria separating the three response categories (skin first, about equal, skin last) were more than two hundred msec apart. These results showed that Libet's raw data were highly variable and could easily be explained by high-level response bias effects and memory shifts rather than the early retrograde timing shifts proposed by Penrose and others. Much of the commotion generated by Libet's data could have been eliminated had a careful analysis been done twenty years ago.
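The style of the reanalysis can be sketched as follows: treat the category proportions at each asynchrony as points on cumulative-Gaussian psychometric functions, then read off the slope and the criterion separation. The numbers below are invented for illustration, not Libet's data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Invented rating data: proportion of 'skin first' responses at each
# skin-minus-brain asynchrony (msec).
asynchrony = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])
p_skin_first = np.array([0.15, 0.25, 0.45, 0.60, 0.75])

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf((x - mu) / sigma)

(mu, sigma), _ = curve_fit(cum_gauss, asynchrony, p_skin_first, p0=(0.0, 200.0))
print(mu, sigma)   # a large sigma means a very shallow psychometric function

# Fitting the 'skin last' category boundary the same way and differencing the
# two mu values gives the separation between the subject's criteria.
```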
#161 is about Libet's experiments in which subjects estimated the time at which they became aware of their intent to make a voluntary hand movement. The timing of intent is important because it was found to come after a motor-planning evoked scalp potential (the readiness potential, RP), in apparent violation of free will. The Trevena & Miller target article shifts the debate to a subsequent lateralized readiness potential (LRP) that is supposed to be more relevant for decision making; this lateralized potential comes slightly after the judged intent. The first part of my article concerns a methodological issue: how the synchrony judgement was made. The subject's temporal judgement was based on a report of the location of a moving spot of light at the moment of awareness of the intent to move the hand. The target articles by Pockett and Gomes discuss a bias of the time estimate due to the visual processing time needed for the clock stimulus to reach the brain. This delay would place the true "intent" judgement at an earlier time and would avoid any free will violation. In my article I discuss how the flash-lag effect could compensate for the delay by an amount that would restore Libet's temporal order (but still not cause a problem for free will). The attention-shifting hypothesis for the flash-lag effect proposed by Baldo and me is applicable to Libet's situation because subjects need to shift their attention from the feeling of intent to the moving clock hand.
Part 2 of #161 and the second half of #162 shift the discussion to the philosophical question of whether Libet's experiments are relevant to free will. This has been the most controversial aspect of his research and is an aspect not discussed in any of the target articles. Libet claims that free will exists and that it is not compatible with standard neuroscience, in opposition to the views of most modern scientists. I discuss the strong, scientifically mainstream, compatibilist position by presenting Roger Sperry's emergent top-down approach. However, I also side with John Searle who, in agreement with Libet, finds there is a problem with this standard neuroscience approach. For those interested in these philosophical issues, the last parts of these two articles show how one might be able to resolve these historically ongoing debates.
- close section -
One of my serious areas of interest is psychophysical methodology. One of the basic assumptions of our field is that signal detection theory, based on Gaussian noise, provides the solid formalism for modeling the response of the system. Memory research raises questions regarding this assumption, since there are data that seem to be compatible with high threshold theory rather than signal detection theory. The specific experiments involve a double judgment task in which subjects must respond not only to whether a word was new or old, but must also discriminate the source of a previously heard word. This type of research is relevant to issues of conscious vs. unconscious memories, but in #142 we are concerned primarily with methodological issues.
142. Slotnick, S.D., Klein, S.A., Dodson, C. & Shimamura, A.P. (2000). An analysis of signal detection and threshold models of source memory. J. Experimental Psychology: Learning, Memory, and Cognition, 26, 1499-1517.
#142 is based on Scott Slotnick's Master's thesis. Most memory models use a multinomial analysis that is based on high threshold assumptions. Since high threshold theory is in direct conflict with signal detection theory, there have been recent experiments to directly compare the two. For old vs. new memory tasks, the Gaussian noise assumptions of signal detection theory have been validated and high threshold theory has been disconfirmed. However, for source memory tasks ("was the word to be memorized spoken by a male or a female?") the evidence has been against signal detection theory and in favor of high threshold theory. There are a number of reasons that could produce non-Gaussian distributions, so we decided to replicate the experiments. We introduced a new method for analyzing the data so that the two hypotheses could be carefully tested. We found that the source memory data were in full agreement with signal detection theory and not compatible with high threshold theory.
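The contrast between the two theories can be sketched by the ROC shapes they predict: equal-variance Gaussian signal detection theory predicts an ROC that is linear in z-coordinates (curved in probability coordinates), while high threshold theory predicts a straight line in probability coordinates. A minimal sketch with illustrative parameter values (our actual analysis method is the one described in #142, not this toy):

```python
import numpy as np
from scipy.stats import norm

# Signal detection theory (equal-variance Gaussian): sweep the criterion.
criteria = np.linspace(-2.0, 2.0, 41)
d_prime = 1.0
hits_sdt = norm.sf(criteria - d_prime)   # curved ROC in probability coordinates
fas_sdt = norm.sf(criteria)

# High threshold theory: detect with probability p_detect, otherwise guess.
p_detect = 0.4
guess_rate = np.linspace(0.0, 1.0, 41)
hits_ht = p_detect + (1 - p_detect) * guess_rate   # straight-line ROC from
fas_ht = guess_rate                                # (0, p_detect) to (1, 1)
```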
- close section -