Klein Lab

University of California, Berkeley | Vision Science Program

Research Activities

 

Breakthroughs in science are quite often brought about by new technologies. We are fortunate to have several unique capabilities available:

1) Austin Roorda’s adaptive optics scanning laser ophthalmoscope is unique in its ability to stabilize the human retina, enabling one to do single cone psychophysics.

 

2) Our novel approach for handling closely spaced cortical sources is unique in enabling EEG with its msec temporal resolution to have the spatial resolution of fMRI.

 

3) Our advanced computational modeling approaches are able to identify neural mechanisms that underlie our psychophysical/perceptual findings.

An overview of the three methodologies is given below, and more details are provided through additional links. These overviews are especially helpful for graduate students and postdocs who want to become familiar with tools that may prove indispensable in their careers. To this end, the methodology overview is followed by a listing of sample research projects.

 

Three novel technologies:

1) Retinal circuitry.  Novel adaptive optics methods that enable single cone psychophysics. Austin Roorda's Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) is unique in being able to stabilize the human retina to seconds-of-arc accuracy. We have recently been awarded an NIH grant, together with Austin Roorda, to use this instrument for psychophysics that repetitively targets individual cones and their neighbors across days. This amazing instrument offers unique abilities to learn about complex retinal circuitry. The new capability is just months old, and it will produce highly visible research.

 

2) fMRI/EEG with msec temporal resolution. This technology disentangles complex early/late cortical processing in humans using EEG/MEG/MRI/fMRI. Our recent graduate student Justin Ales, Ph.D. and present student David Kim have developed a new methodology for identifying cortical activity in separate visual areas. It enables one to extend fMRI-type experiments to the msec time scale, a 1000-fold speedup in temporal resolution. We have recently been funded by NSF to apply these EEG methods to see what cortical changes are produced by perceptual learning. The challenge is to isolate the event-related activity in closely spaced regions of human cortex responding to visual stimulation: when the sources of brain waves are spatially close together, attributing the activity to individual brain areas had been an intractable problem. The new approach implemented by David Kim is able to overcome these problems for visual sources in occipital cortex. For lots of gory details go here.
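As a rough illustration of the general idea (not our actual analysis pipeline), the toy sketch below shows how per-area time courses can be recovered with a regularized least-squares inverse when a forward model maps cortical areas to scalp sensors. The forward matrix, waveforms, and noise levels are all simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_areas, n_times = 64, 3, 600   # e.g. a 600 ms epoch at 1 ms resolution

# Simulated forward model: each column is one visual area's scalp topography.
# Making two columns nearly parallel mimics closely spaced occipital sources.
G = rng.normal(size=(n_sensors, n_areas))
G[:, 1] = 0.9 * G[:, 0] + 0.1 * rng.normal(size=n_sensors)

# Simulated per-area source waveforms (area x time).
t = np.arange(n_times)
S_true = np.vstack([
    np.exp(-(t - 100) ** 2 / 400),           # early transient
    np.exp(-(t - 160) ** 2 / 900),           # slightly later, overlapping response
    np.sin(2 * np.pi * t / 200) * (t > 200),  # late oscillatory response
])

# Scalp data = forward projection plus sensor noise.
Y = G @ S_true + 0.5 * rng.normal(size=(n_sensors, n_times))

# Tikhonov-regularized (ridge) inverse: S_hat = (G'G + lam*I)^-1 G' Y.
lam = 1.0
S_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_areas), G.T @ Y)

for i in range(n_areas):
    r = np.corrcoef(S_true[i], S_hat[i])[0, 1]
    print(f"area {i}: correlation between true and recovered waveform = {r:.2f}")
```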

 

3) New modeling and analysis tools for connecting psychophysics to underlying mechanisms.  We have developed a battery of psychophysical tasks, new analysis tools and neural models for exploring perceptual quirks, brain plasticity and perceptual learning. In collaboration with Dennis Levi's group we plan to use video games and EEG source localization to uncover human processing and learning at the limits of performance (Prof. Levi's entry in the Guinness Book of World Records is a product of this sort of research). The merging of psychophysics with computational and theoretical neuroscience has been a major part of our research for the past 25 years.


Listing of individual research projects.
This is your chance to participate in ongoing projects as well as design new projects consistent with the laboratory members' research interests. We have two broad research areas, one focused on retinal processing and one on cortical processing. All our projects use human (and computer) subjects.

 

Area 1: Measuring retinal circuits using single cone psychophysics.

Supported by an NIH R21 grant


These experiments all depend on Roorda's adaptive optics (AO) setup, which has the unique capability of stabilizing the retina in a manner that enables one to repetitively stimulate individual cones.

 

1) Distortions of retinotopy - Using a three-dot vernier and a three-dot bisection task, one can measure local distortions of the map of the retina. This is especially interesting in amblyopia and near boundaries such as the blind spot. For many years we have held the Guinness record for hyperacuity thresholds of less than one second of arc. Maybe we can extend that record to regular acuity. In addition to having fun competing for new world records, we are interested in whether cortex knows the position of individual photoreceptors to hyperacuity accuracy.
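A minimal sketch of how such a hyperacuity threshold might be estimated: fit a cumulative Gaussian to the proportion of "offset right" judgments as a function of vernier offset and read off the 75% point. The offsets and response proportions here are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: vernier offsets (arcsec) and proportion of "offset right" responses.
offsets = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)
p_right = np.array([0.02, 0.15, 0.30, 0.50, 0.71, 0.86, 0.98])

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, offsets, p_right, p0=(0.0, 4.0))

# Threshold defined as the offset change from the 50% point to the 75% point.
threshold = sigma * norm.ppf(0.75)
print(f"bias = {mu:.2f} arcsec, threshold (75% point) = {threshold:.2f} arcsec")
```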

 

2)  Color perception and the role of surrounding cones - This is the specific topic of the NIH grant. Previous experiments have shown that stimulation of different middle-wavelength cones can give very different color percepts. Since those previous experiments were unable to repetitively stimulate individual cones and their neighbors, various hypotheses about color perception could not be tested. With the present stabilization capability we will now be able to resolve the controversies about which retinal processes contribute to the color that we see.

 

3)  Cone temporal dynamics including adaptation - Many experiments that previously could be done only on animals can now be done non-invasively on humans, with the advantage that the experiments can be much more subtle. One previously impossible experiment will explore the Westheimer effect, which measures increment thresholds as a function of spot size. It had been found that for very tiny spots thresholds increase dramatically. One possible explanation is that tiny eye movements jiggle the cones on which the light falls, creating an unstable visual percept. With super stabilization, steady stimulation of a single cone should produce a fading percept, so that any increment would be dramatically visible. This would inform us about the nature of cone and retinal adaptation. By manipulating the timing of stimulation of single cones and their neighbors we will be able to learn about the dynamics of retinal processing.
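One toy way to see how thresholds can depend on spot size is a difference-of-Gaussians (center-surround) pooling model: tiny spots are only weakly summed by the receptive-field center, while very large spots recruit the antagonistic surround. The sketch below takes the reciprocal of the pooled response to a disk as a threshold proxy; all parameters are illustrative, not fits to Westheimer's data.

```python
import numpy as np

# Toy difference-of-Gaussians receptive field, integrated over a uniform disk
# stimulus of varying radius.  All parameters are illustrative.
sigma_c, sigma_s = 1.0, 4.0      # center / surround widths (arbitrary units)
A_c, A_s = 1.0, 0.9              # center / surround integrated weights

def disk_response(radius):
    """Pooled DoG response to a uniform disk centered on the receptive field."""
    # Integral of a 2-D Gaussian of width sigma over a disk of the given radius.
    center = A_c * (1 - np.exp(-radius**2 / (2 * sigma_c**2)))
    surround = A_s * (1 - np.exp(-radius**2 / (2 * sigma_s**2)))
    return center - surround

for r in [0.25, 0.5, 1, 2, 4, 8, 16]:
    resp = disk_response(r)
    thresh = 1 / resp if resp > 0 else np.inf   # threshold proxy = 1 / sensitivity
    print(f"spot radius {r:5.2f}: relative threshold {thresh:6.2f}")
```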

 

4) Ganglion cell isolation - One of the most important challenges of these single cone experiments is to identify the nature of the pooling of cones into individual ganglion and geniculate cells. Individual cones feed into the centers of one or more ganglion cells and into the surrounds of others. By stimulating more than one cone simultaneously, it should be possible to map these local networks.
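One simple way to think about the pairwise-stimulation logic: if two cones feed the center of the same ganglion cell, the joint response should be close to the sum of the single-cone responses, whereas a cone feeding the surround pulls the joint response below that prediction. The weights below are a made-up toy network, not measured data.

```python
import numpy as np

# Toy pooling matrix: rows = ganglion cells, columns = cones.
# Positive weights = center input, negative = surround input (illustrative values).
W = np.array([
    [1.0,  0.8, -0.3],   # cell A: cones 0,1 in center, cone 2 in surround
    [-0.2, 0.9,  1.0],   # cell B: cones 1,2 in center, cone 0 in surround
])

def response(stim):
    """Half-rectified ganglion-cell responses to a cone stimulation pattern."""
    return np.maximum(W @ stim, 0)

single = [response(np.eye(3)[i]) for i in range(3)]

# Compare joint stimulation of a cone pair with the sum of single-cone responses.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    stim = np.zeros(3)
    stim[[i, j]] = 1
    print(f"cones {i}+{j}: joint {response(stim)}, linear prediction {single[i] + single[j]}")
```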

 

Area 2: Looking inside the brain while doing Perception and Action

> Clever Perception (like color metamers)

> EEG/MEG (doing fMRI with msec temporal resolution)

> Computational Neuroscience for modeling perception/action

> Eye movements for assessing the decision stage.

 

Supported by NSF and an NIH R01 grant


 

The tools available include advanced psychophysical methodologies, EEG recording, eye movement recording, and advanced computational modeling and analysis. The questions we tend to focus on are where and when in the brain events occur during diverse perceptual tasks. Here are some potential projects; many could be completed in relatively short time frames. All of the following projects can also be done with EEG localization methods. Visit grad student David Kim's page for details on some of the new techniques he is developing for disentangling the signals from early visual areas.

 

1) Illusions - The new element here is that by combining the psychophysics with EEG we will be able to help pin down the cortical mechanisms and areas responsible for the illusion.


a) Flash illusion in which two auditory tones can make a single flash look like two flashes. What are the physiological (EEG) underpinnings of this illusion? Does auditory input have early rapid access to primary visual areas?


b) DeValois illusion whereby a moving Gabor with a static Gaussian envelope appears shifted in position. This perceptual cue can be used for teasing out other effects (like crowding).
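A minimal sketch of this stimulus: a sinusoidal carrier drifting inside a stationary Gaussian window. The frame size, spatial frequency, and drift rate below are arbitrary placeholders, not the values we use in experiments.

```python
import numpy as np

def gabor_frame(frame, size=256, sf=0.05, drift=0.2, sigma=40):
    """One frame of a drifting Gabor: moving carrier, stationary Gaussian envelope.

    sf    : carrier spatial frequency (cycles per pixel)
    drift : carrier phase shift per frame (cycles)
    sigma : envelope standard deviation (pixels)
    """
    x = np.arange(size) - size / 2
    xx, yy = np.meshgrid(x, x)
    carrier = np.cos(2 * np.pi * (sf * xx - drift * frame))   # drifts horizontally
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))      # does not move
    return carrier * envelope

# Stack a short movie; only the carrier moves, the window stays put,
# yet the patch as a whole appears displaced in the drift direction.
movie = np.stack([gabor_frame(f) for f in range(30)])
print(movie.shape)   # (30, 256, 256)
```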

 

2) Perceptual learning - In our daily lives our brains are constantly adapting to new visual experiences and learning optimal solutions to new tasks. Understanding the mechanisms of neural plasticity is crucial for developing sound training paradigms. Several aspects of our research are:


a) Transfer. Learning a simple discrimination task at one visual location may or may not transfer to a different spatial location. What are the stimulus constraints that impede performance transfer across space and task?


b) Video games & EEG. We have recently begun a project involving video games and EEG. In recent years several studies have demonstrated a significant impact of action video game expertise on general spatial vision tests. This could become a significant vision training tool, given that users are generally motivated to perform well in gaming situations and are often willing to commit significant time to training. Does the EEG change after game playing, especially in response to brief targets in the periphery?


c) Cortical mechanisms. One important step is to identify cortical sites that participate in learning new tasks and their relative dynamics. Functional magnetic resonance imaging (fMRI) gives insight into the sites of perceptual learning, but technologies like electroencephalography (EEG) are needed to reveal the dynamic interplay, on the millisecond time scale, between cortical areas that change with learning.

 

3) Aging brain - With age comes degradation in many realms including vision. One significant area appears to be peripheral visual attention. This is especially true when multi-tasking or otherwise increasing the attentional load. How much does training improve performance and are changes long lasting?

 

4) Crowding - In peripheral vision, performance on simple tasks such as orientation discrimination and letter detection is often limited by the presence of nearby distractors, well beyond acuity limits. It is unclear whether this occurs in foveal vision, but now with adaptive optics we can address the question. This is a hot area of research and new effects continue to be discovered, some in our lab, that open up interesting further experiments. There is a weekly journal club on this issue with faculty and students from four labs participating.

 

5) Binocular Rivalry - Binocular interactions in general provide a powerful tool for localizing the dynamics of cortical processing. We have been wanting to do experiments in this area for several years. It is perfect for integrating our psychophysics and EEG capabilities.

 

6) Brain computer interface - combining eye movements and EEG analysis for single-trial analyses of visual processing. This is a continuation of our DARPA work. There are many labs doing eye movement research and many doing EEG research, but very few combining the two, and extremely few (if any?) doing it with the high-quality techniques used in our lab. The recent finding (not by us) that many of the EEG gamma oscillations can be attributed to microsaccades provides strong support for this new research direction. We are eager to extend our BCI research to EEG oscillations in addition to the P300 signal. We are also eager to do more in the area of frontal eye field saccadic control (read more on grad student Weston Pack's page).
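A schematic of the single-trial idea: epoch the EEG around each event, take the mean amplitude in a post-stimulus window as a feature, and classify target versus non-target trials with a linear discriminant. The data here are simulated; a real pipeline would add artifact rejection, channel selection, and eye-movement features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 8, 600      # simulated 600 ms epochs at 1 kHz
labels = rng.integers(0, 2, n_trials)            # 1 = target, 0 = non-target

# Simulated epochs: noise plus a P300-like bump (~300-500 ms) on target trials only.
epochs = rng.normal(size=(n_trials, n_channels, n_times))
t = np.arange(n_times)
p300 = np.exp(-(t - 400) ** 2 / (2 * 60 ** 2))
epochs[labels == 1] += 1.5 * p300

# Feature: mean amplitude per channel in the 300-500 ms window.
features = epochs[:, :, 300:500].mean(axis=2)

scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(f"single-trial classification accuracy: {scores.mean():.2f}")
```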

 

7) Exogenous vs endogenous attention - Pin down the timing of exogenously cued attention. This has been a controversial field and we have developed a methodology for clarifying the issues. This could be extended by using TMS in place of the visual cue (read more on Weston Pack's page).
