Visual Processing Laboratory

University of California, Berkeley | Vision Science Program

Research Activities

Localization in time and space of evoked EEG and MEG sources

Human neuroscience has long been hampered by a technical limitation: no reliable technology has been available for non-invasive imaging of human brain activity with both high temporal and high spatial resolution. We have recently developed new methods for obtaining and analyzing evoked EEG/MEG (electro- and magneto-encephalography) data and related fMRI data that promise to provide the needed spatio-temporal resolution. We first present a brief history of M/EEG source identification together with its past problems, and then describe our new methods.

 

HISTORY

Researchers have been trying to localize M/EEG sources for more than 40 years. Look at just about any issue of IEEE Transactions on Biomedical Engineering and you will find articles on this topic. There are several reasons for this sustained interest:

  1. How the multiplicity of cortical modules processes visual information is still an unsolved mystery; unraveling the code used by the brain remains one of the most challenging problems in science.
  2. There is general agreement that new research techniques are needed to explore the brain on the mm/msec scale. See the Methods section for details.
  3. The evoked M/EEG signal to noise ratio (SNR) is very high and very stable from day to day (see ref 84 [PDF] [commentary] for details). The evoked signal is the time-locked average response to a stimulus pulse.
  4. M/EEG methods work for vision, audition and somatosensory (skin) stimulation, so there is broad interest in these methods.

In order for M/EEG to become a widely used tool for analyzing brain function it is necessary to go from the sensor information (magnetic fields for MEG, electric potentials for EEG) to the identification of the multiple brain sources: their locations, orientations and time functions. There are multiple approaches to source identification, each with its own problems. This discussion focuses on the equivalent dipole approach, which assumes the sources are very local (an excellent assumption for the tiny stimuli that we use). A least squares search is used to find the dipoles that best fit the sensor data. This is called the inverse problem (given the sensor data, find the sources). The forward problem (given the source information, predict the sensor data) is assumed to be solved (but see item 2 in the next paragraph). Two major hurdles have stood in the way of solving the inverse problem.

  1. Even without noise, the search can get stuck in a local minimum that is not the global minimum. With noise present the situation is worse, since even the global minimum might not be the correct solution. Our path to overcoming the local minima problem began with our realization that the problem is most often a "rotation problem" that becomes dominant when two or more sources are closely spaced. We clarify the rotation problem in Eqs. 1-3 and part 1 of ref. 96 [PDF].
  2. Forward models that map the sources to the sensor activity are never perfect, because of the unknown anisotropic and inhomogeneous conductivity inside the head. Errors in the forward model (the "mis-specification problem") have two consequences. First, they shift source locations and orientations; these errors can make it difficult to validate the results against MRI information. Second, if the forward model is mis-specified then the sensor data from the leading sources will not be fit perfectly, and the residual error will contaminate the secondary sources being fit, severely distorting the results. See part 2 of ref. 96 [PDF] for examples of this problem.

We have developed methods for dealing with both the "rotation problem" and the "mis-specification problem" that are probably the two main problems facing high quality source identification based on both EEG and MEG. Shahram Dastmalchi's recent thesis (see the poster for some details) provides an overview of these new methods. We will provide a few details next, in Methods.


 

METHODS

Our ability to overcome the previous obstacles to source identification of M/EEG signals is based on two new methods:

  1. Enhanced data collection. We have developed a novel set of stimuli that allows us to collect a far larger dataset than has previously been possible, without increasing the data collection time.
  2. New source identification algorithms. Our new algorithms enable us to overcome the "rotation problem" and to minimize the "mis-specification" problem so that the location, orientation and time functions of multiple cortical sources are identified.

Data collection: The raw sensor data are given by V(s, p, t), where V is the scalp voltage for EEG or the magnetic field strength for MEG.

The sensor label, s, goes from 1 to 403. We have recently begun using the new M/EEG setup at UCSF with 275 MEG sensors plus 128 EEG electrodes, for a total of 403 simultaneously active sensors (visit ctf.com for details on the system we use). It is the most advanced instrument with the largest number of sensors presently available; in this country only the NIH (in Bethesda, MD) has a similar instrument.

The patch label, p, goes from 1 to 163, representing the separate stimulus micro-patches in the visual field that flicker simultaneously according to a pseudorandom binary sequence. You can click on the adjacent figure to see what a 60-patch stimulus looks like (we plan to use stimuli with more patches in the future).

 

[Movie: the dartboard stimulus]

The large number of patches is a new element that we bring to source identification. Getting 163 micro-patches to produce good signal to noise ratios in about one hour of data collection has been a major achievement. The 163 micro-patches come from five contiguous rings with 12 spokes (60 patches) arranged in a dartboard fashion, as shown in the adjacent figure. In addition to the 60 primary patches we obtain responses from the edges of adjacent patches (55 = 5 × 11 spoke edges plus 48 = 4 × 12 radial edges) for a total of 60 + 55 + 48 = 163 micro-patches. The sketch below verifies this bookkeeping.
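
A minimal Python sketch of the count (the ring/spoke labels are hypothetical; only the counts come from the description above):

```python
# Enumerate the 163 micro-patches: 5 rings x 12 spokes of primary patches,
# plus 11 spoke edges per ring and 12 radial edges per ring boundary,
# following the counts in the text.
N_RINGS, N_SPOKES = 5, 12

primary      = [("patch", r, s) for r in range(N_RINGS) for s in range(N_SPOKES)]
spoke_edges  = [("spoke edge", r, s) for r in range(N_RINGS) for s in range(N_SPOKES - 1)]
radial_edges = [("radial edge", r, s) for r in range(N_RINGS - 1) for s in range(N_SPOKES)]

micro_patches = primary + spoke_edges + radial_edges
assert len(micro_patches) == 60 + 55 + 48 == 163
```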

In the flickering stimulus above there are random appearances of black-white stripes and red-green stripes. After a full run consisting of 2^16 frames, the time sequence of each patch for each color is orthogonal to that of every other patch (a property of the m-sequence pseudorandom noise that is used to determine the state of each patch). This orthogonality makes it easy and quick to extract the linear response (1st order kernel) of a single patch and a single color from the response record. One can also extract nonlinear responses that depend on sequential states of a single patch, or on interactions between adjacent patches (higher order kernels).
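
The kernel extraction can be sketched in a few lines of Python. Everything here is invented for illustration (the LFSR taps, the per-patch lags, the kernels); the point is only that a single cross-correlation against the base m-sequence recovers each patch's first-order kernel at that patch's lag:

```python
import numpy as np

def m_sequence(taps=(8, 6, 5, 4), n_bits=8):
    """Fibonacci LFSR; these taps yield a maximal-length (255-frame) sequence."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])                  # output the oldest bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                 # XOR the tapped bits
        state = [fb] + state[:-1]              # shift the feedback bit in
    return np.asarray(seq) * 2.0 - 1.0         # map {0, 1} -> {-1, +1}

m = m_sequence()
L = len(m)                                     # 255 frames

# Each patch flickers with a lagged copy of the same m-sequence; the
# (hypothetical) lags are spaced so the kernels cannot collide.
lags = [0, 40]
kernels = [np.exp(-np.arange(20) / 5.0),           # patch 0, made up
           -0.5 * np.exp(-np.arange(20) / 8.0)]    # patch 1, made up

# Simulated response record: each patch's lagged sequence circularly
# convolved with its kernel, summed, plus noise.
rng = np.random.default_rng(0)
resp = sum(np.real(np.fft.ifft(np.fft.fft(np.roll(m, lag)) * np.fft.fft(k, L)))
           for lag, k in zip(lags, kernels))
resp += 0.05 * rng.standard_normal(L)

# One circular cross-correlation recovers all first-order kernels at once,
# each at its own lag (up to an autocorrelation bias of order 1/L).
xcorr = np.real(np.fft.ifft(np.fft.fft(resp) * np.conj(np.fft.fft(m)))) / L
k0_est, k1_est = xcorr[0:20], xcorr[40:60]
```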

[Figure: a screen capture from the linked VEP movie]

The adjacent figure shows the visual evoked potential V(s, p, t) for 46 electrodes and 48 patches (four rings of 12 patches per ring). For more details see Slotnick et al. (1999) [PDF]. The 48 colored circles represent the voltage across the scalp at one instant of time; red indicates positive voltage and blue indicates negative voltage. The full response from 0 to 200 msec can be seen by clicking on the picture. If only a single fixed dipole had been active, the pattern on the scalp would stay fixed, with only its magnitude changing (and possibly flipping sign). However, one can see that more than one dipole is present for almost every patch, since the patterns shift as a function of time.

The time index, t, goes from 1 to more than 1200. The large number of time samples comes from concatenating the temporal responses to multiple stimuli and to multiple nonlinearities. Each run typically produces a linear response (1st order kernel) plus two or more nonlinear responses (higher order kernels). In addition, we use at least 4 types of stimuli, chosen to produce different responses in different visual areas. Each response has about 400 msec of high SNR (signal to noise ratio) data. Since a 4 msec sampling interval is adequate for most purposes, each response contributes about 400/4 = 100 time samples, so the total number of time samples is more than 100 × 3 × 4 = 1200 (three kernels per run, four stimulus types). The advantage of having so many stimulus samples is that it increases the independence and diversity of the responses from the multiple visual areas that we seek to identify.

Putting these three factors together gives a dataset of 403 × 163 × 1200 ≈ 80,000,000 separate samples, thousands of times more than previous investigators have used for source localization. Our novel analysis makes use of this rich data.

 

Advances in Source Identification Algorithm: We analyze the data using the local dipole approximation:

 

V(s, p, t) = ∑_d E_d(s, p) T_d(t, p)

 

where the index d specifies the dipole label. For vision, d might go from 1 to 4 to specify the sources in the strongest responding retinotopically organized visual areas (such as V1, V2, V4 and MT). Non-retinotopic areas and areas with very large receptive fields would not give coherent responses to our very tiny (less than 7 × 7 mm²) simultaneously active patches. The functions T_d(t, p) are the time functions of the four sources; we often group patches of similar eccentricity, where we have shown that T_d(t) is independent of p. The sensor function E_d(s, p) is the forward solution. For example, E_1(s, p) depends on 5 parameters per patch, specifying the location (3 parameters) and orientation (2 parameters) of the first source (say V1) for each of the patches.
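
In code this model is a simple bilinear sum; a minimal sketch (dimensions from the text, contents random) looks like:

```python
import numpy as np

# E[d, s, p]: forward gain of dipole d at sensor s for patch p.
# T[d, t, p]: time function of dipole d for patch p.
n_d, n_s, n_p, n_t = 4, 403, 163, 1200
rng = np.random.default_rng(0)
E = rng.standard_normal((n_d, n_s, n_p))
T = rng.standard_normal((n_d, n_t, n_p))

# V(s, p, t) = sum over d of E_d(s, p) * T_d(t, p)
V = np.einsum('dsp,dtp->spt', E, T)
assert V.shape == (n_s, n_p, n_t)
```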

Our regression program has two parts: linear and nonlinear. In the linear part, the locations and orientations of all the dipoles are known from the preceding iteration, and a simple matrix inversion gives all the time functions. Next, in the nonlinear search, all the time functions are known (from the preceding step) and there are 4 dipoles (for four visual areas) × 5 parameters (location and orientation for each dipole) = 20 parameters per patch. The time functions are assumed to be common across a number of patches, so the only nonlinear search is over 20 parameters, which is not difficult or slow. We have developed a number of tricks for getting good initial conditions for the search. After the nonlinear search we go back to improving the time functions by linear regression, as already mentioned, and the iterations continue until convergence. Because of the rotation problem discussed next, the error surface contains long narrow ravines, so the search is a "ravine search" that requires special techniques.
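
Below is a hedged sketch of the alternating fit. The forward model here is a toy linear map from the 20 dipole parameters to sensor gains (a real head model is nonlinear in source location), and scipy's general-purpose minimizer stands in for the specialized ravine-search machinery:

```python
import numpy as np
from scipy.optimize import minimize

n_s, n_d, n_t = 64, 4, 300                  # sensors, dipoles, time samples
rng = np.random.default_rng(1)
basis = rng.standard_normal((n_s, n_d, 5))  # toy stand-in for a head model

def forward(theta):
    """Map 4 dipoles x 5 parameters (3 location + 2 orientation) to gains."""
    return np.einsum('sdk,dk->sd', basis, theta.reshape(n_d, 5))

# Synthetic sensor data from known dipole parameters and time functions.
theta_true = rng.standard_normal(n_d * 5)
T_true = rng.standard_normal((n_d, n_t))
V = forward(theta_true) @ T_true + 0.01 * rng.standard_normal((n_s, n_t))

theta = rng.standard_normal(n_d * 5)        # initial guess
for _ in range(10):
    # Linear step: dipole parameters fixed -> time functions by least squares.
    E = forward(theta)
    T, *_ = np.linalg.lstsq(E, V, rcond=None)
    # Nonlinear step: time functions fixed -> search the 20 dipole parameters.
    cost = lambda th, T=T: np.sum((forward(th) @ T - V) ** 2)
    theta = minimize(cost, theta, method='BFGS').x
```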

If some or all of the four dipoles are closely spaced, as is true in visual cortex because of the proximity of the multiple visual areas, then we face a major problem. We call it the "rotation problem", and it is common to many such search problems. It occurs because linear combinations of the source time functions, together with matched linear combinations of source orientations, give sensor predictions V(s, p, t) with very similar sums of squared errors (see ref. 96 [PDF] for details):

 

V(s, p, t) = ∑_d E_d(s, p) T_d(t, p) = ∑_d E′_d(s, p) T′_d(t, p)

 

where the primed quantities are linear combinations of the unprimed quantities. The rotation problem occurs whenever a linear combination of 2 or more dipoles is well-approximated by another dipole.
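
A short numeric check makes the ambiguity vivid: in the sketch below (dimensions from the text, contents random), any invertible 4 × 4 mixing matrix R leaves the predicted sensor data exactly unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((403, 4))    # forward gains: 403 sensors, 4 dipoles
T = rng.standard_normal((4, 1200))   # 4 source time functions
R = rng.standard_normal((4, 4))      # arbitrary mixing, invertible w.p. 1

E_prime, T_prime = E @ R, np.linalg.inv(R) @ T
assert np.allclose(E @ T, E_prime @ T_prime)   # identical fit to the data
```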

We have developed a number of techniques for dealing with this rotation problem. For example, because our large dataset contains the responses to a multiplicity of topographically organized patches we are able to find the unique rotation that not only gives an excellent fit to the sensor data, but also has appropriate continuity of the dipole locations and orientations across neighboring patches. For more details, see this poster.


 


Contact:

Stanley A. Klein, Ph.D.

Professor of Vision Science

360 Minor Hall #2020

School of Optometry

UC Berkeley, 94720-2020

Office: (510) 643-8670

Lab: (510) 643-7571

Fax: (510) 643-5109

sklein@berkeley.edu