Bartlett Mel, University of Southern California
A key challenge in reverse engineering the visual cortex is to understand how the dendrites of pyramidal neurons integrate their excitatory and inhibitory inputs over (dendritic) space and time. To know the nature of these intradendritic computations is to know better what role an individual pyramidal neuron, embedded within the visual cortical circuit, may play in establishing its own nonlinear receptive field properties. Recent experimental and modeling studies suggest that the thin basal and apical oblique branches of pyramidal neurons may behave like separately thresholded integrative subunits, causing the perisomatic region of the cell to act like a 2-layer "neural network" with sigmoidal "hidden" units. As a corollary to this, inhibitory inputs to the thin basal and apical oblique dendrites can divisively normalize neuronal firing rates. Based on this biophysical mechanism, it will be shown how a simple feedforward circuit involving a single inhibitory interneuron could contribute to two apparently unrelated visual processing functions: (1) optimal cue combination, and (2) focal visual attention.
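The 2-layer picture can be sketched in a few lines of code. This is a toy illustration only: the sigmoid parameters and the exact divisive form of the inhibition are placeholder assumptions, not values from the talk.

```python
import math

def sigmoid(x, gain=1.0, thresh=1.0):
    """Sigmoidal subunit nonlinearity (gain and threshold are hypothetical)."""
    return 1.0 / (1.0 + math.exp(-gain * (x - thresh)))

def two_layer_response(branch_inputs, inhibition=0.0):
    """Perisomatic response modeled as a 2-layer network: each thin
    branch sums its synaptic drive and passes it through a sigmoid
    ("hidden" unit); branch outputs are then summed at the soma.
    Dendritic inhibition is assumed to act divisively on the sum."""
    subunit_outputs = [sigmoid(sum(syn)) for syn in branch_inputs]
    somatic_drive = sum(subunit_outputs)
    return somatic_drive / (1.0 + inhibition)
```

Under this sketch, doubling the inhibitory term halves the output rather than subtracting from it, which is the divisive-normalization signature described above.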
Ania Majewska, Neurobiology & Anatomy, University of Rochester
During a developmental critical period, visual activity shapes the structure and function of connections between neurons in the visual cortex. Although this process is highly dynamic, most of our knowledge about structural changes following manipulations of vision has been pieced together from studies carried out in culture and in excised or fixed tissue, where dynamic processes are inferred from static images compared across animals. This has been largely due to a lack of noninvasive in vivo imaging techniques that allow for the dynamic imaging of neuronal structure over time. I will discuss our recent studies using in vivo multiphoton imaging to track the structure of synaptic components over time. These studies have demonstrated that dendritic spines (elements of the postsynapse) exhibit visually driven structural plasticity that is specific to the visual critical period. Changes in spine morphology are initiated very rapidly following manipulation of vision, suggesting that structural plasticity may be intimately tied to changes in synaptic function. The postsynapse likely carries the majority of structural synaptic plasticity, as axonal terminals (elements of the presynapse) are less affected by visual experience than dendritic spines. These synaptic changes occur in the absence of changes in gross dendritic or axonal structure, suggesting that fine-scale changes in synaptic connectivity underlie rapid ocular dominance plasticity without an overall remodeling of the pre- and postsynaptic scaffold.
Geraint Rees, Institute of Cognitive Neuroscience, University College London
The simplicity and directness with which we have conscious experience of the world around us belies the complexity of the underlying neural mechanisms, which remain incompletely understood. This talk will review recent work from our laboratory in two key areas. First, we have been attempting to decode spatial patterns of activity in visual cortex in order to predict conscious and unconscious perception. Second, we have been studying in depth the neural correlates of fluctuations in perception during binocular rivalry. Taken together, our findings point to new and potentially exciting data concerning the neural correlates of human consciousness, and suggest novel possibilities for how functional MRI might be used to study both human consciousness and cognition more generally.
Bruno Averbeck, Brain & Cognitive Sciences, University of Rochester
Keith Schneider, Rochester Center for Brain Imaging, University of Rochester
In this talk I will describe my recent work investigating the retinotopic structure and functional properties of the human lateral geniculate nucleus (LGN) and superior colliculus (SC) using functional magnetic resonance imaging (fMRI). First, I will show that the retinotopic organization of the LGN and SC is similar to that in the macaque, but the fMRI measurements suggest that the horizontal meridian of the visual field is significantly overrepresented relative to the vertical meridian. Part of this bias can be explained by the distribution of retinal ganglion cells, but the remainder of the bias puzzled me for several years. New data I have acquired in Rochester show that stimuli presented to one hemifield can somewhat suppress the fMRI activation of mirrored stimuli in the opposite hemifield. In conjunction with the standard retinotopic mapping stimuli, this interhemispheric suppression can explain the remainder of the vertical meridian deficit. Second, I will show that the magnocellular (M) and parvocellular (P) regions of the LGN can be distinguished based upon their sensitivities to stimulus contrast. Although we did not attempt to distinguish the eye-specific layers of the LGN, we did show that activity in the LGN is modulated during a binocular rivalry task, and we can infer that activity in the eye-specific layers is suppressed. Third, I will show that the activity of the LGN and especially the SC can be modulated by spatial attention, independent of the feature to be tracked. In the LGN, both the M and P divisions were modulated by attention, with the M modulation being somewhat larger. Finally, I will discuss my current attempts to resolve the individual layers of the LGN using high-resolution fMRI in conjunction with a super-resolution technique I developed, in which inadvertent subject head motion during the course of an fMRI experiment can be utilized to improve the spatial resolution.
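The super-resolution idea, treating small head movements as known subpixel shifts so that multiple frames can be combined on a finer grid, can be illustrated with a naive 1-D shift-and-add sketch. This is my own illustration of the general principle; the actual fMRI reconstruction is considerably more involved.

```python
import numpy as np

def shift_and_add(frames, shifts, upsample=2):
    """Naive 1-D shift-and-add super-resolution sketch.
    frames: low-resolution 1-D signals sampled at integer positions;
    shifts: each frame's known subpixel offset (in the fMRI case these
    would be estimated from head motion).  Samples are binned onto a
    grid `upsample` times finer and averaged."""
    n = len(frames[0])
    hi = np.zeros(n * upsample)
    counts = np.zeros(n * upsample)
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            # position of this sample on the fine grid
            j = int(round((i + s) * upsample)) % (n * upsample)
            hi[j] += v
            counts[j] += 1
    return hi / np.maximum(counts, 1)
```

Two frames offset by half a voxel interleave onto the fine grid, recovering detail that neither frame carries alone.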
Greg DeAngelis, Washington University
Research in my laboratory focuses on the neural circuits responsible for perceiving the location of objects in 3D space and for computing the direction of one's self-motion through 3D space. I will provide an overview of three ongoing projects in the laboratory. First, I will describe reversible inactivation experiments that probe the roles of area MT in two depth discrimination tasks. I will show that MT contributes to coarse depth discrimination but not to fine discrimination. In addition, I will show that training animals to perform the fine depth discrimination task profoundly alters the contribution that MT makes to the coarse depth task, suggesting remarkable training-induced plasticity. Second, I will describe a set of ongoing experiments that test whether MT neurons are involved in computing depth from motion parallax, the relative motion of objects that frequently results from self-motion of the observer. I will show that MT neurons combine retinal motion information with extra-retinal signals to compute depth sign from motion parallax. These data provide the first clear evidence for a neural mechanism of depth perception that relies on motion parallax. Third, I will introduce a set of experiments that explore how visual motion (optic flow) is combined with vestibular signals to compute direction of heading. I will show that monkeys combine optic flow and vestibular signals near optimally in a heading discrimination task, and that a subset of neurons in area MSTd also integrate these two sensory inputs to achieve higher sensitivity under cue combination.
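"Near optimally" here refers to the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance) and the combined estimate is more precise than either cue alone. A minimal sketch of that generic rule (not the laboratory's analysis code):

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Maximum-likelihood combination of two independent Gaussian cues
    (e.g., optic flow and vestibular heading estimates): weights are
    proportional to inverse variances, and the combined variance is
    smaller than either single-cue variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var
```

The prediction of higher sensitivity under cue combination follows directly from the reduced combined variance.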
Jeansok Kim, Psychology, University of Washington, Seattle
Stress is a biologically significant factor that, by altering brain cell properties, can disturb cognitive processes such as learning and memory, and consequently limit the quality of human life. The hippocampus, as a part of a medial temporal lobe system necessary for the formation of stable declarative (or explicit) memory in humans and spatial (or relational) memory in rodents, is highly enriched with corticosteroid receptors and particularly susceptible to stress. In my talk, I will present a neural-endocrine model to explain how stress influences hippocampal-dependent as well as hippocampal-independent memory systems.
Carl Olson, Carnegie Mellon University
In numerous brain areas, neuronal activity varies according to the size of the reward for which a monkey is working. Reward-dependent activity has commonly been viewed as representing the value of the reward. Alternatively, however, it could reflect the monkey's degree of motivation. Anticipation of a more valued reward leads to stronger motivation as evidenced by measures of arousal, attention and intensity of motor output. We have distinguished between value-related and motivation-related processes in single-neuron recording studies of monkeys working to obtain rewards and avoid penalties. We have found that the nature of reward-dependent activity varies across areas in the frontal lobe. Neuronal activity in orbitofrontal cortex genuinely represents the value of the anticipated outcome. In contrast, neuronal activity in other frontal areas is determined by the monkey's degree of motivation. These findings cast light on the stages by which representations of goal value (in the limbic system) are transformed into the motivated pursuit of goals (in sensorimotor cortex).
Steven Feldon, Ophthalmology, University of Rochester
Objective: To develop and validate a computerized system to analyze Humphrey visual fields obtained from patients with non-arteritic anterior ischemic optic neuropathy (NAION) and enrolled in the Ischemic Optic Neuropathy Decompression Trial (IONDT).
Methods: We used visual fields from 189 non-IONDT eyes with NAION to develop the computerized classification system. Six neuro-ophthalmologists (the expert panel) drafted definitions for visual field pattern defects using 19 visual fields representing a range of pattern defect types. The expert panel then used 120 visual fields, classified using these definitions, to refine the rules, generating revised definitions for 13 visual field pattern defects and 3 levels of severity. These definitions were incorporated into a rule-based computerized classification system run on Excel® software. The computerized classification system was used to categorize visual field defects for an additional 95 NAION visual fields, and the expert panel was asked to classify the new fields independently and subsequently to indicate whether they agreed with the computer classification.
Results: Despite a set of agreed-upon rules, the panelists did not agree on classification among themselves, but a majority did agree with the computer classification for 93 of 95 visual fields. Conclusions: The IONDT developed a rule-based computerized system that consistently defines the pattern and severity of visual fields of NAION patients, essential for use in a research setting.
Julia Trommershäuser, Giessen University
I present research that combines theoretical and experimental methods to investigate visuo-motor strategies during the planning and execution of goal-directed movements under risk. In our experiments, we study human movement planning in environments where there are explicit gains and losses associated with the outcomes of actions and compare human performance to a model of optimal performance based on statistical decision theory. The model combines approaches from motor control and from statistical and Bayesian decision theory, and is based on the idea that goal-directed movements reflect a subject's choice under the constraints of the perceptual and motor system.
I will present several experimental tests of the model. In these experiments, subjects effectively acted so as to maximize gain, in good agreement with predictions of the model. Our experiments further demonstrate that subjects are able to modify their movement strategies to maximize expected gain in response to changes in response variability, reward, penalty, and object locations. Optimal performance can be disrupted by introducing uncertainty to the presentation of reward, penalty or object location.
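The core computation of the optimal-performance model can be sketched in a 1-D reduction: the movement endpoint is Gaussian around the aim point, and the optimal aim point maximizes expected gain over reward and penalty regions. All numbers below are illustrative, not the experimental configuration.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_gain(aim, sigma, reward_region, penalty_region, gain, loss):
    """Expected gain of aiming at `aim` when the endpoint is Gaussian
    with motor variability `sigma`; regions are (lo, hi) intervals on
    a 1-D target line (an illustrative reduction of the 2-D task)."""
    p_r = norm_cdf(reward_region[1], aim, sigma) - norm_cdf(reward_region[0], aim, sigma)
    p_p = norm_cdf(penalty_region[1], aim, sigma) - norm_cdf(penalty_region[0], aim, sigma)
    return gain * p_r + loss * p_p

def best_aim(sigma, reward_region, penalty_region, gain, loss, grid):
    """Grid search for the aim point maximizing expected gain."""
    return max(grid, key=lambda a: expected_gain(a, sigma, reward_region,
                                                 penalty_region, gain, loss))
```

With a large penalty adjacent to the reward region, the optimal aim point shifts away from the penalty, which is the qualitative pattern the subjects' endpoints exhibited.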
Xinying Cai, Arizona State University
This study investigated adaptive changes in the spike activity of cerebral cortical neurons during sensorimotor adaptation. We performed chronic multi-electrode recordings from the primary motor (M1) and somatosensory (S1) cortical areas of two monkeys during the animals' performance of a 3D reaching task with adaptation to predictable external force perturbation. The same population of cortical neurons was monitored concurrently over three weeks. These data allowed us to analyze the adaptation-related day-to-day modifications in the spike activity of each neuron individually and as a population. Recruitment of cells was observed in M1 during the adaptation process, reflected in the gradual, day-to-day increase of spontaneous spike activity as well as movement- and target-related modulation. A characteristic feature of the recruited cells is the gradual buildup of their spike activity preceding the perturbation onset, which differed substantially in its time course and target-related modulation from kinematic adaptive modifications and muscle activity, suggesting its relation to an internal model of the perturbation. The resulting pattern was retained for at least two days after the perturbations had been discontinued, but the buildup significantly decreased after the perturbations became randomly scattered across the trial series. A gradual attenuation of the response to the perturbation onset was observed in S1, similar in its time course to the above buildup in M1. The response quickly reappeared after the perturbations became random, suggesting that the attenuation was strongly related to the anticipation of a perturbation onset. The results are consistent with viewing M1 as a pool of functionally flexible processing units that can be dynamically recruited into a system controlling the performance of a given motor task and individually tuned as required for motor learning.
Dominic Barraclough, Center for Visual Science, University of Rochester
Uri Polat, Goldschleger Eye Research Institute, Tel Aviv University
David DiLoreto, Ophthalmology, University of Rochester
David Fitzpatrick, Duke University
The pioneering studies of Hubel and Wiesel provided the foundation for understanding the radial and tangential organization of primary visual cortex. Neurons displaced along the radial axis (perpendicular to the cortical surface) form columns sharing similar response properties, such as receptive field location, preference for edge orientation, and relative effectiveness of the input from each eye (ocular dominance). In the tangential dimension (in the plane of the cortical surface), adjacent columns exhibit slightly different stimulus properties, forming orderly two-dimensional representations that have come to be known as cortical maps. The availability of optical imaging techniques has made it possible to visualize with greater precision the organization of these columnar maps and their topological relationships, and to probe the mechanisms that shape their development. In this seminar I will present recent work from my lab that challenges current views of map structure and of the role of visual experience in cortical map development.
Lin Gan, Ophthalmology, University of Rochester
The mammalian retina consists of six major neuronal cell types and one glial type, which are further classified into many subtypes based on their anatomical and functional differences. Nevertheless, how these subtypes arise remains largely unknown at the molecular level. Here, we demonstrate that the expression of Bhlhb5, an Olig-class basic helix-loop-helix (bHLH) transcription factor, starts at E11.5 in post-mitotic retinal precursors. Throughout retinogenesis and in the adult retina, Bhlhb5 expression is tightly associated with the genesis of selective Type 2 OFF-cone bipolar and GABAergic amacrine subtypes. Targeted deletion of Bhlhb5 results in a reduction of selective OFF-cone bipolar and GABAergic amacrine subtypes. Over-expression of Bhlhb5 in chick retinas promotes amacrine genesis. Furthermore, we show that Bhlhb5 is co-expressed with the amacrine-determining factor NeuroD and that, while loss of Bhlhb5 has no effect on the expression of NeuroD and other bHLH-class retinogenic genes, Bhlhb5 and NeuroD are up-regulated in Math5+ cell lineages of math5-null retinas. Our results imply that Bhlhb5 functions downstream of bHLH-class retinogenic genes to specify bipolar and amacrine subtypes.
Daniel Gajewski, Michigan State University
Due to the limits of visual acuity, the eyes must be directed toward regions of interest in a scene to resolve and encode the details. As a result, information is sampled from a scene via a sequence of alternating saccades and fixations. A program of research is presented that examines a number of inter-related questions from the active vision perspective, which gives the saccade-fixation sequence a central role in visual cognition. One set of questions concerns the role of short-term or working memory in vision. What information can be acquired from a single fixation? What information is maintained in the service of ongoing behavioral goals? The studies reported support the idea that objects are the unit of memory capacity, as well as a general preference for 'just in time' processing strategies that minimize the contents of visual working memory.
Gary Paige, Neurobiology & Anatomy, University of Rochester
The brain uses vision and audition to construct a spatial map of the external world. The integrity of this map requires that the two senses maintain spatial calibration. This poses two key challenges. 1) Vision and audition are encoded using different mechanisms and coordinate schemes; visual images are topographically mapped directly onto the retina, while auditory space must be constructed centrally based upon interaural and spectral cues from the two ears. Spatial congruence requires adaptive mechanisms that co-calibrate the two sensory modalities over time, given sufficient cross-sensory interaction. 2) The visual and auditory frames of reference shift relative to one another during eye movements. The brain must account for this to maintain spatial register and constancy, presumably by exploiting an eye-in-head signal. We will discuss long-term adaptive as well as more immediate processes that ensure concordance between visual and auditory space. The latter focuses on how eye position systematically and dynamically shifts the localization of auditory, but not visual, targets across a broad spatial field. This may reflect dynamic errors in how eye position signals are used to align visual and auditory space, or that eccentric gaze gradually shifts (adapts) our sense of 'straight-ahead.'
Miguel Eckstein, University of California at Santa Barbara
When artificial cues indicate the probable location of a target, the accuracy of perceptual and saccadic decisions is typically better on trials in which the target appears at the cued location than on trials in which it appears at the uncued location (i.e., the cueing effect). Similarly, in real scenes, perceptual performance is better when the target appears at an expected location (i.e., context effects; Chun, 2000). The most common interpretation of both of these results is that observers allocate limited attentional resources to a single cued/expected location rather than distribute them across many locations, and therefore enhance the quality of processing (signal-to-noise ratio) at the cued/expected location relative to an uncued/unexpected location (e.g., Hawkins et al., 1990; Downing et al., 1988). However, such analysis ignores the stochastic nature of neural processing. The theory of Bayesian decision making specifies the mathematically optimal method for weighting noisy visual information from the different cued and uncued locations based on prior knowledge about the validity of the cues. This differential weighting strategy will maximize performance across all types of trials (target at cued and uncued locations) and, as a byproduct, give rise to a cueing effect without resorting to limited attentional resources or enhancement of processing at the cued location. In this talk, I will test these two competing models, differential weighting (Bayesian priors) vs. limited resources, using human psychophysical performance and classification images, human saccades during target search in real scenes, and neuronal responses in the superior colliculus of awake behaving monkeys (collected at the Salk Institute, Krauzlis laboratory).
Together, the behavioral and neurophysiological data show that differential weighting of information based on prior expectations is an important (yet often ignored) neural mechanism used by primates to improve their visual search performance.
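The Bayesian-priors account can be made concrete with a toy two-location simulation: both locations receive identical processing quality (same noise), yet weighting each location's evidence by the cue validity alone produces a cueing effect. The parameters below are illustrative placeholders, not the models actually fitted in the talk.

```python
import math
import random

def bayesian_observer_trial(target_at_cued, validity=0.8, d=1.0,
                            sigma=1.0, rng=random):
    """One trial of a two-location localization task.  The target
    yields a response ~N(d, sigma) at its location and ~N(0, sigma)
    at the other; fidelity is EQUAL at both sites.  The observer
    weights each location's log-likelihood by the prior cue validity
    and picks the location with the larger posterior.  Returns True
    if the response was correct."""
    loc_means = [d, 0.0] if target_at_cued else [0.0, d]
    x = [rng.gauss(m, sigma) for m in loc_means]
    def loglik(xi):
        # log-likelihood ratio, target-present vs. -absent, at one site
        return (d * xi - d * d / 2.0) / sigma ** 2
    log_post_cued = math.log(validity) + loglik(x[0])
    log_post_uncued = math.log(1.0 - validity) + loglik(x[1])
    decided_cued = log_post_cued > log_post_uncued
    return decided_cued == target_at_cued
```

Simulating many trials shows accuracy higher on validly cued than invalidly cued trials, with no resource limit anywhere in the model, which is exactly the byproduct argument made above.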
Shawn Green, Brain and Cognitive Sciences, University of Rochester
Probably the most consistent finding in the literature on the effects of action video game experience on perceptual/perceptuo-motor skills is that video game players (VGPs) have faster reaction times than non-players. This effect is consistent across a wide range of tasks and reaction times (from simple Go/Nogo experiments with mean RTs less than 300 ms to visual search paradigms with 25+ distractors with mean RTs greater than 1.5 seconds). However, there are many mechanistic explanations for a reduction in RT (increased sensitivity to the stimulus, reduced criteria, faster motor execution) that are not easily teased apart by any paradigm used to date. Here we present data on a motion coherence paradigm (Newsome et al, 1989), which in combination with a model developed by Palmer et al (2005) allows us to more explicitly test the relative contribution of sensitivity, criteria, and motor execution on speed and accuracy. VGPs demonstrated an increase in sensitivity, a reduction in criteria, and an increase in the speed of motor execution. Pilot data from an auditory lateralization experiment and a relatively asensory mean estimation paradigm, both designed to extend the work beyond purely visual paradigms, will also be presented.
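The Palmer et al. (2005) framework is a proportional-rate diffusion model with closed-form predictions for mean RT and accuracy; a minimal sketch follows. Parameter values are illustrative, not fits to the VGP data.

```python
from math import exp, tanh

def diffusion_predictions(coherence, k=10.0, A=1.0, t_r=0.35):
    """Proportional-rate diffusion model (after Palmer, Huk & Shadlen,
    2005).  Drift rate grows linearly with motion coherence
    (rate = k * c, c > 0); A is the decision bound and t_r the
    non-decision (sensory + motor) residual time.  Returns
    (mean RT in seconds, probability correct).  In this framework,
    sensitivity changes map onto k, criterion changes onto A, and
    motor-execution changes onto t_r."""
    rate = k * coherence
    decision_time = (A / rate) * tanh(rate * A)
    p_correct = 1.0 / (1.0 + exp(-2.0 * A * rate))
    return t_r + decision_time, p_correct
```

Because the three parameters enter the RT and accuracy predictions differently, jointly fitting chronometric and psychometric functions can tease apart the contributions that a raw RT difference confounds.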
Osamu Masuda, Center for Visual Science, University of Rochester
The temporal responses of the chromatic channel in the peripheral visual field were investigated and compared with those of the luminance channel by measuring the temporal summation properties of paired flashes. The size of the flash was scaled according to the cortical magnification factor. The temporal response to a complementary chromatic pulse pair was biphasic and accelerated in the peripheral visual field, as was the response to a luminance pulse pair. In contrast, the temporal response to an isochromatic pulse pair was monophasic and decelerated in the periphery. The sensitivity of the biphasic channel at high temporal frequencies in the periphery was comparable to that at the fovea, whereas the sensitivity of the monophasic channel at low temporal frequencies in the periphery matched that at the fovea. A model is proposed in which a biphasic channel and a monophasic channel are arranged in parallel within the chromatic pathway. Similar results were obtained with pairs of flashed spatial Gabor patches whose spatial frequency was scaled according to the cortical magnification factor. The inhibitory phase of the biphasic channel was degraded with increasing spatial frequency of the Gabor patch. The properties of the biphasic channel were consistent with the double-duty hypothesis, whereas those of the monophasic channel were consistent with the two-channel hypothesis. The biphasic channel might correspond to the parvocellular pathway, and the monophasic channel might reflect the properties of the koniocellular pathway.
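Scaling stimulus size by the cortical magnification factor is commonly done with the linear approximation M(E) ∝ 1/(1 + E/E2), so that the stimulus covers a constant cortical extent at every eccentricity. A one-line sketch of that scaling (the E2 value is a typical textbook figure for human V1, not necessarily the study's parameter):

```python
def m_scaled_size(foveal_size_deg, eccentricity_deg, E2=2.5):
    """Scale stimulus size with eccentricity so it covers a constant
    cortical extent, assuming the standard linear approximation to the
    cortical magnification factor, M(E) proportional to 1/(1 + E/E2).
    E2 ~ 2.5 deg is an illustrative value for human V1."""
    return foveal_size_deg * (1.0 + eccentricity_deg / E2)
```

At eccentricity E2 the stimulus doubles in size; far in the periphery it grows roughly linearly with eccentricity.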
Richard Libby, Ophthalmology, University of Rochester
Glaucoma is a complex group of diseases where many different genetic and environmental factors conspire to cause vision loss. While there are different causes of glaucoma, the ultimate reason for vision loss in all glaucomas is the death of retinal ganglion cells (RGCs), the output neurons of the retina. Therefore, glaucoma is a highly selective neurodegeneration of ganglion cells. Our lab focuses on this unifying characteristic of glaucoma, the death of the RGC. Specifically, we are interested in addressing two of the fundamental questions concerning vision loss in glaucoma: 1) how and why RGCs die in glaucoma and 2) why some RGCs are more susceptible to elevated intraocular pressure (the best known risk factor for glaucoma) than others. Our studies use mouse models of glaucoma, genomics analysis and cell biological techniques to address these questions. Answering these questions will identify potential therapeutic targets for the treatment of this and other optic neuropathies. Furthermore, this work will identify genetic susceptibility factors that could be important in determining individual patient prognosis and/or the most appropriate treatment paradigm for a given patient. In this talk I will present the progress we have made on these questions and discuss experiments planned in the near future.
Howard Federoff, University of Rochester
Transmissible spongiform encephalopathies (TSE) are a group of fatal neurodegenerative diseases that include scrapie and bovine spongiform encephalopathy (BSE) in animals, and Creutzfeldt-Jakob disease (CJD) and Gerstmann-Straussler-Scheinker syndrome (GSS) in humans. Scrapie is the prototypical prion disease, known to occur in sheep since the 18th century. Bovine spongiform encephalopathy, also known as "mad-cow disease", has only been described in recent decades and is widely believed to have arisen from the feeding of processed scrapie-infected sheep to cattle. Through a similar mechanism of transmission, BSE was conveyed to humans, creating variant CJD (vCJD).
All TSEs are caused by prions which are thought to be devoid of nucleic acid. The infectious prion agents, designated PrPsc, arise from a conformational change in the non-infectious normal cellular prion protein, designated PrPc. The mechanism of this conformational conversion is unknown, but it is believed that PrPsc, once present, can act as a template and facilitate infectious prion replication through direct PrPc/ PrPsc interaction. Currently, there are no known effective therapies for TSE and all forms are invariably fatal.
We undertook a strategy to prevent the conversion of PrPc to PrPsc by expressing PrPc-specific scFv antibodies from an AAV vector. We delivered AAV scFv vectors harboring five different antibody genes (four against PrPc and one control directed against phenobarbital) to the CNS prior to challenging mice peripherally (IP) with PrPsc. The onset of disease was measured by performance on a RotaRod behavioral apparatus: animals are placed on a stationary rod, and as the rod rotates, the velocity of the rod and the time at which an animal falls are recorded when the animal breaks an infrared beam. Differences in run time indicate differences in balance, coordination, and motor control between experimental and control groups of animals. Mice were evaluated monthly for five months post-PrPsc challenge and then weekly until performance decreased to zero. Incubation periods were also measured. Our data indicate that rAAV delivery of scFvs directed against PrPc can substantially alter the natural history of scrapie infection in mice, as evidenced by an extended incubation period and improved clinical response.
Aaron Cecala, Neurobiology and Anatomy, University of Rochester
Dan Gray, Optics, University of Rochester
Though single photoreceptors can be imaged in the primate retina, the transparent cells of the inner retina and the cell layers behind the receptors have proven much more difficult to resolve. By combining fluorescence imaging with adaptive optics, ganglion cells and retinal pigment epithelial (RPE) cells can be imaged in vivo in primate retina.
To image ganglion cells, the lateral geniculate nucleus (LGN) of a monkey was injected with rhodamine dextran or Alexa 594 dye, which are retrogradely transported to retinal ganglion cells (Rodieck and Watanabe, 1993). The monkey was sedated and in vivo images were acquired with a custom-built fluorescence adaptive optics scanning laser ophthalmoscope. An argon/krypton laser provided tunable excitation wavelengths from 476 to 676 nm for fluorescence imaging. Multiple frames were summed to increase the signal-to-noise ratio in the extremely dim fluorescence images. High-intensity reflectance images obtained at 830 nm were recorded simultaneously with the fluorescence images and were used to guide fluorescence image registration, since they shared the same effects of eye movements. Individual ganglion cell bodies, and occasionally their axons, can be resolved. Extended exposure of rhodamine-labeled ganglion cells to a small patch of bright light produced enhanced fluorescence ("fireworks") similar to that observed in vitro by Dacey et al. (2003).
By focusing deeper in the retina, retinal pigment epithelial cells were visualized by exciting lipofuscin, an intrinsic fluorophore that accumulates in the cell cytoplasm. This instrument can also produce high-contrast video of the smallest capillaries in high-resolution fluorescein angiography.
In vivo imaging of primate ganglion cells may be useful in a number of future applications. For example, the phototoxicity of rhodamine can be exploited to create spatially localized lesions in individual classes of ganglion cell, possibly allowing the visual function of these classes of cells to be revealed by psychophysical experiments. In vivo imaging of RPE cells can be used to characterize the topography of the normal and diseased RPE cell mosaic as well as the efficacy of therapies for retinal diseases that disrupt the RPE mosaic.
Nathan Rosecrans, Brain and Cognitive Sciences, University of Rochester
Previous work has shown that spontaneous activity in primary visual cortex possesses a spatio-temporal correlational structure that is only moderately altered by sensory stimulation. Recently, we have employed principal component analysis and self-organizing maps to identify fine-scale structure within spontaneous activity and to look for changes in that structure during visual stimulation. Our findings suggest that under both stimulus conditions, neural ensemble activity may be decomposed into two separate, but interacting, components. One component is high-magnitude synchronous neural firing that occurs across all recording sites. Embedded within this synchronous activity are traveling waves and fluctuating local regions of elevated neural firing. During sensory stimulation, these localized regions cease to move dynamically and become stabilized. This suggests a model of visuo-cortical function in which sensory signals interact with ongoing activity to stabilize patterns intrinsic to the cortical network, rather than network activity during sensory stimulation directly reflecting the structure of the input signal itself.
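A principal component analysis of multi-electrode activity of the kind described can be sketched with a singular value decomposition. This is a generic illustration, not the study's pipeline: with a dominant synchronous signal, the first component loads nearly uniformly on all channels and absorbs most of the variance.

```python
import numpy as np

def ensemble_components(X, n_components=2):
    """PCA of a (timepoints x channels) firing-rate matrix via SVD.
    Returns component time courses (scores), spatial patterns
    (loadings), and the fraction of variance each explains.  A
    globally synchronous component typically loads uniformly on all
    channels; later components capture localized structure."""
    Xc = X - X.mean(axis=0)                           # center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # component time courses
    loadings = Vt[:n_components]                      # spatial patterns
    explained = (S ** 2 / (S ** 2).sum())[:n_components]
    return scores, loadings, explained
```

On synthetic data built from one shared signal plus small independent noise, the first component explains nearly all the variance and its loadings are close to uniform across channels, matching the "synchronous firing across all recording sites" description.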
Krista Gigone, Brain & Cognitive Sciences, University of Rochester
Cory Hussar, Neurobiology & Anatomy, University of Rochester
Responses of neurons in the prefrontal cortex (PFC), believed to play a role in sensory working memory, have been shown to reflect the properties of stimuli that are used for task performance. Recent work in our lab has examined PFC activity during a task where monkeys compared the direction of two consecutive random-dot stimuli, sample and test, separated by a memory delay. We found that sample responses reflected the direction and coherence of motion, while test responses were modulated by the remembered sample direction. As the PFC has been implicated in executive control of behavior, we asked whether the motion responses were dependent upon the demands of the behavioral task. To gain insight into this question, we recorded from PFC neurons while monkeys performed the working memory task with identical stimuli but two alternative demands. In one block of trials they were required to discriminate direction, and in the other block, the speed of motion. Preliminary data indicate that response selectivity to motion stimuli can be modulated by the demands of the task during which the stimuli are presented. These results suggest that the activity of PFC neurons during working memory for motion can reflect both the available sensory information and the specific task demands.
Hyojung Seo, Brain & Cognitive Sciences, University of Rochester
Daniel Zaksas, Neurobiology and Anatomy, University of Rochester (Advisor: Pasternak)
Most visually guided behaviors require working memory, the brief, task-specific storage and manipulation of sensory information. Studies outlined here explored cortical circuitry that may underlie the ability to retain and manipulate information about visual motion. In all experiments, monkeys performed delayed match-to-sample tasks, in which they compared the directions of motion in two stimuli, sample and test, separated by a memory delay. We examined: 1) the nature of visual motion representation in memory; 2) neuronal activity during the task in two interconnected cortical regions: motion processing area MT and prefrontal cortex (PFC), commonly associated with sensory storage and cognitive control; 3) putative "top-down" motion signals in area MT during task performance.
1) The nature of visual motion representation in memory. In this psychophysical study we used a "memory masking" stimulus, introduced during the memory delay to characterize what is remembered. Results revealed that stored information included not only sample direction, but also its speed and size, implicating mechanisms that specialize in processing visual motion. This study also showed that the remembered direction is precisely localized in space and the scale of localization is consistent with sizes of neuronal receptive fields (RFs) in motion processing area MT.
2) Activity in MT and PFC during the memory for motion task. We found, for the first time, selective, behavior-dependent motion responses in PFC, their properties suggesting MT as a likely source. Neurons in both areas reflected motion information throughout the task, including the memory delay, suggesting a possible shared role in maintenance. Furthermore, PFC activity represented decisions that may be based on MT's comparison of sample and test directions.
3) Top-down signals in MT. Given functional connectivity of MT and PFC during behavior, we tested whether MT neurons manipulate information provided by high-level regions like PFC, but unavailable through direct visual stimulation. Indeed, we found that most MT neurons are activated by stimuli presented far from their RFs, and these responses could contribute to the comparison process.
Taken together, the data suggest that successful execution of the task involves a network of functionally linked areas that include MT and the PFC.
Leo Lui, Monash University
The concept of the receptive field is of central importance for understanding sensory processing. In the visual system, receptive fields have traditionally been regarded as spatially fixed "windows", through which single cells analyze parts of the visual scene. However, in recent years this concept has been challenged by many studies demonstrating that the extent of the excitatory receptive field of visual cortical cells changes according to the type of visual information presented. The study I will present describes a new facet of this problem, based on studies of the spatial summation properties of cells in the middle temporal area (MT). MT is considered the prime motion analysis center of the primate visual cortex, having hypothesized roles in the integration and segmentation of motion signals. Whilst its classical receptive fields can be clearly defined, the interactions between excitatory and inhibitory sub-regions, within and outside the classical receptive fields, have proven to be rather complex. The data I will present contribute to determining the role(s) this area may play in visual processing. We investigated the responses of single neurons in MT of anesthetized marmoset monkeys upon presentation of moving gratings of different sizes, with length and width varied independently. A minority of neurons (22%) responded best to the largest stimulus, with such responses being predominant in layer 5. The responses of most MT cells were maximal for gratings of specific dimensions, with stimuli that exceeded these optimal values revealing inhibition. The strength of inhibition along the length and width dimensions of the receptive field often depended on the other dimension. We identified a sub-group of neurons for which optimal lengths and widths could not be defined independently. These neurons typically had strong responses to stimuli which were either short and wide, or long and narrow.
Rather than forming a homogeneous and entirely distinct group, these cells occupy the upper end of a continuum of complex spatial summation response properties that characterizes the population of MT cells. These results demonstrate distinct patterns of spatial selectivity in MT, supporting the notion that neurons in this area can perform various roles in the grouping and segmentation of motion signals.
What mechanisms are involved in visual attention and where are they localized in the brain? I discuss how relating psychophysics to electrophysiology and neuroimaging has advanced our understanding of visual attention. Covert attention enhances performance on a variety of perceptual tasks carried out by early visual cortex. In this talk, I concentrate on the effect of attention on contrast sensitivity, and discuss evidence from psychophysical, electrophysiological, and neuroimaging studies indicating that attention increases contrast gain.
First, I illustrate how psychophysical studies allow us to probe the human visual system. Specifically, I discuss studies showing that attention enhances contrast sensitivity at the attended location and decreases it at unattended locations, as well as the underlying mechanisms, namely external noise reduction and signal enhancement. I show how attention affects not only performance but also appearance. Then, I relate these psychophysical findings to single-unit recording studies, which show that attention can reduce external noise by diminishing the influence of unattended stimuli and that it can also boost the signal by increasing the effective stimulus contrast. Lastly, I will discuss human neuroimaging studies showing attentional modulation of neural activity in early visual cortex. For instance, we have documented how exogenous attention improves performance and enhances the concomitant stimulus-evoked activity in early visual areas. These results provide evidence regarding the retinotopically-specific neural correlate for the effects of attention on early vision. By integrating psychophysical studies with fMRI, we can narrow the gap between single-unit physiology and human psychophysics, and advance our understanding of visual attention.
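The contrast-gain account can be sketched with a standard Naka-Rushton contrast response function (the parameter values below are illustrative assumptions, not fitted data): attention acts as if the effective stimulus contrast were increased, shifting the function leftward.

```python
# Hedged illustration of contrast gain: attention lowers the semi-saturation
# contrast (c50), boosting the response to the same physical contrast.
def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    """Contrast response function: response as a function of contrast c."""
    return r_max * c**n / (c**n + c50**n)

contrast = 0.3
unattended = naka_rushton(contrast)            # baseline c50 = 0.3
attended = naka_rushton(contrast, c50=0.15)    # attention halves c50 (assumed)
print(f"unattended: {unattended:.3f}, attended: {attended:.3f}")
```

With these assumed parameters, the same mid-range contrast evokes a substantially larger response under attention, while responses at saturation are unchanged, which is the signature distinguishing contrast gain from response gain.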
Jeff Beck, Brain & Cognitive Sciences, University of Rochester
Many experiments have shown that human behavior is nearly Bayes optimal in a variety of tasks. This implies that neural activity is capable of representing both the value and uncertainty of a stimulus, if not an entire probability distribution. Here, we argue that the observed variability in neural activity is ideally suited for the representation of uncertainty. Specifically, we note that Bayes' rule implies that a variable pattern of activity is, in fact, a natural, implicit representation of a probability distribution over the value of a stimulus. Of course, it is by no means clear that the various operations which cortical circuits may perform are capable of manipulating, combining, or decoding such a representation efficiently or otherwise.
We address this issue through the construction of a Probabilistic Population Code, or PPC. This type of code consists of two elements: a neural operation, in which the activities of two or more populations of neurons are combined according to a biologically plausible rule, and an associated operation on the probability distributions of the variables represented in those population codes, which are obtained through an application of Bayes' rule. Specifically, we show that a neural network capable of performing both linear operations and divisive normalization can optimally implement a Kalman filter, but only when the variability of the network units is Poisson-like, that is, when tuning curves are observed and the covariance matrix is proportional to the mean activity. Since both tuning curves and Poisson-like variability are ubiquitous in cortex, we propose that a PPC may be a significant mechanism for optimal Bayesian computation.
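The core idea that a variable activity pattern implicitly encodes a probability distribution can be made concrete with a minimal sketch (all tuning parameters below are illustrative assumptions): for independent Poisson neurons with known tuning curves, one noisy population response yields, via Bayes' rule, a full posterior over the stimulus.

```python
import numpy as np

# Sketch: a population of independent Poisson neurons with Gaussian tuning
# curves; a single variable response pattern is decoded into a posterior.
rng = np.random.default_rng(1)

s_grid = np.linspace(-10, 10, 201)   # candidate stimulus values
pref = np.linspace(-10, 10, 32)      # preferred stimuli of 32 neurons (assumed)
gain, width = 20.0, 2.0              # assumed tuning parameters

def tuning(s):
    """Mean firing rates for stimulus s; Poisson variance equals the mean."""
    return gain * np.exp(-0.5 * ((pref - s) / width) ** 2)

s_true = 2.5
r = rng.poisson(tuning(s_true))      # one variable population response

# Log posterior under a flat prior:
#   log p(s | r) = sum_i [ r_i * log f_i(s) - f_i(s) ] + const
log_post = np.array([np.sum(r * np.log(tuning(s) + 1e-12) - tuning(s))
                     for s in s_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

s_hat = s_grid[np.argmax(post)]
print(f"true s = {s_true}, posterior peak = {s_hat:.2f}")
```

Note that the log posterior is linear in the spike counts, which is why linear combinations of such activity patterns correspond to products of the encoded distributions, the property the PPC framework exploits.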
Tirin Moore, Stanford University
It is well established that the strength of signals conveyed by neurons in visual cortex is not merely determined by the retinal image, but can vary from moment to moment depending on an animal's attention. Neurophysiological studies have identified correlates of selective attention within visual cortex of both human and nonhuman primates. However, the causal mechanism of these effects remains a mystery. Indirect evidence suggests that the behavioral modulation of visual signals reflects, at least in part, the preparation of appropriate movements, particularly saccadic eye movements, in response to visual stimuli. I will describe some recent neurophysiological experiments that directly address the role of oculomotor mechanisms in driving visual selection.
John O'Doherty, California Institute of Technology
Adaptive reward-based decision making in an uncertain environment requires the ability to make predictions about the expected future reward associated with particular sets of actions or stimuli. These predictions are usually learned through experience, and are used to guide action selection so that actions associated with greater expected reward are chosen more frequently over the course of learning. Reinforcement learning (RL) models provide a theoretical account of how such learning might be implemented. In such models, successive predictions of future reward are updated via a prediction error which signals discrepancies between expected and actual rewards. While RL models have proved very successful in accounting for much of human decision making, these models assume no higher order structure in the decision making problem, such as interdependencies between states, actions, time and ensuing rewards. In this talk I will describe evidence that during performance of a simple decision task with a rudimentary higher order structure, human subjects engage in state-based decision making in which knowledge of the underlying structure of the task is used to guide behavioral decisions rather than standard 'model-free' RL. Moreover, neural activity in human ventromedial prefrontal cortex as measured with fMRI during performance of this task is strikingly consistent with the state-based algorithm and not with standard RL. These results show that a region of the human brain, the ventromedial prefrontal cortex, employs an abstract predictive model of task structure in order to guide behavioral decision making. This capacity could underlie the ability of humans to predict the behavior of others in complex social transactions and economic games, and accounts more generally for the human capacity for abstract strategizing.
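The 'model-free' RL baseline against which the state-based account is compared can be sketched in a few lines (function names and the reward sequence are illustrative, not from the study): a value prediction is nudged toward each observed reward by a prediction error.

```python
# Minimal prediction-error update (Rescorla-Wagner style), the core of
# standard model-free RL: delta = actual reward minus expected reward.
def update_value(v, reward, alpha=0.1):
    """Move the value estimate toward the observed reward by rate alpha."""
    prediction_error = reward - v
    return v + alpha * prediction_error

v = 0.0
rewards = [1, 1, 0, 1, 1, 1, 0, 1]   # hypothetical reward sequence
for r in rewards:
    v = update_value(v, r)
print(f"learned value estimate: {v:.3f}")
```

Such a learner tracks only a running reward average per state or action; it has no representation of the task's higher order structure, which is precisely what the state-based algorithm described above adds.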
Zhong Lin Lu, Visiting Scholar from the University of Southern California
Signal detection theory (SDT) provides a precise language and general framework for analyzing decision making in the presence of uncertainty by postulating noisy internal representations of external stimuli and separating response criterion from signal-noise discriminability. The functional relationship between the internal representations and the physical characteristics of external stimuli, necessary for predicting human performance in various stimulus conditions, is however unspecified in the SDT framework. The issue has been addressed by the external noise approach. By constructing and specifying observer models with various intrinsic properties that do not vary with stimulus conditions, the approach provides a framework to compute the internal responses and predict performance in new stimulus conditions. Recently, the approach has been used to assay mechanisms of attention, perceptual learning, object recognition, adaptation, and various visual deficits.
A number of components, derived from both sensory psychology and physiology, have been used to construct observer models, including a perceptual template, non-linear transducer, additive noise, multiplicative noise, contrast-gain control, and decision uncertainty. To identify the optimal model for the range of empirical data, we have derived the properties of five observer models from the literature in relation to three external noise paradigms: the equivalent input noise method, the triple TvC method, and the double-pass procedure. The models are compared in light of existing data in the literature as well as their ability to fit the data in a new experiment. We conclude that the optimal model consists of five components: a perceptual template, a non-linear transducer function, internal additive noise, internal multiplicative noise, and a decision structure.
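The equivalent input noise logic behind a TvC function can be sketched for the simplest (linear-amplifier) observer model; the parameter values below are illustrative assumptions, and this omits the non-linear transducer and multiplicative noise of the full model described above.

```python
import numpy as np

# Sketch: at threshold, signal contrast must overcome the combined external
# and internal (equivalent) noise, so the threshold rises once external
# noise exceeds the internal noise floor.
def threshold_contrast(n_ext, n_eq=0.05, d_prime=1.0, beta=1.0):
    """TvC function for a simple linear-amplifier observer (assumed params)."""
    return (d_prime / beta) * np.sqrt(n_ext**2 + n_eq**2)

ext_noise = np.array([0.0, 0.01, 0.02, 0.05, 0.1, 0.2])
tvc = threshold_contrast(ext_noise)
for n, c in zip(ext_noise, tvc):
    print(f"external noise {n:.2f} -> threshold contrast {c:.3f}")
```

The resulting curve is flat at low external noise (internal noise dominates) and rises with unit log-log slope at high external noise, the classic signature this paradigm measures.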
In this talk, I will first present a review of the observer models. I will then describe a Bayesian adaptive procedure, the "qTvC" method, developed to efficiently estimate threshold versus external noise contrast (TvC) functions at three performance criterion levels. Finally, I will talk about attentional modulation of BOLD contrast response functions in early and mid visual cortical areas.
Anna Roe, Vanderbilt University
Visual scenes contain local and global cues. Often, these cues provide conflicting and competing signals about our environment. Local cues provide information closely associated with elemental features of the physical world, whereas global cues are associated with 'Gestalt' percepts that may override local cues. In this sense, we term local cues 'real' (physical) and global cues 'illusory' (created by the brain). To study the neural basis of this interaction between local and global percepts in the early visual pathway, we have used electrophysiological and optical imaging methods to study the processing of 'visual illusions' in Macaque monkey visual cortex (areas V1 and V2). The visual illusions we have used include illusory contours, the Cornsweet brightness illusion, and object depth illusions. We find that, common to both V1 and V2, representations of different object features (e.g. contour vs color & brightness) are packaged within different functional subcompartments within each area. However, a major difference between V1 and V2 lies in their respective responses to local vs global stimulus features. V1 appears to encode the local elemental properties, while V2 encodes higher order (more global) object features. Based on both physiological and psychophysical data, we hypothesize that these potentially conflicting cues in V1 and V2 are resolved via competition between feedforward and feedback interactions linking V1 and V2.
Jeong Woo Sohn, Brain & Cognitive Sciences, University of Rochester (Advisor: Lee)
Reward seeking is a basic feature common to animal and human behavior. Often, obtaining reward requires a sequence of actions rather than just one action. In addition, it is not always easy to determine which action leads to maximum reward at each moment. Therefore, a precise evaluation of the value expected for actions embedded in a sequence is important in reward-seeking behavior. However, how such value signals are evaluated and represented in the brain is not well known.
This thesis has investigated how information about value and motor sequence is combined in the brain. For this purpose, I developed a serial reaction time paradigm in which the serial position of a movement and the number of remaining movements (NRM) necessary to receive reward were dissociated in a motor sequence. Accuracy and reaction time during this task showed that the NRM, and therefore the temporally discounted value of reward, systematically influenced the animal's motivation to perform the task.
Neurophysiological recordings from the supplementary and presupplementary motor areas revealed two main findings. First, many neurons in both areas showed modulation of their activity related to the directions of previous and next movements in a sequence. Previous-movement encoding activity was sustained even after the initiation of the current movement, suggesting that this signal might reflect internal monitoring of movements rather than simple sensory feedback. Similarly, next-movement encoding activity began even before the initiation of the current movement, suggesting that medial frontal neurons were activated concurrently for multiple movements. Second, many neurons showed activity that increased or decreased monotonically with the number of remaining movements. In addition, this activity was combined multiplicatively with signals related to movement direction. These results suggest that the medial frontal cortex might play an important role in estimating the utility of individual actions in a motor sequence by combining motor parameters and values.
In conclusion, medial frontal cortex may encode multiple aspects of motor sequence, such as utilities of actions as well as retrospective and prospective information about successive movements, suggesting that this cortical area might be important for the selection of optimal movement sequences.