Allison Sekuler, McMaster University
The "greying population" is the fastest growing group the developed world. Despite the importance of this group, we know relatively little, however, about how aging affects critical functions such as vision and neural processing. For a long time, it was assumed that once we passed a certain age, the brain was essentially fixed, and could only deteriorate. But recent research shows that although aging leads to declines in some abilities, others are spared and may even improve, and neural systems underlying visual processing may change substantially throughout our lifetimes. This lecture will discuss the trade-offs in visual and neural processing that occur with age, and provide evidence that we really can teach older brains new tricks.
Chris Curcio, University of Alabama at Birmingham
The eye's clear media have made visualization of living cells and tissues in health and disease a hallmark of ophthalmologic practice. New imaging modalities such as spectral domain optical coherence tomography (SD-OCT) and fundus autofluorescence are spreading rapidly in clinical use. In vivo histopathology of the retina in age-related macular degeneration (AMD), a major cause of vision loss in the elderly, raises the value of classical histopathology for image validation and benchmarking. This lecture will review conceptual and laboratory studies on AMD eyes, focusing on the retinal pigment epithelium and sub-RPE lesions, that can guide and challenge both image science and models of pathogenesis.
Kalanit Grill-Spector, Stanford University
The prevailing view of high-level visual cortex posits a few domain-specific modules for special categories such as faces, places, body parts, and words. Specifically, this view suggests a domain-specific module for faces in the fusiform gyrus (FFA) and one for body parts in extrastriate cortex (EBA). However, previous research shows high variability in the locations of these activations across individuals, as well as occasional reports of multiple activations (typically without a description of their organization). We asked whether this variability reflects inter-subject variability of high-level cortex or rather an inconsistent definition of multiple activations. Using higher-resolution fMRI measurements (1.5 × 1.5 × 3 mm) we examined the organization of face- and limb-selective activations in lateral and ventral occipito-temporal cortex to determine whether there is a consistent representation of activations across subjects. Our results reveal not one FFA but two face-selective activations in consistent anatomical locations on the fusiform gyrus, one on the mid fusiform sulcus (mFus) and one on the posterior fusiform (pFus), which are separated by a limb-selective activation on the occipito-temporal sulcus (OTS). In addition, these activations are arranged in a consistent spatial location relative to visual field maps hV4, VO1, and VO2 and are distinct from another face-selective activation on the inferior occipital gyrus (IOG). Similarly, on the lateral surface, we find not one EBA but three limb-selective activations arranged in a crescent surrounding hMT+. These activations occupy distinct anatomical locations (LOS/LOG, MTG, and ITG, respectively) and overlap distinct visual field maps. Our data suggest that category selectivity is an insufficient criterion to define a visual area.
Instead, we suggest a new framework for defining activations systematically in individual subjects using several criteria: (1) anatomical location, (2) relation to visual field maps, (3) consistent spatial relation to other activations, and (4) function.
In the second part of the talk I will describe experiments in which we examined whether the selectivity of these face- and limb-selective activations is affected by stimulus repetition. A robust phenomenon is that stimulus repetition reduces responses in high-level (but not early) visual regions; that is, repetition produces fMRI-adaptation (fMRI-A). I will show that fMRI-A in high-level visual cortex occurs for both preferred and nonpreferred stimuli and is modulated by the number of repetitions and the number of intervening stimuli, but not by response time. By examining the relation between repeated and nonrepeated responses to preferred and nonpreferred stimuli in a region, we show that fMRI-A follows a scaling principle for both immediate and long-lagged repetition in face- and limb-selective regions in lateral ventral temporal cortex. However, house-selective regions in medial ventral temporal cortex show scaling effects for immediate repetitions but sharpening effects for long-lagged repetitions. These results indicate different fMRI-A mechanisms across regions and time scales. I will end with a computational model linking fMRI-A to neural adaptation mechanisms.
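The scaling-versus-sharpening distinction can be made concrete with a toy sketch. All response values and suppression factors below are hypothetical illustrative numbers, not values from the experiments:

```python
# Toy tuning profile: responses of a region to its preferred stimulus
# (index 0) and progressively less-preferred stimuli (illustrative values).
initial = [1.0, 0.8, 0.5, 0.2]

def scale(responses, k=0.6):
    """Scaling model of fMRI-adaptation: all responses shrink by the same
    factor, so the repeated/nonrepeated ratio is constant across stimuli."""
    return [k * r for r in responses]

def sharpen(responses, k_pref=0.9, k_nonpref=0.4):
    """Sharpening model: nonpreferred responses are suppressed more than
    the preferred one, so selectivity increases with repetition."""
    best = max(responses)
    return [r * (k_pref if r == best else k_nonpref) for r in responses]

scaled = scale(initial)
sharpened = sharpen(initial)

# Diagnostic: the repeated/nonrepeated ratio per stimulus.
ratios_scaled = [s / r for s, r in zip(scaled, initial)]
ratios_sharp = [s / r for s, r in zip(sharpened, initial)]
```

Under scaling the ratio is identical for every stimulus; under sharpening the preferred stimulus retains a larger fraction of its response, so the region's selectivity increases.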
Takeo Watanabe, Boston University
Perceptual learning (PL) is defined as a long-term performance improvement on a perceptual task as a result of perceptual experience and is regarded as a manifestation of plasticity in the perceptual system (Yotsumoto, Watanabe & Sasaki, 2008, Neuron). Despite the prevalence of PL research, how PL occurs has yet to be fully clarified. We have conducted research concerning what determines PL. First, in spite of the prevailing dogma that PL occurs only for stimulus features to which voluntary attention is directed (task-relevant PL), we found that PL also occurs for features that are task-irrelevant and subthreshold (Watanabe, Nanez & Sasaki, 2001, Nature; Watanabe et al., 2002, Nature Neuroscience) (task-irrelevant PL), and further that learning of those task-irrelevant features depends upon the subjects' engagement in the main task (Seitz & Watanabe, 2003, Nature; Seitz et al., 2005, Current Biology). Furthermore, we have found that pairing task-irrelevant features with rewards is key to task-irrelevant PL (Seitz, Kim & Watanabe, 2009, Neuron). These results suggest that PL occurs as a result of interactions between reinforcement signals and bottom-up stimulus signals (Seitz & Watanabe, 2005, TICS). At the same time, results of an fMRI study indicate that while the lateral prefrontal cortex (LPFC) detects and suppresses suprathreshold signals, it fails to detect, and thus to suppress, subthreshold signals. This leads to the paradoxical effect that a signal that is below, but close to, one's discrimination threshold ends up being stronger than suprathreshold signals (Tsushima, Sasaki & Watanabe, 2006, Science). We have confirmed this mechanism by showing that task-irrelevant learning occurs only when a presented feature is below, but close to, the threshold (Seitz, Tsushima & Watanabe, 2008, Current Biology).
From all of these results, we have concluded that while attention enhances task-relevant feature signals and suppresses task-irrelevant feature signals, leading only to task-relevant PL, reinforcement signals enhance both task-relevant and task-irrelevant feature signals, leading to both task-relevant and task-irrelevant PL (Sasaki, Nanez & Watanabe, 2010, Nature Reviews Neuroscience).
Laurence T. Maloney, New York University
In classical decision under risk, decision makers choose between lotteries. They typically overweight small probabilities and underweight large ones. I'll first describe an experiment (Wu, Delgado & Maloney, 2009) in which we asked subjects to choose between classical lotteries and also to choose between motor tasks that were formally equivalent to these lotteries. In the motor lotteries, the element of uncertainty resulted from the participant's own motor error in a highly practiced reaching task. Overall, the same participants exhibited the typical pattern of probability distortion in the classical task but the opposite pattern in the motor task, underweighting small probabilities and overweighting large ones. Ungemach, Chater & Stewart (2009) report a similar reversal in distortion when probability information is based on sampling. Similar patterns of distortion are found in visual frequency estimation, frequency estimation based on memory, and in the use of probability in decision making under risk.
I'll show that probability distortions in all cases can be approximated by a linear transformation of the log-odds of probability or relative frequency (Zhang & Maloney, 2011b). The slope and intercept of the linear transformation control probability distortion. Researchers have not been able to predict or explain the values of slope and intercept observed in experiments across tasks or across participants (Gonzalez & Wu, 1999).
In Zhang & Maloney (2011a), we focused on one method for presenting probability, the relative frequency of items of one kind in a visual array of items. We developed a model of human distortion of relative frequency based on Luce's choice axiom and demonstrated in two experiments that we can separately control slope and intercept with high accuracy. Our results support the Gold-Shadlen conjecture that probability is coded neurally as log-odds but with the twist that it is systematically adapted to particular tasks. I'll conclude by discussing the possible benefits of observed distortions.
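The linear-in-log-odds form of distortion described above can be sketched in a few lines. The crossover point below (the probability at which distorted and objective values coincide) is a hypothetical illustrative value, not one estimated from the experiments:

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def inv_log_odds(x):
    """Inverse logit: map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def distort(p, slope, crossover=0.37):
    """Linear transformation of log-odds: the distorted probability has
    log-odds equal to slope * log_odds(p) plus an intercept, written here
    so that 'crossover' is the fixed point of the distortion."""
    return inv_log_odds(slope * log_odds(p) + (1.0 - slope) * log_odds(crossover))
```

With slope < 1 this reproduces the classical inverse-S pattern (small probabilities overweighted, large ones underweighted); slope > 1 gives the reversed pattern reported for the motor task.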
Maria Diehl, University of Rochester
Social communication requires the accurate integration of the auditory and visual information present in faces and vocalizations. We are interested in how the ventral frontal lobe of the rhesus macaque encodes social communication information. The ventral frontal lobe includes the ventrolateral prefrontal cortex (VLPFC) and the underlying orbitofrontal cortex, and evidence suggests that these areas are involved in processing complex features of objects, including faces and vocalizations. Important features embedded within audiovisual vocalizations include emotional expression and caller identity; however, little is known about the extent to which VLPFC represents these features. Our previous studies have demonstrated that single cells in VLPFC are multisensory, integrate face and vocalization information, and demonstrate context- and stimulus-related activity during discrimination tasks using conspecific vocalization movies. To further investigate the role of VLPFC in social communication, we recorded single-cell activity while animals performed a discrimination task using audiovisual face-vocalization stimuli that differed in emotional expression or caller identity. Our preliminary analysis shows that neurons in VLPFC demonstrate both task-related and stimulus-related activity during face-vocalization discrimination and were located in anterolateral portions of VLPFC. Many of the recorded cells showed a significant change in firing rate during the Non-Match period. A smaller proportion of these cells showed a significant change in firing rate related to the change in caller identity than to the change in emotional expression during the Non-Match period.
Continued analyses and recordings are aimed at determining other factors in addition to emotion and caller identity that contribute to the neuronal activity in the ventral frontal lobe during sensory integration of face and vocalization information relevant to social communication.
Yang Dan, UC Berkeley
The cholinergic system in the basal forebrain is an important component of the neuromodulatory system controlling brain state, and it is thought to play critical roles in regulating arousal and attention. However, its role in modulating sensory processing is not yet well understood. Using electrical stimulation in anesthetized rats and optogenetic activation in awake mice, we found that activation of the cholinergic system causes decorrelation between neurons and increases response reliability in the primary visual cortex. Both of these effects can contribute to enhanced visual processing during arousal and attention.
Marie Burns, UC Davis
Rod photoreceptors generate highly amplified responses to the absorption of single photons. Remarkably, the single photon responses (SPRs) of rods with widely varying lifetimes of the G protein-coupled receptor rhodopsin (R*), or of the G protein-effector complex (G*-E*), show very similar peak amplitudes. After measuring the diffusion coefficient of cGMP and the dark PDE activity, we have obtained a fully constrained spatio-temporal model of phototransduction that accurately accounts for the observed SPR amplitude stability in eight different transgenic mouse lines. Surprisingly, this stability arises primarily from calcium feedback regulation of cGMP synthesis. This feedback mechanism not only confers amplitude stability across genotypes but also underlies trial-to-trial reproducibility of SPRs in normal mammalian rods, which is important for rod-mediated vision in dim light. In steady background light, light-dependent movement of phototransduction proteins modulates deactivation mechanisms and greatly extends the range over which rod signaling can occur.
Xaq Pitkow, University of Rochester
An influential theory of visual processing asserts that retinal center-surround receptive fields remove spatial correlations present in the visual world, producing ganglion cell spike trains that are less redundant than the corresponding image pixels. In bright light, this decorrelation would enhance coding efficiency in optic nerve fibers of limited capacity. Here we test the central prediction of the theory and demonstrate that the spike trains of retinal ganglion cells are indeed decorrelated compared to the visual input. However, most of the decorrelation is accomplished not by the center-surround receptive fields, but by nonlinear processing in the retina. We show that a steep response threshold enhances efficient coding by noisy spike trains, and the shape of this nonlinearity is near optimal in both amphibian and primate retina. These results offer an explanation for the sparseness of retinal spike trains, and highlight the general importance of treating the full nonlinear character of neural codes.
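The decorrelating effect of a steep response threshold can be illustrated with a minimal simulation (a toy model with arbitrary parameters, not the retinal analysis itself): two units receive a shared signal plus independent noise, and their rectified outputs are less correlated than their inputs.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(0)
n = 50_000
# Shared "visual input" plus independent noise -> correlated inputs.
common = [random.gauss(0, 1) for _ in range(n)]
a = [c + random.gauss(0, 1) for c in common]
b = [c + random.gauss(0, 1) for c in common]

theta = 1.5  # steep response threshold (illustrative value)
ra = [max(x - theta, 0.0) for x in a]  # rectified "firing rates"
rb = [max(x - theta, 0.0) for x in b]

r_in = pearson(a, b)    # correlation of the graded inputs (~0.5 here)
r_out = pearson(ra, rb) # correlation of the thresholded outputs
```

The thresholded outputs are both sparser and less correlated than the inputs, in the spirit of the nonlinear decorrelation described above.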
Len Zheleznyak, University of Rochester
Presbyopia is the age-related loss of near vision. In the ongoing effort to provide unassisted near vision to an aging population, most research has been devoted to monocular strategies, such as accommodating intraocular lenses and extended depth of field contact and intraocular lenses. Monovision, a clinically well-established binocular approach, designates the dominant eye for distance vision and the non-dominant eye for near vision. This method of inducing a large difference in interocular optical quality can sacrifice binocular summation and visual performance. To improve through-focus visual performance and summation, we propose a method of decreasing interocular optical diversity in monovision by extending the depth of focus of the non-dominant eye with spherical aberration. A binocular adaptive-optics system was used to manipulate the wavefront aberrations of both eyes and simultaneously measure through-focus binocular visual performance (contrast sensitivity and visual acuity). In addition, we introduce a method for predicting binocular visual performance from optical theory, allowing for the design and optimization of binocular presbyopic correction strategies.
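As a rough illustration of why a large interocular difference sacrifices binocular summation, consider a simple Minkowski (quadratic) summation rule for contrast sensitivity. This is a standard textbook model used here only as an assumption; it is not necessarily the prediction method developed in this work:

```python
import math

def binocular_cs(cs_left, cs_right, beta=2.0):
    """Minkowski summation of monocular contrast sensitivities.

    beta = 2 gives the classical quadratic-summation rule, under which two
    equal eyes yield a ~sqrt(2) (~41%) binocular advantage. With strongly
    unequal eyes (as in monovision at a given object distance), the
    predicted binocular sensitivity barely exceeds that of the better eye,
    i.e. binocular summation is largely lost.
    """
    return (cs_left ** beta + cs_right ** beta) ** (1.0 / beta)
```

For example, two matched eyes with sensitivity 100 sum to about 141, whereas a 100/20 monovision-like imbalance sums to only about 102.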
Alex Roxin, Theoretical Neurobiology of Cortical Circuits, IDIBAPS, Barcelona, Spain
Long-term memories are likely stored in the synaptic weights of neuronal networks in the brain. The storage capacity of these networks depends on the plasticity of the synapses: very plastic synapses produce strong memories that are quickly overwritten, while less plastic synapses result in long-lasting yet weak memories. Here we show that this trade-off between memory strength and lifetime can be overcome by initially storing memories in a highly plastic network, which then transfers patterns of synaptic weights to less plastic downstream networks during an off-line mode. This model is reminiscent of the process of memory consolidation, whereby memories are transferred from the hippocampus to cortical sites for long-term storage.
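The strength-versus-lifetime trade-off, and how off-line transfer overcomes it, can be sketched with a toy overwriting model. The learning rates and memory ages below are arbitrary illustrative values, not parameters of the actual model:

```python
def trace_strength(eta, age):
    """Strength of a memory written with learning rate eta after 'age'
    subsequent memories have been stored in the same synapses; each new
    memory overwrites a fraction eta of what is already there."""
    return eta * (1.0 - eta) ** age

fast, slow = 0.5, 0.05  # highly plastic vs. less plastic network

# Fast network: strong initial trace, quickly overwritten.
# Slow network: weak initial trace, long-lived.
# Consolidation: copy the still-strong fast trace into the slow network,
# where it then decays at the slow rate -- strong AND long-lived.
consolidated_at_50 = trace_strength(fast, 0) * (1.0 - slow) ** 50
```

After 50 intervening memories, the consolidated trace is far stronger than a trace stored directly in either network alone.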
Ramkumar Sabesan, University of Rochester (Advisor: Geunyoung Yoon)
The human eye suffers from higher order aberrations in addition to conventional spherical and cylindrical refractive errors. Advanced optical techniques have been devised to correct them in order to achieve superior retinal image quality. However, vision is not completely determined by the optical quality of the eye; it also depends on how the retinal image is processed by the neural system. In particular, how neural processing is affected by past visual experience with optical blur has remained largely unexplored.
The objective of this thesis was to investigate the interaction of optical and neural factors affecting vision. To achieve this goal, pathological keratoconic eyes were chosen as the ideal population to study, since they are severely afflicted by degraded retinal image quality due to higher order aberrations and their neural systems have been habitually exposed to this degradation over long periods of time.
First, we developed advanced customized ophthalmic lenses for correcting the higher order aberrations of keratoconic eyes and demonstrated their feasibility in providing substantial visual benefit over conventional corrective methodologies. However, the achieved visual benefit was significantly smaller than that predicted optically. To better understand this, the second goal of the thesis was to investigate whether the neural system optimizes its underlying mechanisms in response to long-term visual experience with large magnitudes of higher order aberrations. This study was facilitated by a large-stroke adaptive optics vision simulator, enabling us to access neural factors in the visual system by manipulating the limit imposed by the optics of the eye. Using this instrument, we performed a series of experiments establishing that habitual exposure to optical blur leads to an alteration in neural processing that alleviates the visual impact of degraded retinal image quality, referred to as neural compensation. However, it was also found that chronic exposure to poor optics causes neural insensitivity to fine spatial detail, adversely limiting the achievable visual benefit when the eye's optical quality is improved. Finally, we demonstrated that the altered, but plastic, visual system can be re-adapted to improved optics such that it partially recovers its normal mechanisms. These findings not only have broad clinical implications for advanced customized vision correction methodologies for normal, pathologic, and presbyopic eyes, but also provide vital scientific insight into how the visual system's neural processing responds to the aberrated optics of the eye.
Kevin Dieter, University of Rochester
Wave your own hand in front of your eyes: what do you perceive? Certainly you both feel and see your hand moving. This multisensory experience is characterized by consistent pairings of component sensations: vision and proprioception. How might our brains adapt to these reliable co-occurrences of sensory inputs? Specifically, we asked what happens when only one of the normally paired inputs is present.
Participants waved their own hand in front of their eyes in total darkness (either while wearing a blindfold, or in a totally dark room) and were asked to assess their visual experience. Subjective ratings indicated that participants frequently experienced visual sensations of motion when waving their own hand, but did not experience these sensations when the experimenter waved his hand. We objectively assessed these visual percepts by measuring eye movements, with the hypothesis that only those subjects who experienced a visual percept would be able to make smooth pursuit eye movements (SPEM). Results showed that eye movements were significantly smoother when participants tracked their own hand than when they tracked the experimenter's hand. Synaesthetes, who are hypothesized to have increased sensory connectivity, reported the strongest subjective percepts and showed the greatest proportion of SPEM. This pattern of results suggests that proprioception alone can give rise to a visual percept when a self-generated movement is performed in a way that normally results in a visual percept. In addition, the results from synaesthetes indicate that this percept is mediated by multisensory connectivity.
Joe Lappin, Vanderbilt University
How do retinal images lead to perceived environmental objects? Vision involves a series of spatial and material transformations from environmental objects to retinal images, to neurophysiological patterns, and finally to perceptual experience and action. A rationale for understanding functional relations among these physically different systems occurred to Gustav Fechner: Differences in sensation correspond to differences in physical stimulation. The concept of information is similar: Relationships in one system may correspond to, and thus represent, those in another. Criteria for identifying and evaluating information include (a) resolution, or the precision of correspondence; (b) uncertainty about which input (output) produced a given output (input); and (c) invariance, or the preservation of correspondence under transformations of input and output.
We apply this framework to psychophysical evidence to identify visual information for perceiving surfaces. The elementary spatial structure shared by objects and images is the second-order differential structure of local surface shape. Experiments have shown that human vision is directly sensitive to this higher-order spatial information from interimage disparities (stereopsis and motion parallax), boundary contours, texture, shading, and combined variables. Psychophysical evidence contradicts other common ideas about retinal information for spatial vision and object perception.
Joanna Crook, University of Washington
The parallel visual pathways of the trichromatic primate, whose unique color coding makes it an ideal model for its human counterpart, have been defined anatomically, setting the stage for understanding how neural structure gives rise to function in one particularly accessible outpost of the central visual system, the neural retina. My work applies extra- and intracellular physiology, together with pharmacological manipulation, to an in vitro macaque monkey retina preparation, and represents the first use of sophisticated visual stimuli that can isolate spectrally distinct photoreceptor signals to tease apart the synaptic basis of chromatic and luminance pathways. My work addresses key outstanding questions about the three best-studied visual pathways in the primate.
The midget pathway transmits both a red-green chromatic signal and a critical luminance signal that sets the limit on visual acuity; I directly address, for the first time, the synaptic mechanisms that give rise to this unique double-duty performance. Specifically, I test the hypothesis of color-selective inhibition and define a new type of spatio-chromatic receptive field structure critical for understanding how opponency and luminance coding arise from a single circuit. A blue-yellow pathway originates from a distinctive bistratified ganglion cell; I directly test, for the first time, and dramatically confirm, the ON-OFF pathway excitatory hypothesis for color coding. The parasol pathway transmits an achromatic signal, but its role in object motion has been controversial; I found that parasol cells show nonlinear spatial summation, a critical property for higher-order motion detection.
Finally, I characterize a new achromatic visual pathway, the smooth monostratified cell; the similarities between the smooth and parasol pathways lead to a new hypothesis about how and why the retina utilizes a plurality of parallel visual pathways.
Ethan Rossi, University of Rochester
Current clinical imaging tools lack the resolution to examine retinal disease at the level of single cells in the living human eye. Adaptive optics retinal imaging provides single-cell resolution of retinal mosaics and other microscopic structures in the living eye. This technology, applied to the study of retinal disease, has the potential to: 1) provide new understanding of how retinal diseases disrupt the retina at the level of single cells, 2) reveal new biomarkers of retinal disease, 3) detect diseases earlier (when treatments might be most effective), and 4) monitor clinical interventions on short time scales (speeding drug discovery and reducing both the duration and cost of clinical trials). Thus adaptive optics retinal imaging has the potential to transform clinical and translational research on retinal disease. However, adaptive optics imaging of retinal diseases, particularly those of the aging eye, presents many challenges that do not exist in the examination of healthy eyes or of eyes with only mild disruptions. These include reduced optical quality, poor vision, and poor fixation. The clinical and scientific utility of adaptive optics imaging of retinal disease therefore requires careful selection of scientific questions and patient populations. This talk will provide an overview and summary of several recently completed and currently ongoing studies of retinal disease in the laboratories of the Advanced Retinal Imaging Alliance here at Rochester. Recent findings will be presented covering several diseases, including cone-rod dystrophy, macular telangiectasia, and age-related macular degeneration (AMD).
Michele Rucci, Boston University
Our eyes are never at rest. Even when we attempt to maintain fixation on a stationary point, microscopic eye movements keep the stimulus on the retina in constant motion. This talk will focus on the effects of fixational eye movements on input signals and the neural encoding of visual information. During viewing of natural scenes, fixational eye movements reformat the retinal stimulus to yield temporal modulations with uniform spectral density over a wide range of spatial frequencies. This effect depends on the joint characteristics of the scene and of fixational instability; it indicates a form of matching between the statistics of natural images and those of normal eye movements. The consequent spatial whitening of the retinal stimulus implies a reduced sensitivity to predictable input correlations and an enhanced response to luminance discontinuities, outcomes long advocated as important goals of early visual processing. These results suggest that perception and behavior are more intimately tied than commonly thought. They link fixational instability to the statistics of the natural world and imply a contribution from fixational eye movements to the enhancement of luminance edges in neural representations in the retina and thalamus.
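The whitening argument can be checked numerically under two standard simplifying assumptions, used here only for illustration: natural-scene spatial power falls roughly as 1/k^2 with spatial frequency k, and for small Brownian-like drift the temporal modulation power delivered at spatial frequency k scales as k^2 times the scene power at that frequency.

```python
def scene_power(k):
    """Idealized natural-image spatial power spectrum, ~1/k^2."""
    return 1.0 / k ** 2

def temporal_power(k, drift_gain=1.0):
    """Small-motion approximation: temporal power from fixational drift
    at spatial frequency k scales as k^2 * S(k) (illustrative assumption)."""
    return drift_gain * k ** 2 * scene_power(k)

freqs = [0.5, 1, 2, 4, 8, 16]  # spatial frequencies, cycles/deg (illustrative)
spatial = [scene_power(k) for k in freqs]      # steeply falling
temporal = [temporal_power(k) for k in freqs]  # flat: whitened
```

The spatial spectrum spans three orders of magnitude over this range, while the drift-induced temporal spectrum is constant, which is the whitening effect described above.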
Valentin Dragoi, University of Texas-Houston Medical School
Understanding the rules by which brain networks represent incoming stimuli in population activity to influence the accuracy of behavioral responses remains one of the deepest mysteries in neuroscience. The research goal of my laboratory is to investigate the real-time operation of neuronal networks in multiple brain areas and their capacity to undergo adaptive changes and plasticity. What are the fundamental units of network computation and the principles that govern their relationship with behavior? I will describe our recent studies employing electrophysiological techniques to record from large pools of cells in the non-human primate brain while the animal performs a fixation or behavioral task. We find that spatio-temporal correlations between neurons could act as an active switch to control network and behavioral performance in real time by modulating the communication between cortical networks. We believe that cracking the mysteries of the population code will offer unique insight into a network-based mechanistic explanation of behavior and new therapeutic solutions to cure brain dysfunction.
Ione Fine, University of Washington
Early blindness is a dramatic example of cortical plasticity, resulting in a wide-ranging set of changes that include altered synaptic connectivity, large-scale changes in the functional response properties of occipital cortex, and improved processing of auditory and tactile information. I will discuss how recent work by our laboratory and others is moving us towards an integrated understanding of the neuroanatomical and behavioral effects of human blindness.
Ying Geng, Institute of Optics (Advisor: David Williams)
The rodent has become an increasingly valuable model due to its amenability to genetic manipulation. Non-invasive microscopic imaging of the rodent retina would allow tracking of retinal development, disease progression, and the efficacy of therapy in single animals. Correction of the eye's aberrations using adaptive optics (AO) could improve the resolution of in vivo rodent retinal images, but previous attempts have been limited by the small size of the rodent eye and the difficulty of measuring its aberrations due to poor Shack-Hartmann wavefront sensor (SHWS) spot quality. The work in this thesis describes methods developed to measure the optics of the rodent eye and to optimize its retinal image quality in vivo. Our first attempt was to modify a confocal fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO), originally built for imaging the primate and human eye, to accommodate the rat eye. Despite achieving in vivo resolution sufficient to resolve sub-cellular structures in fluorescent ganglion cells, we observed problems with aberration measurements and AO image quality. We then constructed a SHWS customized for the small mouse eye and found a solution to the aberration measurement problem. The custom-designed SHWS can favor light from a specific retinal layer and provide good wavefront spot quality. This wavefront sensor was incorporated into a confocal AOSLO custom designed for the mouse eye. We have obtained high-quality images of multiple cell layers in the mouse retina, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and blood flow, and ganglion cell bodies and fine processes. The in vivo resolution of the system was directly characterized to be sub-μm laterally and ~10 μm axially. This fine resolution has allowed classification of ganglion cells in vivo. The value of the instrument was also demonstrated in two functional imaging studies.