Holly Bridge, University of Oxford
(co-sponsored with Neurology, Neuroscience, Neurosurgery)
Rick Born, Professor of Neurobiology, Harvard Medical School
Joseph Carroll, Medical College of Wisconsin
Kim Schauder, University of Rochester
Richard Lange, Graduate Student, BCS, University of Rochester
Monique Mendes, CVS Trainee, University of Rochester
Randolph Blake, Centennial Professor of Psychology, Vanderbilt University
Eero Simoncelli, New York University
Nathaniel Zuk (Postdoc, BME) & Shaorong Yan (Graduate Student, BCS)
Please RSVP here: https://goo.gl/forms/uz55t9eu6OlH5R932
Speech and music share similar temporal attributes such as rhythm and phrasing. Yet we perceive these attributes differently, and it is still unclear how temporal processing in the brain contributes to the differences in perception for speech and music. Electroencephalography (EEG) contains sufficient temporal precision to study the neural encoding of these types of sounds in humans, but there are still technical hurdles to overcome with this technique. One key issue is that typical research using simplistic and repetitive stimuli may show results that are hard to extend to a naturalistic context.
Our lab has shown that the neural processing of natural, continuous speech can be studied by using linear models to reconstruct the speech envelope from the recorded EEG. These models capture correlations between the EEG and the speech stimulus associated with the neural encoding of amplitude fluctuations in the stimulus. However, when we try to extend this analytical method to musical stimuli, these models fail to reconstruct the envelopes of music. The failures cannot be explained by envelope statistics, suggesting that the processes responsible for temporally encoding music are obscured in the raw EEG signal.
Here, we demonstrate a modification of our previous method that allows us to study the neural encoding of music with EEG in a naturalistic context. We decompose the EEG signal into the time-varying amplitude and time-varying phase for delta, theta, alpha, and beta frequency bands. We then use the decomposed EEG signal to reconstruct the envelope of the stimulus. We find that delta phase contributes significantly to the reconstruction of the envelopes for speech, while alpha and beta amplitudes contribute significantly to the reconstruction of the envelopes of rock music. This method fails to reconstruct the envelopes of classical music, potentially due to the differences in rhythmic regularity for the rock and classical music stimuli. We think that our result points to distinct temporal processes in the brain for encoding speech and music.
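The stimulus-reconstruction approach described above can be sketched in a few lines: a ridge-regularized linear model maps time-lagged EEG channels onto the stimulus envelope. This is a toy illustration with synthetic data, not the lab's actual pipeline; the lag range, ridge penalty, and channel model are assumptions for demonstration only.

```python
import numpy as np

def build_lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel into a design matrix.
    eeg: (n_samples, n_channels); lags: sample offsets (negative = future EEG)."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        # zero out samples that wrapped around the array edges
        if lag > 0:
            shifted[:lag] = 0
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * c:(j + 1) * c] = shifted
    return X

def fit_reconstructor(eeg, envelope, lags, ridge=1.0):
    """Ridge-regularized least squares: envelope ~ X @ w."""
    X = build_lagged_design(eeg, lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)
    return w

# Toy data: each "EEG channel" is a delayed, noisy copy of the envelope
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.stack([np.roll(env, k) + 0.5 * rng.standard_normal(2000)
                for k in range(1, 5)], axis=1)

lags = range(-10, 1)  # decoder uses EEG at and after each stimulus sample
w = fit_reconstructor(eeg, env, lags, ridge=10.0)
pred = build_lagged_design(eeg, lags) @ w
r = np.corrcoef(pred, env)[0, 1]  # reconstruction accuracy (correlation)
```

In practice such decoders are trained and evaluated with cross-validation on held-out data; the in-sample correlation here is only meant to show the mechanics.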
Matt Overlan, Frank Mollica, BCS Graduate Students, University of Rochester
Matt will be talking about probabilistic program induction as a framework for modeling human concept learning. Probabilistic programs are those that use stochastic functions, and program induction means learning a latent or hidden program from data. He will discuss the advantages and disadvantages of probabilistic programs as compared to other kinds of representations, and he'll show several applications of their use in the literature. Lastly, Matt will show results from a visual concept learning experiment that we ran with adults on mechanical turk, and compare those results to the predictions of our program induction model.
Frank will discuss how probabilistic programs have been used to uncover new insights about linguistic representations (e.g., phonotactics, argument structure), investigate productivity and reuse in language, and model children's early word use.
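For readers unfamiliar with the framework, probabilistic program induction can be sketched as Bayesian inference over a space of small programs generated by a grammar, with a simplicity prior and a noisy-consistency likelihood. The feature set, conjunctive "grammar", and noise level below are invented for illustration and are not the models discussed in the talk.

```python
import itertools
import math

# Hypothetical concept space: conjunctions of feature tests over
# objects described as (shape, color, size) tuples.
FEATURES = {
    "is_circle": lambda o: o[0] == "circle",
    "is_red":    lambda o: o[1] == "red",
    "is_big":    lambda o: o[2] == "big",
}

def all_programs(max_len=2):
    """Enumerate conjunctive 'programs' of up to max_len primitives."""
    names = list(FEATURES)
    for k in range(1, max_len + 1):
        for combo in itertools.combinations(names, k):
            yield combo

def posterior(data, max_len=2, noise=0.05):
    """Simplicity prior (longer programs are less probable) times a
    noisy-consistency likelihood, normalized over the hypothesis space."""
    scores = {}
    for prog in all_programs(max_len):
        prior = math.exp(-len(prog))
        like = 1.0
        for obj, label in data:
            pred = all(FEATURES[name](obj) for name in prog)
            like *= (1 - noise) if pred == label else noise
        scores[prog] = prior * like
    z = sum(scores.values())
    return {p: s / z for p, s in scores.items()}

# Labeled examples of an unknown visual concept
data = [(("circle", "red", "big"), True),
        (("circle", "blue", "small"), True),
        (("square", "red", "big"), False)]
post = posterior(data)
best = max(post, key=post.get)  # the MAP program
```

With these three examples the posterior concentrates on the single-feature program `is_circle`, illustrating how the simplicity prior favors the shortest program consistent with the data.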
We’ll be having the second installment of the CLS Interdisciplinary talk series next week (November 7 @4 in Meliora 301B). Matt Overlan (BCS, advisor: Robbie Jacobs) will be giving a short informal talk about his research, and Frank Mollica (BCS, advisor: Steve Piantadosi) will be leading the discussion. Following the talk we will be convening at the Tap and Mallet.
If you are planning on attending, please RSVP via this form: https://goo.gl/forms/3QDNa1HztoHgt5DP2
Robert Jacobs, Professor, BCS, University of Rochester
At the upcoming NIPS conference, there will be a workshop titled "Cognitive-Inspired Artificial Intelligence". I would like to practice my talk (only 20 minutes) for this workshop during this CVS colloquium and to receive your feedback. Perhaps more importantly, this workshop is an opportunity for people who work in cognitive science and neuroscience to provide advice and suggestions to people who work in artificial intelligence. During the CVS colloquium, I'd like to encourage a discussion of this topic, with an emphasis on biological and artificial visual perception. What do you think are the most important lessons that AI researchers (e.g., in computer vision, robotics, etc.) need to learn from researchers who study biological vision?
Jonathan Samir Matthis, University of Texas at Austin
Walking over rough terrain requires walkers to perform a rapid visual search on the upcoming path to identify footholds to ensure safe and stable locomotion. In this talk, I will examine how vision is used to guide foot placement, and discuss how optic flow arising from self-motion may contribute to the control of locomotion through the natural world. We developed a novel apparatus to record 3D gaze and full body kinematics of walkers traversing different types of real-world rough terrain. The relationship between gaze and upcoming footholds reveals that walkers tune their gaze allocation strategy to the constraints of the terrain they are traversing in order to maintain a consistent gait control strategy that balances energetic efficiency and locomotor stability. In addition, we analyzed the head-mounted video to measure the optic flow stimulus experienced during real-world locomotion. The resulting patterns of optic flow (and the behavior of the focus of expansion) are markedly different from descriptions in the literature on the neural processing of visual self-motion. Nevertheless, there are regularities to both head-centered and retinal-centered optic flow that suggest a much richer source of information that may be used to enact fine grained control of locomotion through the natural world.
Ed Lalor, U. Rochester, BME
Deficits in sensory processing have been widely reported in patients with schizophrenia using both behavioral and neurophysiological measures. These sensory processing deficits are now being used as endophenotypes to identify genes involved in familial transmission of schizophrenia and to monitor therapeutic drug response for both treatment and prevention. Furthermore, because low-level sensory deficits can negatively impact higher-order cognitive function, an understanding of the basis for these deficits offers potential for improving cognition in this terribly debilitating disease. One active area of research on this topic is the use of EEG to study visual processing deficits in patients with schizophrenia. A number of these studies point to a high degree of specificity in the dysfunction underlying these deficits. This has led researchers to implicate certain neurotransmitter mechanisms and candidate genes, providing targets for new drug development that could potentially improve cognitive function in patients. However, one important shortcoming of the commonly used EEG visual response paradigms is that they yield responses that derive from the simultaneous activation of many visual processing regions. As a result, they are difficult to interpret, and any inferences drawn from them are difficult to validate in terms of underlying mechanisms. In this talk I will discuss several studies suggesting that the visual processing deficits seen in patients with schizophrenia are likely to be highly specific in terms of their neural substrates. I will also discuss novel approaches for eliciting EEG-based measures of processing from specific visual cortical areas in relative isolation.
Woon Ju Park (Postdoc with Duje Tadin) & Chigusa Kurumada (Assistant Professor, Brain and Cognitive Sciences), University of Rochester
In this meeting, Woon Ju Park (BCS, CVS) will tell us about her dissertation research on predictive eye movements in a visual perception task. She tested a population with Autism Spectrum Disorder and a control group of neurotypical adults and found that the two groups showed different saccadic eye movements within a trial and also differed in their learning behaviors over the course of the experiment. Her results provide particularly interesting food for thought regarding general questions such as:
- How do we make predictive eye movements based on visual and auditory stimuli?
- How do we accumulate relevant statistical information to improve our prediction accuracy?
Following Woon Ju's research presentation, we will have an open discussion about how we can address these questions from an interdisciplinary perspective. Chigusa Kurumada (BCS, CLS) will provide a brief overview of predictive (anticipatory) eye movements as typically discussed in psycholinguistic research and moderate a discussion about how vision research can inform language research and vice versa.
RSVPs are requested for determining room size / snacks (to firstname.lastname@example.org). An informal social gathering will follow at Swiftwater Brewery (378 Mt Hope Ave).
Kamal Dhakal, U. Rochester
Stimulation of cells, especially neurons, is of significant interest both for basic understanding of neuronal circuitry and for clinical intervention. Existing electrode-based methods of stimulation are invasive and not specific to single cells or cell types. Recently, optical stimulation of targeted neurons expressing light-sensitive proteins (opsins) has surfaced as a powerful emerging technique in neuroscience called "optogenetics". However, due to significant loss of visible light from scattering and absorption by tissue, single-photon optogenetic stimulation may not be ideal for localized, in-depth stimulation. Fiber-optic two-photon optogenetic stimulation (FO-TPOS) was used to enhance in-depth stimulation. Various transfection methods, such as viral transduction, lipofection, and electroporation, have been developed to express opsins. However, these methods are not suitable for transfection of a single cell. To achieve single-cell transfection, a femtosecond pulsed laser microbeam was used to make a transient hole in the cell membrane to deliver exogenous molecules such as plasmids (Channelrhodopsin-2 (ChR2) and red-activatable Channelrhodopsin (ReaChR)) or a cell-impermeable dye (rhodamine phalloidin). This method allowed live-cell imaging following injection of the actin-staining dye rhodamine phalloidin.
Ross Maddox, U. Rochester, BME
Listening in the real world is a multisensory endeavor. When visual and auditory information are both available, they are integrated to achieve the most accurate perception of complicated sensory scenes. A salient example of this integration is the ventriloquism effect, in which a visual stimulus "captures" the location of an auditory stimulus in a way that is well described by optimal integration of location information. But when a visual stimulus offers no task-relevant information to integrate, can it still affect auditory perception? In this talk we will discuss two examples from our research that show improvements in auditory spatial discrimination resulting from visual stimuli that are task-uninformative, and discuss possible underlying mechanisms.
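The optimal integration invoked to describe the ventriloquism effect has a simple closed form: each cue is weighted by its reliability (inverse variance). Below is a minimal sketch with hypothetical numbers standing in for visual and auditory location estimates.

```python
def integrate(x_v, var_v, x_a, var_a):
    """Reliability-weighted combination of a visual estimate (x_v, var_v)
    and an auditory estimate (x_a, var_a) of the same source location.
    Returns the combined estimate and its (reduced) variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    x_hat = w_v * x_v + (1 - w_v) * x_a
    var_hat = 1 / (1 / var_v + 1 / var_a)
    return x_hat, var_hat

# Hypothetical example: precise vision (var 1) vs. imprecise audition (var 9).
# The combined percept is pulled strongly toward the visual location,
# which is the signature of the ventriloquism effect.
est, var = integrate(0.0, 1.0, 10.0, 9.0)
```

With these numbers the visual cue gets 90% of the weight, so the combined estimate lands at 1.0 degrees rather than halfway between the two cues, and the combined variance (0.9) is lower than either cue alone.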
In the first experiment, we used a visual primer stimulus to direct listeners' eye gaze while keeping their heads fixed. We found that auditory spatial discrimination improved when gaze was directed towards the auditory stimulus versus when it was not. We found no improvement when the spatial primer stimulus was auditory rather than visual, indicating that attention alone does not explain the results and that eye gaze is an essential factor.
In the second experiment, we presented listeners with two symmetrically lateralized auditory stimuli: a noise and a harmonic tone, and asked them to report which side the tone was on. Concurrent with the auditory stimuli we presented two small visual stimuli. In one condition, the azimuth of the two visual stimuli matched those of the two auditory stimuli, and in the control condition both visual stimuli were presented centrally. We saw an improvement in auditory discrimination in the match-location condition, even though the visual stimuli provided no information about the auditory task.
Chia-Yang Liu, Ph.D. , University of Rochester
Pizza and refreshments provided at 11:45am
Conjunctival goblet cells are the major cell type that synthesizes and secretes mucins for the maintenance of ocular surface integrity. Lack of mucins in the tear film due to goblet cell abnormalities causes dry eye syndrome (DES) and affects millions of people’s vision and quality of life. The lack of knowledge regarding the regulatory mechanisms by which conjunctival epithelial cells differentiate to form goblet cells hampers the development of treatment regimens for DES. We found that inhibition of Notch via conditional expression of a dominant-negative transcriptional coactivator, mastermind-like 1 (dnMAML1), in the ocular surface epithelia (OSdnMAML1) suppressed goblet cell differentiation in a mouse model. Compared to the wild-type mouse ocular surface (OSWt), the OSdnMAML1 exhibited conjunctival epithelial hyperplasia, aberrant desquamation, and impaired goblet cell formation. Moreover, OSdnMAML1 inhibited expression of the Krüppel-like transcription factor 4 and 5 genes (Klf4 and Klf5) and Muc5ac synthesis. In contrast, conjunctival epithelium was expanded and differentiated into goblet cells throughout the entire eyelid stroma of the TGFβRII conditional knockout (cKO) mice. The expanded TGFβRIIcKO conjunctival epithelium strongly expressed SAM-pointed domain-containing ETS transcription factor (SPDEF), which plays a critical role in goblet cell differentiation in multiple organs. Our data argue that intrinsic canonical Notch and TGFβ signaling pathways and their interaction(s) play pivotal roles in conjunctival goblet cell differentiation.
Woon Ju Park, Graduate Student, Advisor: Tadin, University of Rochester
A growing number of studies suggest atypical visual processing in autism spectrum disorder (ASD). Given that human behavior relies heavily on visual information, impairments in visual processing may have cascading effects on many other brain functions. Recent proposals in ASD, both domain-specific and domain-general, hypothesize different mechanisms that may impact visual abilities in this population. However, empirical support for such accounts has been lacking, and it is unclear whether and how these mechanisms can influence visual perception in ASD. The series of studies in this dissertation examines atypical visual processing mechanisms in ASD under three frameworks: larger receptive field size, elevated internal noise, and impaired prediction abilities. We examine each of these hypotheses in children and adolescents with ASD, using a combination of psychophysics, computational modeling, and eye-tracking. In Chapter 2, we tested the receptive field size account using a motion discrimination task. The results showed that individuals with ASD have impaired motion sensitivity at smaller stimulus sizes, which was best explained by the larger receptive field size account. In Chapter 3, we investigated whether internal noise is in fact elevated in ASD, and found empirical evidence that supports this account. Importantly, we found that higher internal noise was associated with more severe behavioral symptoms of ASD. Lastly, we examined prediction abilities in ASD in the context of visual motion extrapolation. The results demonstrate impaired motion prediction in ASD, which was also supported by atypical eye-movement patterns during the task. Taken together, these studies reveal deficits in visual processing in ASD across a wide range of processing stages. The findings not only provide empirical support for existing proposals of ASD, but also shed light on the specific mechanisms associated with atypical visual abilities in this population.
Mark Buckley, Professor, Biomedical Engineering, University of Rochester
In this two-part talk, we will introduce our ongoing research efforts in the field of corneal biomechanics and mechanobiology. First, we are investigating the mechanical properties of the cornea to inform treatment of keratoconus. In keratoconus, a degenerative corneal disease, corneal biomechanics are substantially altered, leading to altered corneal shape and distorted vision. While UV/riboflavin crosslinking (CXL) stiffens the cornea and is an effective, FDA-approved method to halt the progression of keratoconus, it can only be used in the early stages of disease due to risk of damage to corneal endothelial cells. Moreover, long-term effects of CXL have not been assessed. Rigorously characterizing corneal biomechanics is a key step towards motivating and developing novel interventions for keratoconus. To this end, we have used fluorescence microscopy-based strain mapping to quantify the depth-dependent (viscoelastic) mechanical properties of the cornea for loading in different physiologically relevant directions with high spatial resolution. In addition, we are currently investigating the role of specific corneal constituents in dictating these properties and, in doing so, have identified partial digestion of corneal proteoglycans as a novel intervention for stiffening the cornea and halting keratoconus without endangering endothelial cells.
Second, we are investigating the mechanical vulnerability of corneal endothelial cells (CECs) and strategies to prevent CEC death during and after corneal transplantation. The most common reason for transplanted corneal grafts to fail is loss of corneal endothelial cells (CECs), the cells that line the inside of the cornea and pump fluid from it to maintain its transparency. Many of these cells are killed due to contact with tools and other materials during transplantation surgery. Thus, there is a need for new approaches that prevent CEC death during corneal grafting. Using instrumented surgical tools and a custom testing platform developed in our laboratory, we have characterized the susceptibility of CECs to surgically-induced mechanical trauma and have identified actin stress fibers as potential mediators of CEC vulnerability. These findings suggest that treatments targeting actin stress fibers could help limit surgical trauma-associated CEC loss during corneal transplantation and reduce risk of graft failure.
Summer Undergraduate Fellowship Poster Session & Picnic
Poster Session 9:00 - 11:00 AM, 2nd Floor Meliora Hall
Picnic: 12:00 PM, Genesee Valley Park
Frank Garcea, Graduate Student, BCS, University of Rochester
PhD Thesis Defense. Advisor: Brad Mahon
The capacity to manipulate objects according to their function is a fundamental cognitive ability that is utilized on a daily basis: We are constantly integrating knowledge of object identity and function with action knowledge to use objects in accordance with our behavioral goals. Previous research has described a whole-brain network of regions, referred to as the Tool Processing Network, that collectively support object recognition and object use ability. However, the neural mechanisms that integrate object representations with action representations in the service of object use remain poorly understood. The work reported in this thesis investigates the functional interactions among regions of the Tool Processing Network during object recognition and object use from three perspectives: i) functional MRI in healthy and brain damaged adults (Chapters 2, 3, 5, 7), ii) behavioral studies in healthy adults (Chapters 4 and 6), and iii) neuropsychological evaluations of patients with disorders of object use (Chapter 6). The principal insight of the work presented herein is that the neural mechanisms that support object-directed grasping and manipulation are contingent upon privileged functional interactions that come by way of the ventral object-processing pathway.
Krishnan Padmanabhan, Assistant Professor, Neuroscience, University of Rochester
From the earliest drawings of neurons, to the identification of families of voltage-gated ion channels, a central theme of neuroscience has been the remarkable variety of cells. As catalogues of this diversity grow at multiple levels (molecular, anatomical, physiological, connectivity, etc.), an open question in systems neuroscience is the role this diversity plays in circuit function and computation. First, I will briefly discuss work on the diverse roles that patterns of activity play in shaping the early connectivity of the ferret visual system. Following this, I will discuss more recent work examining how diversity in the intrinsic properties of neurons influences their function, and the substantial effect that altering diversity can have on coding. Finally, I will conclude by discussing new work in the lab interrogating diversity at multiple levels in the neural circuit.
PONS Luncheon Roundtable Series: Vision and Retinal Disease
Hosted by the Pre-doctoral Organization for the Neurosciences (PONS)
Please join us to discuss current research ongoing at the U of R with expert panelists Krystel Huxlin, PhD, Duje Tadin, PhD, Mina Chung, MD, and Jennifer Hunter, PhD (Dept. of Ophthalmology). Refreshments will be provided. Hope to see you there!
For more information on upcoming Neuro-related events, please visit http://blogs.rochester.edu/pons/upcoming
Pizza and refreshments provided at 12:15pm
Adam Kohn, Albert Einstein College of Medicine
*Shared with Neuroscience
The primate visual cortex consists of a host of distinct areas. Visual processing requires the relaying of appropriate information between these areas, via feedforward and feedback connections. Despite the central role of interareal communication in brain function, we know little about the circuits and neural code that are used to relay information. I will present recent work that tests the relationship between population activity patterns in a source area and the firing they produce in a downstream target.
Leah Krubitzer, UC Davis
The neocortex is the part of the brain that is involved in perception, cognition, and volitional motor control. In mammals it is a highly dynamic structure that has been dramatically altered within an individual's lifetime and in different lineages throughout the course of evolution. These alterations account for the remarkable variations in behavior that species exhibit. Because we cannot study the evolution of the neocortex directly, we must make inferences about the evolutionary process from a comparative analysis of brains, and study the developmental mechanisms that give rise to alterations in the brain. Comparative studies allow us to appreciate the types of changes that have been made to the neocortex and the similarities that exist across taxa, and ultimately the constraints imposed on the evolving brain. Developmental studies inform us about how phenotypic transitions may arise by alterations in developmental cascades or changes in the physical environment in which the brain develops. We focus on how early experience shapes the functional organization and connectivity of each individual's brain and behavior to be uniquely optimized for a given sensory milieu. Such plasticity plays an integral role in shaping the brains of normal individuals, as well as those that have lost or altered sensory inputs, such as congenitally deaf or blind individuals. This loss of sensory input early in development leads to dramatic changes in both the normal organization and connections of the neocortex as well as in sensory mediated behavior. Studies have also demonstrated that enhanced sensory experience that occurs during critical periods of development has a profound effect on the resultant organization and connectivity of the neocortex. In our experiments we examined the specific types of alterations that occur when individuals develop with lost or enhanced sensory inputs in both experimental and natural settings. 
Because all aspects of complex social experience including parental rearing and sibling interactions are mediated by our sensory systems, it follows that these types of complex patterns of sensory inputs are fundamentally important for shaping both the organization and connectivity of the neocortex. In turn, the ultimate behavior generated by the neocortex will be highly adaptive for the context in which the individual develops.
David Fitzpatrick, Max Planck Institute
How do cortical circuits transform the information supplied by different populations of retinal ganglion cells into coherent representations of the visual world? Hubel and Wiesel's demonstration of the emergent properties of cortical circuits, such as selectivity for the orientation of edges and their arrangement in an orderly columnar architecture, set the stage for a host of studies that have provided key insights into the circuit mechanisms that build cortical representations. While progress has been substantial, we still lack a fundamental understanding of the functional synaptic architecture of cortical circuits: the rules that govern how individual neurons integrate thousands of synaptic inputs with different functional properties to produce coherent sensory responses. My presentation will focus on recent studies employing in vivo 2-photon imaging of the calcium sensor GCaMP6 that probe the organization of functionally defined inputs within the dendritic fields of individual pyramidal neurons in visual cortex. These results suggest that the spatial arrangement of synaptic connections within the dendritic field plays a significant role in the cortical computations that underlie sensory representations.
Jake Yates, Postdoctoral Fellow, University of Rochester
Motion perception is a classic framework for probing the computations and circuits underlying perceptual decisions. Despite a long history of studying the sensitivity of single neurons, little is known about how direction is read out from the activity of neural populations. In the first half of the talk, I will describe the activity of small ensembles of MT neurons recorded simultaneously while macaque monkeys performed a 2-alternative coarse direction-discrimination task. By comparing the performance of a simple decoder to the psychophysical performance, we found that the population was more accurate than the best single neurons and performed at least as well as the monkey. We also found that the joint response patterns of neurons were not needed to compute the optimal weight pattern, and that MT populations were most sensitive to the stimulus immediately following motion onset.
In the second half of the talk, I will describe behavioral performance of two marmoset monkeys performing a continuous motion estimation task. The common marmoset is a New World primate that shares similar organization of MT to macaques, but due to its smooth cortex, offers unparalleled access to study large populations of MT neurons. Combined with large-scale recording techniques, this behavioral paradigm offers a new means for studying the neural population code that underlies motion perception.
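The comparison between a simple population decoder and the best single neuron can be illustrated with a toy simulation. The tuning curves, trial counts, and Fisher-linear-discriminant readout below are assumptions for demonstration, not the recorded data or the decoder used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical MT-like ensemble: Poisson units tuned to motion direction
prefs = np.linspace(0, np.pi, 8)  # preferred directions (radians)

def rates(theta):
    """Mean firing rates of all units for stimulus direction theta."""
    return 5 + 15 * np.exp(np.cos(2 * (theta - prefs)) - 1)

def simulate(theta, n_trials):
    """Poisson spike counts: (n_trials, n_units)."""
    return rng.poisson(rates(theta), size=(n_trials, prefs.size))

# 2-alternative coarse discrimination: direction 0 vs. pi/2
A = simulate(0.0, 500)
B = simulate(np.pi / 2, 500)

# Simple population readout: Fisher linear discriminant
mu_a, mu_b = A.mean(0), B.mean(0)
cov = (np.cov(A.T) + np.cov(B.T)) / 2
w = np.linalg.solve(cov + 1e-6 * np.eye(prefs.size), mu_a - mu_b)
thresh = w @ (mu_a + mu_b) / 2
acc_pop = ((A @ w > thresh).mean() + (B @ w < thresh).mean()) / 2

def single_acc(i):
    """Best achievable accuracy thresholding unit i's spike count alone."""
    best = 0.0
    for t in np.unique(np.r_[A[:, i], B[:, i]]):
        acc = ((A[:, i] > t).mean() + (B[:, i] <= t).mean()) / 2
        best = max(best, acc, 1 - acc)  # allow either sign of the rule
    return best

acc_best_single = max(single_acc(i) for i in range(prefs.size))
```

In this toy setting the pooled linear readout matches or exceeds the best single unit, mirroring the qualitative result described above; with correlated noise the gap between the two can shrink or grow, which is part of what makes the empirical question interesting.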
Beth Buffalo, University of Washington
While it has long been recognized that medial temporal lobe structures are important for memory formation, studies in rodents have also identified exquisite spatial representations in these regions in the form of place cells in the hippocampus and grid cells in the entorhinal cortex. Spatial representations entail neural activity that is observed when the rat is in a given physical location, and these representations are thought to form the basis of navigation via path integration. One striking difference between rodents and primates is the way in which information about the external world is gathered. Rodents typically gather information by moving to visit different locations in the environment, sniffing and whisking. By contrast, primates chiefly use eye movements to visually explore an environment, and our visual system allows for inspection of the environment at a distance. In this seminar, I will discuss recent work from my laboratory that has examined neural activity in the hippocampus and adjacent entorhinal cortex in monkeys performing behavioral tasks including free-viewing of complex natural scenes and navigation in a virtual environment. These data have suggested that spatial representations including place cells, grid cells, border cells, and direction-selective cells can be identified in the primate hippocampal formation even in the absence of physical movement through an environment. I will also discuss new research involving chronic, large-scale recordings throughout the primate brain and other areas of opportunity for future research to further our understanding of the function of the hippocampal formation and the nature of the cognitive map.
Goker Erdogan, BCS Graduate Student, Advisors: Robbie Jacobs, Jiebo Luo, University of Rochester
Shape is a fundamental property of physical objects. It provides crucial information for various critical behaviors from object recognition to motor planning. The fundamental question here for cognitive science is to understand object shape perception, i.e., how our brains extract shape information from sensory stimuli and make use of it. In other words, we want to understand the representations and algorithms our brains use to achieve successful shape perception. This thesis reports a computational theory of shape perception that uses modality-independent, part-based, 3D, object-centered shape representations and frames shape perception as Bayesian inference over such representations. In a series of behavioral, neuroimaging and computational studies reported in the following chapters, we test various aspects of this proposed theory and show that it provides a promising approach to understanding shape perception.
Kristina Nielsen, Johns Hopkins
Little is currently known about the development of higher order visual functions at the neural level, in part because of limitations in existing animal models. Over recent years, ferrets have become a major animal model for the development of visual cortex. So far, however, most studies in ferrets have exclusively focused on the development of early visual stages up to primary visual cortex (V1). My lab has begun to investigate the development of higher visual areas in the ferret, with a particular emphasis on the development of complex motion processing. In this talk, I will cover our recent experiments on motion integration in young and adult ferrets. First, we used behavioral experiments to test whether ferrets can perceive global motion in stimuli that require motion integration. To this end, we trained ferrets to discriminate between leftward and rightward moving random dot kinematograms (RDK), in which the percentage of dots moving in the global direction can be systematically varied. Ferrets were able to perform this task, with performance levels systematically varying with the RDK coherence as expected. Second, using extracellular recordings we were able to demonstrate that area PSS in ferret visual cortex shows signatures of higher order motion processing similar to monkey area MT. Most importantly, PSS neurons appear to integrate local motion signals. In these experiments, we tested the responses of PSS neurons to plaid patterns, which are constructed by superimposing two gratings drifting in different directions. Perceptually, plaid patterns appear to be drifting in a third, intermediate direction. As in monkey MT, a fraction of ferret PSS neurons (15%) responded to the motion of the plaid, not the motion of the individual gratings, consistent with integration of local motion signals. Ferret V1 neurons, in contrast, responded to the motion of the individual components, not the integrated plaid motion. 
Third, I will present data on the development of simple and complex response properties in PSS. Lastly, I will describe ongoing efforts to investigate the joint development of V1 (which provides input to PSS) and PSS using two-photon microscopy.
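The coherence manipulation described above can be made concrete with a small sketch. This is not the lab's stimulus code; the function name, parameters, and dot counts are invented for illustration. The idea is simply that a `coherence` fraction of dots shares the global direction while the rest move in random directions:

```python
import numpy as np

def rdk_directions(n_dots, coherence, global_dir, seed=None):
    """Assign a motion direction (radians) to each dot of a random dot
    kinematogram (RDK): a `coherence` fraction of dots moves in
    `global_dir`; the remainder move in uniformly random directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)  # noise dots
    n_signal = int(round(coherence * n_dots))
    dirs[:n_signal] = global_dir                       # signal dots
    rng.shuffle(dirs)
    return dirs

# A 40%-coherence rightward (0 rad) stimulus; varying `coherence`
# titrates the difficulty of the left/right discrimination task.
dirs = rdk_directions(100, 0.40, 0.0, seed=1)
```

Sweeping `coherence` from near 0 to 1 traces out the psychometric function used to assess the ferrets' global-motion sensitivity.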
Miguel Eckstein, UC Santa Barbara
Shared BCS/CVS Boynton Colloquium
When viewing a human face, people first look towards the eyes. A prominent idea holds that these fixation patterns arise solely from social norms. Here, I propose that this behavior can instead be explained as an adaptive brain strategy to learn eye movement plans that optimize the rapid extraction of visual information for evolutionarily important perceptual tasks. I show that humans move their eyes to points of fixation that maximize perceptual performance in determining the identity, gender, and emotional state of a face. These initial optimal points of fixation, which vary moderately across tasks, are correctly predicted by a foveated Bayesian ideal observer (FIO) that takes into account the task and integrates information optimally across the face, but is constrained by the decrease in resolution and sensitivity from the fovea towards the visual periphery. A model that disregards the foveated nature of the visual system and makes eye movements either to the regions/features with the highest discriminative information or to the center of the face fails to predict the human fixations. The preferred points of initial fixation are similar across cultural groups (East Asians vs. Caucasians). However, there are individual differences: a majority of observers (~85%) look just below the eyes, while a minority (~15%) look closer to the tip of the nose and below. These systematic differences in initial points of fixation persist over time and correspond to individual variations in the points of fixation that maximize perceptual performance. Finally, observers have difficulty changing their eye movement plans when confronted with unusual faces or simulated scotomas that render their over-practiced preferred points of fixation suboptimal.
Together, these results illustrate how the brain optimizes initial eye movements to rapidly extract information from faces based on the statistical distribution of discriminatory information, general properties of the human visual system and individual specific neural characteristics. We propose that the ingrained nature of these highly practiced motor programs might suggest a domain specific neural representation of learned oculomotor plans.
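The core logic of a foveated ideal observer can be sketched in a toy 1-D form. All numbers here (feature positions, d' values, the linear sensitivity falloff) are invented for illustration and are not the FIO's actual parameters; the sketch only shows how pooling information across features, each attenuated by its eccentricity from fixation, yields an optimal fixation point:

```python
import numpy as np

# Hypothetical 1-D face: features at vertical positions, each carrying
# some discriminative information (d' available when fixated directly).
feature_pos  = np.array([0.0, 0.0, 3.0, 5.0])   # two eyes, nose, mouth
feature_info = np.array([1.0, 1.0, 0.4, 0.6])   # assumed d' at the fovea

def combined_dprime(fixation):
    """Optimal pooling across independent features: squared d' values add.
    Each feature's d' is scaled down with eccentricity from fixation
    (assumed linear falloff, floored at zero)."""
    ecc = np.abs(feature_pos - fixation)
    sensitivity = np.clip(1.0 - 0.15 * ecc, 0.0, 1.0)
    return np.sqrt(np.sum((feature_info * sensitivity) ** 2))

# Exhaustive search over candidate fixations along the face.
candidates = np.linspace(0.0, 5.0, 501)
best = candidates[np.argmax([combined_dprime(f) for f in candidates])]
```

With these made-up numbers the optimum is pulled toward the most informative features (the eyes); a non-foveated model (constant `sensitivity`) would be indifferent to fixation position, which is why it fails to predict human behavior.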
Charlie Granger, Graduate Student, University of Rochester
The retinal pigment epithelium is a monolayer of cells that forms part of the blood-retinal barrier between the neural retina and the choriocapillaris, and performs several functions essential to maintaining the health and function of the retina. Due to its involvement in some retinal diseases, it is desirable to image the retinal pigment epithelium in the living eye to detect or monitor such diseases. This has been accomplished in some commercial clinical instruments by imaging the fluorescence of molecules within retinal pigment epithelial cells, though with limited resolution due to the aberrations of the eye. Short wavelength autofluorescence imaging of lipofuscin has previously been translated to the adaptive optics scanning light ophthalmoscope to allow imaging of the retinal pigment epithelium at the level of single cells. Recently, we have imaged the same cellular mosaic by implementing infrared autofluorescence, which is thought to originate from melanin. Our goals are to develop and utilize these modalities to learn about the retinal pigment epithelium in the living eye, by analyzing the cellular and sub-cellular structure and fluorescence signal. In this talk I will discuss and compare recent results and analysis from each modality, as well as future research directions.
Kimberly Schauder, University of Rochester
My research is guided by two overarching questions: 1) How do individuals with ASD process information differently than typically developing individuals? and 2) How do these information processing differences influence their experience and interaction with the world? In this talk, I will discuss a series of four studies that utilize a vision science approach to better understand neural, perceptual, and cognitive functioning in ASD. Findings across the first two studies suggest that individuals with ASD may have basic deficits in key neural mechanisms, specifically larger receptive field size and increased internal noise. The third study tests the prediction hypothesis of ASD by investigating individuals' abilities to accurately predict the final location of a moving object. We found no evidence of a prediction deficit, which was confirmed across several different analyses. Finally, the fourth study investigates a critical moment of face identification, the first look to a face. I will present findings from a replication study in neurotypical adults, explain our extensive process for adapting the task for use in adolescents with and without ASD, and show preliminary findings demonstrating feasibility in our population of interest.
Farran Briggs, Geisel School of Medicine at Dartmouth
The overarching goal of my research is to understand how visual information is encoded by individual neurons and neuronal circuits. In my talk, I will describe two projects that showcase the different research programs that are ongoing in my lab. The goals of the first project are to understand the neuronal mechanisms of attention. We record from multiple neurons simultaneously spanning the visual thalamus and primary visual cortex in alert and behaving monkeys performing an attention-demanding task to understand how attention alters communication in neuronal circuits and whether attentional modulation of neuronal activity can be predicted by neuronal feature selectivity. The goals of the second project are to understand the functional contribution of corticogeniculate feedback to vision. Based on physiological and morphological evidence, we have demonstrated that corticogeniculate feedback is organized into parallel streams that align with the feedforward parallel processing streams. More recently, we have used optogenetics to manipulate the activity of corticogeniculate neurons selectively and we observe striking effects of corticogeniculate feedback on the timing and precision of thalamic responses to visual inputs. Together, my research highlights the importance of probing visual function and cognitive influences on vision at the granular level in order to gain a more mechanistic understanding of how visual information is encoded in the thalamus and cortex.
Jose Sahel, University of Pittsburgh
Inherited and age-related retinal degenerative diseases are a major cause of untreatable blindness due to the loss of photoreceptors. In all conditions where rods are destroyed, cones degenerate secondarily. As cones underlie all visual function in lighted environments, cone rescue is of crucial importance for maintaining central vision. We discovered rod-derived cone viability factor (RdCVF) [1, 2], a thioredoxin secreted by rod photoreceptors that induces cone survival and prevents the loss of cone photoreceptor function. Investigating the mechanisms of these protective effects, we have demonstrated that RdCVF acts through Basigin-1 (a transmembrane protein expressed specifically in photoreceptors) and GLUT1 (a member of the glucose transporter family). Our recent investigations provide evidence that while RdCVF protects cones, its long form, RdCVFL, is involved in defense mechanisms against light-induced oxidative injury to rod and cone photoreceptors; a therapy aimed at preventing secondary cone degeneration should therefore be pursued using both RdCVF and RdCVFL. In advanced stages of retinal degenerative disease, when cone photoreceptor integrity is compromised to the point that the cones no longer express the cell-surface receptor for RdCVF, administration of RdCVF will be without benefit. In these cases, optogenetics can make possible the conversion of different retinal cells into "artificial photoreceptors", offering prospects for vision restoration in a mutation-independent manner [5, 6].