Jelena Jovancevic, University of Rochester (Advisor: Mary Hayhoe)
Dealing with natural, complex scenes in everyday behavior, where one is surrounded by a variety of potentially relevant stimuli, poses an important problem for our visual system. Given the constraints that attentional and working memory limitations place on acquiring and retaining information, how does the visual system select the appropriate information when it is needed in the context of visually guided behavior? Though an incomplete measure, overt fixations carry much information about the current attentional state and are a revealing indicator of this selection. What controls the allocation of gaze and attention in natural environments? Traditionally, attention was thought to be attracted exogenously by the properties of the stimulus. Studies using 2D experimental displays or passive scene viewing showed that properties such as contrast or chromatic salience can explain some regularities in fixation patterns, but these account for only a modest proportion of the variance. Furthermore, the experimental contexts examined may not reflect the challenges of natural visually guided behavior. The complexity of the environment and of the ongoing behavior makes it necessary to study natural behavior when investigating the control of gaze. Recent work in natural tasks has demonstrated that the observer's cognitive goals play a critical role in the distribution of gaze during ongoing natural behavior.
The goal of this thesis is to understand the mechanisms that control the deployment of gaze in natural environments. Though fixation patterns in natural behavior are largely determined by the momentary task, it is not clear how effective top-down control can be in dynamic environments, given the difficulty of dealing with unexpected events. To address this problem we studied gaze patterns in both real and virtual walking environments in which subjects were occasionally exposed to potentially colliding pedestrians. Our results indicate that potential collisions do not automatically attract attention and are usually detected by active search, rather than by a reaction to looming. If a collider is detected, however, fixations on all pedestrians increase over the subsequent few seconds, indicating that subjects learn the structure and dynamic properties of the world in order to fixate critical regions at the right time. We also investigated whether the addition of another perceptually demanding task interferes with the detection of potential collisions. When subjects walked while also following a leader pedestrian, detection of colliders decreased significantly, indicating that subjects learn how to allocate attention and gaze to satisfy competing demands. In a real environment we investigated whether manipulating the collision probability of pedestrians with predetermined roles is accompanied by a corresponding change in gaze allocation. We demonstrated that fixation patterns adjust very quickly to changes in the probabilistic structure of the environment that signal different priorities for gaze allocation. Based on these results, it appears that observers learn to represent enough of the structure of the visual environment to guide eye movements proactively, in anticipation of events that are likely to occur in the scene. To investigate the importance of behavioral relevance we compared fixation behavior when walkers stopped instead of moving onto a collision path. Other than a reduction in fixation probabilities of about 20%, the pattern remained the same. This supports the idea that gaze behavior takes into account the risk (or reward) value of particular information, and it is consistent both with reinforcement learning models of gaze and with neurophysiological findings on the importance of reward. Finally, we compared performance in real and virtual environments in order to evaluate the validity of the latter. The results from virtual walking corroborate those from the real walking experiment, validating virtual environments as useful paradigms for the study of natural behavior.
Keith Schneider, RCBI, University of Rochester
This is a practice job talk.
The human visual system contains multiple streams of information that originate in distinct classes of retinal ganglion cells. These information streams remain disjoint in the subcortex, innervating, for example, the magnocellular and parvocellular layers of the lateral geniculate nucleus (LGN) and the superior colliculus, and become intermixed in the visual cortex. High-resolution functional magnetic resonance imaging (fMRI) of the subcortex is the only way to directly access these information streams in humans, and it can reveal significant structural detail. My collaborators and I have used fMRI to detect phenomena in the subcortex, such as binocular rivalry and, until recently, attention, that have not been observed there with primate electrophysiology. The subcortical nuclei generally reflect the properties of the visual cortex, but with different gains and tunings. For example, while the LGN exhibits a preference for attention to particular features, the superior colliculus does not; instead it is activated during transitions between attended features. For spatial attention, both the LGN and the superior colliculus are modulated, but the superior colliculus significantly more so. For both featural and spatial attention, the activity of the pulvinar nuclei is intermediate between that of the LGN and the superior colliculus. Examining the subcortical nuclei is an important step in understanding the function and architecture of the visual system. In addition, I will discuss clinical applications of this research for understanding dyslexia and congenital stationary night blindness.
Stephen Lisberger, University of California San Francisco
Co-sponsored by Neurobiology & Anatomy/Neuroscience
How are sensory inputs represented, decoded, and transformed into commands for movement? I will address this general issue by analysis of trial-by-trial variation in neural and behavioral measures during smooth pursuit eye movements. I will present evidence that most of the variation in the initiation of pursuit arises from noise in sensory representations. I also will show how the sensory-motor system can be thought of in terms of noise reduction and noise addition at each level of processing, and how this approach can be used to understand the transformation from vision to action.
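As a worked illustration of this noise framework (notation assumed here, not taken from the talk), the variability at each processing stage can be written as a gain applied to the inherited variance plus locally added noise:

```latex
% Variance propagation through one processing level (assumed notation):
\[
  \sigma_{n}^{2} \;=\; g_{n}^{2}\,\sigma_{n-1}^{2} \;+\; \eta_{n}^{2},
\]
% where sigma_{n-1}^2 is the variance inherited from the previous level,
% g_n the gain of level n, and eta_n^2 the noise added at that level.
```

In these terms, a level reduces noise whenever the propagated-plus-added variance falls below the inherited variance, i.e. g_n^2 sigma_{n-1}^2 + eta_n^2 < sigma_{n-1}^2.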
Alen Hajnal, University of Louisville
Surface slant and distance are two properties of space with direct relevance to action. In two experiments I demonstrate that visual space in the dark has an intrinsic bias that represents the horizontal ground plane as slanted. Furthermore, the symmetry of the visual space around eye-level is preserved in the accuracy of perceived spatial locations, but slightly compromised in terms of the variability of perceptual judgments. The intrinsic bias of the visual system is present in full cue conditions as well, as exhibited by various explicit and implicit measures of slant perception. Implications for the perception and actualization of possibilities for action (affordances) are discussed.
James Elder, York University
Humans have a remarkable ability to rapidly group and organize image data into coherent representations reflecting the structure of the visual scene. However, current computer vision algorithms are, by comparison, relatively primitive in their performance. Key issues include the combinatorial complexity of the problem and the difficulty of capturing global constraints and combining them with local cues. In this work we develop a coarse-to-fine Bayesian algorithm that addresses these issues.
In our approach, candidate contours are extracted at a coarse scale and then used to generate spatial priors on the location of possible contours at finer scales. In this way, a rough estimate of the shape of an object is progressively refined. The coarse estimate provides robustness to texture and clutter, while the refinement process allows for the extraction of detailed shape information. The grouping algorithm is probabilistic and uses multiple grouping cues derived from natural scene statistics. We present a quantitative evaluation of grouping performance on natural images and show that the multi-scale approach outperforms single-scale contour extraction algorithms. We suggest that the substantial feedback connections known to exist in the ventral stream of the visual cortex may support an analogous refinement of perceptual representations in the human brain.
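A minimal sketch of the coarse-to-fine idea follows (a hypothetical implementation for illustration; the actual algorithm uses learned probabilistic grouping cues rather than raw gradient magnitude):

```python
# Sketch: a coarse contour estimate is dilated into a spatial prior and
# combined with local edge likelihoods at each finer scale.
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def refine_contour(image, coarse_mask, n_scales=3, sigma0=8.0):
    """Progressively refine a coarse boolean contour map.

    coarse_mask: candidate contour pixels at the coarsest scale; at each
    finer scale it becomes a spatial prior on where contours may lie.
    """
    mask = coarse_mask
    for level in range(n_scales):
        sigma = sigma0 / (2 ** level)               # finer blur each level
        smoothed = gaussian_filter(image, sigma)
        gy, gx = np.gradient(smoothed)
        edge_likelihood = np.hypot(gx, gy)          # local edge strength
        edge_likelihood /= edge_likelihood.max() + 1e-9
        prior = binary_dilation(mask, iterations=3) # "near the coarse contour"
        posterior = edge_likelihood * prior         # prior x likelihood
        mask = posterior > 0.5 * posterior.max()    # refined contour estimate
    return mask
```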
C. Shawn Green, University of Rochester (Advisor: Daphne Bavelier)
Action video game players (VGPs) have been shown to outperform their non-video-game-playing (NVGP) peers on a number of sensory/cognitive measures. In tasks that require accurate responses to quickly presented visual stimuli, VGPs typically exhibit higher levels of accuracy than NVGPs. In particular, VGPs have demonstrated enhancements in a number of tasks thought to tap reasonably independent aspects of visual attention (spatial distribution and resolution, temporal characteristics, capacity, etc.). In tasks that require speeded responses, the VGP enhancement is observed as a large decrease in reaction time (RT) compared to NVGPs (accuracy is typically equivalent in the two groups). Here we put forward the hypothesis that a single mechanistic explanation, an increase in the rate of sensory integration in VGPs, can account for the entirety of the data, thus bridging the gap between the accuracy and RT literatures. To test this hypothesis, two sensory integration tasks were employed, a standard motion coherence paradigm and a novel auditory localization task, which, in combination with a model developed by Palmer et al. (2005), allow for a more explicit test of the relative contributions of sensory integration rate, criteria, and motor execution in generating the differences observed between VGPs and NVGPs. In both the motion and auditory tasks, VGPs demonstrated a large reduction in RT compared to NVGPs with equivalent accuracy. This pattern was well captured by the model with an increase in the rate of information accrual and a concurrent decrease in criteria in the VGPs. Several follow-up experiments provide further support for the hypothesis that VGPs acquire sensory information more rapidly than NVGPs. Importantly, similar effects can be induced in NVGPs through extensive action video game training. Finally, to examine how these changes may be implemented at the neural level, a model by Ma and colleagues (2007) was utilized, with the primary difference between VGPs and NVGPs being an increase in the strength of feed-forward projections between sensory and integration areas in the VGP group.
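To make the logic of the model-based test concrete, here is a hedged sketch of a generic drift-diffusion simulation (illustrative parameters, not the fitted Palmer et al. model) showing how a higher rate of information accrual combined with a lower decision criterion yields faster responses at comparable accuracy:

```python
# Sketch: evidence accumulates noisily to a criterion (bound); drift rate
# stands in for the rate of sensory integration.
import numpy as np

def simulate_ddm(drift, bound, noise=1.0, dt=0.001, t_residual=0.3,
                 n_trials=2000, seed=0):
    """Return (mean RT, accuracy) for a simple drift-diffusion process."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:                        # integrate to criterion
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t_residual)                   # add non-decision time
        correct.append(x > 0)                        # upper bound = correct
    return np.mean(rts), np.mean(correct)

# NVGP-like: slower accrual, higher criterion
print(simulate_ddm(drift=1.0, bound=1.0))
# VGP-like: faster accrual, lower criterion -> much faster RT, similar accuracy
print(simulate_ddm(drift=1.6, bound=0.8))
```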
Nathan Rosecrans, Center for Visual Science, University of Rochester
Sensory coding in the primary visual cortex is understood to occur primarily through the selective activation of neurons that encode specific local features of the visual environment. However, during visual stimulation, visual cortical activity exhibits strong spatial and temporal correlations that are not derived from the visual input. Rather, these correlations reflect coordinated activity that is produced spontaneously by the cortical network and that has been shown to strongly shape cortical responses to sensory stimuli. In awake animals the fine-scale spatio-temporal structure of this activity remains unclear. Using a 3.2 mm linear array of 16 multi-unit electrodes, we examined spontaneous neural activity within the primary visual cortex of awake ferrets in complete darkness. We found spatial patterns of neural firing that reflected coordinated activity across neighboring cortical columns. During the majority of spontaneous activity, these local regions of elevated firing fluctuated rapidly, persisting for 50 to 100 ms. Cortical activity during this dynamic behavior was dominated by synchronous alpha-band (8-12 Hz) oscillations. Interspersed within these periods were episodes in which the local patterns of activity became stabilized within specific cortical columns, lasting from 500 ms up to 5 s. In contrast to the periods of spatial pattern instability, these periods of stabilization were marked by little power or synchrony within the alpha band, and by increased firing rate and power in the gamma band (26-80 Hz). These findings establish a link between the large-scale oscillatory behavior of cortical networks and the spatial distribution of activity within local cortical columns.
Bruce Cumming, National Eye Institute
Simultaneous use of single-unit recording and threshold psychophysics has revealed correlations between perceptual choice and firing rate that cannot be explained by the visual stimulus (choice probability, CP). Quantitative modeling studies have explained the observed magnitudes with a bottom-up scheme in which CPs reflect the effect of random fluctuations in firing rate on choice. To test this interpretation further, we measured CPs using a stimulus that simultaneously allowed white noise analysis to infer how fluctuations in stimulus content affected neuronal activity. Two monkeys performed a disparity identification task while we recorded the activity of disparity-selective neurons in V2. The stimulus was a random dot stereogram in which the disparity was chosen at random (from a discrete distribution) for each 10 ms video frame. Signal was added by increasing the probability with which one disparity was presented on a given frame. Calculating the mean response following one video frame, for each disparity, yields disparity response functions. These were calculated separately according to the choice reported at the end of the trial. Trials (with no added signal) on which animals reported the preferred disparity had higher mean firing rates, as expected from earlier observations on CP. The disparity response functions reveal that this mainly reflects an increase in the gain of the neuronal response to disparity on trials where the animal chooses that neuron's preferred disparity. We have been unable to generate such large gain changes in (bottom-up) simulations in which the pooled response of a neuronal population determines choice. These gain changes resemble the effects of spatial or feature-based attention reported by others. This suggests that a significant component of the CP in this task reflects a top-down process similar to feature-based attention.
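The core of the choice-conditioned analysis can be sketched as follows (an assumed implementation of the reverse correlation described above, for illustration only):

```python
# Sketch: reverse-correlate responses against the per-frame disparity
# sequence, separately for each behavioral choice, to obtain
# choice-conditioned disparity response functions.
import numpy as np

def choice_conditioned_kernels(frame_disps, rates, choices, disp_values):
    """frame_disps: (trials, frames) disparity shown on each 10 ms frame.
    rates: (trials, frames) neuronal response following each frame.
    choices: (trials,) disparity category reported at the end of each trial."""
    kernels = {}
    for ch in np.unique(choices):
        trial_mask = choices == ch
        d = frame_disps[trial_mask].ravel()
        r = rates[trial_mask].ravel()
        # Mean response following frames of each disparity value; a gain
        # difference between choices appears as a scaled tuning curve.
        kernels[ch] = np.array([r[d == v].mean() for v in disp_values])
    return kernels
```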
Donald Hood, Columbia University
Glaucoma, the leading cause of preventable blindness, produces a loss of vision by damaging retinal ganglion cell (RGC) axons. What is the relationship between the loss of vision and the loss of RGCs? It is commonly believed that structural damage (e.g. loss of RGC axons) precedes functional damage (i.e. loss of behavioral sensitivity). The loss of RGC axons can now be measured in vivo with optical coherence tomography (OCT). Data from patients and controls indicate that behavioral and structural damage progress at the same rate. In fact, a simple linear model relates the local loss of RGC axons, as measured with OCT, to local losses in behavioral sensitivity. Structural damage appears to precede functional damage only under conditions where the measurement of structural damage is less variable than the measurement of functional damage.
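One way to write such a linear structure-function model (notation assumed here, following common formulations, not quoted from the talk):

```latex
% Linear model relating OCT-measured thickness to behavioral sensitivity:
\[
  \bar{R} \;=\; s_{0}\,T \;+\; b,
  \qquad
  T \;=\; 10^{-D/10},
\]
% where \bar{R} is local retinal nerve fiber layer thickness from OCT,
% T the local sensitivity relative to normal in linear units, D the
% sensitivity loss in dB from perimetry, s_0 the axonal component of
% normal thickness, and b the residual (non-neural) thickness.
```

On this account, the apparent lead of structure over function reflects measurement variability rather than a true temporal lag.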
Peter Bex, Harvard University
Much of our understanding of visual processing comes from experiments involving foveally viewed, isolated sinusoidal gratings presented briefly at barely visible contrasts. Compared with such laboratory conditions, natural vision is concerned with complex images that contain broad distributions of spatio-temporal structure. I describe a range of psychophysical and analytical techniques that examine the information in natural scenes that the human visual system selects to guide behaviourally relevant decisions.
Jessica Morgan, University of Rochester (Advisor: David Williams)
The retinal pigment epithelium (RPE) is an important layer of the retina because its cells provide metabolic support to the photoreceptors. Techniques to image the RPE layer include autofluorescence imaging with a scanning laser ophthalmoscope (SLO); however, previous studies were unable to resolve single RPE cells. This thesis describes the technique of combining autofluorescence, SLO, adaptive optics (AO), and dual-wavelength simultaneous imaging and registration to visualize the individual cells of the RPE mosaic in human and primate retina for the first time in vivo.
After imaging the RPE mosaic non-invasively, the cell layer's structure and regularity were characterized using quantitative metrics of cell density, spacing, and nearest neighbor distances. The RPE mosaic was compared to the cone mosaic, and RPE imaging methods were confirmed using histology.
The ability to image the RPE mosaic led to the discovery of a novel retinal change following light exposure: 568 nm exposures caused an immediate reduction in autofluorescence followed by either full recovery or permanent damage in the RPE layer. A safety study was conducted to determine the range of exposure irradiances that caused permanent damage or transient autofluorescence reductions. Additionally, the threshold exposure causing autofluorescence reduction was determined and reciprocity of radiant exposure was confirmed. Light exposures delivered by the AOSLO were not significantly different from those delivered by a uniform source. As all exposures tested were near or below the permissible light levels of current safety standards, this thesis provides evidence that those standards need to be revised.
Finally, with the retinal damage and autofluorescence reduction thresholds, the methods of RPE imaging were modified to allow successful imaging of the individual cells in the RPE mosaic while still ensuring retinal safety. This thesis has provided a highly sensitive method for studying the in vivo morphology of individual RPE cells in normal, diseased, and damaged retinas. The methods presented here also will allow longitudinal studies for tracking disease progression and assessing treatment efficacy in human patients and animal models of retinal diseases affecting the RPE.
Pawan Sinha, Massachusetts Institute of Technology
Learning to integrate information is a key task a child's brain has to perform during the normal course of development. In essence, this developmental process transforms the sensorium from an amorphous collection of primitive attributes to one in which these attributes are integrated into cliques corresponding to distinct objects. In order to study this process experimentally, we have recently launched Project Prakash--a humanitarian/scientific initiative that helps provide sight to children suffering from treatable congenital blindness, and characterizes their subsequent visual development. In this talk, I shall describe a case study from this project and also outline a computational model motivated by the experimental results.
Zoe Kourtzi, University of Birmingham, UK
Successful actions and interactions in the complex environments we inhabit entail making fast and optimal perceptual decisions. Extracting the key features from our sensory experiences, assigning them to meaningful categories and deciding how to interpret them is a computationally challenging task that is far from understood. Accumulating evidence suggests that the brain is optimized to solve this challenge by combining sensory information and previous knowledge about the environment acquired through evolution, development and everyday experience. We combine psychophysics, fMRI and advanced mathematical approaches to investigate the neural mechanisms that mediate experience-dependent plasticity in the human brain. Our studies show that the human brain learns to exploit regularities in the environment and flexible rules of organization of the physical input. Our findings suggest that experience plays an important role in the adaptive optimization of visual functions by shaping neural processing across cortical networks in the human brain.
Andrew Welchman, University of Birmingham, UK
Estimating the depth structure of the environment is a principal function of the visual system, enabling many key computations, such as segmentation, object recognition, material perception and the guidance of movements. The brain exploits a range of depth cues to estimate depth, combining information from shading and shadows to linear perspective, motion and binocular disparity. Despite the importance of this process, we still know relatively little about the cortical processing of depth cues and their synthesis. Our recent fMRI work aims to understand the functional role of different cortical areas in the processing of perceptually-useful depth information. I will suggest an alternative view of the functional roles of the dorsal and ventral streams in the processing of depth information, based on the idea that they perform computations with different goals.
Xei Biao, Postdoc, University of Rochester (Knill lab)
In daily life, we regularly judge the color appearance of three-dimensional objects, yet most previous studies of color perception concern simple stimuli. We conducted two experiments that explored the color perception of objects in complex scenes. The first experiment examined how people perceive the color of objects across variations in surface gloss. Observers adjusted the color appearance of a matte sphere to match that of a test sphere. Across conditions we varied the body color and glossiness of the test sphere. The data indicate that observers do not simply match the spatial average of the light reflected from the test sphere. Rather, the visual system compensates for the physical effect of varying the gloss, so that appearance is stabilized relative to what the spatial average would predict. The second experiment examined how people perceive the color of different parts of an object. We replaced the test sphere with a soccer ball that had one colored hexagonal face (the test patch). Observers were asked to adjust the color appearance of a match sphere to the test patch. The test patch could be located at either an upper or a lower location on the soccer ball. In addition, we varied the surface gloss of the entire soccer ball (including the test patch). The data show an effect of test patch location on observers' color matches, but this effect is small compared with the physical change in the average light reflected from the test patch across the two locations. In addition, the effect of glossy highlights on the color appearance of the test patch was consistent with the results from Experiment 1.
David Sliney, Consulting Medical Physicist
Humans have evolved under a constant bath of "dangerous" ultraviolet rays, and staring at the sun can produce a permanent retinal injury known as eclipse blindness. Nevertheless, ocular tissues are surprisingly well protected by anatomical, physiological and behavioral factors. Sunglasses may actually increase environmental risks to the eye, and light from ophthalmic instruments may also pose a serious risk of injury. Exposure limits for the cornea, lens and retina have been developed to protect the eye from artificial light sources and lasers, but it is important to recognize the underlying assumptions that form the basis of these exposure limits, and when adjustments are required--as in the case of ophthalmic instruments.
Peter De Weerd, University of Maastricht
Perceptual filling-in in the visual domain can be defined as the perception of a surface feature in a region of visual space where that feature is physically absent. This illusion can be experienced during 'Troxler fading', during which a peripherally presented stimulus becomes perceptually replaced with the surrounding background after a period of retinal stabilization. There is an ongoing, unresolved debate about the nature of the neural process underlying this visual illusion and its relevance for perception. Evidence will be presented from human psychophysics and from single-unit recordings in monkeys suggesting that neural interpolation processes in early visual areas might contribute both to the illusion of perceptual filling-in and to the perception of the interior of real surfaces. Furthermore, human fMRI evidence (including some recently published studies from other authors, as well as unpublished material from our group) that can contribute to the debate will be discussed.
Hal S. Greenwald, University of Rochester (Advisor: David Knill)
We explored how humans use visual information to compute estimates of three-dimensional (3D) surface orientation that can be used to guide motor behavior. First, we investigated how the visual system integrates monocular and binocular information for two different natural tasks and found that binocular information influenced perceived 3D orientation more when subjects grasped a coin than when they placed an object on the same coin, regardless of whether the tasks were performed separately by different subjects or by the same subject in interleaved sessions. We also measured processing speeds since these can affect cue integration, but there were no significant differences between tasks. We concluded that how one uses visual cues for motor control depends on the information demands of the task being performed, whereas how quickly the information is processed appears to be task invariant. Second, we assessed the usefulness of stereopsis across the visual field. Binocular information had a smaller influence relative to monocular information on 3D orientation estimates for stimuli at larger retinal eccentricities and distances from the horopter, where stereoacuity is worse than monocular acuity. The results were as predicted by a Bayesian integration scheme in which the cues were weighted according to their relative reliabilities across the visual field. We concluded that stereopsis is of limited use in the periphery and away from the horopter because monocular cues are more reliable in these regions. Third, we evaluated the potential role of orientation disparity as a binocular cue to 3D orientation by simulating a population of binocular visual cortical neurons tuned to orientation disparity and measuring the amount of Fisher information contained in the activity patterns. We concluded that orientation disparity provides an efficient source of information about 3D orientation and that it is plausible that the visual system could have mechanisms that are sensitive to it, although it would be most useful when combined with estimates from position disparity gradients and monocular perspective cues.
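Two standard formulas make the quantitative framework concrete (textbook formulations with assumed notation, not equations quoted from the thesis): the reliability-weighted combination behind the Bayesian scheme of the second study, and the Fisher information of an independent Poisson population as simulated in the third study.

```latex
% Reliability-weighted cue combination (binocular + monocular):
\[
  \hat{S} = w_b \hat{S}_b + w_m \hat{S}_m,
  \qquad
  w_b = \frac{1/\sigma_b^2}{1/\sigma_b^2 + 1/\sigma_m^2},
  \qquad
  w_m = 1 - w_b,
\]
% Fisher information in a population of independent Poisson neurons:
\[
  I(\theta) \;=\; \sum_i \frac{f_i'(\theta)^2}{f_i(\theta)},
\]
% where \hat{S}_b, \hat{S}_m are the binocular and monocular slant
% estimates with variances \sigma_b^2, \sigma_m^2, and f_i(\theta) is the
% mean response of neuron i tuned to orientation disparity.
```

As stereoacuity degrades with eccentricity and distance from the horopter, sigma_b^2 grows and w_b falls, which is the reweighting observed in the second study.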
CVS Undergraduate Fellowship Program Poster Session: Meliora Hall, 2nd Fl., 9:00 am - 12:00 pm
CVS Picnic: Genesee Valley Park, 12:00-5:00 pm
Peter Battaglia, Department of Psychology, University of Minnesota
Perception and action are critical functions for most animals' survival, but to achieve high performance each must overcome uncertainty and ambiguity, both in perceiving the world and in planning and executing actions. The following studies examine how human behavior minimizes the negative consequences of uncertainty and ambiguity to improve performance in perceptually guided tasks.
- An object's size and distance each influence the angular size of its visual image, rendering distance estimation based on angular size alone an ambiguous task. But an "auxiliary" sensory measurement of the object's size can disambiguate the distance. We ran a psychophysical experiment to test whether humans use sensed size when making distance judgments, and found significant improvements in distance perception due to the auxiliary size sensations.
- Just as size and distance are each impossible to unambiguously estimate given angular size alone, judging the rate at which an object's size changes given only the angular size-change rate is also ambiguous. But an auxiliary distance-change rate sensation can be used to disambiguate the size-change rate. We conducted an experiment to evaluate whether humans use the sensed distance-change rate when making size-change judgments, and found significant improvements in size-change perception due to the auxiliary distance-change sensations.
- Many behavioral tasks have inherent time constraints that limit the time available for perception and action. We tested whether humans are able to adjust the time they devote to each to minimize the negative consequences of uncertainty that originates from perceptual and motor imprecision. We found humans' timing choices are consistent with the theoretically ideal actor's choices, and concluded that humans' implicit knowledge of their perceptual and motor variability is used to make these choices.
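A minimal sketch of the ideal-actor calculation in the last study (an illustrative model with assumed parameter forms, not the authors' code): perceptual variance shrinks with viewing time, motor variance grows as the remaining movement time shrinks, and the ideal actor splits the fixed trial duration to minimize total endpoint variance.

```python
import numpy as np

def endpoint_var(t_view, T=1.5, sigma_p0=2.0, k_motor=0.5):
    """Total endpoint variance for a given split of the trial duration T."""
    t_move = T - t_view
    var_perceptual = sigma_p0 ** 2 / t_view   # longer viewing -> less noise
    var_motor = (k_motor / t_move) ** 2       # faster movement -> more noise
    return var_perceptual + var_motor

# Sweep viewing times and pick the split that minimizes total variance.
ts = np.linspace(0.05, 1.45, 281)
t_opt = ts[np.argmin([endpoint_var(t) for t in ts])]
print(f"ideal viewing time: {t_opt:.2f} s of a 1.5 s trial")
```

Comparing human timing choices against this minimum is the sense in which the observed behavior was "consistent with the theoretically ideal actor's choices."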
Liping Wang, Dept of Neurosurgery, Johns Hopkins University (Postdoctoral candidate in Pasternak lab)
Working memory provides the temporal and spatial continuity between our past experience and present actions. Studies have shown that in the monkey's associative cerebral cortex, cells exhibit sustained changes in discharge while the animal retains information for a subsequent action. Single-unit activity was recorded from the hand areas of the somatosensory cortex of monkeys trained to perform a haptic delayed matching-to-sample task with objects of identical dimensions but different surface features. During the memory retention period of the task (the delay), many units showed a sustained change in firing frequency, either excitation or inhibition. These observations indicate the participation of somatosensory neurons not only in the perception but also in the short-term memory of tactile stimuli. In a recent study, we recorded from single units in SI cortex of monkeys naïve to these tasks. During unit recording, the naïve animals performed tasks identical to those of the trained monkeys but without the requirement to discriminate and memorize the sensory cues. The results indicate that a population of SI neurons in naïve monkeys responds to tactile stimuli even though retention of those stimuli is not necessary for task performance. The neuronal activity recorded in SI cortex during the delay period therefore does not necessarily represent the neuronal process of working memory.
Dennis Levi, School of Optometry, University of California Berkeley
Experience-dependent plasticity is closely linked with the development of sensory function; however, there is also growing evidence for plasticity in the adult visual system. This talk re-examines the notion of a sensitive period for the treatment of amblyopia in light of recent experimental and clinical evidence for neural plasticity. Two recently proposed methods for improving the effectiveness and efficiency of treatment are "perceptual learning" and action videogame playing. Our recent work suggests that adults with amblyopia can improve their perceptual performance through extensive practice on a challenging visual task and through videogame play. The results suggest that both perceptual learning and videogame play may be effective in improving a range of visual functions and, importantly, that the improvements may transfer to visual acuity.
Constantin Rothkopf, University of Rochester (Advisors: Mary Hayhoe and Dana Ballard)
This dissertation explores the problem of human visuomotor control in natural goal-directed behavior. To this end, it studies how humans walk through a virtual environment while approaching and avoiding objects. The analysis of the collected behavioral data shows the limitations of current models of human gaze selection and quantifies the influence of task on gaze selection. This analysis furthermore demonstrates that the statistics of the image sequence at the point of gaze are strongly task dependent, a fact that has important consequences for models of representational learning in the brain. If human vision is understood as an active process that has to learn how to select relevant information in time, then algorithms for the solution of complex visuomotor control tasks have to be developed. To this end, this thesis introduces a credit assignment algorithm for modular reinforcement learning that allows a virtual agent to solve the same walking tasks. The main feature of this algorithm is that it allows multiple component tasks to determine their respective contributions to a single global reward signal, a particularly important aspect in biological systems.
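A hedged sketch of what such a credit-assignment scheme can look like (an assumed form for illustration; consult the thesis for the actual update rules): each module sees only its own state, and the single global reward is apportioned among modules by driving the sum of the estimated per-module shares toward the observed reward.

```python
import numpy as np

class Module:
    """One component task: local state, local Q-table, estimated reward share."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
        self.Q = np.zeros((n_states, n_actions))
        self.r_hat = np.zeros(n_states)   # estimated share of the global reward
        self.alpha, self.gamma = alpha, gamma

def update(modules, states, next_states, action, global_reward, beta=0.1):
    """One learning step: split the single global reward among the modules,
    then let each module do an ordinary Q-learning update on its share."""
    shares = np.array([m.r_hat[s] for m, s in zip(modules, states)])
    residual = global_reward - shares.sum()
    for m, s, s2 in zip(modules, states, next_states):
        m.r_hat[s] += beta * residual / len(modules)   # apportion the residual
        target = m.r_hat[s] + m.gamma * m.Q[s2].max()
        m.Q[s, action] += m.alpha * (target - m.Q[s, action])

# Action selection sums module values: a* = argmax_a sum_i Q_i(s_i, a)
```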
Finally, in order to validate the proposed reinforcement learning algorithm as a model of human visuomotor behavior, an inverse reinforcement learning method is proposed that extracts the relative reward weights a behaving human assigns to the component tasks. Applying this method to the collected human data shows that human walking behavior can indeed be described well by a reinforcement learning model that assumes a modular solution of the individual component tasks.
Brett Fajen, Cognitive Science, Rensselaer Polytechnic Institute
Over the course of a lifetime, people acquire numerous perceptual-motor skills, many of which involve a tight coupling between continuously available information in optic flow and continuously controlled movements of the body. People learn to steer bicycles, catch fly balls, drive automobiles, pilot aircraft, and so on. It is well established that behavior in these kinds of tasks can be characterized in terms of mappings (or laws of control) from information variables to movements of the body (or an input device, as in the case of vehicle control). Laws of control have been proposed and tested for a variety of tasks, such as steering, braking, catching fly balls, and intercepting moving targets. However, little is known about how these mappings are acquired in the first place, and how they are updated with experience and changes in the body, environment, or task constraints. In this talk, I will present my research on how people flexibly adapt to changes in the dynamics of their bodies and the systems whose movements they control by learning novel mappings from optic flow variables to movement variables. This leads to a new view of visually guided action that emphasizes the importance of perceptual-motor learning.
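As a concrete instance of such a mapping (an illustrative law of control for braking, assumed here for exposition rather than taken from the talk): brake pressure can be adjusted in proportion to the difference between the deceleration currently required to stop at the target, which is specified optically, and the deceleration currently being produced.

```python
def required_decel(speed, distance):
    """Deceleration needed to stop exactly at the target: v^2 / (2 z)."""
    return speed ** 2 / (2.0 * distance)

def brake_adjustment(speed, distance, current_decel, gain=0.5):
    """Law of control: increase braking when required deceleration
    exceeds the deceleration currently produced, release otherwise."""
    return gain * (required_decel(speed, distance) - current_decel)
```

Perceptual-motor learning, on this view, amounts to acquiring and recalibrating the gain and form of mappings like this one as body, vehicle, or task dynamics change.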
Oh-Sang Kwan, Purdue University (postdoctoral candidate in Knill lab)
There have been two prominent models of the speed-accuracy tradeoff in goal-directed human movement: the stochastic optimized-submovement model and the minimum-variance model. Both explain the speed-accuracy tradeoff known as Fitts' law, but neither is complete. The former cannot predict the movement trajectory between the endpoints, while the latter is inconsistent with the irregular kinematic profiles often observed in human movements. I will propose a new model in which a goal-directed movement consists of two submovements. The state characterizing the transition between the submovements is optimized so that the total movement time is minimal. Simulations using the proposed model show that the optimal transition between the two submovements occurs at an early stage of the movement and is preceded by a sharp increase in acceleration. Such sharp peaks of acceleration are actually observed in psychophysical results. Furthermore, this model can account for the fact that the positional variance of a goal-directed movement is bell-shaped. I will conclude by discussing generalizations of this model to multiple submovements.
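For reference, the speed-accuracy tradeoff both models must capture is Fitts' law (standard form; a and b are empirically fitted constants):

```latex
\[
  MT \;=\; a \;+\; b\,\log_{2}\!\left(\frac{2D}{W}\right),
\]
% where MT is movement time, D the distance to the target, and W the
% target width; harder targets (larger D, smaller W) take longer.
```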
Akiyuki Anzai, University of Rochester
BCS Lunch Talk