Overview of Research Interests
Our laboratory
studies the mechanisms of visual motion processing, visuomotor control,
and spatial memory, using behavioural, neurophysiological, and
computational modelling techniques.
We are
undertaking behavioural studies to explore fundamental perceptual and
cognitive principles using virtual reality (VR) technology. This research
program explores how the brain extracts dynamic visual information
generated by object motion (e.g., an incoming car) and motion of the
visual environment experienced by an observer's motion (e.g., during
walking or driving). We are interested in how the brain uses multiple
sources of sensory information to control locomotion (e.g., visual
factors in collision avoidance), and what kinds of strategies humans use
in spatial route learning.
Our behavioural work has potential applications in a number of
fields, such as the design of better standardized tests for driver and pilot
licensing, the design of driving simulators, the design of robots and
automated navigation vehicles, and the entertainment industry (e.g., IMAX displays).
In another
line of our research (neurophysiological studies), we make electrical
recordings from single cells in the animal's brain while presenting
computer-generated visual stimuli to the animal. We are studying the neural
mechanisms underlying the processing of object motion in 3-dimensional space. This
research program has produced many discoveries, including specialized
structures in the brain that compute "time to collision" with
looming objects and structures that compute movement of an object relative
to its background. Computational models are also being developed to
account for the physiological and behavioural results.
We are also
exploring a new FM telemetry technique to record neuronal activity from
awake, behaving animals. Single unit activity will be telemetered via
a miniature head-mounted FM transmitter. This will allow neuroethological
investigations of a number of important perceptual phenomena that are
not possible in an anaesthetized preparation.
Research Setup
- Virtual Reality Setup: Human Setup; Animal Setup (in development)
- Human Psychophysics Setup
- Single Cell Recording (new lab space)
Background Information for Spatial Navigation
Research in
spatial ability has experienced increasing interest over the past few
decades, perhaps as a consequence of the extraordinary progress made in
studying the behavioural and neural mechanisms of animal navigation, most
notably the discovery of place cells in the rat hippocampus. Researchers
(e.g., O'Keefe) have demonstrated that once a rat familiarizes itself
with a particular environment, hippocampal neurons establish place
fields, such that each neuron fires only when the rat occupies a particular
location in that environment.
Although this has been an extremely successful paradigm in its own right,
implementing it in humans has proven challenging.
More recently, however, the study of spatial navigation in humans has
seen great success with the adoption of virtual reality (VR)
technologies, and it is this type of technology that we are
currently working with in our laboratory.
VR provides participants with an immersive environment in
which they can navigate, manipulate objects, and interact in real time.
Realistic VR display (through head-mounted displays), combined with
high-speed, ultra-high resolution graphics engines, makes possible a
radically different level of investigation. Not only are participants
fully immersed in their environment, unlike in traditional pen-and-paper
tasks (map viewing) or computer game-like tasks using a desktop
interface (monitor + mouse), but they can also physically move (by
walking or riding a stationary bike) to obtain vestibular and proprioceptive
feedback. Virtual reality is the best tool currently available to provide
psychologists with experimental control over three-dimensional, dynamic,
ecologically valid stimulus presentations.
Multisensory Integration in the Estimation of Distance Travelled
One of the fundamental requirements for
successful navigation through an environment is the continuous monitoring
of distance travelled. To do so, humans normally use one or a combination
of visual, proprioceptive/efferent, vestibular, and temporal cues. In the
real world, information from one sensory modality is normally congruent
with information from other modalities; hence, studying the nature of
sensory interactions is often difficult.
In order to decouple the natural covariation
between different sensory cues, we use virtual reality technology to vary
the relation between the information generated from visual sources and
the information generated from proprioceptive sources. When we manipulate
the stimuli, such that the visual information is coupled in various ways
to the proprioceptive information, humans predominantly use visual
information to estimate the ratio of two traversed path lengths. Although
proprioceptive information is not used directly, the mere availability of
proprioceptive information increases the accuracy of relative path length
estimation based on visual cues, even when the proprioceptive
information is inconsistent with the visual information. These results
convincingly demonstrate that active movement (locomotion) facilitates
visual perception of path length travelled.
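As a minimal sketch of the decoupling manipulation described above (all gains and distances are hypothetical, not our actual protocol parameters), the visually specified path length in VR can be the physically travelled length scaled by an experimenter-set gain, so that vision and proprioception specify different path-length ratios:

```python
# A minimal sketch (hypothetical numbers) of decoupling visual from
# proprioceptive path length in VR: the rendered optic-flow distance is
# the physically travelled distance scaled by an experimenter-set gain.
def visual_distance(physical_distance: float, visual_gain: float) -> float:
    """Path length specified by optic flow when locomotion is scaled by visual_gain."""
    return visual_gain * physical_distance

# Two segments pedalled over the same physical distance but different gains:
# proprioception specifies a 1:1 ratio, vision specifies 8 m : 12 m. An
# observer relying on vision should judge the second segment as longer.
seg1 = visual_distance(10.0, 0.8)   # 8.0 m of optic flow
seg2 = visual_distance(10.0, 1.2)   # 12.0 m of optic flow
print(f"visual ratio = {seg2 / seg1:.2f}, proprioceptive ratio = 1.00")
```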
Sun, H.-J., Campos, J. L., & Chan, G. S. W. (in press). Multisensory integration
in the estimation of relative path length. Experimental Brain Research.
Multisensory Integration in the Estimation of Speed
Movement speed is a particularly important piece of information for
monitoring self-motion. While much has been discovered
about the contribution of vision to speed perception, less is understood
about the relative contributions of visual and nonvisual information when
both are available. Although vision and proprioception are suspected to
play a central role in assessing speed information, it is not clear which
modality is more important for monitoring the speed of self-motion. While
visual information is often considered dominant over
other sensory information in most spatial tasks, studies examining human
self-motion have called this assumption into question.
This series of studies assesses the relative
contributions of visual and proprioceptive information during self-motion
in a virtual environment using a speed discrimination task. Subjects wear
a head-mounted display (HMD) and ride a stationary bicycle along a
straight path in an empty, seemingly infinite hallway with random surface
texture. For each trial, subjects are required to pedal the bicycle along
two paths at two different speeds (a standard speed and a comparison
speed) and subsequently report whether the second speed travelled is
faster than the first. The standard speed remains the same while the
comparison speed is varied between trials according to the method of
constant stimuli. When visual and proprioceptive cues are provided
separately or in combination, the speed discrimination thresholds are
comparable, suggesting that either cue alone is sufficient.
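As a minimal sketch of the trial logic (the speeds, noise level, and simulated observer are all assumptions for illustration), the method of constant stimuli pairs the fixed standard with comparison speeds drawn from a fixed set in random order, and the proportion of "faster" responses at each level traces out the psychometric function:

```python
# A minimal sketch (hypothetical speeds and noise; a simulated observer
# stands in for a real subject) of the method of constant stimuli applied
# to the speed discrimination task described above.
import random

STANDARD = 4.0                                  # standard speed, m/s (assumed)
COMPARISONS = [3.0, 3.5, 3.8, 4.2, 4.5, 5.0]    # comparison speeds, m/s (assumed)
REPEATS = 20                                    # trials per comparison level
SIGMA = 0.4                                     # simulated perceptual noise, m/s (assumed)

def simulated_response(standard: float, comparison: float) -> bool:
    """Stand-in subject: reports 'comparison felt faster' with Gaussian noise."""
    return comparison + random.gauss(0.0, SIGMA) > standard + random.gauss(0.0, SIGMA)

trials = [c for c in COMPARISONS for _ in range(REPEATS)]
random.shuffle(trials)                          # constant stimuli: random order

faster = {c: 0 for c in COMPARISONS}
for c in trials:
    faster[c] += simulated_response(STANDARD, c)

# The proportion of "faster" responses per level traces the psychometric
# function; a horizontal shift of this function under altered optic-flow
# gain is the signature analysed in the next paragraph.
for c in COMPARISONS:
    print(f"{c:.1f} m/s: P(faster) = {faster[c] / REPEATS:.2f}")
```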
When the relation between visual and
proprioceptive information is made inconsistent by varying optic flow
gain, the resulting psychometric functions shift along the horizontal
axis. The degree of separation between these functions indicates that
both optic flow and proprioceptive cues contribute to speed estimation,
with proprioceptive cues being dominant. These results suggest an
important role for proprioceptive information in speed estimation during
self-motion.
Sun, H.-J., Lee, A., Campos, J. L., Chan, G. S. W., & Zhang, D.-H. (in press).
Multisensory integration in speed estimation during self-motion in a
virtual environment. CyberPsychology & Behavior.
Examining the Contribution of Visual and Nonvisual Cues to Distance Estimation by Manipulating Cue Availability
Developing a mental
representation of one’s location in space requires both the visual
assessment of static egocentric distance between oneself and
environmental landmarks, and the continuous monitoring of dynamic
distance information when traversing from one location to another. Both visual and nonvisual sources
of information can potentially be used for distance processing.
Static visual cues include familiar retinal image size, texture gradients,
accommodation, convergence, and binocular disparity. The spatial-temporal
relation between the observer and environmental landmarks, in contrast, is
provided by dynamic retinal information generated by the observer's
self-motion (optic flow). Egocentric distance information is also available
via nonvisual cues that are internally generated as a result of one's body
movements in space. This source of information, often referred to as
"idiothetic information", is provided by muscles and joints ("inflow" or
proprioceptive input), motor efferent signals ("outflow"), and vestibular
signals generated by changes in linear or rotational movement velocities.
By systematically varying cue availability, we examine the contributions of
static visual information, idiothetic information, and optic flow
information in a real-world distance estimation task. This experiment is
conducted in a large-scale, open, outdoor environment. Subjects are
presented with information about a particular distance and are then
required to turn 180 degrees and produce a distance estimate. Distance
encoding and responding occur via (1) visually perceived static target
distance or (2) traversed distance, through either (2a) blindfolded
locomotion or (2b) sighted locomotion in the presence of optic flow.
The results demonstrate that humans can perform with similar accuracy with
or without optic flow information in all conditions. In conditions in which
the stimulus and the response are delivered in the same mode, constant
error is minimal when optic flow is absent, whereas overestimation is
observed when optic flow is present. In conditions in which the stimulus
and response modes differ, a consistent error pattern is observed. By
systematically comparing complementary conditions, the results show that
the availability of optic flow leads to an "under-perception" of movement
relative to conditions in which optic flow is absent.
Sun, H.-J., Campos, J. L., Chan, G. S. W., Young, M., & Ellard, C. The
contributions of static visual cues, nonvisual cues, and optic flow in
distance estimation. Submitted to Perception.
Spatial Representation Revealed Through Different Modes of Learning
Humans are able to learn and remember
information about environmental spatial layouts through direct means
(e.g., by navigating through the environment) or indirect means (e.g., by
viewing a map or by encoding verbal descriptions). Theoretical and
empirical work indicates that there may be multiple ways to learn spatial
information, each of which results in different spatial representations.
To understand the nature of these representations, it is important to
identify the functional distinctions between the different ways humans
represent spatial information. One such distinction involves the degree
to which one's spatial representation is orientation-specific, as
identified by whether or not the spatial memory is dependent on the
original orientation in which the spatial layout was learned.
Past studies have demonstrated that during navigation, individuals often
develop orientation-free representations of the areas within which they
travel. In contrast, learning from a map typically leads to an
orientation-specific representation, resulting in better performance when
subjects are positioned in the original orientation from which they
encoded the environment.
This series of studies first examined the
spatial representations of human participants after learning the spatial
layout of a single floor of a complex building: via map learning, via
navigating within a real environment, or via navigating through a virtual
simulation of that environment. Navigational learning was then compared
across situations in which participants: 1) assumed multiple vs. a single
body orientation, 2) experienced active vs. passive learning, and 3)
received high vs. low levels of proprioceptive information.
Following learning, participants were required
to produce directional judgments to target landmarks. Results show that
humans typically develop orientation-specific spatial representations
only following map learning and passive learning, as indicated by better
performance when tested from the initial learning orientation. These
results suggest that neither the number of vantage points nor the level
of proprioceptive information experienced is the determining factor;
rather, it is the active aspect of direct navigation that leads to the
development of orientation-free representations.
Sun, H.-J., Chan, G. S. W., & Campos, J. L. (in press). Active navigation
and orientation-free spatial representations. Memory & Cognition.
Viewpoint dependency
Humans are capable of recognizing learned objects and scenes from
viewpoints that differ from the originally experienced viewpoint.
However, response time and accuracy are dependent on the angular
difference between the novel view and the original view. This study
explored viewpoint dependency by presenting subjects with a five-object
configuration within a circular virtual room from a first-person
perspective. For each trial, subjects responded by comparing a standard
room (SR) to a comparison room (CR) and making a same/different
judgement. For half of the trials, objects in the CRs were in the same
configuration as objects in the SRs, but were presented from different
viewpoints (ranging from 9° to 180°). For the remaining trials, objects
in the CRs were in a different configuration than those in the SRs and
were again viewed from different viewpoints. Results demonstrated that
reaction time and error rate increased as angular difference increased
from 9° to 90°, and decreased as angular difference increased from 90° to
180°. Further, the results confirmed a previously well-documented male
advantage in single-object "mental rotation" tasks. This paradigm offers
the potential to explore the effects of dynamic visual and nonvisual
updating via locomotion on viewpoint dependency. We next examine the
effects of dynamic visual and nonvisual updating via locomotion-induced
changes in viewpoint and compare this to conditions in which viewpoint
changes are not initiated by the observers themselves (similar to the
classical mental rotation task).
Object Identification and Location Experiment
A variety of information is required in order to navigate routes
or maps successfully: the number of paths and their respective lengths,
the number of turns, the degree of angles, as well as the position and
identity of objects or landmarks that may be present along the route. In
studies requiring the performance of various spatial tasks, sex
differences are often reported. However, within this broad category of
"spatial ability" there are categories of spatial skills that do not
reliably and consistently demonstrate sex biases (see Linn and Peterson,
1985, for a review). In the past, spatial memory has been viewed as
consisting of two elements: location information and object identity
information (Postma, 1998). The results of such studies seem to go
against the male superiority for spatial tasks that is often reported,
suggesting that sex differences may vary directly as a function of the
requirements of the spatial task itself (males performing better than
females at some tasks and vice versa for others). Many of these
experiments have examined subjects' performance by having them report
object location and/or object identity by means of a two-dimensional map.
However, in real-world scenarios humans are rarely presented with a
bird's-eye view (or allocentric) map of an area through which they have
previously traversed. Typically, we navigate egocentrically and encounter
routes and the objects within them in a particular order and over a
particular duration.
By using virtual reality technology, we are now able to further examine
these two components of spatial memory (object identity and object
location) by requiring subjects to actively navigate through an
immersive, three-dimensional virtual environment. By having subjects
respond by actually travelling through a maze, we are able to compare the
performance in a three-dimensional response task to a two-dimensional
response task. It is expected that there will be differential sex
differences for the different task requirements (i.e. naming objects
versus naming locations versus naming both).
- Reference: Postma, A., Izendoorn, R., & De Haan, E. (1998). Sex
differences in object location memory. Brain and Cognition, 36, 334-345.
Egocentric vs Allocentric Processing Experiment
It has been proposed that humans can form and maintain "cognitive
maps" of their environment, and that as they navigate they continually
update a representation of their own allocentric position. An alternative
theory suggests that humans navigate using egocentric representations of
space (Wang and Spelke, 2000). In a recent examination of this theory, we
have used virtual reality to create a battery of testing environments
with different layouts, in a variety of sizes, including a version
similar to the radial arm mazes typically used in the study of rat
hippocampal place cells. Past research has suggested that humans may
encode or perceive moveable objects (e.g., a chair or book) differently
from stable objects (e.g., a doorway or wall). In virtual reality we have
the option of moving an "immovable" object and examining the effects this
might have on an individual's spatial memory. We hypothesize that
subjects tend to form an allocentric representation of stable landmarks,
which could potentially be useful for navigation, while processing
moveable objects in a more egocentric manner.
- Reference: Wang, R. F., & Spelke, E. S. (2000). Updating egocentric
representations in human navigation. Cognition, 77, 215-250.
Object vs Self-Motion
Processing Time-to-Collision
When a human observer moves through the environment, in addition to the
"traditional" distance cues (stereopsis, etc.), the continuous
transformation of the optic array of the environment (optic flow)
provides information about the spatial and temporal relationships between
the observer and their surroundings. Such visual input is critical for
the observer to control their movement. There has been great interest in
behavioural studies of how humans use visual information to interact with
their environment, especially in scenarios involving potential collision
between the observer and objects in the environment. Lee (1980) argued
that the optical variable tau (τ) provides reliable information about the
time-to-collision, which can then be used for visual motor control, such
as modulating an observer's speed of locomotion to arrive at an intended
target. Moreover, neither information about the object's distance nor the
observer's movement velocity is required.
Indeed, research on the control of some naturalistic visual motor
behaviours has provided evidence that is consistent with the tau
strategy. Since Lee (1976) proposed the tau strategy for providing the
observer with "time-to-collision" information, however, the use of such a
strategy in visual motor control has received both support and criticism.
One criticism is that previous experiments lack experimental manipulation
of potentially critical variables. With the exception of a few attempts
(Savelsbergh, Whiting and Bootsma, 1991; Sun, Carey and Goodale, 1992;
Ellard, 2001), tau information has not been manipulated independently of
other cues, such as distance information, perhaps due to the difficulty
of manipulating environmental information dynamically in natural
settings.
In order both to simulate the visual environment of a real-world task
requiring target-directed movement and to selectively manipulate the
visual environment dynamically, in real time during movement, we have
used a special virtual reality testing paradigm. We selectively
manipulated the time-to-collision information during subjects' approach
to a visual target without affecting other static distance information,
which is often impossible in a real-world situation. This enables us to
perform controlled experiments to test the role of vision in human motor
behaviour rather than relying solely on the observation of natural
behaviours, which is the typical approach in this kind of research.
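As a minimal numerical sketch of the tau variable (all scene parameters are assumed for illustration), for an object approaching at constant speed the ratio of its optical angle to that angle's rate of expansion approximates the true time-to-collision, with no access to distance or speed:

```python
# A minimal sketch of Lee's tau (scene parameters assumed for
# illustration): for an object of radius R approaching at constant speed
# v, the optical angle and its rate of expansion alone specify
# time-to-collision, with no access to distance or speed.
import numpy as np

R = 0.5          # object radius, m (assumed)
v = 10.0         # approach speed, m/s (assumed)
d0 = 50.0        # starting distance, m (assumed); collision at t = 5 s

t = np.linspace(0.0, 4.5, 1000)       # time samples, stopping short of impact
d = d0 - v * t                         # distance to the object
theta = 2.0 * np.arctan(R / d)         # optical (visual) angle subtended
theta_dot = np.gradient(theta, t)      # rate of expansion of the image

tau = theta / theta_dot                # tau: a purely optical quantity
true_ttc = d / v                       # actual time-to-collision

# For small visual angles, tau closely tracks the true time-to-collision:
i = 500
print(f"t = {t[i]:.2f} s: tau = {tau[i]:.2f} s, true TTC = {true_ttc[i]:.2f} s")
```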
Using Virtual Reality to Explore Risk-Taking Behaviour in Response to Social Interactions
Driving is one of the most common forms of sensation-seeking that young
men participate in within modern societies. While driving, young males
have been shown to be more likely to tailgate, speed, make unsafe lane
changes, fail to yield, and disobey traffic signals, compared with
females and older males. Examinations of potential risk factors involved
in fatal automobile crashes have found that younger male drivers carrying
a high number of passengers (i.e., three or more) account for the highest
death rates.
In order to test this phenomenon empirically, we are currently
designing a highly controlled driving task in which various aspects of both
the driving context (e.g., sex, age, and attractiveness of passengers) and
the task itself (e.g., stopping at lights, yielding to pedestrians) can be
manipulated systematically. This system allows us to explicitly measure
every aspect of subjects' responses, from driving speed to braking
performance. By manipulating various components independently of each
other, we may be able to gain a better understanding of the causal
relations that exist between particular factors and the level of
risk-taking.
We have developed a paradigm with which to explore how passenger
attributes (i.e., sex, age, attractiveness, etc.) impact risk-taking
behaviours during a simulated driving task. The task will involve
navigating a virtual car through virtual city streets, responding as
necessary to traffic signals, other vehicle traffic, and pedestrians.
Some conditions will include passengers differing in age, sex, and
level of attractiveness, and other conditions will involve solitary
driving. The VR interface consists of a head-mounted display equipped
with a head-tracking device, coupled with an input device comprising a
steering wheel and a gas pedal. Risk-taking behaviour will be assessed by
measuring speed, driving distance behind other vehicles (tailgating, as
sketched below), (dis)obeying traffic signals, merging behaviours,
driving in the appropriate lane, and braking behaviours.
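As one possible operationalization of the tailgating measure (an illustrative assumption, not our finalized metric), time headway, the gap to the lead vehicle divided by the follower's speed, can be logged on every simulation frame:

```python
# One possible operationalization of the tailgating measure (an
# illustrative assumption, not our finalized metric): time headway, the
# gap to the lead vehicle divided by the follower's speed.
def time_headway(gap_m: float, speed_mps: float) -> float:
    """Seconds of following distance; smaller values indicate riskier following."""
    return float("inf") if speed_mps <= 0.0 else gap_m / speed_mps

# e.g. following 12 m behind the lead car at 60 km/h (~16.7 m/s):
headway = time_headway(12.0, 60.0 / 3.6)
print(f"time headway = {headway:.2f} s")   # values under ~2 s are commonly flagged
```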
Based on the previous literature, it is predicted that when driving
alone, younger males will engage in the highest overall level of
risk-taking behaviour, with no significant difference observed between
older males and females. It is also predicted that males' risk-taking
behaviour will increase to some degree in the presence of either a
same-sex or opposite-sex peer, and will be highest in the company of an
attractive female.
Demo: the process of creating a 3D computer model from two 2D pictures
(movies available in RealPlayer and MPEG formats).
3D Motion
Using standard single unit recording techniques combined with
computer-generated, complex visual motion stimuli, we have found a group
of neurons in the nucleus rotundus (nRt, the equivalent of the mammalian
pulvinar) of pigeons that selectively responds to objects approaching on
a collision course towards the animal, but does not respond to simulated
self-motion towards a stationary object (Sun and Frost, 1998). We have
identified three types of looming-sensitive neurons, each computing a
different optical variable generated from the image expansion of the
approaching object. One group of neurons signals the relative rate of
expansion (tau, τ), a second group signals the absolute rate of expansion
(rho, ρ), and a third group signals yet another optical variable (eta,
η). The ρ parameter is required for the computation of both τ and η,
whose respective ecological functions seem to be to provide precise
"time-to-collision" information and an "early warning" for approaching
objects with a large visual angle subtense (see also a commentary on our
work written by Laurent and Gabbiani, 1998).
In addition to this neurophysiological finding, we have also developed
quantitative models to explain the physiological response properties of
these looming-sensitive neurons (Sun and Frost, 2002). These models take
into account the physiological response properties and anatomical
connections of the optic tectum, which sends a major input to nRt. These
models explain a variety of response properties, including why these
looming-sensitive neurons respond only to object motion in depth but do
not respond to self-motion.
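As a minimal sketch of the three optical variables (the eta functional form, the rate of expansion weighted by a decaying exponential of the visual angle, follows the looming-detector literature; the constant alpha and all scene parameters are illustrative assumptions):

```python
# A minimal sketch of the three optical variables for a simulated direct
# approach. The eta form follows the looming-detector literature; alpha
# and all scene parameters are illustrative assumptions.
import numpy as np

R, v, d0 = 0.25, 5.0, 20.0          # object radius, speed, start distance (assumed)
alpha = 40.0                         # eta's angular constant, 1/rad (assumed)

t = np.linspace(0.0, 3.5, 700)       # collision would occur at t = 4 s
d = d0 - v * t                       # distance remaining
theta = 2.0 * np.arctan(R / d)       # visual angle subtended by the object
rho = np.gradient(theta, t)          # rho: absolute rate of expansion
tau = theta / rho                    # tau: approximates time-to-collision
eta = rho * np.exp(-alpha * theta)   # eta: peaks well before impact ("early warning")

i = int(np.argmax(eta))
print(f"eta peaks at t = {t[i]:.2f} s, when tau is still {tau[i]:.2f} s")
```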
- Sun, H.-J., & Frost, B. J. (1998). Computation of different optical
variables of looming objects in pigeon nucleus rotundus neurons. Nature
Neuroscience, 1, 296-303. (Written up in the "News and Views" section of
Nature Neuroscience, 1, 261-263.)
- Sun, H.-J., & Frost, B. J. Looming detectors in the nucleus rotundus of
pigeons: Neuronal responses and models. Submitted to Journal of
Neuroscience.
- Frost, B. J., & Sun, H.-J. (2003). The biological basis of
time-to-collision computation. In H. Hecht & G. J. P. Savelsbergh (Eds.),
Time-to-contact (pp. 13-37). Advances in Psychology Series. Amsterdam:
Elsevier North-Holland.
Center-Surround Mechanisms
In the real world, motion of an object rarely occurs in isolation; quite
often something else in the visual scene moves as well. We have
systematically investigated the effects of contextual cues on the
responses of tectal neurons in pigeons (Sun, Zhao, Southall & Xu, 2002).
We found that some neurons process relative motion information (in terms
of both direction and velocity) between the regions that fall within and
outside the receptive field, rather than encoding only the absolute
motion of objects falling within the receptive field. We also discovered
new types of neurons that exhibit unique ways of integrating visual
motion information from within and beyond the classical receptive field.
These results challenge the traditional notion of the receptive field,
which has typically been considered to be limited in spatial extent.
Furthermore, these results may help explain how the brain distinguishes
object motion from self-induced motion, and they should also add to our
understanding of figure/ground segregation.
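As a minimal sketch of a center-surround motion-opponent unit (not our published model; the tuning form and gains are illustrative assumptions), the unit's center response is suppressed by surround motion in the same direction and left largely intact by opposite surround motion, so it signals relative rather than absolute motion:

```python
# A minimal sketch of center-surround motion opponency (tuning form and
# gains are illustrative assumptions): the center response is suppressed
# by surround motion in the same direction, so the unit signals relative
# rather than absolute motion.
import numpy as np

def direction_tuning(direction: float, preferred: float, kappa: float = 2.0) -> float:
    """Von Mises-style tuning to motion direction (radians)."""
    return float(np.exp(kappa * (np.cos(direction - preferred) - 1.0)))

def center_surround_response(center_dir: float, surround_dir: float,
                             preferred: float = 0.0,
                             surround_gain: float = 0.8) -> float:
    center = direction_tuning(center_dir, preferred)
    # Suppression is strongest when the surround moves in the unit's
    # preferred direction, i.e. when object and background move together.
    suppression = surround_gain * direction_tuning(surround_dir, preferred)
    return max(0.0, center - suppression)

# Object moving against the background (relative motion) vs. whole-field
# motion (no relative motion, as generated by self-motion):
print(center_surround_response(0.0, np.pi))  # ~0.99: strong response
print(center_surround_response(0.0, 0.0))    # 0.20: largely suppressed
```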
- Sun, H.-J., Zhao, J., Southall, T. L., & Xu, B. (2002). Contextual
influences on the directional responses of tectal cells in pigeons.
Visual Neuroscience, 19, 133-144.
Parallel Processing of Motion and Colour
- Sun, H.-J., & Frost, B. J. (1997). Motion processing in pigeon tectum:
Equiluminant chromatic mechanisms. Experimental Brain Research, 116,
434-444.
Potential Applications
Our research has
considerable potential for application in the following fields.
DESIGN OF ROBOTS AND AUTOMATED NAVIGATION VEHICLES:
The algorithms and models from our studies, which work so effectively to
explain the neural computation of impending collision, can be implemented
in robotic design and in prototyping new vehicular systems with automatic
navigation capacities. The visual information generated by such a device
about the direction of movement and the time to collision with external
objects will complement the information calculated through stereoscopic
video cameras, which are typically used in modern robotic design. Fewer
computational resources are required than for the calculation of absolute
distance information from stereoscopic cameras, as sketched below.
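As a minimal sketch of the kind of expansion-based warning signal such a device could compute (the function, threshold, and all numbers are illustrative assumptions), tau can be estimated monocularly from how fast a tracked object's image size grows between frames, with no depth map required:

```python
# A minimal sketch (illustrative, not a deployed system) of an
# expansion-based collision warning: estimate tau directly from how fast
# a tracked object's image size grows between frames, with no depth map.
def tau_from_image_growth(size_prev: float, size_curr: float, dt: float) -> float:
    """Estimate time-to-collision (s) from image size (e.g. bounding-box
    width in pixels) measured in two successive frames."""
    expansion_rate = (size_curr - size_prev) / (dt * size_curr)
    if expansion_rate <= 0.0:
        return float("inf")         # shrinking or static image: no approach
    return 1.0 / expansion_rate     # tau ~ size / (d(size)/dt)

# e.g. a bounding box growing from 100 to 104 pixels over one 33 ms frame:
tau = tau_from_image_growth(100.0, 104.0, 0.033)
if tau < 2.0:                       # hypothetical braking threshold, s
    print(f"brake: estimated collision in {tau:.2f} s")
```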
TELEOPERATION:
Unlike autonomous robots, remotely controlled systems (telerobotics)
still depend on human intelligence and perception. It is important to
ensure that the human-machine interface is adequate for the task. As
behavioural scientists, we can determine how such technologies should be
developed to best match the perceptual and motor abilities of human
users.
Our research on virtual reality touches on some of the important issues
in this field. For example, critical visual information should be
presented to the human operator in a natural display format to facilitate
human-machine interaction. We can evaluate the effectiveness of different
kinds of display systems (e.g., monoscopic vs. stereoscopic viewing). We
will also evaluate the effect of temporal delay in the communication
between a human operator and robotic end-effectors at a remote site. With
such a time delay, operators cannot use real-time visual information;
instead, visual information that is "remembered" or "predicted" must be
used to control their motor actions.
DESIGN OF FLIGHT OR DRIVING SIMULATORS:
Flight simulators, which produce a profound illusion of self-motion, are
often used for the training of pilots. Our research will provide
important insights into which components of the visual display are
critical. For example, most flight simulators today simulate approach
movement by presenting the image expansion of larger objects (such as
terrain) but do not simulate the size increase of the individual texture
elements within them. Whether this mismatch in image expansion creates a
misjudgement of time to collision will be one of our research projects.
IMPROVEMENT OF QUALITY OF LIFE AND HUMAN HEALTH:
Psychophysical research has shown that people can be blind to
motion-in-depth in certain parts of their visual field while their static
stereo vision remains intact. This demonstrates the existence of a visual
system for motion-in-depth that is independent of static stereo vision.
While we know a great deal about the visual processing of static
distance, we know very little about the visual processing of motion in
3D, which is critical for action in daily life, such as avoiding
obstacles, walking, driving, and navigating through the environment.
These specialised functions of the visual system are not normally
evaluated by conventional examinations of visual function. Research in
this field will help us develop a set of standardised tests to screen for
visual motion deficits and ultimately reduce accidents among drivers or
pilots who lack acuity for visual motion in depth or are impaired in
certain parts of their visual field.
What we learn through our research can also be used to train human
observers to use visual information more effectively. Children, for
example, could be trained to be better observers in high-traffic
situations (in fact, this has been done in England, where children are
trained to use time-to-collision as a cue when crossing the road).
Similarly, pilots and athletes could benefit from training in
time-to-collision assessment, and motion-blind or impaired individuals
could be trained to overcome their deficits by actively using the parts
of the visual field that remain intact.
Research has
also identified the involvement of the visual motion system in various perceptual
and cognitive deficits, such as dyslexia. A deeper understanding of visual
motion processing will increase our knowledge and eventually facilitate
diagnosis and the development of effective rehabilitation techniques.
Research Funding
- NSERC Operating
Grant, "Neural Computation of Visual Motion in 3-Dimensional
Space"
- NSERC Equipment
Grant, "Neural Computation of Visual Motion in 3-Dimensional
Space (equipment)"
- Canadian
Foundation for Innovation, New Opportunities Award, "A Physiology
and Behaviour Lab for the Study of Visual Processing and Visual Motor
Control"
- Ontario Innovation
Trust, "A Physiology and Behaviour Lab for the Study of Visual
Processing and Visual Motor Control"