My research addresses dynamical properties of visual perception. How does
our visual system adapt to a constantly changing environment? How do we
achieve a compromise between stability and flexibility in this process? I
address these questions by combining behavioral measurements with
computational modeling. As part of this work, I have been involved in the
development of psignifit3, a software package for estimating perceptual
thresholds from behavioral data.
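The core idea behind threshold estimation can be sketched in a few lines. The following is a generic maximum-likelihood fit of a logistic psychometric function, not the psignifit3 API; all function names, parameter values, and the simulated data are illustrative only.

```python
# Generic sketch of psychometric threshold estimation (not psignifit3's API).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def psychometric(x, threshold, slope, guess=0.5, lapse=0.02):
    """Logistic psychometric function with guess and lapse rates (2AFC)."""
    core = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

def fit_threshold(intensity, n_correct, n_trials):
    """Maximum-likelihood estimate of threshold and slope."""
    def neg_log_likelihood(params):
        threshold, slope = params
        p = psychometric(intensity, threshold, slope)
        return -binom.logpmf(n_correct, n_trials, p).sum()
    result = minimize(neg_log_likelihood,
                      x0=[np.median(intensity), 1.0],
                      method="Nelder-Mead")
    return result.x  # (threshold, slope)

# Illustrative 2AFC data: accuracy rises with stimulus intensity.
intensity = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
n_trials = np.full(6, 50)
n_correct = np.array([26, 29, 36, 44, 48, 49])
threshold, slope = fit_threshold(intensity, n_correct, n_trials)
```

The threshold is the intensity at which performance is halfway between chance and ceiling; bootstrap or Bayesian methods (as used in practice) would add confidence intervals on top of this point estimate.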
When needed, perceiving objects from simple shapes can be remarkably fast.
As part of my doctoral thesis, I observed that even one of the very
first responses of the human electroencephalogram discriminates between
meaningful and meaningless shapes [S1].
However, what exactly makes a “shape”? Which features do observers use to
answer this question? I address these questions in an ongoing
collaboration with James Elder at York University. Together, we developed
a class of generative statistical models for shapes that occur in natural
images, such as photographs. We can adapt these models to match natural
shapes with respect to a well-defined set of features while being
maximally random otherwise. Being generative, these models allow us to
synthesize shapes that match natural shapes with respect to the features
represented by the distribution. Using psychophysical measurements and
ideal observer modeling, we showed that humans are sensitive to local
contour properties of shapes but most likely also use global shape
properties to discriminate between different shapes [S2] and to segment
coherent shapes from random backgrounds [S3].
[S1] Fründ, I., Busch, N.A., Schadow, J., Gruber, T., Körner, U., Herrmann, C.S. (2008): Time pressure modulates electrophysiological correlates of early visual processing. PLoS ONE 3(2), e1675.
[S3] Fründ, I., Elder, J.H. (2014): Closure and global shape contributions to contour grouping. Poster presentation at VSS 2014, J Vis, August 22, 2014, 14(10): 257; doi:10.1167/14.10.257.
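The generative principle behind these shape models can be illustrated with a toy sketch: sample successive turning angles from a von Mises distribution (a simple local contour statistic) and integrate them into a 2D contour. The real models match a richer set of natural-shape statistics; everything below, including the parameter values, is illustrative only.

```python
# Toy generative contour model: a correlated random walk whose turning
# angles follow a von Mises distribution. This mimics only the local
# smoothness statistic of natural contours, not the full model.
import numpy as np

rng = np.random.default_rng(0)

def sample_contour(n_segments=100, concentration=4.0):
    """Generate one open contour of unit-length segments."""
    # Turning angles concentrated around zero -> locally smooth contour.
    turns = rng.vonmises(mu=0.0, kappa=concentration, size=n_segments)
    headings = np.cumsum(turns)            # orientation of each segment
    steps = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return np.cumsum(steps, axis=0)        # vertex positions

contour = sample_contour()
```

Increasing the concentration parameter yields smoother, more natural-looking contours; a maximally random model (concentration zero) produces a pure random walk.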
Modeling behavioral nonstationarity
Behavioral studies of visual perception typically present a sequence of
images to an observer (vision scientists use “observer” to refer to a
human or, in some cases, an animal). Observers often report that they
adapt their response behavior over the course of a psychophysical
experiment. In other words, an observer's response depends both on the
image currently shown and on everything that has happened so far in the
experiment, including the observer's own previous responses.
This conflicts with a standard assumption made in virtually all models of
visual perception: that all responses in an experiment are independent
realizations of a random variable.
We used one of the simplest models of visual perception to study the
impact of violating this independence assumption: the psychometric
function, which is routinely used in psychophysical studies to quantify
observers' sensitivity or bias. We showed [N1] that such violations can
indeed result in incorrect inferences about psychometric functions, and we
proposed a generic way to correct for these errors.
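A small simulation shows why violated independence matters: serially correlated binary responses (here a simple Markov chain with a persistence probability) produce block-wise counts whose variance exceeds the binomial variance that standard psychometric-function fits assume, so the usual confidence intervals come out too narrow. All parameter values below are illustrative.

```python
# Serially correlated responses are overdispersed relative to a binomial.
import numpy as np

rng = np.random.default_rng(1)

def correlated_bernoulli(n, p=0.7, stay=0.8):
    """Binary sequence with marginal P(1) = p but serial dependence."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for t in range(1, n):
        if rng.random() < stay:            # repeat the previous response
            x[t] = x[t - 1]
        else:                              # otherwise draw independently
            x[t] = rng.random() < p
    return x

n_blocks, block_size = 2000, 40
counts = np.array([correlated_bernoulli(block_size).sum()
                   for _ in range(n_blocks)])
empirical_var = counts.var()
binomial_var = block_size * 0.7 * 0.3     # variance if trials were independent
```

The marginal probability of a correct response is unchanged, so point estimates look fine; it is the error bars derived under the independence assumption that become misleading.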
More recently, we built a model that combines the current stimulus with
events on previous trials [N2]. This model allows us to disentangle the
effects of the current stimulus from those of previous trials. We observe
that effects from previous trials are very heterogeneous but at the same
time very strong: on difficult trials, the recent experimental history is
nearly as good a predictor of the response as the current stimulus. This
contradicts the naive assumption that our perception is mainly a
representation of the environment. It suggests instead that our perception
is a combination of the world around us and our own assumptions and
expectations about that world.
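The structure of such a model can be sketched as a logistic regression whose predictors include both the current stimulus and the previous response. The published model is considerably richer; the simulation, weights, and variable names below are illustrative assumptions only.

```python
# Sketch of a history-dependent response model: P(response = 1) is a
# logistic function of the current stimulus plus the previous response.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def simulate(n=5000, w_stim=2.0, w_hist=1.0, bias=0.0):
    """Generate responses with a genuine dependence on trial history."""
    stim = rng.normal(size=n)
    resp = np.zeros(n, dtype=int)
    prev = 0.0                             # no history before the first trial
    for t in range(n):
        z = bias + w_stim * stim[t] + w_hist * prev
        resp[t] = rng.random() < 1.0 / (1.0 + np.exp(-z))
        prev = 2.0 * resp[t] - 1.0         # code previous response as +/-1
    return stim, resp

stim, resp = simulate()
# Design matrix: intercept, current stimulus, previous response (+/-1).
prev_resp = np.concatenate([[0.0], 2.0 * resp[:-1] - 1.0])
X = np.column_stack([np.ones_like(stim), stim, prev_resp])

def nll(w):
    """Negative Bernoulli log-likelihood with a logistic link (stable)."""
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - resp * z)

w_hat = minimize(nll, x0=np.zeros(3), method="BFGS").x
```

Fitting recovers separate weights for the stimulus and for the previous response, which is exactly the separation that lets one compare the predictive power of the current trial against that of the recent history.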
Human visually evoked potentials
During my PhD, I investigated one of the very first components of the human visual evoked potential.
This early component is best seen in time-frequency representations of the average evoked potential and manifests itself as a transient phase shift of spectral components above 20 Hz.
Due to its frequency localization, this signal is sometimes called the evoked gamma band response.
In my PhD project, I established that this signal is indeed reliably measurable [G1].
We performed a number of studies that further investigated this signal.
Most notably, we observed that evoked gamma band responses are modulated not only by stimulus parameters, as expected for early responses, but also by task demands.
In particular, extremely fast behavioral responses correlate with enhanced evoked gamma band responses [G2].
This effect persisted even when observers were explicitly instructed to respond as fast as possible [G3].
[G1] Fründ, I., Schadow, J., Busch, N.A., Körner, U., Herrmann, C.S. (2007): Evoked γ oscillations in human scalp EEG are test–retest reliable. Clinical Neurophysiology 118(1), 221–227.
[G2] Fründ, I., Busch, N.A., Schadow, J., Körner, U., Herrmann, C.S. (2007): From perception to action: phase-locked gamma oscillations correlate with reaction times in a speeded response task. BMC Neuroscience 8(1).
[G3] Fründ, I., Busch, N.A., Schadow, J., Gruber, T., Körner, U., Herrmann, C.S. (2008): Time pressure modulates electrophysiological correlates of early visual processing. PLoS ONE 3(2), e1675.
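The time-frequency analysis described above can be sketched by convolving a simulated average evoked potential with a complex Morlet wavelet and reading off power at 40 Hz. The signal parameters (a 40 Hz burst around 100 ms) are illustrative choices meant to mimic an evoked gamma band response, not real data.

```python
# Morlet-wavelet power time course at 40 Hz for a simulated evoked potential.
import numpy as np

fs = 1000.0                                # sampling rate in Hz
t = np.arange(-0.2, 0.5, 1.0 / fs)         # time around stimulus onset (s)

# Simulated evoked potential: a transient 40 Hz burst around 100 ms.
burst = (np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
         * np.cos(2 * np.pi * 40 * t))
signal = burst + 0.1 * np.random.default_rng(3).normal(size=t.size)

def morlet_power(signal, freq, fs, n_cycles=7):
    """Power time course at `freq` via complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
    wavelet = (np.exp(2j * np.pi * freq * wt)
               * np.exp(-wt ** 2 / (2 * sigma_t ** 2)))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy wavelet
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

power_40 = morlet_power(signal, 40.0, fs)
peak_time = t[np.argmax(power_40)]
```

In this sketch, gamma-band power peaks at the latency of the burst and stays near the noise floor before stimulus onset, which is the signature one looks for in the averaged evoked potential.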