Research

My research addresses dynamical properties of visual perception. How does our visual system adapt to a constantly changing environment? How do we achieve a compromise between stability and flexibility in this process? I address these questions by combining behavioral measurements with computational modeling. As part of this work, I have been involved in the development of psignifit3, a software package for estimating perceptual thresholds from behavioral data.
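To illustrate what a threshold estimate involves, here is a minimal maximum-likelihood sketch: a logistic psychometric function fitted to made-up two-alternative forced-choice data. This is an illustration of the general idea, not psignifit3's actual interface.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Made-up example data: stimulus intensities, trial counts, correct responses
intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_trials = np.array([40, 40, 40, 40, 40])
n_correct = np.array([22, 25, 31, 37, 40])

def neg_log_likelihood(params):
    """Binomial negative log-likelihood of a logistic psychometric function
    with a 50% guessing rate (2AFC) and no lapse rate."""
    alpha, beta = params  # threshold and slope
    p = 0.5 + 0.5 * expit(beta * (intensity - alpha))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[2.0, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = fit.x
print(f"estimated threshold: {alpha_hat:.2f}, slope: {beta_hat:.2f}")
```

In practice one also models lapses and attaches confidence intervals to the estimates, which is exactly the part where tools like psignifit3 earn their keep.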

Shape perception

When required, the perception of objects from simple shapes can be very fast. As part of my doctoral thesis, I observed that even one of the very first responses of the human electroencephalogram discriminates between meaningful and meaningless shapes [S1].

But what exactly makes a “shape”? Which features do observers use to answer this question? I address these questions in an ongoing collaboration with James Elder at York University. Together, we developed a class of generative statistical models for shapes that occur in natural images, such as photographs. These models can be adapted to match natural shapes with respect to a well-defined set of features while remaining maximally random otherwise. Being generative, they allow us to synthesize shapes that match natural shapes with respect to exactly the features represented by the distribution. Using psychophysical measurements and ideal observer modeling, we showed that humans are sensitive to a relatively small set of local contour properties and most likely rely on much higher-order properties to discriminate between different shapes [S2].
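The flavor of such a generative shape model can be conveyed in toy form: sample a contour's turning angles from a maximum-entropy distribution on angles (here a von Mises, a loudly illustrative assumption, not the model from [S2]) and force the contour to close.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_closed_contour(n_vertices=60, kappa=4.0):
    """Toy generative contour: draw turning angles from a von Mises
    distribution (a maximum-entropy distribution on the circle for a
    fixed mean and concentration), build unit edge vectors from the
    cumulative headings, then force closure by removing the mean edge."""
    # Turning angles concentrated around the value that closes a convex polygon
    turns = rng.vonmises(mu=2 * np.pi / n_vertices, kappa=kappa, size=n_vertices)
    headings = np.cumsum(turns)
    edges = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    edges -= edges.mean(axis=0)        # edge vectors now sum to zero -> closed
    return np.cumsum(edges, axis=0)    # vertex positions

contour = sample_closed_contour()
closure_error = np.linalg.norm(contour[-1])   # last vertex returns to the start
```

The concentration parameter plays the role of a matched contour statistic: larger kappa yields smoother, more natural-looking outlines, smaller kappa yields maximally random ones.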

[S1] Fründ, I, Busch, NA, Schadow, J, Gruber, T, Körner, U, Herrmann, CS (2008): Time pressure modulates electrophysiological correlates of early visual processing. PLoS ONE, 3(2): e1675.
[S2] Fründ, I, Elder, JH (2013): Statistical Coding of Natural Closed Contours. Talk at VSS 2013; Journal of Vision, 13(9): 119, doi:10.1167/13.9.119.

Modeling behavioral nonstationarity

Behavioral studies of visual perception typically present a sequence of images to an observer (by “observer”, vision scientists mean a human or potentially also an animal). Observers often report that they adapt their response behavior over the course of a psychophysical experiment. In other words, the response of an observer depends not only on the image currently on the screen but also on everything that has happened so far in the experiment, including the observer's own previous responses.

This conflicts with a standard assumption of virtually all models of visual perception: that all responses in an experiment are independent realizations of a corresponding random variable. We used one of the simplest models of visual perception to study the impact of violating this independence assumption. The psychometric function is routinely used in psychophysical studies to quantify an observer's sensitivity or bias. We showed [N1] that such violations may indeed result in incorrect inference on psychometric functions, and we proposed a very generic way to correct for these errors.
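A small simulation shows why the violation matters: responses with serial dependence but the correct marginal probability are more variable than the binomial model assumes, so binomial confidence intervals come out too narrow. The history mechanism below (repeat the previous response with some probability) is a made-up illustration, not the correction proposed in [N1].

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_bernoulli(n, p=0.75, rho=0.4):
    """Binary responses with marginal P(correct) = p but first-order serial
    dependence: with probability rho repeat the previous response,
    otherwise draw a fresh response with probability p."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for t in range(1, n):
        x[t] = x[t - 1] if rng.random() < rho else (rng.random() < p)
    return x

n_blocks, block_size, p = 2000, 50, 0.75
props = np.array([correlated_bernoulli(block_size, p).mean()
                  for _ in range(n_blocks)])

binomial_var = p * (1 - p) / block_size   # variance predicted by independence
observed_var = props.var()
print(observed_var / binomial_var)        # overdispersion: ratio well above 1
```

The marginal proportion correct is still p, so a naive fit looks fine; only the error bars are wrong, which is exactly the failure mode that silently corrupts inference.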

More recently, we built a model that combines the current stimulus with events on previous trials [N2]. This model allows us to disentangle the effects of the current stimulus from those of the experimental history. We observe that effects from previous trials are very heterogeneous but at the same time very strong: on difficult trials, the recent experimental history is nearly as good a predictor of the response as the current stimulus. This contradicts the naive assumption that our perception is mainly a representation of the environment. It suggests instead that perception combines the world around us with our own assumptions and expectations about this world.
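The idea of such a model can be sketched as a logistic regression with a history regressor alongside the stimulus. The simulated observer and its weights below are illustrative assumptions, not the model or data of [N2].

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)

# Simulate an observer whose response depends on the current stimulus
# and, additionally, on their own previous response (a history effect).
n = 5000
stim = rng.choice([-1.0, 1.0], size=n)
resp = np.zeros(n)
w_stim, w_hist = 1.0, 0.8   # assumed true weights, for illustration only
for t in range(n):
    history = w_hist * (2 * resp[t - 1] - 1) if t > 0 else 0.0
    resp[t] = rng.random() < expit(w_stim * stim[t] + history)

# Design matrix: current stimulus plus previous response (coded -1/+1)
prev = np.concatenate([[0.0], 2 * resp[:-1] - 1])
X = np.column_stack([stim, prev])

def nll(w):
    """Logistic-regression negative log-likelihood."""
    p = np.clip(expit(X @ w), 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

w_hat = minimize(nll, x0=np.zeros(2), method="BFGS").x
print(w_hat)   # recovers roughly the generating weights
```

Comparing the fitted stimulus and history weights is what makes it possible to say, trial by trial, how much of a response was driven by the image and how much by the past.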

[N1] Fründ, I, Haenel, NV, Wichmann, FA (2011): Inference for psychometric functions in the presence of nonstationary behavior. Journal of Vision, 11(6).
[N2] Fründ, I, Wichmann, FA, Macke, J (2012): Dealing with sequential dependencies in psychophysical data. Poster presented at CoSyNe 2012.

Human visually evoked potentials

During my PhD, I investigated one of the very first components of the human visual evoked potential. This early component is best seen in time-frequency representations of the average evoked potential, where it manifests as a transient phase shift of spectral components above 20 Hz. Because of its frequency localization, this signal is sometimes called the evoked gamma band response. In my PhD project, I established that this signal is reliably measurable [G1]. We then performed a number of studies that further characterized it. Most notably, we observed that evoked gamma band responses are modulated not only by stimulus parameters, as expected for early responses, but also by task demands. In particular, extremely fast behavioral responses correlate with enhanced evoked gamma band responses [G2]. This effect remained present even when observers were explicitly instructed to respond as fast as possible [G3].
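The kind of time-frequency analysis involved can be sketched with a Morlet wavelet: transforming the trial average, rather than single trials, retains only activity that is phase-locked to stimulus onset, which is what "evoked" means here. The synthetic data, sampling rate, and wavelet parameters below are illustrative assumptions, not the recordings or pipeline of [G1].

```python
import numpy as np

fs = 500.0                        # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.6, 1 / fs)  # time axis around stimulus onset (s)

def morlet(freq, n_cycles=5, fs=fs):
    """Complex Morlet wavelet at a given center frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    tw = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    w = np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
    return w / np.abs(w).sum()

# Synthetic data: 40 trials containing a phase-locked 40 Hz burst at ~100 ms
rng = np.random.default_rng(3)
burst = np.exp(-((t - 0.1) ** 2) / (2 * 0.02**2)) * np.cos(2 * np.pi * 40 * t)
trials = burst + 0.5 * rng.standard_normal((40, t.size))

# Evoked gamma: wavelet-transform the trial AVERAGE, so non-phase-locked
# activity averages out before the transform
evoked = trials.mean(axis=0)
tf40 = np.abs(np.convolve(evoked, morlet(40.0), mode="same"))
peak_time = t[np.argmax(tf40)]    # latency of the evoked gamma response
```

Transforming each trial before averaging would instead yield total (induced plus evoked) power; the distinction between the two orders of operation is central to interpreting gamma band responses.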

[G1] Fründ, I, Schadow, J, Busch, NA, Körner, U, Herrmann, CS (2007): Evoked γ oscillations in human scalp EEG are test–retest reliable. Clinical Neurophysiology, 118(1): 221–227.
[G2] Fründ, I, Busch, NA, Schadow, J, Körner, U, Herrmann, CS (2007): From perception to action: phase-locked gamma oscillations correlate with reaction times in a speeded response task. BMC Neuroscience, 8(1).
[G3] Fründ, I, Busch, NA, Schadow, J, Gruber, T, Körner, U, Herrmann, CS (2008): Time pressure modulates electrophysiological correlates of early visual processing. PLoS ONE, 3(2): e1675.
