Dr. David Alais from the University of Sydney will present a talk titled
"Human time perception in audition and vision: how many clocks, and how precise?"
This study investigates whether sub-second time perception is a supramodal process or whether separate timing systems exist for vision and audition. The experiments measured duration increment thresholds for standard durations of 100, 200, 400 and 800 ms. These intervals could be defined by visual stimuli (LEDs) or auditory stimuli (tones played over loudspeakers), and each interval could be represented by 1, 2, 4 or 8 stimuli presented in parallel. Presenting multiple stimuli in parallel allowed Gaussian-distributed noise to be applied to their onsets and offsets, which blurs the interval's duration while leaving its mean constant.

Increment thresholds for each standard duration were measured at various temporal noise levels using a two-interval forced-choice procedure and an adaptive staircase method (QUEST), yielding duration increment thresholds as a function of added duration noise. This "equivalent noise" approach estimates (i) the level of internal noise in the timing process (because thresholds only rise once the added external noise exceeds the internal neural noise), and (ii) the number of samples the process pools into a single duration estimate.

Overall, duration increment thresholds (Weber fractions) were lower for audition than for vision, indicating better auditory timing resolution. Auditory thresholds increased linearly with standard duration, closely following Weber's law, whereas visual thresholds were less clearly linear and showed a quadratic tendency. In addition, estimates of internal timing noise were lower for audition than for vision. Together, these differences suggest that distinct processes underlie timing in each modality. Further tests were conducted to assess how these distinct processes might interact.
These results confirmed that there are independent timing processes for audition and vision, and that in audiovisual contexts the two processes can combine to improve efficiency.
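The equivalent-noise logic described above can be illustrated with a short sketch. This is not the authors' fitting procedure; the function, parameter values and sample counts below are hypothetical, chosen only to show the model's characteristic shape: thresholds stay roughly flat while external noise is below internal noise, then rise once it dominates.

```python
import math

def duration_threshold(sigma_ext, sigma_int, n_samples):
    """Predicted duration-increment threshold (ms) under a simple
    equivalent-noise model: internal and external noise combine in
    quadrature, and pooling n samples averages the noise down.

        threshold = sqrt(sigma_int**2 + sigma_ext**2) / sqrt(n_samples)
    """
    return math.sqrt(sigma_int**2 + sigma_ext**2) / math.sqrt(n_samples)

# Hypothetical parameters for illustration only.
sigma_int = 40.0   # internal timing noise (ms)
n_samples = 4      # stimuli pooled into one duration estimate

for sigma_ext in (0, 10, 20, 40, 80, 160):
    t = duration_threshold(sigma_ext, sigma_int, n_samples)
    print(f"external noise {sigma_ext:3d} ms -> threshold {t:6.1f} ms")
```

Fitting this two-parameter curve to measured thresholds is what lets the method separate internal noise (the elbow of the curve) from the number of pooled samples (the overall vertical scaling).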
Dr. Alais completed his PhD at the University of Sydney before undertaking postdoctoral research fellowships in the U.S.A., France and Italy. Since 2003 he has been an Australian Research Fellow at the University of Sydney. His research interests focus on audio-visual cross-modal perception and visual binocular rivalry.