DC Element | Value | Language
dc.contributor.advisor | Röder, Brigitte | -
dc.contributor.author | Kramer, Alexander | -
dc.date.accessioned | 2023-10-13T14:36:11Z | -
dc.date.available | 2023-10-13T14:36:11Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | https://ediss.sub.uni-hamburg.de/handle/ediss/10504 | -
dc.description.abstract | Continuous interaction between vision and audition allows for a coherent multisensory representation of the external world, which helps us to orient our gaze towards unexpected noises or to guide our auditory attention towards specific sounds. Combining information from different senses into multisensory representations is referred to as multisensory integration. To use the information available in vision and audition optimally, the perceptual system must infer which information belongs together. Spatio-temporal features are particularly important in this process. However, sensory signals from a single source might be misaligned in space and time due to noise or inaccuracies in either vision or audition. Noise determines sensory reliability, i.e., the consistency of the output of the sensory system over multiple observations of an identical stimulus, whereas sensory accuracy determines the degree to which the sensory output reflects the characteristics of the stimulus in an unbiased manner. Moreover, the perceptual system does not even know a priori how many sources are in the world but must infer their number from sensory evidence and prior beliefs, a process called Causal Inference (CI; an illustrative sketch of this inference follows the metadata record below). This challenging inferential problem is always present when distinct sensory cues provide information about a common feature. Audio-visual spatial discrepancies can be induced in experimental setups. When participants localize the auditory component, the reported auditory position is shifted towards the visual position, which is commonly referred to as the Ventriloquism effect (VE). The VE is considered an example of multisensory integration and, more specifically, of CI. Continuous exposure to audio-visual spatial discrepancies leads to shifts in subsequent unimodal auditory localization, which is often referred to as the cumulative ventriloquism aftereffect (CVAE). Aftereffects can also be induced by a single exposure to an audio-visual spatial discrepancy, but this instantaneous ventriloquism aftereffect (IVAE) has been hypothesized to be mechanistically distinct from the CVAE. These aftereffects are examples of multisensory recalibration, whereby information across the senses serves to keep unisensory representations accurate. Understanding the computational principles of the Ventriloquism effect and its aftereffects is essential for understanding how multisensory integration and recalibration interact to provide a coherent multisensory representation of the world.

In Study 1 (Chapter III), two sounds were paired with visual stimuli and presented with opposite directions of audio-visual spatial discrepancy. Either the auditory or the visual component had to be localized. The reliability of the visual stimulation was high in one session and low in another. Unimodal auditory and visual stimuli were intermixed to measure auditory and potential visual aftereffects. Whereas no visual aftereffects were found, reliable auditory aftereffects were found across all conditions. Both the auditory CVAE and the auditory IVAE were reduced in the low visual reliability condition. In addition, we found a visual VE when the visual reliability was low.

The paradigm of Study 2 (Chapter IV) followed Study 1 with some alterations. Across sessions, the absolute audio-visual spatial discrepancy was varied, changing the sensory evidence for a common cause. Furthermore, an association paradigm was applied before aftereffects were induced: one audio-visual pair was presented spatio-temporally aligned, presumably increasing the system's prior belief in a common cause, and another was presented spatio-temporally randomly misaligned, presumably decreasing the system's prior belief in a common cause. The VE, the IVAE, and the CVAE increased with increasing audio-visual spatial disparity. Spatio-temporal alignment during association blocks led to an increased VE and CVAE in initial test blocks compared to misalignment during association blocks; in subsequent test blocks this pattern reversed. This modulation of the CVAE and VE was limited to the large audio-visual disparity. Model-based analysis of Study 1 revealed that the learning mechanisms underlying the CVAE and IVAE are sensitive to sensory reliabilities. Study 1 and Study 2 both suggested that the CVAE is based on a process distinct from the VE, a process that nevertheless depends on the output of multisensory integration. By contrast, the IVAE seems to be an additional outcome of the same process that underlies the VE. While the CVAE did depend on the posterior probability of a common cause in Study 2, it did not in Study 1, indicating that the sensory context might influence which information the perceptual system considers for recalibration.

Study 3 (Chapter V) investigated whether the VE and CVAE integrate explicit reward feedback to identify which of the sensory cues, vision or audition, is inaccurate. When feedback indicated accurate audition, the VE decreased over time and no CVAE was observed. These results suggest that crossmodal recalibration and multisensory integration incorporate top-down feedback, resulting in more accurate audio-visual spatial perception.

In summary, our results are in line with a common computational process for multisensory integration and instantaneous recalibration. Both effects result from an inference process that dissociates whether audio-visual disparities are likely due to noise, distinct causes, or inaccuracies that vary dynamically over time. The CVAE, on the other hand, reflects a distinct process that relies on the output of multisensory integration. Importantly, the relation between multisensory recalibration and integration is neither linear nor monotonic with respect to the size of the audio-visual disparity and the sensory reliabilities. Furthermore, the CVAE is fine-tuned to less volatile sources of inaccuracy than the IVAE. Thus, the perceptual system seems to learn the temporal dynamics of typical sources of inaccuracy. Moreover, it accounts for these distinct sources by evolving multiple recalibration mechanisms that are adjusted to the specific dynamics of these sources. External feedback might provide an important tool for learning about these sources of sensory inaccuracy and might therefore shape multisensory recalibration. | en
dc.language.iso | en | de_DE
dc.publisher | Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky | de
dc.rights | http://purl.org/coar/access_right/c_abf2 | de_DE
dc.subject | crossmodal recalibration | en
dc.subject | causal inference | en
dc.subject | spatial perception | en
dc.subject | audio-visual perception | en
dc.subject | Ventriloquism | en
dc.subject.ddc | 150: Psychologie | de_DE
dc.title | Interplay and Malleability of Multisensory Integration and Recalibration across multiple Timescales | en
dc.type | doctoralThesis | en
dcterms.dateAccepted | 2023-09-13 | -
dc.rights.cc | https://creativecommons.org/licenses/by/4.0/ | de_DE
dc.rights.rs | http://rightsstatements.org/vocab/InC/1.0/ | -
dc.subject.bcl | 77.05: Experimentelle Psychologie | de_DE
dc.subject.bcl | 77.40: Wahrnehmungspsychologie | de_DE
dc.subject.gnd | Mathematische Psychologie | de_DE
dc.subject.gnd | Neurowissenschaften | de_DE
dc.subject.gnd | Multisensorische Wahrnehmung | de_DE
dc.subject.gnd | Plastizität <Psychologie> | de_DE
dc.subject.gnd | Bayes-Lernen | de_DE
dc.type.casrai | Dissertation | -
dc.type.dini | doctoralThesis | -
dc.type.driver | doctoralThesis | -
dc.type.status | info:eu-repo/semantics/publishedVersion | de_DE
dc.type.thesis | doctoralThesis | de_DE
tuhh.type.opus | Dissertation | -
thesis.grantor.department | Psychologie | de_DE
thesis.grantor.place | Hamburg | -
thesis.grantor.universityOrInstitution | Universität Hamburg | de_DE
dcterms.DCMIType | Text | -
dc.identifier.urn | urn:nbn:de:gbv:18-ediss-112448 | -
item.advisorGND | Röder, Brigitte | -
item.grantfulltext | open | -
item.languageiso639-1 | other | -
item.fulltext | With Fulltext | -
item.creatorOrcid | Kramer, Alexander | -
item.creatorGND | Kramer, Alexander | -
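The abstract above refers to Causal Inference (CI), reliability-weighted integration, and the posterior probability of a common cause without spelling out the formalism. The following is a minimal illustrative sketch of a standard Bayesian causal-inference observer from this literature, assuming Gaussian sensory noise, a zero-centred spatial prior, and model averaging; it is not the thesis's own model, and the function and parameter names are hypothetical.

import numpy as np
from scipy.stats import norm

def causal_inference_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Illustrative Bayesian causal-inference estimate of auditory location.

    x_a, x_v         -- noisy auditory and visual position samples
    sigma_a, sigma_v -- sensory noise of each modality (inverse reliability)
    sigma_p          -- width of a spatial prior centred on 0 (assumption)
    p_common         -- prior probability that both cues share a single cause
    """
    # Likelihood of both samples under a single common cause (C = 1)
    var_sum = (sigma_a**2 * sigma_v**2
               + sigma_a**2 * sigma_p**2
               + sigma_v**2 * sigma_p**2)
    like_c1 = (np.exp(-((x_a - x_v)**2 * sigma_p**2
                        + x_a**2 * sigma_v**2
                        + x_v**2 * sigma_a**2) / (2 * var_sum))
               / (2 * np.pi * np.sqrt(var_sum)))

    # Likelihood under two independent causes (C = 2)
    like_c2 = (norm.pdf(x_a, 0.0, np.sqrt(sigma_a**2 + sigma_p**2))
               * norm.pdf(x_v, 0.0, np.sqrt(sigma_v**2 + sigma_p**2)))

    # Posterior probability of a common cause
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted fused estimate (C = 1) vs. auditory-only estimate (C = 2)
    s_fused = ((x_a / sigma_a**2 + x_v / sigma_v**2)
               / (1 / sigma_a**2 + 1 / sigma_v**2 + 1 / sigma_p**2))
    s_aud = (x_a / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_p**2)

    # Model averaging: weight both estimates by the causal posterior
    return post_c1 * s_fused + (1 - post_c1) * s_aud, post_c1

With a reliable visual cue (small sigma_v) and a small audio-visual disparity, post_c1 is high and the reported auditory position is pulled towards the visual sample, corresponding to the ventriloquism effect; lowering the visual reliability or increasing the disparity weakens this pull, mirroring the modulations described in the abstract.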
Appears in collections: Elektronische Dissertationen und Habilitationen
Files for this resource:
File | Description | Checksum | Size | Format
Dissertation_Alexander_Kramer_druck.pdf | | 559cfa0b8dd6d26b44c7bcf82a6717a0 | 12.55 MB | Adobe PDF