The project is proposed under the scientific supervision of Dr. Raphaël Thézé, who works on cross-modal influences of audio and visual stimuli during speech processing at the Auditory, Speech and Language Neuroscience Group, UniGE.

Job Description

Multisensory integration has been extensively studied using audio-visual speech, but mostly in settings where the subject sits in front of a computer while simultaneously listening to an audio track and watching a video clip. Although the brain can segregate and integrate audiovisual speech streams in complex settings, as illustrated by the “Cocktail Party Problem”, current research does not properly address the neuronal mechanisms at work. Doing so requires an environment whose complexity approaches that of real life while every variable remains under control. Virtual Reality (VR) could be the ideal tool for bridging this gap.

In 2019, we conducted a research project in which we generated realistic avatars to study audiovisual speech processing using a perceptual illusion known as the “McGurk effect”. One of the main innovations of this project was the highly realistic lip movements synchronized to speech. Because both the avatars and the speech were computer-generated, we could mismatch the auditory and visual stimuli at will while maintaining complete control over the time course of the stimuli in both modalities. Moreover, we designed the avatars to be usable in a VR environment.

We intend to push the McGurk project forward. The idea is to create a virtual audio-visual environment with multiple talking avatars in a “cocktail party” situation. While being recorded with a mobile EEG, the subjects will listen to the conversations and identify target words. The virtual environment allows the subjects to move freely among the avatars and face them while listening. Eye tracking will verify whether the subjects use visual information from the lips. Finally, using McGurk illusions to create the target words will confirm that audio-visual processing occurred (i.e. the subject can only hear the target word if they were looking at the avatar’s lips). For greater ecological validity, we will base the avatars’ movements on actual individuals digitized by means of motion capture. Embodiment and a sense of agency will be given to the subject in the virtual space through a personal avatar and possible interaction with objects.
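To make the mismatching logic concrete, here is a minimal sketch of how an incongruent target stimulus could be specified. The trial structure and field names are illustrative assumptions rather than the project’s actual implementation, but the pairing of auditory /ba/ with visual /ga/, typically perceived as /da/, is the classic McGurk fusion.

```python
from dataclasses import dataclass

@dataclass
class AVTrial:
    """One audio-visual stimulus spoken by a talking avatar (illustrative structure)."""
    audio_syllable: str    # syllable carried by the audio track
    visual_syllable: str   # syllable articulated by the avatar's lips
    expected_percept: str  # percept predicted when the subject watches the lips

# Classic McGurk fusion: auditory /ba/ dubbed onto visual /ga/ is typically heard as /da/,
# so the target word is only perceived if the subject actually looks at the avatar's lips.
mcgurk_target = AVTrial(audio_syllable="ba", visual_syllable="ga", expected_percept="da")

# Congruent control: audio and lips match, so the percept should follow the audio alone.
congruent_control = AVTrial(audio_syllable="ba", visual_syllable="ba", expected_percept="ba")
```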

The project requires the candidate to:

  • build a 3D environment
  • create the avatars and their animations with motion capture
  • generate the stimuli (i.e. synchronize speech with lip movement)
  • pilot without and with EEG cap
  • generate triggers for the EEG
  • test the synchronization of events in the VR environment with the EEG data (see the sketch after this list)
  • collect and analyze the data
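
For the trigger and synchronization steps, one common approach is to stream event markers over Lab Streaming Layer (LSL) so that VR events and EEG samples can be aligned on a shared clock. The snippet below is a minimal Python sketch of that idea using pylsl; the stream name and marker labels are placeholders, and in the actual setup the markers might instead be sent directly from Unity (e.g. through an LSL plugin) or as hardware triggers.

```python
from pylsl import StreamInfo, StreamOutlet, local_clock

# Declare an irregular-rate marker stream that the EEG acquisition software can record
# alongside the EEG signal (stream name and labels are illustrative placeholders).
info = StreamInfo(name="VR_Markers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string", source_id="vr_mcgurk_pilot")
outlet = StreamOutlet(info)

def send_marker(label: str) -> None:
    """Push a single string marker, timestamped on the shared LSL clock."""
    outlet.push_sample([label], local_clock())

# Example: mark the onset of a McGurk target word spoken by one of the avatars.
send_marker("avatar2_target_onset")
```

During analysis, the recorded marker stream can then be compared against the EEG trigger channel (or a photodiode signal) to quantify latency and jitter before running the main experiment.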


Contact
  • vr@fcbg.ch

Required Profile

We are looking for excellent candidates with a strong engineering background and an interest or initial training in neuroscience. Prior experience in virtual reality and signal processing is recommended. The project will involve software development (C++/C#, Unity3D) and conducting a behavioral experiment involving signal processing and data analysis (Python/MATLAB).

The internship is intended for MSc-level students performing their 5- to 6-month final research project in 2020.

The position is full-time at the FCBG, Campus Biotech.
