The project is proposed under the scientific supervision of Prof. Diego Ghezzi (LNE/EPFL, https://lne.epfl.ch/), who is working on the development of retinal prostheses.
The general design principle behind retinal prostheses follows a simple assumption: visual acuity can be improved by increasing the electrode density, while a large visual field can be obtained by enlarging the retinal coverage with a larger prosthesis. Beyond this assumption, very little is known about the impact of electrode density and field of view on behavioral performance under prosthetic vision. In the literature, the effect of limited vision has been tested in different ways. The impact of visual acuity has been assessed with image-recognition tasks using pixelated images. For example, in a recent report (Jung et al. 2015), the authors estimated the number of pixels required by presenting images to healthy subjects on a monitor. This is a standard experimental setting in the field for emulating prosthetic vision. The impact of a shrinking visual field can be emulated in a similar manner. Concerning the visual field, mobility skills have also been tested on healthy subjects with a prosthetic-vision simulator based on a portable monitor.

Vision, however, is a complex experience that involves multi-sensory integration and motor coordination at the same time. We hypothesize that experiments under emulated prosthetic vision should be improved by using immersive multi-sensory environments that provide a fully realistic experience to the subjects. This will allow scientists to obtain meaningful information for the design of new retinal prostheses, validate algorithms for image processing, and prove the functional benefit of the device. To reach this goal, we will build a virtual reality (VR) environment and use it to advance our work on POLYRETINA.
Reference: Jung, J.-H., Aloni, D., Yitzhaky, Y. & Peli, E. Active confocal imaging for visual prostheses. Vision Res. 111, 182–196 (2015).
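The pixelation approach described above can be sketched as a simple block-averaging filter. The following is a minimal illustration using NumPy, not the actual experimental software; the function name, the electrode-grid size, and the test image are hypothetical placeholders:

```python
import numpy as np

def pixelate(image: np.ndarray, n_electrodes: int) -> np.ndarray:
    """Emulate low-resolution prosthetic vision by block-averaging a
    grayscale image down to an n_electrodes x n_electrodes grid, keeping
    the original image size (each block stands for one emulated electrode)."""
    h, w = image.shape
    ys = np.linspace(0, h, n_electrodes + 1, dtype=int)
    xs = np.linspace(0, w, n_electrodes + 1, dtype=int)
    out = np.empty_like(image, dtype=float)
    for i in range(n_electrodes):
        for j in range(n_electrodes):
            block = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = block.mean()
    return out

# Example: a 64x64 horizontal gradient reduced to an 8x8 "electrode" grid.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
low_res = pixelate(img, 8)
```

Varying `n_electrodes` in such a filter is what allows the number of required pixels to be estimated behaviorally, as in the Jung et al. setting.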
Several scenarios common in daily life can be implemented in a VR environment, such as an office, a kitchen, a street, and a corridor for obstacle avoidance (other scenarios may be implemented as needed). These scenarios will be used to test the behavioral performance of subjects under emulated prosthetic vision. Prosthetic vision will be emulated with a tool for graphical rendering of the virtual environment (vertex/pixel shaders) that simulates the specific filters related to the design and validation of POLYRETINA. Particularly relevant in this project is the use of eye trackers embedded in the VR goggles to dynamically position the filters in the gaze direction of the user.
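The gaze-contingent placement of the field-of-view filter can be illustrated as follows. In the real system this logic would live in a pixel shader fed by the eye tracker; the NumPy sketch below only shows the geometry, and the function name, coordinate convention, and radius parameter are assumptions for illustration:

```python
import numpy as np

def apply_gaze_mask(image: np.ndarray, gaze_xy, fov_radius: float) -> np.ndarray:
    """Restrict the visible field to a disc centered on the current gaze
    point (normalized [0, 1] screen coordinates), emulating a prosthesis
    that covers only part of the retina. Pixels outside the disc are zeroed."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = gaze_xy[0] * w, gaze_xy[1] * h
    r = fov_radius * min(h, w)          # disc radius in pixels
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return np.where(mask, image, 0.0)

# Gaze at the image center; visible disc radius = 25% of the shorter side.
frame = np.ones((100, 100))
masked = apply_gaze_mask(frame, (0.5, 0.5), 0.25)
```

Updating `gaze_xy` every frame from the eye tracker is what keeps the emulated implant field locked to the user's gaze rather than to the head direction.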
Validation phase. After the development of the VR scenarios and the graphical filters, the project will proceed to a trial in which healthy subjects are monitored under emulated prosthetic vision. During the trial, we will measure the success rate and the time needed to recognize common objects in everyday scenarios or to perform obstacle avoidance.
The final goal is to provide a quantitative evaluation of performance under the various configurations of image filtering. This will allow us to inform future designs of the POLYRETINA device (implant size, number of electrodes, and density) and to predict its functional efficacy.
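The per-configuration evaluation could be aggregated along these lines. The trial records below are purely hypothetical placeholders (configuration labels, outcomes, and times are invented for illustration, not results):

```python
from statistics import mean

# Hypothetical trial records: (configuration label, recognized?, time in seconds).
trials = [
    ("config-A", True, 4.2),
    ("config-A", True, 5.1),
    ("config-A", False, 10.0),
    ("config-B", True, 8.7),
    ("config-B", False, 10.0),
    ("config-B", False, 10.0),
]

def summarize(records):
    """Success rate and mean completion time for each filter configuration."""
    summary = {}
    for config in {r[0] for r in records}:
        rows = [r for r in records if r[0] == config]
        summary[config] = {
            "success_rate": sum(r[1] for r in rows) / len(rows),
            "mean_time_s": mean(r[2] for r in rows),
        }
    return summary

results = summarize(trials)
```

Comparing such summaries across electrode counts and implant sizes is what would ground the design recommendations for POLYRETINA.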