6.1. Introduction

Figure 6.1: Schematic illustration of the three different simulation conditions used in our study. Left: illustration of the visual environment. Right: illustration of the simulated phosphene percepts before (top row) and after a saccade (bottom row). A) The gaze-locked condition simulates the perceptual experience of a prosthesis user with a head-steered visual prosthesis without eye-tracking compensation: the phosphenes encode what is straight in front of the head-mounted camera, but the phosphene locations are coupled to eye movements. B) The gaze-contingent simulation condition emulates the percept created by a visual prosthesis with gaze-contingent eye-tracking compensation. The phosphene locations are gaze-centered (similar to the gaze-locked condition), but instead of head-centered visual input, the phosphene patterns encode a specific region of interest that matches the gaze direction. C) The gaze-ignored simulation neglects any gaze-dependent effects on phosphene vision and serves as a control condition that is unattainable in reality. In the gaze-ignored simulation, the phosphenes encode head-steered visual information (similar to the gaze-locked condition), but the phosphene simulation is displayed at a stationary location in the center of the head-mounted display.

steered, rather than gaze-steered visual input. This has two interrelated implications: 1) In head-steered prostheses, only head movements are available to scan the visual environment, while eye movements do not change the visual input. 2) Phosphenes are retinotopically encoded and are therefore coupled to eye movements (Brindley & Lewin, 1968; Dobelle & Mladejovsky, 1974; Schmidt et al., 1996). Consequently, the same visual input will be perceived at a different location depending on the gaze direction. An illustration of this 'gaze-locked' phosphene percept produced by head-steered prostheses is displayed in Figure 6.1a.
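The gaze-centered region-of-interest selection illustrated in panel B of Figure 6.1 can be sketched in a few lines. The snippet below is a minimal illustration, not the implementation used in our study; the function name, ROI size, and coordinate conventions are assumptions for the example. It crops a square ROI from a wide-field camera frame around the current gaze position, clamping the crop to the frame borders:

```python
import numpy as np

def gaze_contingent_roi(frame, gaze_xy, roi_size):
    """Crop a gaze-centered square ROI from a wide-field camera frame.

    frame    : 2-D array (H, W), the head-mounted camera image
    gaze_xy  : (x, y) gaze position in frame pixel coordinates
    roi_size : side length of the square ROI in pixels

    Hypothetical example function; names and conventions are illustrative.
    """
    h, w = frame.shape[:2]
    half = roi_size // 2
    # Clamp the ROI center so the crop stays fully inside the frame.
    cx = min(max(int(gaze_xy[0]), half), w - half)
    cy = min(max(int(gaze_xy[1]), half), h - half)
    return frame[cy - half:cy + half, cx - half:cx + half]

# Example: 480x640 camera frame, gaze near the upper-right corner
frame = np.zeros((480, 640))
roi = gaze_contingent_roi(frame, (500, 100), 128)
# roi is a 128x128 crop centered on the gaze position
```

In a gaze-contingent simulation, this ROI (rather than the full head-centered frame) would then be passed to the phosphene encoder, so that the phosphene pattern always conveys the part of the scene targeted by the gaze.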
Gaze-locked phosphene perception has been well characterized in the prior literature (Brindley & Lewin, 1968; Caspi et al., 2018; Dobelle & Mladejovsky, 1974; Paraskevoudi & Pezaris, 2019; Sabbah et al., 2014; Schmidt et al., 1996), and while it is known to cause localization problems (Sabbah et al., 2014), it is unclear to what extent these will restrict users in their daily-life activities. To mitigate the disorientation of gaze-locked phosphene percepts in head-steered prostheses, users typically undergo training to suppress eye movements, but so far results have been disappointing: localization problems are reported to persist even years after implantation (Sabbah et al., 2014). Recently, the incorporation of a compensatory eye-tracking system with gaze-contingent image processing has emerged as a potential solution (Caspi et al., 2018; McIntosh et al., 2013; Paraskevoudi & Pezaris, 2021; Titchener et al., 2018; Vurro et al., 2014). As illustrated in Figure 6.1b, gaze-contingent processing mimics natural visual sampling by processing only a gaze-centered region of interest (ROI) from a wide-field camera stream: the gaze determines which region of the camera input is used to generate the phosphenes, such that the phosphenes display information about the specific location in the environment that is targeted by the gaze direction. Several prior studies have investigated the benefits of gaze-contingent image processing. For instance, Caspi et al. (2018) performed a target localization task on a computer
