6.4.1. Gaze-contingent processing improves mobility performance

Studies have specifically compared gaze-contingent with gaze-locked vision in visually guided tasks, such as searching for letters (McIntosh et al., 2013), pointing to targets on a screen (Caspi et al., 2018; Titchener et al., 2018) or even reading (Paraskevoudi & Pezaris, 2021). In addition to the aforementioned studies, which found an overall improvement with gaze-contingent image processing, the current results suggest that these improvements extend to more complex situations such as 3D orientation and mobility with high phosphene counts. These findings underscore the importance of gaze compensation for gaze-locked vision in head-steered prostheses. For the clinical cases in which eye tracking is feasible, gaze-contingent image processing is expected to yield universal advantages for the functional and subjective quality of prosthetic vision.

6.4.2. Superior performance in gaze-ignored simulation

In Experiment 1 we found no significant difference in performance between the gaze-contingent and gaze-ignored simulation conditions. The lower overall performance (i.e., lower scene recognition accuracy and lower subjective ratings) with gaze-contingent phosphene vision in Experiment 2 compared to gaze-ignored vision was unexpected. These findings contrast with the results of a prior study by Titchener et al. (2018), who found that gaze-contingent simulations yielded similar or even better target localization performance compared to gaze-ignored vision. Although we cannot be certain, we speculate that inaccuracies in our mobile eye-tracking system may have adversely affected performance (see Section 6.4.6). Imperfect gaze estimations may have caused discomfort or perceptual disturbances, masking potential benefits of the gaze-contingent phosphene simulation. In any case, the relatively high performance in the gaze-ignored control condition supports the observation that adequate functionality can be achieved with head-steered visual scanning only.

In natural vision, actions are strongly directed and guided by eye movements (Land & Hayhoe, 2001). Nevertheless, our results are in line with prior clinical work (Gilchrist et al., 1997) and simulated prosthetic vision studies (e.g., Dagnelie et al., 2006), showing that a lack of gaze-assisted visual scanning is surmountable. In this light, the functional limitations of gaze-locked artificial vision in head-steered prostheses (such as described by Sabbah et al., 2014) likely do not originate from the restricted visual scanning, but from conflicts in spatial updating.

6.4.3. Implications of neglecting gaze in simulations

Although the gaze-locked perception of phosphenes in head-steered prostheses has been well characterized in the prior literature (Brindley & Lewin, 1968; Caspi et al., 2018; Dobelle & Mladejovsky, 1974; Paraskevoudi & Pezaris, 2019; Sabbah et al., 2014; Schmidt et al., 1996), eye movements are commonly ignored in simulation studies. Our finding that performance with gaze-locked phosphene vision is significantly lower than with gaze-ignored phosphene vision is in line with results from prior work (Paraskevoudi & Pezaris, 2019), and this deficit may hold even after years of training (Sabbah et al., 2014).
Since simulations that neglect eye movements evaluate a condition that is unattainable in the clinical setting, it is important to consider that studies with gaze-ignored simulations may yield overoptimistic expectations of the functional performance.
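To make the distinction between the three simulated viewing conditions concrete, the sketch below illustrates one possible way to render a single frame under each condition. It is a minimal illustration under assumed conventions (pixel coordinates, a square phosphene grid centred on the head-camera axis, and a hypothetical render function), not the implementation used in the experiments reported here.

```python
# Minimal sketch of gaze-ignored, gaze-locked and gaze-contingent rendering.
# Assumptions (not from the study): pixel coordinates, a square phosphene
# grid, and a 2-D gaze position on the display estimated by an eye tracker.
import numpy as np

def phosphene_grid(n=32, spacing=12):
    """Return an (n*n, 2) array of phosphene offsets (pixels) around (0, 0)."""
    half = (n - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(n) - half, np.arange(n) - half)
    return np.stack([xs.ravel(), ys.ravel()], axis=1) * spacing

def sample(image, points):
    """Sample image intensity at pixel locations, clipped to the image bounds."""
    h, w = image.shape
    x = np.clip(points[:, 0].astype(int), 0, w - 1)
    y = np.clip(points[:, 1].astype(int), 0, h - 1)
    return image[y, x]

def render(image, gaze, mode, grid=None):
    """Return (draw_positions, brightness) for one simulated frame.

    gaze -- (gx, gy) gaze position on the display, in pixels
    mode -- 'gaze_ignored' | 'gaze_locked' | 'gaze_contingent'
    """
    if grid is None:
        grid = phosphene_grid()
    h, w = image.shape
    centre = np.array([w / 2.0, h / 2.0])   # head-camera axis
    gaze = np.asarray(gaze, dtype=float)

    if mode == 'gaze_ignored':
        # Phosphenes stay fixed on the display and encode the head-centred
        # image; eye movements are simply not modelled.
        pos, brightness = centre + grid, sample(image, centre + grid)
    elif mode == 'gaze_locked':
        # Phosphenes move with the eye (retinotopic rendering) but still
        # encode the head-centred image: the spatial-updating conflict of
        # head-steered prostheses without eye tracking.
        pos, brightness = gaze + grid, sample(image, centre + grid)
    elif mode == 'gaze_contingent':
        # Eye tracking shifts the sampled region along with the gaze,
        # restoring the coupling between eye movements and visual input.
        pos, brightness = gaze + grid, sample(image, gaze + grid)
    else:
        raise ValueError(mode)
    return pos, brightness

if __name__ == '__main__':
    frame = np.random.rand(480, 640)          # stand-in camera frame
    for m in ('gaze_ignored', 'gaze_locked', 'gaze_contingent'):
        pos, b = render(frame, gaze=(400, 200), mode=m)
        print(m, pos.mean(axis=0).round(1), round(float(b.mean()), 3))
```

Under these assumptions, the gaze-ignored condition differs from the clinically realistic gaze-locked condition only in where the phosphenes are drawn, which is precisely the coupling that a gaze-ignored simulation fails to evaluate.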
