(Cha et al., 1992b; de Ruyter van Steveninck et al., 2022b; Han et al., 2021; Horne et al., 2015; Srivastava et al., 2009; Vergnieux et al., 2017). Several of these studies focused on software requirements, such as Vergnieux et al. (2017), who found that contour-based scene simplification improves navigation performance. Other studies investigated hardware requirements, such as Cha et al. (1992b) and Srivastava et al. (2009), who found that a minimal number of electrodes in the range of 325 to 600 is informative for mobility. A previously published study by our own consortium (de Ruyter van Steveninck et al., 2022b) found an interaction between the number of phosphenes and the preferred complexity of the phosphene representation, suggesting that the 'optimal' strategy for image processing needs to be tailored to the characteristics of the implant. This latter study is used as an explicit baseline reference for the current work, and it will be referred to as the baseline study.

In silico optimization with deep learning

Besides human-based simulations, other studies - similar to the current study - have explored a computational framework for the optimization of prosthetic vision. For instance, White et al. (2019) trained a deep neural network with RL and explored the value of using intermediate layers as scene processing filters. An important difference with the current study is that our RL agents are used as virtual implant users to evaluate the usefulness of simulated phosphenes. Three prior related studies have investigated end-to-end deep learning frameworks for the optimization of prosthetic vision (de Ruyter van Steveninck et al., 2022a; Granley et al., 2022a; Küçükoğlu et al., 2022). The first two studies focused on image reconstruction, and the latter study extended the framework with more dynamic tasks (playing Atari games) using RL.
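All of the studies above rely on some form of simulated phosphene vision (SPV). As a minimal illustration of the general idea, the sketch below (plain NumPy; the function name, grid layout, and all parameter values are illustrative and do not reproduce any cited simulator) renders a preprocessed grayscale frame as a square grid of Gaussian phosphenes whose brightness tracks the local image intensity:

```python
import numpy as np

def simulate_phosphenes(frame, n_phosphenes=16, sigma=1.5):
    """Render a grayscale frame as an n x n grid of Gaussian phosphenes.

    frame: 2D float array in [0, 1] (e.g. an edge-detected image).
    n_phosphenes: number of phosphenes per side (illustrative default).
    sigma: phosphene spread in pixels.
    """
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=float)
    ys = np.linspace(0, h - 1, n_phosphenes)
    xs = np.linspace(0, w - 1, n_phosphenes)
    yy, xx = np.mgrid[0:h, 0:w]
    for cy in ys:
        for cx in xs:
            # Phosphene brightness follows the image intensity sampled
            # at the (virtual) electrode location.
            b = frame[int(round(float(cy))), int(round(float(cx)))]
            if b > 0:
                out += b * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                  / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)
```

Varying `n_phosphenes` in such a simulation is the computational analogue of varying the electrode count discussed above.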
Compared to the aforementioned studies, the current paper aims to increase the clinical relevance of the task by implementing a 3D virtual environment for testing mobility performance. This benchmark is more comparable to prior simulated prosthetic vision research with sighted participants (Dagnelie et al., 2007; de Ruyter van Steveninck et al., 2022b; McCarthy et al., 2015; Srivastava et al., 2009; Vergnieux et al., 2017). Besides the difference in task, our paper diverges from the aforementioned studies in the optimization approach. Rather than finding a completely new encoding strategy through optimization of a convolutional encoder network with many parameters, the current paper exploits the variety of existing image processing approaches for mobility, and investigates how these can be tailored.

3.2. Methods

We present an RL-based computational pipeline for the evaluation of prosthetic vision. We explicitly compare our computational benchmark with a baseline study that tested mobility with sighted subjects using SPV (de Ruyter van Steveninck et al., 2022b). We implemented a virtual clone of the same environment - a long corridor with obstacles - in which a virtual agent is trained to navigate using SPV. We test several baseline conditions (including natural vision) and different phosphene simulation conditions in a series of two experiments. We compare the effect of different phosphene resolutions, different visual complexities of the environment, and different image processing settings.

3.2.1. The virtual mobility setup
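To make the evaluation logic concrete, the toy sketch below caricatures an agent-in-a-corridor benchmark with a dependency-free 1D stand-in (class names, dynamics, and reward values are all invented for illustration and do not correspond to the actual 3D simulator): an agent advances through a corridor, observes the obstacle at its current position, and is scored on collisions, so that policies operating under different observation models can be compared on the same environment:

```python
import numpy as np

class CorridorEnv:
    """Toy 1D stand-in for a corridor-with-obstacles mobility task."""

    def __init__(self, length=20, n_lanes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.length = length
        self.n_lanes = n_lanes
        # Lane index of the obstacle at each corridor position (-1 = clear).
        self.obstacles = rng.integers(-1, n_lanes, size=length)
        self.reset()

    def reset(self):
        self.pos, self.lane = 0, 1
        return self._observe()

    def _observe(self):
        obstacle = self.obstacles[self.pos] if self.pos < self.length else -1
        return int(obstacle), self.lane

    def step(self, action):
        # action in {-1, 0, +1}: sidestep left, go straight, sidestep right.
        self.lane = int(np.clip(self.lane + action, 0, self.n_lanes - 1))
        collided = self.obstacles[self.pos] == self.lane
        self.pos += 1
        reward = -1.0 if collided else 0.1  # penalize collisions, reward progress
        return self._observe(), reward, self.pos >= self.length

def evaluate(policy, env):
    """Return the total reward a policy earns over one corridor traversal."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

def avoid(obs):
    """Hand-crafted policy: sidestep whenever the obstacle shares our lane."""
    obstacle, lane = obs
    if obstacle != lane:
        return 0
    return 1 if lane == 0 else -1
```

In the actual pipeline the observation would be a simulated phosphene image rather than a symbolic obstacle lane, and the policy a trained neural network; the point of the sketch is only the evaluation loop, in which the same environment scores agents under different input conditions.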
