3. Towards a task-based computational evaluation benchmark

3.1. Introduction

In the near future, artificial phosphene vision with visual neuroprosthetics is expected to enable basic functional perception of the surroundings in individuals who became blind (Fernández et al., 2020; Lewis et al., 2015; Mirochnik & Pezaris, 2019; Niketeghad & Pouratian, 2019; Shepherd et al., 2013). The development of visual prosthetics is still at an early stage, and many design choices remain under consideration. The field is actively investigating different prototype designs, evaluating the expected functional outcomes in a variety of visual tasks (Cha et al., 1992a, 1992b; Dagnelie et al., 2007; de Ruyter van Steveninck et al., 2022b; Han et al., 2021; Sanchez-Garcia et al., 2020). The usefulness of the visual percept in complex daily-life activities will likely depend on many design factors. For instance, the number of implanted electrodes determines the number of available phosphenes and thus the ‘resolution’ of the prosthetic percept. Another influential design factor is the choice of image processing software for filtering the visual information that is captured from the environment. To make efficient use of the limited information that can be conveyed with visual prostheses, many different scene simplification algorithms have been proposed, including edge detection, depth-based processing, and semantic segmentation. Owing to the large variety of proposed alternatives, research remains inconclusive about the optimal strategy, and further experimentation is required.

Complementary to brain stimulation research, many studies use simulation paradigms to speed up the experimental cycle of hypothesis testing. Simulated prosthetic vision (SPV) studies typically use sighted participants who perform visual tasks using a simulated (e.g., virtual reality) rendering of what prosthesis users are expected to see. Although this line of research is non-invasive and relatively cost-effective, the value of behavioral SPV studies is often limited, as designs are typically the product of many arbitrary choices and parameter settings (e.g., image processing parameters). Setting up and performing behavioral experiments is a time-consuming process, and even with motivated study participants, only a restricted number of study conditions can be compared. Hence, there remains a need for fast and standardized evaluation benchmarks.

In this paper we propose a computational framework that can serve as a complementary benchmark to behavioral SPV experiments for evaluating and optimizing prosthetic vision. Our proposed method is based on a deep reinforcement learning (RL) pipeline that simulates a virtual patient performing a mobility task in a 3D visual environment. The pipeline enables fast and controlled experimentation with a virtually unlimited number of study conditions. The proof-of-principle experiments presented in this study illustrate that our computational framework evaluates the effect of different implant characteristics in a similar way to behavioral SPV research. In addition, our framework can enable automated mathematical optimization of specific free parameters. Our simulation pipeline can form a valuable addition to the toolkit of the prosthetic engineer.
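To make the general idea of such a processing chain concrete, the listing below gives a minimal Python sketch of a single step of a simulated virtual patient: a camera frame is simplified with an edge filter, rendered as a low-resolution phosphene grid, and passed to a policy that selects a movement action. This is an illustration only, not the pipeline used in this study: it assumes Sobel edge detection for scene simplification, models phosphenes as the mean brightness within a square grid of simulated electrodes, and replaces the trained deep RL agent with a trivial placeholder heuristic; the names sobel_edges, phosphene_render, placeholder_policy, and n_phosphenes are hypothetical.

# Minimal sketch of one step of a simulated-prosthetic-vision pipeline
# (illustrative only; requires numpy).
import numpy as np

def sobel_edges(frame: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map of a grayscale frame, scaled to [0, 1]."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(frame, 1, mode="edge")
    gx = np.zeros_like(frame)
    gy = np.zeros_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def phosphene_render(edge_map: np.ndarray, n_phosphenes: int = 32) -> np.ndarray:
    """One brightness value per simulated electrode: block-average the edge map
    over an n_phosphenes x n_phosphenes grid."""
    h, w = edge_map.shape
    cropped = edge_map[:h - h % n_phosphenes, :w - w % n_phosphenes]
    blocks = cropped.reshape(n_phosphenes, cropped.shape[0] // n_phosphenes,
                             n_phosphenes, cropped.shape[1] // n_phosphenes)
    return blocks.mean(axis=(1, 3))  # shape (n_phosphenes, n_phosphenes)

def placeholder_policy(observation: np.ndarray) -> int:
    """Stand-in for a trained deep RL policy: 0 = forward, 1 = left, 2 = right."""
    half = observation.shape[1] // 2
    left, right = observation[:, :half].sum(), observation[:, half:].sum()
    if left + right < 0.1 * observation.size:   # little clutter ahead: keep walking
        return 0
    return 2 if left > right else 1             # otherwise turn away from clutter

if __name__ == "__main__":
    frame = np.random.rand(128, 128)            # stand-in for a rendered camera frame
    obs = phosphene_render(sobel_edges(frame))
    print("chosen action:", placeholder_policy(obs))

In the actual framework, the random frame would be replaced by renders from a 3D environment, and the placeholder heuristic by an agent trained with deep RL; the point of the sketch is that implant characteristics (e.g., n_phosphenes) and image processing choices (e.g., the edge filter) are ordinary parameters of such a loop and can therefore be varied and evaluated automatically.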
3.1.1. Related work

Simulated prosthetic vision with sighted subjects

Our computational approach is comparable to prior SPV studies with human participants that have investigated the influence of implant characteristics on mobility performance.
