Amongst these studies, much of the work is focused on the extraction of geometric structure and object contours for scene simplification. McCarthy et al., for instance, proposed methods for extracting scene structure (McCarthy et al., 2013) and surface boundaries (McCarthy et al., 2011) from disparity data. Based on quantitative and qualitative image analysis, the authors suggest that these methods may improve the interpretability of prosthetic vision and could support obstacle avoidance. To behaviorally evaluate the benefits of such scene simplification approaches, Vergnieux et al. performed experiments with SPV in a virtual environment (Vergnieux et al., 2017). The study found that visual simplification reduces virtual wayfinding performance with normal vision, but improves performance with SPV. The highest performance with SPV was achieved when the scene was reduced to only the surface boundaries (i.e., a wireframe rendering).

The aforementioned literature provides solid evidence that scene simplification, and particularly contour extraction, can help to prevent ‘overcrowding’ (i.e., transmitting more visual features than can be clearly interpreted from the limited phosphene representation) and improves the interpretability of prosthetic vision in a mobility task. Nevertheless, few attempts have been made to test this empirically in a real-world setup, and several questions and challenges remain. Firstly, complex scenes may contain abundant textures and background gradients, which complicate contour extraction with conventional image processing techniques.
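The problem that texture poses for naive contour extraction, and the resulting overcrowding on a limited phosphene grid, can be sketched in a few lines of NumPy. This is an illustrative toy example only: the Sobel detector, the 16 × 16 electrode grid, and the `to_phosphenes` mapping are our own assumptions, not the pipeline used in the cited studies.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Binary edge map from Sobel gradient magnitude (threshold relative to max)."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    mag = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return mag > thresh * mag.max()

def to_phosphenes(edge_map, n=16):
    """Map a binary edge map onto an n x n 'electrode' grid: a phosphene is
    activated when any edge pixel falls inside its receptive field."""
    h, w = edge_map.shape
    return np.array([[edge_map[i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n].any()
                      for j in range(n)] for i in range(n)])

# Synthetic scene: a single object boundary, plus added background texture.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                              # one true surface boundary
scene = clean.copy()
scene[:, 40:] += 0.4 * rng.random((64, 24))      # abundant background texture

sparse = to_phosphenes(sobel_edges(clean))   # boundary-only representation
dense = to_phosphenes(sobel_edges(scene))    # texture adds spurious contours
# The textured scene activates more phosphenes than the boundary alone,
# illustrating the 'overcrowding' that motivates stricter scene simplification.
```

In this toy setting, the boundary-only map lights up only the two grid columns covering the true edge, whereas the gradient map of the textured scene additionally activates phosphenes throughout the textured region, even though no new object boundary is present.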
Although previous work has demonstrated that intelligent scene simplification methods may work in basic virtual environments (Vergnieux et al., 2017) or when evaluated with pre-converted images and videos (Han et al., 2021; Sanchez-Garcia et al., 2020), the implementation of a real-time, effective and practical image processing method in a complex real-world visual environment is a pressing issue that can bring research closer to the clinical situation. Secondly, it is unclear to what extent scene simplification contributes to improved mobility with SPV. Reducing visual information may, on the one hand, increase interpretability by preventing overcrowding; on the other hand, excessive deprivation of visual information may also impair mobility, for example because texture is an important cue used in navigation (Gibson, 1950). Explicit investigation of this trade-off between interpretability and informativeness at various phosphene resolutions may provide insight into the essential components for visually-guided mobility with prosthetic vision.

In the current study, we empirically evaluate contour extraction in a real-world indoor mobility task using a simulation of cortical prosthetic vision. We test two levels of contour-based scene simplification: an edge-based representation, which extracts visual gradients from all areas of the visual scene, versus a stricter surface-boundary representation, in which all within-surface information and background textures are removed. With this comparison in mind, our experiment is designed to address three study aims. i) To explore the restorable benefits for mobility with prosthetic vision and the required number of implanted electrodes. ii) To examine the theoretically attainable benefits of a stricter surface-boundary representation by removing all within-surface gradients and background textures.
iii) To test the feasibility of software-based scene simplification using a pre-trained deep neural network architecture for real-time surface-boundary detection.

2.2. Materials and methods

2.2.1. Participants

We recruited 21 participants at the university campus (Radboud University, Nijmegen, the Netherlands) who had no prior experience with simulated phosphene vision. Inclusion

RkJQdWJsaXNoZXIy MTk4NDMw