2. Real-world indoor mobility with simulated prosthetic vision

Table 2.1: Summary of participant characteristics (n = 20). Median and interquartile range.
Age [years]: 21 (20.8-23.3)
Height [m]: 1.84 (1.75-1.87)

criteria were: absence of mobility impairments, low susceptibility to motion sickness, and normal or corrected-to-normal vision. One participant was unable to perform the experiments due to VR sickness and was therefore excluded from the analysis. The demographics of the remaining 20 participants are listed in Table 2.1. The conducted research was approved by the local ethical committee (REC, Radboud University, Faculty of Sciences) and all subjects gave written informed consent to participate.

2.2.2. Experimental setup

The experiments were situated in a 3-m-wide corridor in the basement of the university building. Two 22-m-long mobility courses were prepared, containing 7 small (30 × 50 × 90 cm) and 6 large (30 × 75 × 180 cm) cardboard boxes that were placed along the corridor and acted as obstacles. In one of the two courses, which we will refer to as the “complex environment” (as opposed to the “simple environment”), wallpaper and tape were used to provide supplemental visual gradients on the floor, the walls, and the obstacles (Figures 2.1 and 2.2). A combination of a laptop (Precision 7550, Dell Technologies, United States) and a wire-tethered head-mounted VR device (Vive Pro Eye, HTC Corporation, Taiwan) was used for the simulation of prosthetic vision. To eliminate trip hazards, the participant was always accompanied by one of the researchers and the connection cables were suspended in the air using a rod. Visual input was captured by the inbuilt frontal camera of the headset and was processed using Python (version 3.6.12), making use of the OpenCV (version 4.4.0) image pre-processing library (Bradski, 2000).
During the experiments, a low-quality version of the video input and the displayed phosphene simulation was recorded and saved for post hoc inspection. Trial duration and collisions were registered manually. Furthermore, after each trial, participants were asked to rate, on a 10-point Likert scale, the degree to which they agreed with the statement that in the current condition it was “easy to walk to the end of the hallway whilst avoiding the obstacles”. In addition to these primary endpoints, which were measured for every trial, we also gave participants the opportunity to comment on their general experience in an exit survey. Relevant observations are discussed in the results section.

2.2.3. Image processing

Input frames were obtained from the inbuilt frontal fisheye camera of the VR device and each frame was processed separately. The frames were cropped and resized to 480 × 480 pixels and, depending on the experimental condition, either conventional edge detection was performed with the Canny edge detection algorithm (CED) (Canny, 1986), or surface boundary detection was performed using SharpNet. We used the inbuilt OpenCV CED implementation, preceded by smoothing with a two-dimensional Gaussian filter. In CED, a pixel is accepted as an edge pixel if its gradient exceeds the upper threshold, or if its gradient lies between the two thresholds and it is connected to a pixel that is above

RkJQdWJsaXNoZXIy MTk4NDMw