Code and Data

Datasets

The experimental data that were collected for the studies in Chapters 2 and 6 can be found on the Radboud University data repository:

Chapter 2
    Collection: Obstacle avoidance with simulated prosthetic vision in plain and complex visual environments
    DOI: https://doi.org/10.34973/ymcn-fe47

Chapter 6
    Collection: Gaze-contingent XR phosphene simulation for mobility, scene recognition and visual search
    DOI: https://doi.org/10.34973/gx4r-n774

The deep learning models described in Chapters 4 and 5 were trained on publicly available datasets (cited in the corresponding chapters). No datasets were used for the virtual experiments in Chapter 3. The experimental results can be reproduced using the scripts referenced in the section below.

Analysis Scripts

All scripts for the experiments and data analyses in this dissertation can be found on GitHub:

Chapter 2: https://github.com/neuralcodinglab/WF1-Experiments
Chapter 3: https://github.com/neuralcodinglab/RL-mobility
Chapter 4: https://github.com/neuralcodinglab/viseon/tree/e2e_paper
Chapter 5: https://github.com/neuralcodinglab/dynaphos-experiments
Chapter 6: https://github.com/neuralcodinglab/SPVGazeAnalysis

RkJQdWJsaXNoZXIy MTk4NDMw