study de Ruyter van Steveninck et al. (2022b). We trained three virtual implant users for each of the study conditions, controlling for the visual complexity of the environment (plain versus complex textures) and testing six different phosphene resolutions: 10×10, 18×18, 26×26, 34×34, 42×42 and 50×50.

3.3. Results

3.3.1. Baseline results
The training performance of the baseline models with natural vision and with no vision is displayed in Figure 3.6. For the natural vision condition, the reward starts to increase after 50,000 optimization steps, indicating successful training. The model successfully learns to avoid obstacles, as indicated by the absence of collisions after training. The model trained with no vision was unable to avoid obstacles or to maximize the validation reward. Example trajectories over the first 50 meters in the fixed test environment are visualized in Figure 3.7.

Figure 3.6: Training curves for the baseline model with natural vision. Left: number of collisions during validation. Right: the obtained validation reward.

Figure 3.7: An example trajectory in the fixed test environment, after training the baseline model with natural vision.

The results of the perturbation analysis are displayed in Figure 3.8. Masking the box directly in front of the camera resulted in an increase of the predicted reward for stepping forward. In contrast, masking the more distant area above the box is associated with a reduction of the predicted reward for forward stepping.
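To make the masking procedure concrete, the sketch below illustrates an occlusion-style perturbation analysis: square patches of the input image are masked one at a time, and the change in the predicted reward for the forward-stepping action is recorded. The network interface (a model mapping a single phosphene image to per-action predicted rewards) and the action index FORWARD are illustrative assumptions, not the implementation used in this study.

```python
import torch

FORWARD = 0  # hypothetical index of the "step forward" action


def occlusion_map(model, frame, patch=8):
    """Mask square patches of the input and record how the predicted
    reward for stepping forward changes relative to the intact frame.

    frame: tensor of shape (1, 1, H, W) holding a single phosphene image.
    Returns a grid of reward differences, one entry per masked patch.
    """
    model.eval()
    with torch.no_grad():
        baseline = model(frame)[0, FORWARD].item()  # unperturbed prediction
        _, _, h, w = frame.shape
        rows = (h + patch - 1) // patch
        cols = (w + patch - 1) // patch
        diffs = torch.zeros(rows, cols)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                masked = frame.clone()
                masked[:, :, i:i + patch, j:j + patch] = 0.0  # occlude patch
                perturbed = model(masked)[0, FORWARD].item()
                # Positive difference: masking this region raised the predicted
                # reward for stepping forward (as for the nearby box).
                # Negative difference: masking lowered it (as for the more
                # distant area above the box).
                diffs[i // patch, j // patch] = perturbed - baseline
    return diffs
```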

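Returning to the validation metrics reported in Figure 3.6, the collision count and the obtained reward could be gathered with a rollout loop along the following lines. The agent.act method, the Gym-style env.step return signature, and the info["collision"] flag are assumptions made for illustration rather than the actual evaluation code used in the study.

```python
def evaluate(agent, env, n_steps=1000):
    """Roll out a trained agent in the fixed test environment and track
    the two validation metrics: number of collisions and total reward."""
    obs = env.reset()
    collisions, total_reward = 0, 0.0
    for _ in range(n_steps):
        action = agent.act(obs)                  # greedy action from the policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
        collisions += int(info.get("collision", False))  # assumed info flag
        if done:
            obs = env.reset()
    return collisions, total_reward
```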