5. Towards biologically plausible phosphene simulation

Figure 5.3: (a-c) Psychometric curves (solid lines) overlaid on experimental data (dashed lines) by Fernández et al. (2021, Fig. 2a, b). The model's probability of phosphene perception is visualized as a function of charge per phase for (a) different pulse widths, (b) different frequencies, and (c) different train durations. Note that rather than the total charge per trial, we report the charge per phase to facilitate easy comparison with the aforementioned experimental data. In panel (d), the probabilities of phosphene perception reproduced with our model are compared to the detection probabilities reported by Fernández et al. (2021, Fig. 2a, b). Predicted probabilities in panel (d) are the results of a 3-fold cross-validation on held-out test data. Colors conform to the conditions in panels a, b, and c.

In this pipeline, a convolutional neural network encoder is trained to process images or video frames and generate adequate electrode stimulation parameters. To train the encoder, a simulation of the prosthetic percept is generated by the phosphene simulator. This simulated percept is evaluated by a second convolutional neural network, the decoder, which decodes the simulated percept into a reconstruction of the original input image (Figure 5.6). The quality of the phosphene encoding is optimized by iteratively and simultaneously updating the network parameters of the encoder and decoder using backpropagation of the reconstruction error. In addition to the reconstruction error, which measures the similarity between the reconstruction and the input, we used a regularization term that measures the similarity between the phosphenes and the input. For a more detailed description of the end-to-end optimization pipeline, see de Ruyter van Steveninck et al. (2022a).

Dynamic end-to-end encoding of videos

In a first experiment, we explored the potential of using our simulator in a dynamic end-to-end encoding pipeline.
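The joint encoder-decoder optimization described above can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the architecture used in this chapter: the toy image size, the single-layer networks, the fixed linear rendering that stands in for the differentiable phosphene simulator, and the loss weight of 0.5 are all assumptions chosen for brevity.

```python
# Minimal sketch of end-to-end optimization of a phosphene encoding.
# Toy sizes and a linear stand-in simulator are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_phosphenes = 64                    # assumed number of phosphenes (toy value)
img = torch.rand(8, 1, 16, 16)       # toy batch of input frames

# Encoder: image -> one stimulation parameter per phosphene
encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, n_phosphenes), nn.Sigmoid())

# Differentiable stand-in for the phosphene simulator: each phosphene
# contributes a fixed spatial basis image, scaled by its stimulation.
basis = torch.rand(n_phosphenes, 16 * 16)

# Decoder: simulated percept -> reconstruction of the original input
decoder = nn.Sequential(nn.Linear(16 * 16, 16 * 16), nn.Sigmoid())

# Encoder and decoder parameters are updated simultaneously.
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-2)
mse = nn.MSELoss()

losses = []
for _ in range(200):
    stim = encoder(img)              # stimulation parameters
    percept = stim @ basis           # simulated phosphene percept
    recon = decoder(percept)         # reconstruction of the input
    target = img.flatten(1)
    # Reconstruction error plus a regularization term comparing the
    # simulated percept itself to the input (weight 0.5 is an assumption).
    loss = mse(recon, target) + 0.5 * mse(percept, target)
    opt.zero_grad()
    loss.backward()                  # backpropagate through the simulator
    opt.step()
    losses.append(loss.item())
```

Because the stand-in simulator is differentiable, gradients of the combined loss flow from the decoder back through the simulated percept to the encoder, which is the core requirement of the end-to-end approach.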
The simulator is initialized with 1000 possible phosphenes
RkJQdWJsaXNoZXIy MTk4NDMw