8. General discussion

Hardware: prototyping the implant design

The results in Chapter 2 and Chapter 6 illustrate how functional prototyping with simulation studies can help to validate intuitive design assumptions at an early stage. For instance, implanting more electrodes is intuitively expected to improve functional performance, but our results and those of other simulation studies (e.g., Cha et al., 1992b; Dagnelie et al., 2007; Srivastava et al., 2009) indicate that basic mobility can be achieved with a relatively low number of electrodes. Along similar lines, there may be unanticipated effects of other design factors, such as the electrode separation distance or the implantation site (e.g., see Endo et al., 2019; Thorn et al., 2020; Zapf et al., 2016). Simulation studies can help to provide a more nuanced view of the hardware requirements of visual prostheses. They provide a means for an informed trade-off between limiting factors, such as surgical risks, and the functional benefits that can be gained.

Software: deep learning-based scene simplification

Besides hardware prototyping, SPV can facilitate the evaluation of scene processing software (e.g., see Barnes et al., 2011; Guo et al., 2018; Han et al., 2021; Horne et al., 2015; Sanchez-Garcia et al., 2020; van Rheede et al., 2010). Interestingly, in contrast to related work (Han et al., 2021; Sanchez-Garcia et al., 2020), the results in Chapter 2 revealed no benefits of deep learning-based scene processing. Although these negative results are likely implementation-specific, they illustrate how testing scene processing software in real time and under more realistic conditions can lead to unsatisfactory results. On the one hand, this is not unique to deep learning-based implementations but holds for image processing in general: finding an optimal scene processing strategy for natural environments requires extensive evaluation and iterative optimization.
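The kind of simulation-based prototyping discussed above can be illustrated with a minimal sketch: a toy scene is first simplified by gradient-based edge extraction (a crude stand-in for the deep learning-based scene simplification models discussed here) and then average-pooled down to electrode grids of different sizes, mimicking the limited resolution of an implant. All function names and parameters below are illustrative assumptions, not the implementation used in this thesis.

```python
import numpy as np

def simplify_scene(image):
    """Crude edge-based scene simplification (an illustrative stand-in
    for deep learning-based simplification; not the thesis pipeline)."""
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy)
    return edges / edges.max() if edges.max() > 0 else edges

def phosphene_encode(image, n_electrodes):
    """Average-pool the (simplified) scene onto a square n x n electrode
    grid, where n = sqrt(n_electrodes); crops any remainder rows/columns."""
    h, w = image.shape
    n = int(np.sqrt(n_electrodes))
    cropped = image[:h - h % n, :w - w % n]
    blocks = cropped.reshape(n, cropped.shape[0] // n, n, cropped.shape[1] // n)
    return blocks.mean(axis=(1, 3))

# Toy scene: a bright square on a dark background.
scene = np.zeros((64, 64))
scene[16:48, 16:48] = 1.0

# Compare 'percepts' at increasing electrode counts.
for n_electrodes in (16, 64, 256):
    percept = phosphene_encode(simplify_scene(scene), n_electrodes)
    print(n_electrodes, percept.shape)
```

Even this toy setup makes the trade-off discussed above tangible: the coarse 16-electrode percept already conveys the location of the object, while additional electrodes mainly refine its contours.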
On the other hand, deep neural networks are commonly noted for their potentially limited generalizability and interpretability, and our results underline the need for more robust solutions.

Toward generalizable and understandable AI

Notably, deep learning is a fast-developing field, and many efforts are being undertaken to develop more robust and understandable models (Barredo Arrieta et al., 2020). By making a robust and understandable selection of the relevant information in the environment, intelligent scene processing software performs steps similar to those of the brain in natural vision. With the right datasets, objective functions, architectures and learning rules, deep neural networks can be constrained to behave in a more brain-like manner (Kubilius et al., 2019; Richards et al., 2019). From this perspective, the development of intelligent scene processing software is limited not by inherent restrictions but by implementational challenges. Further theoretical development, software prototyping and experimentation can advance the quality of intelligent scene processing toward more generalizable solutions.

Guiding software development using virtual environments

In simulation experiments, virtual environments (e.g., see Chapter 6) can provide an alternative to implementing intelligent deep learning-based algorithms. As all information about the environment is readily available, there is no need for computationally demanding deep neural network-based predictions regarding the structure of the environment. In fact, the precise ‘ground-truth’ scene representations in virtual environments can serve as an idealized
RkJQdWJsaXNoZXIy MTk4NDMw