implants for restoring speech perception (Zeng, 2022) can form an inspiration. Visual neuroprostheses are following similar steps in their development (Fernández et al., 2020) and are expected to become a clinical reality in the near future.

From the neuroscience perspective, neural interfaces can improve our knowledge about the brain. Age-old questions regarding the brain's machinery are becoming easier to address by opening the 'black box'. Canonical work by Hubel and Wiesel (1962) already studied the fundamental processing functions responsible for understanding our visual surroundings by recording neuron responses in kitten brains with micro-electrodes. Likewise, more complex processing functions, such as motion detection, can be explored using electrical perturbations (Britten & Van Wezel, 1998). With the recent progress in hardware technology and computational modeling (e.g., see Dado et al., 2022; Le et al., 2022), visual prosthetics and other neural interfaces can help us learn more about the neural representations in our brain than ever before.

From an artificial intelligence perspective, understanding our brain and natural intelligence is also an important goal. In this field, however, a more model-based approach is commonly adopted, as summarized in a famous quote by the physicist Richard Feynman: "What I cannot create, I do not understand." A canonical example is the work of Fukushima (1980), who created an algorithmic model of the visual system to understand how it can detect objects, forming an important basis for contemporary artificial neural networks. While, for most users, these algorithms serve a different purpose than brain modeling, they are still considered an invaluable resource in that context (Richards et al., 2019). Deep neural networks have a remarkable brain-like capacity to store visual information in hierarchically organized abstract representations (Güçlü & van Gerven, 2015).
And to close the circle: it is this brain-like design and their remarkable performance in visual tasks that make deep learning software a useful resource for creating intelligent visual prosthetics.

One central theme: digital simulations

The projects in this dissertation are of a diverse nature, but there is nevertheless a clear central theme. This thesis explores how prosthetic researchers, just like the curious dolphins on the cover of this thesis, can adapt simulations for learning and optimization. Dolphins use play as a safe form of practice for complex behaviour (Kuczaj & Eskelinen, 2014). Similarly, researchers and engineers build models of reality (prototypes) to safely test hypotheses at an early stage of development. The term digital simulations is used in this dissertation as a purposely broad term that encompasses various prototyping tools. These include virtual reality technology (for ultra-realistic, controlled environments), simulated prosthetic vision images (to recreate the percept experienced by visual prosthesis users), and deep neural networks (for creating virtual patients to evaluate and optimize prosthetic design parameters). The use of digital simulations for prototyping and optimization of prosthetic designs is discussed further in the next chapters of this dissertation. By conveying the potential and the possible limitations of digital simulations, I want this dissertation to be a practical resource for scientists and prosthetic engineers working on visual prosthetics, contributing to our scientific understanding of the opportunities and challenges in developing effective prosthetic healthcare technology.