Figure 1.1: By selectively stimulating a subset of electrodes, a pattern of phosphenes (left) can be induced to shape a simplified visual representation of the surroundings (right).

1.1.3. Limitations of phosphene vision

There are many ways in which phosphene vision differs from natural vision, and the goal is not to recreate natural sight. Prosthetic percepts lack stereovision, and the color of the phosphenes cannot be controlled. Other properties, such as brightness and size, can be modulated to some extent, but this requires precise control of the electrical stimulation parameters. Possibly the most striking limitation of prosthetic vision compared to natural vision is its restricted resolution and field of view. Although these properties can to some extent be influenced by improving the implant design, the achievable resolution will not be comparable to that of natural vision. Note that some design characteristics, such as the location and number of implanted electrodes, may also depend on surgical restrictions. Altogether, the different nature of prosthetic vision and the ongoing developments in hardware design make it difficult to predict the functional outcomes, endorsing further investigation.

1.1.4. The relevance of scene simplification

For achieving a functional form of vision, it is essential to optimize the (restricted) information transfer with scene-processing software on a mobile computer. The goal is to summarize the most relevant visual features of the complex visual surroundings into a simplified, informative representation that is conveyed through phosphene vision. The choice of image processing algorithm is not trivial, and research is investigating a wide variety of solutions. On one end of the spectrum there are well-established basic image processing algorithms such as thresholding, histogram equalization, or edge detection (e.g., see Boyle, 2008); a minimal illustrative sketch of this approach is given at the end of this section. On the other end, ongoing research is exploring more intelligent software that can selectively extract task-relevant visual information, including depth estimation, saliency detection, or semantic segmentation (e.g., see Han et al., 2021), among many other possibilities.

1.1.5. Deep neural networks for prosthetic vision

In the recent literature, a particular interest is directed towards deep neural networks (DNNs). Partly owing to the increased availability of (mobile) computational resources
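As an illustration of the basic image processing approach mentioned in Section 1.1.4, the sketch below converts a camera frame into a coarse phosphene-like rendering by combining edge detection with downsampling to a hypothetical electrode grid. The grid size, Canny thresholds, activation cutoff, and Gaussian phosphene rendering are illustrative assumptions, not the parameters of any specific implant or of the simulator used in this thesis.

```python
# Minimal sketch (illustrative assumptions): edge-based scene simplification
# rendered as a coarse phosphene pattern. Grid size, thresholds, and the
# Gaussian phosphene model are hypothetical choices, not implant parameters.
import cv2
import numpy as np


def simulate_phosphenes(frame_gray, grid=(32, 32), out_size=256, sigma=2.0):
    """Map a grayscale frame to a phosphene-like rendering.

    1. Detect edges (basic scene simplification).
    2. Downsample the edge map to the electrode-grid resolution,
       giving one stimulation amplitude per (hypothetical) electrode.
    3. Render each active electrode as a Gaussian blob ("phosphene").
    """
    # 1. Edge detection (Canny thresholds chosen ad hoc).
    edges = cv2.Canny(frame_gray, 100, 200)

    # 2. One value per electrode: average edge activity within each grid cell.
    amplitudes = cv2.resize(edges.astype(np.float32) / 255.0, grid,
                            interpolation=cv2.INTER_AREA)

    # 3. Render phosphenes as Gaussian blobs on a blank canvas.
    canvas = np.zeros((out_size, out_size), dtype=np.float32)
    ys = np.linspace(0, out_size - 1, grid[1])
    xs = np.linspace(0, out_size - 1, grid[0])
    yy, xx = np.meshgrid(np.arange(out_size), np.arange(out_size),
                         indexing="ij")
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            a = amplitudes[i, j]
            if a > 0.1:  # ignore weakly activated electrodes
                canvas += a * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                     / (2 * sigma ** 2))
    return np.clip(canvas, 0.0, 1.0)


if __name__ == "__main__":
    # "example_scene.png" is a hypothetical input image.
    frame = cv2.imread("example_scene.png", cv2.IMREAD_GRAYSCALE)
    percept = simulate_phosphenes(frame)
    cv2.imwrite("phosphene_rendering.png", (percept * 255).astype(np.uint8))
```

More intelligent scene-processing approaches, such as the DNN-based methods discussed next, would replace the fixed edge-detection step with learned, task-relevant feature extraction while keeping the same restricted phosphene output.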