On the predictive and explanative roles of deep neural networks in neuroscience
Keywords: Neural modeling, Computer vision, Unsupervised learning, Neuroimaging biomarkers

Synopsis
This dissertation investigates the application of deep neural networks (DNNs) in neuroscience, with an emphasis on the trade-off between predictive power and explanatory insight. The research highlights three key findings.

First, a novel unsupervised learning approach using an Adversarial Autoencoder (AAE) successfully detected early signs of epileptogenesis in EEG recordings, demonstrating the potential of DNNs for proactive diagnosis even when labeled data are scarce.

Second, experiments with Convolutional Neural Networks (CNNs) showed that they exploit spatial relationships between features for object recognition, especially with textureless images, but do not capture overall shape holistically. This finding clarifies how these models "see" and demonstrates a practical application of post-hoc model insight.

Third, the research investigated which matters more for mimicking different parts of the visual cortex: the network's architecture or the training it receives. For the early visual area (V1), the inherent complexity of even randomly initialized networks was surprisingly effective at predicting responses, whereas higher-level areas (IT, VO) required specific training. This distinction suggests that different brain regions may rely on different computational strategies, an insight gained through comparative use of the models without a full explanation of the models themselves.

In conclusion, this dissertation argues that DNNs are valuable tools in neuroscience, not only for prediction but also for gaining deeper insight. While complex models can be "black boxes," careful validation allows us to use them effectively: as powerful predictors (as in early disease detection) and, crucially, as tools that help us generate new questions and expand our understanding of the brain.
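The scoring logic behind the first finding can be sketched in a simplified form. The dissertation used an Adversarial Autoencoder, but the underlying idea — train a reconstruction model on baseline recordings only, then flag windows whose reconstruction error is unusually high — can be illustrated with a linear autoencoder (PCA) in plain NumPy. The synthetic data, feature count, component count, and threshold below are all illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for baseline EEG feature windows (e.g. band powers):
# rows are time windows, columns are features. Real recordings would replace this.
baseline = rng.normal(size=(500, 16))

# "Train" a linear autoencoder: PCA keeps the k directions that best
# reconstruct normal (baseline) activity.
k = 4
mean = baseline.mean(axis=0)
_, _, Vt = np.linalg.svd(baseline - mean, full_matrices=False)
components = Vt[:k]  # shared encoder/decoder weights, shape (k, n_features)

def reconstruction_error(windows):
    """Anomaly score: squared error after encode -> decode."""
    codes = (windows - mean) @ components.T      # encode to k dimensions
    recon = codes @ components + mean            # decode back to feature space
    return ((windows - recon) ** 2).sum(axis=1)

# Threshold chosen from baseline scores alone (here: 99th percentile),
# so no labeled "abnormal" examples are needed.
threshold = np.percentile(reconstruction_error(baseline), 99)

# A window drawn from a shifted distribution scores far above the threshold.
abnormal = rng.normal(loc=3.0, size=(1, 16))
print(reconstruction_error(abnormal)[0] > threshold)
```

The design choice mirrors the unsupervised setting described in the synopsis: because the model only ever sees baseline data, anything it fails to reconstruct well is a candidate anomaly, which is what makes the approach usable when labeled pathological data are scarce.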
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.