Miguel Solinas will defend his thesis, "Dual memory system to overcome catastrophic forgetting", on Thursday, December 9, 2021 at 2 p.m.
One of the main characteristics that make human beings unique is their ability to learn continually. It is part of individual development, and it is vital for making progress and avoiding stagnation. To evolve, human beings must gain experience and acquire competencies that continually broaden their skills. Artificial neural networks, in contrast, lack the capacity to store memories and to learn continually: they suffer from catastrophic forgetting, in which old experiences are erased as new ones are learned.
Deep learning has yielded remarkable results in many applications; nevertheless, artificial neural networks continue to forget. For almost three decades, researchers have addressed catastrophic forgetting by drawing on three mechanisms observed in the brain: neurogenesis, synaptic consolidation and replay. First, neurogenesis-based approaches evolve the neural network architecture, adapting to different training experiences with independent sets of parameters. Second, synaptic consolidation-based approaches limit changes to parameters that are important for previously learned experiences, so new experiences rely on neurons that are less useful for previous ones. Third, replay-based approaches overcome catastrophic forgetting by replaying a portion of previously learned experiences. Previously learned information can be replayed in two ways: with real samples (rehearsal) or with synthetic samples (pseudo-rehearsal).
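As a minimal illustration of the rehearsal idea described above (not code from the thesis), a fixed-size episodic memory can be maintained with reservoir sampling and its contents interleaved with each new task's data; the class and function names here are hypothetical:

```python
import random

class RehearsalBuffer:
    """Fixed-size episodic memory filled by reservoir sampling,
    a common choice for rehearsal-based continual learning.
    Illustrative sketch only; not the thesis's implementation."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []        # stored (sample, label) pairs
        self.seen = 0           # total samples observed so far
        self.rng = random.Random(seed)

    def add(self, sample, label):
        # Reservoir sampling keeps every seen item with equal probability.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((sample, label))
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (sample, label)

    def replay_batch(self, k):
        # Draw up to k stored examples to mix with the new task's batch.
        k = min(k, len(self.buffer))
        return self.rng.sample(self.buffer, k)

def mixed_batch(new_batch, buffer, replay_size):
    """Interleave current-task data with replayed old-task data."""
    return list(new_batch) + buffer.replay_batch(replay_size)

# Usage: stream 100 old-task samples, then build one mixed batch.
buf = RehearsalBuffer(capacity=10, seed=1)
for i in range(100):
    buf.add(i, i % 5)
batch = mixed_batch([(200, 0), (201, 1)], buf, replay_size=5)
```

Training on `batch` rather than on the new data alone is what lets the network revisit old experiences and so resist forgetting.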
This thesis brings together contributions on continual learning, on the properties of autoencoders, and on knowledge transfer. First, we draw a distinction between continual learning and catastrophic forgetting. Second, we introduce an auto-associative memory module and a sampling method that generate synthetic samples for capturing and transferring knowledge, which replay-based approaches can employ in continual learning. Third, we propose a continual learning model for settings where privacy concerns arise. We improve and extend this model by combining pseudo-rehearsal and rehearsal, yielding an efficient and competitive solution that improves on state-of-the-art results. Finally, in a comprehensive investigation, we attempt to determine which pseudo-samples replay-based approaches should use to alleviate catastrophic forgetting. We detail the methodological aspects of each contribution and support our claims with evidence on datasets such as MNIST, SVHN, CIFAR-10 and CIFAR-100.
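The thesis's pseudo-rehearsal contributions rely on a memory module that generates synthetic samples; the simplest classical baseline for this idea (Robins, 1995; not the thesis's autoencoder-based method) labels random input patterns with a frozen copy of the previously trained model, so no real past data needs to be stored. A sketch, with all names hypothetical:

```python
import random

def make_pseudo_items(old_model, input_dim, n, rng):
    """Classic pseudo-rehearsal: random input vectors are labelled by
    the frozen old model, producing synthetic (input, target) pairs
    that stand in for past experiences. Illustrative sketch only."""
    items = []
    for _ in range(n):
        x = [rng.uniform(-1.0, 1.0) for _ in range(input_dim)]
        y = old_model(x)            # pseudo-target from the old network
        items.append((x, y))
    return items

# Toy frozen model standing in for the network trained on earlier tasks:
# here just a fixed scalar function of the input.
def frozen_model(x):
    return sum(x)

rng = random.Random(0)
pseudo_set = make_pseudo_items(frozen_model, input_dim=4, n=8, rng=rng)
```

Replacing the random inputs with samples decoded from a trained autoencoder's latent space is what moves this baseline toward the auto-associative memory module described in the abstract.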
You can follow this thesis defence (in English) via the following link: https://univ-grenoble-alpes-fr.zoom.us/j/96446529234?pwd=VEQzVERmM1dvcGpodUdUNjlyTUc2QT09