catastrophic forgetting

#datascience #artificialneuralnetworks #definition
Deep neural networks have a weakness: they struggle to learn multiple tasks sequentially.

Knowledge of previous tasks tends to be abruptly forgotten while training on new ones. The weights in the network that are important for task A are overwritten to better handle task B; in learning how to do task B, the model forgets how to do task A.
This is catastrophic forgetting.
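The mechanism can be shown with a toy sketch (my own illustration, not from the linked paper): a single weight trained by gradient descent on task A, then on task B. Training on B overwrites the weight that A relied on, so the task-A error blows back up.

```python
# Toy single-weight model: predict y = w * x, with x fixed at 1 for
# simplicity. All numbers here are illustrative.

def loss(w, target):
    # squared error between the weight and the task's ideal weight
    return (w - target) ** 2

def train(w, target, steps=100, lr=0.1):
    # plain gradient descent on the current task only
    for _ in range(steps):
        grad = 2 * (w - target)  # d/dw of (w - target)^2
        w -= lr * grad
    return w

TASK_A, TASK_B = 2.0, -3.0

w = train(0.0, TASK_A)
loss_a_after_a = loss(w, TASK_A)  # ~0: task A learned

w = train(w, TASK_B)
loss_a_after_b = loss(w, TASK_A)  # large: task A forgotten
loss_b_after_b = loss(w, TASK_B)  # ~0: task B learned
```

Plain sequential gradient descent has no memory of task A: nothing in the update rule distinguishes a weight that task A depends on from one it doesn't.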

Unlike artificial neural networks, people and animals can learn continuously. Mammals may avoid catastrophic forgetting by protecting previously acquired knowledge: it is consolidated by rendering the relevant parts of the brain less plastic, and therefore more stable.
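This reduced-plasticity idea is roughly what the linked PNAS paper (elastic weight consolidation) does in software: when training on task B, penalize moving weights in proportion to how important they were for task A. A minimal single-weight sketch of that anchoring idea, with illustrative values of my own:

```python
# Toy single-weight model; an "anchor" penalty mimics reduced plasticity.
# importance is how strongly task A protected this weight.
# All numbers are illustrative.

TASK_A, TASK_B = 2.0, -3.0

def train_anchored(w, target, w_anchor, importance, steps=200, lr=0.05):
    for _ in range(steps):
        # gradient of (w - target)^2 + importance * (w - w_anchor)^2
        grad = 2 * (w - target) + 2 * importance * (w - w_anchor)
        w -= lr * grad
    return w

w_after_a = TASK_A  # weight as learned on task A

# Fully plastic: forgets task A, lands exactly on task B.
w_plastic = train_anchored(w_after_a, TASK_B, w_after_a, importance=0.0)

# Less plastic: stays near its task-A value.
w_stable = train_anchored(w_after_a, TASK_B, w_after_a, importance=10.0)
```

With `importance=0` the weight is fully plastic and task A is forgotten; with a large `importance` it stays near its task-A value, trading some task-B performance for stability. That is the same plasticity/stability trade-off described above.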

Can't teach an old dog new tricks? That's because its brain has made many regions less plastic to maintain its long-term memory. More plastic = more learning, but also more forgetting.

Does this mean that when your brain is more plastic, as a child, you retain the information you recall often but quickly forget information you don't encounter as much?


Kirkpatrick et al., "Overcoming catastrophic forgetting in neural networks" (PNAS, 2017): https://www.pnas.org/doi/10.1073/pnas.1611835114