Continual learning — where are we?


This post was originally published by Saurav Jha at Towards Data Science

“Continuous learning ability is one of the hallmarks of human intelligence.” — Lifelong Machine Learning


Photo by Austin Distel on Unsplash

Three trade-offs for a continual learning agent. Scalability comes into play when a computationally efficient agent is also desirable.

1. Why do memory rehearsal-based methods work better?

Image source: Knoblauch et al. (2020). Left: an optimal CL algorithm searches for parameters that satisfy the task distributions of all observed tasks. Right: replay-based CL algorithms search for parameters that satisfy a reconstructed approximation, SAT(Q1:3), of the actual task distributions (SAT1:3).
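The replay idea in the figure can be sketched in a few lines. Below is a minimal rehearsal memory using reservoir sampling; the class name, seed, and capacity are illustrative assumptions, not part of Knoblauch et al.'s setup. During training on a new task, examples drawn from this buffer would be mixed into each batch so the model keeps approximately satisfying earlier task distributions:

```python
import random

class ReplayBuffer:
    """Bounded rehearsal memory for replay-based continual learning (sketch).

    Uses reservoir sampling so that, after n_seen examples, every example
    ever added has equal probability capacity / n_seen of being in memory.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []        # stored past examples
        self.n_seen = 0         # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Replace a random slot with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return self.rng.sample(self.memory, min(k, len(self.memory)))
```

In a training loop, each batch from the current task would be concatenated with `buffer.sample(k)` before the gradient step, which is the sense in which replay methods optimize against a reconstructed approximation of past task distributions rather than the distributions themselves.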


2. How do task semantics affect the performance of a CL agent?

The components of the weight vector for a toy linear regression model
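To make the toy example concrete, here is a minimal sketch of such a model; the data, the two-component weight vector, and the least-squares fit are illustrative assumptions, not the exact model from the figure:

```python
import numpy as np

# Toy linear regression: y = X @ w_true + noise, with a two-component weight vector.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])            # the components of the weight vector
X = rng.normal(size=(100, 2))             # 100 samples, 2 features
y = X @ w_true + 0.01 * rng.normal(size=100)

# Least-squares estimate recovers the weight vector's components.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # close to [2.0, -1.0]
```

In the CL setting, the question becomes how each component of such a weight vector shifts when the model is subsequently fit to a second task's data.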

Two incremental learning setups of Ramasesh et al. (2019): (a) Setup 1 trains the model first on ship-truck classification, followed by a second task that is either cat-horse or plane-car classification; (b) Setup 2 trains the model to recognize deer, dog, ship, and truck first, followed by plane-car classification.
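The two setups can be written down as class splits. The helper below is a hypothetical encoding of the task sequences described in the caption (for Setup 1 it returns the cat-horse variant of Task 2; the plane-car variant is the other option the caption allows):

```python
def task_sequence(setup):
    """Return the sequence of class splits for the given setup (sketch)."""
    if setup == 1:
        # Setup 1: ship-vs-truck first, then a second binary task
        # (cat-horse here; plane-car is the alternative variant).
        return [("ship", "truck"), ("cat", "horse")]
    # Setup 2: a four-way task first, then plane-vs-car.
    return [("deer", "dog", "ship", "truck"), ("plane", "car")]
```

Comparing forgetting across the two sequences is what lets Ramasesh et al. probe how semantic similarity between Task 1 and Task 2 affects a CL agent.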

References
