Researchers borrowed equations from calculus to redesign the core machinery of deep learning into a radical new kind of neural network, one that can model continuous processes like changes in health.
David Duvenaud was collaborating on a project involving medical data when he ran up against a major shortcoming in AI.
An AI researcher at the University of Toronto, he wanted to build a deep-learning model that would predict a patient’s health over time. But data from medical records is messy: throughout your life, you might visit the doctor at different times for different reasons, generating a smattering of measurements at arbitrary intervals. A traditional neural network struggles to handle this, because its design assumes that data arrives in a sequence of discrete, regularly spaced observations. That makes it a poor tool for modeling continuous processes, especially ones that are measured irregularly over time.
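The idea behind the calculus-based redesign can be sketched in a few lines. A residual network's update, h = h + f(h), is the Euler discretization of the differential equation dh/dt = f(h); treating the network as an ODE lets the hidden state be evaluated at any time, including the irregular intervals of doctor visits. The dynamics function and the visit times below are toy placeholders, not anything from Duvenaud's actual model:

```python
def f(h):
    # Toy dynamics standing in for a learned neural-network layer.
    return -0.5 * h

def euler_solve(h0, t0, t1, steps=1000):
    """Integrate dh/dt = f(h) from t0 to t1 with fixed Euler steps."""
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)  # one residual-style update per step
    return h

# Hypothetical patient visits at irregular times: the solver
# advances the hidden state across gaps of any length.
times = [0.0, 0.3, 1.1, 4.0]
state = 1.0
trajectory = [state]
for t_prev, t_next in zip(times, times[1:]):
    state = euler_solve(state, t_prev, t_next)
    trajectory.append(state)
print(trajectory)
```

A fixed-layer network would need one layer per observation at fixed spacing; the solver view instead integrates the same dynamics over whatever time gap the data presents.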