A foolproof way to shrink deep learning models

Researchers unveil a pruning algorithm to make artificial intelligence applications run faster by shrinking deep learning models.

As more artificial intelligence applications move to smartphones, deep learning models are getting smaller to allow apps to run faster and save battery power. Now, MIT researchers have a new and better way to compress models.
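The article itself does not spell out how the new algorithm works. As a rough, generic illustration of what "pruning" a network can mean in practice, the sketch below zeroes out a layer's smallest-magnitude weights. The function name, the NumPy-only setup, and the 50 percent sparsity target are assumptions for demonstration, not the researchers' method.

# Illustrative sketch only: generic magnitude pruning, not the algorithm
# described in this article. The 50% sparsity target is an arbitrary example.
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries so roughly `sparsity` of them are removed."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]    # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)  # keep only the larger weights

# Example: prune a random 4x4 weight matrix to about 50% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w)
print(np.count_nonzero(w), "nonzero weights before,", np.count_nonzero(pruned), "after")

A pruned model of this kind has fewer active weights to store and multiply, which is what lets compressed networks run faster and use less battery on a phone.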