Meta-Learning Millions of Hyper-parameters using the Implicit Function Theorem
Last night on the train I read this nice paper by David Duvenaud and colleagues. Around midnight I got a calendar notification: "it's David Duvenaud's birthday". So I thought it was time for a David Duvenaud birthday special (don't get too excited, David, I won't make it an annual tradition…).

Jonathan Lorraine, Paul Vicol, David Duvenaud (2019) Optimizing Millions of Hyperparameters by Implicit Differentiation

Background

I recently covered iMAML: the meta-learning algorithm that makes use of implicit gradients to […]