Don't Peek: Deep Learning without looking … at test data

What is the purpose of a theory? To explain why something works, but also to make predictions, ones that are testable. Recently we introduced the theory of Implicit Self-Regularization in Deep Neural Networks. Most notably, we observed that in all of the pretrained models we examined, the layer weight matrices display nearly Universal power-law behavior.
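To make the power-law claim concrete, the basic procedure is: form the correlation matrix X = Wᵀ W / N of a layer weight matrix W, compute its eigenvalues (the empirical spectral density), and fit a power-law exponent alpha to the tail. Below is a minimal sketch of that fit using a maximum-likelihood (Hill-style) estimator. The helper name `powerlaw_alpha` and the median tail cutoff are illustrative choices of ours, not part of the original work, and the random matrix stands in for a trained layer purely to demonstrate the mechanics (a truly random matrix follows Marchenko-Pastur, not a power law).

```python
import numpy as np

def powerlaw_alpha(eigs, xmin=None):
    """MLE (Hill-style) estimate of the power-law exponent alpha
    for the tail of an eigenvalue distribution.
    NOTE: illustrative helper, not the authors' exact fitting code."""
    eigs = np.asarray(eigs, dtype=float)
    if xmin is None:
        xmin = np.median(eigs)  # crude tail cutoff, for illustration only
    tail = eigs[eigs >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# Hypothetical "layer": a random matrix standing in for a trained weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((500, 300))

# Empirical spectral density: eigenvalues of the correlation matrix X = W^T W / N.
X = W.T @ W / W.shape[0]
eigs = np.linalg.eigvalsh(X)

alpha = powerlaw_alpha(eigs)
print(f"fitted power-law exponent alpha = {alpha:.2f}")
```

In a trained network one would run this same fit on each layer's actual weights; the claim is that the resulting alphas cluster in a narrow, nearly universal range across architectures.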