Machine Learning
What is Machine Learning?
Put simply, it is the task of finding a function that maps inputs to outputs.
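As a minimal illustration (my own sketch, not code from the course), the snippet below treats "learning" as searching for a function $f(x) = wx + b$ that fits example input/output pairs; the data and hyperparameters are assumed values for demonstration only.

```python
# A minimal sketch of "learning a function" from input/output pairs.
# All data and hyperparameters here are hypothetical, for illustration only.
import numpy as np

# Example pairs produced by an unknown target function y = 2x + 1 (plus noise)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.standard_normal(100)

# Candidate function family: f(x) = w * x + b; "learning" = picking w and b
w, b = 0.0, 0.0
lr = 0.1  # learning rate (assumed value)
for _ in range(200):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned function: f(x) = {w:.2f} * x + {b:.2f}")  # roughly 2x + 1
```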
Deep Learning Introduction
History (ups and downs of Deep Learning)
- 1958: Perceptron (linear model)
- 1969: Perceptron has limitations
- 1980s: Multi-layer perceptron
  - Not significantly different from today's DNNs
- 1986: Backpropagation
  - Usually, more than 3 hidden layers did not help
- 1989: 1 hidden layer is "good enough", so why go deep?
- 2006: RBM initialization (breakthrough)
- 2009: GPU
- 2011: Starts to become popular in speech recognition
- 2012: Wins the ILSVRC image competition
Fully Connected Feedforward Network
- The inputs form the Input Layer
- The outputs form the Output Layer
- The layers in between are the Hidden Layers (a minimal sketch of such a network follows below)
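As a rough illustration (my own sketch, not from the course materials), the following NumPy snippet builds a tiny fully connected feedforward network; the layer sizes, weights, and ReLU activation are all assumptions made for demonstration.

```python
# A minimal fully connected feedforward network (sketch with assumed sizes).
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Layer sizes: 3 inputs -> two hidden layers of 4 neurons -> 2 outputs (assumed)
sizes = [3, 4, 4, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    """Input Layer -> Hidden Layers -> Output Layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)               # hidden layer: affine transform + ReLU
    return weights[-1] @ a + biases[-1]   # output layer: affine, no activation

print(forward(np.array([1.0, -0.5, 0.2])))  # 2-dimensional output
```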
Deep = Many hidden layers
- AlexNet (2012), 8 layers, error rate: 16.4%
- VGG (2014), 19 layers, error rate: 7.3%
- GoogLeNet (2014), 22 layers, error rate: 6.7%
- Residual Net (2015), 152 layers, error rate: 3.57%
FAQ
- Q: How many layers? How many neurons for each layer?
  - Trial and error + intuition
- Q: Can the structure be automatically determined?
  - Evolutionary Artificial Neural Networks
- Q: Can we design the network structure?
  - Convolutional Neural Network (CNN)
- Q: Is deeper better?
  - Universality Theorem: any continuous function $f : \mathbb{R}^N \rightarrow \mathbb{R}^M$ can be realized by a network with one hidden layer, given enough hidden neurons
  - So why a deep neural network rather than a fat one? The theorem only guarantees that a shallow network exists; it says nothing about how many neurons it needs, and in practice depth is the more parameter-efficient way to represent complex functions
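To make the "deep vs. fat" comparison concrete, here is a small illustration of my own (the layer sizes are assumed, not from the course): the two networks below map 100 inputs to 10 outputs with comparable parameter budgets but very different shapes.

```python
# Parameter counts for a "fat" (shallow, wide) vs. a "deep" (narrow) network.
# Layer sizes are assumed for illustration; both map 100 inputs to 10 outputs.

def num_params(sizes):
    """Total weights + biases of a fully connected net with these layer sizes."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes[:-1], sizes[1:]))

fat = [100, 1000, 10]             # one huge hidden layer
deep = [100, 200, 200, 200, 10]   # several modest hidden layers

print("fat :", num_params(fat))   # 111,010 parameters
print("deep:", num_params(deep))  # 102,610 parameters
```

The Universality Theorem guarantees the fat network can represent any continuous function, but it says nothing about parameter efficiency, which is where depth tends to win empirically.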
Reference
These notes are a study summary of the Machine Learning course materials by Professor Hung-yi Lee (李宏毅) of National Taiwan University.
- Machine Learning 2022
- Machine Learning 2021
- Machine Learning 2020
- Machine Learning 2019
- Machine Learning 2018
- Machine Learning 2016 FALL
- MLDS 2015 FALL
Reference Video
NVIDIA Resources
Reference Books
- Neural Networks and Deep Learning
  - written by Michael Nielsen
- Deep Learning
  - written by Ian Goodfellow, Yoshua Bengio, and Aaron Courville