NTU Machine Learning

Machine Learning (Hung-yi Lee, NTU)


ML Lecture 0-1: Introduction of Machine Learning


ML Lecture 0-2: Why we need to learn machine learning?


ML Lecture 1: Regression - Case Study

Regression analysis is a statistical method for analyzing data. Its goal is to determine whether two or more variables are related, and the direction and strength of that relationship, and to build a mathematical model so that the variable of interest can be predicted from observations of the other variables.
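A minimal sketch of the idea in code, assuming a one-feature linear model y = w·x + b trained by gradient descent on squared error (the toy data and variable names are illustrative, not from the lecture):

```python
import numpy as np

# Toy data: predict y from x with a linear model y = w*x + b.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

w, b = 0.0, 0.0          # initial parameters
lr = 0.01                # learning rate
for _ in range(10000):
    y_hat = w * x + b
    # Gradients of the mean squared error with respect to w and b.
    grad_w = (2 * (y_hat - y) * x).mean()
    grad_b = (2 * (y_hat - y)).mean()
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)              # converges to roughly w ≈ 2, b ≈ 0
```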

ML Lecture 2: Where does the error come from?

Evaluating the trained model on the training data shows whether it can fit the training data; if the result is bad, the model is under-fitting.
Evaluating the trained model on the testing data shows whether it generalizes; if it fits the training data well but the testing result is bad, the model is over-fitting.

The error comes from two sources (the decomposition below makes this precise):
  • bias: making the model more complex reduces the bias.
  • variance: training on more data reduces the variance.
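For squared-error loss this split is exact: writing f for the true function, f̂ for the trained model, and σ² for the irreducible noise, the expected test error decomposes as

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(f(x) - \mathbb{E}[\hat{f}(x)]\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}}
  + \sigma^2
```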

ML Lecture 3-1: Gradient Descent


The learning rate can be tuned by monitoring how the loss changes during training.
Different features can have very different value ranges, so each feature should be scaled (normalized) to a comparable range before the model's parameters are fitted, as in the sketch below.
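A minimal sketch of feature scaling (standardization), assuming each column of X is one feature; the function name standardize is illustrative:

```python
import numpy as np

def standardize(X):
    """Scale each feature (column) to zero mean and unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

# Two features on wildly different scales.
X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])
X_scaled = standardize(X)
print(X_scaled.mean(axis=0))  # ~0 for each feature
print(X_scaled.std(axis=0))   # ~1 for each feature
```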

ML Lecture 3-2: Gradient Descent (Demo by AOE)


ML Lecture 3-3: Gradient Descent (Demo by Minecraft)


ML Lecture 4: Classification

A normal distribution in a variate X with mean μ and variance σ^2 is a statistical distribution with the probability density function below.
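The density itself (the slide image with the formula is not reproduced in these notes, so it is restated here):

```latex
P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}
```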


A standard normal distribution is a normal distribution with zero mean and unit variance, given by the probability density function and distribution function below.
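Setting μ = 0 and σ = 1 in the density above gives:

```latex
P(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},
\qquad
D(x) = \Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]
```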



Covariance, in probability theory and statistics, measures how two variables vary together. Variance is a special case of covariance, namely the case where the two variables are identical.
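In symbols (the standard definitions):

```latex
\operatorname{Cov}(X, Y) = \mathbb{E}\big[(X - \mathbb{E}[X])\,(Y - \mathbb{E}[Y])\big],
\qquad
\operatorname{Var}(X) = \operatorname{Cov}(X, X)
```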



A Bernoulli distribution has only two possible outcomes, 1 (success) and 0 (failure), and a single trial. So the random variable X which has a Bernoulli distribution can take value 1 with the probability of success, say p, and the value 0 with the probability of failure, say q or 1-p.
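Its probability mass function can be written compactly as:

```latex
P(X = x) = p^{x}\,(1 - p)^{1 - x}, \qquad x \in \{0, 1\}
```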

A sigmoid function is a mathematical function having a characteristic "S"-shaped curve or sigmoid curve.


A sigmoid function is monotonic.
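The logistic function is the canonical example used in the course, and its derivative has a form that makes gradient computations cheap:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}},
\qquad
\sigma'(x) = \sigma(x)\,\big(1 - \sigma(x)\big)
```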

ML Lecture 5: Logistic Regression


Linear Regression is used to predict an output value.
Logistic Regression is used for classification: the result (yes or no, 0 or 1) is decided by a probability.
This is similar to the Bernoulli distribution.

The lectures up to this point derive where the neuron, the basic building block of a neural network, comes from; a sketch follows.
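A minimal sketch of a single logistic-regression "neuron" trained by gradient descent on cross-entropy loss, under the same assumptions as the regression sketch above (toy data and names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary-classification data: one feature, labels 0 or 1.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([0, 0, 0, 1, 1])

w, b = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    p = sigmoid(w * x + b)       # predicted probability of class 1
    # For cross-entropy loss, the gradient reduces to (p - y) times the input.
    w -= lr * ((p - y) * x).mean()
    b -= lr * (p - y).mean()

print(sigmoid(w * 2.0 + b))      # close to 1 for a clearly positive input
```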

ML Lecture 6: Brief Introduction of Deep Learning


ML Lecture 7: Backpropagation


ML Lecture 8-1: “Hello world” of deep learning


ML Lecture 8-2: Keras 2.0


ML Lecture 8-3: Keras Demo


ML Lecture 9-1: Tips for Training DNN


ML Lecture 9-2: Keras Demo 2


ML Lecture 9-3: Fizz Buzz in Tensorflow (sequel)


ML Lecture 10: Convolutional Neural Network


ML Lecture 11: Why Deep?


ML Lecture 12: Semi-supervised


ML Lecture 13: Unsupervised Learning - Linear Methods


ML Lecture 14: Unsupervised Learning - Word Embedding


ML Lecture 15: Unsupervised Learning - Neighbor Embedding


ML Lecture 16: Unsupervised Learning - Auto-encoder


ML Lecture 17: Unsupervised Learning - Deep Generative Model (Part I)


ML Lecture 18: Unsupervised Learning - Deep Generative Model (Part II)


ML Lecture 19: Transfer Learning


ML Lecture 20: Support Vector Machine (SVM)


ML Lecture 21-1: Recurrent Neural Network (Part I)


ML Lecture 21-2: Recurrent Neural Network (Part II)


ML Lecture 22: Ensemble


ML Lecture 23-1: Deep Reinforcement Learning


ML Lecture 23-2: Policy Gradient (Supplementary Explanation)


ML Lecture 23-3: Reinforcement Learning (including Q-learning)



