NTU Machine Learning
Machine Learning (Hung-yi Lee, NTU)
ML Lecture 0-1: Introduction of Machine Learning
ML Lecture 0-2: Why we need to learn machine learning?
ML Lecture 1: Regression - Case Study
Regression analysis is a statistical method for analyzing data. Its goal is to determine whether two or more variables are related, and in what direction and how strongly, and to build a mathematical model so that observed variables can be used to predict the variable the researcher is interested in.
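The idea can be sketched with the simplest case, one input and one output. This is a minimal illustration with made-up data, fitting y = w*x + b by the closed-form least-squares solution:

```python
# Minimal sketch of simple linear regression: fit y = w*x + b to data
# by least squares. The data points below are made up for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
w, b = fit_line(xs, ys)
print(w, b)  # -> 2.0 1.0
```

Once w and b are found, predicting the variable of interest is just evaluating w*x + b at a new x.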
ML Lecture 2: Where does the error come from?
The result on the training data tells you whether the trained model can fit the training data; if that result is bad, the model is under-fitting. The result on the test data tells you whether the trained model can fit unseen data; if the training result is good but the test result is bad, the model is over-fitting.
The error comes from:
- bias: increase the model's complexity to reduce the bias.
- variance: train on more data to reduce the variance.
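The variance point above can be demonstrated numerically: an estimate computed from more data fluctuates less from one dataset to another. A small synthetic sketch (the sample sizes and trial count are arbitrary):

```python
# Sketch of the variance point: estimates computed from more data
# fluctuate less across repeated experiments. Numbers are synthetic.
import random

random.seed(0)

def spread_of_sample_mean(n_samples, n_trials=200):
    """Std-dev of the sample mean of n_samples uniform(0,1) draws."""
    means = [sum(random.random() for _ in range(n_samples)) / n_samples
             for _ in range(n_trials)]
    grand = sum(means) / len(means)
    return (sum((m - grand) ** 2 for m in means) / len(means)) ** 0.5

small = spread_of_sample_mean(5)     # little data: high variance
large = spread_of_sample_mean(500)   # more data: low variance
print(small > large)
```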
ML Lecture 3-1: Gradient Descent
The learning rate can be tuned by monitoring how the loss function changes during training.
Different features have different scales of values, so each feature should be scaled (normalized) to the same range before the model's parameters are determined.
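One common way to do this scaling is standardization: shift each feature to zero mean and unit variance. A minimal sketch with a made-up feature:

```python
# Minimal sketch of feature scaling (standardization): shift a feature
# to zero mean and unit variance so all features share a similar range.
def standardize(values):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

heights_cm = [150.0, 160.0, 170.0, 180.0]   # a large-scale feature
scaled = standardize(heights_cm)
print(scaled)  # now zero mean, unit variance
```

After standardization the loss surface is better conditioned, so gradient descent can use one learning rate for all parameters.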
ML Lecture 3-2: Gradient Descent (Demo by AOE)
ML Lecture 3-3: Gradient Descent (Demo by Minecraft)
ML Lecture 4: Classification
A normal distribution of a variate X with mean μ and variance σ² is a statistical distribution with probability density function f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²)). A standard normal distribution is a normal distribution with zero mean and unit variance.
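The density above can be written out directly and sanity-checked: a probability density must integrate to 1, which a crude Riemann sum over a wide interval confirms.

```python
# The normal probability density function, with a numerical check that
# the standard normal density integrates to roughly 1.
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Riemann-sum approximation of the integral over [-6, 6].
step = 0.001
area = sum(normal_pdf(-6 + i * step) * step for i in range(int(12 / step)))
print(round(area, 4))  # -> 1.0
```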
Covariance, in probability theory and statistics, measures the joint variability of two variables. Variance is a special case of covariance, namely the case where the two variables are the same.
A Bernoulli distribution has only two possible outcomes, 1 (success) and 0 (failure), and a single trial. So the random variable X which has a Bernoulli distribution can take value 1 with the probability of success, say p, and the value 0 with the probability of failure, say q or 1-p.
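The mean of such a variable is p and its variance is p(1 − p), which a quick simulation illustrates (p = 0.3 and the sample size are arbitrary choices):

```python
# Bernoulli sketch: X = 1 with probability p, 0 with probability 1 - p.
# Empirically, the sample mean approaches p and the sample variance
# approaches p * (1 - p).
import random

random.seed(1)
p = 0.3
draws = [1 if random.random() < p else 0 for _ in range(100_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to 0.3 and 0.21
```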
A sigmoid function is a mathematical function having a characteristic "S"-shaped curve or sigmoid curve.
A sigmoid function is monotonic.
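The standard choice is the logistic sigmoid, σ(x) = 1 / (1 + e^(−x)), which squashes any real input into (0, 1); the monotonicity claim is easy to check on a few points:

```python
# The logistic sigmoid: squashes any real input into (0, 1) and is
# monotonically increasing, giving the characteristic S-shaped curve.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

xs = [-4, -2, 0, 2, 4]
ys = [sigmoid(x) for x in xs]
print(ys[2])                                   # sigmoid(0) -> 0.5
print(all(a < b for a, b in zip(ys, ys[1:])))  # monotonic -> True
```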
ML Lecture 5: Logistic Regression
Linear Regression is used to predict an output value.
Logistic Regression is used for classification; the result (yes or no, 0 or 1) is decided by probability.
This is similar to a Bernoulli distribution.
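A minimal sketch of logistic regression on one feature, trained by gradient descent on the cross-entropy loss; the data, learning rate, and iteration count are made up for illustration:

```python
# Minimal logistic regression on one feature: the model outputs
# sigmoid(w*x + b) as the probability of class 1, and w, b are trained
# by gradient descent on the cross-entropy loss.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]          # class labels

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # gradient of cross-entropy
        grad_w += err * x
        grad_b += err
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

print(sigmoid(w * 2.0 + b) > 0.5)    # classified as class 1
print(sigmoid(w * -2.0 + b) < 0.5)   # classified as class 0
```

A single unit computing sigmoid(w·x + b) is exactly one neuron, which is the link to the next point.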
The lectures above derive where the neuron comes from.