[MLE] W3 Classification Problem

Classification and Representation

Classification

To attempt classification, one method is to use linear regression and map all predictions greater than 0.5 to 1 and all predictions less than 0.5 to 0. However, this method does not work well, because classification is not actually a linear function.
The classification problem is just like the regression problem, except that the values \(y\) we now want to predict take on only a small number of discrete values. For now, we will focus on the binary classification problem in which \(y\) can take on only two values, 0 and 1. (Most of what we say here will also generalize to the multiple-class case.) For instance, if we are trying to build a spam classifier for email, then \(x^{(i)}\) may be some features of a piece of email, and \(y\) may be 1 if it is a piece of spam mail, and 0 otherwise. Hence, \(y \in \{0,1\}\). 0 is also called the negative class, and 1 the positive class, and they are sometimes also denoted by the symbols “-” and “+”. Given \(x^{(i)}\), the corresponding \(y^{(i)}\) is also called the label for the training example.

Hypothesis Representation

We have covered the hypothesis function for the linear regression problem; what should it look like for classification?
We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn’t make sense for \(h_θ(x)\) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}. To fix this, let’s change the form for our hypotheses \(h_θ(x)\) to satisfy 0≤\(h_θ(x)\)≤1. This is accomplished by plugging \(θ^Tx\) into the Logistic Function.
Our new form uses the “Sigmoid Function,” also called the “Logistic Function”:
  • \(h_\theta(x) = g(\theta^Tx)\)
  • \(z = \theta^Tx\)
  • \(g(z) = \frac{1}{1+e^{-z}}\)
The Sigmoid Function looks like:
(Figure: the sigmoid curve, which rises from 0 toward 1 and crosses 0.5 at \(z = 0\).)
The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.
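As a minimal sketch of the formulas above (assuming NumPy, with a hypothetical design matrix X holding one training example per row and a parameter vector theta), the hypothesis can be computed as:

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)); maps any real number into the (0, 1) interval
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, X):
    # h_theta(x) = g(theta^T x), evaluated for every row of X at once
    return sigmoid(X @ theta)
```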

Interpretation

\(h_\theta(x)\) gives us the probability that our output is 1. For example, \(h_\theta(x) = 0.7\) gives us a probability of 70% that our output is 1. The probability that the prediction is 0 is just the complement of the probability that it is 1 (e.g. if the probability that it is 1 is 70%, then the probability that it is 0 is 30%).
\(h_\theta(x) = P(y = 1| x;\theta) = 1 - P(y = 0| x;\theta)\)

Decision Boundary

In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:
\(h_\theta(x) \geq 0.5 \rightarrow y = 1\)
\(h_\theta(x) < 0.5 \rightarrow y = 0\)
The way our logistic function g behaves is that when its input is greater than or equal to zero, its output is greater than or equal to 0.5:
\(g(z) \geq 0.5\) when \(z \geq 0\)
which is also
\(\theta^Tx \geq 0 \Rightarrow y = 1\)
\(\theta^Tx < 0 \Rightarrow y = 0\)
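Continuing the sketch above (same assumed X and theta), the 0.5 threshold means we only need the sign of \(\theta^Tx\):

```python
import numpy as np

def predict(theta, X):
    # y = 1 exactly when h_theta(x) >= 0.5, which happens exactly when theta^T x >= 0
    return (X @ theta >= 0).astype(int)
```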

Logistic Regression Model

Cost Function

Now that we know the hypothesis function for a classification problem is the sigmoid function, what should the cost function be?
We cannot use the same cost function that we use for linear regression because the Logistic Function will cause the output to be wavy, causing many local optima. In other words, it will not be a convex function.
Instead, our cost function for logistic regression looks like:
\(J(\theta) = \frac{1}{m}\sum^m_{i=1} \mathrm{Cost}(h_\theta(x^{(i)}), y^{(i)})\)
\(\mathrm{Cost}(h_\theta(x), y) = -\log(h_\theta(x)) \qquad \text{if } y = 1\)
\(\mathrm{Cost}(h_\theta(x), y) = -\log(1-h_\theta(x)) \qquad \text{if } y = 0\)
(Figure: the cost \(-\log(h_\theta(x))\) plotted for \(y = 1\), and \(-\log(1-h_\theta(x))\) for \(y = 0\).)
If our correct answer ‘y’ is 0, then the cost function will be 0 if our hypothesis function also outputs 0. If our hypothesis approaches 1, then the cost function will approach infinity.
If our correct answer ‘y’ is 1, then the cost function will be 0 if our hypothesis function outputs 1. If our hypothesis approaches 0, then the cost function will approach infinity.

Simplified Cost Function and Gradient Descent

We can compress our cost function’s two conditional cases into one case:
\(\mathrm{Cost}(h_\theta(x),y) = -y\log(h_\theta(x)) - (1-y)\log(1 - h_\theta(x))\)
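Substituting this into the definition of \(J(\theta)\) above gives the full cost function:
\(J(\theta) = -\frac{1}{m}\sum^m_{i=1}\left[y^{(i)}\log(h_\theta(x^{(i)})) + (1-y^{(i)})\log(1-h_\theta(x^{(i)}))\right]\)
A minimal vectorized sketch (again assuming NumPy and the same hypothetical X, y, theta as before):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # J(theta) = -(1/m) * sum( y*log(h) + (1-y)*log(1-h) )
    m = len(y)
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
```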

Gradient Descent

Recall the general form of gradient descent:
Repeat{
\(\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)\)
}
Working out the derivative of the logistic cost function, this becomes
Repeat{
\(\theta_j := \theta_j - \frac{\alpha}{m}\sum^m_{i=1}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}\)
}
Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in theta.
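A sketch of the update loop (hypothetical alpha and iteration count, same assumed NumPy setup as before); the vectorized expression \(X^T(h - y)\) updates every \(\theta_j\) simultaneously:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(theta, X, y, alpha=0.1, iterations=1000):
    # repeat: theta_j := theta_j - (alpha/m) * sum((h - y) * x_j), for all j at once
    m = len(y)
    for _ in range(iterations):
        h = sigmoid(X @ theta)
        theta = theta - (alpha / m) * (X.T @ (h - y))
    return theta
```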

Advanced Optimisation

Besides gradient descent, there are a number of other optimisation algorithms:
  • Conjugate gradient
  • BFGS
  • L-BFGS
The advantages of these algorithms are:
  • No need to manually pick the learning rate \(\alpha\) (an inner loop, a line search, automatically tries different step sizes and picks a good one)
  • Often faster than gradient descent
However, they are also more complex.
“Conjugate gradient”, “BFGS”, and “L-BFGS” are more sophisticated, faster ways to optimize θ that can be used instead of gradient descent. We suggest that you should not write these more sophisticated algorithms yourself (unless you are an expert in numerical computing) but use the libraries instead, as they’re already tested and highly optimized. Octave provides them.
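The course uses Octave's built-in optimizers; as a rough Python analogue (assuming SciPy and a small made-up dataset), one might hand the cost and its gradient to scipy.optimize.minimize and let L-BFGS choose the step sizes:

```python
import numpy as np
from scipy.optimize import minimize

# Toy, made-up data: an intercept column plus one feature, four examples.
X = np.array([[1.0, 0.5],
              [1.0, 1.0],
              [1.0, 1.5],
              [1.0, 3.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / len(y)

def gradient(theta, X, y):
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

# No learning rate to pick: the optimizer handles step sizes internally.
result = minimize(cost, x0=np.zeros(X.shape[1]), args=(X, y),
                  jac=gradient, method='L-BFGS-B')
print(result.x)  # optimized theta
```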

Multi-class Classification: One-vs-all

Now we will approach the classification of data when we have more than two categories. Instead of \(y \in \{0,1\}\) we will expand our definition so that \(y \in \{0,1,\dots,n\}\).
Since \(y \in \{0,1,\dots,n\}\), we divide our problem into n+1 binary classification problems (+1 because the index starts at 0); in each one, we predict the probability that \(y\) is a member of one of our classes.
\(y \in \{0,1,\dots,n\}\)
\(h^{(0)}_\theta(x) = P(y=0|x;\theta)\)
\(h^{(1)}_\theta(x) = P(y=1|x;\theta)\)
\(\cdots\)
\(h^{(n)}_\theta(x) = P(y=n|x;\theta)\)
\(\text{prediction} = \arg\max_i h^{(i)}_\theta(x)\)
We are basically choosing one class and then lumping all the others into a single second class. We do this repeatedly, applying binary logistic regression to each case, and then use the hypothesis that returned the highest value as our prediction.
The following image shows how one could classify 3 classes:
(Figure: a dataset with 3 classes, separated by three one-vs-all decision boundaries.)

To summarize:

Train a logistic regression classifier \(h^{(i)}_\theta(x)\) for each class \(i\) to predict the probability that \(y = i\).
To make a prediction on a new \(x\), pick the class \(i\) that maximizes \(h^{(i)}_\theta(x)\).
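A sketch of this procedure (assuming SciPy/NumPy and reusing the hypothetical cost function from earlier):

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / len(y)

def train_one_vs_all(X, y, num_classes):
    # Fit one binary classifier per class: class i vs. everything else
    thetas = []
    for i in range(num_classes):
        yi = (y == i).astype(float)   # relabel: 1 for class i, 0 otherwise
        res = minimize(cost, np.zeros(X.shape[1]), args=(X, yi), method='L-BFGS-B')
        thetas.append(res.x)
    return np.array(thetas)           # shape: (num_classes, n_features)

def predict_one_vs_all(thetas, X):
    # For each example, pick the class whose classifier gives the highest probability
    return np.argmax(sigmoid(X @ thetas.T), axis=1)
```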
