Week 3
Classification with Logistic Regression
We want to use learning algorithms to classify data into two or more categories, represented by the y variable.
The simplest case is binary classification: outputs are 0 and 1, representing a negative or positive result.
One idea is to threshold a model's output:
ŷ = 1 if f(x) ≥ 0.5
ŷ = 0 if f(x) < 0.5
We don't use Linear Regression for classification because it is heavily affected by outliers: a single extreme example can shift the best-fit line and change which inputs cross the threshold.
The dividing line is called the Decision Boundary.
Logistic Regression
Let's use the cancer tumor prediction example. 1 represents a malignant tumor, and 0 represents a non-malignant tumor. Linear Regression is not a good algorithm for this problem.
However, Logistic Regression can fit an S-shaped curve to the data, which is more accurate. We want outputs between 0 and 1.
We use the sigmoid or logistic function which always gives an output between 0 and 1.
Here is the math for the logistic regression model:
The sigmoid function is g(z) = 1 / (1 + e^(−z)), which satisfies 0 < g(z) < 1.
A straight-line function was defined as z = w·x + b.
Therefore, f(x) = g(w·x + b) = 1 / (1 + e^(−(w·x + b))).
This represents the probability that the label is 1.
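As a sketch, the sigmoid and the resulting model output can be computed in a few lines. The parameter values w, b and the inputs x here are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real z into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters and feature values (e.g. tumor sizes).
w, b = 1.5, -3.0
x = np.array([0.5, 1.0, 2.0, 3.0, 5.0])

z = w * x + b      # the straight-line part
f = sigmoid(z)     # model output: probability that y = 1
```

Each entry of f lies strictly between 0 and 1, and f increases as z increases, giving the S-shaped curve described above.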
The threshold above or below which you predict whether y is 1 or 0 is called the Decision Boundary.
We predict y = 1 whenever f(x) ≥ 0.5, which happens whenever z = w·x + b ≥ 0; otherwise we predict y = 0.
The decision boundary corresponds to z = 0. This can be a line equation or a polynomial, depending on the expression for z.
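A minimal sketch of thresholding at 0.5 to get predictions, assuming two features and made-up parameters so that the boundary is the line x1 + x2 = 3:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, w, b, threshold=0.5):
    """Predict 1 when sigmoid(w.x + b) >= threshold, else 0.
    Since sigmoid(0) = 0.5, this is equivalent to checking z >= 0."""
    return (sigmoid(np.dot(X, w) + b) >= threshold).astype(int)

# Hypothetical parameters: decision boundary is x1 + x2 - 3 = 0.
w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[1.0, 1.0],    # z = -1.0, below the boundary
              [2.5, 1.0],    # z =  0.5, above the boundary
              [1.5, 1.5]])   # z =  0.0, exactly on the boundary
print(predict(X, w, b))      # -> [0 1 1]
```

Points on the z = 0 line get probability exactly 0.5, so with a ≥ threshold they are labeled 1.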
Cost Function for Logistic Regression
We need a different cost function to choose better parameters for Logistic Regression.
Squared Error Cost, used in Linear Regression:
J(w, b) = (1 / 2m) Σᵢ (f(x⁽ⁱ⁾) − y⁽ⁱ⁾)²
Now, for Logistic Regression, we will have to use the logistic loss:
L(f(x⁽ⁱ⁾), y⁽ⁱ⁾) = −log(f(x⁽ⁱ⁾)) if y⁽ⁱ⁾ = 1
L(f(x⁽ⁱ⁾), y⁽ⁱ⁾) = −log(1 − f(x⁽ⁱ⁾)) if y⁽ⁱ⁾ = 0
With the sigmoid inside it, the Linear Regression squared error cost becomes non-convex.
For Gradient Descent, that means there are lots of local minima you can get stuck in, which is why Logistic Regression uses a different cost function.
L(f(x⁽ⁱ⁾), y⁽ⁱ⁾) is the loss on a single training example; the Logistic Loss Function tells us how well we're doing on that example.
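The per-example loss and the overall cost (the average loss over the training set) can be sketched as follows, with hypothetical predicted probabilities f and labels y:

```python
import numpy as np

def logistic_loss(f, y):
    """Loss on one example: -log(f) when y = 1, -log(1 - f) when y = 0.
    Penalizes confident wrong predictions heavily."""
    return -y * np.log(f) - (1 - y) * np.log(1 - f)

def cost(f, y):
    """Average logistic loss over all training examples."""
    return np.mean(logistic_loss(f, y))

# Made-up predicted probabilities and true labels for illustration.
f = np.array([0.9, 0.2, 0.7])
y = np.array([1,   0,   1])
print(cost(f, y))   # small, since each prediction leans the right way
```

Note that when f matches y closely the loss approaches 0, and as f moves toward the wrong label the loss grows without bound; this is what makes the resulting cost surface convex and safe for Gradient Descent.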