Lecture 26 – Examples and Classifier Evaluation

DSC 80, Spring 2022

Announcements

Agenda

One-hot encoding and multicollinearity

One-hot encoding and multicollinearity

When we one-hot encode categorical features, we create several redundant columns.

Aside: You can use pd.get_dummies in EDA, but don't use it for modeling (instead, use OneHotEncoder, which works with Pipelines).
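For reference, here's a rough sketch of what that looks like. The tiny DataFrame below is a made-up stand-in for the tips data, not the lecture's actual code.

```python
# A rough sketch (not the lecture's exact code): OneHotEncoder inside a
# Pipeline, via a ColumnTransformer. The DataFrame below is a tiny,
# made-up stand-in for the tips data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

tips = pd.DataFrame({
    'sex':    ['Male', 'Female', 'Male', 'Female'],
    'smoker': ['No', 'No', 'Yes', 'No'],
    'day':    ['Sun', 'Sat', 'Thur', 'Fri'],
    'time':   ['Dinner', 'Dinner', 'Lunch', 'Dinner'],
    'tip':    [1.01, 1.66, 3.50, 2.00],
})
cat_cols = ['sex', 'smoker', 'day', 'time']

tips_pipeline = Pipeline([
    ('one-hot', ColumnTransformer([('ohe', OneHotEncoder(), cat_cols)])),
    ('lin-reg', LinearRegression()),
])
tips_pipeline.fit(tips[cat_cols], tips['tip'])
```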

Remember that under the hood, LinearRegression() creates a design matrix that has a column of all ones (for the intercept term). Let's add that column above for demonstration.

Now, many of the above columns can be written as linear combinations of other columns!

Note that if we get rid of the four redundant columns above, the rank of our design matrix – that is, the number of linearly independent columns it has – does not change (and so the "predictive power" of our features doesn't change either).

However, without the redundant columns, there is only a single unique set of optimal parameters $w^*$, and the multicollinearity is no more.
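To see the rank claim concretely, here's a small numpy illustration with two made-up categorical features (not the lecture's design matrix): dropping one one-hot column per feature removes the redundancy without changing the rank.

```python
# A small numpy illustration (made-up data, not the lecture's matrix).
import numpy as np

ones  = np.ones((6, 1))                  # intercept column
cat_a = np.array([0, 0, 1, 1, 0, 1])     # a feature with 2 categories
cat_b = np.array([0, 1, 2, 0, 1, 2])     # a feature with 3 categories
ohe_a = np.eye(2)[cat_a]                 # 6 x 2 one-hot block
ohe_b = np.eye(3)[cat_b]                 # 6 x 3 one-hot block

# Full design matrix: 1 + 2 + 3 = 6 columns. Within each one-hot block,
# the columns sum to the intercept column, so two columns are redundant.
X_full = np.hstack([ones, ohe_a, ohe_b])

# Drop one column per categorical feature: 4 columns remain.
X_drop = np.hstack([ones, ohe_a[:, 1:], ohe_b[:, 1:]])

# Both matrices have rank 4 – the "predictive power" is the same.
print(np.linalg.matrix_rank(X_full), np.linalg.matrix_rank(X_drop))
```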

Aside: Most one-hot encoding techniques (including OneHotEncoder) have a built-in drop argument, which allows you to specify that you'd like to drop one column per categorical feature.

The above array only has $(2-1) + (2-1) + (4-1) + (2-1) = 6$ columns, rather than $2 + 2 + 4 + 2 = 10$, since we dropped 1 per categorical column in tips_features.
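Here's a sketch of that behavior; the values and column names below are assumptions based on the tips dataset's categorical columns, chosen to match the category counts above.

```python
# Sketch: encoding with and without drop='first' (illustrative values).
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

tips_features = pd.DataFrame({
    'sex':    ['Male', 'Female', 'Male', 'Female'],
    'smoker': ['No', 'No', 'Yes', 'No'],
    'day':    ['Sun', 'Sat', 'Thur', 'Fri'],
    'time':   ['Dinner', 'Dinner', 'Lunch', 'Dinner'],
})

full    = OneHotEncoder().fit_transform(tips_features)
dropped = OneHotEncoder(drop='first').fit_transform(tips_features)

print(full.shape[1], dropped.shape[1])   # 10 and 6
```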

Key takeaways

Modeling using text features

Example: Predicting reviews

We have a dataset containing Amazon reviews and ratings for patio, lawn, and gardening products. (Aside: Here is a good source for such data.)

Goal: Use a review's 'summary' to predict its 'overall' rating.

Note that there are five possible 'overall' rating values – 1, 2, 3, 4, 5 – not just two. As such, this is an instance of multiclass classification.

Question: What is the worst possible accuracy we should expect from a ratings classifier, given the above distribution?

Aside: CountVectorizer

Entries in the 'summary' column are not currently quantitative! We can use the bag-of-words encoding to create quantitative features out of each 'summary'. Instead of performing a bag-of-words encoding manually as we did before, we can rely on sklearn's CountVectorizer.
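Here's a minimal sketch of how CountVectorizer is used; the two-document corpus below is a small stand-in for the lecture's example_corp.

```python
# A minimal sketch; this two-document corpus is a stand-in for the
# lecture's example_corp.
from sklearn.feature_extraction.text import CountVectorizer

example_corp = ['billy saw your dog', 'your dog saw billy run']

count_vec = CountVectorizer()
count_vec.fit(example_corp)

print(count_vec.vocabulary_)                        # term -> column position
print(count_vec.transform(example_corp).toarray())  # bag-of-words counts
```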

count_vec learned a vocabulary from the corpus we fit it on.

Note that the values in count_vec.vocabulary_ correspond to the positions of the columns in count_vec.transform(example_corp).toarray(), i.e. 'billy' is the first column and 'your' is the last column.

Creating an initial Pipeline

Let's build a Pipeline that takes in summaries and overall ratings and:

- Encodes each summary using the bag-of-words model (CountVectorizer).
- Fits a RandomForestClassifier (with max_depth=8) to the resulting counts.

But first, a train-test split (like always).
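A rough sketch of the setup, assuming the data has been loaded into a DataFrame named reviews (that name is an assumption) with the 'summary' and 'overall' columns described above:

```python
# A sketch of the setup. The DataFrame name `reviews` is an assumption;
# the 'summary' and 'overall' columns are from the dataset described above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X_train, X_test, y_train, y_test = train_test_split(
    reviews['summary'], reviews['overall']
)

pl = Pipeline([
    ('cv', CountVectorizer()),                       # bag-of-words features
    ('clf', RandomForestClassifier(max_depth=8)),    # arbitrary depth (for now)
])
pl.fit(X_train, y_train)

# In lecture, both of these were only a little above 0.5.
pl.score(X_train, y_train), pl.score(X_test, y_test)
```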

The accuracy of our random forest is just above 50%, on both the training and testing sets. This doesn't seem much better than just predicting "5 stars" every time!

Choosing tree depth via GridSearchCV

We arbitrarily chose max_depth=8 before, but it seems like that isn't working well. Let's perform a grid search to find the max_depth with the best generalization performance.

Note that while pl has already been fit, we can still give it to GridSearchCV, which will repeatedly re-fit it during cross-validation.
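A sketch of that grid search, continuing from the Pipeline sketch above; the list of depths is illustrative, and the hyperparameter name depends on the step name ('clf') used there.

```python
# A sketch of the grid search, continuing from the Pipeline sketch above.
# The searched depths are illustrative; 'clf' matches the step name used there.
from sklearn.model_selection import GridSearchCV

hyperparameters = {
    'clf__max_depth': [2, 4, 8, 16, 32, 64, 128, 256],
}

# return_train_score=True also records training accuracy for each candidate.
searcher = GridSearchCV(pl, hyperparameters, return_train_score=True)
searcher.fit(X_train, y_train)

searcher.best_params_   # the depth with the best average validation accuracy
```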

Recall that fit GridSearchCV objects are estimators in their own right. This means we can compute the training and testing accuracies of the "best" random forest directly:
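Continuing the sketch above:

```python
# A fit GridSearchCV scores with its best estimator (by default, it re-fits
# that estimator on all of the training data).
searcher.score(X_train, y_train), searcher.score(X_test, y_test)
```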

Still not much better on the testing set! 🤷

Training and validation accuracy vs. depth

Below, we plot how training and validation accuracy varied with tree depth. Note that the $y$-axis here is accuracy, and that larger accuracies are better (unlike with RMSE, where smaller was better).

Unsurprisingly, training accuracy kept increasing, while validation accuracy leveled off around a depth of ~100.
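Here's a sketch of one way such a plot can be produced from the grid search results; it relies on return_train_score=True, which was set in the earlier grid search sketch (the lecture's plotting code may have differed).

```python
# A sketch of one way to make such a plot from the grid search above.
import matplotlib.pyplot as plt

depths = hyperparameters['clf__max_depth']
plt.plot(depths, searcher.cv_results_['mean_train_score'], label='training accuracy')
plt.plot(depths, searcher.cv_results_['mean_test_score'], label='validation accuracy')
plt.xlabel('max_depth')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```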

Classifier evaluation

Accuracy isn't everything!

$$\text{accuracy} = \frac{\text{# data points classified correctly}}{\text{# data points}}$$

The Boy Who Cried Wolf 👦😭🐺

(source)

A shepherd boy gets bored tending the town's flock. To have some fun, he cries out, "Wolf!" even though no wolf is in sight. The villagers run to protect the flock, but then get really mad when they realize the boy was playing a joke on them.

Repeat the previous paragraph many, many times.

One night, the shepherd boy sees a real wolf approaching the flock and calls out, "Wolf!" The villagers refuse to be fooled again and stay in their houses. The hungry wolf turns the flock into lamb chops. The town goes hungry. Panic ensues.

The wolf classifier

Some questions to think about:

The wolf classifier

Below, we present a confusion matrix, which summarizes the four possible outcomes of the wolf classifier.

(Image: confusion matrix for the wolf classifier.)

Outcomes in binary classification

When performing binary classification, there are four possible outcomes.

(Note: A "positive prediction" is a prediction of 1, and a "negative prediction" is a prediction of 0.)

| Outcome of Prediction | Definition | True Class |
|---|---|---|
| True positive (TP) ✅ | The predictor correctly predicts the positive class. | P |
| False negative (FN) ❌ | The predictor incorrectly predicts the negative class. | P |
| True negative (TN) ✅ | The predictor correctly predicts the negative class. | N |
| False positive (FP) ❌ | The predictor incorrectly predicts the positive class. | N |

⬇️

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN ✅ | FP ❌ |
| Actually Positive | FN ❌ | TP ✅ |


The confusion matrix above is organized the same way that sklearn's confusion matrices are (but differently than in the wolf example).

Note that in the four acronyms – TP, FN, TN, FP – the first letter is whether the prediction is correct, and the second letter is what the prediction is.
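As a quick check of that layout, here's a tiny made-up example with sklearn.metrics.confusion_matrix:

```python
# A quick check of sklearn's layout, with made-up labels.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 0, 1, 1]

confusion_matrix(y_true, y_pred)
# array([[1, 2],    <- actually negative: [TN, FP]
#        [1, 2]])   <- actually positive: [FN, TP]
```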

Example: COVID testing 🦠

Accuracy of COVID tests

The results of 100 UCSD Health COVID tests are given below.

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN = 90 ✅ | FP = 1 ❌ |
| Actually Positive | FN = 8 ❌ | TP = 1 ✅ |

UCSD Health test results

🤔 Question: What is the accuracy of the test?

🙋 Answer: $$\text{accuracy} = \frac{TP + TN}{TP + FP + FN + TN} = \frac{1 + 90}{100} = 0.91$$

Recall

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN = 90 ✅ | FP = 1 ❌ |
| Actually Positive | FN = 8 ❌ | TP = 1 ✅ |

UCSD Health test results

🤔 Question: What proportion of individuals who actually have COVID did the test identify?

🙋 Answer: $\frac{1}{1 + 8} = \frac{1}{9} \approx 0.11$

More generally, the recall of a binary classifier is the proportion of actually positive instances that are correctly classified. We'd like this number to be as close to 1 (100%) as possible.

$$\text{recall} = \frac{TP}{TP + FN}$$

To compute recall, look at the bottom (positive) row of the above confusion matrix.

Recall isn't everything, either!

$$\text{recall} = \frac{TP}{TP + FN}$$

🤔 Question: Can you design a "COVID test" with perfect recall?

🙋 Answer: Yes – just predict that everyone has COVID!

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN = 0 ✅ | FP = 91 ❌ |
| Actually Positive | FN = 0 ❌ | TP = 9 ✅ |

everyone-has-COVID classifier

$$\text{recall} = \frac{TP}{TP + FN} = \frac{9}{9 + 0} = 1$$

Like accuracy, recall on its own is not a perfect metric. Even though the classifier we just created has perfect recall, it has 91 false positives!

Precision

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN = 0 ✅ | FP = 91 ❌ |
| Actually Positive | FN = 0 ❌ | TP = 9 ✅ |

everyone-has-COVID classifier

The precision of a binary classifier is the proportion of predicted positive instances that are correctly classified. We'd like this number to be as close to 1 (100%) as possible.

$$\text{precision} = \frac{TP}{TP + FP}$$

To compute precision, look at the right (positive) column of the above confusion matrix.

Precision and recall

(source)

Precision and recall

$$\text{precision} = \frac{TP}{TP + FP} \: \: \: \: \: \: \: \: \text{recall} = \frac{TP}{TP + FN}$$

🤔 Question: When might high precision be more important than high recall?

🙋 Answer: For instance, in deciding whether or not someone committed a crime. Here, false positives are really bad – they mean that an innocent person is charged!

🤔 Question: When might high recall be more important than high precision?

🙋 Answer: For instance, in medical tests. Here, false negatives are really bad – they mean that someone's disease goes undetected!

Discussion Question

Consider the confusion matrix shown below.

|  | Predicted Negative | Predicted Positive |
|---|---|---|
| Actually Negative | TN = 22 ✅ | FP = 2 ❌ |
| Actually Positive | FN = 23 ❌ | TP = 18 ✅ |

What is the accuracy of the above classifier? The precision? The recall?


After calculating all three on your own, check the answers below.

| Metric | Calculation |
|---|---|
| Accuracy | (22 + 18) / (22 + 2 + 23 + 18) = 40 / 65 |
| Precision | 18 / (18 + 2) = 9 / 10 |
| Recall | 18 / (18 + 23) = 18 / 41 |
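As a sanity check, here's a sketch that rebuilds this confusion matrix from made-up label arrays and verifies the three answers with sklearn:

```python
# Made-up label arrays that reproduce the confusion matrix above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = np.array([0] * 24 + [1] * 41)    # 24 actually negative, 41 actually positive
y_pred = np.array([0] * 22 + [1] * 2      # actually negative: 22 TN, 2 FP
                + [0] * 23 + [1] * 18)    # actually positive: 23 FN, 18 TP

print(accuracy_score(y_true, y_pred))     # 40 / 65 ≈ 0.615
print(precision_score(y_true, y_pred))    # 18 / 20 = 0.9
print(recall_score(y_true, y_pred))       # 18 / 41 ≈ 0.439
```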

Summary, next time

Summary