Measuring "Good": Evaluation

Confusion Without Math

~18 min

Now you'll see why 99% accuracy can be completely worthless. This is the most important lesson in metrics.

🚨 Critical: Try "Imbalanced (99:1)" mode

You MUST switch to imbalanced data. The demonstration won't make sense until you see a model achieve 99% accuracy while being completely useless.

Confusion Matrix

|           | Predicted: 0                            | Predicted: 1                           |
|-----------|-----------------------------------------|----------------------------------------|
| Actual: 0 | True Negative: 23 (correctly rejected)  | False Positive: 27 (false alarm)       |
| Actual: 1 | False Negative: 0 (missed it!)          | True Positive: 50 (correctly caught)   |
| Metric    | Value  | Formula                                         |
|-----------|--------|-------------------------------------------------|
| Accuracy  | 73.0%  | (TP+TN)/All                                     |
| Precision | 64.9%  | TP/(TP+FP)                                      |
| Recall    | 100.0% | TP/(TP+FN)                                      |
| F1 Score  | 78.7%  | Harmonic mean of precision and recall (balance) |
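These values follow directly from the four counts in the matrix above. A minimal sketch in plain Python (no libraries assumed) that reproduces them:

```python
# Counts from the confusion matrix above
TP, TN, FP, FN = 50, 23, 27, 0
total = TP + TN + FP + FN

accuracy = (TP + TN) / total                          # 73/100 = 0.730
precision = TP / (TP + FP)                            # 50/77  ≈ 0.649
recall = TP / (TP + FN)                               # 50/50  = 1.000
f1 = 2 * precision * recall / (precision + recall)    # ≈ 0.787

print(f"{accuracy:.1%} {precision:.1%} {recall:.1%} {f1:.1%}")
# 73.0% 64.9% 100.0% 78.7%
```

Note the recall of 100%: with zero false negatives, this model never misses a positive, but it pays for that with 27 false alarms, which is exactly what the lower precision reflects.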

What Optimizing for ACCURACY Means:

You care about overall correctness. Accuracy works well on balanced data, but it fails catastrophically on imbalanced data (like fraud detection, where positives are rare).
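To see the contrast concretely, here is a small sketch (the specific label arrays are made up for illustration) comparing accuracy on balanced data against a do-nothing model on 99:1 imbalanced data:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Balanced data (50/50): a decent model with a few errors each way
balanced_true = [0] * 50 + [1] * 50
balanced_pred = [0] * 45 + [1] * 5 + [0] * 3 + [1] * 47  # 5 FP, 3 FN
print(accuracy(balanced_true, balanced_pred))  # 0.92 — meaningful

# Imbalanced data (99:1): a model that always predicts "no"
imbalanced_true = [0] * 99 + [1] * 1
always_no_pred = [0] * 100
print(accuracy(imbalanced_true, always_no_pred))  # 0.99 — misleading
```

The second model scores higher while detecting nothing. The number alone can't tell you which situation you're in; you have to know the class balance.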

Understanding the Confusion Matrix

✅ True Positive (TP)

Said "yes" and was correct. The good hits.

🚨 False Positive (FP)

Said "yes" but was wrong. False alarms.

😰 False Negative (FN)

Said "no" but was wrong. Missed cases.

✅ True Negative (TN)

Said "no" and was correct. Correctly ignored.
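The four quadrants above are just the four possible (actual, predicted) combinations. A minimal sketch, assuming labels where 1 = positive and 0 = negative, that sorts each prediction into its quadrant:

```python
from collections import Counter

def confusion_quadrant(actual, predicted):
    """Map one (actual, predicted) pair to its confusion-matrix cell."""
    if actual == 1 and predicted == 1:
        return "TP"  # said "yes", was correct
    if actual == 0 and predicted == 1:
        return "FP"  # said "yes", was wrong (false alarm)
    if actual == 1 and predicted == 0:
        return "FN"  # said "no", was wrong (missed case)
    return "TN"      # said "no", was correct

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
counts = Counter(confusion_quadrant(a, p) for a, p in zip(y_true, y_pred))
print(counts["TP"], counts["FP"], counts["FN"], counts["TN"])  # 2 1 1 2
```

Every metric on this page is some ratio of these four counts, so once you can produce them, you can compute everything else.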

💡 Why Accuracy Lies

On 99:1 imbalanced data, a model that always predicts "no" scores 99% accuracy while catching zero of the cases you actually care about. That's why precision and recall exist: they measure what accuracy hides.
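Here is that failure mode in a few lines of plain Python (illustrative data, not from a real dataset). The "always no" model produces zero true positives, so precision is guarded against division by zero:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall from paired labels; 1 = positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # no positives predicted
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [0] * 99 + [1]   # 99:1 imbalance, one real positive
y_pred = [0] * 100        # model that always says "no"

acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(acc)                            # 0.99 — looks great
print(precision_recall(y_true, y_pred))  # (0.0, 0.0) — completely useless
```

Accuracy says 99%; recall says the model caught 0% of the positives. On imbalanced problems, report precision and recall (or F1), not accuracy alone.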