TP (True Positives)
In the context of binary classification, True Positives (TP) are one of the four possible outcomes when evaluating the performance of a machine learning model or a classification algorithm. True Positives are the instances that the model correctly identifies as positive — that is, instances predicted as positive that actually belong to the positive class in the dataset.
Let's break down the components of True Positives:
Binary Classification: In binary classification problems, the goal is to classify instances into one of two classes, typically denoted as "positive" and "negative." For instance, in a medical diagnosis scenario, the classes could be "disease" and "no disease," and the goal is to correctly identify whether a patient has the disease (positive) or does not have the disease (negative).
Positive Class: The positive class is the class of interest in the classification task. It is the class for which we want to evaluate the model's ability to correctly identify instances. In the medical diagnosis example, the positive class would be "disease" since we are interested in correctly identifying patients with the disease.
True Positives (TP): True Positives occur when the model correctly predicts an instance as belonging to the positive class, and the instance is indeed from the positive class in the actual dataset. In other words, TP represents the number of correctly identified positive instances.
- TP = Number of instances correctly predicted as positive.
False Positives (FP): To understand True Positives better, it's essential to differentiate them from False Positives. False Positives occur when the model predicts an instance as positive, but it is actually from the negative class in the actual dataset. FP represents the number of instances that were incorrectly classified as positive when they should have been negative.
- FP = Number of instances incorrectly predicted as positive.
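The four outcome counts above can be computed directly by comparing predicted labels against actual labels. Here is a minimal sketch using a small hypothetical dataset, where 1 marks the positive class (e.g., "disease") and 0 the negative class:

```python
# Hypothetical actual labels and model predictions (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# Count each of the four outcomes by pairing actual and predicted labels.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # wrongly flagged positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negatives

print(tp, fp, fn, tn)  # → 3 1 1 3
```

In practice these counts are usually obtained from a library routine (for example, a confusion-matrix function), but the logic is exactly this pairwise comparison.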
True Positive Rate (TPR) or Sensitivity or Recall: The True Positive Rate, also known as Sensitivity or Recall, measures the model's ability to correctly identify positive instances relative to all the actual positive instances. It is defined as the ratio of True Positives to the total number of positive instances in the dataset.
- TPR = TP / (TP + FN) where FN = False Negatives (instances from the positive class that were incorrectly classified as negative).
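Given TP and FN counts, the formula above is a one-line computation. This sketch uses illustrative counts (TP = 3, FN = 1), not values from any real model:

```python
# Illustrative counts: 3 positives correctly identified, 1 positive missed.
tp, fn = 3, 1

# True Positive Rate (Sensitivity / Recall): share of actual positives found.
tpr = tp / (tp + fn)
print(tpr)  # → 0.75
```

A TPR of 0.75 means the model recovered 75% of the actual positive instances.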
Precision: Precision is another important metric used in classification evaluation. It measures the accuracy of the positive predictions made by the model. Precision is defined as the ratio of True Positives to the total number of instances predicted as positive (both correctly and incorrectly).
- Precision = TP / (TP + FP)
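Precision is computed the same way, but dividing by the total number of positive predictions rather than actual positives. Again using illustrative counts (TP = 3, FP = 1):

```python
# Illustrative counts: 4 positive predictions, of which 3 were correct.
tp, fp = 3, 1

# Precision: share of positive predictions that were actually positive.
precision = tp / (tp + fp)
print(precision)  # → 0.75
```

Note that precision and recall can differ substantially on the same model: a classifier that predicts "positive" for everything has perfect recall but poor precision.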
In summary, True Positives (TP) are the instances that a binary classification model correctly identifies as belonging to the positive class. Evaluating TP alongside the other outcomes — False Positives (FP), True Negatives (TN), and False Negatives (FN) — gives a complete picture of the model's performance and its ability to correctly classify instances. These metrics are crucial for understanding the strengths and weaknesses of a classification algorithm and for making informed decisions based on its performance.