MLClassifierMetrics

Metrics you use to evaluate a classifier’s performance.

Declaration

struct MLClassifierMetrics

Overview

Use MLClassifierMetrics to evaluate your model’s ability to distinguish between different categories when it’s classifying data.

You can determine the model’s accuracy using the classificationError metric. To see how the model is mislabeling or missing a particular category, use the precisionRecall metric. To identify specific cases where the model mistakes one label for another, use the confusion property.

Accuracy can be a misleading metric if your data is unbalanced, meaning the number of examples for some categories is much larger than for others. In that case, use precisionRecall or confusion instead.
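For example, the sketch below trains a classifier and inspects each of these metrics. The file path, the random-split parameters, and the targetColumn name are placeholders; substitute your own labeled data.

```swift
import CreateML
import Foundation

// Hypothetical training data; replace "reviews.csv" and "sentiment"
// with your own file and target column.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "reviews.csv"))
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)

let classifier = try MLClassifier(trainingData: trainingData,
                                  targetColumn: "sentiment")

// Evaluating on held-out data returns an MLClassifierMetrics instance.
let metrics = classifier.evaluation(on: testingData)

// Overall accuracy is 1 minus the classification error.
print("Accuracy: \(1.0 - metrics.classificationError)")

// Per-category precision and recall — more informative than
// accuracy when the categories are unbalanced.
print(metrics.precisionRecall)

// A confusion table showing which labels the model mistakes for which.
print(metrics.confusion)
```

For unbalanced data, start with the precisionRecall table rather than the accuracy figure, since a model can score high accuracy by always predicting the majority category.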

Topics

Understanding the model

Handling errors

Creating metrics

Describing metrics
