From Wikipedia, the free encyclopedia

The P4 metric [1][2] enables performance evaluation of a binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in a similar way to the F1 metric, but it addresses the criticisms leveled against F1 and may be perceived as its extension.

Like the other known metrics, P4 is a function of TP (true positives), TN (true negatives), FP (false positives) and FN (false negatives).

Justification

The key concept of P4 is to leverage the four key conditional probabilities:

- the probability that the sample is positive, provided the classifier result was positive (precision, PPV),
- the probability that the classifier result will be positive, provided the sample is positive (recall, TPR),
- the probability that the classifier result will be negative, provided the sample is negative (specificity, TNR),
- the probability that the sample is negative, provided the classifier result was negative (NPV).

The main assumption behind this metric is that a properly designed binary classifier should produce results for which all of the probabilities mentioned above are close to 1. P4 is designed so that it equals 1 only when all four probabilities equal 1, and it goes to zero when any of these probabilities goes to zero.

Definition

P4 is defined as the harmonic mean of the four key conditional probabilities:

P4 = 4 / (1/PPV + 1/TPR + 1/TNR + 1/NPV)

In terms of TP, TN, FP and FN it can be calculated as follows:

P4 = 4 TP × TN / (4 TP × TN + (TP + TN)(FP + FN))
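
A minimal Python sketch of the closed-form expression above (the function name p4_score is illustrative, not from the source):

    def p4_score(tp, tn, fp, fn):
        """P4: the harmonic mean of PPV, TPR, TNR and NPV,
        written in terms of the confusion-matrix counts."""
        denominator = 4 * tp * tn + (tp + tn) * (fp + fn)
        return 4 * tp * tn / denominator if denominator else 0.0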

Evaluation of binary classifier performance

Evaluating the performance of a binary classifier is a multidisciplinary concept. It spans from the evaluation of medical and psychiatric tests to machine learning classifiers in a variety of fields. As a result, many metrics are in use, often under several names, and some of them were defined independently.

Sources: [3][4][5][6][7][8][9][10][11]

Confusion matrix (actual condition vs. predicted condition), with total population = P + N:

  • True positive (TP), hit: actual positive predicted as positive
  • False negative (FN), type II error, miss, underestimation: actual positive predicted as negative
  • False positive (FP), type I error, false alarm, overestimation: actual negative predicted as positive
  • True negative (TN), correct rejection: actual negative predicted as negative
  • Predicted positive (PP) = TP + FP; predicted negative (PN) = FN + TN

Derived metrics:

  • Prevalence = P/(P + N)
  • True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
  • False negative rate (FNR), miss rate = FN/P = 1 − TPR
  • False positive rate (FPR), probability of false alarm, fall-out = FP/N = 1 − TNR
  • True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
  • Positive predictive value (PPV), precision = TP/PP = 1 − FDR
  • False discovery rate (FDR) = FP/PP = 1 − PPV
  • False omission rate (FOR) = FN/PN = 1 − NPV
  • Negative predictive value (NPV) = TN/PN = 1 − FOR
  • Informedness, bookmaker informedness (BM) = TPR + TNR − 1
  • Markedness (MK), deltaP (Δp) = PPV + NPV − 1
  • Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
  • Positive likelihood ratio (LR+) = TPR/FPR
  • Negative likelihood ratio (LR−) = FNR/TNR
  • Diagnostic odds ratio (DOR) = LR+/LR−
  • Accuracy (ACC) = (TP + TN)/(P + N)
  • Balanced accuracy (BA) = (TPR + TNR)/2
  • F1 score = 2 PPV × TPR/(PPV + TPR) = 2 TP/(2 TP + FP + FN)
  • Fowlkes–Mallows index (FM) = √(PPV × TPR)
  • Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
  • Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
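
The metrics above can all be read as plain functions of the confusion-matrix counts. The helper below is an illustrative sketch (names and selection of metrics are the editor's, not from the source), assuming nonzero denominators:

    def confusion_matrix_metrics(tp, tn, fp, fn):
        """Derive the basic rates and a few composite scores listed above
        from raw confusion-matrix counts (assumes nonzero denominators)."""
        p, n = tp + fn, tn + fp              # actual positives / negatives
        pp, pn = tp + fp, tn + fn            # predicted positives / negatives
        tpr, tnr = tp / p, tn / n            # recall, specificity
        ppv, npv = tp / pp, tn / pn          # precision, negative predictive value
        return {
            "TPR": tpr, "TNR": tnr, "PPV": ppv, "NPV": npv,
            "accuracy": (tp + tn) / (p + n),
            "balanced accuracy": (tpr + tnr) / 2,
            "F1": 2 * tp / (2 * tp + fp + fn),
            "informedness": tpr + tnr - 1,
            "markedness": ppv + npv - 1,
        }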

Properties of P4 metric

  • Symmetry: in contrast to the F1 metric, P4 is symmetric. It does not change its value when the dataset labeling is swapped, i.e. when positives are renamed negatives and negatives are renamed positives (see the sketch after this list).
  • Range: 0 ≤ P4 ≤ 1.
  • Achieving P4 ≈ 1 requires all four key conditional probabilities to be close to 1.
  • For P4 ≈ 0 it is sufficient that one of the four key conditional probabilities is close to 0.
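
The symmetry property can be checked numerically: relabelling the classes maps TP↔TN and FP↔FN, which leaves P4 unchanged but generally changes F1. A small illustrative sketch (the counts are arbitrary):

    def p4(tp, tn, fp, fn):
        return 4 * tp * tn / (4 * tp * tn + (tp + tn) * (fp + fn))

    def f1(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn)

    # Relabelling the classes maps TP <-> TN and FP <-> FN.
    tp, tn, fp, fn = 80, 30, 10, 5
    print(p4(tp, tn, fp, fn), p4(tn, tp, fn, fp))   # identical: P4 is symmetric
    print(f1(tp, fp, fn), f1(tn, fn, fp))           # 0.914 vs 0.800: F1 is not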

Examples: comparison with other metrics

Dependency table for selected metrics ("true" means the metric depends on the given conditional probability, "false" means it does not):

                  PPV (precision)   TPR (recall)   TNR (specificity)   NPV
  P4              true              true           true                true
  F1              true              true           false               false
  Informedness    false             true           true                false
  Markedness      true              false          false               true

Metrics that do not depend on a given probability are prone to misrepresentation when it approaches 0.

Example 1: Rare disease detection test

Let us consider a medical test aimed at detecting a rare disease. The population size is 100,000, and 0.05% of the population is infected. Test performance: 95% of all positive individuals are classified correctly (TPR = 0.95) and 95% of all negative individuals are classified correctly (TNR = 0.95). In such a case, due to the high population imbalance and in spite of the high test accuracy (0.95), the probability that an individual who has been classified as positive is in fact positive is very low:

PPV = (TPR × prevalence)/(TPR × prevalence + FPR × (1 − prevalence)) = (0.95 × 0.0005)/(0.95 × 0.0005 + 0.05 × 0.9995) ≈ 0.0094

And now we can observe how this low probability is reflected in some of the metrics:

  • Informedness / Youden index = TPR + TNR − 1 = 0.90 (does not reflect the low PPV)
  • Markedness = PPV + NPV − 1 ≈ 0.0094 (does reflect it)
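
A short sketch reproducing the numbers in this example (fractional counts are kept for clarity; values are rounded):

    # Rare-disease test: 100 000 people, 0.05 % prevalence, TPR = TNR = 0.95.
    population, prevalence = 100_000, 0.0005
    p = population * prevalence              # 50 infected
    n = population - p                       # 99 950 healthy
    tp, fn = 0.95 * p, 0.05 * p
    tn, fp = 0.95 * n, 0.05 * n

    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    print("PPV          ", round(ppv, 4))                                    # ~0.0094
    print("Informedness ", round(0.95 + 0.95 - 1, 2))                        # 0.90
    print("Markedness   ", round(ppv + npv - 1, 4))                          # ~0.0094
    print("F1           ", round(2 * tp / (2 * tp + fp + fn), 4))            # ~0.0186
    print("P4           ", round(4*tp*tn / (4*tp*tn + (tp+tn)*(fp+fn)), 4))  # ~0.0366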

Example 2: Image recognition - cats vs dogs

We are training a neural-network-based image classifier. We consider only two types of images: those containing dogs (labeled 0) and those containing cats (labeled 1), so the goal is to distinguish between cats and dogs. The classifier overpredicts in favor of cats (the "positive" samples): 99.99% of cats are classified correctly but only 1% of dogs are classified correctly. The image dataset consists of 100,000 images, 90% of which are pictures of cats and 10% of which are pictures of dogs. In such a situation, the probability that a picture containing a dog will be classified correctly is very low:

TNR = 0.01

Not all the metrics notice this low probability:

  • Informedness / Youden index = TPR + TNR − 1 = 0.0099 (does notice it)
  • Markedness = PPV + NPV − 1 ≈ 0.818 (does not)
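
The analogous computation for this example (illustrative; values are rounded):

    # Cats vs. dogs: 100 000 images, 90 % cats (positives, label 1),
    # TPR = 0.9999 for cats, TNR = 0.01 for dogs.
    p, n = 90_000, 10_000
    tp, fn = 0.9999 * p, 0.0001 * p          # 89 991 cats correct, 9 missed
    tn, fp = 0.01 * n, 0.99 * n              # 100 dogs correct, 9 900 misclassified

    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    print("TNR          ", 0.01)
    print("Informedness ", round(0.9999 + 0.01 - 1, 4))                      # 0.0099
    print("Markedness   ", round(ppv + npv - 1, 4))                          # ~0.818
    print("F1           ", round(2 * tp / (2 * tp + fp + fn), 4))            # ~0.948
    print("P4           ", round(4*tp*tn / (4*tp*tn + (tp+tn)*(fp+fn)), 4))  # ~0.039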

See also

References

  1. ^ Sitarz, Mikolaj (2022). "Extending F1 metric, probabilistic approach". arXiv:2210.11997 [cs.LG].
  2. ^ "P4 metric, a new way to evaluate binary classifiers".
  3. ^ Balayla, Jacques (2020). "Prevalence threshold (ϕe) and the geometry of screening curves". PLoS One. 15 (10). doi:10.1371/journal.pone.0240215.
  4. ^ Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010.
  5. ^ Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036. doi:10.1061/(ASCE)IS.1943-555X.0000512.
  6. ^ Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
  7. ^ Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
  8. ^ Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
  9. ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
  10. ^ Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 1-22. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
  11. ^ Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. doi:10.1016/j.aci.2018.08.003.