Development of a Wireless Sensor Network and IoT-based Smart Irrigation System
Table 1
The threshold metrics for classification evaluation.

| Metric | Formula | Evaluation focus |
|---|---|---|
| Accuracy (acc) | (tp + tn) / (tp + fp + fn + tn) | The ratio of correct predictions to the total number of instances evaluated. |
| Error rate (err) | (fp + fn) / (tp + fp + fn + tn) | The ratio of incorrect predictions to the total number of instances evaluated. |
| Sensitivity (sn) | tp / (tp + fn) | The fraction of positive patterns that are correctly classified. |
| Specificity (sp) | tn / (tn + fp) | The fraction of negative patterns that are correctly classified. |
| Precision (p) | tp / (tp + fp) | The fraction of patterns predicted as positive that are truly positive. |
| Recall (r) | tp / (tp + fn) | The fraction of positive patterns that are correctly classified. |
| F-measure (FM) | (2 · p · r) / (p + r) | The harmonic mean of precision and recall. |
| Geometric-mean (GM) | sqrt(sn · sp) | Maximizes the tp rate and the tn rate while keeping the two relatively balanced. |
| Averaged accuracy | (1/m) · Σi (tpi + tni) / (tpi + fpi + fni + tni) | The average effectiveness over all classes. |
| Averaged error rate | (1/m) · Σi (fpi + fni) / (tpi + fpi + fni + tni) | The average error over all classes. |
| Averaged precision | (1/m) · Σi tpi / (tpi + fpi) | The average of per-class precision. |
| Averaged recall | (1/m) · Σi tpi / (tpi + fni) | The average of per-class recall. |
| Averaged F-measure | (1/m) · Σi FMi | The average of per-class F-measure. |

Note: Ci—the i-th class of data; tpi—true positives for Ci; fpi—false positives for Ci; fni—false negatives for Ci; tni—true negatives for Ci; and m—the number of classes (macro-averaging, with the sums taken over i = 1, …, m).
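For a binary problem, the per-class counts collapse to a single confusion matrix, and the metrics in Table 1 follow directly from the four counts. A minimal Python sketch (illustrative only, using the table's symbols; the function name and example labels are hypothetical, not part of the system described here):

```python
from math import sqrt

def binary_metrics(y_true, y_pred):
    """Compute the threshold metrics of Table 1 for a binary problem
    (positive class encoded as 1, negative class as 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    n = tp + fp + fn + tn
    sn = tp / (tp + fn)   # sensitivity (identical to recall)
    sp = tn / (tn + fp)   # specificity
    p = tp / (tp + fp)    # precision
    r = sn                # recall
    return {
        "acc": (tp + tn) / n,
        "err": (fp + fn) / n,
        "sn": sn,
        "sp": sp,
        "p": p,
        "r": r,
        "FM": 2 * p * r / (p + r),  # harmonic mean of p and r
        "GM": sqrt(sn * sp),        # geometric mean of the two rates
    }

# Hypothetical example: 10 instances, 6 of them positive.
m = binary_metrics([1, 1, 1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
```

Note that acc + err = 1 by construction, so the two metrics carry the same information; they are listed separately in the table because different studies report one or the other.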