| Technique used | Pavement indicator | Data sources | Metrics | References | Strength | Weakness |
|---|---|---|---|---|---|---|
| Artificial neural network (ANN) | IRI | LTPP | R2, RMSE, MAE, MSE, CF, and VAF | Abdelaziz et al. [64] | Handles vast amounts of data and highly challenging problems, structure can be adapted to the parameters used, suitable for time-series problems | Expensive to train; requires long training times and massive data |
| | IRI | Observed | R2, RMSE, and MAE | Lin et al. [50], Mallika et al. [65] | | |
| | PCI | Observed | MAE, RMSE, and R2 | Shahriazari et al. [46], Jalal et al. [47] | | |
| Neuro-fuzzy model (NFM) | IRI | LTPP, observed | R2, RMSE, and correlation factor R | Soncim et al. [66], Nguyen et al. [32] | Suitable for complex data interactions, easy to scale, and converges quickly | Requires huge amounts of data; complex and difficult to debug |
| Regression | IRI | LTPP | R2, MSE, and RMSE | Elhadidy et al. [22], Piryonesi and El-Diraby [29] | Simple, requires a minimum number of parameters, suitable for classification and recognition tasks | Expensive; unable to handle multi-feature datasets and poor at representing extreme events |
| | PCI | Observed | R2 | Ahmed et al. [63] | | |
| Support vector machine (SVM) | IRI | Observed | Random output error | Roberts and Attoh‐Okine [57] | Training is simple and relatively easy; suitable for high-dimensional data | Requires high memory and more time for model training |
| | IRI | LTPP | MSE, MAE, and RMSE | Kargah-Ostadi and Stoffels [67] | | |
R2: coefficient of determination, RMSE: root mean squared error, MAPE: mean absolute percentage error, CF: correction factor, VAF: variance account for, MAE: mean absolute error, RE: relative error, MSE: mean squared error, and SDMSE: standard deviation of mean squared error.
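To make the goodness-of-fit metrics in the table concrete, the following is a minimal sketch (not taken from any of the cited studies) that fits a simple least-squares regression to synthetic pavement data and then computes R2, MSE, RMSE, MAE, and MAPE as defined above. The predictor (pavement age) and the IRI values are illustrative placeholders, not real LTPP or observed data.

```python
import math
import random

random.seed(0)
n = 100

# Synthetic data: IRI (m/km) grows roughly linearly with pavement age (years).
age = [random.uniform(0, 20) for _ in range(n)]
iri = [1.0 + 0.08 * a + random.gauss(0, 0.1) for a in age]

# Closed-form simple linear regression (ordinary least squares).
mean_x = sum(age) / n
mean_y = sum(iri) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(age, iri))
sxx = sum((x - mean_x) ** 2 for x in age)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
pred = [intercept + slope * x for x in age]

# Metrics from the table footnote.
residuals = [y - p for y, p in zip(iri, pred)]
mse = sum(r * r for r in residuals) / n                     # mean squared error
rmse = math.sqrt(mse)                                       # root mean squared error
mae = sum(abs(r) for r in residuals) / n                    # mean absolute error
mape = 100 * sum(abs(r / y) for r, y in zip(residuals, iri)) / n  # percentage error
r2 = 1 - sum(r * r for r in residuals) / sum((y - mean_y) ** 2 for y in iri)

print(f"R2={r2:.3f} MSE={mse:.4f} RMSE={rmse:.4f} MAE={mae:.4f} MAPE={mape:.2f}%")
```

The same metric formulas apply unchanged to predictions from ANN, NFM, or SVM models; only the fitting step differs.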