Complexity and Robustness Trade-Off for Traditional and Deep Models 2022
1National University of Computer and Emerging Sciences, Faisalabad, Pakistan
2Innopolis University, Innopolis, Russia
3University of Messina, Messina, Italy
Description
Conventional machine learning, as well as deep learning, has attained remarkable results in many real-world applications, contingent on the availability of large quantities of high-quality labeled training examples. However, acquiring reliable labeled training examples is a major challenge for practitioners of deep models, because such models require large numbers of labeled examples for learning.
The labeling process for real-world applications is complex, time-intensive, and expensive. There is therefore a need to develop learning strategies that help practitioners acquire reliable, informative, and heterogeneous labeled training examples through machine-to-machine interaction, without human involvement. Moreover, a large number of labeled training examples leads to more complex models. Complex models are harder to interpret and reproduce, and they carry a greater risk of overfitting, which ultimately produces biased results.
Thus, this Special Issue aims to present a collection of new trends in learning strategies that limit complexity and enhance the generalization performance of deep models. Original research and review articles are welcome.
Potential topics include but are not limited to the following:
- Complexity and robustness
- Learning strategies
- Multi-level and multi-sensor imaging
- Traditional/multispectral/hyperspectral imaging
- IoT and security
- Domain adaptation and randomization
- Statistical learning
- Fuzzy logic for interaction