| Hyperparameter | Search space |
| --- | --- |
| Number of neurons in 1st layer | 13, 25, 50 |
| Number of neurons in final layer | 1 |
| Number of hidden layers | 1, 2, 3 |
| Optimizer | Adam, Nadam, SGD, RMSprop |
| Activation functions | ReLU, Tanh, Sigmoid, Softmax |
| Recurrent activation functions | Sigmoid, ReLU, Tanh |
| Dropout | 0, 0.1, 0.2 |
| Batch size | 64, 100, 200 |
| Number of iterations | 15, 20, 35, 40 |
| Kernel regularizer L1 | 1 × 10⁻⁴, 1 × 10⁻⁵, 1 × 10⁻⁶ |
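A search space like the one in the table is commonly expressed in code as a dictionary mapping each hyperparameter to its candidate values, from which configurations are drawn. The following is a minimal sketch of that idea using random sampling; the dictionary keys, the `sample_config` helper, and the choice of random search (rather than, say, grid search) are illustrative assumptions, not taken from the source.

```python
import random

# Candidate values copied from the search-space table above.
# Key names are illustrative, not from the original source.
SEARCH_SPACE = {
    "neurons_first_layer": [13, 25, 50],
    "neurons_final_layer": [1],
    "hidden_layers": [1, 2, 3],
    "optimizer": ["Adam", "Nadam", "SGD", "RMSprop"],
    "activation": ["relu", "tanh", "sigmoid", "softmax"],
    "recurrent_activation": ["sigmoid", "relu", "tanh"],
    "dropout": [0, 0.1, 0.2],
    "batch_size": [64, 100, 200],
    "iterations": [15, 20, 35, 40],
    "kernel_regularizer_l1": [1e-4, 1e-5, 1e-6],
}


def sample_config(space, seed=None):
    """Draw one random configuration: one value per hyperparameter."""
    rng = random.Random(seed)
    return {name: rng.choice(values) for name, values in space.items()}


config = sample_config(SEARCH_SPACE, seed=0)
```

Each call to `sample_config` yields one candidate configuration; a tuning loop would train a model per configuration and keep the one with the best validation score.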