Alex Honchar

Hello everyone! First of all, I am thankful to all of you who are reading my blog, subscribing, and sharing your opinions. It really makes me feel that what I do is not pointless and helps someone ❤.

In the last five tutorials we discussed financial forecasting with artificial neural networks: we compared different architectures for financial time series forecasting, learned how to do this forecasting properly with correct data preprocessing and regularization, performed forecasts based on multivariate time series, produced really nice results for volatility forecasting, and implemented custom loss functions. In the most recent one we set up an experiment using data from different sources and solving two tasks with a single neural network.

I think you have noticed that I usually take the network architecture for granted and don’t explain why I choose this particular number of layers, this particular activation function, this loss function, etc. This is a really tricky question. Yes, it’s “normal” in the deep learning community to take **ReLU** (or a more modern 2017 alternative like **ELU** or **SELU**) as the activation function and be happy with it, but we rarely stop to ask whether it’s the right choice. As for the number of layers or the optimizer’s learning rate, we just take something standard. Today I want to talk about how to automate the process of making these choices.
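To make the idea concrete, here is a minimal sketch of one way to automate such choices: plain random search over a small grid of options. The search space, the parameter names, and the toy objective below are all hypothetical placeholders; in a real setup the objective would build and train the network with the sampled hyperparameters and return its validation loss.

```python
import random

# Hypothetical search space: a few candidate values per hyperparameter.
SEARCH_SPACE = {
    "activation": ["relu", "elu", "selu"],
    "n_layers": [1, 2, 3],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_params(space, rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(values) for name, values in space.items()}

def toy_objective(params):
    """Stand-in for 'train the network, return validation loss'.
    A real objective would fit the model with these params and
    evaluate it on held-out data."""
    # Pretend deeper nets with a moderate learning rate do better.
    return abs(params["learning_rate"] - 1e-3) + 1.0 / params["n_layers"]

def random_search(space, objective, n_trials=20, seed=42):
    """Try n_trials random configurations, keep the best one."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = sample_params(space, rng)
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best, loss = random_search(SEARCH_SPACE, toy_objective)
print(best, loss)
```

Swapping the toy objective for a real training run is the only change needed; libraries like Hyperopt follow the same loop but sample configurations more cleverly than uniformly at random.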

#### Previous posts:

- Simple time series forecasting (and mistakes done)
- Correct one-dimensional time series forecasting + backtesting
- Multivariate time series forecasting
- Volatility forecasting and custom losses
- Multitask and multimodal learning

As always, the code is available on GitHub.