ShrutiAppiah / crypto-forecasting-with-neuralnetworks

Feedforward and LSTM neural networks forecast our favourite financial markets. 🌊

Forecasting with feedforward and LSTM neural networks

Paper & associated research

Read the research paper.

Highlights

Noise can sometimes be good!

  • Noise helps optimizers escape saddle points and shallow local minima: at a saddle point the gradient is nearly zero, so a noiseless optimizer can stall there. A small sketch follows below.
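
A minimal sketch, not from this repo, of the idea: perturb each gradient with Gaussian noise before taking the update step. The function name and hyperparameters are illustrative.

```python
import numpy as np

def noisy_sgd_step(params, grad, lr=0.01, noise_std=0.01, rng=None):
    """One SGD step with additive Gaussian gradient noise (illustrative)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, noise_std, size=grad.shape)  # zero-mean perturbation
    return params - lr * (grad + noise)
```

At a saddle point the true gradient is roughly zero, so the noise term is what nudges the parameters off it.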

Vanishing gradients are a problem. LSTM units save the day.

  • LSTM (Long Short-Term Memory) networks are designed to capture long-term dependencies: gated cell states carry information across many timesteps, so gradients can flow back through time without vanishing the way they do in plain recurrent networks. Read more about gradient descent. A minimal model sketch follows below.
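
A minimal sketch, assuming tf.keras and a univariate price series windowed into shape (samples, 60 timesteps, 1 feature); the layer sizes are illustrative, not the paper's architecture:

```python
import tensorflow as tf

# One recurrent layer reads 60 past observations; a dense head emits
# the next-step forecast.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(60, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```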

Adam optimizers update weights using estimates of both first- and second-order moments of the gradient

  • The Adam optimizer combines ideas from Adaptive Gradient (AdaGrad) and Root Mean Square Propagation (RMSProp).
  • The first-order moment of the gradient is its mean; the second-order (uncentered) moment is the mean of its square, which relates to the variance.
  • AdaGrad is great at handling sparse gradients: it accumulates the squares of all past gradients to scale each parameter's learning rate.
  • RMSProp instead keeps an exponentially decaying average of those squared gradients, so recent history dominates.
  • Adam tracks decaying averages of both the gradients (first moment) and their squares (second moment), with bias correction, producing more sensible per-parameter learning rates at each iteration; see the sketch below.
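
A minimal NumPy sketch of a single Adam update, following Kingma & Ba (2015); the variable names are illustrative:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update at step t (1-indexed)."""
    m = b1 * m + (1 - b1) * grad       # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * grad**2    # second moment: running mean of squared gradients
    m_hat = m / (1 - b1**t)            # bias correction for the zero initialisation
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w, m, v
```

Dividing by the square root of the second moment shrinks the step for parameters with consistently large or noisy gradients, which is where the more sensible learning rates come from.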

Overview

Research Summary

License

License: MIT

Copyright (c) 2018 Shruti Appiah

