Anthony Polloreno, Ph.D.

Research Engineer

The Impact of Noise on Recurrent Neural Networks II

In this section we consider the simulation of the echo state networks discussed in the last post. This is an oddly constrained problem with a few genuine design decisions. Dealing with the variable training length is annoying: I experimented with this for a while (we explore it in the appendix) before discovering that a much better approach is to simulate an ensemble of reservoirs. In principle, each reservoir size (and each reservoir within each ensemble at each size) should have its own training length so that we neither over- nor under-fit. We still see an improvement with reservoir size if we ignore this, so for now we do. Simulating an ensemble also gives us a clear dimension to parallelize over. To do this, we need to chunk the time data, because loading the entire sequence onto the GPU takes up memory that could otherwise be used for simulation and for storing neural activations. A sketch of this batched, chunked simulation follows.
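To make this concrete, here is a minimal sketch in JAX of what such an ensemble simulation might look like. The shapes, hyperparameters, and names (make_reservoir, run_chunk, run_ensemble_chunk) are assumptions for illustration, not the code used to produce the results in this post.

```python
import numpy as np
import jax
import jax.numpy as jnp

def make_reservoir(rng, n_neurons, spectral_radius=0.9):
    """Random recurrent weights rescaled to a target spectral radius."""
    W = rng.standard_normal((n_neurons, n_neurons))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.standard_normal(n_neurons)
    return jnp.asarray(W), jnp.asarray(w_in)

def run_chunk(x0, inputs, W, w_in):
    """Drive one reservoir over one chunk of the input with lax.scan."""
    def step(x, u):
        x_new = jnp.tanh(W @ x + w_in * u)
        return x_new, x_new
    return jax.lax.scan(step, x0, inputs)  # (final state, per-step states)

# vmap turns the single-reservoir update into an ensemble update: one
# (W, w_in) pair per ensemble member, with the input shared across members.
run_ensemble_chunk = jax.jit(jax.vmap(run_chunk, in_axes=(0, None, 0, 0)))

n_ens, n_neurons, chunk_len = 32, 100, 1_000
rng = np.random.default_rng(0)
Ws, w_ins = map(jnp.stack, zip(*[make_reservoir(rng, n_neurons)
                                 for _ in range(n_ens)]))
signal = jnp.asarray(rng.standard_normal(10 * chunk_len))

# Feed the sequence through in chunks, pulling activations back to the host
# so the GPU only ever holds one chunk of neural activations at a time.
x = jnp.zeros((n_ens, n_neurons))
states = []
for t in range(0, signal.shape[0], chunk_len):
    x, s = run_ensemble_chunk(x, signal[t:t + chunk_len], Ws, w_ins)
    states.append(np.asarray(s))  # shape (n_ens, chunk_len, n_neurons)
states = np.concatenate(states, axis=1)
```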

To remind ourselves of our final goal: we are interested in simulating echo state networks and evaluating their performance when the products of the output signals are also available to us. The final twist is that we want to understand how noise impacts the performance of these networks. Noise is abundant, and in digital logic it is normally protected against with error-correcting codes; however, if we are interested in building hardware that natively runs machine learning algorithms, there are several reasons why noise may become relevant. First, if the bits per float become small enough, we will see rounding error, especially in recurrent settings where errors have a chance to accumulate. Second, if the hardware itself is physical, or thermodynamic in any sense, thermal noise can cause fluctuations around the expected behavior of the circuits. Third, if the network is observing data from sources such as sensors, those sensors will often be noisy from picking up undesired signals. In this sense, this is an interesting problem for any real-world setting that involves randomness.
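As a quick recap of the last post, "having the output product signals available" amounts to augmenting the readout features with all pairwise products of the reservoir outputs. A minimal sketch, under assumed shapes (the name with_products is ours, for illustration):

```python
import numpy as np

def with_products(states):
    """Append all pairwise products s_i * s_j (i <= j) to the reservoir
    outputs. `states` has shape (timesteps, n_neurons); the result has
    shape (timesteps, n_neurons + n_neurons * (n_neurons + 1) // 2)."""
    i, j = np.triu_indices(states.shape[1])
    return np.concatenate([states, states[:, i] * states[:, j]], axis=1)
```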

While this is not the focus of this notebook, we note in passing that this is often the setting in reinforcement learning, for example in the REINFORCE algorithm (Williams, Ronald J. "Simple statistical gradient-following algorithms for connectionist reinforcement learning." Machine Learning 8 (1992): 229-256.), where the data gathered about the world is used to evaluate a gradient in expectation, which is likewise a continuous, rather than discrete, value. REINFORCE specifically is a policy gradient method for training policy networks. Given a parameterized policy, the algorithm approximates the gradient by sampling from the policy and using the sampled returns to estimate the gradient. This sampling-based estimation lets REINFORCE update the policy parameters in the direction that, on average, increases the probability of actions that lead to higher returns. That said, any estimate of the gradient is imperfect: the imperfections come from finite-sample effects when the environment is stochastic, and from systematic errors when the policy is a poor representation of the optimal policy. In this notebook we take a much simpler approach and simply add Gaussian noise to each value output by the echo state network, as sketched below.
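Concretely, the noise model amounts to something like the following sketch (the function name and noise_std value are assumptions for illustration): independent, identically distributed Gaussian noise added to every value the reservoir outputs, before the readout is trained.

```python
import numpy as np

def add_output_noise(states, noise_std, rng):
    """Corrupt every reservoir output with i.i.d. additive Gaussian noise."""
    return states + noise_std * rng.standard_normal(states.shape)

rng = np.random.default_rng(0)
noisy = add_output_noise(states, noise_std=0.1, rng=rng)  # `states` as above
```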

Check out the next notebook here!

Acknowledgements

A special thanks to Alex Meiburg, André Melo and Eric Peterson for feedback on this post!