
Building Recurrent Neural Networks in Tensorflow


Introduction

In the previous blog posts we have seen how we can build Convolutional Neural Networks in Tensorflow and how we can use Stochastic Signal Analysis techniques to classify signals and time-series. In this blog post, let's have a look at how we can build Recurrent Neural Networks in Tensorflow and use them to classify signals.

 

1. Introduction to Recurrent Neural Networks

Recurrent Neural Networks (RNNs) detect features in sequential data (e.g. time-series data). Examples of applications which can be built with RNNs are anomaly detection in time-series data, classification of ECG and EEG data, stock market prediction, speech recognition, sentiment analysis, etc.

This is done by unrolling the data into N different copies of itself (if the data consists of N time steps). In this way, the input data at the previous time steps t_{n-1}, t_{n-2}, ..., t_0 can be used when the data at time step t_n is evaluated. If the data at the previous time steps is somehow correlated with the data at the current time step, these correlations are remembered; otherwise they are forgotten.

By unrolling the data, the weights of the Neural Network are shared across all of the time steps, and the RNN can generalize beyond the example seen at the current timestep, and beyond sequences seen in the training set.

This is a very short description of how an RNN works. For people who want to know more, here is some more reading material to get you up to speed. For now, what I would like you to remember is that Recurrent Neural Networks can learn whether there are temporal dependencies in the sequential data and, if there are, which dependencies / features can be used to classify the data. An RNN is therefore ideal for the classification of time-series, signals and text documents.

So, let's start with implementing RNNs in Tensorflow and using them to classify signals.

 

 

2. Loading the Data

In this blog post we will work with the CPU-friendly Human Activity Recognition Using Smartphones dataset. This dataset contains measurements done by 30 people between the ages of 19 and 48. These people had a smartphone placed on the waist while performing one of the following six activities:

  • walking,
  • walking upstairs,
  • walking downstairs,
  • sitting,
  • standing or
  • laying.

During these activities, sensor data is recorded at a constant rate of 50 Hz. The signals are cut into fixed-width windows of 2.56 sec with 50% overlap. Since these windows of 2.56 sec are sampled at 50 Hz, they contain 128 samples each. For an illustration of this, see Figure 1a.

The smartphone measures three-axial linear body acceleration, three-axial linear total acceleration and three-axial angular velocity. So per measurement, the signal has nine components in total (see Figure 1b).

 

Figure 1a. A plot of the first component (body acceleration measured in the x-direction) of the signal.
Figure 1b. A 3D plot of the nine different components of a signal.

 

 

The dataset is already split into a training and a test part, so we can immediately load the signals into two different numpy ndarrays containing the training part and the test part.
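Below is a minimal sketch of how this loading step could look, assuming the UCI HAR dataset has been extracted into a local folder named "UCI HAR Dataset" with its standard layout; the helper names load_signals and load_labels are illustrative, not taken from the original code.

```python
import numpy as np

# The nine signal components as they appear in the 'Inertial Signals'
# folder of the UCI HAR dataset (assumed standard layout).
SIGNAL_NAMES = [
    "body_acc_x", "body_acc_y", "body_acc_z",
    "body_gyro_x", "body_gyro_y", "body_gyro_z",
    "total_acc_x", "total_acc_y", "total_acc_z",
]

def load_signals(folder, subset):
    """Stack the nine components into an ndarray of shape (num_signals, 128, 9)."""
    components = [
        np.loadtxt("{0}/{1}/Inertial Signals/{2}_{1}.txt".format(folder, subset, name))
        for name in SIGNAL_NAMES                      # each file has shape (num_signals, 128)
    ]
    return np.transpose(np.array(components), (1, 2, 0))

def load_labels(folder, subset):
    """Load the activity labels (1-6) and convert them to one-hot vectors."""
    labels = np.loadtxt("{0}/{1}/y_{1}.txt".format(folder, subset), dtype=np.int32)
    return np.eye(6)[labels - 1]

train_signals = load_signals("UCI HAR Dataset", "train")   # (7352, 128, 9)
train_labels = load_labels("UCI HAR Dataset", "train")     # (7352, 6)
test_signals = load_signals("UCI HAR Dataset", "test")     # (2947, 128, 9)
test_labels = load_labels("UCI HAR Dataset", "test")       # (2947, 6)
```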

 

 

The number of signals in the training set is 7352, and the number of signals in the test set is 2947. As we can see in Figure 2, each signal has a length of 128 samples and 9 different components, so numerically it can be considered as an array of size 128 x 9.

Figure 2. A single signal can be represented by an array of size 128 x 9.

 

 

3. Recurrent Neural Networks in Tensorflow

As we have also seen in the previous blog posts, our Neural Network consists of a tf.Graph() and a tf.Session(). The tf.Graph() contains all of the computational steps required for the Neural Network, and the tf.Session() is used to execute these steps.

The computational steps defined in the tf.Graph() can be divided into four main parts (sketched in code below the list):

  1. We initialize placeholders which are filled with batches of training data during the run.
  2. We define the RNN model and use it to calculate the output values (logits).
  3. The logits are used to calculate a loss value, which then
  4. is used in an Optimizer to optimize the weights of the RNN.
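A minimal sketch of these four steps could look as follows; the hyperparameter values and the name rnn_model (one of the model functions defined in section 3.1 and onwards) are illustrative assumptions rather than the exact original code.

```python
import tensorflow as tf

num_steps = 128      # time steps per signal
num_components = 9   # components per time step
num_classes = 6      # activities
num_hidden = 64      # hidden units in the RNN cell (illustrative value)
learning_rate = 0.001

graph = tf.Graph()
with graph.as_default():
    # 1. Placeholders that are filled with batches of training data during the run.
    X = tf.placeholder(tf.float32, [None, num_steps, num_components])
    y = tf.placeholder(tf.float32, [None, num_classes])

    # 2. The RNN model calculates the output values (logits).
    logits = rnn_model(X, num_hidden, num_classes)

    # 3. The logits are compared with the true labels to calculate a loss value.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))

    # 4. The loss value is used in an Optimizer to optimize the weights of the RNN.
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)

    # Accuracy, for evaluating the test set.
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
```

Inside a tf.Session(), train_op is then repeatedly run while feeding batches of training data into the placeholders.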

 

 

As you can see, there are different RNN Models and optimizers to choose from.

GradientDescentOptimizer is a vanilla (simple) implementation of Stochastic Gradient Descent, while other implementations like the AdagradOptimizer, MomentumOptimizer and AdamOptimizer dynamically adapt the learning rate to the parameters, resulting in a more computationally intensive process with better results. For a good explanation of the differences between all the different optimizers, have a look at Sebastian Ruder's blog.
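For illustration, swapping the optimizer only requires changing one line in the graph; all of the constructors below exist in Tensorflow's tf.train module (learning_rate and loss are assumed to be defined as in the sketch above):

```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate)        # vanilla SGD
# optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
# optimizer = tf.train.AdagradOptimizer(learning_rate)
# optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
```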

 

Besides the different types of optimizers, Tensorflow also contains different flavours of RNNs.
We can choose from different types of cells and wrappers and use them to construct different types of Recurrent Neural Networks.

The basic types of cells are BasicRNNCell, GRUCell, LSTMCell and MultiRNNCell. These can be placed inside a static_rnn, dynamic_rnn or static_bidirectional_rnn container.

 

In Figure 3 we can see (on the left side) a schematic overview of the processing steps for constructing an RNN model, together with (on the right side) the lines of code accompanying these steps.

 

Figure 3. A schematic overview of a Recurrent Neural Network implemented in Tensorflow.

 

As you can see, we first split the data into a list of N different arrays with tf.unstack(). Then the type of cell is chosen and passed into the recurrent neural network together with the split data.
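As a small illustrative example (with the shapes of our dataset), tf.unstack() along the time axis turns the 3D input Tensor into a list with one 2D Tensor per time step:

```python
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 128, 9])   # (batch_size, time steps, components)
inputs = tf.unstack(X, axis=1)                   # list of 128 tensors of shape (batch_size, 9)
print(len(inputs), inputs[0].shape)              # 128 (?, 9)
```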

Now that we have schematically seen how we can create an RNN model, let's have a look at how we can create the different types of models in more detail.

 

 

3.1 Building the model for an RNN

Above, we have seen what the computational steps of the Neural Network consist of. But we have not yet seen the contents of our rnn_model, lstm_rnn_model, bidirectional_lstm_rnn_model, twolayer_lstm_rnn_model or gru_rnn_model. Let's have a look at how these models are constructed in more detail in the sections below.
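Below is a minimal sketch of what such an rnn_model could look like, assuming the input Tensor has shape (batch_size, 128, 9) as described above; the weight initialization details are illustrative.

```python
def rnn_model(X, num_hidden, num_classes):
    # Split the (batch_size, 128, 9) tensor into a list of 128 tensors
    # of shape (batch_size, 9), one per time step.
    inputs = tf.unstack(X, axis=1)

    # The most basic cell: a plain RNN cell without LSTM gating.
    cell = tf.nn.rnn_cell.BasicRNNCell(num_hidden)

    # static_rnn unrolls the cell over all 128 time steps and returns
    # a list of 128 outputs plus the final state.
    outputs, state = tf.nn.static_rnn(cell, inputs, dtype=tf.float32)

    # Only the output at the last time step is used for classification,
    # since it contains information from all previous time steps.
    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    logits = tf.matmul(outputs[-1], W) + b
    return logits
```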

 

As you can see, we first split the Tensor containing the data (of size (batch_size, 128, 9)) into a list of 128 Tensors of size (batch_size, 9) each. This list is used, together with a BasicRNNCell, as input for the static_rnn, which gives us a list of outputs (also of length 128). The last output in this list (the last time step) contains information from all previous time steps, so this is the output we use to classify the signal.

BasicRNNCell is the most basic, vanilla cell present in Tensorflow. It is a plain implementation of an RNN cell and does not have the LSTM mechanism that BasicLSTMCell has. The accuracy you can achieve with BasicLSTMCell is therefore higher than with BasicRNNCell.

 

 

3.2 From BasicRNNCell to BasicLSTMCell (and beyond)

Since it does not have LSTM implemented, BasicRNNCell has its limitations. Instead of a BasicRNNCell we can use a BasicLSTMCell or an LSTMCell. Both are comparable, but an LSTMCell has some additional options like peephole connections, clipping of cell values, etc.
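A sketch of an lstm_rnn_model, identical to the rnn_model above except for the cell type; the LSTMCell line with peepholes and cell clipping is shown commented out as an illustrative alternative.

```python
def lstm_rnn_model(X, num_hidden, num_classes):
    inputs = tf.unstack(X, axis=1)

    # BasicLSTMCell adds the LSTM gating mechanism to the basic RNN cell.
    cell = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)
    # LSTMCell offers extra options such as peepholes and cell clipping:
    # cell = tf.nn.rnn_cell.LSTMCell(num_hidden, use_peepholes=True, cell_clip=10.0)

    outputs, state = tf.nn.static_rnn(cell, inputs, dtype=tf.float32)

    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    return tf.matmul(outputs[-1], W) + b
```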

 

 

 

3.3 GRUCell: A Gated Recurrent Unit Cell

Besides BasicRNNCell and BasicLSTMCell, Tensorflow also contains GRUCell, which is an implementation of the Gated Recurrent Unit, proposed in 2014 by Kyunghyun Cho et al.
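A gru_rnn_model sketch would only differ from the previous models in the cell that is constructed:

```python
def gru_rnn_model(X, num_hidden, num_classes):
    inputs = tf.unstack(X, axis=1)

    # A Gated Recurrent Unit cell (Cho et al., 2014).
    cell = tf.nn.rnn_cell.GRUCell(num_hidden)

    outputs, state = tf.nn.static_rnn(cell, inputs, dtype=tf.float32)

    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    return tf.matmul(outputs[-1], W) + b
```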

 

 

3.4 Bi-directional LSTM RNN

The vanilla RNN and LSTM RNN models we have seen so far assume that the data at step t only depends on 'past' events. A bidirectional LSTM RNN assumes that the output at step t can also depend on the data at future steps. This is not so strange if you think about applications in text analytics or speech recognition: subjects often precede verbs, adjectives precede nouns, and in speech recognition the meaning of the current sound may depend on the next few sounds.

To implement a bidirectional RNN, two BasicLSTMCells are used: one looks for temporal dependencies in the forward direction and the other for dependencies in the backward direction.
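A sketch of a bidirectional_lstm_rnn_model using static_bidirectional_rnn; note that the forward and backward outputs are concatenated, so the output layer assumes a size of 2 * num_hidden.

```python
def bidirectional_lstm_rnn_model(X, num_hidden, num_classes):
    inputs = tf.unstack(X, axis=1)

    # One cell processes the sequence forwards, the other backwards.
    cell_fw = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)
    cell_bw = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # The outputs of the forward and backward cells are concatenated,
    # so each output has size 2 * num_hidden.
    outputs, _, _ = tf.nn.static_bidirectional_rnn(cell_fw, cell_bw, inputs,
                                                   dtype=tf.float32)

    W = tf.Variable(tf.truncated_normal([2 * num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    return tf.matmul(outputs[-1], W) + b
```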

 

 

 

3.5 Two-layered RNN

We have seen how we can implement a bi-directional LSTM by using two LSTM cells, where one looks for sequential dependencies in the forward direction and the other in the backward direction. You can also place two LSTM cells on top of each other, simply to increase the representational power of the neural network.
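A sketch of a twolayer_lstm_rnn_model, where two BasicLSTMCells are wrapped in a MultiRNNCell:

```python
def twolayer_lstm_rnn_model(X, num_hidden, num_classes):
    inputs = tf.unstack(X, axis=1)

    # Two LSTM cells stacked on top of each other: the output of the first
    # layer is fed as input into the second layer.
    cells = [tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)
             for _ in range(2)]
    stacked_cell = tf.nn.rnn_cell.MultiRNNCell(cells)

    outputs, state = tf.nn.static_rnn(stacked_cell, inputs, dtype=tf.float32)

    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    return tf.matmul(outputs[-1], W) + b
```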

 

 

 

3.6 Multi-layered RNN

In this RNN network, n layers of RNN cells are stacked on top of each other. The output of each layer is fed into the input of the next layer, and this allows the RNN to look for temporal dependencies hierarchically. With each layer, the representational power of the Neural Network increases (in theory).
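A sketch of such a multi-layered model, where the num_layers parameter (an illustrative name) controls how many cells are stacked inside the MultiRNNCell:

```python
def multilayer_lstm_rnn_model(X, num_hidden, num_classes, num_layers):
    inputs = tf.unstack(X, axis=1)

    # Stack num_layers LSTM cells; the output of each layer is the input
    # of the next one.
    cells = [tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)
             for _ in range(num_layers)]
    stacked_cell = tf.nn.rnn_cell.MultiRNNCell(cells)

    outputs, state = tf.nn.static_rnn(stacked_cell, inputs, dtype=tf.float32)

    W = tf.Variable(tf.truncated_normal([num_hidden, num_classes], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]))
    return tf.matmul(outputs[-1], W) + b
```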

 

 

The num_layers parameter determines how many layers are used to look for the temporal dependencies in the data. The more layers you have, the higher the representational power of the RNN.

 

 

4. Classification results

We have seen how we can build several different types of Recurrent Neural Networks. The question then is: how do these RNNs perform in practice?
Does the accuracy really increase with the number of layers or the number of hidden units?
What is the effect of the chosen optimizer and the learning rate?

In each image you can see the final accuracy on the test set for different learning rates, models, optimizers and numbers of hidden units. You can click on each image for a more detailed graph of the training and test accuracies.

 

 

 

 

5. Conclusion and Final Words

In this blog post we have seen how we can build a Recurrent Neural Network in Tensorflow, from a vanilla RNN model to an LSTM RNN, GRU RNN, bi-directional or multi-layered RNN. Such Recurrent Neural Networks are powerful tools which can be used for the analysis of time-series data or other data which is sequential in nature (like text or speech).

What I have noticed so far is:

  • The most important factor in achieving high accuracy values is the chosen learning rate. It should be carefully tuned, first with large steps and then with finer steps.
  • AdamOptimizer usually performs best.
  • More hidden units is not necessarily better. In any case, if you change the number of hidden units, you probably need to find the optimal value for the learning rate again.
  • For the type of RNN, more layers is also not necessarily better. BasicRNNCell has the worst performance, but apart from BasicRNNCell there is no single implementation which outperforms all others in all regards. If you implement an RNN containing a BasicLSTMCell, carefully tune the learning rate and add some L2 regularization, it should be good enough for most applications.
  • I am not that impressed with RNNs in general. The same accuracy values can be achieved with simple stochastic signal analysis techniques with much less effort. With stochastic analysis techniques you also have the benefit of knowing what the characteristic feature of each type of signal is.

 

 



 


3 thoughts on "Building Recurrent Neural Networks in Tensorflow"

  1. Hi Ahmet,

    Searching the web to understand the unstacking of 3D tensors in TF, I came across your blog post. Although it helped me to solve the problem, I think that Figure 3 is wrong.

    Let me explain with an example:

    a = array of shape (20, 3, 2)
    If unstacking with axis=1, then num = shape[axis] = shape[1] = 3.
    So, as the second index gets chipped, and as the docs state:
    "Unpacks num tensors from value by chipping it along the axis dimension." That is, we get 3 tensors of shape (20, 2).

    So, in the image you show the tensors as if they had been chipped along axis=2, which is not the case. The real image should take the first column of each stacked tensor (in my example, 2 columns, as there are 2 stacked arrays) and build a new array of shape (20, 2); the same with all the 2nd columns and finally the same with all the 3rd columns, resulting in a list of three tensors.

    I hope you understand my explanation.

    Thanks for the blog post; it really helped me deepen my understanding of these notions.
