Searching the web to understand how to unstack 3D tensors in TensorFlow, I came across your blog post. Although it helped me solve my problem, I think that Figure 3 is wrong.

Let me explain with an example:

a = array of shape (20, 3, 2)

If unstacking with axis=1, then num = shape[axis] = shape[1] = 3.

So, as the second index gets chipped away, and as the docs state: "Unpacks `num` tensors from `value` by chipping it along the `axis` dimension." That is, we get 3 tensors of shape (20, 2).

So, in the image you show the tensor indices as if they had been chipped along axis=2, which is not the case. The correct picture would take the first column of each stacked tensor (in my example, 2 columns, since there are 2 stacked arrays) and build a new array of shape (20, 2); the same with all the second columns, and finally with all the third columns, resulting in a list of three tensors.
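The slicing described above can be sketched in NumPy (tf.unstack along an axis is equivalent to taking index slices along that axis; the array shape matches the example in this comment):

```python
import numpy as np

# Example tensor of shape (20, 3, 2), as in the comment above.
a = np.arange(20 * 3 * 2).reshape(20, 3, 2)

# Unstacking with axis=1: num = a.shape[1] = 3, so we get a list of
# 3 tensors, each of shape (20, 2) -- the axis-1 dimension disappears.
unstacked = [a[:, i, :] for i in range(a.shape[1])]

print(len(unstacked))      # number of tensors: 3
print(unstacked[0].shape)  # shape of each tensor: (20, 2)
```

Note that the unstacked axis is removed entirely, which is why the result is (20, 2) and not (20, 3) or (3, 2).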

I hope you understand my explanation.

Thanks for the blog post; it really helped me deepen my understanding of these notions.

Thanks for the nice tutorial. I'm experiencing the following error during the training phase:

ValueError: Dimensions must be equal, but are 32768 and 25088 for ‘MatMul’ (op: ‘MatMul’) with input shapes: [16,32768], [25088,4096].

Any help would be appreciated!
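For context, this error means the inner dimensions of the matrix product disagree: MatMul requires shapes (m, k) x (k, n), and here 32768 != 25088. A minimal NumPy illustration of the shape rule (the shapes are copied from the error message; the actual fix depends on the network's flatten/reshape step, which is not shown here):

```python
import numpy as np

# MatMul needs the inner dimensions to agree: (m, k) @ (k, n) -> (m, n).
x = np.zeros((16, 32768))   # activations: batch of 16, 32768 features
w = np.zeros((25088, 4096)) # weight matrix expecting 25088 input features

try:
    np.matmul(x, w)  # 32768 != 25088, so this is rejected
except ValueError as e:
    print("shape mismatch:", e)
```

Typically this means the tensor being fed into the dense layer was flattened to a different size than the weight matrix was built for, e.g. because of a different input image size.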


http://ataspinar.com/2018/07/05/building-recurrent-neural-networks-in-tensorflow/