Conv2D lost a dimension from the tensor, resulting in an incompatible-dimension error


I’m trying to build an LSTM+CNN hybrid for my college project.

Basically, I have two kinds of data: a grid, which is a 10×12 matrix containing 12 technical indicators for a stock’s price over the last 10 days, and the price itself.

The idea is to process the grid with a CNN, then use the output as the LSTM’s input along with the price.

Here’s my code:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, LSTM

def model_robo():
  grid = tf.keras.Input(shape=(10, 12, 1), dtype=tf.float32)  # grid input
  # processing with CNN
  cnn_result = tf.keras.layers.TimeDistributed(Conv2D(1, kernel_size=(3, 3), data_format="channels_first"))(grid)  # here's the error
  # processing with LSTM
  result = LSTM(50, name='LSTM')(cnn_result)
  return tf.keras.Model(inputs=grid, outputs=result)

But when I try to call the model, it gives:

ValueError: Input 0 of layer conv2d_1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (None, 12, 1)

which stems from the TimeDistributed(Conv2D(...)) call marked "here's the error" in the code above.
I’ve also tried putting some other code above the error line, with no luck.

I’ve checked the grid’s dimensions with print(grid.shape) and it gives me (None, 10, 12, 1), which I’m sure is the correct shape, but the code deletes the second dimension for some reason.
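The "deleted" dimension is the 10-day axis: TimeDistributed treats axis 1 of the (None, 10, 12, 1) input as a time axis and applies the inner Conv2D to each (12, 1) slice separately, so the inner layer only ever sees a 3-D tensor. A minimal sketch of the behavior (assuming TF 2.x / Keras):

```python
import tensorflow as tf

grid = tf.keras.Input(shape=(10, 12, 1))

# TimeDistributed hands Conv2D one (None, 12, 1) slice per "timestep";
# that is 3-D including the batch axis, but Conv2D needs 4-D input.
try:
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(1, (3, 3)))(grid)
except ValueError as e:
    print("raises:", e)

# Applying Conv2D directly keeps all four dimensions:
out = tf.keras.layers.Conv2D(1, (3, 3))(grid)
print(out.shape)  # (None, 8, 10, 1) with the default "valid" padding
```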

Does anyone know how to fix this? Let me know if you need additional information, and thank you.


After reviewing what each function does again, I realized I shouldn’t use TimeDistributed here: it treats the first non-batch axis (the 10 days) as a time axis and applies Conv2D to each (12, 1) slice on its own, which is why Conv2D only received a 3-D input. Each sample already represents one period of time, so the grid should be convolved as a whole.

Changing the line to cnn_result = tf.keras.layers.Conv2D(1, kernel_size=(3, 3))(grid) fixes the problem.
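A minimal sketch of the corrected model, wiring the Conv2D output into the LSTM as the question describes. The price input, its shape, the Reshape step, and the Dense head are my assumptions, not part of the original code:

```python
import tensorflow as tf

def model_robo():
    grid = tf.keras.Input(shape=(10, 12, 1), dtype=tf.float32)  # 10 days x 12 indicators
    price = tf.keras.Input(shape=(1,), dtype=tf.float32)        # hypothetical price input

    # Conv2D applied directly: the whole grid is treated as one "image"
    cnn_result = tf.keras.layers.Conv2D(1, kernel_size=(3, 3))(grid)  # (None, 8, 10, 1)

    # Drop the channel axis so the LSTM sees a (timesteps, features) sequence
    seq = tf.keras.layers.Reshape((8, 10))(cnn_result)

    lstm_out = tf.keras.layers.LSTM(50, name='LSTM')(seq)       # (None, 50)

    # Merge the LSTM features with the price and predict a single value
    merged = tf.keras.layers.Concatenate()([lstm_out, price])
    output = tf.keras.layers.Dense(1)(merged)

    return tf.keras.Model(inputs=[grid, price], outputs=output)

model = model_robo()
model.summary()
```

How the price should actually be combined with the CNN features depends on the project; concatenating it after the LSTM is just one simple option.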

Answered By – Ikhwan Nuttaqwa

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
