How to use 6×6 filter size in a 2D Convolutional Neural Network without causing negative dimensions?


I am doing classification with a 2D CNN. My data consists of samples of shape (3788, 6, 1) (rows, columns, channels).

The 6 columns in each tensor represent X, Y and Z values of an accelerometer sensor and a gyroscope sensor, respectively. My aim is to predict which of 11 possible movements is performed in a sample, based on the sensor data.

From a logical standpoint, it would make sense to me to make the filter consider the X, Y and Z values of both sensors all together in each stride, since all 6 values combined are key to defining the movement.
This would leave me with a filter of size (?, 6). Since filters are mostly square, I tried a filter of size (6, 6). This raises an error, because the filters shrink my data to negative dimensions.
A filter size of (2, 2) works, but as I have just described, it does not make logical sense to me since it only considers two column values of two rows at a time, and thereby only considers a fraction of the entire movement at a given point in time.


from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Dense, MaxPooling2D
from tensorflow.keras.optimizers import Adam

model = keras.Sequential()
model.add(Conv2D(filters=32, kernel_size=(6, 6), strides=1, activation='relu', input_shape=(3788, 6, 1)))
model.add(MaxPooling2D(pool_size=(6, 6)))
model.add(Conv2D(filters=64, kernel_size=(6, 6), strides=1, activation='relu'))
model.add(Dense(11, activation='softmax'))

model.compile(optimizer=Adam(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), verbose=1)


Negative dimension size caused by subtracting 6 from 1 for '{{node max_pooling2d_7/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", ksize=[1, 6, 6, 1], padding="VALID", strides=[1, 6, 6, 1]](conv2d_13/Identity)' with input shapes: [?,3783,1,32].
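For reference, the shapes in that error follow from the usual VALID-padding size formula, out = floor((in − k) / s) + 1. A quick sketch (not part of the model code) reproduces them:

```python
# VALID-padding output size: out = floor((in - k) / s) + 1
def conv_out(size, kernel, stride=1):
    return (size - kernel) // stride + 1

h, w = 3788, 6
h, w = conv_out(h, 6), conv_out(w, 6)        # Conv2D(kernel_size=(6, 6))
print(h, w)                                  # 3783 1
h, w = conv_out(h, 6, 6), conv_out(w, 6, 6)  # MaxPooling2D(pool_size=(6, 6))
print(h, w)                                  # 630 0 -> width <= 0, hence the error
```

The first (6, 6) convolution already reduces the 6-wide column axis to 1, so the following (6, 6) pooling has nothing left to subtract from.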

I have 3 questions:

  1. Is it even possible to use a 6×6 filter for my data? Changing filter sizes and/or leaving out the Pooling layer did not work.
  2. I must admit that I still do not completely understand how the shape of my data changes from one layer to the next. Can you recommend a clear, example-driven resource on this?
  3. Would a 1×6 filter also be an option, even though it is not square? This would ensure that one data point (composed of the X, Y and Z values of both sensors) is considered in each stride.


For using 6×6 convolutions, try "same" padding in all the Conv2D layers: add a padding='same' argument to both of them.
For your second question, try simulating this behavior on this website.
Not so sure about your third question.
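A minimal sketch of that fix, reusing the layer sizes from the question (the Flatten layer is added here so the Dense head receives a flat vector; the original snippet omits it):

```python
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

model = keras.Sequential([
    keras.Input(shape=(3788, 6, 1)),
    # padding='same' keeps the 6-wide column axis at 6, so a (6, 6) kernel fits
    Conv2D(32, kernel_size=(6, 6), padding='same', activation='relu'),
    MaxPooling2D(pool_size=(6, 6)),   # -> (631, 1, 32)
    Conv2D(64, kernel_size=(6, 6), padding='same', activation='relu'),
    Flatten(),
    Dense(11, activation='softmax'),
])
model.summary()
```

With 'same' padding the convolutions no longer shrink the width, so only the pooling reduces it (from 6 to 1), and no dimension goes negative.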
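One option the answer leaves open: a (1, 6) kernel would span all six sensor values of a single time step per stride, and since it never reduces the width below 1, the negative-dimension problem disappears. A hedged sketch, with the pooling and second kernel sizes chosen purely for illustration:

```python
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

model = keras.Sequential([
    keras.Input(shape=(3788, 6, 1)),
    Conv2D(32, kernel_size=(1, 6), activation='relu'),  # -> (3788, 1, 32)
    MaxPooling2D(pool_size=(6, 1)),                     # pool along time only -> (631, 1, 32)
    Conv2D(64, kernel_size=(6, 1), activation='relu'),  # -> (626, 1, 64)
    Flatten(),
    Dense(11, activation='softmax'),
])
```

After the first layer all remaining convolution and pooling happens along the time axis only, which matches the intuition in the question that each time step's six values belong together.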

Answered By – Hetarth Chopra

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
