TFIDF vector into LSTM model


I am trying to feed my TF-IDF vectors into an LSTM model. The vectorizer is defined as:

TfidfVectorizer(ngram_range=(1, 2), use_idf=True, analyzer='word', max_features=5000)

Here are the vector shapes:
train_vector.shape = (22895, 5000)
test_vector.shape = (5724, 5000)
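For context, here is a minimal sketch of how such a matrix is typically produced (the toy corpus and variable names are assumptions, not the asker's real data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus standing in for the real training/test text
train_texts = ["good movie", "bad movie", "great plot twist"]
test_texts = ["bad plot"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), use_idf=True,
                             analyzer='word', max_features=5000)

# Fit on the training text only; transform the test text with the same vocabulary
train_vector = vectorizer.fit_transform(train_texts)
test_vector = vectorizer.transform(test_texts)

print(train_vector.shape)  # (3, n_ngrams) -- a 2-D matrix, which is what triggers the ndim error below
```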

I am defining a model like below:

model = models.Sequential()

model.add(layers.LSTM(64, input_shape=(5000, 1), activation='relu'))

model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

Other parameters:

model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_vector, y_train, validation_data=(test_vector, y_test), epochs=10, batch_size=1024)

I am using TensorFlow.

I am getting this error:

ValueError: Input 0 of layer sequential_2 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 5000)

I have tried reshaping the arrays, but the error persists. I know an LSTM needs a 3-D input, so how can I reshape my arrays so they can be fed into the LSTM?


To add a new dimension to your training and test data, you can try:

train_vector = train_vector[..., None]  # or tf.newaxis instead of None
test_vector = test_vector[..., None]    # or tf.newaxis instead of None

or, equivalently:

train_vector = tf.expand_dims(train_vector, axis=-1)
test_vector = tf.expand_dims(test_vector, axis=-1)
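One caveat worth noting (my addition, not part of the original answer): TfidfVectorizer returns a scipy sparse matrix, and neither `[..., None]` indexing nor tf.expand_dims accepts that format directly, so you may need to densify first. A small sketch with a stand-in matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Stand-in for train_vector; TfidfVectorizer also returns a scipy sparse matrix
sparse_vector = csr_matrix(np.random.rand(4, 10))

dense = sparse_vector.toarray()   # densify first: (4, 10)
reshaped = dense[..., None]       # add the trailing axis: (4, 10, 1)
print(reshaped.shape)  # (4, 10, 1)
```

With the real (22895, 5000) matrix, be aware that `.toarray()` materializes the full dense array in memory.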

Also note that with a single node in the output layer and a sigmoid activation, you would usually pair it with the binary_crossentropy loss rather than sparse_categorical_crossentropy, which is meant for more than two classes.
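Putting both fixes together, here is a minimal end-to-end sketch. The data is random stand-in data, and the feature count is shrunk from 5000 to 100 to keep it light; these are assumptions for illustration only:

```python
import numpy as np
from tensorflow.keras import layers, models

# Random stand-ins for the real TF-IDF matrix and binary labels
train_vector = np.random.rand(32, 100).astype('float32')
y_train = np.random.randint(0, 2, size=(32,))

# Add the trailing axis: the LSTM reads 100 timesteps of 1 feature each
train_vector = train_vector[..., None]          # (32, 100, 1)

model = models.Sequential([
    layers.Input(shape=(100, 1)),
    layers.LSTM(64, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# One sigmoid node -> binary_crossentropy, not sparse_categorical_crossentropy
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_vector, y_train, epochs=1, batch_size=16, verbose=0)

print(model.predict(train_vector[:2], verbose=0).shape)  # (2, 1)
```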

Answered By – AloneTogether

This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0
