tf.reshape(tensor, [-1]) vs. tf.reshape(tensor, -1)


What is the difference between these two?
1- tf.reshape(tensor, [-1])
2- tf.reshape(tensor, -1)

I cannot find any difference between these two, but when I use -1 without brackets, an error occurs when the function is mapped over a TensorSliceDataset.
Here is the simplified version of the code:

def reshapeME(tensor):
    # Flatten the tensor to rank 1
    reshaped = tf.reshape(tensor, -1)
    return reshaped

new_y_test =

and here is the Error:

 ValueError: Shape must be rank 1 but is rank 0 for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](one_hot, Reshape/shape)' with input shapes: [6], [].

If I add the brackets, there is no error. There is also no error when the function is called directly with a tensor (i.e. in eager execution) instead of being mapped.
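A minimal sketch of the eager-mode behavior described above (the tensor values are illustrative, chosen to mirror the [6] input shape in the error message): called directly, both spellings succeed and give the same result.

```python
import tensorflow as tf

t = tf.one_hot([1], depth=6)[0]  # a rank-1 tensor with 6 elements

# In eager execution both calls succeed and produce the same shape:
a = tf.reshape(t, [-1])
b = tf.reshape(t, -1)

print(a.shape, b.shape)  # (6,) (6,)
```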


tf.reshape expects a tensor or tensor-like value as the shape argument in graph mode:

A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.

So, a plain Python scalar will not work in this case. The map function of a tf.data.Dataset is always executed in graph mode:

Note that irrespective of the context in which map_func is defined
(eager vs. graph), tf.data traces the function and executes it as a
graph.
Answered By – AloneTogether

This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0
