Building a neural network over and over with the same parameters gives different results

Issue

I am using TensorFlow 2.3.0.

I am creating an RNN to predict stock prices. Whenever I restart my Jupyter notebook and re-run all my code, it outputs very different predictions. Why is that, when all the hyperparameters are exactly the same?

I know this is a very basic question, but can someone explain it to me, or point me to where I can read about it?

Here is my code:

import numpy as np
import tensorflow as tf
import yfinance as yf

def GetStockData(ticker_name, period, start_date, end_date):
    tickerData = yf.Ticker(ticker_name)
    df = tickerData.history(period=period, start=start_date, end=end_date)
    return df

full_nvda_df = GetStockData("NVDA", "1d", "2016-01-01", "2020-10-10")
nvda_df = full_nvda_df[["Close"]].copy()
train_df = nvda_df[:1000]
test_df = nvda_df[1000:]

train_arr = np.array(train_df)
test_arr = np.array(test_df)

# Sliding windows: 30 closing prices as features, the next price as target
X_train, y_train = [], []
for i in range(30, len(train_df)):
    X_train.append(train_arr[i-30:i])
    y_train.append(train_arr[i, 0])

X_train, y_train = np.array(X_train), np.array(y_train)



# Note: np.random.seed only seeds NumPy; it does not make Keras weight
# initialization deterministic (see the answer below)
np.random.seed(1337)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(units=60, activation='relu', return_sequences=True, input_shape=(X_train.shape[1], 1) ),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.LSTM(units=60, activation='relu', return_sequences=True),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.LSTM(units=80, activation='relu', return_sequences=True),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.LSTM(units=120, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Dense(units=1)
])

model.summary()
model.compile(loss = "mean_squared_error",
              optimizer = "adam")

Solution

UPDATE (TF >= 2.9)

Starting from TF 2.9, if you want your TF models to run deterministically, the following lines need to be added at the beginning of the program.

import tensorflow as tf

tf.keras.utils.set_random_seed(1)
tf.config.experimental.enable_op_determinism()

Important note: the first line sets the random seed for Python, NumPy, and TensorFlow. The second line makes each TensorFlow operation deterministic.

TF 2.8 AND OLDER

Without seeing your code, I can only infer that the weights of the neural network are randomly initialized. When you say that the

"configurations are all the same"

you are probably referring to the hyperparameters, which you either fix from the start or tune over a set of candidates to find the most suitable values for your particular problem.

However, the weights are randomly initialized, and this inevitably leads to a different convergence path for your network, and thus to a different local optimum (i.e. different results).
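To see why unseeded initialization changes the outcome, here is a minimal NumPy sketch (the Glorot-style bounds and layer sizes are illustrative, not taken from the question's model): two fresh initializations of the same layer produce different weight matrices, so training starts from a different point on each run.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: U(-limit, limit), the default Dense initializer in Keras
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Two "restarts" with no fixed seed -> different starting weights
w_run1 = glorot_uniform(30, 60, np.random.default_rng())
w_run2 = glorot_uniform(30, 60, np.random.default_rng())
print(np.allclose(w_run1, w_run2))  # almost surely False
```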

The only way to ensure reproducibility (as far as I am aware) is to "seed everything", like below:

import os
import random
import numpy as np
import tensorflow as tf

random.seed(seed)
np.random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
tf.random.set_seed(seed)
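For illustration, once a generator is seeded, identical code paths draw identical numbers, which is what lets two training runs start from the same weights. A quick check using only Python's and NumPy's generators (no TensorFlow needed):

```python
import random
import numpy as np

def draw(seed):
    # Re-seed both generators, then take a few samples from each
    random.seed(seed)
    np.random.seed(seed)
    return [random.random() for _ in range(3)], np.random.rand(3)

py1, np1 = draw(1337)
py2, np2 = draw(1337)
print(py1 == py2)                # True: same seed, same Python draws
print(np.array_equal(np1, np2))  # True: same seed, same NumPy draws
```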

According to this thread on GitHub (https://github.com/keras-team/keras/issues/14986), in TensorFlow >= 2.5 setting TF_DETERMINISTIC_OPS=1 is also a way to ensure reproducibility.
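One way to set that flag is from inside the script itself; a minimal sketch, assuming it runs before TensorFlow is imported so the op kernels pick it up:

```python
import os

# Must run before `import tensorflow` so deterministic kernels are selected
os.environ['TF_DETERMINISTIC_OPS'] = '1'

# import tensorflow as tf  # import TF only after the flag is set
```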

However, it is mentioned there that some GPU ops are still not reproducible, and that perfect reproducibility between CPU and GPU results may never be achieved.

Answered By – Timbus Calin

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
