## Issue

I am trying to convert a batch normalization layer from TensorLayer 1.11.1 to TensorFlow 2, and I get different outputs from this layer during inference with the same pretrained model.

**Tensorlayer 1.11.1**

`tensorlayer.layers.BatchNormLayer(network, is_train=False, name="batch_norm")`

**Tensorflow 2.8.0**

`tf.keras.layers.BatchNormalization(trainable=False, momentum=0.9, axis=3, epsilon=1e-05, gamma_initializer=tf.random_normal_initializer(mean=1.0, stddev=0.002))(network)`

What am I missing to get the BatchNorm output to match?

## Solution

The TF1 model I had was in NPZ format.

TensorLayer saves the batch norm weights in the order:

**beta, gamma**, moving mean, moving variance.

In TF2, `tf.keras.layers.BatchNormalization` stores its weights in the order:

**gamma, beta**, moving mean, moving variance.

Swapping the beta and gamma weights when moving them from TF1 to TF2 resolves the discrepancy.
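As a minimal sketch of the reorder, assuming the four arrays have already been loaded from the NPZ checkpoint (the variable names, example values, and the `batch_norm_inference` helper are illustrative, not from the original post). After reordering, the list can be passed to the Keras layer via `bn.set_weights(keras_weights)` once the layer has been built:

```python
import numpy as np

# Illustrative per-channel parameters, standing in for the arrays
# loaded from the TensorLayer NPZ file.
beta = np.array([0.1, 0.2])
gamma = np.array([1.0, 2.0])
moving_mean = np.array([0.0, 0.5])
moving_variance = np.array([1.0, 4.0])

# TensorLayer NPZ order: beta, gamma, moving_mean, moving_variance
tl_weights = [beta, gamma, moving_mean, moving_variance]

# Keras BatchNormalization expects: gamma, beta, moving_mean, moving_variance,
# so swap the first two entries before calling set_weights().
keras_weights = [tl_weights[1], tl_weights[0], tl_weights[2], tl_weights[3]]

def batch_norm_inference(x, gamma, beta, mean, var, eps=1e-5):
    """Inference-mode batch norm: normalize with frozen statistics,
    then scale by gamma and shift by beta."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

Passing the weights in the wrong order silently scales by beta and shifts by gamma, which is why the layer outputs differ without any error being raised.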

Answered By – ro5423

**This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.**