Is it possible to convert a TensorFlow (Keras) model from BGR to RGB?


I have converted a Caffe model, trained on BGR data, to ONNX format and then from ONNX to TensorFlow (Keras).
So now I have a Keras model trained on BGR data. Is it possible to convert it in such a way that it will work properly with RGB data?

I've tried converting it to OpenVINO with the `--reverse_input_channels` flag and then back to TensorFlow, but openvino2tensorflow seems to work very poorly, so it didn't work. Maybe there is a simpler way?


Update 1

I've realized that I actually get a SavedModel via the Keras model, so I've updated the question.

Update 2

I've applied AndrzejO's solution. However, the model now gives much worse results than before. Am I doing something wrong?

from keras.layers import Input, Lambda
from keras.models import Model

input_shape = k_model.get_layer(index = 0).input_shape[0][1:]
inputs = Input(shape=input_shape)
lambda_layer = Lambda(lambda x: x[:,:,:,::-1])(inputs)
outputs = k_model(lambda_layer)
k_model = Model(inputs=inputs, outputs=outputs)
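A quick way to sanity-check the wrapping is to verify that the flipped model fed BGR data produces the same output as the original model fed RGB data. A minimal sketch of that idea, using a hypothetical stand-in function instead of the real Keras model:

```python
import numpy as np

# Hypothetical stand-in for a model that consumes RGB input; a weighted
# per-channel sum makes the channel order matter.
def fake_model(batch):
    weights = np.array([1.0, 2.0, 3.0])  # per-channel weights
    return (batch * weights).sum(axis=-1)

rgb = np.random.rand(1, 4, 4, 3)   # NHWC, channels last
bgr = rgb[:, :, :, ::-1]           # same image, channels reversed

# The Lambda layer's x[:, :, :, ::-1] undoes the BGR ordering, so feeding
# BGR through the flip must equal feeding RGB directly.
out_direct = fake_model(rgb)
out_flipped = fake_model(bgr[:, :, :, ::-1])
print(np.allclose(out_direct, out_flipped))  # True
```

Note that `x[:, :, :, ::-1]` assumes a channels-last (NHWC) input layout; a channels-first model would need `x[:, ::-1, :, :]` instead.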

Update 3

Regarding AndrzejO's hint, I've tested the reversed model on reversed (BGR) images and compared the results with the original model on normal (RGB) images. That's strange: the results are similar but not identical. Below is the Java code that reverses the image:

  public static byte[] getPixelsBGR(Bitmap image) {
    // Calculate how many bytes the image consists of
    int bytes = image.getByteCount();

    ByteBuffer buffer = ByteBuffer.allocate(bytes); // Create a new buffer
    image.copyPixelsToBuffer(buffer); // Move the byte data into the buffer

    byte[] pixels = buffer.array(); // Get the underlying array containing the data

    // Swap the R and B bytes of every 4-byte (RGBA) pixel in place
    for (int i = 0; i < pixels.length / 4; i++) {
      byte pom = pixels[i * 4];
      pixels[i * 4] = pixels[i * 4 + 2];
      pixels[i * 4 + 2] = pom;
    }

    return pixels;
  }

And the call site that applies the swap:

  if (!modelBGR) {
      byte[] pixels = getPixelsBGR(resizedBitmap);
      ByteBuffer pixelBuffer = ByteBuffer.wrap(pixels);
  }
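For reference, the per-pixel byte swap in the Java loop above can be expressed in a few lines of NumPy on the same flat RGBA byte layout:

```python
import numpy as np

# Two RGBA pixels as a flat byte array, mirroring the Java buffer layout.
pixels = np.array([10, 20, 30, 255, 40, 50, 60, 255], dtype=np.uint8)

# View as rows of 4 bytes (R, G, B, A) and swap bytes 0 and 2 in place,
# leaving G and A untouched -- exactly what the Java loop does.
rgba = pixels.reshape(-1, 4)
rgba[:, [0, 2]] = rgba[:, [2, 0]]

print(rgba.reshape(-1).tolist())
# [30, 20, 10, 255, 60, 50, 40, 255]
```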

Update 4

AndrzejO's solution works perfectly: it correctly reverses the channel order. The problem was that I had been subtracting per-channel means via the TFLite metadata and forgot that I also needed to reverse the order of those means. After correcting this, I get exactly the same results, which confirms that the channel reversal works perfectly.

For some reason, reversing the channel order makes inference less accurate in my case (as though the channels had already been reversed in some earlier conversion step), but that's a separate thing to investigate.
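The mean-reordering fix described above amounts to reversing the per-channel mean list along with the channels. A tiny sketch, with hypothetical mean values:

```python
# Per-channel means stored in BGR order (hypothetical values).
bgr_means = [104.0, 117.0, 123.0]

# Once the model consumes RGB input, the means must be reversed too,
# so each mean is still subtracted from its matching channel.
rgb_means = bgr_means[::-1]
print(rgb_means)  # [123.0, 117.0, 104.0]
```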


You can create a new model: first a Lambda layer that reverses the channel order, then your saved model:

from keras.layers import Input, Lambda
from keras.models import Model

input_shape = old_model.get_layer(index=0).input_shape[0][1:]
inputs = Input(shape=input_shape)
lambda_layer = Lambda(lambda x: x[:, :, :, ::-1])(inputs)
outputs = old_model(lambda_layer)
new_model = Model(inputs=inputs, outputs=outputs)

Answered By – AndrzejO

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
