How to apply imgaug augmentations to a tf.data.Dataset in TensorFlow 2.0

Issue

I have an application with an input pipeline that uses a tf.data.Dataset of images and labels. Now I would like to add augmentations, and I'm trying to use the imgaug library for that purpose, but I do not know how to wire it in. All the examples I have found use Keras' ImageDataGenerator or Sequence.

In code, given a sequential augmenter like this:

self.augmenter = iaa.Sequential([
    iaa.Fliplr(config.sometimes),
    iaa.Crop(percent=config.crop_percent),
    ...
], random_order=config.random_order)

I am trying to apply that augmenter to batches of images in my dataset, without success. It seems that I cannot evaluate tensors, since my augmentations run inside a map function.

def augment_dataset(self, dataset):
    dataset = dataset.map(self.augment_fn())
    return dataset

def augment_fn(self):
    def augment(images, labels):
        # Fails: inside Dataset.map, `images` is a symbolic Tensor,
        # not a TensorProto, so tf.make_ndarray cannot convert it
        img_array = tf.make_ndarray(images)
        images = self.augmenter.augment_images(img_array)
        return images, labels
    return augment

For example, if I try to use tf.make_ndarray I get: AttributeError: 'Tensor' object has no attribute 'tensor_shape'.

Is this due to Dataset.map not using eager mode? Any ideas on how to approach this?

Update #1

I tried the suggested tf.numpy_function, as follows:

def augment_fn(self):
    def augment(images, labels):
        images = tf.numpy_function(self.augmenter.augment_images,
                                   [images],
                                   images.dtype)
        return images, labels
    return augment

However, the resulting images have an unknown shape, which results in other errors later on. How can I keep the original shape of the images? Before applying the augmentation function my batch of images has shape (batch_size, None, None, 1), but afterwards the shape is &lt;unknown&gt;.
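For reference, this shape loss happens with any function wrapped in tf.numpy_function, because TensorFlow cannot infer the output shape of arbitrary Python code at trace time. A minimal sketch, using a hypothetical identity function in place of imgaug, reproduces the problem and shows one common workaround: reattaching the known static shape with set_shape.

```python
import numpy as np
import tensorflow as tf

def identity(batch):
    # Stand-in for an arbitrary NumPy-based augmenter.
    return batch

def augment(images, labels):
    out = tf.numpy_function(identity, [images], images.dtype)
    # At trace time, numpy_function's output has shape <unknown>;
    # set_shape reattaches the static shape of the input tensor.
    out.set_shape(images.get_shape())
    return out, labels

images = np.zeros((2, 8, 8, 1), dtype=np.float32)
labels = np.array([0, 1])
ds = tf.data.Dataset.from_tensors((images, labels)).map(augment)
print(ds.element_spec[0].shape)  # (2, 8, 8, 1)
```

Note that set_shape only works when the static shape is known to the caller; for batches with genuinely dynamic dimensions, the tf.reshape approach below applies.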

Update #2

I solved the issue with the unknown shape by first finding the dynamic (true) shape of the images and then reshaping the result of applying the augmentation.

def augment_fn(self):
    def augment(images, labels):
        img_dtype = images.dtype
        img_shape = tf.shape(images)
        images = tf.numpy_function(self.augmenter.augment_images,
                                   [images],
                                   img_dtype)
        images = tf.reshape(images, shape=img_shape)
        return images, labels
    return augment
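This pattern can be exercised end-to-end. A minimal runnable sketch, substituting a plain NumPy left-right flip for imgaug's augment_images (which has the same batch-in, batch-out signature):

```python
import numpy as np
import tensorflow as tf

# Placeholder "augmenter": flips each image left-right in NumPy.
# Stands in for imgaug's augmenter.augment_images.
def augment_images(batch):
    return np.flip(batch, axis=2)

def augment(images, labels):
    img_dtype = images.dtype
    img_shape = tf.shape(images)  # dynamic shape, captured before the call
    images = tf.numpy_function(augment_images, [images], img_dtype)
    # numpy_function loses all shape information; restore the dynamic shape.
    images = tf.reshape(images, shape=img_shape)
    return images, labels

images = np.arange(2 * 4 * 4 * 1, dtype=np.float32).reshape(2, 4, 4, 1)
labels = np.array([0, 1])
ds = tf.data.Dataset.from_tensors((images, labels)).map(augment)
out_images, out_labels = next(iter(ds))
print(out_images.shape)  # (2, 4, 4, 1)
```

The reshape restores the dynamic shape at runtime, which is enough for downstream ops that only need the rank and per-batch dimensions.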

Solution

Is this due to not using eager mode? I thought eager mode was the default in TF 2.0. Any ideas on how to approach this?

Yes, Dataset pre-processing is not executed in eager mode. This is, I assume, deliberate and certainly makes sense if you consider that Datasets can represent arbitrarily large (even infinite) streams of data.

Assuming that it is not possible/practical for you to translate the augmentation you are doing into TensorFlow operations (which would be the first choice!), then you can use tf.numpy_function to execute arbitrary Python code (it is a replacement for the now-deprecated tf.py_func).
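To illustrate the mechanism, a minimal sketch of tf.numpy_function wrapping an arbitrary Python callable (the function name times_two is made up for the example): the callable receives its inputs as NumPy arrays, and the output dtype must be declared up front because TensorFlow cannot infer it.

```python
import tensorflow as tf

def times_two(x):
    # Arbitrary Python/NumPy code, executed outside the graph.
    return x * 2

@tf.function
def graph_fn(x):
    # tf.numpy_function wraps the Python callable as a graph op;
    # the third argument declares the output dtype.
    return tf.numpy_function(times_two, [x], x.dtype)

result = graph_fn(tf.constant([1.0, 2.0, 3.0]))
print(result.numpy())  # [2. 4. 6.]
```

The same wrapping works inside Dataset.map, which traces its function as a graph even in TF 2.0, with the caveat about lost static shapes discussed in the updates above.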

Answered By – Stewart_R

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
