Concatenate multiple from_tensor_slices() Datasets


I am trying to use a utility function from TensorFlow Recommenders (TFRS) to turn a TF Dataset into a listwise TF Dataset.


Basically, the function (sample_listwise) makes X lists with Y random samples in each list for every user Z in the dataset. My problem is that when I pass in a TF Dataset with a few million records, the program crashes.

tensor_slices = {"user_id": [], "movie_title": [], "user_rating": []}

It appears that, eventually, the tensor_slices dict inside the function fills up with so much data that the program runs out of memory and crashes.
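As a rough back-of-the-envelope illustration (the figures below are hypothetical, not measured from my data), the dict grows with users × lists per user × list length:

# Hypothetical sizing, purely to illustrate the growth.
num_users = 1_000_000
num_list_per_user = 50
num_examples_per_list = 5

sampled_lists = num_users * num_list_per_user                   # 50,000,000 rows
values_held = sampled_lists * (1 + 2 * num_examples_per_list)   # ~550,000,000 ids/titles/ratings kept in Python lists
print(sampled_lists, values_held)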

I modified the original function by turning each user's sampled lists into a TF Dataset via the from_tensor_slices() method at the end of processing that user. This keeps the tensor_slices dict from imploding. The program loops through each user and concatenates each from_tensor_slices() dataset onto the previous one before returning the full TF Dataset.

import numpy as np
import tensorflow as tf
from collections import defaultdict
from typing import Optional

def sample_listwise(
    rating_dataset: tf.data.Dataset,
    num_list_per_user: int = 50,
    num_examples_per_list: int = 5,
    seed: Optional[int] = None,
) -> tf.data.Dataset:

    random_state = np.random.RandomState(seed)

    # _create_feature_dict and _sample_list are the helpers from the
    # TFRS listwise-ranking example.
    example_lists_by_user = defaultdict(_create_feature_dict)

    movie_title_vocab = set()
    for example in rating_dataset:
        user_id = example["user_id"].numpy()
        example_lists_by_user[user_id]["movie_title"].append(
            example["movie_title"])
        example_lists_by_user[user_id]["user_rating"].append(
            example["user_rating"])
        movie_title_vocab.add(example["movie_title"].numpy())

    i = 0
    for user_id, feature_lists in example_lists_by_user.items():
        # fresh tensor_slices dict for this user only
        tensor_slices = {"user_id": [], "movie_title": [], "user_rating": []}
        for _ in range(num_list_per_user):

            # Drop the user if they don't have enough ratings.
            if len(feature_lists["movie_title"]) < num_examples_per_list:
                continue

            sampled_movie_titles, sampled_ratings = _sample_list(
                feature_lists,
                num_examples_per_list,
                random_state=random_state,
            )
            tensor_slices["user_id"].append(user_id)
            tensor_slices["movie_title"].append(sampled_movie_titles)
            tensor_slices["user_rating"].append(sampled_ratings)

        # check if all lists for a user are stored in tensor_slices
        if len(tensor_slices["user_id"]) == num_list_per_user:

            # turn this user's lists into their own small dataset
            tmp_tf_dataset = tf.data.Dataset.from_tensor_slices(tensor_slices)
            # concat tmp_tf_dataset to the main tf dataset
            if i == 0:
                tf_dataset = tmp_tf_dataset
            else:
                tf_dataset = tf_dataset.concatenate(tmp_tf_dataset)

            i += 1

    return tf_dataset

I can pass the result of this function to a model if I keep the amount of data very small (250k records). If I increase the amount of data to process, the model eventually fails with a Segmentation Fault error.

So my question is: how do I properly concatenate all this data together to form one coherent dataset, so my program won't crash and I can sidestep the tensor_slices dict implosion?


The original function first creates example_lists_by_user from the input dataset, then builds the tensor_slices object, and finally converts it into another tf.data.Dataset. It takes a dataset and returns a dataset.
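For reference, the original utility is shaped roughly like this (a condensed sketch; the sampling details from the TFRS listwise-ranking example are elided):

def sample_listwise(rating_dataset, num_list_per_user=50,
                    num_examples_per_list=5, seed=None):
    # 1. Group each user's movie titles and ratings.
    example_lists_by_user = defaultdict(
        lambda: {"movie_title": [], "user_rating": []})
    for example in rating_dataset:
        user_id = example["user_id"].numpy()
        example_lists_by_user[user_id]["movie_title"].append(example["movie_title"])
        example_lists_by_user[user_id]["user_rating"].append(example["user_rating"])

    # 2. Accumulate every sampled list for every user in one big dict.
    tensor_slices = {"user_id": [], "movie_title": [], "user_rating": []}
    for user_id, feature_lists in example_lists_by_user.items():
        for _ in range(num_list_per_user):
            ...  # _sample_list(...) and append the results to tensor_slices

    # 3. Materialize the whole dict as a single dataset.
    return tf.data.Dataset.from_tensor_slices(tensor_slices)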

If the problem arises from the size of tensor_slices (and not from example_lists_by_user), there is a way to avoid creating it altogether, using a generator function and tf.data.Dataset.from_generator().

Specifically, you could have something like:

def sample_listwise(...):
  # generate example_lists_by_user...

  def example_generator():
    for user_id, feature_lists in example_lists_by_user.items():
      for _ in range(num_list_per_user):
        movie_titles, ratings = _sample_list(...)
        yield {'user_id': user_id, 'movie_title': movie_titles, 'user_rating': ratings}

  # create a dataset from the generator function above
  return tf.data.Dataset.from_generator(
      example_generator,
      output_signature={
          'user_id': tf.TensorSpec([], tf.string),
          'movie_title': tf.TensorSpec([num_examples_per_list], tf.string),
          # use tf.float32 here instead if your ratings are floats
          'user_rating': tf.TensorSpec([num_examples_per_list], tf.string),
      })
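Assuming the elided parts are filled in as in the original function, usage stays the same as before; a quick sketch (the variable names and parameter values here are just examples):

# `ratings` is whatever tf.data.Dataset of {"user_id", "movie_title", "user_rating"}
# examples you were already feeding into sample_listwise.
listwise_ds = sample_listwise(
    ratings,
    num_list_per_user=50,
    num_examples_per_list=5,
    seed=42,
)

# from_generator() produces elements lazily, one sampled list at a time,
# so the full tensor_slices dict is never materialized in memory.
train = listwise_ds.shuffle(10_000).batch(256).prefetch(tf.data.AUTOTUNE)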

Answered By – vasiliykarasev

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
