What is a better approach to caching data in a Django app running on Gunicorn?


I have a Django application running on the Gunicorn app server. Gunicorn runs N workers, and each worker is a separate Python process. One of my application's services runs a database query that takes a long time (10-20 seconds).

So I decided to cache the result: I added django.core.cache and called cache.get(key); if the result is not None, the app returns the data from the cache, otherwise it calls the service and stores the data with cache.set(key, data, timeout). But with N workers this breaks down: if the first query is handled by worker 0, the app stores the result of the long-running service in worker 0's cache (process memory), but when a similar request (the request contains a paging option, but I store the whole RawDataSet in memory, so every page returns fast) is routed to worker 1, the cache, as expected, doesn't help, because worker 1 is a different process. So obviously I need a cache that can be shared by all workers.

What approach (e.g. an in-memory database, or something else) is better for solving this issue?


I solved this using the django-redis package. The main advantage of this solution is that you don't have to change any code: you still use the cache.get() and cache.set() functions from django.core.cache. You only need to add Redis-specific cache settings to your settings file, like this:

    CACHES = {
        'default': {
            'BACKEND': 'django_redis.cache.RedisCache',
            'LOCATION': 'redis://',  # defaults to localhost:6379
            'OPTIONS': {
                'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            },
            'KEY_PREFIX': 'text_analyzer',
        },
    }

Answered By – Michael Ushakov

This answer was collected from Stack Overflow and is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
