
Celery Flower Tutorial


Celery Flower is a tool for monitoring and administering Celery clusters.

When you run Celery background tasks you want some observability into how they perform: how long a task takes to run, how to spot failures, and how to debug their behaviour.

In this tutorial we will cover running Flower locally as a Docker Compose service and in staging/production.

We will also see that it works great with both RabbitMQ and Redis as the broker, and how to connect to Redis as the result backend.

Celery Flower in Docker

For local development, assume we are running our Django project in Docker Compose with the following list of services in docker-compose.yml:


version: '3.3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=djangito
      - RABBITMQ_DEFAULT_PASS=djangito
      - RABBITMQ_DEFAULT_VHOST=djangito
    ports:
      - "21001:5672"
      - "21002:15672"
  db:
    image: postgres
    environment:
      - POSTGRES_USER=djangito
      - POSTGRES_PASSWORD=djangito
      - POSTGRES_DB=djangito
    ports:
      - "21003:5432"
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8060
    env_file:
      - .env
    ports:
      - "127.0.0.1:8060:8060"
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

  tailwind:
    build: .
    restart: always
    command: python manage.py tailwind start
    env_file:
      - .env
    ports:
      - "127.0.0.1:8383:8383"
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

  celery:
    build: .
    restart: always
    command: celery -A project.celeryapp:app worker -Q default -n djangitos.%%h --loglevel=INFO --max-memory-per-child=512000 --concurrency=1
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

  celery-beat:
    build: .
    restart: always
    command: celery -A project.celeryapp:app beat -S redbeat.RedBeatScheduler --loglevel=DEBUG --pidfile /tmp/celerybeat.pid
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

As you can see, we have two Celery services: celery and celery-beat. Beat is responsible for sending scheduled tasks, while celery is the worker that actually executes them.

You can also see that both of them depend on the db service (our PostgreSQL database), redis (used as the result backend) and rabbitmq (the broker).

Let's add our flower service here.


  celery-flower:
    build: .
    restart: always
    command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090
    ports:
      - "127.0.0.1:9090:9090"
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

This instructs Docker Compose to run Flower on port 9090 and publish the port so we can access it. Binding to 127.0.0.1 keeps the port reachable only from the local machine.

Run this command to start your project:

docker-compose up

When all services are up you can open http://127.0.0.1:9090/ and see the Celery Flower interface.

Celery Flower Web Interface

Celery Flower Authentication

Right now our Flower instance can be accessed by anyone, without a password.

The problem is that anyone can come and manipulate your Celery cluster. This can disrupt the work of your project.

Even worse, anyone can see the data sent as arguments to the tasks and the results of our Celery tasks. Those can contain highly sensitive data, and we want to prevent it from leaking.

Go back to docker-compose.yml and edit the command of celery-flower: we will add the --basic_auth= option.

The whole service will look like this:


  celery-flower:
    build: .
    restart: always
    command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090 --basic_auth=djangitos:testpassword
    ports:
      - "127.0.0.1:9090:9090"
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq

Our login will be djangitos and the password testpassword.
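Under the hood, HTTP Basic Auth simply sends the username and password base64-encoded in an Authorization header with every request, which is why serving Flower over HTTPS matters. A quick stdlib-only illustration of what the browser sends (Flower performs this check for you; the function name here is just for illustration):

```python
import base64


def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value a browser sends for Basic Auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"


header = basic_auth_header("djangitos", "testpassword")
print(header)  # "Basic " followed by base64 of "djangitos:testpassword"
```

Note that base64 is an encoding, not encryption: anyone who can read the traffic can decode the credentials, so Basic Auth is only safe over HTTPS. Flower's --basic_auth option also accepts several comma-separated user:password pairs if you need more than one login.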

Go back to terminal where you have docker-compose running and press CTRL-C to stop it. Wait for it to stop all containers and start docker-compose up again.

When all services are up – open the Celery Flower web interface again. You will see the login and password prompt.

Celery Flower Authentication

Celery Flower in Production

While it is easy to spin up Flower in local development, running Flower in a production environment requires additional effort.

One important requirement is that Flower should not only sit behind password authentication, but also be accessible only via HTTPS.

Appliku makes this easy, since every app that has a web worker gets an SSL certificate and is accessible only via HTTPS.

But every app can have only one web worker, so in order to run Celery Flower we need to create another app that runs only Flower.

Fork this GitHub repo https://github.com/appliku/flowermonitor

In Appliku dashboard create an application with that forked repository.

In the application settings, specify the following environment variables:

  • Add environment variable BROKER_URL pointing to your RabbitMQ or Redis instance
  • Add environment variable RESULT_BACKEND pointing to your Redis instance
  • Add environment variable FLOWER_BASIC_AUTH in the format USERNAME:PASSWORD (login and password, separated by a colon). This will be used to authenticate to the Celery Flower web interface
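As an illustration, the variables might look like this (hypothetical hosts and credentials; substitute the addresses of your actual RabbitMQ and Redis instances):

```shell
# Hypothetical example values -- replace hosts and credentials with your own
BROKER_URL=amqp://djangito:djangito@rabbitmq.example.com:5672/djangito
RESULT_BACKEND=redis://redis.example.com:6379/0
FLOWER_BASIC_AUTH=admin:use-a-long-random-password
```

Unlike the local Compose setup, where service names resolve inside the Compose network, these must point to broker and backend instances reachable from the Flower app's server.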

On the Processes tab enable the web worker.

Hit Deploy.

When deployment is finished – click the "Open App" link in the navigation and you will see the password prompt. Use the login and password from the FLOWER_BASIC_AUTH environment variable.
