Celery Flower is a web-based monitoring tool for Celery, a distributed task queue for Python. It provides a visual interface for monitoring and managing Celery clusters, allowing users to view task progress, task history, and worker status in real-time.
Benefits of Celery Flower:
- Real-time monitoring: Celery Flower provides real-time monitoring of Celery clusters, allowing users to track the progress of tasks as they are executed.
- Task history: Celery Flower keeps track of all tasks that have been executed, providing users with a history of completed tasks.
- Worker status: Celery Flower allows users to monitor the status of individual workers within a cluster, providing valuable insight into the health of the system.
- Scalability: Celery Flower is highly scalable, making it suitable for use in large-scale distributed systems.
- Customization: Celery Flower is highly customizable, allowing users to configure it to meet their specific monitoring needs.
Celery Flower is a powerful tool for monitoring and managing Celery clusters, helping ensure the smooth operation of distributed systems.
When you run Celery background tasks, you want observability into how they perform: how long a task takes to run, how to spot failures, and how to debug their behaviour.
In this tutorial we will cover running Flower locally as a Docker Compose service and in staging/production.
We will also see that it works great with both RabbitMQ and Redis as the broker, and how to connect to Redis as the result backend.
You can also check out our Django Celery Tutorial, and read about Celery Shared Task and Celery rate limiting.
## Celery Flower in Docker
For local development, assume we are running our Django project in Docker Compose with the following list of services in `docker-compose.yml`:
```yaml
version: '3.3'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=djangito
      - RABBITMQ_DEFAULT_PASS=djangito
      - RABBITMQ_DEFAULT_VHOST=djangito
    ports:
      - "21001:5672"
      - "21002:15672"
  db:
    image: postgres
    environment:
      - POSTGRES_USER=djangito
      - POSTGRES_PASSWORD=djangito
      - POSTGRES_DB=djangito
    ports:
      - "21003:5432"
  web:
    build: .
    restart: always
    command: python manage.py runserver 0.0.0.0:8060
    env_file:
      - .env
    ports:
      - "127.0.0.1:8060:8060"
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
  tailwind:
    build: .
    restart: always
    command: python manage.py tailwind start
    env_file:
      - .env
    ports:
      - "127.0.0.1:8383:8383"
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
  celery:
    build: .
    restart: always
    command: celery -A project.celeryapp:app worker -Q default -n djangitos.%%h --loglevel=INFO --max-memory-per-child=512000 --concurrency=1
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
  celery-beat:
    build: .
    restart: always
    command: celery -A project.celeryapp:app beat -S redbeat.RedBeatScheduler --loglevel=DEBUG --pidfile /tmp/celerybeat.pid
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
```
As you can see, we have two Celery services: `celery` and `celery-beat`. Beat is needed for sending scheduled tasks, while `celery` is the worker that actually executes them. You can also see that both of them depend on the `db` service (our PostgreSQL database), on `redis` (used as the result backend) and on `rabbitmq` (the broker).
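For reference, the connection URLs implied by these services look like this from inside the Compose network. Note that the hostnames are the Compose service names and the ports are the in-network ones (5672 and 6379), not the ones published to the host. The variable names below are illustrative; your settings module may use different ones:

```python
# Sketch: broker and result-backend URLs matching the compose file above.
# Credentials come from the RABBITMQ_DEFAULT_* environment variables.
RABBITMQ_USER = "djangito"
RABBITMQ_PASS = "djangito"
RABBITMQ_VHOST = "djangito"

CELERY_BROKER_URL = (
    f"amqp://{RABBITMQ_USER}:{RABBITMQ_PASS}@rabbitmq:5672/{RABBITMQ_VHOST}"
)
CELERY_RESULT_BACKEND = "redis://redis:6379/0"
```

Flower uses the same broker and result-backend settings as the worker, which is why it can be pointed at the same `project.celeryapp:app` module.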
Let's add our `flower` service here.
```yaml
  celery-flower:
    build: .
    restart: always
    command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090
    ports:
      - "127.0.0.1:9090:9090"
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
```
This instructs Docker Compose to run Flower on port 9090 and publish the port so we can access it.
Run this command to start your project:
```shell
docker-compose up
```
When all services are up, you can open http://127.0.0.1:9090/ and see the Celery Flower interface.
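If you want to check from a script that Flower came up, a minimal smoke test can poll the web UI. This is a sketch that is not part of the tutorial's stack; the `flower_is_up` helper name and the `opener` parameter are our own:

```python
import urllib.request
import urllib.error


def flower_is_up(url="http://127.0.0.1:9090/", opener=urllib.request.urlopen):
    """Return True if the Flower web UI answers with HTTP 200."""
    try:
        with opener(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # URLError subclasses OSError; covers refused connections
        return False
```

Calling `flower_is_up()` while the Compose stack is running should return `True`; before the service is ready it returns `False` instead of raising.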
## Celery Flower Authentication
Right now our Flower can be accessed by anyone, without a password.
The problem is that anyone can come and manipulate your Celery cluster, which can disrupt the work of your project.
Even worse, anyone can see the data sent as task arguments and the results of our Celery tasks. Those can contain highly sensitive data, and we want to prevent it from leaking.
Go back to `docker-compose.yml` and edit the `command` of `celery-flower`. We will add the `--basic_auth=` option.
The whole service will look like this:
```yaml
  celery-flower:
    build: .
    restart: always
    command: celery -A project.celeryapp:app flower --loglevel=DEBUG --port=9090 --basic_auth=djangitos:testpassword
    ports:
      - "127.0.0.1:9090:9090"
    env_file:
      - .env
    volumes:
      - .:/code
    links:
      - db
      - redis
      - rabbitmq
    depends_on:
      - db
      - redis
      - rabbitmq
```
Our login will be `djangitos` and the password `testpassword`.
Go back to the terminal where you have `docker-compose` running and press CTRL-C to stop it. Wait for it to stop all containers, then run `docker-compose up` again.
When all services are up, open the Celery Flower web interface again. You will see the login and password prompt.
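Once basic auth is enabled, programmatic clients also need to send credentials. Flower exposes an HTTP API (for example `GET /api/workers`; see the Flower docs for the full list), and a request to it can be sketched like this. The helper names are our own, and the credentials match the `--basic_auth` value above:

```python
import base64
import json
import urllib.request

FLOWER_URL = "http://127.0.0.1:9090"  # port configured in docker-compose.yml
USER, PASSWORD = "djangitos", "testpassword"


def basic_auth_header(user, password):
    """Build the Authorization header value matching --basic_auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


def flower_get(path):
    """Query Flower's HTTP API, e.g. flower_get('/api/workers')."""
    req = urllib.request.Request(
        FLOWER_URL + path,
        headers={"Authorization": basic_auth_header(USER, PASSWORD)},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Without the header, Flower answers such requests with `401 Unauthorized`, which is exactly the protection we were after.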
## Celery Flower in Production
While it is easy to spin up Flower in local development, it takes additional effort to run Flower in a production environment.
One of the important things is having Flower not only behind password authentication, but also accessible only via HTTPS.
Appliku makes this easy, since every app with a `web` worker gets an SSL certificate and is accessible only via HTTPS.
But every app can have only one `web` worker, so in order to run Celery Flower we need to create another app that runs only Flower.
Fork this GitHub repo https://github.com/appliku/flowermonitor
In Appliku dashboard create an application with that forked repository.
In the application settings, specify the following environment variables:
- Add an environment variable BROKER_URL pointing to your RabbitMQ or Redis instance
- Add an environment variable RESULT_BACKEND pointing to your Redis instance
- Add an environment variable FLOWER_BASIC_AUTH in the format USERNAME:PASSWORD (login and password, separated by a colon). This will be used to authenticate to the Celery Flower web interface
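Flower's basic-auth option accepts one or more `USER:PASSWORD` pairs. If you want to sanity-check the FLOWER_BASIC_AUTH value before deploying, a small parser like this can help. It is a sketch: the function name is our own, and the example value mirrors the credentials used earlier in this tutorial:

```python
import os

# Example value; in Appliku this comes from the app's environment settings.
os.environ.setdefault("FLOWER_BASIC_AUTH", "djangitos:testpassword")


def parse_basic_auth(value):
    """Split a comma-separated list of USER:PASSWORD pairs into a dict."""
    creds = {}
    for pair in value.split(","):
        user, _, password = pair.partition(":")
        if not user or not password:
            raise ValueError(f"malformed credential pair: {pair!r}")
        creds[user] = password
    return creds


credentials = parse_basic_auth(os.environ["FLOWER_BASIC_AUTH"])
```

A malformed value (say, a missing colon) raises immediately, which is cheaper to catch locally than after a deploy.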
On the Processes tab enable the web worker.
Hit Deploy.
When the deployment is finished, click the "Open App" link in the navigation and you will see the password prompt. Use the login and password from the FLOWER_BASIC_AUTH environment variable.