Asyncio programs should log, like any other production-quality Python program.
There are special considerations and best practices when logging from asyncio programs. These practices have been collected together into this asyncio logging best practices guide.
In this tutorial, you will discover the best practices for logging in asyncio programs in Python.
Let’s get started.
What is Logging
Logging is a way of tracking events within a program.
There are many types of events that may be logged within a program, ranging in severity from debug and informational messages to warnings, errors, and critical events.
The logging module provides infrastructure for logging within Python programs.
This module defines functions and classes which implement a flexible event logging system for applications and libraries.
— logging — Logging facility for Python
Logging is achieved by first configuring the log handler and then adding calls to the logging infrastructure at key points in the program.
Handler objects are responsible for dispatching the appropriate log messages (based on the log messages’ severity) to the handler’s specified destination.
— logging — Logging facility for Python
The default handler will report log messages on the command prompt (e.g. terminal or system output stream).
Alternate handlers can be configured to write log messages to file, to a database, or to custom target locations. The handler can specify the severity of messages to report.
For example, we can log to file by calling the logging.basicConfig() function and specifying the file name and path to log to (e.g. application.log) and the level of messages to capture in the file (e.g. anything from logging.DEBUG to logging.CRITICAL).
...
# log everything to file
logging.basicConfig(filename='application.log', level=logging.DEBUG)
Events are logged via function calls based on the type of event performed, e.g. logging.debug() for debugging messages.
Logging provides a set of convenience functions for simple logging usage. These are debug(), info(), warning(), error() and critical().
— Logging HOWTO
For example, we may add information messages to our application code by calling logging.info() and passing in the string details of the event.
...
# log that data was loaded successfully
logger.info('All data was loaded successfully')
We may also log failures, such as exceptions that are important but not critical to the program, via the logger.error() or logger.exception() functions.
For example:
...
# try a thing
try:
    # ...
    pass
except Exception as err:
    logger.exception(f'Unable to perform task because: {err}')
These function calls can then be added throughout an application to capture events of different types, and the log handler or runtime configuration can be used to filter the severity of the messages that are actually stored.
Now that we know about logging, let’s consider logging best practices in asyncio programs.
12 Asyncio Logging Best Practices
This section lists some best practices related to logging in asyncio programs. They are:
- Enable asyncio Module Logging
- Log Application Details With ContextVars
- Log Asyncio Warnings
- Log From Asyncio Without Blocking
- Log CancelledError Exceptions
- Log Application Exceptions in Asyncio
- Log Task Exceptions Automatically With a Done Callback
- Log Silent Exceptions Automatically
- Log Without Blocking via aiologger
- Log Long-Running Tasks with aiodebug
- Enable Logging When in Asyncio Debug Mode
- Log Trace Of All Running Tasks
Do you know any more best practices for logging in asyncio programs?
Let me know in the comments below.
Let’s take a closer look at each of these practices in turn.
Practice #01. Enable asyncio Module Logging
It is a good practice to log messages from the asyncio module.
The asyncio module will log messages automatically.
This includes details such as which selector the event loop is using.
We can enable logging for the asyncio module by getting the logger for the module and setting the log level.
For example:
...
# enable debug logging for the asyncio module
logging.getLogger("asyncio").setLevel(logging.DEBUG)
Alternatively, we can enable debug level for the entire program.
For example:
...
# enable debug logging
logging.basicConfig(level=logging.DEBUG)
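To tie the two snippets together, the following is a minimal, complete sketch; the main() coroutine is a placeholder for your application entry point, and the exact messages asyncio emits (such as the selector in use) may vary by platform.

import asyncio
import logging

# configure a handler and capture DEBUG and above
logging.basicConfig(level=logging.DEBUG)
# ensure the asyncio module logger reports debug messages
logging.getLogger("asyncio").setLevel(logging.DEBUG)

async def main():
    # application code would go here
    await asyncio.sleep(1)

# run the program; asyncio debug messages (e.g. the selector in use) appear in the log
asyncio.run(main())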
You can learn more about the asyncio module logging in the tutorial:
Practice #02. Log Application Details With ContextVars
It is a good practice to store application details in context vars.
Threads provide a facility for private variables called thread-local storage.
This allows different threads to execute the same code and access the same variables, yet have their own private copy of those variables.
We can achieve the same effect in asyncio for coroutines and tasks using context variables (contextvars).
This is useful when different tasks execute the same code and access the same global variables, yet need their own private values.
This mechanism can be used to log details such as:
- Task id
- Session id
- Request details
- Error id
And so much more.
For example, we can create a global context variable:
...
# create shared context variable (for all coroutines/tasks)
shared_request_id = contextvars.ContextVar('request_id')
We can then store a value in this variable within a task, e.g. at the root level of the task hierarchy.
...
# write data in the context variable
shared_request_id.set(value)
Later, a subtask may log and access a private version of this variable specific to the parent task hierarchy.
...
# read data from the context variable
value = shared_request_id.get()
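As a concrete sketch, we can also inject the context variable's value into every log record via a logging.Filter, so that each task's log lines carry its own request id. The RequestIdFilter class and the %(request_id)s format field below are illustrative names, not part of any library.

import asyncio
import contextvars
import logging

# shared context variable; each task sees its own value
shared_request_id = contextvars.ContextVar('request_id', default='unknown')

# logging filter that copies the context variable into each record (hypothetical helper)
class RequestIdFilter(logging.Filter):
    def filter(self, record):
        record.request_id = shared_request_id.get()
        return True

# attach the filter to the handler so every record carries the request id
handler = logging.StreamHandler()
handler.addFilter(RequestIdFilter())
handler.setFormatter(logging.Formatter('%(levelname)s [%(request_id)s] %(message)s'))
logging.basicConfig(level=logging.INFO, handlers=[handler])

async def handle(request_id):
    # store this task's private value
    shared_request_id.set(request_id)
    # log messages automatically include this task's request id
    logging.info('handling request')
    await asyncio.sleep(0.1)

async def main():
    # run several tasks, each with its own request id
    await asyncio.gather(handle('req-1'), handle('req-2'))

asyncio.run(main())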
You can learn more about how to use contextvars with asyncio in the tutorial:
Practice #03. Log Asyncio Warnings
It is a good practice to log asyncio warnings.
The asyncio event loop will automatically emit warnings under unusual circumstances, such as:
- When a coroutine is created and not awaited.
- When a task fails with an exception and the exception is not retrieved.
By default, these warnings will be reported on standard error.
We can log these warnings by configuring the Python logging infrastructure to capture warnings.
For example:
...
# turn warnings into WARNING level log messages
logging.captureWarnings(True)
We can also enable more warnings from asyncio, such as those related to resources.
...
import warnings
# always show ResourceWarning warnings
warnings.filterwarnings('always', category=ResourceWarning)
You can learn more about asyncio warnings in the tutorial:
Practice #04. Log From Asyncio Without Blocking
It is a good practice to log from asyncio without blocking.
By default, the Python logging infrastructure will block when reporting log messages.
This is typically fine for most Python programs, but it can cause problems in asyncio programs.
If the logging operation involves storing log messages in a file or in a remote database, the call to log will block the asyncio event loop.
This means that no other coroutines or tasks will execute until the log operation is complete.
Therefore, it is a good practice to log in a way that does not block the event loop.
This can be achieved by configuring logging to use a QueueHandler so that log messages are put on a queue.
For example:
...
# get the root logger
log = logging.getLogger()
# create the shared queue
que = Queue()
# add a handler that uses the shared queue
log.addHandler(QueueHandler(que))
A QueueListener can then be configured to consume log messages and dispatch them as needed within a separate thread.
The QueueListener thread must then be shut down when we close our asyncio program, to ensure all log messages have been dispatched. We can achieve this by running an asyncio task in the background that will close the QueueListener for us when the task is canceled.
For example:
...
# create a listener for messages on the queue
listener = QueueListener(que, StreamHandler())
try:
    # start the listener
    listener.start()
    # report the logger is ready
    logging.debug('Logger has started')
    # wait forever
    while True:
        await asyncio.sleep(60)
finally:
    # report the logger is done
    logging.debug('Logger is shutting down')
    # ensure the listener is closed
    listener.stop()
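Putting the pieces together, a minimal end-to-end sketch might look like the following; the helper name logger_task() and the use of a cancelled background task to trigger shutdown are assumptions for illustration.

import asyncio
import logging
from queue import Queue
from logging import StreamHandler
from logging.handlers import QueueHandler, QueueListener

# background task that owns the QueueListener (hypothetical helper)
async def logger_task(que):
    # create a listener that consumes log messages in a separate thread
    listener = QueueListener(que, StreamHandler())
    try:
        # start the listener thread
        listener.start()
        # wait forever (until cancelled at shutdown)
        while True:
            await asyncio.sleep(60)
    finally:
        # ensure all queued messages are flushed and the thread stops
        listener.stop()

async def main():
    # route all log messages through a shared queue so logging never blocks the loop
    que = Queue()
    log = logging.getLogger()
    log.setLevel(logging.DEBUG)
    log.addHandler(QueueHandler(que))
    # start the logger task in the background
    task = asyncio.create_task(logger_task(que))
    # application work, logging as usual
    logging.debug('Application is running')
    await asyncio.sleep(1)
    # shut down logging by cancelling the background task
    task.cancel()
    await asyncio.gather(task, return_exceptions=True)

asyncio.run(main())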
You can learn more about logging in asyncio without blocking in the tutorial:
Practice #05. Log CancelledError Exceptions
It can be a good practice to log CancelledError exceptions.
Recall that when a task is cancelled via its cancel() method, a CancelledError exception will be raised within the target task the next time it runs.
You can learn about canceling asyncio tasks in the tutorial:
The CancelledError exception typically bubbles up and terminates the target task.
The caller that canceled the target task typically must handle the CancelledError exception that is bubbled up.
This may be achieved with a try-except block.
We can log the CancelledError exception in the except block in the caller that canceled the task. This provides confirmation in the logs that the cancellation occurred as expected and that the target task did not suppress the CancelledError exception or raise another exception during cancellation.
This can be performed as part of a cancel-and-wait pattern.
For example:
...
# cancel the target task
task.cancel()
try:
    await task
except CancelledError:
    logging.debug(f'Confirmed that task {task} was cancelled as expected')
You can learn more about the cancel and wait pattern in the tutorial:
A task may also log its own CancelledError exception, as long as it propagates the CancelledError exception afterward, meeting the contract of the cancel() method.
For example:
...
try:
    ...
except CancelledError:
    # shutdown resources
    ...
    # log the cancellation
    logging.debug(f'Task was cancelled, details: {...}')
    # re-raise the exception
    raise
You can learn more about asyncio task cancellation best practices in the tutorial:
Practice #06. Log Application Exceptions in Asyncio
It is a good practice to log application exceptions in asyncio programs.
This goes without saying and is the same as the practice of logging exceptions in normal Python programs.
When an exception occurs within an asyncio task, it is suppressed until a caller retrieves it.
It can be retrieved by awaiting the task or by calling the result() method, after which the exception is re-raised.
A caller can handle the exception and log it.
For example:
...
# await a task that may fail
try:
    await task
except Exception as e:
    logger.exception(f'Task {task} failed with exception {e}')
This is better than retrieving the exception via the exception() method on the task, because in the above case the exception is re-raised, providing a traceback that is logged as part of the logger.exception() call in the except block.
You can learn more about logging task exceptions in the tutorial:
Practice #07. Log Task Exceptions With a Done Callback
It is a good practice to log task exceptions automatically using a done callback function.
A task is done if it completes normally, is canceled, or fails with an exception.
Recall that we can have a callback function executed automatically when a task is done.
You can learn more about done callback functions in the tutorial:
We can define a done callback function to check if a task failed with an exception, and then log it.
For example:
# callback func called for all tasks
def helper_done_callback(task):
    # ignore tasks that were cancelled (exception() would raise CancelledError)
    if task.cancelled():
        return
    # check if the task had an exception
    if task.exception():
        try:
            # re-raise exception
            task.result()
        except Exception as e:
            logger.exception(f'Task {task} failed with exception {e}')
We can reuse this callback function to log exceptions in all tasks automatically by defining a helper function to create tasks and add the done callback.
For example:
# helper for creating all tasks
def create_task_helper(coroutine):
    # wrap and schedule the task
    task = asyncio.create_task(coroutine)
    # add custom callback to task
    task.add_done_callback(helper_done_callback)
    # return the task that was created
    return task
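For instance, the helper can then replace direct calls to asyncio.create_task() throughout the application; the work() coroutine below is a hypothetical placeholder.

...
# a hypothetical coroutine that may fail
async def work():
    raise ValueError('Something went wrong')

# create and schedule the task via the helper so any exception is logged automatically
task = create_task_helper(work())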
You can learn more about logging task exceptions automatically in the tutorial:
Practice #08. Log Silent Exceptions Automatically
It is a best practice to automatically log never-retrieved exceptions in asyncio.
Recall that an asyncio task can fail with an exception. If the exception is never retrieved, it is referred to as a never-retrieved exception.
We can log these exceptions automatically once the event loop has terminated.
This can be achieved by configuring an exception handler in the event loop.
Firstly, we can define an exception handler that will log the exception for us.
# define an exception handler
def exception_handler(loop, context):
    # log exception
    logging.error(f'Never retrieved exception: {context.get("exception")}')
We can then configure the asyncio event loop to call our handler function for each never-retrieved exception.
...
# get the event loop
loop = asyncio.get_running_loop()
# set the exception handler
loop.set_exception_handler(exception_handler)
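For example, a minimal sketch might look like the following; the failing work() coroutine is a hypothetical placeholder, and exactly when the handler fires depends on when the garbage collector destroys the failed task.

import asyncio
import logging

# define an exception handler for never-retrieved exceptions
def exception_handler(loop, context):
    # log the exception details reported by the event loop
    logging.error(f'Never retrieved exception: {context.get("exception")}')

# a hypothetical task that fails with an exception that is never retrieved
async def work():
    raise ValueError('Something went wrong')

async def main():
    # configure logging and the exception handler
    logging.basicConfig(level=logging.DEBUG)
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(exception_handler)
    # create a task, let it fail, and never retrieve its exception
    asyncio.create_task(work())
    await asyncio.sleep(1)

asyncio.run(main())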
You can learn more about logging task exceptions automatically in the tutorial:
Practice #09. Log Without Blocking via aiologger
It is a good practice to log from asyncio without blocking, and the aiologger library offers this capability.
This library does all the heavy lifting for us, providing a ready-to-use logging module for asyncio programs that will perform the log operations in a separate thread, behind the scenes.
The first step is to install the aiologger library.
pip install aiologger
Once aiologger is installed, we can use it in our asyncio programs.
This requires first importing the aiologger.Logger class.
from aiologger import Logger
Once imported, we can then configure the Logger class in terms of the log name, handlers, and log level we wish to use.
This can be achieved via the Logger.with_default_handlers() function.
For example:
...
# configure the logger
logger = Logger.with_default_handlers(name='my-logger')
We can use the Logger instance to log directly, without blocking.
For example:
...
# log a message
logger.debug("something happened")
We can also await the log message.
...
# log a message
await logger.debug("something happened")
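For completeness, a minimal sketch of a program that uses aiologger might look as follows; the call to logger.shutdown() at the end is assumed here to flush and close the logger's handlers before the program exits.

import asyncio
from aiologger import Logger

async def main():
    # configure the logger with default handlers
    logger = Logger.with_default_handlers(name='my-logger')
    # log messages without blocking the event loop
    await logger.info('Application is running')
    await asyncio.sleep(1)
    await logger.info('Application is done')
    # flush and close the logger before exiting (assumed shutdown coroutine)
    await logger.shutdown()

asyncio.run(main())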
You can learn more about how to use the aiologger library in the tutorial:
Practice #10. Log Long-Running Tasks with aiodebug
It is a good practice to log long-running tasks via the aiodebug library.
When run in debug mode, the asyncio event loop will log tasks that block it for longer than a configurable duration.
This is helpful to find blocking calls in asyncio that could be performed in a separate thread.
Running asyncio in debug mode in production in order to get this capability is typically too slow.
Instead, we can get this capability via the aiodebug library.
The aiodebug library can be installed using your favorite package manager.
For example, we can install it using pip:
pip install aiodebug
We can use aiodebug in our program to identify long-running tasks that block the asyncio event loop for “too long”.
This can be achieved by calling the aiodebug.log_slow_callbacks.enable() function and specifying the duration in seconds beyond which a task is considered to be blocking for too long.
For example:
import aiodebug.log_slow_callbacks
...
# set time in sec for long running tasks
aiodebug.log_slow_callbacks.enable(0.1)
When a task step completes, if it blocked the event loop for longer than the specified time, then its details will be reported.
We can configure a handler function so that we can log these tasks.
This can be achieved by defining a function that takes the task and duration it ran as an argument and then logging it using the Python logging infrastructure.
For example:
# handler for long running tasks
def log_slow_handler(task, duration):
    # log the task
    logging.debug(f'Task blocked async loop for too long. Task {task} Duration {duration}')
We can then configure the aiodebug package to call our function each time a task is done and has blocked the event loop for too long.
This can be achieved by calling the aiodebug.log_slow_callbacks.enable() function and setting the “on_slow_callback” argument to our handler function.
For example:
...
# enable logging of long running tasks
aiodebug.log_slow_callbacks.enable(1, on_slow_callback=log_slow_handler)
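As a quick usage sketch (with assumptions), a coroutine that makes a blocking call such as time.sleep() should trigger the handler, since it holds the event loop for longer than the configured threshold. This assumes enable() can be called before the event loop starts and that the handler receives the task details and duration as described above.

import asyncio
import logging
import time
import aiodebug.log_slow_callbacks

# handler for long running tasks
def log_slow_handler(task, duration):
    # log the task and how long it blocked the event loop
    logging.warning(f'Task blocked async loop for too long. Task {task} Duration {duration}')

async def blocker():
    # a blocking call that holds the event loop (for demonstration only)
    time.sleep(0.5)

async def main():
    await blocker()

# report tasks that block the loop for more than 0.1 seconds
aiodebug.log_slow_callbacks.enable(0.1, on_slow_callback=log_slow_handler)
logging.basicConfig(level=logging.DEBUG)
asyncio.run(main())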
You can learn more about logging blocking calls made in asyncio programs in the tutorial:
Practice #11. Enable Logging When in Asyncio Debug Mode
It is a good practice to enable logging when configuring the asyncio event loop into debug mode.
Recall, you can configure the event loop to run in debug mode by setting the “debug” argument to True when starting the event loop with asyncio.run().
For example:
...
# start the event loop
asyncio.run(main(), debug=True)
When the event loop is run in debug mode, the asyncio module will also log additional details such as:
- When the event loop is closed
- Warnings for coroutines that block the event loop for too long.
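For example, a minimal sketch combining debug mode with DEBUG-level logging might look as follows; main() is a placeholder for your application entry point.

import asyncio
import logging

async def main():
    # application code would go here
    await asyncio.sleep(1)

# capture DEBUG-level messages, including those emitted by asyncio in debug mode
logging.basicConfig(level=logging.DEBUG)
# start the event loop in debug mode
asyncio.run(main(), debug=True)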
You can learn more about the asyncio logging and debug mode in the tutorial:
Practice #12. Log Trace Of All Running Tasks
It is a good practice to log a trace of long-running tasks.
Recall that we can log a trace of a given task via the print_stack() method. The trace can be stored in a string buffer and logged.
For example:
...
# create a string buffer
buffer = io.StringIO()
# print the task stack
task.print_stack(file=buffer)
# log the trace
logging.debug(buffer.getvalue())
# close the string buffer
buffer.close()
We can run a background asyncio task that keeps track of how long tasks have been alive and log their trace if they have been alive for too long, e.g. minutes, hours, days, depending on the application.
For example, the coroutine below will monitor all running tasks by name and log a message for those tasks that have been alive for too long.
# report duration of all current tasks
# assumes task names are unique
async def monitor(task_limit_sec, exclude=[]):
    # record of all task names and their start times
    task_dict = dict()
    # loop forever
    while True:
        # get all tasks
        for task in asyncio.all_tasks():
            # skip excluded tasks
            if task in exclude:
                continue
            # get task name
            name = task.get_name()
            # check if not previously known
            if name not in task_dict:
                # add start time (first time seen)
                task_dict[name] = time.monotonic()
                continue
            # compute duration for current task
            duration = time.monotonic() - task_dict[name]
            # check if not too long
            if duration < task_limit_sec:
                continue
            # report task that has been alive too long
            logging.debug(f'{name} alive for too long: {duration:.3f} seconds')
        # check every second
        await asyncio.sleep(1)
Note, this monitor requires that tasks are given meaningful names in the application, instead of the default “Task-1”, “Task-2”, etc. that may repeat.
This monitor can then be started at the beginning of the program as a background task.
For example:
...
# create the monitor task
monitor_coro = monitor(3.0, [asyncio.current_task()])
# start the monitor task
monitor_task = asyncio.create_task(monitor_coro)
You can learn more about reporting tasks that have been alive for too long in the tutorial:
Further Reading
This section provides additional resources that you may find helpful.
Python Asyncio Books
- Python Asyncio Mastery, Jason Brownlee (my book!)
- Python Asyncio Jump-Start, Jason Brownlee.
- Python Asyncio Interview Questions, Jason Brownlee.
- Asyncio Module API Cheat Sheet
I also recommend the following books:
- Python Concurrency with asyncio, Matthew Fowler, 2022.
- Using Asyncio in Python, Caleb Hattingh, 2020.
- asyncio Recipes, Mohamed Mustapha Tahrioui, 2019.
Guides
APIs
- asyncio — Asynchronous I/O
- Asyncio Coroutines and Tasks
- Asyncio Streams
- Asyncio Subprocesses
- Asyncio Queues
- Asyncio Synchronization Primitives
References
Takeaways
You now know the best practices for logging in asyncio programs in Python.
Did I make a mistake? See a typo?
I’m a simple humble human. Correct me, please!
Do you have any additional tips?
I’d love to hear about them!
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Connor Olson on Unsplash