You can log to a file from an asyncio program by initializing the Python logging infrastructure and configuring it to use a FileHandler.
In this tutorial, you will discover how to log to a file from an asyncio program.
Let’s get started.
Need to Log To File in Asyncio Program
Logging is a way of tracking events within a program.
We may need to log from our asyncio program to a file.
This is perhaps one of the most common requirements when logging from Python programs generally, and from asyncio programs specifically.
Unlike logging to standard output, the log file is persisted and can be checked at a later date.
Logging to a file from Python is a common practice with several compelling reasons:
- Persistent Record Keeping: Logging to a file allows you to maintain a persistent record of application events and activities. These logs can serve as valuable historical records for auditing, debugging, and troubleshooting.
- Troubleshooting and Debugging: When issues or errors occur in your application, log files provide detailed information about what went wrong, making it easier to diagnose and debug problems.
- Monitoring and Analysis: Log files enable real-time or post-analysis of application behavior, performance, and usage patterns. They can help identify bottlenecks, performance degradation, and usage trends.
- Error Tracking: Logging errors and exceptions to a file allows developers to track and prioritize issues. Error logs provide crucial details like error messages, stack traces, and context information for efficient resolution.
- Performance Monitoring: Log files can contain performance-related data, including execution times, memory usage, and resource consumption. Monitoring these metrics over time helps identify performance trends and optimization opportunities.
- Debugging in Production: Logging to a file allows debugging in production environments without affecting end-users. Developers can access log files to investigate issues without interrupting normal operations.
It provides critical insights into application behavior.
A log file can be inspected periodically and after a program has run to check for normal operation and identify issues. It can also be inspected after a failure event in order to discover the cause of the problem.
How can we log to file in asyncio?
How to Log to File With Asyncio
We can log to a file in an asyncio program by first getting access to the logging infrastructure.
For example, we can get access to the root logger directly via the logging.getLogger() module function.

```python
...
# get the root logger
log = logging.getLogger()
```
We can then configure a new logging.FileHandler object that specifies the path and filename in which to store log messages.
For example:
```python
...
# create a log file handler
log_fh = logging.FileHandler('application.log')
```
This can then be added to the root logger via the addHandler() method on the log.
```python
...
# add the log file handler to the root logger
log.addHandler(log_fh)
```
Finally, we can configure the level of messages to be logged via the setLevel() method on the log, e.g. DEBUG messages and above.
For example:
```python
...
# log debug messages and above
log.setLevel(logging.DEBUG)
```
We can do all of this in one line in a call to the logging.basicConfig() function.
For example:
```python
...
# configure the root logger to log to file with debug level and above
logging.basicConfig(filename='application.log', level=logging.DEBUG)
```
We can then sprinkle log messages throughout our asyncio program.
For example:
```python
...
# log a message
logging.info('Application has started')
```
Next, let’s explore how we might log to a file without blocking.
How to Log to File With Asyncio Without Blocking
Logging from an asyncio program is blocking.
This means that each logging call will prevent the asyncio event loop from progressing until the call returns.
Often this is not a problem, such as when logging a few messages to standard output.
It does become a problem when there are a large number of log messages and/or the target of the logging infrastructure is a blocking I/O call, such as storing messages to a file, database, or network connection.
… it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals.
— Logging Cookbook
Therefore, it is recommended to perform logging from asyncio using a different thread.
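To make the cost concrete, here is a small sketch where a handler's emit() sleeps to simulate slow I/O. The SlowHandler class is contrived for illustration; it stands in for any handler whose target (disk, network, database) is slow:

```python
# sketch: a deliberately slow handler shows that logging blocks the event loop
import asyncio
import logging
import time

# hypothetical handler simulating a slow target (slow disk, network, etc.)
class SlowHandler(logging.Handler):
    def emit(self, record):
        # blocking call made directly in the caller's thread
        time.sleep(0.2)

async def main():
    # configure a logger that uses the slow handler
    log = logging.getLogger('slow-demo')
    log.addHandler(SlowHandler())
    log.setLevel(logging.INFO)
    # time a single logging call
    start = time.monotonic()
    log.info('this call blocks')
    # nothing else in the event loop could run during emit()
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f'log call stalled the loop for ~{elapsed:.2f} seconds')
```

Every other coroutine in the program is stalled for the duration of the emit() call.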
We can log in asyncio programs without blocking using a QueueHandler and QueueListener.
A queue.Queue can be created and used to store log messages.
We can create a QueueHandler that is configured to use our Queue object and store messages in the queue.
This is a quick function call that will return almost immediately, assuming the queue is unbounded (reasonable for most programs).
… attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly
— Logging Cookbook
For example:
```python
...
# get the root logger
log = logging.getLogger()
# create the shared queue
que = Queue()
# add a handler that uses the shared queue
log.addHandler(QueueHandler(que))
```
We can then configure a QueueListener to consume log messages from a shared queue and store them in some way.
The QueueListener takes the shared Queue object as an argument, as well as a logging handler object. This could be a FileHandler, some network-based handler for storing log messages remotely, or even a simple StreamHandler for reporting messages on standard output.
… QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it’s passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.
— Logging Cookbook
For example:
```python
...
# create the file handler for logging
file_handler = FileHandler('asyncio.log')
# create a listener for messages on the queue
listener = QueueListener(que, file_handler)
```
The QueueListener is not connected to the logging infrastructure directly, only the shared queue.
Internally, it has a daemon worker thread that will consume log messages. As such the QueueListener must be started and stopped explicitly.
For example:
```python
...
# start the listener
listener.start()
...
# ensure the listener is closed
listener.stop()
```
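Since a QueueListener accepts any number of handlers, one listener can fan the same records out to several destinations at once, for example a file and standard error. A small sketch (the filename 'fanout.log' is just an example):

```python
# sketch: one QueueListener feeding several handlers
import logging
from logging import FileHandler, StreamHandler
from logging.handlers import QueueHandler, QueueListener
from queue import Queue

# create the shared queue and attach a QueueHandler to the root logger
que = Queue()
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(QueueHandler(que))
# one listener can fan records out to several handlers
listener = QueueListener(que, FileHandler('fanout.log'), StreamHandler())
listener.start()
# this record goes to both the file and the stream (stderr)
logging.info('goes to both the file and stderr')
# stop() drains any remaining records before the thread exits
listener.stop()
```

This keeps the logging call itself cheap (a queue put) while the listener's worker thread pays the I/O cost for every destination.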
In asyncio, we can develop a coroutine that can initialize the logging infrastructure and configure the shared Queue and QueueHandler. It can then configure the QueueListener and manage its life cycle, starting it initially, and stopping it once the task is canceled.
This helper coroutine can then be run as a background task, which will be canceled automatically when the event loop is terminated.
You can learn more about when asyncio tasks are automatically canceled in the tutorial:
Below is an example of a helper coroutine for initializing non-blocking logging in an asyncio program.
```python
# helper coroutine to setup and manage the logger
async def init_logger():
    # get the root logger
    log = logging.getLogger()
    # create the shared queue
    que = Queue()
    # add a handler that uses the shared queue
    log.addHandler(QueueHandler(que))
    # log all messages, debug and up
    log.setLevel(logging.DEBUG)
    # create the file handler for logging
    file_handler = FileHandler('asyncio.log')
    # create a listener for messages on the queue
    listener = QueueListener(que, file_handler)
    try:
        # start the listener
        listener.start()
        # report the logger is ready
        logging.debug('Logger has started')
        # wait forever
        while True:
            await asyncio.sleep(60)
    finally:
        # report the logger is done
        logging.debug('Logger is shutting down')
        # ensure the listener is closed
        listener.stop()
```
This coroutine can then be run as a background task as the first step in our asyncio program.
We must suspend the caller in order to allow the background task to run and start the worker thread within the QueueListener.
For example:
```python
...
# initialize the logger
logger = asyncio.create_task(init_logger())
# allow the logger to start
await asyncio.sleep(0)
```
You can learn more about non-blocking logging for asyncio in the tutorial:
Now that we know how to log to a file from an asyncio program, let’s look at a worked example.
Example of Asyncio Logging to File (Blocking)
We can explore an example of logging to a file from an asyncio program that is blocking.
That is, each call to the logging infrastructure blocks the asyncio event loop until the log message is stored in the file. Generally, this is not a concern unless writing to disk is slow or there are an enormous number of log messages.
We will use our init_logger() function developed above.
It initializes the root logger to log all messages at a DEBUG level and up and configures a FileHandler to store all logged messages in the file “asyncio.log” in the current working directory.
```python
# helper function to setup the logger
def init_logger():
    # get the root logger
    log = logging.getLogger()
    # log all messages, debug and up
    log.setLevel(logging.DEBUG)
    # create the file handler for logging
    file_handler = FileHandler('asyncio.log')
    # add the handler to the logger
    log.addHandler(file_handler)
```
We can then define a task that takes a unique id and blocks for a random fraction of 5 seconds, logging a message before and after it is blocked.
This is to ensure we see log messages reported randomly during the operation of our program.
The task() coroutine below implements this.
```python
# task that does work and logs
async def task(value):
    # log a message
    logging.info(f'Task {value} is starting')
    # simulate doing work
    await asyncio.sleep(random() * 5)
    # log a message
    logging.info(f'Task {value} is done')
```
We can then define the main() coroutine that first initializes the logging infrastructure, and then issues ten tasks using an asyncio.TaskGroup.
This will ensure we have many different tasks logging many messages randomly, all of which are blocking calls.
```python
# main coroutine
async def main():
    # initialize the logger
    init_logger()
    # log a message
    logging.info('Main is starting')
    # issue many tasks
    async with asyncio.TaskGroup() as group:
        for i in range(10):
            _ = group.create_task(task(i))
    # log a message
    logging.info('Main is done')
```
If you are new to the asyncio.TaskGroup, you can learn how to use it in the tutorial:
Finally, we can start the asyncio event loop.
```python
...
# start the event loop
asyncio.run(main())
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of asyncio logging to file, blocking
import logging
from logging import FileHandler
from random import random
import asyncio

# helper function to setup the logger
def init_logger():
    # get the root logger
    log = logging.getLogger()
    # log all messages, debug and up
    log.setLevel(logging.DEBUG)
    # create the file handler for logging
    file_handler = FileHandler('asyncio.log')
    # add the handler to the logger
    log.addHandler(file_handler)

# task that does work and logs
async def task(value):
    # log a message
    logging.info(f'Task {value} is starting')
    # simulate doing work
    await asyncio.sleep(random() * 5)
    # log a message
    logging.info(f'Task {value} is done')

# main coroutine
async def main():
    # initialize the logger
    init_logger()
    # log a message
    logging.info('Main is starting')
    # issue many tasks
    async with asyncio.TaskGroup() as group:
        for i in range(10):
            _ = group.create_task(task(i))
    # log a message
    logging.info('Main is done')

# start the event loop
asyncio.run(main())
```
Running the example first starts the asyncio event loop and runs the main() coroutine.
The main() coroutine runs and initializes the logger, ensuring that all log messages at a DEBUG level and higher are reported and that all messages are stored in the local file “asyncio.log” in the current working directory.
No messages are reported on standard output.
The main() coroutine then logs an initial message.
A TaskGroup is created and 10 instances of the task() coroutine are created and issued as tasks. The main() coroutine then blocks until all tasks are done.
The task() coroutines run, each reporting its own unique start message, sleeping for a fraction of 5 seconds, then reporting a unique done message.
Once all tasks are done, the main() coroutine resumes and reports a final done message.
A copy of the contents of the log file “asyncio.log” is listed below.
This highlights how we can log to a file from an asyncio program in a way that will block the event loop each time a record is stored in the file.
```
Main is starting
Task 0 is starting
Task 1 is starting
Task 2 is starting
Task 3 is starting
Task 4 is starting
Task 5 is starting
Task 6 is starting
Task 7 is starting
Task 8 is starting
Task 9 is starting
Task 2 is done
Task 3 is done
Task 9 is done
Task 8 is done
Task 6 is done
Task 0 is done
Task 1 is done
Task 5 is done
Task 7 is done
Task 4 is done
Main is done
```
Next, let’s explore how we might update this example so that logging does not block the asyncio event loop.
Example of Asyncio Logging to File (Non-Blocking)
We can explore an example of non-blocking logging to a file from an asyncio program.
In this case, we can update the above example to initialize the logging infrastructure using our helper coroutine developed above.
This helper first creates a shared queue, and then configures a QueueHandler so that the act of logging is limited to putting a message in the queue. A QueueListener is then created and configured to consume messages from the shared queue in an internal worker thread, then report those messages to a separate handler, in this case, a FileHandler that reports log messages to a local file “asyncio.log“.
The init_logger() coroutine below implements this. It is designed to run as a background task for as long as the program is running, sleeping most of the time. It will be canceled automatically when the event loop is terminated, allowing the QueueListener to be closed correctly and purge any messages that may be on the queue.
```python
# helper coroutine to setup and manage the logger
async def init_logger():
    # get the root logger
    log = logging.getLogger()
    # create the shared queue
    que = Queue()
    # add a handler that uses the shared queue
    log.addHandler(QueueHandler(que))
    # log all messages, debug and up
    log.setLevel(logging.DEBUG)
    # create the file handler for logging
    file_handler = FileHandler('asyncio.log')
    # create a listener for messages on the queue
    listener = QueueListener(que, file_handler)
    try:
        # start the listener
        listener.start()
        # report the logger is ready
        logging.debug('Logger has started')
        # wait forever
        while True:
            await asyncio.sleep(60)
    finally:
        # report the logger is done
        logging.debug('Logger is shutting down')
        # ensure the listener is closed
        listener.stop()
```
We can then update the main() coroutine to schedule the init_logger() as a background task and then suspend to allow the task to begin executing.
```python
# main coroutine
async def main():
    # initialize the logger
    logger = asyncio.create_task(init_logger())
    # allow the logger to start
    await asyncio.sleep(0)
    ...
```
And that’s it.
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of asyncio logging to file, non-blocking
import logging
from logging import FileHandler
from logging.handlers import QueueHandler
from logging.handlers import QueueListener
from random import random
from queue import Queue
import asyncio

# helper coroutine to setup and manage the logger
async def init_logger():
    # get the root logger
    log = logging.getLogger()
    # create the shared queue
    que = Queue()
    # add a handler that uses the shared queue
    log.addHandler(QueueHandler(que))
    # log all messages, debug and up
    log.setLevel(logging.DEBUG)
    # create the file handler for logging
    file_handler = FileHandler('asyncio.log')
    # create a listener for messages on the queue
    listener = QueueListener(que, file_handler)
    try:
        # start the listener
        listener.start()
        # report the logger is ready
        logging.debug('Logger has started')
        # wait forever
        while True:
            await asyncio.sleep(60)
    finally:
        # report the logger is done
        logging.debug('Logger is shutting down')
        # ensure the listener is closed
        listener.stop()

# task that does work and logs
async def task(value):
    # log a message
    logging.info(f'Task {value} is starting')
    # simulate doing work
    await asyncio.sleep(random() * 5)
    # log a message
    logging.info(f'Task {value} is done')

# main coroutine
async def main():
    # initialize the logger
    logger = asyncio.create_task(init_logger())
    # allow the logger to start
    await asyncio.sleep(0)
    # log a message
    logging.info('Main is starting')
    # issue many tasks
    async with asyncio.TaskGroup() as group:
        for i in range(10):
            _ = group.create_task(task(i))
    # log a message
    logging.info('Main is done')

# start the event loop
asyncio.run(main())
```
Running the example first starts the asyncio event loop and runs the main() coroutine.
The main() coroutine then schedules the init_logger() coroutine as a background task and suspends to allow it to run.
The init_logger() task runs and configures the root logger to log all messages at a DEBUG level and above and creates and configures the shared Queue and the QueueHandler that makes use of it and configures the logging infrastructure to use the QueueHandler.
It then creates the QueueListener and starts its internal thread to consume log messages from the shared queue and handle them with a FileHandler that stores the log messages in the file “asyncio.log“. The init_logger() task then logs an initial message then runs forever in a loop, suspending itself with calls to sleep.
The main() coroutine resumes and logs an initial message.
A TaskGroup is created and 10 instances of the task() coroutine are created and issued as tasks. The main() coroutine then blocks until all tasks are done.
The task() coroutines run, each reporting its own unique start message, sleeping for a fraction of 5 seconds, then reporting a unique done message.
The log messages are put into the shared queue via the QueueHandler and the calls return almost immediately. The internal thread of the QueueListener runs and consumes log messages from the queue and stores them in the log file via the FileHandler.
Once all tasks are done, the main() coroutine resumes and reports a final done message.
A copy of the contents of the “asyncio.log” log file is reported below.
This highlights how we can log to a file from an asyncio program without blocking.
```
Logger has started
Main is starting
Task 0 is starting
Task 1 is starting
Task 2 is starting
Task 3 is starting
Task 4 is starting
Task 5 is starting
Task 6 is starting
Task 7 is starting
Task 8 is starting
Task 9 is starting
Task 8 is done
Task 5 is done
Task 1 is done
Task 4 is done
Task 0 is done
Task 2 is done
Task 9 is done
Task 7 is done
Task 6 is done
Task 3 is done
Main is done
Logger is shutting down
```
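The log lines above carry no timestamps or levels because the FileHandler was given no formatter. A logging.Formatter can be attached to the handler to add this context; the format string below is one common choice, not the only one, and the filename 'formatted.log' is just an example:

```python
# sketch: attach a formatter so each record carries a timestamp and level
import logging
from logging import FileHandler

# create a file handler with a formatter
handler = FileHandler('formatted.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

# configure a logger to use the formatted handler
log = logging.getLogger('formatted-demo')
log.setLevel(logging.INFO)
log.addHandler(handler)
# this record is written with a timestamp and level prefix
log.info('a message with context')
```

In the non-blocking setup, the same formatter would be set on the FileHandler that is passed to the QueueListener, since that is the handler that actually writes the records.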
Further Reading
This section provides additional resources that you may find helpful.
Python Asyncio Books
- Python Asyncio Mastery, Jason Brownlee (my book!)
- Python Asyncio Jump-Start, Jason Brownlee.
- Python Asyncio Interview Questions, Jason Brownlee.
- Asyncio Module API Cheat Sheet
I also recommend the following books:
- Python Concurrency with asyncio, Matthew Fowler, 2022.
- Using Asyncio in Python, Caleb Hattingh, 2020.
- asyncio Recipes, Mohamed Mustapha Tahrioui, 2019.
Guides
APIs
- asyncio — Asynchronous I/O
- Asyncio Coroutines and Tasks
- Asyncio Streams
- Asyncio Subprocesses
- Asyncio Queues
- Asyncio Synchronization Primitives
References
Takeaways
You now know how to log to file from an asyncio program.
Did I make a mistake? See a typo?
I’m a simple humble human. Correct me, please!
Do you have any additional tips?
I’d love to hear about them!
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Sonnie Hiles on Unsplash