You can profile target functions and coroutines in asyncio programs using the line_profiler module.
The line_profiler module is a third-party library that gathers profile statistics for target functions and generates a report highlighting how long is spent executing each line of those functions.
In this tutorial, you will discover how to use the line_profiler module to profile targeted coroutines in asyncio.
Let’s get started.
Need to Profile Slow Asyncio Programs
Sometimes our asyncio Python programs are slow.
We can use profiling to help identify which parts of an asyncio program are slow so that we can change or fix them to be faster.
In software engineering, profiling (“program profiling”, “software profiling”) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering.
— Profiling (computer programming), Wikipedia.
Profiling slow asyncio programs in Python is crucial for several reasons.
- Optimizing I/O-Bound Applications. Asyncio is often used in I/O-bound applications where the performance gains of asynchronous programming are most significant. Profiling helps identify which parts of the code are causing slowdowns, enabling us to prioritize optimizations and ensure that I/O-bound tasks are executed as efficiently as possible.
- Managing Complexity in Asyncio Programs. Asyncio programs can be complex, involving numerous asynchronous tasks and callbacks. Profiling provides insights into the execution flow, revealing whether tasks are waiting excessively or if there are concurrency issues that need addressing. This information is invaluable for fine-tuning event loop management and task scheduling.
- Resource Bottleneck Detection. Profiling helps detect resource bottlenecks, such as excessive CPU usage, memory leaks, or blocking operations that hinder the responsiveness of asyncio programs. Identifying and resolving these issues is critical for maintaining smooth and responsive asynchronous applications.
- Opportunities for Code Optimization. Profiling can highlight opportunities for code optimization and performance improvements. By pinpointing specific functions or parts of the code that consume the most time or resources, developers can focus their efforts on areas that provide the most significant performance gains.
Profiling slow asyncio programs not only helps diagnose performance issues but also guides optimizations and ultimately leads to more responsive and scalable asynchronous Python applications.
Now that we know that we need to profile our slow asyncio programs, let’s look at why we may want to focus our profiling on targeted functions and coroutines.
Need to Profile Targeted Functions
A limitation of the cProfile and profile modules built into the Python standard library is that they can provide too much information.
Not only do they report details of the functions and coroutines of our program, but also the details of the asyncio infrastructure, and of the underlying libraries.
You can learn more about profiling asyncio programs using cProfile in a separate tutorial.
Once we know the function or coroutine that is slow or requires optimization, we would prefer to focus our profiling efforts on it.
This can be achieved using the line_profiler module.
Line-by-line profiling for Python
— line_profiler Project on GitHub.
The line_profiler module is specifically designed to profile one or more targeted functions or coroutines in our Python programs.
We target functions for profiling ourselves using a custom function decorator, then run a script to gather profile information about our program. The line_profiler module is then used to generate a report for our targeted functions.
When a function or coroutine is targeted for profiling, each line of the target is profiled, providing a wealth of statistics. Hence the name of the module: “line_profiler”.
line_profiler is a module for doing line-by-line profiling of functions.
— line_profiler Project on GitHub.
How to Use line_profiler
Using the line_profiler involves four steps:
- Install line_profiler
- Decorate Python code for profiling
- Collect profile statistics
- Review profile statistics
Let’s take a closer look at each in turn.
Step 1: How to Install line_profiler
The first step is to install the line_profiler library.
This can be achieved using your favorite Python package manager, such as pip.
For example:
pip install line_profiler
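If you work with multiple Python environments, you may prefer to install the library into the currently active interpreter explicitly, for example:

python -m pip install line_profiler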
Step 2: How to Decorate Python Code
The next step is to mark up our Python program.
A custom decorator @profile must be added to each function and coroutine in our program that we wish to profile.
For example:
@profile
async def task(value):
    # ...
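Note that the @profile decorator is injected into the built-in namespace by the kernprof script (used in the next step), so running the decorated program directly with python will raise a NameError. A common workaround, shown here as a minimal sketch rather than a required part of line_profiler, is to fall back to a no-op decorator when it is missing:

# fall back to a no-op @profile decorator when the program is not
# being run under kernprof, which injects 'profile' at run time
try:
    profile
except NameError:
    def profile(func):
        return func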
Step 3: How to Collect Profile Statistics
The next step is to run our decorated program using a custom script called kernprof.
The kernprof script will execute our program and collect profile statistics for our targeted functions and coroutines.
For example:
kernprof -l program.py
The results will be saved into a new binary file named after the target file that was profiled.
For example:
program.py.lprof
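As a side note, the kernprof script also accepts a -v (view) flag that prints the profile report immediately after the run, in addition to writing the .lprof file. For example:

kernprof -l -v program.py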
Step 4: How to Review Profile Statistics
The final step is to analyze the collected profile information and generate a report.
This can be achieved by running the line_profiler module as a program via the Python interpreter’s -m flag, providing the generated binary program.py.lprof file as an argument.
For example:
python -m line_profiler program.py.lprof
This will produce a profile report of the decorated functions and coroutines that can be used to determine the parts of the program that need optimization.
Each line of the targeted functions is reproduced with statistics in a table, showing time spent executing, times executed, and other information.
Now that we know how to use the line_profiler, let’s look at a worked example.
Example of Profiling Asyncio With a Blocking Call
We can explore an example of how to use the line_profiler module to profile an asyncio coroutine that is known to have a slow function call.
We will define a coroutine work() that calls the time.sleep() function. This blocks the event loop, which is a bad practice. The coroutine has a number of other lines with short sleeps and reporting messages. This is to give the line profiler some “lines” to work with.
We can pretend we don’t know the cause of the slowness, target this coroutine for profiling, and add the @profile function decorator.
# long running task
@profile
async def work():
    # report a message
    print('Task is running')
    # block a moment
    await asyncio.sleep(1)
    # block the event loop for a long time
    time.sleep(5)
    # block a moment
    await asyncio.sleep(1)
    # report a message
    print('Task is done.')
The main() coroutine reports a message, creates and awaits our work() task, then reports a final message.
# main program
async def main():
    # report a message
    print('Main is running')
    # create the task
    task = asyncio.create_task(work())
    # await our task
    await task
    # report a message
    print('Main is done.')
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of a program that has a slow function call
import time
import asyncio

# long running task
@profile
async def work():
    # report a message
    print('Task is running')
    # block a moment
    await asyncio.sleep(1)
    # block the event loop for a long time
    time.sleep(5)
    # block a moment
    await asyncio.sleep(1)
    # report a message
    print('Task is done.')

# main program
async def main():
    # report a message
    print('Main is running')
    # create the task
    task = asyncio.create_task(work())
    # await our task
    await task
    # report a message
    print('Main is done.')

# start the event loop
asyncio.run(main())
Next, we can run our program via the kernprof custom script to collect profile information.
kernprof -l program1.py
This runs the program and stores the results in a binary file program1.py.lprof.
Main is running
Task is running
Task is done.
Main is done.
Wrote profile results to program1.py.lprof
Inspect results with:
python -m line_profiler -rmt "program1.py.lprof"
We can then inspect a report of the collected profile statistics.
python -m line_profiler program1.py.lprof
This produces a report.
We can see each line of our target function on the right-hand side.
We can also see the number of times each line of the function was executed (Hits), which was once, and the wall clock time (Time) spent executing each line.
Helpfully, we can also see the percentage of time (% Time) spent on each line.
The report shows that the call to time.sleep(5) occupied effectively the entire duration, i.e. 100 percent. This is because this call blocks the event loop, whereas the calls to print() and the awaited asyncio.sleep() coroutines do not block; they take only fractions of a second to issue.
This highlights how we can use the line_profiler module to identify slow lines within a target function.
Timer unit: 1e-06 s

Total time: 5.00428 s
File: test1.py
Function: work at line 7

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     7                                           @profile
     8                                           async def work():
     9                                               # report a message
    10         1          9.0      9.0      0.0      print('Task is running')
    11                                               # block a moment
    12         1         28.0     28.0      0.0      await asyncio.sleep(1)
    13                                               # block the event loop for a long time
    14         1    5004119.0    5e+06    100.0      time.sleep(5)
    15                                               # block a moment
    16         1        111.0    111.0      0.0      await asyncio.sleep(1)
    17                                               # report a message
    18         1         15.0     15.0      0.0      print('Task is done.')
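Having identified the blocking time.sleep(5) call, one possible fix, shown below as a sketch rather than as part of the original program and assuming Python 3.9+ for asyncio.to_thread(), is to offload the blocking call to a worker thread so that the event loop stays responsive:

# updated work() coroutine: run the blocking call in a worker thread
# so that it no longer blocks the event loop (requires Python 3.9+)
async def work():
    # report a message
    print('Task is running')
    # block a moment
    await asyncio.sleep(1)
    # offload the blocking call to a thread
    await asyncio.to_thread(time.sleep, 5)
    # block a moment
    await asyncio.sleep(1)
    # report a message
    print('Task is done.')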
Next, let’s explore how we might use line_profiler to identify a function that is called many times.
Example of Profiling Asyncio With a Repeated Function Call
We can explore how to use the line_profiler to profile a function that repeatedly calls the same function.
In this case, we will define a coroutine that creates a large list of square roots of integers. The work is performed in a list comprehension and takes many seconds.
This is a CPU-bound task that will block the asyncio event loop for the duration.
The target coroutine also performs all of its work on a single line, which is challenging for the line_profiler, as it can only report at the level of lines.
We will annotate this coroutine so that we can profile it, pretending we don’t know the cause of the issue.
# long running task
@profile
async def work():
    # block the event loop for a long time
    data = [math.sqrt(i) for i in range(1, 100000000)]
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of a program that calls a function repeatedly
import math
import asyncio

# long running task
@profile
async def work():
    # block the event loop for a long time
    data = [math.sqrt(i) for i in range(1, 100000000)]

# main program
async def main():
    # report a message
    print('Main is running')
    # create the task
    task = asyncio.create_task(work())
    # await our task
    await task
    # report a message
    print('Main is done.')

# start the event loop
asyncio.run(main())
Next, we can run our program via the kernprof custom script to collect profile information.
kernprof -l program2.py
This runs the program and stores the results in a binary file program2.py.lprof.
Main is running
Main is done.
Wrote profile results to program2.py.lprof
Inspect results with:
python3 -m line_profiler -rmt "program2.py.lprof"
We can then inspect a report of the collected profile statistics.
python -m line_profiler program2.py.lprof
This produces a report.
We can see each line of our target function on the right-hand side.
We can also see the number of times each line of the function was executed (Hits), which was once, and the wall clock time spent executing each line.
Given that the task within our target function is occurring on a single line within a list comprehension, we cannot see more detail than the line level in the profile report.
This highlights a limitation of the line_profiler module: it allows a function or coroutine to be targeted for profiling, but it can only report profile information at the line level.
Here, we are not given more detail about which function (or functions) within the list comprehension is slow. We would have to use the cProfile tool to get more information.
Timer unit: 1e-06 s

Total time: 20.5964 s
File: test2.py
Function: work at line 7

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     7                                           @profile
     8                                           async def work():
     9                                               # block the event loop for a long time
    10         1   20596383.0    2e+07    100.0      data = [math.sqrt(i) for i in range(1, 100000000)]
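One way to get more line-level detail, shown below as a sketch and intended only as a temporary diagnostic change, is to unroll the list comprehension into an explicit loop so that line_profiler can attribute time to the individual steps:

# unrolled version of the target coroutine, for profiling purposes only,
# so that each step appears on its own line in the report
@profile
async def work():
    # block the event loop for a long time
    data = []
    for i in range(1, 100000000):
        value = math.sqrt(i)
        data.append(value)

Note that an explicit loop is somewhat slower than a list comprehension, so this restructuring is best reverted once profiling is complete.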
Further Reading
This section provides additional resources that you may find helpful.
Python Asyncio Books
- Python Asyncio Mastery, Jason Brownlee (my book!)
- Python Asyncio Jump-Start, Jason Brownlee.
- Python Asyncio Interview Questions, Jason Brownlee.
- Asyncio Module API Cheat Sheet
I also recommend the following books:
- Python Concurrency with asyncio, Matthew Fowler, 2022.
- Using Asyncio in Python, Caleb Hattingh, 2020.
- asyncio Recipes, Mohamed Mustapha Tahrioui, 2019.
APIs
- asyncio — Asynchronous I/O
- Asyncio Coroutines and Tasks
- Asyncio Streams
- Asyncio Subprocesses
- Asyncio Queues
- Asyncio Synchronization Primitives
Takeaways
You now know how to use the line_profiler module to profile targeted coroutines in asyncio.
Did I make a mistake? See a typo?
I’m a simple humble human. Correct me, please!
Do you have any additional tips?
I’d love to hear about them!
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Solé Bicycles on Unsplash