You can develop a custom asynchronous context manager to automatically benchmark asyncio code in Python.
An asynchronous context manager is a context manager that can be suspended in asyncio when it is entered and exited. We can wrap code that we wish to automatically benchmark using a custom asynchronous context manager.
In this tutorial, you will discover how to benchmark asyncio using an asynchronous context manager.
Let’s get started.
Help to Benchmark Asyncio Coroutines and Tasks
We can benchmark Python code using the time module.
The time.perf_counter() function will return a value from a high-performance counter.
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration.
— time — Time access and conversions
The difference between the two calls to the time.perf_counter() function can provide a high-precision estimate of the execution time of a block of code.
Unlike the time.time() function, the time.perf_counter() function is not subject to updates, such as daylight saving changes and synchronizing the system clock with a time server. This makes the time.perf_counter() function a reliable approach to benchmarking Python code.
We can call the time.perf_counter() function at the beginning of the code we wish to benchmark, and again at the end of the code we wish to benchmark.
For example:
```python
...
# record start time
time_start = time.perf_counter()
# call benchmark code
task()
# record end time
time_end = time.perf_counter()
```
The difference between the start and end time is the total duration of the program in seconds.
For example:
```python
...
# calculate the duration
time_duration = time_end - time_start
# report the duration
print(f'Took {time_duration:.3f} seconds')
```
You can learn more about benchmarking Python code with the time.perf_counter() function in the tutorial:
This approach to benchmarking can be used to benchmark asyncio programs that await coroutines and tasks.
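As a sketch of this manual approach applied to asyncio (using asyncio.sleep() as a stand-in workload, since any awaited coroutine or task can take its place):

```python
# example of manually benchmarking an awaited coroutine
import time
import asyncio

# stand-in workload: suspend for a fraction of a second
async def task():
    await asyncio.sleep(0.1)

# main coroutine
async def main():
    # record start time
    time_start = time.perf_counter()
    # await the coroutine we wish to benchmark
    await task()
    # record end time
    time_end = time.perf_counter()
    # calculate and report the duration
    time_duration = time_end - time_start
    print(f'Took {time_duration:.3f} seconds')
    return time_duration

# start the event loop
duration = asyncio.run(main())
```

The reported duration will be at least the 0.1 seconds of simulated work, plus a small amount of event loop overhead.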
How can we hide all of this code so that we can benchmark with a simple interface?
Can we develop a custom asynchronous context manager that will benchmark our code automatically?
How to Develop a Benchmark Asynchronous Context Manager
We can hide manual benchmarking of asyncio code in an asynchronous context manager.
An asynchronous context manager is a context manager that is able to suspend execution in its __aenter__ and __aexit__ methods.
— Asynchronous Context Managers
Recall that an asynchronous context manager is a type of context manager that can be suspended when entering and exiting.
The __aenter__ and __aexit__ methods are defined as coroutines and are awaited by the caller.
This is achieved using the “async with” expression.
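As a minimal illustration of this mechanism (the AsyncContext name and the events list are just for this sketch), the "async with" expression awaits __aenter__() on entry and __aexit__() on exit:

```python
# example of a minimal asynchronous context manager
import asyncio

# record the order of events for demonstration
events = []

# minimal asynchronous context manager
class AsyncContext:
    # entered via "async with", may await internally
    async def __aenter__(self):
        events.append('enter')
        return self

    # exited automatically, may also await internally
    async def __aexit__(self, exc_type, exc, tb):
        events.append('exit')
        # do not suppress any exception
        return False

# main coroutine
async def main():
    # enter the context, run the body, then exit the context
    async with AsyncContext():
        events.append('body')

# start the event loop
asyncio.run(main())
print(events)
```

Running this sketch shows the enter, body, and exit steps occurring in order.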
You can learn more about asynchronous context managers in the tutorial:
We can define a new class that implements a constructor __init__() and the __aenter__() and __aexit__() coroutines.
The __init__() constructor can take a name argument for the benchmark case and store it in an object attribute.
For example:
```python
# constructor
def __init__(self, name):
    # store the name of this benchmark
    self.name = name
```
The __aenter__() coroutine can initialize the start time and store it in an object attribute. It can then return an instance of the context manager itself, as a good practice.
For example:
```python
# enter the async context manager
async def __aenter__(self):
    # record the start time
    self.time_start = perf_counter()
    # return this object
    return self
```
The __aexit__() coroutine must take some standard arguments about any exception that occurred while running the context code. It can then record the end time, calculate and store the duration, and report the calculated duration along with the name of the benchmark case.
```python
# exit the async context manager
async def __aexit__(self, exc_type, exc, tb):
    # record the end time
    self.time_end = perf_counter()
    # calculate the duration
    self.duration = self.time_end - self.time_start
    # report the duration
    print(f'{self.name} took {self.duration:.3f} seconds')
    # do not suppress any exception
    return False
```
Tying this together, the complete Benchmark asynchronous context manager is listed below.
```python
# benchmark asynchronous context manager
class Benchmark:
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the async context manager
    async def __aenter__(self):
        # record the start time
        self.time_start = perf_counter()
        # return this object
        return self

    # exit the async context manager
    async def __aexit__(self, exc_type, exc, tb):
        # record the end time
        self.time_end = perf_counter()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False
```
We can then use it by creating an instance of the Benchmark class within the “async with” expression and then list any code within the context we wish to benchmark.
For example:
```python
...
# create the benchmark context
async with Benchmark('Task'):
    # run the task
    ...
```
The code within the context will run as per normal, and once finished, the total execution time will be reported automatically.
You can learn more about the “async with” expression in the tutorial:
You can learn more about a regular (non-asynchronous) context manager for benchmarking in the tutorial:
Now that we know how to develop and use a benchmark context manager, let’s look at some examples.
Example of Benchmarking Coroutine with Context Manager
We can explore how to use our Benchmark asynchronous context manager to benchmark the execution time of a custom coroutine.
In this example, we will define a coroutine that blocks the event loop with a CPU-bound task.
The coroutine creates a list of 100 million squared integers in a list comprehension.
For example:
```python
# task to benchmark
async def work():
    # create a large list
    data = [i*i for i in range(100000000)]
```
We can then await this coroutine within the Benchmark asynchronous context manager to have the execution time automatically recorded and reported.
For example:
```python
# benchmark the execution of our task
async with Benchmark('work()'):
    await work()
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of benchmarking an asyncio coroutine with an async context manager
from time import perf_counter
import asyncio

# benchmark asynchronous context manager
class Benchmark:
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the async context manager
    async def __aenter__(self):
        # record the start time
        self.time_start = perf_counter()
        # return this object
        return self

    # exit the async context manager
    async def __aexit__(self, exc_type, exc, tb):
        # record the end time
        self.time_end = perf_counter()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False

# task to benchmark
async def work():
    # create a large list
    data = [i*i for i in range(100000000)]

# main coroutine
async def main():
    # report a message
    print('Main starting')
    # benchmark the execution of our task
    async with Benchmark('work()'):
        await work()
    # report a message
    print('Main done')

# start the event loop
asyncio.run(main())
```
Running the example first starts the asyncio event loop and runs the main() coroutine.
The main() coroutine runs and reports a message.
It then creates the Benchmark asynchronous context manager via the “async with” expression and provides the name “work()“, which is stored in an object attribute.
The main() coroutine suspends and the context manager is entered, automatically awaiting the __aenter__() coroutine where the start time is recorded in an object attribute.
The work() coroutine is then awaited in the body of the context manager and the list is created.
Finally, the asynchronous context manager is exited, automatically awaiting the __aexit__() method, recording the end time, calculating the duration, and reporting it to standard out.
In this case, we can see that the work() coroutine took about 6.340 seconds to complete.
This highlights how we can benchmark arbitrary asyncio code using a custom asynchronous context manager.
```
Main starting
work() took 6.340 seconds
Main done
```
Next, let’s take a look at how we might benchmark an asyncio task instead of a coroutine.
Example of Benchmarking a Task with Context Manager
We can explore how to use our Benchmark asynchronous context manager to benchmark the execution time of an asyncio Task.
In this case, we can update the above example to first create an asyncio.Task to run our work() coroutine, then await the new task within the Benchmark asynchronous context manager.
```python
...
# create the task
task = asyncio.create_task(work())
# benchmark the execution of our task
async with Benchmark('work()'):
    await task
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of benchmarking an asyncio task with an async context manager
from time import perf_counter
import asyncio

# benchmark asynchronous context manager
class Benchmark:
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the async context manager
    async def __aenter__(self):
        # record the start time
        self.time_start = perf_counter()
        # return this object
        return self

    # exit the async context manager
    async def __aexit__(self, exc_type, exc, tb):
        # record the end time
        self.time_end = perf_counter()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False

# task to benchmark
async def work():
    # create a large list
    data = [i*i for i in range(100000000)]

# main coroutine
async def main():
    # report a message
    print('Main starting')
    # create the task
    task = asyncio.create_task(work())
    # benchmark the execution of our task
    async with Benchmark('work()'):
        await task
    # report a message
    print('Main done')

# start the event loop
asyncio.run(main())
```
Running the example first starts the asyncio event loop and runs the main() coroutine.
The main() coroutine runs and reports a message. It then creates an asyncio.Task for our work() coroutine.
Next, the main coroutine creates the Benchmark asynchronous context manager via the “async with” expression and provides the name “work()“, which is stored in an object attribute.
The main() coroutine suspends and the context manager is entered, automatically awaiting the __aenter__() coroutine where the start time is recorded in an object attribute.
Our task is then awaited in the body of the context manager. The work() task runs and the list is created.
Finally, the asynchronous context manager is exited, automatically awaiting the __aexit__() method, recording the end time, calculating the duration, and reporting it to standard out.
In this case, we can see that the work() task took about 6.279 seconds to complete.
This highlights how we can benchmark an arbitrary asyncio.Task using a custom asynchronous context manager.
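As a variation on the example above (using short asyncio.sleep() calls in place of the CPU-bound list so it completes quickly), the same Benchmark context manager can also time several tasks awaited together via asyncio.gather():

```python
# example of benchmarking several asyncio tasks together
from time import perf_counter
import asyncio

# benchmark asynchronous context manager (as defined above)
class Benchmark:
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the async context manager
    async def __aenter__(self):
        # record the start time
        self.time_start = perf_counter()
        # return this object
        return self

    # exit the async context manager
    async def __aexit__(self, exc_type, exc, tb):
        # record the end time
        self.time_end = perf_counter()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False

# stand-in workload: suspend for a given delay
async def work(delay):
    await asyncio.sleep(delay)

# main coroutine
async def main():
    # create several tasks
    tasks = [asyncio.create_task(work(0.1)) for _ in range(3)]
    # benchmark awaiting all of the tasks together
    async with Benchmark('3 tasks') as bench:
        await asyncio.gather(*tasks)
    return bench.duration

# start the event loop
duration = asyncio.run(main())
```

Because the tasks sleep concurrently, the reported duration is close to the longest single delay (about 0.1 seconds), not the sum of the delays.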
```
Main starting
work() took 6.279 seconds
Main done
```
Further Reading
This section provides additional resources that you may find helpful.
Books
- Python Benchmarking, Jason Brownlee (my book!)
Also, the following Python books have chapters on benchmarking that may be helpful:
- Python Cookbook, 2013. (sections 9.1, 9.10, 9.22, 13.13, and 14.13)
- High Performance Python, 2020. (chapter 2)
Guides
- 4 Ways to Benchmark Python Code
- 5 Ways to Measure Execution Time in Python
- Python Benchmark Comparison Metrics
Benchmarking APIs
- time — Time access and conversions
- timeit — Measure execution time of small code snippets
- The Python Profilers
Takeaways
You now know how to benchmark asyncio using an asynchronous context manager.
Did I make a mistake? See a typo?
I’m a simple humble human. Correct me, please!
Do you have any additional tips?
I’d love to hear about them!
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.