You can develop a custom context manager to automatically benchmark code in Python.
In this tutorial, you will discover how to benchmark Python code using a context manager.
Let’s get started.
Benchmark Python Code With time.time()
We can benchmark Python code using the time module.
The time.time() function will return the current time in seconds since January 1st 1970 (called the epoch).
Return the time in seconds since the epoch as a floating point number.
— time — Time access and conversions
We can call this function at the beginning of the code we wish to benchmark, and again at the end of the code we wish to benchmark.
For example:
```python
...
# record start time
time_start = time.time()
# call benchmark code
task()
# record end time
time_end = time.time()
```
The difference between the start and end time is the total duration of the program in seconds.
For example:
```python
...
# calculate the duration
time_duration = time_end - time_start
# report the duration
print(f'Took {time_duration:.3f} seconds')
```
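As an aside, time.time() reads the wall clock, which can jump backward or forward if the system clock is adjusted. For benchmarking, the standard library also provides time.perf_counter(), a high-resolution monotonic clock. A minimal sketch of the same measurement using it (the summing task is just a placeholder workload):

```python
import time

# record start time using the monotonic performance counter
time_start = time.perf_counter()
# placeholder code to benchmark: sum the first million integers
result = sum(range(1000000))
# record end time
time_end = time.perf_counter()
# report the duration
print(f'Took {time_end - time_start:.3f} seconds')
```

The usage is identical to time.time(); only the clock being read differs.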
How can we hide all of this code so that we can benchmark with a simple interface?
Can we develop a benchmark context manager?
How to Benchmark Code Using a Context Manager
We can hide manual benchmarking of Python code in a context manager.
A context manager is an object that defines the runtime context to be established when executing a with statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code.
— With Statement Context Managers
Recall that a context manager is a Python object that has __enter__() and __exit__() methods and is used via the with expression.
Examples include opening a file via the open() built-in function and using a ThreadPoolExecutor.
For example:
```python
from concurrent.futures import ThreadPoolExecutor

# create thread pool
with ThreadPoolExecutor() as tpe:
    # ...
```
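The same pattern applies to open(): the file object it returns is a context manager that closes the file automatically when the block exits, even if an error occurs. A small sketch (the file name and path are arbitrary choices for the example):

```python
import os
import tempfile

# create a temporary path for the example file
path = os.path.join(tempfile.mkdtemp(), 'example.txt')

# open() returns a context manager that closes the file on exit
with open(path, 'w') as f:
    f.write('hello')

# the file handle is closed automatically once the block exits
print(f.closed)  # True
```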
We can define a new class that implements the constructor __init__(), along with the __enter__() and __exit__() methods.
The __init__() constructor can take a name argument for the benchmark case and store it in an object attribute.
For example:
```python
# constructor
def __init__(self, name):
    # store the name of this benchmark
    self.name = name
```
The __enter__() method can record the start time and store it in an object attribute. It should then return the context manager instance itself, which is good practice.
For example:
```python
...
# enter the context manager
def __enter__(self):
    # record the start time
    self.time_start = time()
    # return this object
    return self
```
The __exit__() method must accept the standard arguments that describe any exception raised while executing the code in the context. It can then record the end time, calculate and store the duration, and report the duration along with the name of the benchmark case.

For example:
```python
...
# exit the context manager
def __exit__(self, exc_type, exc_value, traceback):
    # record the end time
    self.time_end = time()
    # calculate the duration
    self.duration = self.time_end - self.time_start
    # report the duration
    print(f'{self.name} took {self.duration:.3f} seconds')
    # do not suppress any exception
    return False
```
Tying this together, the complete Benchmark context manager is listed below.
```python
# define the benchmark context manager
class Benchmark(object):
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the context manager
    def __enter__(self):
        # record the start time
        self.time_start = time()
        # return this object
        return self

    # exit the context manager
    def __exit__(self, exc_type, exc_value, traceback):
        # record the end time
        self.time_end = time()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False
```
We can then use it by creating an instance of the Benchmark class in the with expression and placing any code we wish to benchmark within the context.
For example:
```python
...
# create the benchmark context
with Benchmark('Task'):
    # run the task
    ...
```
The code within the context will run as per normal, and once finished, the total execution time will be reported automatically.
This is a clever idea, but I am not the first to think of it. The first time I came across the idea was in a post by Dave Beazley. All credit to him:
- A Context Manager for Timing Benchmarks, Dave Beazley, 2010.
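As a side note, the contextlib module in the standard library offers a shortcut for this pattern: decorate a generator function with @contextmanager, and the code before the yield plays the role of __enter__() while the code after it plays the role of __exit__(). A sketch of an equivalent generator-based benchmark (the function name and placeholder workload are illustrative choices):

```python
from contextlib import contextmanager
from time import time

# generator-based equivalent of the Benchmark class
@contextmanager
def benchmark(name):
    # code before the yield runs on entry to the context
    time_start = time()
    try:
        # hand control to the body of the with block
        yield
    finally:
        # code after the yield runs on exit, even if an exception occurred
        duration = time() - time_start
        print(f'{name} took {duration:.3f} seconds')

# use the generator-based context manager
with benchmark('Demo'):
    data = [i*i for i in range(100000)]
```

The try/finally ensures the duration is still reported if the benchmarked code raises an exception, matching the behavior of the class-based version.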
Now that we know how to develop and use a benchmark context manager, let’s look at some examples.
Example of Benchmarking a Function Using a Context Manager
We can explore how to use our Benchmark context manager to benchmark the execution time of a custom function.
In this example, we will define a custom function that takes a moment to complete.
The function creates a list of 100 million squared integers in a list comprehension.
For example:
```python
# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]
```
We can then call this function within the Benchmark context manager to have the execution time automatically recorded and reported.
For example:
```python
# create the benchmark context
with Benchmark('Task'):
    # run the task
    task()
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of a benchmark context manager
from time import time

# define the benchmark context manager
class Benchmark(object):
    # constructor
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    # enter the context manager
    def __enter__(self):
        # record the start time
        self.time_start = time()
        # return this object
        return self

    # exit the context manager
    def __exit__(self, exc_type, exc_value, traceback):
        # record the end time
        self.time_end = time()
        # calculate the duration
        self.duration = self.time_end - self.time_start
        # report the duration
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False

# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]

# protect the entry point
if __name__ == '__main__':
    # create the benchmark context
    with Benchmark('Task'):
        # run the task
        task()
```
Running the example first creates the Benchmark context manager in the with expression and provides the name “Task”, which is stored in an object attribute.
The context manager is entered, automatically calling the __enter__() method where the start time is recorded in an object attribute.
The task() function is then called and the list is created.
Finally, the context manager is exited, automatically calling the __exit__() method, recording the end time, calculating the duration, and reporting it to standard out.
In this case, we can see that the task() function took about 6.091 seconds to complete.
This highlights how we can benchmark arbitrary Python code using a custom context manager.
```
Task took 6.091 seconds
```
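Because __enter__() returns the instance and __exit__() stores the measured time in the duration attribute, the with statement's "as" clause can be used to keep the result for later use rather than only printing it. A minimal sketch, repeating a trimmed copy of the Benchmark class so the snippet is self-contained (the smaller workload here is just for illustration):

```python
from time import time

# trimmed copy of the Benchmark context manager developed above
class Benchmark(object):
    def __init__(self, name):
        # store the name of this benchmark
        self.name = name

    def __enter__(self):
        # record the start time and return the instance
        self.time_start = time()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # record end time and store the duration on the instance
        self.duration = time() - self.time_start
        print(f'{self.name} took {self.duration:.3f} seconds')
        # do not suppress any exception
        return False

# capture the context manager instance via "as"
with Benchmark('Small task') as bench:
    data = [i*i for i in range(100000)]

# the measured duration remains available after the context exits
print(f'stored duration: {bench.duration:.3f} seconds')
```

This makes it easy to collect durations from several benchmark contexts and compare them programmatically.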
Further Reading
This section provides additional resources that you may find helpful.
Books
There are no books dedicated to Python benchmarking.
The following Python books have chapters on benchmarking that may be helpful:
The book Python Cookbook (2013) has a few sections on benchmarking, such as:
- 9.1. Putting a Wrapper Around a Function
- 9.10. Applying Decorators to Class and Static Methods
- 9.22. Defining Context Managers the Easy Way
- 13.13. Making a Stopwatch Timer
- 14.13. Profiling and Timing Your Program
The book High Performance Python: Practical Performant Programming for Humans has a chapter on benchmarking: Chapter 2: Profiling to Find Bottlenecks, including a section titled "Simple Approaches to Timing—print and a Decorator", and a section titled "Simple Timing Using the Unix time Command".
Benchmarking Guides
- 4 Ways to Benchmark Python Code
- 5 Ways to Measure Execution Time in Python
- Python Benchmark Comparison Metrics
Benchmarking APIs
- time — Time access and conversions
- timeit — Measure execution time of small code snippets
- The Python Profilers
Takeaways
You now know how to benchmark Python code using a context manager.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Fonsi Fernández on Unsplash