Last Updated on October 3, 2023
You can develop a custom benchmark decorator that will automatically record and report the execution time of target functions.
This requires defining the decorator function and adding the decoration to the function to be benchmarked.
In this tutorial, you will discover how to automatically benchmark target functions using a benchmark decorator in Python.
Let’s get started.
Benchmark Python Code With time.perf_counter()
We can benchmark Python code using the time module.
The time.perf_counter() function will return a value from a high-performance counter.
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration.
— time — Time access and conversions
The difference between the two calls to the time.perf_counter() function can provide a high-precision estimate of the execution time of a block of code.
Unlike the time.time() function, the time.perf_counter() function is not subject to clock adjustments, such as synchronizing the system clock with a time server. This makes the time.perf_counter() function a reliable approach for benchmarking Python code.
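We can confirm this property directly. As a minimal aside (assuming Python 3.3 or later), the time.get_clock_info() function reports whether a given clock is adjustable and whether it is monotonic:

# report the properties of the time() and perf_counter() clocks
import time
for name in ['time', 'perf_counter']:
    info = time.get_clock_info(name)
    print(f'{name}: adjustable={info.adjustable}, monotonic={info.monotonic}')

On most systems, this reports that the 'time' clock is adjustable and not monotonic, whereas the 'perf_counter' clock is monotonic and not adjustable.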
We can call the time.perf_counter() function at the beginning of the code we wish to benchmark, and again at the end of the code we wish to benchmark.
For example:
...
# record start time
time_start = time.perf_counter()
# call benchmark code
task()
# record end time
time_end = time.perf_counter()
The difference between the start and end time is the total duration of the program in seconds.
For example:
...
# calculate the duration
time_duration = time_end - time_start
# report the duration
print(f'Took {time_duration:.3f} seconds')
You can learn more about benchmarking Python code with the time.perf_counter() function in a separate tutorial.
How can we hide all of this code so that we can benchmark with a simple interface?
Can we develop a function decorator that will benchmark our code automatically?
How to Develop a Benchmark Function Decorator
We can develop our custom function decorator to automatically benchmark our target functions.
Firstly, let’s review function decorators.
Function Decorators
A function decorator in Python allows a custom function to be called automatically that will in turn call our target function.
This can be used to insert code before and after calling our target function, such as recording the start time and end time and calculating the overall execution duration.
A function decorator can be defined as a custom function that returns a function that in turn calls our target function.
Typically, the new wrapper function is defined as an inner function, that is, within the decorator function itself.
For example:
# define the custom decorator
def custom_decorator(func):
    # inner function that wraps the target function
    def inner_wrapper():
        # call the target function
        func()
    # return the inner function
    return inner_wrapper
The decorator can be added to arbitrary functions in our program as follows:
# function that has the decorator
@custom_decorator
def task():
    # ...
The function we are adding the decorator to, e.g. task(), may take arguments and may have a return value.
Therefore, our inner wrapper function needs to handle both accordingly.
For example:
# define the custom decorator
def custom_decorator(func):
    # inner function that wraps the target function
    def inner_wrapper(*args, **kwargs):
        # call the target function
        return func(*args, **kwargs)
    # return the inner function
    return inner_wrapper
Additionally, we may want to leave the decorator on the target function and have the function look and feel unchanged, e.g. if we print the function or call help() on it. With the decorator in place, this will not be the case: the function will report the name and docstring of the inner wrapper function instead.
Instead, we can have the wrapper look like the target function passed in, so that any interrogation of the target function appears normal.
This can be achieved by adding the functools.wraps decorator to our inner wrapper function.
This is a convenience function for invoking update_wrapper() as a function decorator when defining a wrapper function. […] Without the use of this decorator factory, the name of the example function would have been ‘wrapper’, and the docstring of the original example() would have been lost.
— functools — Higher-order functions and operations on callable objects
For example:
# wraps decorator for preserving function metadata
from functools import wraps

# define the custom decorator
def custom_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def inner_wrapper(*args, **kwargs):
        # call the target function
        return func(*args, **kwargs)
    # return the inner function
    return inner_wrapper
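For example, we can confirm that the metadata of the decorated function is preserved. The sketch below uses a hypothetical do_work() function for illustration; with @wraps in place, the reported name is 'do_work', whereas without it, the name would be 'inner_wrapper':

from functools import wraps

# define the custom decorator
def custom_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def inner_wrapper(*args, **kwargs):
        # call the target function
        return func(*args, **kwargs)
    # return the inner function
    return inner_wrapper

# hypothetical function used for illustration
@custom_decorator
def do_work():
    """Do some work."""

# reports 'do_work' thanks to @wraps, 'inner_wrapper' without it
print(do_work.__name__)
# reports the original docstring, which would otherwise be lost
print(do_work.__doc__)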
And that’s about it for decorators for now.
Next, let’s look at how we can define custom decorators to automatically benchmark our target functions.
One-Off Benchmark Function Decorator
We can develop a function decorator to benchmark a Python function automatically.
The decorator will be called benchmark_decorator() and takes the function to be decorated, i.e. the target function to be wrapped and called, as an argument.
# define the benchmark decorator
def benchmark_decorator(func):
    # ...
Next, we can define the inner wrapper function.
It must accept any arguments intended for the target function, just in case. It must then record the start time before calling the function, and the end time after calling the function.
It then calculates the duration and reports it before returning any return value from the target function itself.
For example:
# inner function that wraps the target function
@wraps(func)
def wrapper(*args, **kwargs):
    # record start time
    time_start = perf_counter()
    # call the custom function
    result = func(*args, **kwargs)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # report the duration
    print(f'Took {time_duration:.3f} seconds')
    # pass on the return value
    return result
Tying this together, the complete function decorator for benchmarking target functions is listed below.
# define the benchmark decorator
def benchmark_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        # record start time
        time_start = perf_counter()
        # call the custom function
        result = func(*args, **kwargs)
        # record end time
        time_end = perf_counter()
        # calculate the duration
        time_duration = time_end - time_start
        # report the duration
        print(f'Took {time_duration:.3f} seconds')
        # pass on the return value
        return result
    # return the inner function
    return wrapper
It requires two import statements, for the perf_counter() and wraps() functions, and these could be moved into the decorator function itself if needed.
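For example, a minimal sketch of the same decorator with the imports moved inside the decorator function itself (a stylistic choice, not a requirement):

# define the benchmark decorator with local imports
def benchmark_decorator(func):
    # imports local to the decorator, if module-level imports are undesirable
    from time import perf_counter
    from functools import wraps
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        # benchmark the target function call
        time_start = perf_counter()
        result = func(*args, **kwargs)
        time_end = perf_counter()
        # report the duration and pass on the return value
        print(f'Took {time_end - time_start:.3f} seconds')
        return result
    # return the inner function
    return wrapper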
Finally, to use the decorator, we add “@benchmark_decorator” above the target function.
For example:
@benchmark_decorator
def custom_function():
    # ...
This is not a new idea. Many people have proposed and developed benchmark function decorators.
A notable example is in the Python Cookbook in the recipe: “14.13. Profiling and Timing Your Program“.
Next, let’s look at how we might update the decorator to report the average execution time.
Repeated Benchmark Function Decorator
It can be a good idea to repeat a benchmark many times and report the average duration.
This is because each benchmark score may differ slightly due to the Python interpreter loading modules for the first time, the Python garbage collector running, and other programs running in the background on the system at the same time.
We can update the benchmark function decorator from the last section to calculate the average execution time instead of a one-off execution time.
This will require executing the target function multiple times, e.g. 3 times, which could have side effects if the function changes the program state.
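For example, consider a hypothetical function that appends to a shared list; repeating it three times leaves three times as many items in the list as a single call would, which may not be what the program expects:

# hypothetical example of a side effect when repeating a function
data = []

# function that changes program state each time it is called
def append_items():
    for i in range(1000000):
        data.append(i)

# repeat the call three times, as a repeated benchmark would
for _ in range(3):
    append_items()

# the program state reflects all three calls, not one
print(len(data))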
We can update the internal wrapper() function to loop three times, where each iteration benchmarks the target function.
The scores can be collected in a list.
...
results = list()
# repeat the benchmark many times
n_repeats = 3
for i in range(n_repeats):
    # record start time
    time_start = perf_counter()
    # call the custom function
    result = func(*args, **kwargs)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # store the result
    results.append(time_duration)
    # report progress
    print(f'>run {i+1} took {time_duration:.3f} seconds')
After all executions of the target function, we can calculate the average execution time and report the result, before returning the final return value from the target function.
...
# calculate average duration
avg_duration = sum(results) / n_repeats
# report the average duration
print(f'Took {avg_duration:.3f} seconds on average')
# pass on the return value
return result
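As an aside, the standard library statistics module provides ready-made aggregates, and the median can be more robust to a single slow outlier run than the mean. A minimal sketch, using hypothetical durations:

from statistics import mean, median

# hypothetical per-run durations in seconds
results = [6.191, 6.077, 6.138]
# the mean matches the manual sum-divided-by-count calculation
print(f'Took {mean(results):.3f} seconds on average')
# the median is less sensitive to a single slow run
print(f'Took {median(results):.3f} seconds (median)')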
Tying this together, the complete updated benchmark decorator is listed below.
# define the average benchmark decorator
def average_benchmark_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        results = list()
        # repeat the benchmark many times
        n_repeats = 3
        for i in range(n_repeats):
            # record start time
            time_start = perf_counter()
            # call the custom function
            result = func(*args, **kwargs)
            # record end time
            time_end = perf_counter()
            # calculate the duration
            time_duration = time_end - time_start
            # store the result
            results.append(time_duration)
            # report progress
            print(f'>run {i+1} took {time_duration:.3f} seconds')
        # calculate average duration
        avg_duration = sum(results) / n_repeats
        # report the average duration
        print(f'Took {avg_duration:.3f} seconds on average')
        # pass on the return value
        return result
    # return the inner function
    return wrapper
As above, this would be used by adding the @average_benchmark_decorator above the target function.
For example:
@average_benchmark_decorator
def custom_function():
    # ...
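If a configurable number of repeats is needed, the decorator can be generalized into a decorator factory that takes the repeat count as an argument. This is a sketch of an extension, using a hypothetical benchmark_repeats() name, not part of the worked examples below:

from time import perf_counter
from functools import wraps

# hypothetical decorator factory with a configurable repeat count
def benchmark_repeats(n_repeats=3):
    # the actual decorator, closing over n_repeats
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            results = list()
            for i in range(n_repeats):
                # benchmark one call of the target function
                time_start = perf_counter()
                result = func(*args, **kwargs)
                time_duration = perf_counter() - time_start
                results.append(time_duration)
                print(f'>run {i+1} took {time_duration:.3f} seconds')
            # report the average duration and pass on the return value
            avg_duration = sum(results) / n_repeats
            print(f'Took {avg_duration:.3f} seconds on average')
            return result
        return wrapper
    return decorator

# usage, note the parentheses on the decorator factory
@benchmark_repeats(5)
def custom_function():
    # create a modest list, for illustration
    data = [i*i for i in range(1000000)]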
Now that we know how to develop a benchmark function decorator, let’s look at some worked examples.
Example of Benchmark Function Decorator
We can explore how to use our one-off benchmark function decorator to benchmark the execution time of a custom function.
In this example, we will define a custom function that takes a moment to complete.
The function creates a list of 100 million squared integers in a list comprehension.
For example:
# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]
We can then add our @benchmark_decorator decoration to our task() function.
# function to benchmark, with benchmark decorator
@benchmark_decorator
def task():
    # create a large list
    data = [i*i for i in range(100000000)]
Then, all we need to do is call the task() function from the entry point of the program and it will be benchmarked automatically.
Tying this together, the complete example of using our function decorator function to estimate the duration of our task() target function is listed below.
# SuperFastPython.com
# example of benchmarking execution time using a decorator
from time import perf_counter
from functools import wraps

# define the benchmark decorator
def benchmark_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        # record start time
        time_start = perf_counter()
        # call the custom function
        result = func(*args, **kwargs)
        # record end time
        time_end = perf_counter()
        # calculate the duration
        time_duration = time_end - time_start
        # report the duration
        print(f'Took {time_duration:.3f} seconds')
        # pass on the return value
        return result
    # return the inner function
    return wrapper

# function to benchmark, with benchmark decorator
@benchmark_decorator
def task():
    # create a large list
    data = [i*i for i in range(100000000)]

# protect the entry point
if __name__ == '__main__':
    # call the custom function
    task()
Running the program calls the task() function.
The benchmark decorator runs and records the start time.
It then calls the target task() function with any arguments provided to the wrapper function, in this case no arguments, and retrieves the return value.
The task function runs normally and returns.
The benchmark function records the end time and then calculates the duration.
The duration is then reported to the standard output.
In this case, we can see that the task() function took about 6.441 seconds to complete.
This highlights how we can benchmark arbitrary Python functions using our benchmark function decorator.
Took 6.441 seconds
Next, let’s look at how we might report the average duration using our repeated benchmark decorator.
Example of Repeated Benchmark Function Decorator
We can explore how to repeatedly benchmark our target function using a decorator.
In this example, we will use our average_benchmark_decorator() decorator developed above, which performs three repeated benchmark runs to estimate the average run time of our task() function.
# define the average benchmark decorator
def average_benchmark_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        results = list()
        # repeat the benchmark many times
        n_repeats = 3
        for i in range(n_repeats):
            # record start time
            time_start = perf_counter()
            # call the custom function
            result = func(*args, **kwargs)
            # record end time
            time_end = perf_counter()
            # calculate the duration
            time_duration = time_end - time_start
            # store the result
            results.append(time_duration)
            # report progress
            print(f'>run {i+1} took {time_duration:.3f} seconds')
        # calculate average duration
        avg_duration = sum(results) / n_repeats
        # report the average duration
        print(f'Took {avg_duration:.3f} seconds on average')
        # pass on the return value
        return result
    # return the inner function
    return wrapper
We can then change the decorator on our target function to @average_benchmark_decorator.
# function to benchmark, with benchmark decorator
@average_benchmark_decorator
def task():
    # create a large list
    data = [i*i for i in range(100000000)]
And that’s it.
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of benchmarking average execution time using a decorator
from time import perf_counter
from functools import wraps

# define the average benchmark decorator
def average_benchmark_decorator(func):
    # inner function that wraps the target function
    @wraps(func)
    def wrapper(*args, **kwargs):
        results = list()
        # repeat the benchmark many times
        n_repeats = 3
        for i in range(n_repeats):
            # record start time
            time_start = perf_counter()
            # call the custom function
            result = func(*args, **kwargs)
            # record end time
            time_end = perf_counter()
            # calculate the duration
            time_duration = time_end - time_start
            # store the result
            results.append(time_duration)
            # report progress
            print(f'>run {i+1} took {time_duration:.3f} seconds')
        # calculate average duration
        avg_duration = sum(results) / n_repeats
        # report the average duration
        print(f'Took {avg_duration:.3f} seconds on average')
        # pass on the return value
        return result
    # return the inner function
    return wrapper

# function to benchmark, with benchmark decorator
@average_benchmark_decorator
def task():
    # create a large list
    data = [i*i for i in range(100000000)]

# protect the entry point
if __name__ == '__main__':
    # call the custom function
    task()
Running the program calls our custom task() function.
The benchmark decorator runs the main benchmark loop.
In each iteration, the loop records the start time. It then calls the target function with any arguments provided to the wrapper function, in this case no arguments.
The task function runs normally and returns. The benchmark decorator records the end time and then calculates the duration and reports it as a progress indicator.
This is repeated three times.
Finally, the average of all runs is calculated and then reported.
In this case, we can see that the task() function took about 6.136 seconds to complete on average.
This highlights how we can automatically calculate and report the average execution time for a target function using a custom benchmark decorator.
>run 1 took 6.191 seconds
>run 2 took 6.077 seconds
>run 3 took 6.138 seconds
Took 6.136 seconds on average
Further Reading
This section provides additional resources that you may find helpful.
Books
- Python Benchmarking, Jason Brownlee (my book!)
Also, the following Python books have chapters on benchmarking that may be helpful:
- Python Cookbook, 2013. (sections 9.1, 9.10, 9.22, 13.13, and 14.13)
- High Performance Python, 2020. (chapter 2)
Guides
- 4 Ways to Benchmark Python Code
- 5 Ways to Measure Execution Time in Python
- Python Benchmark Comparison Metrics
Benchmarking APIs
- time — Time access and conversions
- timeit — Measure execution time of small code snippets
- The Python Profilers
Takeaways
You now know how to automatically benchmark target functions using a benchmark decorator in Python.
Did I make a mistake? See a typo?
I’m a simple humble human. Correct me, please!
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Anna Tkocz on Unsplash