Last Updated on October 3, 2023
You can develop a helper function to benchmark target functions in Python.
In this tutorial, you will discover how to develop and use a benchmark helper function to benchmark functions in Python.
Let’s get started.
Benchmark Python Code With time.perf_counter()
We can benchmark Python code using the time module.
The time.perf_counter() function will return a value from a high-performance counter.
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration.
— time — Time access and conversions
The difference between the two calls to the time.perf_counter() function can provide a high-precision estimate of the execution time of a block of code.
Unlike the time.time() function, the time.perf_counter() function is not subject to adjustments, such as daylight saving changes or synchronizing the system clock with a time server. This makes the time.perf_counter() function a reliable approach to benchmarking Python code.
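Note that the value returned by time.perf_counter() uses an arbitrary reference point, so a single value is not meaningful on its own; only the difference between two calls is. A minimal sketch contrasting the two functions:

```python
from time import perf_counter, time

# wall-clock time in seconds since the epoch;
# may jump if the system clock is adjusted
print(time())

# high-resolution counter with an arbitrary reference point;
# only the difference between two calls is meaningful
print(perf_counter())
```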
We can call the time.perf_counter() function at the beginning of the code we wish to benchmark, and again at the end of the code we wish to benchmark.
For example:
```python
...
# record start time
time_start = time.perf_counter()
# call benchmark code
task()
# record end time
time_end = time.perf_counter()
```
The difference between the start and end times is the duration of the benchmarked code in seconds.
For example:
```python
...
# calculate the duration
time_duration = time_end - time_start
# report the duration
print(f'Took {time_duration:.3f} seconds')
```
You can learn more about benchmarking Python code with the time.perf_counter() function in a dedicated tutorial.
How can we hide all of this code so that we can benchmark with a simple interface?
Can we develop a custom function that will benchmark our code automatically?
How to Develop a Benchmark Function
We can develop a helper function to automatically benchmark our Python code.
Let’s explore two versions of a benchmark helper function:
- One-off benchmark helper function.
- Repeated benchmark helper function.
One-Off Benchmark Helper Function
Our function can take the target function that we wish to benchmark as an argument.
For example:
```python
# benchmark function
def benchmark(fun, *args):
    # ...
```
The *args captures an optional sequence of positional arguments. It allows us to pass zero, one, or many arguments for the target function to the benchmark function, which can forward them to the target function directly, for example:
```python
...
# call the custom function
fun(*args)
```
Our function can then record the start time, call the target function, record the end time, and report the overall duration.
For example:
```python
...
# record start time
time_start = perf_counter()
# call the custom function
fun(*args)
# record end time
time_end = perf_counter()
# calculate the duration
time_duration = time_end - time_start
# report the duration
print(f'Took {time_duration:.3f} seconds')
```
Tying this together, a helper function for benchmarking arbitrary Python functions is listed below.
```python
# benchmark function
def benchmark(fun, *args):
    # record start time
    time_start = perf_counter()
    # call the custom function
    fun(*args)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # report the duration
    print(f'Took {time_duration:.3f} seconds')
```
It requires that we import the perf_counter() function from the time module, although the import could be performed inside the function itself if we choose.
For example:
```python
# benchmark function
def benchmark(fun, *args):
    # import required function
    from time import perf_counter
    ...
```
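Tying that together, one possible sketch of the complete helper with the import performed inside the function:

```python
# benchmark function with a local import
def benchmark(fun, *args):
    # import required function
    from time import perf_counter
    # record start time
    time_start = perf_counter()
    # call the custom function
    fun(*args)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # report the duration
    print(f'Took {time_duration:.3f} seconds')
```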
Repeated Benchmark Helper Function
It is good practice to repeat a benchmark task a few times and report the average duration.
This is because each run of a benchmark can give slightly different results, perhaps because the Python interpreter needed to load something, or because another program was running in the background on the computer.
We can repeat the benchmark by updating our function to take a number of repeats as an argument.
For example:
```python
# repeated benchmark function
def repeated_benchmark(fun, n_repeats, *args):
    # ...
```
We can then loop the main benchmark operation and store the results of each iteration in a list.
```python
...
results = list()
# repeat the benchmark many times
for i in range(n_repeats):
    # record start time
    time_start = perf_counter()
    # call the custom function
    fun(*args)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # store the result
    results.append(time_duration)
    # report progress
    print(f'>run {i+1} took {time_duration:.3f} seconds')
```
Finally, at the end of the repeats, we can calculate the average duration and report the final result.
```python
...
# calculate average duration
avg_duration = sum(results) / n_repeats
# report the average duration
print(f'Took {avg_duration:.3f} seconds on average')
```
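As an aside, the standard library statistics module can compute the average for us. A minimal sketch, assuming the same results list as above:

```python
from statistics import mean

# calculate the average duration using the standard library
avg_duration = mean(results)
# report the average duration
print(f'Took {avg_duration:.3f} seconds on average')
```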
Tying this together, the complete example of a repeated benchmark helper function is listed below.
```python
# repeated benchmark function
def repeated_benchmark(fun, n_repeats, *args):
    results = list()
    # repeat the benchmark many times
    for i in range(n_repeats):
        # record start time
        time_start = perf_counter()
        # call the custom function
        fun(*args)
        # record end time
        time_end = perf_counter()
        # calculate the duration
        time_duration = time_end - time_start
        # store the result
        results.append(time_duration)
        # report progress
        print(f'>run {i+1} took {time_duration:.3f} seconds')
    # calculate average duration
    avg_duration = sum(results) / n_repeats
    # report the average duration
    print(f'Took {avg_duration:.3f} seconds on average')
```
Now that we know how to develop benchmark helper functions, let’s explore some worked examples.
Example of Benchmarking Using Custom Function
We can explore how to use our one-off benchmark helper function to benchmark the execution time of a custom function.
In this example, we will define a custom function that takes a moment to complete.
The function creates a list of 100 million squared integers using a list comprehension.
For example:
```python
# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]
```
We can then call our benchmark() function and pass it the name of our target function, in this case, “task“.
```python
...
# benchmark the task() function
benchmark(task)
```
And that’s it.
Tying this together, the complete example of using our helper benchmark function to estimate the duration of our task() target function is listed below.
```python
# SuperFastPython.com
# example of benchmarking using a custom function
from time import perf_counter

# benchmark function
def benchmark(fun, *args):
    # record start time
    time_start = perf_counter()
    # call the custom function
    fun(*args)
    # record end time
    time_end = perf_counter()
    # calculate the duration
    time_duration = time_end - time_start
    # report the duration
    print(f'Took {time_duration:.3f} seconds')

# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]

# protect the entry point
if __name__ == '__main__':
    # benchmark the task() function
    benchmark(task)
```
Running the program calls the benchmark() function.
The benchmark function runs and records the start time.
It then calls the target function with any arguments provided to the helper function, in this case, no arguments.
The task function runs normally and returns.
The benchmark function records the end time and then calculates the duration.
The duration is then reported.
In this case, we can see that the task() function took about 6.275 seconds to complete.
This highlights how we can benchmark arbitrary Python functions using our benchmark helper function.
```
Took 6.275 seconds
```
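Note that although task() takes no arguments here, the *args mechanism lets us benchmark parameterized functions too. A minimal sketch, using a hypothetical task_n() function that takes the list size as an argument, and assuming the benchmark() function defined above:

```python
# hypothetical variant of task() that takes the list size as an argument
def task_n(n):
    # create a large list of n squared integers
    data = [i*i for i in range(n)]

# benchmark task_n(), forwarding the argument via *args
benchmark(task_n, 100000000)
```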
Next, let’s explore an example of using our repeated benchmark function.
Example of Repeated Benchmarking Using Custom Function
We can explore how to repeatedly benchmark our target function.
In this example, we will use our repeated_benchmark() function with three repeats to estimate the average run time of our task() function.
This requires calling our repeated_benchmark() function and specifying the name of the function, the number of repeats (e.g. 3), and any arguments to the task function (there are none).
For example:
```python
...
# benchmark the task() function
repeated_benchmark(task, 3)
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of repeated benchmarking using a custom function
from time import perf_counter

# repeated benchmark function
def repeated_benchmark(fun, n_repeats, *args):
    results = list()
    # repeat the benchmark many times
    for i in range(n_repeats):
        # record start time
        time_start = perf_counter()
        # call the custom function
        fun(*args)
        # record end time
        time_end = perf_counter()
        # calculate the duration
        time_duration = time_end - time_start
        # store the result
        results.append(time_duration)
        # report progress
        print(f'>run {i+1} took {time_duration:.3f} seconds')
    # calculate average duration
    avg_duration = sum(results) / n_repeats
    # report the average duration
    print(f'Took {avg_duration:.3f} seconds on average')

# function to benchmark
def task():
    # create a large list
    data = [i*i for i in range(100000000)]

# protect the entry point
if __name__ == '__main__':
    # benchmark the task() function
    repeated_benchmark(task, 3)
```
Running the program calls the repeated_benchmark() function.
The benchmark function runs the main benchmark loop.
Each iteration, the loop records the start time. It then calls the target function with any arguments provided to the helper function, in this case, no arguments. The task function runs normally and returns. The benchmark function records the end time and then calculates the duration and reports it as a progress indicator.
This is repeated three times.
Finally, the average of all runs is calculated and then reported.
In this case, we can see that the task() function took about 6.146 seconds to complete on average.
This highlights how we can repeatedly benchmark arbitrary Python functions using our benchmark helper function.
```
>run 1 took 6.260 seconds
>run 2 took 6.171 seconds
>run 3 took 6.005 seconds
Took 6.146 seconds on average
```
Further Reading
This section provides additional resources that you may find helpful.
Books
- Python Benchmarking, Jason Brownlee (my book!)
Also, the following Python books have chapters on benchmarking that may be helpful:
- Python Cookbook, 2013. (sections 9.1, 9.10, 9.22, 13.13, and 14.13)
- High Performance Python, 2020. (chapter 2)
Guides
- 4 Ways to Benchmark Python Code
- 5 Ways to Measure Execution Time in Python
- Python Benchmark Comparison Metrics
Benchmarking APIs
- time — Time access and conversions
- timeit — Measure execution time of small code snippets
- The Python Profilers
Takeaways
You now know how to develop and use a benchmark helper function to benchmark functions in Python.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.