Last Updated on October 29, 2022
You can call a function for each item in an iterable asynchronously, using the pool's worker threads, via the ThreadPool map_async() method.
In this tutorial you will discover how to use the map_async() method for the ThreadPool in Python.
Let’s get started.
Need an Asynchronous Version of map()
The multiprocessing.pool.ThreadPool in Python provides a pool of reusable threads for executing ad hoc tasks.
A thread pool object which controls a pool of worker threads to which jobs can be submitted.
— multiprocessing — Process-based parallelism
The ThreadPool class extends the Pool class. The Pool class provides a pool of worker processes for process-based concurrency.
Although the ThreadPool class is in the multiprocessing module it offers thread-based concurrency and is best suited to IO-bound tasks, such as reading or writing from sockets or files.
A ThreadPool can be configured when it is created, which will prepare the new threads.
We can issue one-off tasks to the ThreadPool using methods such as apply() or we can apply the same function to an iterable of items using functions such as map().
Results for issued tasks can then be retrieved synchronously, or we can retrieve the result of tasks later by using asynchronous versions of the functions such as apply_async() and map_async().
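As a quick orientation before we focus on map_async(), here is a minimal sketch of this workflow, using a trivial task() function defined purely for illustration:

# minimal sketch of issuing tasks to a ThreadPool (task() is illustrative only)
from multiprocessing.pool import ThreadPool

# trivial task function used for illustration
def task(value):
    return value * 2

# protect the entry point
if __name__ == '__main__':
    # create the thread pool
    with ThreadPool() as pool:
        # issue a one-off task and block for the result
        single = pool.apply(task, args=(1,))
        # apply the function to each item and block for all results
        values = pool.map(task, range(5))
        # asynchronous version: returns an AsyncResult immediately
        result = pool.map_async(task, range(5))
        # block for the results only when they are needed
        values_later = result.get()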
The built-in map() function allows you to apply a function to each item in an iterable.
The Python ThreadPool provides an asynchronous and multithreaded version via the map_async() method.
How can we use the map_async() method in the ThreadPool?
How to Use ThreadPool map_async()
The ThreadPool provides a multithreaded and asynchronous version of the map() function via the map_async() method.
Recall that the built-in map() function will apply a given function to each item in a given iterable.
Return an iterator that applies function to every item of iterable, yielding the results.
— Built-in Functions
It yields one result per call to the given target function, each call made with one item from the given iterable. It is common to call map() and iterate over the results in a for-loop.
For example:
...
# iterate results from map
for result in map(task, items):
	# ...
The ThreadPool provides a version of the map() function where the target function is called for each item in the provided iterable by the worker threads, and the call returns immediately.
The map_async() method does not block while the function is applied to each item in the iterable; instead, it returns an AsyncResult object from which the results may be accessed.
A variant of the map() method which returns an AsyncResult object.
— multiprocessing — Process-based parallelism
For example:
...
# call the function for each item in the iterable
result = map_async(task, items)
Each item in the iterable is taken as a separate task in the ThreadPool.
Like the built-in map() function, the returned iterator of results will be in the order of the provided iterable. This means that tasks are issued (and perhaps executed) in the same order as the results are returned.
Unlike the built-in map() function, the map_async() method only takes one iterable as an argument. This means that the target function executed in the worker threads can only take a single argument.
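If the target function needs more than one argument, a common workaround is to freeze the extra arguments with functools.partial (or to pass tuples and unpack them inside a wrapper function). The following is a minimal sketch under that assumption, using a hypothetical two-argument task() function:

from functools import partial
from multiprocessing.pool import ThreadPool

# hypothetical target function that needs two arguments
def task(identifier, scale):
    return identifier * scale

# protect the entry point
if __name__ == '__main__':
    with ThreadPool() as pool:
        # freeze the second argument so each item supplies only the first
        result = pool.map_async(partial(task, scale=10), range(5))
        # report the return values
        print(result.get())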
The iterable of items that is passed is traversed immediately in order to issue all tasks to the ThreadPool. Therefore, if the iterable is very long, it may result in many tasks waiting in memory to execute, rather than only one per worker thread.
It is possible to split up the items in the iterable evenly among the worker threads.
For example, if we had a ThreadPool with 4 worker threads and an iterable with 40 items, we could split the items into 4 chunks of 10 items, with one chunk allocated to each worker thread.
The effect is less overhead in transmitting tasks to worker threads and collecting results.
This can be achieved via the “chunksize” argument to map_async().
This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer.
— multiprocessing — Process-based parallelism
For example:
...
# apply the function with a chunksize
result = map_async(task, items, chunksize=10)
Because map_async() does not block, it allows the caller to continue and retrieve the result when needed.
The result is an iterable of return values which can be accessed via the get() method on the AsyncResult object.
For example:
...
# iterate over return values
for value in result.get():
	# ...
A callback function can be called automatically once all issued tasks have completed successfully, e.g. with no error or exception raised.
The callback function must take one argument: the iterable of return values from the calls to the target function.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it, that is unless the call failed …
— multiprocessing — Process-based parallelism
The function is specified via the “callback” argument to the map_async() function.
For example:
# callback function
def custom_callback(result):
	print(f'Got result: {result}')

...
# issue tasks asynchronously to the thread pool with a callback
result = pool.map_async(task, items, callback=custom_callback)
Similarly, an error callback function can be specified via the “error_callback” argument that is called only when an unexpected error or exception is raised.
If error_callback is specified then it should be a callable which accepts a single argument. If the target function fails, then the error_callback is called with the exception instance.
— multiprocessing — Process-based parallelism
The error callback function must take one argument, which is the instance of the error or exception that was raised.
For example:
# error callback function
def custom_error_callback(error):
	print(f'Got error: {error}')

...
# issue tasks asynchronously to the thread pool with an error callback
result = pool.map_async(task, items, error_callback=custom_error_callback)
Difference Between map_async() and map()
How does the map_async() method compare to the map() method for issuing tasks to the ThreadPool?
Both the map_async() and map() methods may be used to issue tasks that call a function for all items in an iterable via the ThreadPool.
The following summarizes the key differences between these two methods:
- The map_async() method does not block, whereas the map() method does block.
- The map_async() method returns an AsyncResult, whereas the map() method returns an iterable of return values from the target function.
- The map_async() method can execute callback functions on return values and errors, whereas the map() method does not support callback functions.
The map_async() method should be used for issuing target task functions to the ThreadPool where the caller cannot or must not block while the task is executing.
The map() method should be used for issuing target task functions to the ThreadPool where the caller can or must block until all function calls are complete.
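To make the contrast concrete, the following minimal sketch (using a hypothetical task() function defined only for illustration) shows both calls side by side:

from time import sleep
from multiprocessing.pool import ThreadPool

# hypothetical task function used for illustration
def task(value):
    sleep(0.1)
    return value * value

# protect the entry point
if __name__ == '__main__':
    with ThreadPool() as pool:
        # map() blocks here until every call has completed
        values = pool.map(task, range(5))
        # map_async() returns an AsyncResult immediately, the caller can continue on
        result = pool.map_async(task, range(5))
        # the caller only blocks when the return values are actually needed
        values_later = result.get()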
Now that we know how to use the map_async() method to execute tasks in the ThreadPool, let’s look at some worked examples.
Example of map_async()
We can explore how to use the parallel and asynchronous version of map_async() on the ThreadPool.
In this example, we can define a target task function that takes an integer as an argument, generates a random number, reports the value, then returns the value that was generated. We can then call this function for each integer between 0 and 9 using the ThreadPool map_async() method.
This will call the function for each integer concurrently, using as many worker threads as there are logical CPUs in the system. We will not block waiting for the results; instead, the calls will be issued asynchronously.
Firstly, we can define the target task function.
The function takes an argument, generates a random number between 0 and 1, reports the integer and generated number. It then blocks for a fraction of a second to simulate computational effort, then returns the number that was generated.
The task() function below implements this.
# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value
We can then create and configure a ThreadPool.
We will use the context manager interface to ensure the pool is shutdown automatically once we are finished with it.
...
# create and configure the thread pool
with ThreadPool() as pool:
	# ...
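By default, the pool will create one worker thread for each logical CPU in your system. If you want a specific number of worker threads instead, it can be passed as the first argument when constructing the pool; a minimal sketch:

...
# create a thread pool with an explicit number of worker threads
with ThreadPool(4) as pool:
	# ...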
We can then call the map_async() method on the thread pool to apply our task() function to each value in the range between 0 and 9.
This will return immediately with an AsyncResult object.
...
# issues tasks to thread pool
result = pool.map_async(task, range(10))
We will then iterate over the iterable of results accessible via the get() method on the AsyncResult object.
This can be achieved via a for-loop.
...
# iterate results
for result in result.get():
	print(f'Got result: {result}')
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		result = pool.map_async(task, range(10))
		# iterate results
		for result in result.get():
			print(f'Got result: {result}')
	# thread pool is closed automatically
Running the example first creates the ThreadPool with a default configuration.
It will have one worker thread for each logical CPU in your system.
The map_async() method is then called for the range.
This issues ten calls to the task() function, one for each integer between 0 and 9.
An AsyncResult object is returned immediately, and the main thread is free to continue on.
Each call to the task function generates a random number between 0 and 1, reports a message, blocks, then returns a value.
The main thread then accesses the iterable of return values. It iterates over the values returned from the calls to the task() function and reports the generated values, matching those generated in each worker thread.
Importantly, all task() function calls are issued and executed before the iterable of results is made available. The caller cannot iterate over results as they are completed.
Note, results will differ each time the program is run given the use of random numbers.
Task 0 executing with 0.9007258080099242
Task 1 executing with 0.23625842780722572
Task 2 executing with 0.8933159406484387
Task 3 executing with 0.2993439084649945
Task 4 executing with 0.6944466923348013
Task 5 executing with 0.11458459944166266
Task 6 executing with 0.30284660043813394
Task 7 executing with 0.42590551466960025
Task 8 executing with 0.06368046502125513
Task 9 executing with 0.29431122693187717
Got result: 0.9007258080099242
Got result: 0.23625842780722572
Got result: 0.8933159406484387
Got result: 0.2993439084649945
Got result: 0.6944466923348013
Got result: 0.11458459944166266
Got result: 0.30284660043813394
Got result: 0.42590551466960025
Got result: 0.06368046502125513
Got result: 0.29431122693187717
Next, let’s look at an example where we might call map_async() for a function with no return value.
Example of map_async() with No Return Value
We can explore using the map_async() method to call a function for each item in an iterable that does not have a return value.
This means that we are not interested in the iterable of results returned by the call to map_async() and instead are only interested that all issued tasks get executed.
This can be achieved by updating the previous example so that the task() function does not return a value.
The updated task() function with this change is listed below.
# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
Then, in the main thread, we can call map_async() with our task() function and the range, as before.
...
# issues tasks to thread pool
result = pool.map_async(task, range(10))
Importantly, because the call to map_async() does not block, we need to wait for the issued tasks to complete.
If we do not wait for the tasks to complete, we will exit the context manager for the ThreadPool which will terminate the worker threads and stop the tasks.
This can be achieved by calling the wait() method on the AsyncResult object.
...
# wait for tasks to complete
result.wait()
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool and no return values
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		result = pool.map_async(task, range(10))
		# wait for tasks to complete
		result.wait()
	# thread pool is closed automatically
Running the example first creates the ThreadPool with a default configuration.
The map_async() method is then called for the range. This issues ten calls to the task() function, one for each integer between 0 and 9.
An AsyncResult object is returned immediately, and the main thread is free to continue on.
Each call to the task function generates a random number between 0 and 1, reports a message, then blocks.
The main thread waits on the AsyncResult object, blocking until all calls to the task() function are complete.
The tasks finish, the wait() method returns, and the main thread is free to carry on.
Note, results will differ each time the program is run given the use of random numbers.
Task 0 executing with 0.4848649184278908
Task 1 executing with 0.46745471498311963
Task 2 executing with 0.7630853480809312
Task 3 executing with 0.17371437659959565
Task 4 executing with 0.7449430702056101
Task 5 executing with 0.14195064424587067
Task 6 executing with 0.7580130554593287
Task 7 executing with 0.8895579739217265
Task 8 executing with 0.01959577173638638
Task 9 executing with 0.20506453143484382
Next, let’s look at issuing many tasks over multiple calls, then wait for all tasks in the ThreadPool to complete.
Example of map_async() And Wait For All Tasks To Complete
We can explore how to issue many tasks to the ThreadPool via map_async(), and wait for all issued tasks to complete.
This could be achieved by calling map_async() multiple times, and calling the wait() method on the AsyncResult object after each call.
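A minimal sketch of this first approach, assuming the same task() function and pool as before, might look as follows:

...
# issue the first batch of tasks and keep the AsyncResult
result1 = pool.map_async(task, range(10))
# issue the second batch of tasks and keep the AsyncResult
result2 = pool.map_async(task, range(11, 20))
# wait on each AsyncResult in turn
result1.wait()
result2.wait()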
An alternate approach is to call map_async() multiple times, then wait on the ThreadPool itself for all issued tasks to complete.
In this example, we can update the previous example to call map_async() twice and ignore the AsyncResult objects that are returned.
...
# issues tasks to thread pool
_ = pool.map_async(task, range(10))
# issues tasks to thread pool
_ = pool.map_async(task, range(11, 20))
We can then explicitly close the ThreadPool to prevent additional tasks being submitted to the pool, then call the join() method to wait for all issued tasks to complete.
...
# close the thread pool
pool.close()
# wait for all tasks to complete and threads to close
pool.join()
You can learn more about joining the ThreadPool in the tutorial:
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool and wait for tasks to complete
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		_ = pool.map_async(task, range(10))
		# issues tasks to thread pool
		_ = pool.map_async(task, range(11, 20))
		# close the thread pool
		pool.close()
		# wait for all tasks to complete and threads to close
		pool.join()
	# thread pool is closed automatically
Running the example first creates the ThreadPool with a default configuration.
The map_async() method is then called for the range. This issues ten calls to the task() function, one for each integer between 0 and 9.
The map_async() method is then called again for a different range, issuing nine more calls to the task() function, one for each integer between 11 and 19.
Both calls return immediately, and the AsyncResult objects returned are ignored.
Each call to the task function generates a random number between 0 and 1, reports a message, then blocks.
The main thread then closes the ThreadPool, preventing any additional tasks from being issued. It then joins the ThreadPool, blocking until all issued tasks are completed.
The tasks finish, the join() method returns, and the main thread is free to carry on.
Note, results will differ each time the program is run given the use of random numbers.
Task 0 executing with 0.5210505089409927
Task 1 executing with 0.11227675040387985
Task 2 executing with 0.02762619090056706
Task 3 executing with 0.9442528992347273
Task 4 executing with 0.31900270915963513
Task 5 executing with 0.007510009852929156
Task 6 executing with 0.33484069956300677
Task 7 executing with 0.6542168161376911
Task 8 executing with 0.9623628314229364
Task 9 executing with 0.792526302209146
Task 11 executing with 0.5390496918798763
Task 12 executing with 0.7914472026739061
Task 13 executing with 0.5352834827034805
Task 14 executing with 0.570590906997596
Task 15 executing with 0.7548121471125823
Task 16 executing with 0.40681724474214365
Task 17 executing with 0.7108877449579454
Task 18 executing with 0.8170415795140294
Task 19 executing with 0.22130434147056122
Next, let’s look at issuing tasks and handling the return values with a callback function.
Example of map_async() with a Callback Function
We can issue tasks to the ThreadPool that return a value and specify a callback function to handle the returned values.
This can be achieved via the “callback” argument.
In this example, we can update the above example so that the task() function generates a value and returns it. We can then define a function to handle the return value, in this case to simply report the value.
Firstly, we can define the function to call to handle the value returned from the function.
The custom_callback() function below implements this, taking one argument which is the iterable of values returned from the calls to the target task function.
# custom callback function
def custom_callback(result):
	print(f'Callback got values: {result}')
Next, we can update the task function to generate a random value between 0 and 1. It then reports the value, sleeps, then returns the value that was generated.
The updated task() function with these changes is listed below.
# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value
Finally, we can update the call to map_async() to specify the callback function via the “callback” argument, giving the name of our custom function.
...
# issues tasks to thread pool
_ = pool.map_async(task, range(10), callback=custom_callback)
The main thread will then close the ThreadPool and wait for all issued tasks to complete.
...
# close the thread pool
pool.close()
# wait for all tasks to complete and threads to close
pool.join()
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool and a callback function
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# custom callback function
def custom_callback(result):
	print(f'Callback got values: {result}')

# task executed in a worker thread
def task(identifier):
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		_ = pool.map_async(task, range(10), callback=custom_callback)
		# close the thread pool
		pool.close()
		# wait for all tasks to complete and threads to close
		pool.join()
Running the example first creates and configures the ThreadPool.
The map_async() method is then called with the range and a callback function. This issues ten calls to the task() function, one for each integer between 0 and 9.
The main thread then closes the ThreadPool and blocks until all tasks complete and all threads in the ThreadPool close.
Each call to the task function generates a random number between 0 and 1, reports a message, then blocks.
When all task() function calls are finished, the callback function is executed, provided with the iterable of return values.
The iterable is printed directly, showing all return values at once.
Finally, the worker threads are closed and the main thread continues on.
Task 0 executing with 0.9171700614171218
Task 1 executing with 0.05406905784548488
Task 2 executing with 0.3766967615444746
Task 3 executing with 0.5036952277424112
Task 4 executing with 0.6450441772229422
Task 5 executing with 0.1565585219590684
Task 6 executing with 0.3697967187142057
Task 7 executing with 0.43266108037038686
Task 8 executing with 0.3475005642131187
Task 9 executing with 0.7599478268217883
Callback got values: [0.9171700614171218, 0.05406905784548488, 0.3766967615444746, 0.5036952277424112, 0.6450441772229422, 0.1565585219590684, 0.3697967187142057, 0.43266108037038686, 0.3475005642131187, 0.7599478268217883]
Next, let’s look at an example of issuing tasks to the pool with an error callback function.
Example of map_async() with an Error Callback Function
We can issue tasks to the ThreadPool that may raise an unhandled exception and specify an error callback function to handle the exception.
This can be achieved via the “error_callback” argument.
In this example, we can update the above example so that the task() function raises an exception for one specific argument. We can then define a function to handle the raised exception, in this case to simply report the details of the exception.
Firstly, we can define the function to call to handle an exception raised by the function.
The custom_error_callback() below implements this, taking one argument which is the exception raised by the target task function.
# custom error callback function
def custom_error_callback(error):
	print(f'Got an error: {error}')
Next, we can update the task() function so that it raises an exception conditionally for one argument.
# task executed in a worker thread
def task(identifier):
	# conditionally raise an error
	if identifier == 5:
		raise Exception('Something bad happened')
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value
Finally, we can update the call to map_async() to specify the error callback function via the “error_callback” argument, giving the name of our custom function.
...
# issues tasks to thread pool
_ = pool.map_async(task, range(10), error_callback=custom_error_callback)
The main thread then closes the ThreadPool and waits for all tasks to complete.
...
# close the thread pool
pool.close()
# wait for all tasks to complete and threads to close
pool.join()
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool and an error callback function
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# custom error callback function
def custom_error_callback(error):
	print(f'Got an error: {error}')

# task executed in a worker thread
def task(identifier):
	# conditionally raise an error
	if identifier == 5:
		raise Exception('Something bad happened')
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		_ = pool.map_async(task, range(10), error_callback=custom_error_callback)
		# close the thread pool
		pool.close()
		# wait for all tasks to complete and threads to close
		pool.join()
Running the example first creates and configures the ThreadPool.
The map_async() method is then called for the range and an error callback function. This issues ten calls to the task() function, one for each integer between 0 and 9.
An AsyncResult object is returned immediately and is ignored, and the main thread is free to continue on.
The main thread then closes the ThreadPool and blocks waiting for all issued tasks to complete.
Each call to the task function generates a random number between 0 and 1, reports a message, then blocks. The task that receives the value of 5 as an argument raises an exception.
All tasks finish executing and the error callback function is called with the raised exception, reporting a message.
All worker threads are closed and the main thread is free to continue on.
Note, results will differ each time the program is run given the use of random numbers.
Task 0 executing with 0.708290204685365
Task 1 executing with 0.21975717689796537
Task 2 executing with 0.19760187109440475
Task 3 executing with 0.562754074470823
Task 4 executing with 0.006898751807449699
Task 6 executing with 0.024819517180200146
Task 7 executing with 0.026457676674423558
Task 8 executing with 0.6036225533545265
Task 9 executing with 0.21084029493566703
Got an error: Something bad happened
Next, let’s look at an example of issuing tasks to the pool and accessing the results, where one task may raise an exception.
Example of map_async() with Exception
It is possible for the target function to raise an exception.
If this occurs in just one call to the target function, it will prevent all return values from being accessed.
We can demonstrate this with a worked example.
Firstly, we can update the task() function to conditionally raise an exception, in this case if the argument to the function is equal to five.
...
# conditionally raise an error
if identifier == 5:
	raise Exception('Something bad happened')
The updated task() function with this change is listed below.
# task executed in a worker thread
def task(identifier):
	# conditionally raise an error
	if identifier == 5:
		raise Exception('Something bad happened')
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value
We can then call map_async() to call the task() function for each value in the range from 0 to 9.
...
# issues tasks to thread pool
result = pool.map_async(task, range(10))
We can then get the iterable of return values.
...
values = result.get()
This may raise an exception if one of the calls failed, so we must protect the call with a try-except pattern.
...
# get the return values
try:
	values = result.get()
except Exception as e:
	print(f'Failed with: {e}')
Tying this together, the complete example is listed below.
# SuperFastPython.com
# example of parallel map_async() with the thread pool and handle an exception
from random import random
from time import sleep
from multiprocessing.pool import ThreadPool

# task executed in a worker thread
def task(identifier):
	# conditionally raise an error
	if identifier == 5:
		raise Exception('Something bad happened')
	# generate a value
	value = random()
	# report a message
	print(f'Task {identifier} executing with {value}')
	# block for a moment
	sleep(value)
	# return the generated value
	return value

# protect the entry point
if __name__ == '__main__':
	# create and configure the thread pool
	with ThreadPool() as pool:
		# issues tasks to thread pool
		result = pool.map_async(task, range(10))
		# get the return values
		try:
			values = result.get()
		except Exception as e:
			print(f'Failed with: {e}')
Running the example first creates and configures the ThreadPool.
The map_async() method is then called for the range. This issues ten calls to the task() function, one for each integer between 0 and 9.
An AsyncResult object is returned immediately, and the main thread is free to continue on.
The main thread then blocks, attempting to get the iterable of return values from the AsyncResult object.
Each call to the task function generates a random number between 0 and 1, reports a message, then blocks. The task that receives the value of 5 as an argument raises an exception.
The exception is then re-raised in the main thread, handled and a message is reported.
The iterable of return values cannot be accessed and no return values are reported.
The ThreadPool is then closed automatically by the context manager.
This highlights that a failure in one task can prevent the results from all tasks from being accessed.
Note, results will differ each time the program is run given the use of random numbers.
Task 0 executing with 0.6542294234233761
Task 1 executing with 0.22626163036725988
Task 2 executing with 0.3173660593718851
Task 3 executing with 0.2896568834985973
Task 4 executing with 0.7817242151925866
Task 6 executing with 0.6127492036476327
Task 8 executing with 0.20802673656877713
Task 9 executing with 0.6491045529860338
Failed with: Something bad happened
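If you need the return values from the successful calls even when one call fails, one option (not shown above, and only a sketch) is to catch exceptions inside the target function and return them as values, then filter them out of the results afterwards:

# sketch: wrap the task so a failure is returned as a value instead of raised
def safe_task(identifier):
	try:
		return task(identifier)
	except Exception as e:
		# return the exception so the other results are not lost
		return e

...
# issue tasks using the wrapper function
result = pool.map_async(safe_task, range(10))
# separate the successful values from the failures
values = [v for v in result.get() if not isinstance(v, Exception)]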
Further Reading
This section provides additional resources that you may find helpful.
Books
- Python ThreadPool Jump-Start, Jason Brownlee (my book!)
- Threading API Interview Questions
- ThreadPool PDF Cheat Sheet
I also recommend specific chapters from the following books:
- Python Cookbook, David Beazley and Brian Jones, 2013.
- See: Chapter 12: Concurrency
- Effective Python, Brett Slatkin, 2019.
- See: Chapter 7: Concurrency and Parallelism
- Python in a Nutshell, Alex Martelli, et al., 2017.
- See: Chapter 14: Threads and Processes
Guides
- Python ThreadPool: The Complete Guide
- Python Multiprocessing Pool: The Complete Guide
- Python ThreadPoolExecutor: The Complete Guide
- Python Threading: The Complete Guide
Takeaways
You now know how to use the map_async() method for the ThreadPool in Python.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Photo by Yoshiki 787 on Unsplash