The ThreadPoolExecutor in Python allows one-off tasks and batches of tasks to be issued for asynchronous execution.
The API provides functions like concurrent.futures.wait() that can block until the first task in a collection of tasks is done, so that we can get the result of the first task to complete.
The problem with this function is that it does not support the case where we issue many tasks over a long period of time, such as streaming tasks into the ThreadPoolExecutor, yet are only interested in the result of the first task to complete.
Instead, we must develop our own solution.
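For the simpler batch case, the standard API works well. Below is a minimal sketch of waiting for the first of a fixed batch of tasks using wait() with FIRST_COMPLETED (the task durations are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
from time import sleep

# task that sleeps for a given delay and returns it
def work(delay):
    sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=3) as ex:
    # issue a fixed batch of tasks up front
    futures = [ex.submit(work, d) for d in (0.3, 0.1, 0.2)]
    # block until the first task in the batch is done
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    # the 0.1-second task finishes first
    first = done.pop().result()
    print(first)  # -> 0.1
```

This only works because all Future objects exist before wait() is called, which is exactly the assumption that breaks for streamed tasks.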
In this tutorial, you will discover how to get the first result from a sequence of tasks streamed into the ThreadPoolExecutor.
Let’s get started.
Need to Get the First Result From a Stream of Tasks
Consider the following situation:
You have a suite of tasks to execute using a ThreadPoolExecutor.
The tasks are issued to the pool one by one and/or in small batches, likely achieved using the submit() method.
How can we wait for and report the result of the first task in the stream to complete?
Similarly, how can we report task results as tasks in the stream are completed?
This was a question that I received via email and that was also shared on StackOverflow.
The API does not currently support this ability; instead, we must develop our own solution.
How to Get Result From First Task in Stream
There are a few ways to solve this problem.
Perhaps the simplest is to use a done callback function to report or notify about task results and use a variable to ensure only the first task result is reported.
Recall, we can add a done callback function to a Future object via the add_done_callback() method.
For example:
```python
...
# add a done callback function
future.add_done_callback(handler)
```
The callback function must take a single argument which is the Future object for the task that is completed. The function is called as soon as the task is done, e.g. completed normally or failed with an exception.
```python
# custom done callback function
def handler(future):
    print(future.result())
```
You can learn more about done callback functions in the tutorial:
My preferred approach would be to use a threading.Event to indicate that a single result has been seen and to no longer report results from other tasks.
The threading.Event object is a thread-safe boolean flag and ensures expected behavior, even in cases where threads execute in parallel, such as under alternative Python interpreters or future versions of CPython.
For example:
```python
# custom done callback function
def handler(future):
    if event.is_set():
        return
    event.set()
    print(future.result())
```
You can learn more about the threading.Event in the tutorial:
There is a race condition here between checking and setting the state of the event. If this is a concern for your use case, you can make the check-and-set atomic with a mutex lock via threading.Lock. I recommend this approach.
For example:
```python
# custom done callback function
def handler(future):
    with lock:
        if event.is_set():
            return
        event.set()
        print(future.result())
```
You can learn more about mutex locks in the tutorial:
Now that we know how to get the first task result from a stream of tasks, let’s look at some worked examples.
Update: If there is a lot of demand for this type of solution (let me know in the comments), I will develop a more robust version that extends the wait() and as_completed() functions.
Example of Streamed Tasks and Report of First to Complete
We can explore the case of reporting the first task result from a stream of tasks issued to the ThreadPoolExecutor.
Firstly, we will define a task to be executed in the thread pool.
The task will have a unique task identifier and will simulate effort with a sleep for a random number of seconds between 5 and 25.
Once complete, it will return its task id and the time slept. This will be the result we seek.
The task() function below implements this.
```python
# task function that will sleep for a random number of seconds
def task(task_id):
    # generate value between 5 and 25
    value = 5 + (random() * 20)
    # block to simulate effort
    sleep(value)
    # return the result of the task
    return (task_id, value)
```
Next, we will define a done callback function.
This function is responsible for alerting the program about the first task to complete, in this case by reporting the result of the first task in the stream to be done.
To ensure thread safety, we will use an event to record whether a first result has been seen, and a mutex lock to ensure that checking and setting the event for the first time is atomic.
The handler() function below implements this.
```python
# callback function to call when a task is completed
def handler(future):
    # declare global state
    global event
    global lock
    # acquire the mutex lock to avoid race condition
    with lock:
        # check if the event is already set
        if event.is_set():
            return
        # set the event
        event.set()
        # report a message
        print(f'First done: {future.result()}')
```
Next, we need a task that will issue a stream of tasks into the ThreadPoolExecutor.
In this case, we will issue 50 examples of the task() function to the pool. Each task that is issued via the submit() function will return a Future object. On this Future object, we will register the callback function that we defined above.
To slow things down a bit, this task will issue one task to the pool every 200 milliseconds, meaning that the stream of all 50 tasks will take 10 seconds to issue.
We will also stop issuing tasks once a result is observed, which we can determine via the shared Event object. This is not required, but it avoids issuing tasks whose results would be ignored.
Tying this together, the task_stream() function below implements this.
```python
# task to inject a stream of tasks into the thread pool
def task_stream(tpe, event):
    for i in range(1, 51):
        # check if we are all done
        if event.is_set():
            return
        # issue a task
        future = tpe.submit(task, i)
        # add the done handler
        future.add_done_callback(handler)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)
```
Finally, we kick all this off with the entry to the program.
First, we will define global variables for the event and mutex lock needed in the done callback.
```python
...
# create event to signal the done state
event = Event()
# create the mutex lock
lock = Lock()
```
We can then create the ThreadPoolExecutor with sufficient capacity to execute many tasks simultaneously. We then issue the task that will spawn a stream of tasks into the thread pool and wait for the event that signals the first result.
```python
...
# create a thread pool
with ThreadPoolExecutor(max_workers=50) as tpe:
    # issue task that streams work into the pool
    future = tpe.submit(task_stream, tpe, event)
    # wait for the event to be set
    event.wait()
    # wait until all tasks are done...
```
And that’s it.
Tying all of this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of stream of tasks and reporting the first result
from time import sleep
from random import random
from concurrent.futures import ThreadPoolExecutor
from threading import Event
from threading import Lock

# task function that will sleep for a random number of seconds
def task(task_id):
    # generate value between 5 and 25
    value = 5 + (random() * 20)
    # block to simulate effort
    sleep(value)
    # return the result of the task
    return (task_id, value)

# callback function to call when a task is completed
def handler(future):
    # declare global state
    global event
    global lock
    # acquire the mutex lock to avoid race condition
    with lock:
        # check if the event is already set
        if event.is_set():
            return
        # set the event
        event.set()
        # report a message
        print(f'First done: {future.result()}')

# task to inject a stream of tasks into the thread pool
def task_stream(tpe, event):
    for i in range(1, 51):
        # check if we are all done
        if event.is_set():
            return
        # issue a task
        future = tpe.submit(task, i)
        # add the done handler
        future.add_done_callback(handler)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)

# protect the entry point
if __name__ == '__main__':
    # create event to signal the done state
    event = Event()
    # create the mutex lock
    lock = Lock()
    # create a thread pool
    with ThreadPoolExecutor(max_workers=50) as tpe:
        # issue task that streams work into the pool
        future = tpe.submit(task_stream, tpe, event)
        # wait for the event to be set
        event.wait()
        # wait until all tasks are done...
```
Running the example first creates the event and mutex lock used in the done callback.
The thread pool is created and the task used to issue a stream of tasks is submitted to the pool. The main thread then blocks on the event until the first result is reported.
The task streaming begins and one by one tasks are issued into the thread pool.
Eventually one task finishes.
Its done callback function is called, setting the event object and reporting the result.
Any subsequent tasks that are completed also call the done callback function but do not set the event or report their result because the event has already been set.
The set event causes the streaming task to complete.
The main thread continues and waits for any tasks still running in the pool to complete before terminating the program.
This highlights how we can get the first result from a stream of tasks issued to the ThreadPoolExecutor.
```
>issued task 1
>issued task 2
>issued task 3
>issued task 4
>issued task 5
>issued task 6
>issued task 7
>issued task 8
>issued task 9
>issued task 10
>issued task 11
>issued task 12
>issued task 13
>issued task 14
>issued task 15
>issued task 16
>issued task 17
>issued task 18
>issued task 19
>issued task 20
>issued task 21
>issued task 22
>issued task 23
>issued task 24
>issued task 25
>issued task 26
>issued task 27
>issued task 28
>issued task 29
>issued task 30
>issued task 31
>issued task 32
>issued task 33
First done: (7, 5.401081129857179)
```
Why Not Use concurrent.futures.wait()?
This problem cannot be solved using the concurrent.futures.wait() function.
Recall that the wait() function takes an iterable of Future objects and returns two sets of Future objects, those that are done and those that are not, once a condition is met, such as the first task completing or all tasks completing.
You can learn more about the concurrent.futures.wait() function in the tutorials:
- How to Wait For The First Task To Finish In The ThreadPoolExecutor
- How to Wait For All Tasks to Finish in the ThreadPoolExecutor
This function cannot be used in this case.
The reason is that the tasks are streamed, e.g. created sequentially, and waiting must begin after the first task has been issued.
The wait() function requires first that all tasks be issued before waiting can begin.
We cannot update a list or set of tasks provided to the wait() function as an argument, as changes to the structure are not monitored by the wait function (I know, I tried).
We can demonstrate this by updating the above example to use the concurrent.futures.wait() function.
This involves removing the done callback function and the event and mutex lock that go along with it. Instead, we define a new task function that takes a shared list and waits on it only after at least one task has been added to it. After the wait() function returns, the result is reported.
```python
# task to wait for the first task to complete and report the result
def wait_for_first_result(task_list):
    # busy wait until the task list is not empty
    while len(task_list) == 0:
        sleep(0.1)
    print('list not empty')
    # wait on the task list
    done, _ = wait(task_list, return_when=FIRST_COMPLETED)
    # get the first task
    future = done.pop()
    # report result of first task
    print(future.result())
```
Note, we are spinning with a busy wait loop here to check the status of the list.
You can learn more about busy wait loops in the tutorial:
I did this for simplicity, but a better solution is to use wait/notify via a condition.
You can learn more about condition variables in the tutorial:
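As a sketch, the busy wait could be replaced with a threading.Condition shared between the streaming task and the waiting task. The add_future() and wait_for_nonempty() helper names below are introduced here for illustration; the streaming task would call add_future() after each submit():

```python
from threading import Condition

# shared state: a condition guarding the list of issued futures
condition = Condition()
task_list = []

# producer side: record a newly issued future and wake any waiter
# (add_future is a hypothetical helper name, not from the example above)
def add_future(future):
    with condition:
        task_list.append(future)
        # notify a thread waiting for the list to become non-empty
        condition.notify()

# consumer side: block until at least one future has been added,
# instead of spinning in a sleep loop
def wait_for_nonempty():
    with condition:
        condition.wait_for(lambda: len(task_list) > 0)
```

The wait_for() call releases the lock while sleeping and re-checks the predicate each time it is notified, so no CPU is wasted polling.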
The task_stream() function is then updated to add each Future to this same shared list.
```python
# task to inject a stream of tasks into the thread pool
def task_stream(tpe, task_list):
    for i in range(1, 51):
        # issue a task
        future = tpe.submit(task, i)
        # store future in task list
        task_list.append(future)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)
```
The complete example with these changes is listed below.
```python
# SuperFastPython.com
# example of stream of tasks and reporting the first result using wait()
from time import sleep
from random import random
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import wait
from concurrent.futures import FIRST_COMPLETED

# task function that will sleep for a random number of seconds
def task(task_id):
    # generate value between 5 and 25
    value = 5 + (random() * 20)
    # block to simulate effort
    sleep(value)
    # return the result of the task
    return (task_id, value)

# task to inject a stream of tasks into the thread pool
def task_stream(tpe, task_list):
    for i in range(1, 51):
        # issue a task
        future = tpe.submit(task, i)
        # store future in task list
        task_list.append(future)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)

# task to wait for the first task to complete and report the result
def wait_for_first_result(task_list):
    # busy wait until the task list is not empty
    while len(task_list) == 0:
        sleep(0.1)
    print('list not empty')
    # wait on the task list
    done, _ = wait(task_list, return_when=FIRST_COMPLETED)
    # get the first task
    future = done.pop()
    # report result of first task
    print(future.result())

# protect the entry point
if __name__ == '__main__':
    # list of tasks
    task_list = list()
    # create a thread pool
    tpe = ThreadPoolExecutor(max_workers=50)
    # issue task to wait on the first result
    future = tpe.submit(wait_for_first_result, task_list)
    # issue task that streams work into the pool
    tpe.submit(task_stream, tpe, task_list)
    # wait for the first result to be reported
    future.result()
    # close the thread pool
    tpe.shutdown(wait=False, cancel_futures=True)
    print('Main is done.')
```
Running the example creates the thread pool, issues the waiting task, then the streaming task.
All tasks are streamed into the thread pool.
The first task issued is added to the shared list.
The busy wait loop notices the shared list is no longer empty and begins waiting.
Tasks continue to be issued to the pool and appended to the list.
Eventually, the first task completes and the result is reported.
We can see the result is for task 1. This will always be the case.
The reason is that all subsequent Future objects appended to the list are not considered by the internals of the wait() function.
```
>issued task 1
list not empty
>issued task 2
>issued task 3
>issued task 4
>issued task 5
>issued task 6
>issued task 7
>issued task 8
>issued task 9
>issued task 10
>issued task 11
>issued task 12
>issued task 13
>issued task 14
>issued task 15
>issued task 16
>issued task 17
>issued task 18
>issued task 19
>issued task 20
>issued task 21
>issued task 22
>issued task 23
>issued task 24
>issued task 25
>issued task 26
>issued task 27
>issued task 28
>issued task 29
>issued task 30
>issued task 31
>issued task 32
>issued task 33
>issued task 34
>issued task 35
>issued task 36
>issued task 37
>issued task 38
>issued task 39
>issued task 40
>issued task 41
>issued task 42
>issued task 43
>issued task 44
>issued task 45
>issued task 46
>issued task 47
>issued task 48
>issued task 49
>issued task 50
(1, 12.481404668121845)
Main is done.
```
Why Not Use concurrent.futures.as_completed()?
This problem cannot be solved using the concurrent.futures.as_completed() function.
Recall that the as_completed() function takes an iterable of Future objects and yields those Future objects in the order they are done.
You can learn more about the concurrent.futures.as_completed() function in the tutorial:
This function cannot be used in this case.
The reason is the same as that for the concurrent.futures.wait() function.
Changes to the list or set provided to the as_completed() function are not monitored and honored by the function, meaning we cannot continuously update the function as we issue new tasks.
We can also explore this with a worked example.
The wait_for_first_result() function in the previous section can be updated to call the as_completed() function and report the first result seen.
```python
# task to wait for the first task to complete and report the result
def wait_for_first_result(task_list):
    # busy wait until the task list is not empty
    while len(task_list) == 0:
        sleep(0.1)
    print('list not empty')
    # wait on the task list
    for future in as_completed(task_list):
        print(future.result())
        return
```
The complete example with this change is listed below.
```python
# SuperFastPython.com
# example of stream of tasks and reporting the first result using as_completed()
from time import sleep
from random import random
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import as_completed

# task function that will sleep for a random number of seconds
def task(task_id):
    # generate value between 5 and 25
    value = 5 + (random() * 20)
    # block to simulate effort
    sleep(value)
    # return the result of the task
    return (task_id, value)

# task to inject a stream of tasks into the thread pool
def task_stream(tpe, task_list):
    for i in range(1, 51):
        # issue a task
        future = tpe.submit(task, i)
        # store future in task list
        task_list.append(future)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)

# task to wait for the first task to complete and report the result
def wait_for_first_result(task_list):
    # busy wait until the task list is not empty
    while len(task_list) == 0:
        sleep(0.1)
    print('list not empty')
    # wait on the task list
    for future in as_completed(task_list):
        print(future.result())
        return

# protect the entry point
if __name__ == '__main__':
    # list of tasks
    task_list = list()
    # create a thread pool
    tpe = ThreadPoolExecutor(max_workers=50)
    # issue task to wait on the first result
    future = tpe.submit(wait_for_first_result, task_list)
    # issue task that streams work into the pool
    tpe.submit(task_stream, tpe, task_list)
    # wait for the first result to be reported
    future.result()
    # close the thread pool
    tpe.shutdown(wait=False, cancel_futures=True)
    print('Main is done.')
```
Running the example creates the thread pool, issues the waiting task, then the streaming task.
Tasks are streamed into the thread pool.
The first task issued is added to the shared list.
The busy wait loop notices the shared list is no longer empty and begins waiting via the as_completed() loop.
Tasks continue to be issued to the pool and appended to the list.
Eventually, the first task completes and the result is reported.
We can see the result is for task 1. This will always be the case.
The reason is that all subsequent Future objects appended to the list are not considered by the internals of the as_completed() function.
```
>issued task 1
list not empty
>issued task 2
>issued task 3
>issued task 4
>issued task 5
>issued task 6
>issued task 7
>issued task 8
>issued task 9
>issued task 10
>issued task 11
>issued task 12
>issued task 13
>issued task 14
>issued task 15
>issued task 16
>issued task 17
>issued task 18
>issued task 19
>issued task 20
>issued task 21
>issued task 22
>issued task 23
>issued task 24
>issued task 25
>issued task 26
>issued task 27
>issued task 28
>issued task 29
>issued task 30
>issued task 31
>issued task 32
>issued task 33
>issued task 34
>issued task 35
>issued task 36
>issued task 37
>issued task 38
>issued task 39
>issued task 40
>issued task 41
>issued task 42
>issued task 43
>issued task 44
>issued task 45
>issued task 46
>issued task 47
>issued task 48
>issued task 49
>issued task 50
(1, 13.766543332563181)
Main is done.
```
Example of Streamed Tasks and Reporting as Tasks Complete
There is another similar use case when getting results from streamed tasks.
Rather than getting and using the result of the first task that completes, we may want to get and use all results from streamed tasks, as the tasks are completed.
This is like using a call to as_completed() but for streamed tasks rather than a single batch of tasks.
Again, there are many ways to solve this problem, but an approach I like is to use a thread-safe queue.Queue.
The done callback handler can be updated to get and put results on a shared queue.
```python
# callback function to call when a task is completed
def handler(future):
    # declare global queue
    global queue
    # get result from task
    result = future.result()
    # put result on the queue
    queue.put(f'Done: {result}')
```
A new task can then be defined to consume results from the queue and report them.
```python
# consume and report results forever
def result_consumer(queue):
    # consume results forever
    while True:
        # consume result
        result = queue.get()
        # report result
        print(result)
```
We can then create a daemon thread to run the result consumer task for as long as we need.
```python
...
# start daemon thread to report results as they are available
daemon = Thread(target=result_consumer, args=(queue,), daemon=True)
daemon.start()
```
If you are new to using the queue.Queue, you can get started in the tutorial:
We use a daemon thread because it does not need to be shut down. We can close our program and the daemon thread will not stop the program from exiting, unlike a non-daemon thread.
If you are new to using daemon threads, you can learn more in the tutorial:
Tying these changes together, the complete example of reporting results for streamed tasks as tasks are completed is listed below.
```python
# SuperFastPython.com
# example of a stream of tasks and reporting results as tasks complete
from time import sleep
from random import random
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import wait
from queue import Queue
from threading import Thread

# task function that will sleep for a random number of seconds
def task(task_id):
    # generate value between 5 and 25
    value = 5 + (random() * 20)
    # block to simulate effort
    sleep(value)
    # return the result of the task
    return (task_id, value)

# callback function to call when a task is completed
def handler(future):
    # declare global queue
    global queue
    # get result from task
    result = future.result()
    # put result on the queue
    queue.put(f'Done: {result}')

# task to inject a stream of tasks into the thread pool
def task_stream(tpe):
    for i in range(1, 51):
        # issue a task
        future = tpe.submit(task, i)
        # add the done handler
        future.add_done_callback(handler)
        # report progress
        print(f'>issued task {i}')
        # block a moment
        sleep(0.2)

# consume and report results forever
def result_consumer(queue):
    # consume results forever
    while True:
        # consume result
        result = queue.get()
        # report result
        print(result)

# protect the entry point
if __name__ == '__main__':
    # create a shared queue
    queue = Queue()
    # start daemon thread to report results as they are available
    daemon = Thread(target=result_consumer, args=(queue,), daemon=True)
    daemon.start()
    # create a thread pool
    with ThreadPoolExecutor(max_workers=50) as tpe:
        # issue task that streams work into the pool
        future = tpe.submit(task_stream, tpe)
        # wait on the streamer task
        wait([future])
        # wait until all tasks are done...
```
Running the example first creates the shared queue as a global variable.
Next, the daemon thread is created and started. It blocks on the call to queue.get() until a result is available on the queue.
The thread pool is created, the streaming task is issued to the pool, and the main thread blocks until this task is done.
Tasks are streamed into the thread pool and the done callback is added to each.
As tasks are completed, the done callback function is called, and results are retrieved and put on the queue.
The result consumer notices objects on the queue, retrieves them, and reports the results.
Once all tasks are streamed, the main thread blocks until all issued tasks in the thread pool are done.
This highlights how we can get and use results from streamed tasks in the order that tasks are completed.
```
>issued task 1
>issued task 2
>issued task 3
>issued task 4
>issued task 5
>issued task 6
>issued task 7
>issued task 8
>issued task 9
>issued task 10
>issued task 11
>issued task 12
>issued task 13
>issued task 14
>issued task 15
>issued task 16
>issued task 17
>issued task 18
>issued task 19
>issued task 20
>issued task 21
>issued task 22
>issued task 23
>issued task 24
>issued task 25
>issued task 26
>issued task 27
>issued task 28
>issued task 29
>issued task 30
>issued task 31
>issued task 32
>issued task 33
>issued task 34
>issued task 35
>issued task 36
>issued task 37
Done: (3, 6.924631245284321)
>issued task 38
>issued task 39
Done: (6, 6.723111956475185)
Done: (4, 7.1699159728956126)
>issued task 40
>issued task 41
>issued task 42
>issued task 43
>issued task 44
Done: (19, 5.217706149540529)
>issued task 45
>issued task 46
>issued task 47
>issued task 48
>issued task 49
Done: (23, 5.372807839240585)
>issued task 50
Done: (12, 8.019315097166409)
Done: (35, 5.792907473539836)
Done: (37, 5.525447943776321)
Done: (36, 5.901940793513239)
Done: (34, 7.102549903548725)
Done: (7, 12.977708180254538)
Done: (28, 9.049853326624772)
Done: (41, 7.122348761882598)
Done: (30, 9.380128974512012)
Done: (9, 14.18480939583592)
Done: (13, 13.430369875356355)
Done: (5, 15.131460610909519)
Done: (10, 14.589536036559599)
Done: (33, 10.749000592053278)
Done: (17, 14.383864055545452)
Done: (14, 15.593349433075778)
Done: (22, 14.440562808188396)
Done: (40, 10.844932551292612)
Done: (25, 14.2756621454851)
Done: (18, 15.819311808688974)
Done: (26, 14.22840587296549)
Done: (1, 20.202985721700202)
Done: (20, 16.729957791080363)
Done: (46, 11.787319921266148)
Done: (45, 12.085828786775716)
Done: (38, 13.587892368751035)
Done: (32, 14.818876027215818)
Done: (27, 16.394197361134808)
Done: (2, 21.59001845129945)
Done: (31, 15.735354124235638)
Done: (8, 20.4739632048844)
Done: (29, 16.626035568832016)
Done: (15, 19.514429726230816)
Done: (50, 12.477077854741658)
Done: (49, 13.018280382966449)
Done: (47, 13.865094538552006)
Done: (11, 21.260599668210297)
Done: (21, 19.712779042929405)
Done: (48, 17.554122089845233)
Done: (16, 24.160599737054017)
Done: (39, 19.666578849552316)
Done: (24, 24.930859272011837)
Done: (43, 21.315162475883795)
Done: (42, 21.56986347732603)
Done: (44, 24.743008724627146)
```
Further Reading
This section provides additional resources that you may find helpful.
Books
- ThreadPoolExecutor Jump-Start, Jason Brownlee, (my book!)
- Concurrent Futures API Interview Questions
- ThreadPoolExecutor Class API Cheat Sheet
I also recommend specific chapters from the following books:
- Effective Python, Brett Slatkin, 2019.
- See Chapter 7: Concurrency and Parallelism
- Python in a Nutshell, Alex Martelli, et al., 2017.
- See Chapter 14: Threads and Processes
Guides
- Python ThreadPoolExecutor: The Complete Guide
- Python ProcessPoolExecutor: The Complete Guide
- Python Threading: The Complete Guide
- Python ThreadPool: The Complete Guide
Takeaways
You now know how to get the first result from tasks streamed into the ThreadPoolExecutor.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.