Last Updated on September 12, 2022
You can forcefully kill all tasks in the process pool by calling the Pool terminate() function, which terminates all child worker processes immediately.
In this tutorial you will discover how to kill tasks in the process pool.
Let’s get started.
Need to Kill All Tasks
The multiprocessing.pool.Pool in Python provides a pool of reusable processes for executing ad hoc tasks.
A process pool can be configured when it is created, which will prepare the child workers.
A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation.
— multiprocessing — Process-based parallelism
We can issue one-off tasks to the process pool using functions such as apply() or we can apply the same function to an iterable of items using functions such as map().
Results for issued tasks can then be retrieved synchronously, or we can retrieve the result of tasks later by using asynchronous versions of the functions such as apply_async() and map_async().
When using the process pool, we may need to forcefully kill all running tasks.
This may be for many reasons such as:
- User requests to close the application.
- An error or fault that negates the tasks.
- A problem accessing a required resource.
How can we kill all tasks in the process pool?
Consider Not Killing Tasks
Forcefully killing tasks in the process pool is an aggressive approach.
It may cause undesirable side effects.
For example, killing tasks in the process pool while they are running may mean that resources like files, sockets, and data structures used by running tasks are not closed or are left in an inconsistent state.
An alternative to forcefully killing tasks may be to safely pause or stop a task using a synchronization primitive like a multiprocessing.Event object.
Tasks in the process pool can be forcefully killed; let's look at how in the next section.
How To Kill All Tasks in the Process Pool
The multiprocessing.pool.Pool does not provide a mechanism to kill all tasks and continue using the pool.
Nevertheless, the process pool does provide the terminate() function which will forcefully terminate all child worker processes immediately, in turn killing all tasks.
Stops the worker processes immediately without completing outstanding work.
— multiprocessing — Process-based parallelism
For example:
```python
...
# forcefully terminate the process pool and kill all tasks
pool.terminate()
```
This is achieved by sending a SIGTERM signal to each child worker process in the process pool (on Windows, the TerminateProcess() API is used instead). Unless a handler is installed, this signal terminates the child processes immediately.
The process pool may take a moment to forcefully kill all child processes, so it is a good practice to call the join() function after calling terminate(). The join() function will only return after all child worker processes are completely closed.
For example:
```python
...
# forcefully terminate the process pool and kill all tasks
pool.terminate()
# wait a moment for all worker processes to stop
pool.join()
```
Killing tasks in the process pool assumes that the main process that issues tasks to the process pool is free and able to kill the process pool.
This means that tasks should be issued to the process pool asynchronously rather than synchronously. This can be achieved using functions such as apply_async(), map_async(), and starmap_async().
Now that we know how to kill all tasks in the process pool, let’s look at some worked examples.
Example of Killing All Tasks in the Process Pool
We can explore how to forcefully kill running tasks in the process pool with the terminate() function.
In this example, we will issue a small number of long-running tasks. We will then wait a moment before forcefully terminating the process pool and all running tasks in the pool.
First, we can define a custom function to run as tasks in the process pool.
The function will take an integer argument to identify the task number, report a message, then loop ten times, blocking for one second each loop iteration. Finally, if the task completes it reports an “All Done” message.
The task() function below implements this.
```python
# task executed in a worker process
def task(identifier):
    print(f'Task {identifier} running', flush=True)
    # run for a long time
    for i in range(10):
        # block for a moment
        sleep(1)
    # report all done
    print(f'Task {identifier} Done', flush=True)
```
Next, from the main process, we will create the process pool with the default configuration.
We will use the context manager interface to ensure the process pool is closed automatically if something goes wrong.
```python
...
# create and configure the process pool
with Pool() as pool:
    # ...
```
We can then issue four tasks to the process pool.
In this case, we will use the map_async() function to issue 4 tasks asynchronously so that we can carry on in the main process. We will then ignore the AsyncResult object returned.
```python
...
# issue tasks to the process pool
_ = pool.map_async(task, range(4))
```
The main process will then block for a moment to allow the tasks to start running.
```python
...
# wait a moment
sleep(2)
```
We can then forcefully terminate the child worker processes which will kill the running tasks immediately.
```python
...
# kill all tasks
print('Killing all tasks')
pool.terminate()
```
Finally, we can block for a moment until the child worker processes are confirmed to be terminated.
```python
...
# wait for the pool to close down
pool.join()
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of killing all tasks in the process pool
from time import sleep
from multiprocessing.pool import Pool

# task executed in a worker process
def task(identifier):
    print(f'Task {identifier} running', flush=True)
    # run for a long time
    for i in range(10):
        # block for a moment
        sleep(1)
    # report all done
    print(f'Task {identifier} Done', flush=True)

# protect the entry point
if __name__ == '__main__':
    # create and configure the process pool
    with Pool() as pool:
        # issue tasks to the process pool
        _ = pool.map_async(task, range(4))
        # wait a moment
        sleep(2)
        # kill all tasks
        print('Killing all tasks')
        pool.terminate()
        # wait for the pool to close down
        pool.join()
```
Running the example first creates the process pool with the default configuration.
Four tasks are then issued to the process pool asynchronously. The main process then blocks for a moment.
Each task starts running, first reporting a message, then looping and sleeping for one second each iteration.
The main process unblocks and continues on. It reports a message then forcefully terminates the process pool.
This sends a SIGTERM signal to each child worker process, causing them to terminate immediately. In turn, the tasks that the child worker processes are executing are forcefully terminated.
The main process then blocks for a fraction of a second until the child worker processes are stopped, then continues on, closing the application.
```
Task 0 running
Task 1 running
Task 2 running
Task 3 running
Killing all tasks
```
Next, let’s explore the case of killing multiple batches of tasks running in the process pool.
Kill Multiple Batches of Tasks in the Process Pool
We can explore how to forcefully kill multiple batches of tasks in the process pool.
This can be achieved by updating the example from the previous section to issue multiple batches of tasks to the process pool.
We can change the single call to map_async() into multiple calls that each issue tasks asynchronously into the process pool.
```python
...
# issue a batch of tasks to the process pool
_ = pool.map_async(task, range(2))
# issue a second batch of tasks to the process pool
_ = pool.map_async(task, range(2))
```
Tying this together, the complete example is listed below.
```python
# SuperFastPython.com
# example of killing multiple batches of tasks in the process pool
from time import sleep
from multiprocessing.pool import Pool

# task executed in a worker process
def task(identifier):
    print(f'Task {identifier} running', flush=True)
    # run for a long time
    for i in range(10):
        # block for a moment
        sleep(1)
    # report all done
    print(f'Task {identifier} Done', flush=True)

# protect the entry point
if __name__ == '__main__':
    # create and configure the process pool
    with Pool() as pool:
        # issue a batch of tasks to the process pool
        _ = pool.map_async(task, range(2))
        # issue a second batch of tasks to the process pool
        _ = pool.map_async(task, range(2))
        # wait a moment
        sleep(2)
        # kill all tasks
        print('Killing all tasks')
        pool.terminate()
        # wait for the pool to close down
        pool.join()
```
Running the example first creates the process pool with the default configuration.
Two tasks are then issued to the process pool asynchronously.
Again, two tasks are then issued to the process pool asynchronously.
There are now two batches of tasks issued and running in the process pool. The main process then blocks for a moment.
Each task starts running, first reporting a message, then looping and sleeping for one second each iteration.
The main process unblocks and continues on. It reports a message then forcefully terminates the process pool.
This sends a SIGTERM signal to each child worker process, causing them to terminate immediately. In turn, the tasks that the child worker processes are executing are forcefully terminated.
The main process then blocks for a fraction of a second until the child worker processes are stopped, then continues on, closing the application.
```
Task 0 running
Task 1 running
Task 0 running
Task 1 running
Killing all tasks
```
Common Questions
This section lists common questions about killing tasks in the process pool.
Do you have any questions about killing tasks in the process pool?
Ask your questions in the comments below and I may add them to this section.
Can We Kill Tasks Instead of Worker Processes?
No.
The process pool does not provide the capability to kill specific tasks in the process pool.
This capability could be developed into a custom process pool class.
Can We Kill The Worker Processes and Have The Pool Restart Them?
Not reliably.
The multiprocessing.pool.Pool maintains its workers with an internal thread that may start a replacement for a manually killed worker, but this behavior is undocumented and can deadlock if the killed worker was holding an internal queue lock (see the discussion in the comments below).
This capability could be developed into a custom process pool class.
Can We Only Kill Tasks Associated With An AsyncResult?
No.
The AsyncResult does not provide a handle on the processes that are executing the tasks or a way to stop the tasks or their worker processes.
This capability could be developed into a custom process pool class.
Further Reading
This section provides additional resources that you may find helpful.
Books
- Multiprocessing Pool Jump-Start, Jason Brownlee (my book!)
- Multiprocessing API Interview Questions
- Pool Class API Cheat Sheet
I would also recommend specific chapters from these books:
- Effective Python, Brett Slatkin, 2019.
- See: Chapter 7: Concurrency and Parallelism
- High Performance Python, Ian Ozsvald and Micha Gorelick, 2020.
- See: Chapter 9: The multiprocessing Module
- Python in a Nutshell, Alex Martelli, et al., 2017.
- See: Chapter: 14: Threads and Processes
Guides
- Python Multiprocessing Pool: The Complete Guide
- Python ThreadPool: The Complete Guide
- Python Multiprocessing: The Complete Guide
- Python ProcessPoolExecutor: The Complete Guide
Takeaways
You now know how to kill tasks in the process pool.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Nikita K says
“If we manually kill the child worker processes, the multiprocessing.pool.Pool class will not detect this case and restart the worker processes.”
Doesn't manually killing a worker process force the Pool into a deadlock?

Jason Brownlee says
Not in my experience, how so?
Nikita K says
This is a [CPython issue](https://github.com/orgs/python/projects/14/views/1?pane=issue&itemId=5034306&sortedBy%5Bdirection%5D=asc&sortedBy%5BcolumnId%5D=Status).
If the process being killed does not hold the lock, then the Pool restarts this worker and all is fine. However, if the process holds the lock, then the other workers will never acquire it and hang forever. Though the probability of this is inversely proportional to the number of workers, and it may be hard to detect.

Consider the following snippet. On my machine (Ubuntu 23.04, Python 3.11) it completes or deadlocks from run to run.
```python
import os
import time
from multiprocessing import Pool, active_children

def f(args):
    time.sleep(1)
    print(f'Echo from worker {os.getpid()}')

if __name__ == '__main__':
    p = Pool(2)
    workers = active_children()
    print(f"Spawned workers {[worker.pid for worker in workers]}")
    print(f"Killing worker {workers[0].pid}")
    workers[0].kill()
    p.map_async(f, range(2)).get()
```
Jason Brownlee says
Thanks for sharing.
Killing can be dangerous for sure.