Last Updated on August 21, 2023
Copying files is typically slow.
It can become painfully slow in situations where you may need to copy thousands of files from one directory to another. The hope is that multithreading will speed up the file copying operation.
In this tutorial, you will explore how to copy thousands of files using multithreading.
Let’s dive in.
Create 10,000 Files To Copy
Before we can copy thousands of files, we must create them.
We can write a script that will create 10,000 CSV files, each file with 10,000 lines and each line containing 10 random numeric values.
First, we can create a function that will take a file path and string data and save it to file.
The save_file() function below implements this, taking the full path and the data, opening the file in text mode, and writing the data to the file. The context manager is used so that the file is closed automatically.
# save data to a file
def save_file(filepath, data):
    # open the file
    with open(filepath, 'w') as handle:
        # save the data
        handle.write(data)
Next, we can generate the data to be saved.
First, we can write a function to generate a single line of data. In this case, a line is composed of 10 random numeric values in the range between 0 and 1, stored in CSV format (i.e. values separated by commas).
The generate_line() function below implements this, returning a string of random data.
# generate a line of mock data of 10 random data points
def generate_line():
    return ','.join([str(random()) for _ in range(10)])
Next, we can write a function to generate the contents of one file.
This will be 10,000 lines of random values, joined by newline characters.
The generate_file_data() function below implements this, returning a string representing the data for a single data file.
# generate file data of 10K lines each with 10 data points
def generate_file_data():
    # generate many lines of data
    lines = [generate_line() for _ in range(10000)]
    # convert to a single ascii doc with new lines
    return '\n'.join(lines)
Finally, we can generate data for all of the files and save each with a separate file name.
First, we can create a directory to store all of the created files (e.g. 'tmp') under the current working directory.
...
# create a local directory to save files
makedirs(path, exist_ok=True)
We can then loop 10,000 times and create the data and a unique filename for each iteration, then save the generated contents of the file to disk.
...
# create all files
for i in range(10000):
    # generate data
    data = generate_file_data()
    # create filenames
    filepath = join(path, f'data-{i:04d}.csv')
    # save data file
    save_file(filepath, data)
    # report progress
    print(f'.saved {filepath}')
The generate_all_files() function listed below implements this, creating the 10,000 files of 10,000 lines of data.
# generate 10K files in a directory
def generate_all_files(path='tmp'):
    # create a local directory to save files
    makedirs(path, exist_ok=True)
    # create all files
    for i in range(10000):
        # generate data
        data = generate_file_data()
        # create filenames
        filepath = join(path, f'data-{i:04d}.csv')
        # save data file
        save_file(filepath, data)
        # report progress
        print(f'.saved {filepath}')
Tying this together, the complete example of creating a large number of CSV files is listed below.
# SuperFastPython.com
# create a large number of data files
from os import makedirs
from os.path import join
from random import random

# save data to a file
def save_file(filepath, data):
    # open the file
    with open(filepath, 'w') as handle:
        # save the data
        handle.write(data)

# generate a line of mock data of 10 random data points
def generate_line():
    return ','.join([str(random()) for _ in range(10)])

# generate file data of 10K lines each with 10 data points
def generate_file_data():
    # generate many lines of data
    lines = [generate_line() for _ in range(10000)]
    # convert to a single ascii doc with new lines
    return '\n'.join(lines)

# generate 10K files in a directory
def generate_all_files(path='tmp'):
    # create a local directory to save files
    makedirs(path, exist_ok=True)
    # create all files
    for i in range(10000):
        # generate data
        data = generate_file_data()
        # create filenames
        filepath = join(path, f'data-{i:04d}.csv')
        # save data file
        save_file(filepath, data)
        # report progress
        print(f'.saved {filepath}')

# entry point
if __name__ == '__main__':
    generate_all_files()
Running the example will create 10,000 CSV files of random data into a tmp/ directory.
It takes a long time to run; the exact duration depends on the speed of your hard drive (the SSDs used in modern computer systems are relatively fast).
On my system it takes about 11 minutes to complete.
...
.saved tmp/data-9990.csv
.saved tmp/data-9991.csv
.saved tmp/data-9992.csv
.saved tmp/data-9993.csv
.saved tmp/data-9994.csv
.saved tmp/data-9995.csv
.saved tmp/data-9996.csv
.saved tmp/data-9997.csv
.saved tmp/data-9998.csv
.saved tmp/data-9999.csv
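As an aside, the [Finished in ...] timings reported throughout this tutorial appear to be editor output rather than something printed by the scripts. If you want to measure a run yourself, one minimal option (assuming the functions from the listing above) is to time the call to generate_all_files() directly, for example:

...
# time how long it takes to generate all of the files
from time import perf_counter
start = perf_counter()
generate_all_files()
duration = perf_counter() - start
print(f'Generated all files in {duration:.3f} seconds')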
Next, we can explore ways of copying all of these files into another directory.
Copy Files One-by-One (Slowly)
Copying all files in one directory to a second directory in Python is relatively straightforward.
The shutil module provides many functions that can be used to efficiently copy files from one directory to another on disk.
In this tutorial, we will use the shutil.copy() function, which copies a source file to a destination file or directory.
First, we will create the destination directory if it does not already exist.
...
# create the destination directory if needed
makedirs(dest, exist_ok=True)
Next, we will list all files in the source directory and create full paths that we can use in the copy operation. This can be achieved in a list comprehension.
...
# create full paths for all files we wish to copy
files = [join(src,name) for name in listdir(src)]
We can then iterate over the list of source paths and copy each to the destination directory using the shutil.copy() function.
...
# copy all files from src to dest
for src_path in files:
    # copy source file to dest file
    dest_path = copy(src_path, dest)
Tying this together, the complete example of copying all files from a source directory to a destination directory sequentially (one-by-one) is listed below.
# SuperFastPython.com
# copy files from one directory to another sequentially
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy

# copy files from src to dest
def main(src='tmp', dest='tmp2'):
    # create the destination directory if needed
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src,name) for name in listdir(src)]
    # copy all files from src to dest
    for src_path in files:
        # copy source file to dest file
        dest_path = copy(src_path, dest)
        # report progress
        print(f'.copied {src_path} to {dest_path}')
    print('Done')

# entry point
if __name__ == '__main__':
    main()
Running the program will copy all files from the tmp/ subdirectory to a new subdirectory named tmp2/.
...
.copied tmp/data-0792.csv to tmp2/data-0792.csv
.copied tmp/data-5832.csv to tmp2/data-5832.csv
.copied tmp/data-2185.csv to tmp2/data-2185.csv
.copied tmp/data-5826.csv to tmp2/data-5826.csv
.copied tmp/data-2191.csv to tmp2/data-2191.csv
.copied tmp/data-1498.csv to tmp2/data-1498.csv
.copied tmp/data-0786.csv to tmp2/data-0786.csv
.copied tmp/data-7957.csv to tmp2/data-7957.csv
.copied tmp/data-6491.csv to tmp2/data-6491.csv
.copied tmp/data-5198.csv to tmp2/data-5198.csv
.copied tmp/data-4286.csv to tmp2/data-4286.csv
Done
It takes a while, as there are 10,000 files to copy, each about 2 megabytes in size (10,000 lines of ten random values comes to roughly 1.9 megabytes of text per file). The overall execution time will depend on the speed of your hardware, specifically the speed of your hard drive.
On my system with an SSD hard drive, it takes about 34.9 seconds to complete (deleting the tmp2 directory between runs).
[Finished in 34.8s]
[Finished in 34.8s]
[Finished in 35.3s]
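Note that these timings assume the destination directory is deleted between runs. One simple way to reset it from Python, using the same tmp2 path as this tutorial, is shutil.rmtree():

...
# remove the destination directory so the next run starts from a clean state
from shutil import rmtree
rmtree('tmp2', ignore_errors=True)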
How long does it take to run on your system?
Let me know in the comments below.
Now that we know how to copy thousands of files from one directory to another, let’s consider how we might speed it up using concurrency.
How to Copy Files Concurrently
Copying a file involves two main operations.
- Read the file from disk into main memory.
- Write the file from main memory to disk.
Both operations are IO-bound, limited by the speed that bytes can be read and written from and to the hard drive.
We are assuming that we have a single hard drive in this use case. If you are copying files between hard drives, or to/from a drive on a networked system, then the limitations listed below may not apply.
With a single hard drive, we can assume some limiting properties:
- We can only read a single file from disk at a time.
- We can only write a single file to disk at a time.
- We can only either read from or write to the disk at any one time, not both simultaneously.
These are reasonable assumptions for classic spindle-based hard drives, and perhaps also for modern solid-state drives.
Therefore we might assume that adding concurrency to file copying on one hard drive will provide no benefit, and this is a reasonable and widely adopted assumption.
In fact, the overhead of performing the task using threads or processes would be expected to slow down the copying process.
Nevertheless, perhaps modern hard drives offer some internal caching or some capability for parallel reads and writes.
A straightforward approach would be to perform shutil.copy() operations as separate tasks and report performance.
An alternative approach may be to have one set of tasks for loading files from disk into main memory and a second set of tasks to write files from main memory to disk.
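The second approach is not developed further in this tutorial, but a rough sketch is shown below for reference: a pool of reader threads loads file contents into main memory and pushes them onto a queue, while a single writer thread drains the queue to disk. The function names, pool size, and queue size here are illustrative assumptions rather than code from this tutorial, and note that it holds file data in memory instead of letting shutil.copy() stream it.

# sketch: separate read and write tasks connected by a queue (illustrative only)
from concurrent.futures import ThreadPoolExecutor
from os.path import basename, join
from queue import Queue
from threading import Thread

# read one file from disk into main memory and queue its contents
def read_task(src_path, queue):
    with open(src_path, 'rb') as handle:
        queue.put((basename(src_path), handle.read()))

# write queued file contents to the destination directory until signaled to stop
def write_task(queue, dest_dir):
    while True:
        item = queue.get()
        if item is None:
            break
        name, data = item
        with open(join(dest_dir, name), 'wb') as handle:
            handle.write(data)

# copy files using reader tasks in a thread pool and a single writer thread
def copy_with_split_tasks(files, dest_dir):
    # bound the amount of file data held in memory at once
    queue = Queue(maxsize=100)
    writer = Thread(target=write_task, args=(queue, dest_dir))
    writer.start()
    with ThreadPoolExecutor(10) as exe:
        for src_path in files:
            exe.submit(read_task, src_path, queue)
    # all read tasks have completed once the pool exits, so signal the writer to stop
    queue.put(None)
    writer.join()

Whether this outperforms submitting plain shutil.copy() calls as tasks would need to be measured; it is offered only as a starting point for experimentation.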
Now that we have some expectations of the effect of concurrent file copying and some approaches, let’s explore a worked example.
Copy Files Concurrently with Threads
The ThreadPoolExecutor provides a pool of worker threads that we can use to multithread the copying of thousands of data files.
Each file in the tmp/ directory will represent a task that will require the file to be loaded from disk into main memory then written to the destination location.
First, we can define a function that takes the path of a file to copy and the destination directory then copies the file and reports progress. The copy_file() function below implements this.
# copy a file from source to destination
def copy_file(src_path, dest_dir):
    # copy source file to dest file
    dest_path = copy(src_path, dest_dir)
    # report progress
    print(f'.copied {src_path} to {dest_path}')
Next, we need to call this function once for each file in the tmp/ directory.
We can do this using a thread pool, created here with ten worker threads. We will use the context manager to ensure the thread pool is closed automatically once all pending and running tasks have been completed.
...
# create the thread pool
with ThreadPoolExecutor(10) as exe:
    # ...
We can then call the submit() function for each file to be copied, calling the copy_file() function with the file path and destination directory as arguments.
This can be achieved in a list comprehension, and the resulting list of Future objects can be ignored as we do not require any result from the task.
...
# submit all copy tasks
_ = [exe.submit(copy_file, path, dest) for path in files]
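If you do want errors from individual copy tasks to surface (for example, a permission problem on a single file), a small optional variation, not used in the complete example below, is to keep the Future objects and check them; calling result() re-raises any exception raised inside a task.

...
# submit all copy tasks and keep the futures
futures = [exe.submit(copy_file, path, dest) for path in files]
# check each task, re-raising any exception it raised
for future in futures:
    future.result()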
Tying this together, the complete example of multithreaded copying of files from one directory to another is listed below.
# SuperFastPython.com
# copy files from one directory to another concurrently with threads
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy
from concurrent.futures import ThreadPoolExecutor

# copy a file from source to destination
def copy_file(src_path, dest_dir):
    # copy source file to dest file
    dest_path = copy(src_path, dest_dir)
    # report progress
    print(f'.copied {src_path} to {dest_path}')

# copy files from src to dest
def main(src='tmp', dest='tmp2'):
    # create the destination directory if needed
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src,name) for name in listdir(src)]
    # create the thread pool
    with ThreadPoolExecutor(10) as exe:
        # submit all copy tasks
        _ = [exe.submit(copy_file, path, dest) for path in files]
    print('Done')

# entry point
if __name__ == '__main__':
    main()
Running the example copies the files as before, creating a new subdirectory tmp2/ containing 10,000 CSV files copied from tmp/.
...
.copied tmp/data-5826.csv to tmp2/data-5826.csv
.copied tmp/data-2185.csv to tmp2/data-2185.csv
.copied tmp/data-2191.csv to tmp2/data-2191.csv
.copied tmp/data-0786.csv to tmp2/data-0786.csv
.copied tmp/data-1498.csv to tmp2/data-1498.csv
.copied tmp/data-7957.csv to tmp2/data-7957.csv
.copied tmp/data-6491.csv to tmp2/data-6491.csv
.copied tmp/data-5198.csv to tmp2/data-5198.csv
.copied tmp/data-4286.csv to tmp2/data-4286.csv
Done
Surprisingly, we do see a lift in performance by using threads compared to the single-threaded version.
On my system, all files are copied in about 18.6 seconds, which is about 16.3 seconds faster or about a 1.88x speed increase.
This is a great result, considering that all file read and write operations occurred on a single hard drive; the speed-up likely has something to do with the capabilities of modern drives.
[Finished in 18.4s]
[Finished in 18.6s]
[Finished in 18.9s]
How long did it take to execute on your system?
Let me know in the comments below.
Perhaps 10 threads is too few.
We can run the example again with 100 threads instead to see if it makes any difference.
This requires a single change to the number of worker threads in the constructor of the ThreadPoolExecutor, for example:
...
# create the thread pool
with ThreadPoolExecutor(100) as exe:
    # ...
Running the example copies all files from the source directory to the destination directory, as before.
Using 100 threads does result in a small increase in performance compared to using 10 threads.
On my system it takes about 15.6 seconds compared to about 18.6 with 10 threads and about 34.9 seconds for the single-threaded example.
[Finished in 15.9s]
[Finished in 16.3s]
[Finished in 14.8s]
Copy Files Concurrently with Threads in Batch
Each task submitted to the thread pool adds overhead that slows down the overall operation.
This includes function calls and object creation for each task, occurring internally within the ThreadPoolExecutor class.
We might reduce this overhead by performing file operations in a batch within each worker thread.
This can be achieved by updating the copy_file() function to receive a list of files to copy, and splitting up the files in the main() function into chunks to be submitted to worker threads for batch processing.
The hope is that this would reduce some of the overhead required for submitting so many small tasks by replacing it with a few larger batch tasks.
First, we can change the target function to receive a list of file names to copy, listed below.
# copy files from source to destination
def copy_files(src_paths, dest_dir):
    # process all file paths
    for src_path in src_paths:
        # copy source file to dest file
        dest_path = copy(src_path, dest_dir)
        # report progress
        print(f'.copied {src_path} to {dest_path}')
Next, in the main() function we can select a chunksize based on the number of worker threads and the number of files to copy.
In this case, we will use 100 worker threads with 10,000 files to copy. Dividing the files evenly between workers gives (10,000 / 100) or 100 files to copy for each worker.
...
# determine chunksize
n_workers = 100
chunksize = round(len(files) / n_workers)
Next, we can create the thread pool with the parameterized number of worker threads.
...
# create the thread pool
with ThreadPoolExecutor(n_workers) as exe:
    ...
We can then iterate over the list of files to copy and split them into chunks defined by a chunksize.
This can be achieved by iterating over the list of files and using the chunksize as the increment size in a for-loop. We can then split off a chunk of files to send to a worker thread as a task.
...
# split the copy operations into chunks
for i in range(0, len(files), chunksize):
    # select a chunk of filenames
    filenames = files[i:(i + chunksize)]
    # submit the batch copy task
    _ = exe.submit(copy_files, filenames, dest)
And that’s it.
The complete example of copying files in a batch concurrently using worker threads is listed below.
# SuperFastPython.com
# copy files from one directory to another concurrently with threads in batch
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy
from concurrent.futures import ThreadPoolExecutor

# copy files from source to destination
def copy_files(src_paths, dest_dir):
    # process all file paths
    for src_path in src_paths:
        # copy source file to dest file
        dest_path = copy(src_path, dest_dir)
        # report progress
        print(f'.copied {src_path} to {dest_path}')

# copy files from src to dest
def main(src='tmp', dest='tmp2'):
    # create the destination directory if needed
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src,name) for name in listdir(src)]
    # determine chunksize
    n_workers = 100
    chunksize = round(len(files) / n_workers)
    # create the thread pool
    with ThreadPoolExecutor(n_workers) as exe:
        # split the copy operations into chunks
        for i in range(0, len(files), chunksize):
            # select a chunk of filenames
            filenames = files[i:(i + chunksize)]
            # submit the batch copy task
            _ = exe.submit(copy_files, filenames, dest)
    print('Done')

# entry point
if __name__ == '__main__':
    main()
Running the example copies all 10,000 files from tmp/ to tmp2/ as before.
.copied tmp/data-9783.csv to tmp2/data-9783.csv
.copied tmp/data-2150.csv to tmp2/data-2150.csv
.copied tmp/data-9730.csv to tmp2/data-9730.csv
.copied tmp/data-1224.csv to tmp2/data-1224.csv
.copied tmp/data-4700.csv to tmp2/data-4700.csv
.copied tmp/data-1604.csv to tmp2/data-1604.csv
.copied tmp/data-7254.csv to tmp2/data-7254.csv
.copied tmp/data-7655.csv to tmp2/data-7655.csv
.copied tmp/data-5462.csv to tmp2/data-5462.csv
.copied tmp/data-9279.csv to tmp2/data-9279.csv
Done
In this case, we see a slight increase in running time.
On my system it takes about 19.96 seconds to complete, compared to 15.6 seconds without the batch mode.
This suggests that copying without batch mode is preferred, although it is not obvious why this may be the case. Results may be within the margin of measurement error.
[Finished in 18.6s]
[Finished in 21.9s]
[Finished in 19.4s]
Copy Files Concurrently with Processes
We can also try to copy files concurrently with processes instead of threads.
It is unclear whether processes can offer a speed benefit in this case; given that we get a benefit using threads, we know some speed-up is possible. Nevertheless, using processes requires the data for each task to be serialized, which introduces additional overhead that might negate any speed-up from performing file operations with true parallelism via processes.
We can explore using processes to copy files concurrently using the ProcessPoolExecutor.
This can be achieved by swapping the ThreadPoolExecutor for the ProcessPoolExecutor and specifying the number of worker processes.
We will use 4 processes in this case as I have 4 physical CPU cores.
It may be interesting to try different configurations of the number of worker processes to see if it makes a difference on the overall running time.
...
# create the process pool
with ProcessPoolExecutor(4) as exe:
    # ...
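If you prefer not to hard-code the number of workers, one optional variation is to size the pool from the machine it runs on. Note that os.cpu_count() reports logical cores, which may be higher than the number of physical cores; alternatively, ProcessPoolExecutor() with no argument defaults to the number of processors on the machine.

...
# create the process pool sized to the number of logical CPU cores
from os import cpu_count
with ProcessPoolExecutor(cpu_count()) as exe:
    # ...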
Tying this together, the complete example of copying files concurrently using the process pool is listed below.
# SuperFastPython.com
# copy files from one directory to another concurrently with processes
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy
from concurrent.futures import ProcessPoolExecutor

# copy a file from source to destination
def copy_file(src_path, dest_dir):
    # copy source file to dest file
    dest_path = copy(src_path, dest_dir)
    # report progress
    print(f'.copied {src_path} to {dest_path}', flush=True)

# copy files from src to dest
def main(src='tmp', dest='tmp2'):
    # create the destination directory if needed
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src,name) for name in listdir(src)]
    # create the process pool
    with ProcessPoolExecutor(4) as exe:
        # submit all copy tasks
        _ = [exe.submit(copy_file, path, dest) for path in files]
    print('Done')

# entry point
if __name__ == '__main__':
    main()
Running the example copies all files to the new directory as we did before.
...
.copied tmp/data-5832.csv to tmp2/data-5832.csv
.copied tmp/data-5826.csv to tmp2/data-5826.csv
.copied tmp/data-2185.csv to tmp2/data-2185.csv
.copied tmp/data-2191.csv to tmp2/data-2191.csv
.copied tmp/data-1498.csv to tmp2/data-1498.csv
.copied tmp/data-0786.csv to tmp2/data-0786.csv
.copied tmp/data-6491.csv to tmp2/data-6491.csv
.copied tmp/data-7957.csv to tmp2/data-7957.csv
.copied tmp/data-5198.csv to tmp2/data-5198.csv
.copied tmp/data-4286.csv to tmp2/data-4286.csv
Done
In this case, we do see a speed improvement over the single-threaded case, but perhaps worse results than the multithreaded case.
On my system it takes about 21.23 seconds to copy all 10,000 files, compared to about 34.9 seconds for the single-threaded example and about 15.6 seconds for the multithreaded example.
Using threads is preferred at this point.
[Finished in 21.5s]
[Finished in 20.9s]
[Finished in 21.3s]
Copy Files Concurrently with Processes in Batch
Any data sent to a worker process or received from a worker process must be serialized (pickled).
This adds overhead for each task executed by worker processes.
This is likely the cause of worse performance seen when using the process pool in the previous section.
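To get a rough feel for the serialization cost, you can measure the pickled size of the arguments sent for a single-file task versus a batched task. The snippet below is only illustrative; the per-task bookkeeping and inter-process round trips likely matter as much as the raw bytes.

# rough illustration: many small pickled messages versus a few larger ones
from pickle import dumps
# arguments for one single-file task (sent 10,000 times)
single = dumps(('tmp/data-0000.csv', 'tmp2'))
# arguments for one batch task of 2,500 files (sent 4 times)
batch = dumps(([f'tmp/data-{i:04d}.csv' for i in range(2500)], 'tmp2'))
print(f'per-file task arguments: {len(single)} bytes')
print(f'per-batch task arguments: {len(batch)} bytes')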
We can address this by batching the copy tasks into chunks to be executed by each worker process, just as we did when we batched filenames for the thread pools.
Again, this can be achieved by adapting the batched version of ThreadPoolExecutor from a previous section to use ProcessPoolExecutor.
We will use 4 worker processes. With 10,000 files to be copied, this means that each worker will receive a batch of 2500 files to copy.
...
# determine chunksize
n_workers = 4
chunksize = round(len(files) / n_workers)
We can then create the process pool using the parameterized number of workers.
...
# create the process pool
with ProcessPoolExecutor(n_workers) as exe:
    # ...
And that’s it.
The complete example of copying files in batches using the process pool is listed below.
# SuperFastPython.com
# copy files from one directory to another concurrently with processes in batch
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy
from concurrent.futures import ProcessPoolExecutor

# copy files from source to destination
def copy_files(src_paths, dest_dir):
    # process all file paths
    for src_path in src_paths:
        # copy source file to dest file
        dest_path = copy(src_path, dest_dir)
        # report progress
        print(f'.copied {src_path} to {dest_path}', flush=True)

# copy files from src to dest
def main(src='tmp', dest='tmp2'):
    # create the destination directory if needed
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src,name) for name in listdir(src)]
    # determine chunksize
    n_workers = 4
    chunksize = round(len(files) / n_workers)
    # create the process pool
    with ProcessPoolExecutor(n_workers) as exe:
        # split the copy operations into chunks
        for i in range(0, len(files), chunksize):
            # select a chunk of filenames
            filenames = files[i:(i + chunksize)]
            # submit the batch copy task
            _ = exe.submit(copy_files, filenames, dest)
    print('Done')

# entry point
if __name__ == '__main__':
    main()
Running the example copied all 10,000 files without a problem.
...
.copied tmp/data-9938.csv to tmp2/data-9938.csv
.copied tmp/data-9797.csv to tmp2/data-9797.csv
.copied tmp/data-7915.csv to tmp2/data-7915.csv
.copied tmp/data-5813.csv to tmp2/data-5813.csv
.copied tmp/data-5864.csv to tmp2/data-5864.csv
.copied tmp/data-5807.csv to tmp2/data-5807.csv
.copied tmp/data-9086.csv to tmp2/data-9086.csv
.copied tmp/data-7976.csv to tmp2/data-7976.csv
.copied tmp/data-8398.csv to tmp2/data-8398.csv
.copied tmp/data-9783.csv to tmp2/data-9783.csv
Done
In this case, we may see a slight improvement in performance compared to the multithreaded case.
On my system, the program took about 15.133 seconds to copy all files compared to the non-batch multithreaded case that took about 15.6 seconds, although the difference may be within the margin of measurement error.
This provides a viable alternative to the multithreaded case.
[Finished in 15.3s]
[Finished in 14.6s]
[Finished in 15.5s]
Extensions
This section lists ideas for extending the tutorial.
- Vary Number of Thread Workers. Update the multithreaded examples to test different numbers of worker threads, such as 50, 1000, and more, to see if it makes a difference to the run time (a rough benchmarking sketch follows this list).
- Vary Number of Process Workers. Update the multiprocess examples to test different numbers of worker processes such as 2, 8, 10, 50, and more to see if it makes a difference to the run time.
- Separate Read and Write. Develop separate tasks for reading and writing file data and experiment with thread and process pools to see if similar results or further speed improvements are possible.
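For the first extension, a rough benchmarking sketch is shown below. It assumes the tmp/ directory created earlier and re-defines copy_file() without the progress printing so that timings are not dominated by console output; the list of worker counts is only a suggestion.

# sketch: time the threaded copy for several worker counts
from concurrent.futures import ThreadPoolExecutor
from os import makedirs
from os import listdir
from os.path import join
from shutil import copy
from shutil import rmtree
from time import perf_counter

# copy a single file without reporting progress
def copy_file(src_path, dest_dir):
    copy(src_path, dest_dir)

# copy all files with a pool of n_workers threads and return the duration
def benchmark(n_workers, src='tmp', dest='tmp2'):
    # start from a clean destination directory
    rmtree(dest, ignore_errors=True)
    makedirs(dest, exist_ok=True)
    # create full paths for all files we wish to copy
    files = [join(src, name) for name in listdir(src)]
    # time the copy
    start = perf_counter()
    with ThreadPoolExecutor(n_workers) as exe:
        _ = [exe.submit(copy_file, path, dest) for path in files]
    return perf_counter() - start

# entry point
if __name__ == '__main__':
    for n_workers in [10, 50, 100, 500, 1000]:
        print(f'{n_workers} workers: {benchmark(n_workers):.1f} seconds')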
Share your extensions in the comments below; it would be great to see what you come up with.
Further Reading
This section provides additional resources that you may find helpful.
Books
- Concurrent File I/O in Python, Jason Brownlee (my book!)
Python File I/O APIs
- Built-in Functions
- os - Miscellaneous operating system interfaces
- os.path - Common pathname manipulations
- shutil - High-level file operations
- zipfile — Work with ZIP archives
- Python Tutorial: Chapter 7. Input and Output
Python Concurrency APIs
- threading — Thread-based parallelism
- multiprocessing — Process-based parallelism
- concurrent.futures — Launching parallel tasks
- asyncio — Asynchronous I/O
Takeaways
In this tutorial you discovered how to use multithreading to speed up the copying of a large number of files.
Do you have any questions?
Leave your question in a comment below and I will reply fast with my best advice.