How to parallelize for-loop in bash limiting number of processes
Question
I have a bash script similar to:
NUM_PROCS=$1
NUM_ITERS=$2
for ((i=0; i<$NUM_ITERS; i++)); do
python foo.py $i arg2 &
done
What's the most straightforward way to limit the number of parallel processes to NUM_PROCS? I'm looking for a solution that doesn't require packages/installations/modules (like GNU Parallel) if possible.
When I tried Charles Duffy's latest approach, I got the following output from bash -x:
+ python run.py args 1
+ python run.py ... 3
+ python run.py ... 4
+ python run.py ... 2
+ read -r line
+ python run.py ... 1
+ read -r line
+ python run.py ... 4
+ read -r line
+ python run.py ... 2
+ read -r line
+ python run.py ... 3
+ read -r line
+ python run.py ... 0
+ read -r line
... continuing with other numbers between 0 and 5, until too many processes were started for the system to handle and the bash script was shut down.
Answer
bash 4.4 will have an interesting new type of parameter expansion that simplifies Charles Duffy's answer.
#!/bin/bash
num_procs=$1
num_iters=$2
num_jobs="\j"   # The prompt escape for number of jobs currently running
for ((i=0; i<num_iters; i++)); do
while (( ${num_jobs@P} >= num_procs )); do
wait -n
done
python foo.py "$i" arg2 &
done
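On bash versions before 4.4, where the `${var@P}` expansion is unavailable, the same throttle can be approximated by counting the running background jobs directly with `jobs -rp`. A minimal, self-contained sketch, assuming bash 4.3+ for `wait -n` (the `run_one` function and the hard-coded counts are stand-ins for `python foo.py "$i" arg2` and the script's arguments):

```shell
#!/bin/bash
# Throttled fan-out without ${...@P}: count running background jobs directly.
num_procs=2
num_iters=6
tmp=$(mktemp)

run_one() {  # stand-in for: python foo.py "$1" arg2
    sleep 0.1
    echo "done $1" >> "$tmp"
}

for ((i=0; i<num_iters; i++)); do
    # Block while the number of running background jobs is at the limit.
    while (( $(jobs -rp | wc -l) >= num_procs )); do
        wait -n   # bash 4.3+: returns as soon as any one job exits
    done
    run_one "$i" &
done
wait   # drain the remaining jobs
```

Unlike the busy-wait variants sometimes suggested, `wait -n` sleeps until a slot actually frees up, so the loop costs essentially nothing while all slots are occupied.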