Training PyTorch models on different machines leads to different results
I am training the same model on two different machines, but the trained models are not identical. I have taken the following measures to ensure reproducibility:
import random
import numpy as np
import torch
from torch.utils.data import DataLoader

# seed the random number generators
random.seed(0)
torch.cuda.manual_seed(0)
np.random.seed(0)
# make cuDNN deterministic
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
# load data in the main process (no worker processes)
DataLoader(dataset, num_workers=0)
When I train the same model multiple times on the same machine, the trained model is always the same. However, the trained models on two different machines are not the same. Is this normal? Are there any other tricks I can employ?
There are a number of areas that can additionally introduce randomness, e.g.:
PyTorch random number generator
You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA):
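As a minimal sketch, the seeding above can be wrapped in a small helper (the name set_seed is my own, not part of PyTorch; torch.manual_seed() covers the CPU and every CUDA device):

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    # Seed Python's random module, NumPy, and PyTorch in one place.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds the CPU RNG and all CUDA RNGs


set_seed(0)
a = torch.randn(3)
set_seed(0)
b = torch.randn(3)
assert torch.equal(a, b)  # same seed, identical draws
```

Calling this once at the top of the training script keeps all three RNG sources aligned on a single seed.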
CUDA convolution determinism
While disabling CUDA convolution benchmarking (discussed above) ensures that CUDA selects the same algorithm each time an application is run, that algorithm itself may be nondeterministic, unless either torch.use_deterministic_algorithms(True) or torch.backends.cudnn.deterministic = True is set. The latter setting controls only this behavior, unlike torch.use_deterministic_algorithms(), which will make other PyTorch operations behave deterministically, too.
CUDA RNN and LSTM
In some versions of CUDA, RNNs and LSTM networks may have non-deterministic behavior. See torch.nn.RNN() and torch.nn.LSTM() for details and workarounds.
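One workaround the PyTorch reproducibility notes give for nondeterministic cuBLAS/RNN kernels on CUDA 10.2+ is to set the CUBLAS_WORKSPACE_CONFIG environment variable before CUDA initializes, together with torch.use_deterministic_algorithms(True). A sketch (":4096:8" and ":16:8" are the two documented values):

```python
import os

# Must be set before the process makes its first CUDA call.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # ":16:8" uses less workspace memory

import torch

# Error out if an op has no deterministic implementation,
# instead of silently running a nondeterministic one.
torch.use_deterministic_algorithms(True)
```

With the flag enabled, ops that cannot run deterministically raise a RuntimeError, which makes remaining sources of nondeterminism easy to locate.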
DataLoader
DataLoader will reseed workers following the algorithm described in the docs' "Randomness in multi-process data loading" section. Use worker_init_fn() to preserve reproducibility:
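A sketch along the lines of the snippet in the PyTorch reproducibility notes: each worker derives NumPy and random seeds from its own PyTorch seed, and a seeded torch.Generator drives the shuffling (the toy TensorDataset here is my own illustration):

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_worker(worker_id):
    # Each worker process gets a distinct torch seed; derive the
    # NumPy and random seeds from it so all three RNGs are
    # reproducible per worker.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)


g = torch.Generator()
g.manual_seed(0)

dataset = TensorDataset(torch.arange(8).float())  # toy dataset for illustration
loader = DataLoader(dataset, batch_size=2, shuffle=True,
                    num_workers=2, worker_init_fn=seed_worker, generator=g)
```

Re-seeding g with the same value before iterating again reproduces the same shuffling order across runs.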