Google Cloud machine learning out of memory
Question
I am running out of memory when I use the following configuration (config.yaml):
trainingInput:
  scaleTier: CUSTOM
  masterType: large_model
  workerType: complex_model_m
  parameterServerType: large_model
  workerCount: 10
  parameterServerCount: 10
I was following Google's "criteo_tft" sample: https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/criteo_tft/config-large.yaml
That page says they were able to train on 1 TB of data! I was impressed enough to give it a try!
My dataset is categorical, so it produces a pretty big matrix after one-hot encoding (a 2D numpy array of size 520000 x 4000). I can train on this dataset on a local machine with 32 GB of memory, but I cannot do the same in the cloud!
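A quick back-of-the-envelope estimate (not part of the original post) shows why a matrix this size is heavy: a single dense float64 copy is already about 15.5 GiB, so a worker with less free RAM than that, plus headroom for intermediate copies, will be killed. A 32 GB local machine fits one copy with room to spare; a smaller cloud worker does not. A minimal sketch of the arithmetic:

# Rough memory footprint of one dense copy of the one-hot matrix.
rows, cols = 520000, 4000
bytes_per_elem = 8  # numpy defaults to float64
print(rows * cols * bytes_per_elem / 2**30)  # ~15.5 GiB; float32 halves this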
Here is the error I get:
ERROR 2017-12-18 12:57:37 +1100 worker-replica-1 Using TensorFlow backend.
ERROR 2017-12-18 12:57:37 +1100 worker-replica-4 Using TensorFlow backend.
INFO 2017-12-18 12:57:37 +1100 worker-replica-0 Running command: python -m trainer.task --train-file gs://my_bucket/my_training_file.csv --job-dir gs://my_bucket/my_bucket_20171218_125645
ERROR 2017-12-18 12:57:38 +1100 worker-replica-2 Using TensorFlow backend.
ERROR 2017-12-18 12:57:40 +1100 worker-replica-0 Using TensorFlow backend.
ERROR 2017-12-18 12:57:53 +1100 worker-replica-3 Command '['python', '-m', u'trainer.task', u'--train-file', u'gs://my_bucket/my_training_file.csv', '--job-dir', u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:53 +1100 worker-replica-3 Module completed; cleaning up.
INFO 2017-12-18 12:57:53 +1100 worker-replica-3 Clean up finished.
ERROR 2017-12-18 12:57:56 +1100 worker-replica-4 Command '['python', '-m', u'trainer.task', u'--train-file', u'gs://my_bucket/my_training_file.csv', '--job-dir', u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:56 +1100 worker-replica-4 Module completed; cleaning up.
INFO 2017-12-18 12:57:56 +1100 worker-replica-4 Clean up finished.
ERROR 2017-12-18 12:57:58 +1100 worker-replica-2 Command '['python', '-m', u'trainer.task', u'--train-file', u'gs://my_bucket/my_training_file.csv', '--job-dir', u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:58 +1100 worker-replica-2 Module completed; cleaning up.
INFO 2017-12-18 12:57:58 +1100 worker-replica-2 Clean up finished.
ERROR 2017-12-18 12:57:59 +1100 worker-replica-1 Command '['python', '-m', u'trainer.task', u'--train-file', u'gs://my_bucket/my_training_file.csv', '--job-dir', u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:57:59 +1100 worker-replica-1 Module completed; cleaning up.
INFO 2017-12-18 12:57:59 +1100 worker-replica-1 Clean up finished.
ERROR 2017-12-18 12:58:01 +1100 worker-replica-0 Command '['python', '-m', u'trainer.task', u'--train-file', u'gs://my_bucket/my_training_file.csv', '--job-dir', u'gs://my_bucket/my_bucket_20171218_125645']' returned non-zero exit status -9
INFO 2017-12-18 12:58:01 +1100 worker-replica-0 Module completed; cleaning up.
INFO 2017-12-18 12:58:01 +1100 worker-replica-0 Clean up finished.
ERROR 2017-12-18 12:58:43 +1100 service The replica worker 0 ran out-of-memory and exited with a non-zero status of 247. The replica worker 1 ran out-of-memory and exited with a non-zero status of 247. The replica worker 2 ran out-of-memory and exited with a non-zero status of 247. The replica worker 3 ran out-of-memory and exited with a non-zero status of 247. The replica worker 4 ran out-of-memory and exited with a non-zero status of 247. To find out more about why your job exited please check the logs: https://console.cloud.google.com/logs/viewer?project=a_project_id........(link to my cloud log)
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-0 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-1 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-2 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-3 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-5 Clean up finished.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Signal 15 (SIGTERM) was caught. Terminated by service. This is normal behavior.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Module completed; cleaning up.
INFO 2017-12-18 12:58:44 +1100 ps-replica-4 Clean up finished.
INFO 2017-12-18 12:59:28 +1100 service Finished tearing down TensorFlow.
INFO 2017-12-18 13:00:17 +1100 service Job failed.
Please don't worry about the "Using TensorFlow backend." errors; I see them even when the training job succeeds on other, smaller datasets.
Can anyone explain what is causing the out-of-memory failure (error 247), and how I should write my config.yaml file to avoid such issues and train my data in the cloud?
Answer
I have fixed the problem. I needed to do a few things:
- Change the TensorFlow version, and especially how I was submitting the training job in the cloud (see the sketch below).
After that change, it can now train a categorical dataset with 2.5 million rows and 4,200 encoded columns.
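Two notes, neither spelled out in the original answer. First, the exit status -9 in the log means the trainer process was killed with SIGKILL (signal 9), which Linux sends when a process exhausts the machine's memory; the service then reports each replica as out-of-memory with status 247. Second, pinning --runtime-version at submission time is the usual way to control which TensorFlow version Cloud ML Engine runs. A minimal sketch of such a submission, in which the job name, bucket, region, and version number are placeholders rather than values from the post:

gcloud ml-engine jobs submit training my_job \
  --module-name trainer.task \
  --package-path trainer/ \
  --job-dir gs://my_bucket/my_job_output \
  --region us-central1 \
  --runtime-version 1.4 \
  --config config.yaml \
  -- \
  --train-file gs://my_bucket/my_training_file.csv

If memory pressure persists, the other lever is the config itself: give each worker more RAM (large_model is the high-memory machine type) or run fewer worker replicas, since every replica loads its own copy of the data. A sketch of that direction, not the configuration the author ultimately used:

trainingInput:
  scaleTier: CUSTOM
  masterType: large_model
  workerType: large_model        # more memory per worker than complex_model_m
  parameterServerType: large_model
  workerCount: 4                 # fewer replicas, each holding a copy of the data
  parameterServerCount: 4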