Insufficient number of DataNodes reporting when creating Dataproc cluster
Problem description
I am getting an "Insufficient number of DataNodes reporting" error when creating a Dataproc cluster with gs:// as the default FS. Below is the command I am using to create the cluster.
gcloud dataproc clusters create cluster-538f --image-version 1.2 \
--bucket dataproc_bucket_test --subnet default --zone asia-south1-b \
--master-machine-type n1-standard-1 --master-boot-disk-size 500 \
--num-workers 2 --worker-machine-type n1-standard-1 --worker-boot-disk-size 500 \
--scopes 'https://www.googleapis.com/auth/cloud-platform' --project delcure-firebase \
--properties 'core:fs.default.name=gs://dataproc_bucket_test/'
I checked and confirmed that the default folders can be created in the bucket I am using.
Answer
As Igor suggests, Dataproc does not support GCS as a default FS, and I also suggest unsetting this property. Note that the fs.default.name property can be passed to individual jobs and will work just fine.
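A minimal sketch of passing the property per job rather than cluster-wide. This assumes the cluster and bucket names from the question and uses the stock SparkPi example jar shipped on Dataproc images; for Spark jobs, Hadoop settings are supplied through the spark.hadoop. prefix:

```shell
# Sketch: set fs.default.name for a single Spark job only (cluster and
# bucket names reused from the question above; region is assumed).
gcloud dataproc jobs submit spark \
    --cluster cluster-538f \
    --region asia-south1 \
    --properties 'spark.hadoop.fs.default.name=gs://dataproc_bucket_test/' \
    --class org.apache.spark.examples.SparkPi \
    --jars file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000
```

This keeps HDFS as the cluster's default FS (so the DataNode health check passes at creation time) while the individual job resolves unqualified paths against GCS.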