GCS with GKE: 403 Insufficient Permission when writing into a GCS bucket


Problem description

Currently I'm trying to write files into a Google Cloud Storage bucket. For this, I'm using the django-storages package.
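For reference, a minimal django-storages setup for GCS looks roughly like the following; the bucket name is a placeholder, not taken from the question:

```python
# settings.py -- minimal django-storages GCS backend configuration (sketch)
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = 'your-bucket-name'  # placeholder; set to your actual bucket
```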

I have deployed my code, and I exec into the running container through the kubectl utility to check that writes to the GCS bucket work.

$ kubectl exec -it foo-pod -c foo-container --namespace=testing python manage.py shell

I can read from the bucket, but when I try to write into it, it shows the traceback below.

>>> from django.core.files.storage import default_storage
>>> f = default_storage.open('storage_test', 'w')
>>> f.write('hi')
2
>>> f.close()
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 946, in upload_from_file
    client, file_obj, content_type, size, num_retries)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 867, in _do_upload
    client, stream, content_type, size, num_retries)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 700, in _do_multipart_upload
    transport, data, object_metadata, content_type)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/requests/upload.py", line 98, in transmit
    self._process_response(result)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_upload.py", line 110, in _process_response
    response, (http_client.OK,), self._get_status_code)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_helpers.py", line 93, in require_status_code
    status_code, u'Expected one of', *status_codes)
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/storages/backends/gcloud.py", line 75, in close
    self.blob.upload_from_file(self.file, content_type=self.mime_type)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 949, in upload_from_file
    _raise_from_invalid_response(exc)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1735, in _raise_from_invalid_response
    raise exceptions.from_http_response(error.response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/foo.com/o?uploadType=multipart: Insufficient Permission
>>> default_storage.url('new docker')
'https://storage.googleapis.com/foo.appspot.com/new%20docker'
>>>

It seems to be completely related to the bucket permissions, so I assigned the Storage Admin and Storage Object Creator roles to the Google Cloud Build service account (through bucket -> manage permissions), but it still shows the same error.

Answer

A possible explanation would be that you haven't assigned your cluster the correct scope. If that is the case, the nodes in the cluster would not have the authorisation required to write to Google Cloud Storage, which would explain the 403 error you're seeing.

If no scope is set when the cluster is created, the default scope is assigned, and this only provides read permission for Cloud Storage.

To check the cluster's current scopes using the Cloud SDK, you could run a 'describe' command from Cloud Shell, for example:

gcloud container clusters describe CLUSTER-NAME --zone ZONE

The oauthScopes section of the output contains the current scopes assigned to the cluster's nodes.
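If you only want the scope list, the describe output can be filtered; the field path below is an assumption based on current gcloud output and may differ between versions:

```shell
# Print only the node OAuth scopes (field path assumed; verify against the full describe output)
gcloud container clusters describe CLUSTER-NAME --zone ZONE \
  --format="value(nodeConfig.oauthScopes)"
```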

With the default, read-only Cloud Storage scope, the output would display:

https://www.googleapis.com/auth/devstorage.read_only

If the Cloud Storage read/write scope is set, the output will display:

https://www.googleapis.com/auth/devstorage.read_write
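As a quick sanity check, the scope strings above can be compared programmatically. The helper below is my own sketch, not part of any Google library; devstorage.full_control and cloud-platform are broader scopes that also permit writes:

```python
# Sketch: decide from a cluster's oauthScopes list whether its nodes can write to GCS.
READ_WRITE_SCOPE = "https://www.googleapis.com/auth/devstorage.read_write"
FULL_CONTROL_SCOPE = "https://www.googleapis.com/auth/devstorage.full_control"
CLOUD_PLATFORM_SCOPE = "https://www.googleapis.com/auth/cloud-platform"

def can_write_to_gcs(oauth_scopes):
    """Return True if any scope in the list permits Cloud Storage writes."""
    writable = (READ_WRITE_SCOPE, FULL_CONTROL_SCOPE, CLOUD_PLATFORM_SCOPE)
    return any(scope in writable for scope in oauth_scopes)
```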

The scope can be set during cluster creation using the --scopes flag followed by the desired scope identifier; in your case, that would be storage-rw. For example, you could run something like:

gcloud container clusters create CLUSTER-NAME --zone ZONE --scopes storage-rw

The storage-rw scope, combined with your service account, should then allow the nodes in your cluster to write to Cloud Storage.

Alternatively, if you don't want to recreate the cluster, you can create a new node pool with the desired scopes and then delete the old node pool. See the accepted answer to "Is it necessary to recreate a Google Container Engine cluster to modify API permissions?" for how to achieve this.
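A sketch of that node-pool swap; the pool names are placeholders, and workloads should be drained from the old pool before deleting it:

```shell
# Create a replacement node pool with the read/write Cloud Storage scope
gcloud container node-pools create new-pool \
  --cluster CLUSTER-NAME --zone ZONE --scopes storage-rw

# After workloads have moved off the old pool, delete it
gcloud container node-pools delete default-pool \
  --cluster CLUSTER-NAME --zone ZONE
```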

