GCS with GKE, 403 Insufficient Permission when writing into a GCS bucket


Question

Currently I'm trying to write files into a Google Cloud Storage bucket. For this, I'm using the django-storages package.
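For context, a django-storages setup for GCS typically looks like the following sketch in settings.py; the bucket name here is assumed from the default_storage.url() output later in the session, and the real project settings may differ:

# settings.py (sketch) -- django-storages Google Cloud Storage backend
# GS_BUCKET_NAME is assumed from the URL shown in the shell session below.
DEFAULT_FILE_STORAGE = 'storages.backends.gcloud.GoogleCloudStorage'
GS_BUCKET_NAME = 'foo.appspot.com'
# No GS_CREDENTIALS is set, so google-auth falls back to the
# application default credentials available on the GKE node.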

I have deployed my code and exec into the running container with the kubernetes kubectl utility to check that the GCS bucket is working:

$ kubectl exec -it foo-pod -c foo-container --namespace=testing python manage.py shell

I can read the bucket, but if I try to write into it, it shows the traceback below.

>>> from django.core.files.storage import default_storage
>>> f = default_storage.open('storage_test', 'w')
>>> f.write('hi')
2
>>> f.close()
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 946, in upload_from_file
    client, file_obj, content_type, size, num_retries)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 867, in _do_upload
    client, stream, content_type, size, num_retries)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 700, in _do_multipart_upload
    transport, data, object_metadata, content_type)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/requests/upload.py", line 98, in transmit
    self._process_response(result)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_upload.py", line 110, in _process_response
    response, (http_client.OK,), self._get_status_code)
  File "/usr/local/lib/python3.6/site-packages/google/resumable_media/_helpers.py", line 93, in require_status_code
    status_code, u'Expected one of', *status_codes)
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/storages/backends/gcloud.py", line 75, in close
    self.blob.upload_from_file(self.file, content_type=self.mime_type)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 949, in upload_from_file
    _raise_from_invalid_response(exc)
  File "/usr/local/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 1735, in _raise_from_invalid_response
    raise exceptions.from_http_response(error.response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/foo.com/o?uploadType=multipart: Insufficient Permission
>>> default_storage.url('new docker')
'https://storage.googleapis.com/foo.appspot.com/new%20docker'
>>>

This seems to be entirely related to bucket permissions, so I assigned the Storage Admin and Storage Object Creator roles to the Google Cloud Build service account (through bucket -> manage permissions), but it still shows the same error.

Answer

A possible explanation for this is that you haven't assigned your cluster the correct scope. If that is the case, the nodes in the cluster would not have the authorisation required to write to Google Cloud Storage, which would explain the 403 error you're seeing.

If no scope is set when the cluster is created, the default scope is assigned, and this only provides read permission for Cloud Storage.

To check the cluster's current scopes using the Cloud SDK, you could run a 'describe' command from Cloud Shell, for example:

gcloud container clusters describe CLUSTER-NAME --zone ZONE

The oauthScopes section of the output contains the scopes currently assigned to the cluster's nodes.

The default read-only Cloud Storage scope would display as:

https://www.googleapis.com/auth/devstorage.read_only

If the Cloud Storage read/write scope is set, the output will display:

https://www.googleapis.com/auth/devstorage.read_write
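
As an additional check, the account and scopes in effect can also be read from inside the running container by querying the GCE metadata server, assuming the pod runs with the node's default service account rather than a mounted key file; a minimal sketch:

# Query the GCE metadata server from inside the pod to see which
# service account and OAuth scopes the node's credentials carry.
import requests

METADATA = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/'
headers = {'Metadata-Flavor': 'Google'}

print(requests.get(METADATA + 'email', headers=headers).text)
print(requests.get(METADATA + 'scopes', headers=headers).text)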

The scope can be set during cluster creation using the --scopes flag followed by the desired scope identifier. In your case, this would be "storage-rw". For example, you could run something like:

gcloud container clusters create CLUSTER-NAME --zone ZONE --scopes storage-rw

The storage-rw scope, combined with your service account, should then allow the nodes in your cluster to write to Cloud Storage.
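
Once the nodes have the storage-rw scope, you could confirm write access from the Django shell by uploading a test object with the underlying google-cloud-storage client (the same library shown in the traceback); a minimal sketch, with the bucket name assumed from the question:

# Verify write access with the google-cloud-storage client directly.
from google.cloud import storage

client = storage.Client()                    # uses application default credentials
bucket = client.bucket('foo.appspot.com')    # bucket name assumed from the question
blob = bucket.blob('storage_test')
blob.upload_from_string('hi')                # raises google.api_core.exceptions.Forbidden on 403
print(blob.public_url)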

Alternatively, if you don't want to recreate the cluster, you can create a new node pool with the desired scopes and then delete the old node pool. See the accepted answer to Is it necessary to recreate a Google Container Engine cluster to modify API permissions? for information on how to achieve this.

