Google BigQuery - Loading File From GCS Failed with "Not Found", but the file exists

Problem Description

We have a strange issue that happens quite often.

We have a process which fetches files from sources and loads them into GCS. Then, and only if the file was uploaded successfully, we try to load it into the BigQuery table, and we get the error "Not found: Uris List of uris (possibly truncated): json: file_name: ...".
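Roughly, the upload-and-verify step looks like the following sketch with the google-cloud-storage Python client; the bucket name, object path, and local file below are placeholders, not our real ones:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Upload one incoming file into the batch directory.
blob = bucket.blob("path_to_dir/file_0001.json")
blob.upload_from_filename("/tmp/file_0001.json")

# Only proceed to the BigQuery load if the upload is actually visible.
assert blob.exists(), "upload did not complete"
```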

After a deep investigation, everything is supposed to be fine, and we don't know what has changed. Within the relevant time frame, the file referenced in the job exists in Cloud Storage, and it was uploaded to GCS two minutes before BigQuery tried to fetch it.

It should be noted that we load every file as part of a whole batch directory in Cloud Storage, like gs://<bucket>/path_to_dir/*. Is that still supported? Also, the file sizes are rather small - from a few bytes to a few KB. Does that matter?
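The load itself looks roughly like this with the google-cloud-bigquery Python client; the project, dataset, and table names are placeholders, and NEWLINE_DELIMITED_JSON is used since the error message refers to json files:

```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
)

# Wildcard URI covering the whole batch directory.
load_job = client.load_table_from_uri(
    "gs://my-bucket/path_to_dir/*",
    "my-project.my_dataset.my_table",  # placeholder destination
    job_config=job_config,
)

# result() blocks until the job finishes and raises on failure,
# which is where the "Not found: Uris ..." error surfaces.
load_job.result()
```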

Job IDs for checking:
load_file_8e4e16f737084ba59ce0ba89075241b7
load_file_6c13c25e1fc54a088af40199eb86200d

Solution

The solution was to move from a multi-region bucket (which had been created before the regional location type was available) to a regional one. Since we moved, we have never faced this issue again.
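For illustration, the move looks roughly like this with the google-cloud-storage Python client; a bucket's location cannot be changed in place, so a new regional bucket is created and the objects are copied over (the bucket names and region below are placeholders):

```python
from google.cloud import storage

client = storage.Client()

# The existing multi-region bucket (placeholder name).
old_bucket = client.bucket("my-multiregion-bucket")

# A bucket's location is fixed at creation, so create a new bucket
# pinned to a single region (the region here is just an example).
new_bucket = client.create_bucket("my-regional-bucket", location="us-central1")

# Copy every object over; loads can then point at the new bucket,
# and the old one can be deleted once nothing references it.
for blob in client.list_blobs(old_bucket):
    old_bucket.copy_blob(blob, new_bucket)
```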
