What is the difference between _A0 & _S0 log files when storing Google App Engine Standard logs in GCS


Problem description


I have turned on the switch to send the logs of a GAE Standard app to a GCS bucket. As expected, I see a folder for each day. For every hour of every day I see a very big JSON file with the extension _S0.json. For some hours I also see a much smaller file with the extension _A0:.json. For instance:

01:00:00_01:59:59_S0.json & 01:00:00_01:59:59_A0:4679580000.json

What is the difference? I am trying to post-process the files and need to know.

Solution

Logs exported to GCS are sharded; _A0 and _S0 are simply identifiers of the log shards.

From Log entries in Google Cloud Storage (emphasis mine):

The leaf directories (DD/) contain multiple files, each of which holds the exported log entries for a time period specified in the file name. The files are sharded and their names end in a shard number, Sn or An (n=0, 1, 2, ...). For example, here are two files that might be stored within the directory my-gcs-bucket/syslog/2015/01/13/:

08:00:00_08:59:59_S0.json
08:00:00_08:59:59_S1.json

These two files together contain the syslog log entries for all instances during the hour beginning 0800 UTC. To get all the log entries, you must read all the shards for each time period—in this case, file shards 0 and 1. The number of file shards written can change for every time period depending on the volume of log entries.
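In practice this means a post-processing job cannot assume a single _S0 file per hour: it has to pick up every _Sn and _An shard for the period. Below is a minimal sketch of that using the google-cloud-storage Python client; the bucket name, prefix, and the assumption that each exported file is newline-delimited JSON (one log entry per line) are illustrative, not taken from the docs quoted above.

    import json
    import re
    from google.cloud import storage  # pip install google-cloud-storage

    # Illustrative names -- adjust to your own bucket and export layout:
    # <log-name>/YYYY/MM/DD/HH:MM:SS_HH:MM:SS_<shard>.json
    BUCKET = "my-gcs-bucket"
    PREFIX = "syslog/2015/01/13/"  # one leaf (day) directory

    # Matches both shard styles, e.g. ..._S0.json and ..._A0:4679580000.json
    SHARD_RE = re.compile(r"_[SA]\d+.*\.json$")

    def read_all_entries(bucket_name, prefix):
        """Yield every log entry from every shard under the given prefix."""
        client = storage.Client()
        for blob in client.list_blobs(bucket_name, prefix=prefix):
            if not SHARD_RE.search(blob.name):
                continue  # not a log shard
            # Assumption: each exported file is newline-delimited JSON.
            for line in blob.download_as_text().splitlines():
                if line.strip():
                    yield json.loads(line)

    if __name__ == "__main__":
        for entry in read_all_entries(BUCKET, PREFIX):
            print(entry.get("timestamp"), entry.get("logName"))

The important part is keying on the time-period prefix (e.g. 08:00:00_08:59:59_) and accepting any shard suffix; whether a given shard is _Sn or _An should not matter to a reader that simply wants all entries for the hour.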

I got to the page above via the last link in the section quoted below from Quotas and limits:

Logs ingestion allotment

Logging for App Engine apps is provided by Stackdriver. By default, logs are stored for an application free of charge for up to 7 days and 5GB. Logs older than the maximum retention time are deleted, and attempts to store above the free ingestion limit of 5 gigabytes will result in an error. You can update to the Premium Tier for greater storage capacity and retention length. See Stackdriver pricing for more information on logging rates and limits. If you want to retain your logs for longer than what Stackdriver allows, you can export logs to Google Cloud Storage, Google BigQuery, or Google Cloud Pub/Sub.
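For completeness, the export itself (the "switch" mentioned in the question) corresponds to a Stackdriver Logging sink whose destination is a GCS bucket. Here is a hedged sketch with the google-cloud-logging Python client; the sink name, bucket, and filter are placeholders of mine, and the same thing can be configured from the Logging console or gcloud instead.

    from google.cloud import logging  # pip install google-cloud-logging

    # Placeholder names -- not taken from the question or the docs above.
    SINK_NAME = "gae-logs-to-gcs"
    BUCKET = "my-gcs-bucket"
    LOG_FILTER = 'resource.type="gae_app"'  # App Engine logs

    client = logging.Client()
    sink = client.sink(
        SINK_NAME,
        filter_=LOG_FILTER,
        destination="storage.googleapis.com/" + BUCKET,
    )
    if not sink.exists():
        sink.create()
    # Note: the sink's writer identity must be granted write access
    # (e.g. roles/storage.objectCreator) on the destination bucket.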
