stackdriver-metadata-agent-cluster-level gets OOMKilled
Question
I updated a GKE cluster from 1.13 to 1.15.9-gke.12. In the process I switched from legacy logging to Stackdriver Kubernetes Engine Monitoring. Now I have the problem that the stackdriver-metadata-agent-cluster-level pod keeps restarting because it gets OOMKilled.
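For reference, a minimal sketch of how the restart reason can be confirmed (the pod name suffix below is a placeholder; the actual hash will differ per cluster):

# List the agent pods and their restart counts.
kubectl get pods -n kube-system | grep stackdriver-metadata-agent

# Inspect one pod; under "Last State" you should see "Reason: OOMKilled"
# if the container was killed for exceeding its memory limit.
# The -abc123 suffix is a placeholder for the actual pod hash.
kubectl describe pod -n kube-system stackdriver-metadata-agent-cluster-level-abc123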
The memory seems to be just fine though.
The logs also look just fine (same as the logs of a newly created cluster):
I0305 08:32:33.436613 1 log_spam.go:42] Command line arguments:
I0305 08:32:33.436726 1 log_spam.go:44] argv[0]: '/k8s_metadata'
I0305 08:32:33.436753 1 log_spam.go:44] argv[1]: '-logtostderr'
I0305 08:32:33.436779 1 log_spam.go:44] argv[2]: '-v=1'
I0305 08:32:33.436818 1 log_spam.go:46] Process id 1
I0305 08:32:33.436859 1 log_spam.go:50] Current working directory /
I0305 08:32:33.436901 1 log_spam.go:52] Built on Jun 27 20:15:21 (1561666521)
at gcm-agent-dev-releaser@ikle14.prod.google.com:/google/src/files/255462966/depot/branches/gcm_k8s_metadata_release_branch/255450506.1/OVERLAY_READONLY/google3
as //cloud/monitoring/agents/k8s_metadata:k8s_metadata
with gc go1.12.5 for linux/amd64
from changelist 255462966 with baseline 255450506 in a mint client based on //depot/branches/gcm_k8s_metadata_release_branch/255450506.1/google3
Build label: gcm_k8s_metadata_20190627a_RC00
Build tool: Blaze, release blaze-2019.06.17-2 (mainline @253503028)
Build target: //cloud/monitoring/agents/k8s_metadata:k8s_metadata
I0305 08:32:33.437188 1 trace.go:784] Starting tracingd dapper tracing
I0305 08:32:33.437315 1 trace.go:898] Failed loading config; disabling tracing: open /export/hda3/trace_data/trace_config.proto: no such file or directory
W0305 08:32:33.536093 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0305 08:32:33.936066 1 main.go:134] Initiating watch for { v1 nodes} resources
I0305 08:32:33.936169 1 main.go:134] Initiating watch for { v1 pods} resources
I0305 08:32:33.936231 1 main.go:134] Initiating watch for {batch v1beta1 cronjobs} resources
I0305 08:32:33.936297 1 main.go:134] Initiating watch for {apps v1 daemonsets} resources
I0305 08:32:33.936361 1 main.go:134] Initiating watch for {extensions v1beta1 daemonsets} resources
I0305 08:32:33.936420 1 main.go:134] Initiating watch for {apps v1 deployments} resources
I0305 08:32:33.936489 1 main.go:134] Initiating watch for {extensions v1beta1 deployments} resources
I0305 08:32:33.936552 1 main.go:134] Initiating watch for { v1 endpoints} resources
I0305 08:32:33.936627 1 main.go:134] Initiating watch for {extensions v1beta1 ingresses} resources
I0305 08:32:33.936698 1 main.go:134] Initiating watch for {batch v1 jobs} resources
I0305 08:32:33.936777 1 main.go:134] Initiating watch for { v1 namespaces} resources
I0305 08:32:33.936841 1 main.go:134] Initiating watch for {apps v1 replicasets} resources
I0305 08:32:33.936897 1 main.go:134] Initiating watch for {extensions v1beta1 replicasets} resources
I0305 08:32:33.936986 1 main.go:134] Initiating watch for { v1 replicationcontrollers} resources
I0305 08:32:33.937067 1 main.go:134] Initiating watch for { v1 services} resources
I0305 08:32:33.937135 1 main.go:134] Initiating watch for {apps v1 statefulsets} resources
I0305 08:32:33.937157 1 main.go:142] All resources are being watched, agent has started successfully
I0305 08:32:33.937168 1 main.go:145] No statusz port provided; not starting a server
I0305 08:32:37.134913 1 binarylog.go:95] Starting disk-based binary logging
I0305 08:32:37.134965 1 binarylog.go:265] rpc: flushed binary log to ""
I already tried disabling the logging and re-enabling it, without success. The pod keeps restarting all the time (more or less every minute).
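For completeness, this is roughly how the toggling can be done with gcloud (cluster name and zone are placeholders, and the flags reflect gcloud as it was around the GKE 1.15 era):

# Disable the integration (my-cluster / us-central1-a are placeholders).
gcloud container clusters update my-cluster --zone us-central1-a \
  --logging-service none --monitoring-service none

# Re-enable Stackdriver Kubernetes Engine Monitoring.
gcloud container clusters update my-cluster --zone us-central1-a \
  --enable-stackdriver-kubernetes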
Does anyone have the same experience?
Answer
The issue is caused by the memory LIMIT set on the metadata-agent deployment being too low, so the pod gets OOM-killed because it requires more memory to work properly.
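A quick way to see the limit currently applied (a sketch; the deployment name matches the pod name from the question):

# Print each container's resource requests/limits on the agent deployment.
kubectl get deployment -n kube-system stackdriver-metadata-agent-cluster-level \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'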
There is a workaround for this issue until it is fixed.
You can overwrite the base resources in the configmap of the metadata-agent with:
kubectl edit cm -n kube-system metadata-agent-config
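If you prefer not to open an editor, the same change can be sketched as a one-shot merge patch (assuming a baseMemory of 50Mi, as suggested below; the \n escapes encode the same YAML shown in the configmap further down):

kubectl patch configmap -n kube-system metadata-agent-config --type merge \
  -p '{"data":{"NannyConfiguration":"apiVersion: nannyconfig/v1alpha1\nkind: NannyConfiguration\nbaseMemory: 50Mi"}}'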
Setting baseMemory: 50Mi should be enough; if it doesn't work, use a higher value such as 100Mi or 200Mi.
So the metadata-agent-config configmap should look something like this:
apiVersion: v1
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
    baseMemory: 50Mi
kind: ConfigMap
Note also that you need to restart the deployment, as the config map doesn't get picked up automatically:
kubectl delete deployment -n kube-system stackdriver-metadata-agent-cluster-level
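On GKE the addon manager should recreate the deployment shortly afterwards; one way to confirm the new pod stays healthy is to watch its restart count:

# Watch the recreated pod; the RESTARTS count should stay at 0
# once the memory limit is sufficient.
kubectl get pods -n kube-system -w | grep stackdriver-metadata-agent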
For more details, look into the addon-resizer documentation.