Debugging a container in a crash loop on Kubernetes


Problem description

In GKE, I have a pod with two containers. They use the same image, and the only difference is that I am passing them slightly different flags. One runs fine, the other goes into a crash loop. How can I debug the reason for the failure?

My pod definition is:

apiVersion: v1
kind: ReplicationController
metadata:
  name: doorman-client
spec:
  replicas: 10
  selector:
    app: doorman-client
  template:
    metadata:
      name: doorman-client
      labels:
        app: doorman-client
    spec:
      containers:
        - name: doorman-client-proportional
          resources:
            limits:
              cpu: 10m
          image: gcr.io/google.com/doorman/doorman-client:v0.1.1
          command: 
            - client
            - -port=80
            - -count=50
            - -initial_capacity=15
            - -min_capacity=5
            - -max_capacity=2000
            - -increase_chance=0.1
            - -decrease_chance=0.05
            - -step=5
            - -resource=proportional
            - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC)
            - -vmodule=doorman_client=2
            - --logtostderr
          ports:
            - containerPort: 80
              name: http

        - name: doorman-client-fair
          resources:
            limits:
              cpu: 10m
          image: gcr.io/google.com/doorman/doorman-client:v0.1.1
          command: 
            - client
            - -port=80
            - -count=50
            - -initial_capacity=15
            - -min_capacity=5
            - -max_capacity=2000
            - -increase_chance=0.1
            - -decrease_chance=0.05
            - -step=5
            - -resource=fair
            - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC)
            - -vmodule=doorman_client=2
            - --logtostderr
          ports:
            - containerPort: 80
              name: http

kubectl describe gives me the following:

6:06 [0] (szopa szopa-macbookpro):~/GOPATH/src/github.com/youtube/doorman$ kubectl describe pod doorman-client-tylba
Name:               doorman-client-tylba
Namespace:          default
Image(s):           gcr.io/google.com/doorman/doorman-client:v0.1.1,gcr.io/google.com/doorman/doorman-client:v0.1.1
Node:               gke-doorman-loadtest-d75f7d0f-node-k9g6/10.240.0.4
Start Time:         Sun, 21 Feb 2016 16:05:42 +0100
Labels:             app=doorman-client
Status:             Running
Reason:
Message:
IP:             10.128.4.182
Replication Controllers:    doorman-client (10/10 replicas created)
Containers:
  doorman-client-proportional:
    Container ID:   docker://0bdcb8269c5d15a4f99ccc0b0ee04bf3e9fd0db9fd23e9c0661e06564e9105f7
    Image:      gcr.io/google.com/doorman/doorman-client:v0.1.1
    Image ID:       docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc
    QoS Tier:
      cpu:  Guaranteed
    Limits:
      cpu:  10m
    Requests:
      cpu:      10m
    State:      Running
      Started:      Sun, 21 Feb 2016 16:05:42 +0100
    Ready:      True
    Restart Count:  0
    Environment Variables:
  doorman-client-fair:
    Container ID:   docker://92fea92f1307b943d0ea714441417d4186c5ac6a17798650952ea726d18dba68
    Image:      gcr.io/google.com/doorman/doorman-client:v0.1.1
    Image ID:       docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc
    QoS Tier:
      cpu:  Guaranteed
    Limits:
      cpu:  10m
    Requests:
      cpu:          10m
    State:          Running
      Started:          Sun, 21 Feb 2016 16:06:03 +0100
    Last Termination State: Terminated
      Reason:           Error
      Exit Code:        0
      Started:          Sun, 21 Feb 2016 16:05:43 +0100
      Finished:         Sun, 21 Feb 2016 16:05:44 +0100
    Ready:          False
    Restart Count:      2
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  default-token-ihani:
    Type:   Secret (a secret that should populate this volume)
    SecretName: default-token-ihani
Events:
  FirstSeen LastSeen    Count   From                            SubobjectPath                   Reason      Message
  ───────── ────────    ─────   ────                            ─────────────                   ──────      ───────
  29s       29s     1   {scheduler }                                                Scheduled   Successfully assigned doorman-client-tylba to gke-doorman-loadtest-d75f7d0f-node-k9g6
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   implicitly required container POD       Pulled      Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   implicitly required container POD       Created     Created with docker id 5013851c67d9
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   implicitly required container POD       Started     Started with docker id 5013851c67d9
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-proportional}    Created     Created with docker id 0bdcb8269c5d
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-proportional}    Started     Started with docker id 0bdcb8269c5d
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Created     Created with docker id ed0928176958
  29s       29s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Started     Started with docker id ed0928176958
  28s       28s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Created     Created with docker id 0a73290085b6
  28s       28s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Started     Started with docker id 0a73290085b6
  18s       18s     1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Backoff     Back-off restarting failed docker container
  8s        8s      1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Started     Started with docker id 92fea92f1307
  29s       8s      4   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Pulled      Container image "gcr.io/google.com/doorman/doorman-client:v0.1.1" already present on machine
  8s        8s      1   {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6}   spec.containers{doorman-client-fair}        Created     Created with docker id 92fea92f1307

As you can see, the exit code is zero, with the message being just "Error", which is not super helpful.

I have tried:

  • changing the order of the definitions (the first one always runs, the second one always fails);

  • changing the ports used to be different (no effect);

  • changing the names of the ports to be different (no effect).

Recommended answer

It's tough to say exactly without knowing more about your app, but the two containers definitely can't use the same port if they're part of the same pod. In Kubernetes, each pod gets its own IP address, and every container in the pod shares that same IP address. That's why you can't have more than one of them listening on the same port unless you split them into separate pods.
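As a rough sketch of that suggestion, the workload above could be split into two single-container ReplicationControllers, one per client variant, so each container keeps port 80 but in its own pod. The manifest below is illustrative only: the controller name and label are not from the original setup, only the changed flags are shown, and the "fair" controller would look identical apart from the name, selector and -resource flag.

apiVersion: v1
kind: ReplicationController
metadata:
  name: doorman-client-proportional   # illustrative name, one controller per variant
spec:
  replicas: 10
  selector:
    app: doorman-client-proportional
  template:
    metadata:
      labels:
        app: doorman-client-proportional
    spec:
      containers:
        - name: doorman-client
          resources:
            limits:
              cpu: 10m
          image: gcr.io/google.com/doorman/doorman-client:v0.1.1
          command:
            - client
            - -port=80                # port 80 is now the only listener in this pod
            - -resource=proportional  # the fair controller would pass -resource=fair
            - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC)
            - --logtostderr
            # remaining flags (-count, -initial_capacity, ...) as in the original spec
          ports:
            - containerPort: 80
              name: http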

To get more info, I'd recommend using the kubectl logs [pod] [optional container name] command, which can be used to get the stdout/stderr from a container. The -p flag can be used to get the logs from the most recently failed container.
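For example, with the pod and container names from the describe output above, something along these lines should show why the doorman-client-fair container keeps exiting (-p, short for --previous, asks for the logs of the last terminated instance):

# logs of the currently running doorman-client-fair container
kubectl logs doorman-client-tylba doorman-client-fair

# logs of the previous, crashed instance of that container
# (newer kubectl versions take the container name via -c instead of positionally)
kubectl logs -p doorman-client-tylba doorman-client-fair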

