LivenessProbe is failing but port-forward is working on the same port
Question
I have the following deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gofirst
  labels:
    app: gofirst
spec:
  selector:
    matchLabels:
      app: gofirst
  template:
    metadata:
      labels:
        app: gofirst
    spec:
      restartPolicy: Always
      containers:
      - name: gofirst
        image: lbvenkatesh/gofirst:0.0.5
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: http
            httpHeaders:
            - name: "X-Health-Check"
              value: "1"
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: http
            httpHeaders:
            - name: "X-Health-Check"
              value: "1"
          initialDelaySeconds: 30
          periodSeconds: 10
And this is my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: gofirst
  labels:
    app: gofirst
spec:
  publishNotReadyAddresses: true
  type: NodePort
  selector:
    app: gofirst
  ports:
  - port: 8080
    targetPort: http
    name: http
"gofirst" is a simple web application written in Go using Gin. Here is its Dockerfile:
FROM golang:latest
LABEL MAINTAINER='Venkatesh Laguduva <lbvenkatesh@gmail.com>'
RUN mkdir /app
ADD . /app/
RUN apt -y update && apt -y install git
RUN go get github.com/gin-gonic/gin
RUN go get -u github.com/RaMin0/gin-health-check
WORKDIR /app
RUN go build -o main .
ARG verArg="0.0.1"
ENV VERSION=$verArg
ENV PORT=8080
ENV GIN_MODE=release
EXPOSE 8080
CMD ["/app/main"]
I have deployed this application in Minikube, and when I describe the pods I see these events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m (x2 over 10m) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
Normal Scheduled 10m default-scheduler Successfully assigned default/gofirst-95fc8668c-6r4qc to m01
Normal Pulling 10m kubelet, m01 Pulling image "lbvenkatesh/gofirst:0.0.5"
Normal Pulled 10m kubelet, m01 Successfully pulled image "lbvenkatesh/gofirst:0.0.5"
Normal Killing 8m13s (x2 over 9m13s) kubelet, m01 Container gofirst failed liveness probe, will be restarted
Normal Pulled 8m13s (x2 over 9m12s) kubelet, m01 Container image "lbvenkatesh/gofirst:0.0.5" already present on machine
Normal Created 8m12s (x3 over 10m) kubelet, m01 Created container gofirst
Normal Started 8m12s (x3 over 10m) kubelet, m01 Started container gofirst
Warning Unhealthy 7m33s (x7 over 9m33s) kubelet, m01 Liveness probe failed: Get http://172.17.0.4:8080/health: dial tcp 172.17.0.4:8080: connect: connection refused
Warning Unhealthy 5m35s (x12 over 9m25s) kubelet, m01 Readiness probe failed: Get http://172.17.0.4:8080/health: dial tcp 172.17.0.4:8080: connect: connection refused
Warning BackOff 31s (x17 over 4m13s) kubelet, m01 Back-off restarting failed container
I tried the sample "hello-world" container and it worked fine with "minikube service hello-world", but when I tried the same with "minikube service gofirst", I got a connection error in the browser.
I must be making a relatively simple mistake, but I am unable to locate it. Please go through my yaml and Dockerfile and let me know if I am doing anything wrong.
Answer
I've reproduced your scenario and faced the same issue. So I decided to remove the liveness and readiness probes in order to log in to the pod and investigate.
Here is the yaml I used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gofirst
  labels:
    app: gofirst
spec:
  selector:
    matchLabels:
      app: gofirst
  template:
    metadata:
      labels:
        app: gofirst
    spec:
      restartPolicy: Always
      containers:
      - name: gofirst
        image: lbvenkatesh/gofirst:0.0.5
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 8080
I logged in to the pod to check whether the application is listening on the port the probes are testing:
kubectl exec -ti gofirst-65cfc7556-bbdcg -- bash
Then I installed netstat:
# apt update
# apt install net-tools
Checked if the application is running:
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 10:06 ? 00:00:00 /app/main
root 9 0 0 10:06 pts/0 00:00:00 sh
root 15 9 0 10:07 pts/0 00:00:00 ps -ef
And finally checked if port 8080 is listening:
# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN
tcp 0 0 10.28.0.9:56106 151.101.184.204:80 TIME_WAIT
tcp 0 0 10.28.0.9:56130 151.101.184.204:80 TIME_WAIT
tcp 0 0 10.28.0.9:56104 151.101.184.204:80 TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
As we can see, the application is listening on 127.0.0.1 only, so it accepts connections from inside the pod's own network namespace but not on the pod IP. This also explains the behaviour in the title: kubectl port-forward tunnels into the pod and connects via loopback, so it works, while the kubelet's probes dial the pod IP (172.17.0.4:8080) and get "connection refused". The expected local address in the netstat output is 0.0.0.0:8080.
Hope this helps you solve the problem.