Spring Boot app not working on Kubernetes cluster

Problem description

I'm developing an application with a microservice architecture using JHipster. I can run my services in dev mode even though I get this warning, but when I run them on a Kubernetes cluster, after I get this warning the pod restarts itself over and over in a loop. I have 4 microservices and a gateway. All of them behave the same. Thank you in advance.

Here is the warning:

2020-05-06 06:06:51.415 WARN 1 --- [scoveryClient-1] c.netflix.discovery.TimedSupervisorTask : task supervisor timed out 
java.util.concurrent.TimeoutException: null 
at java.base/java.util.concurrent.FutureTask.get(Unknown Source) 
at com.netflix.discovery.TimedSupervisorTask.run(TimedSupervisorTask.java:68) 
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
at java.base/java.util.concurrent.FutureTask.run(Unknown Source) 
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) 
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
at java.base/java.lang.Thread.run(Unknown Source)

application.yml file:

# ===================================================================
# Spring Boot configuration.
#
# This configuration will be overridden by the Spring profile you use,
# for example application-dev.yml if you use the "dev" profile.
#
# More information on profiles: https://www.jhipster.tech/profiles/
# More information on configuration properties: https://www.jhipster.tech/common-application-properties/
# ===================================================================

# ===================================================================
# Standard Spring Boot properties.
# Full reference is available at:
# http://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html
# ===================================================================

eureka:
  client:
    enabled: true
    healthcheck:
      enabled: true
    fetch-registry: true
    register-with-eureka: true
    instance-info-replication-interval-seconds: 10
    registry-fetch-interval-seconds: 10
  instance:
    appname: derinconfiguration
    instanceId: derinconfiguration:${spring.application.instance-id:${random.value}}
    lease-renewal-interval-in-seconds: 5
    lease-expiration-duration-in-seconds: 10
    status-page-url-path: ${management.endpoints.web.base-path}/info
    health-check-url-path: ${management.endpoints.web.base-path}/health
    metadata-map:
      zone: primary # This is needed for the load balancer
      profile: ${spring.profiles.active}
      version: #project.version#
      git-version: ${git.commit.id.describe:}
      git-commit: ${git.commit.id.abbrev:}
      git-branch: ${git.branch:}


# See https://github.com/Netflix/Hystrix/wiki/Configuration
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000
ribbon:
  ReadTimeout: 60000
  connection-timeout: 3000
  eureka:
    enabled: true
zuul:
  ignoredServices: '*'
  host:
    time-to-live: -1
    connect-timeout-millis: 5000
    max-per-route-connections: 10000
    max-total-connections: 5000
    socket-timeout-millis: 60000
  semaphore:
    max-semaphores: 500

management:
  endpoints:
    web:
      base-path: /management
      exposure:
        include: ['configprops', 'env', 'health', 'info', 'jhimetrics', 'logfile', 'loggers', 'prometheus', 'threaddump']
  endpoint:
    health:
      show-details: when_authorized
      roles: 'ROLE_ADMIN'
    jhimetrics:
      enabled: true
  info:
    git:
      mode: full
  health:
    mail:
      enabled: false # When using the MailService, configure an SMTP server and set this to true
  metrics:
    export:
      # Prometheus is the default metrics backend
      prometheus:
        enabled: true
        step: 60
    enable:
      http: true
      jvm: true
      logback: true
      process: true
      system: true
    distribution:
      percentiles-histogram:
        all: true
      percentiles:
        all: 0, 0.5, 0.75, 0.95, 0.99, 1.0
    tags:
      application: ${spring.application.name}
    web:
      server:
        request:
          autotime:
            enabled: true

spring:
  autoconfigure:
    exclude: org.springframework.boot.actuate.autoconfigure.metrics.jdbc.DataSourcePoolMetricsAutoConfiguration
  application:
    name: derinconfiguration
  jmx:
    enabled: false
  data:
    jpa:
      repositories:
        bootstrap-mode: deferred
  jpa:
    open-in-view: false
    properties:
      hibernate.jdbc.time_zone: UTC
      hibernate.id.new_generator_mappings: true
      hibernate.connection.provider_disables_autocommit: true
      hibernate.cache.use_second_level_cache: true
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: false
      # modify batch size as necessary
      hibernate.jdbc.batch_size: 25
      hibernate.order_inserts: true
      hibernate.order_updates: true
      hibernate.query.fail_on_pagination_over_collection_fetch: true
      hibernate.query.in_clause_parameter_padding: true
      hibernate.cache.region.factory_class: com.hazelcast.hibernate.HazelcastCacheRegionFactory
      hibernate.cache.use_minimal_puts: true
      hibernate.cache.hazelcast.instance_name: derinconfiguration
      hibernate.cache.hazelcast.use_lite_member: true
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
  messages:
    basename: i18n/messages
  main:
    allow-bean-definition-overriding: true
  mvc:
    favicon:
      enabled: false
  task:
    execution:
      thread-name-prefix: derinconfiguration-task-
      pool:
        core-size: 2
        max-size: 50
        queue-capacity: 10000
    scheduling:
      thread-name-prefix: derinconfiguration-scheduling-
      pool:
        size: 2
  thymeleaf:
    mode: HTML
  output:
    ansi:
      console-available: true
security:
  oauth2:
    client:
      access-token-uri: http://localhost:9080/auth/realms/jhipster/protocol/openid-connect/token
      user-authorization-uri: http://localhost:9080/auth/realms/jhipster/protocol/openid-connect/auth
      client-id: web_app
      client-secret: web_app
      scope: openid profile email
    resource:
      filter-order: 3
      user-info-uri: http://localhost:9080/auth/realms/jhipster/protocol/openid-connect/userinfo

server:
  servlet:
    session:
      cookie:
        http-only: true

# Properties to be exposed on the /info management endpoint
info:
  # Comma separated list of profiles that will trigger the ribbon to show
  display-ribbon-on-profiles: 'dev'

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================

jhipster:
  clientApp:
    name: 'derinconfigurationApp'
  # By default CORS is disabled. Uncomment to enable.
  # cors:
  #     allowed-origins: "*"
  #     allowed-methods: "*"
  #     allowed-headers: "*"
  #     exposed-headers: "Authorization,Link,X-Total-Count"
  #     allow-credentials: true
  #     max-age: 1800
  mail:
    from: derinconfiguration@localhost
  swagger:
    default-include-pattern: /api/.*
    title: derinconfiguration API
    description: derinconfiguration API documentation
    version: 0.0.1
    terms-of-service-url:
    contact-name:
    contact-url:
    contact-email:
    license:
    license-url:
kafka:
  bootstrap-servers: localhost:9092
  consumer:
    key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
    group.id: derinconfiguration
    auto.offset.reset: earliest
  producer:
    key.serializer: org.apache.kafka.common.serialization.StringSerializer
    value.serializer: org.apache.kafka.common.serialization.StringSerializer
# ===================================================================
# Application specific properties
# Add your own application properties here, see the ApplicationProperties class
# to have type-safe configuration, like in the JHipsterProperties above
#
# More documentation is available at:
# https://www.jhipster.tech/common-application-properties/
# ===================================================================

# application:

Kubernetes configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: derinconfiguration
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: derinconfiguration
      version: 'v1'
  template:
    metadata:
      labels:
        app: derinconfiguration
        version: 'v1'
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - derinconfiguration
                topologyKey: kubernetes.io/hostname
              weight: 100
      initContainers:
        - name: init-ds
          image: busybox:latest
          command:
            - '/bin/sh'
            - '-c'
            - |
              while true
              do
                rt=$(nc -z -w 1 192.168.1.156 5432)
                if [ $? -eq 0 ]; then
                  echo "DB is UP"
                  break
                fi
                echo "DB is not yet reachable;sleep for 10s before retry"
                sleep 10
              done
      containers:
        - name: derinconfiguration-app
          image: 192.168.1.150:5000/derin/derinconfiguration
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: prod
            - name: SPRING_CLOUD_CONFIG_URI
              value: http://admin:${jhipster.registry.password}@jhipster-registry.default.svc.cluster.local:8761/config
            - name: JHIPSTER_REGISTRY_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: registry-secret
                  key: registry-admin-password
            - name: EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE
              value: http://admin:${jhipster.registry.password}@jhipster-registry.default.svc.cluster.local:8761/eureka/
            - name: SPRING_DATASOURCE_URL
              value: jdbc:postgresql://192.168.1.156:5432/derinfw
            - name: SPRING_DATASOURCE_USERNAME
              value: admin
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secret
                  key: postgresql-admin-password
            - name: KAFKA_CONSUMER_KEY_DESERIALIZER
              value: 'org.apache.kafka.common.serialization.StringDeserializer'
            - name: KAFKA_CONSUMER_VALUE_DESERIALIZER
              value: 'org.apache.kafka.common.serialization.StringDeserializer'
            - name: KAFKA_CONSUMER_BOOTSTRAP_SERVERS
              value: 'jhipster-kafka.default.svc.cluster.local:9092'
            - name: KAFKA_CONSUMER_GROUP_ID
              value: 'derinconfiguration'
            - name: KAFKA_CONSUMER_AUTO_OFFSET_RESET
              value: 'earliest'
            - name: KAFKA_PRODUCER_BOOTSTRAP_SERVERS
              value: 'jhipster-kafka.default.svc.cluster.local:9092'
            - name: KAFKA_PRODUCER_KEY_DESERIALIZER
              value: 'org.apache.kafka.common.serialization.StringDeserializer'
            - name: KAFKA_PRODUCER_VALUE_DESERIALIZER
              value: 'org.apache.kafka.common.serialization.StringDeserializer'
            - name: JHIPSTER_METRICS_LOGS_ENABLED
              value: 'true'
            - name: JHIPSTER_LOGGING_LOGSTASH_ENABLED
              value: 'true'
            - name: JHIPSTER_LOGGING_LOGSTASH_HOST
              value: jhipster-logstash
            - name: SPRING_ZIPKIN_ENABLED
              value: 'true'
            - name: SPRING_ZIPKIN_BASE_URL
              value: http://jhipster-zipkin
            - name: SPRING_SLEUTH_PROPAGATION_KEYS
              value: 'x-request-id,x-ot-span-context'
            - name: JAVA_OPTS
              value: ' -Xmx256m -Xms256m'
          resources:
            requests:
              memory: '512Mi'
              cpu: '500m'
            limits:
              memory: '1Gi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8084
          readinessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 20
            periodSeconds: 15
            failureThreshold: 6
          livenessProbe:
            httpGet:
              path: /management/health
              port: http
            initialDelaySeconds: 120

Answer

Have you tried disabling the readiness and liveness probes? – Nick

After I removed the readiness and liveness probes, it is working now. It seems it needs extra time to start up. – mehmet

That is a quite common situation: for some reason, it takes a long time for the probe to return a result.

Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe.

The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worst-case startup time.

livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10
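
For completeness, here is a sketch of the companion startup probe, assuming the same /healthz endpoint and liveness-port placeholder as the snippet above; the failureThreshold of 30 is an example value, chosen so that failureThreshold * periodSeconds allows up to 300 seconds of startup time. Startup probes are available since Kubernetes 1.16:

startupProbe:
  httpGet:
    path: /healthz        # same placeholder endpoint as the liveness probe above
    port: liveness-port
  failureThreshold: 30    # example value: 30 * 10s = up to 300s allowed for startup
  periodSeconds: 10

While the startup probe has not yet succeeded, the liveness probe is held back; once it passes, the strict liveness probe above (failureThreshold: 1) takes over.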

Probes have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks:

initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.

periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

Looking at your YAML file, I'd start with changes to the readiness probe first:

readinessProbe:
  httpGet:
    path: /management/health
    port: http
  initialDelaySeconds: 20
  periodSeconds: 15
  failureThreshold: 6

livenessProbe:
  httpGet:
    path: /management/health
    port: http
  initialDelaySeconds: 120

That means that after 20 + (6 * 15) = 110 seconds the readiness probe will fail. At the same time, the liveness probe delay is set to 120 seconds. I'd increase the delay for the readiness probe to the same value as the liveness one, in order to prevent a situation where traffic is sent to a Pod that is not completely up and running.
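
A minimal sketch of that adjustment against the Deployment above (only the readiness delay changes; the paths and port are taken from the original manifest):

readinessProbe:
  httpGet:
    path: /management/health
    port: http
  initialDelaySeconds: 120  # raised from 20 to match the liveness probe delay
  periodSeconds: 15
  failureThreshold: 6
livenessProbe:
  httpGet:
    path: /management/health
    port: http
  initialDelaySeconds: 120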

Below is some info from the official Kubernetes documentation on probes:

Liveness probes

Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.
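
As a minimal illustration, here is a command-based liveness probe in the style of the Kubernetes documentation examples; the /tmp/healthy file is just a stand-in for whatever signals that your container is still healthy:

livenessProbe:
  exec:
    command:       # the probe succeeds while this command exits with status 0
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5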

Readiness probes

Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. In such cases, you don’t want to kill the application, but you don’t want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

Note: Readiness probes run on the container during its whole lifecycle.

Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.

readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
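
In that example, the probe keeps succeeding as long as cat /tmp/healthy exits with status 0, and the container is marked not ready as soon as the file disappears.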

Hope that helps.
