How to use Istio's Prometheus to configure kubernetes hpa?


Question

We have an Istio cluster and we are trying to configure horizontal pod autoscale for Kubernetes. We want to use the request count as our custom metric for hpa. How can we utilise Istio's Prometheus for the same purpose?

Answer

This question turned out to be much more complex than I expected, but finally here I am with the answer.

Firstly, you need to configure your application to provide custom metrics. This is on the side of the application being developed. Here is an example of how to do it in Go: Watching Metrics With Prometheus. A minimal sketch of the same idea follows.
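
As a minimal sketch of that approach (not the podinfo code itself): a Go program that registers a request counter with the official client_golang library and exposes it on /metrics. The metric and port names here are illustrative; adapters such as the Prometheus adapter typically strip the _total suffix, which is how a counter like this can later surface as http_requests in the custom metrics API.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// httpRequests counts every handled request; registered with Prometheus below.
var httpRequests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "http_requests_total", // illustrative name, not podinfo's actual metric
	Help: "Total number of HTTP requests handled.",
})

func main() {
	prometheus.MustRegister(httpRequests)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		httpRequests.Inc()
		w.Write([]byte("ok"))
	})

	// The endpoint Prometheus scrapes; matches the default
	// prometheus.io/path of /metrics.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9898", nil))
}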

Secondly, you need to define and deploy a Deployment of the application (or a Pod, or whatever you want) to Kubernetes, for example:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
          - ./podinfo
          - -port=9898
          - -logtostderr=true
          - -v=2
        volumeMounts:
          - name: metadata
            mountPath: /etc/podinfod/metadata
            readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
        - name: metadata
          downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "annotations"
                fieldRef:
                  fieldPath: metadata.annotations
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
    - port: 9898
      targetPort: 9898
      nodePort: 31198
      protocol: TCP
  selector:
    app: podinfo
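
Once the Deployment and Service are up, a quick way to confirm that the pod really serves Prometheus metrics is to hit the NodePort directly (<K8S-IP> is any node's address, the same placeholder used in the load test later):

curl http://<K8S-IP>:31198/metrics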

Pay attention to the field annotations: prometheus.io/scrape: 'true'. It is required so that Prometheus reads metrics from the resource. Also note that there are two more annotations which have default values; if you change them in your application, you need to add them with the correct values (an illustrative snippet follows the list):

  • prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.
  • prometheus.io/port: Scrape the pod on the indicated port instead of the pod’s declared ports (default is a port-free target if none are declared).
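
For instance, if the application served its metrics at /custom-metrics on port 9898, the pod template metadata would carry (values here are purely illustrative):

metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/custom-metrics'  # only needed when the path is not /metrics
    prometheus.io/port: '9898'             # only needed to override the declared ports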

Next, Prometheus in Istio uses its own configuration, modified for Istio's purposes, and by default it skips custom metrics from Pods. Therefore you need to modify it a little. In my case, I took the configuration for Pod metrics from this example and modified Istio's Prometheus configuration only for Pods:

kubectl edit configmap -n istio-system prometheus

I changed the order of labels according to the example mentioned before:

# scrape config for application pods
- job_name: 'kubernetes-pods'
  # if you want to use metrics on jobs, set the below field to
  # true to prevent Prometheus from setting the `job` label
  # automatically.
  honor_labels: false
  kubernetes_sd_configs:
  - role: pod
  # skip verification so you can do HTTPS to pods
  tls_config:
    insecure_skip_verify: true
  # make sure your labels are in order
  relabel_configs:
  # these labels tell Prometheus to automatically attach source
  # pod and namespace information to each collected sample, so
  # that they'll be exposed in the custom metrics API automatically.
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # these labels tell Prometheus to look for
  # prometheus.io/{scrape,path,port} annotations to configure
  # how to scrape
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    regex: (https?)
    target_label: __scheme__
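
Prometheus does not necessarily pick up an edited ConfigMap right away; one pragmatic way to force a reload, assuming Istio's bundled Prometheus deployment carries the app: prometheus label, is to recreate the pod:

kubectl -n istio-system delete pod -l app=prometheus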

After that, the custom metrics appeared in Prometheus. But be careful when changing the Prometheus configuration, because some metrics required by Istio may disappear; check everything carefully.
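
One way to verify this is to query Prometheus directly through its HTTP API (the metric name below is the illustrative counter from earlier; substitute your own):

kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total' | jq .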

Now it is time to install the Prometheus custom metrics adapter.

  • Download this repository
  • Change the address of the Prometheus server in the file <repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml. For example: - --prometheus-url=http://prometheus.istio-system:9090/
  • Run the command kubectl apply -f <repository-directory>/deploy/manifests. After some time, custom.metrics.k8s.io/v1beta1 should appear in the output of kubectl api-versions (see the commands below).
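
Those last two steps as plain commands (the repository path is a placeholder):

kubectl apply -f <repository-directory>/deploy/manifests
kubectl api-versions | grep custom.metrics.k8s.io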

Also, check the output of the custom metrics API using the commands kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . and kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq . The output of the last one should look like the following example:

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
          },
          "metricName": "http_requests",
          "timestamp": "2018-01-10T16:49:07Z",
          "value": "901m"    },
        {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}

If it does, you can move on to the next step. If it doesn't, look at which APIs are available for Pods in the custom metrics API with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/" and which are available for http_requests with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http". The metric names are generated from the metrics Prometheus gathers from the Pods, and if they are empty, that is the direction you need to look in.

The last step is configuring the HPA and testing it. In my case, I created an HPA for the podinfo application defined before:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10

Note that a value such as 901m in the custom metrics API output means 0.901 requests per second, so targetAverageValue: 10 asks the HPA to add replicas once the average exceeds 10 requests per second per pod. I used a simple Go application to generate load:

#install hey
go get -u github.com/rakyll/hey
#do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz

After some time, I saw changes in scaling by using the commands kubectl describe hpa and kubectl get hpa.
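
For example, to follow the replica count live while the load test runs (podinfo is the HPA name from the manifest above):

kubectl get hpa podinfo --watch
kubectl describe hpa podinfo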

Also, I used the instructions for creating custom metrics from the article Ensure High Availability and Uptime With Kubernetes Horizontal Pod Autoscaler and Prometheus.

All useful links in one place:

  • Watching Metrics With Prometheus - the example of adding metrics to your application
  • k8s-prom-hpa - the example of creating Custom Metrics for Prometheus (the same as in the article above)
  • Kubernetes Custom Metrics Adapter for Prometheus
  • Setting up the custom metrics adapter and sample app
