Redirect from http port (80) to https port (443) on Kubernetes


Problem description

I'm new to this.

How can I redirect from http port (80) to https port (443) on the same domain (service) in Kubernetes?

I've tried putting nginx in the same pod (container) and redirecting from http to https, but it didn't work.

I tried it this way on the same pod:

    # nginx
    server {
        listen         80;
        server_name    example.com;
        return         301 https://$server_name$request_uri;
    }

Example Kubernetes deployment file:

    # JupyterHub is running on port 8000.
    spec:
      ports:
      - port: 443
        name: https
        protocol: TCP
        targetPort: 8000
      - port: 80
        name: http
        protocol: TCP
        targetPort: 433

Is there a default way within Kubernetes to do this?

Any help is much appreciated.

Answer


Disclaimer: this is not a production setup by far; it is aimed mainly at shedding some light on the overall bits and pieces to help you orient. Also, it will be quite a wall of text.

Target: to run JupyterHub over https in a Kubernetes cluster.

Initial consideration: running both nginx and JupyterHub in one pod is not really in line with the k8s philosophy. Containers should be placed in the same pod only if they naturally scale together, which is not the case here. Hence the suggestion to run them separately...

This is rather straightforward and is here just as an additional safeguard against mixing things up.

Manifest file: ns-example.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ns-example

Simply run kubectl create -f ns-example.yaml and the namespace is there. From now on, resources can easily be created/deleted this way.

To achieve this, the official public jupyterhub/jupyterhub docker image is used. No customization whatsoever; just a simple multi-user JupyterHub up and responding, so we can encase it in a service wrapper.

We start off with the service. Nothing fancy, just a handy name and port 8000 exposed to the local cluster. The official documentation recommends that services be created before sts/deploy/pod resources, so we are in line with that.

Manifest file: svc-jupyterhub.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: ns-example
  name: svc-jupyterhub
  labels:
    name: jupyterhub
spec:
  selector:
    name: jupyterhub
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000

Now for the actual deployment of the JupyterHub that the above service will expose. Again, nothing fancy; this simply mimics the default docker run -p 8000:8000 -d --name jupyterhub jupyterhub/jupyterhub jupyterhub as noted in the official JupyterHub repository. This is without any customization, just a basic example...

Manifest file: dep-jupyterhub.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: ns-example
  name: dep-jupyterhub
  labels:
    name: jupyterhub
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: jupyterhub
    spec:
      containers:
      - name: jupyterhub
        image: jupyterhub/jupyterhub
        command: ['jupyterhub']
        ports:
        - containerPort: 8000

Note: in my local test run the initial image pull from the net took quite a few minutes, but ymmv...

After these resources are created, JupyterHub should be up and running, but visible only inside the local k8s cluster.

Now we lack nginx to expose our JupyterHub and terminate TLS around it. There is more than one way to skin a cat, but since you shared only a portion of your nginx setup, here are some, again sketchy, parts to get you started.

To create a minimal nginx and mimic TLS, we need some configuration files.

We start with the nginx.conf file that will hold our nginx configuration. This is a natural candidate for a ConfigMap. Also, note that this is by no means a perfect, complete, or production-ready setup - it is just a quick hack to get nginx up for an example run. There are repetitions that can and should be optimized, the redirection for port 80 doesn't work properly since it would send you off to a non-existent domain (the server's domain is imaginary), the wildcard certificate is self-signed, yada, yada, yada... But it illustrates the idea: nginx terminates TLS and sends traffic to the upstream service around JupyterHub.

Manifest file: cm-nginx.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  namespace: ns-example
  name: cm-nginx
data:
  nginx.conf: |
     # Example nginx configuration file
     #
     # Commented out parts are left for pointers

     upstream jupyterhub {
        server svc-jupyterhub:8000 fail_timeout=0;
     }

     # jupyterhub.my-domain.com https request sent to upstream jupyterhub proxy
     server {
        listen 443 ssl;
        server_name jupyterhub.my-domain.com;

        ssl_certificate      /etc/nginx/ssl/wildcard.my-domain.com.crt;
        ssl_certificate_key  /etc/nginx/ssl/wildcard.my-domain.com.key;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # redirection from http to https for jupyterhub.my-domain.com
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 80;
        server_name jupyterhub.my-domain.com;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com$request_uri permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # if none of named servers is matched on http...
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 80 default_server;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

     # if none of named server is matched on https...
     # this obviously doesn't work since my-domain.com is not pointing to our server
     server {
        listen 443 default_server;

        ssl_certificate      /etc/nginx/ssl/wildcard.my-domain.com.crt;
        ssl_certificate_key  /etc/nginx/ssl/wildcard.my-domain.com.key;

     #    root /nowhere;
     #    rewrite ^ https://jupyterhub.my-domain.com permanent;

        location / {
           proxy_set_header        Host $host:$server_port;
           proxy_set_header        X-Real-IP $remote_addr;
           proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header        X-Forwarded-Proto $scheme;
           proxy_redirect http:// https://;
           proxy_pass              http://jupyterhub;
           # Required for new HTTP-based CLI
           proxy_http_version 1.1;
           proxy_request_buffering off;
           proxy_buffering off; # Required for HTTP-based CLI to work over SSL
        }
     }

Now we need those certificates for the example to function...

Granted, certificates (especially private keys) are perfect candidates for Secret k8s resources, but this is a self-signed certificate (generated on the fly just for this post) for a non-existent example domain... Next, I'd like to illustrate a ConfigMap with two files here as well, and finally, but maybe most importantly - I'm too lazy to type two more commands to get everything in base64 for the example's sake. So here it goes as a ConfigMap again... (yes, it should be a Secret instead, and yeah, a REAL certificate/key should not be public, but pssst, don't tell anyone)...
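If you want to generate a throwaway self-signed wildcard certificate and key like the pair below yourself, one openssl invocation is enough (filenames here just match the ones used later in the ConfigMap; this is for testing only, not production):

```shell
# Generate a self-signed wildcard certificate and key for *.my-domain.com,
# valid for one year - throwaway material for testing only.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout wildcard.my-domain.com.key \
  -out wildcard.my-domain.com.crt \
  -days 365 \
  -subj "/CN=*.my-domain.com"
```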

Manifest file: cm-wildcard-certificate-my-domain-com.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  namespace: ns-example
  name: cm-wildcard-certificate-my-domain-com
data:
  wildcard.my-domain.com.key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQCtU3Yk+tKSnPFC
    l+0Iutma0xI79MiWEf8Z2vacyfgMUNvthqFxTfTIeeySzzFh1KVx8pYJbfL1Gkxx
    iDfYZbKwQxhlV363bx8J+j2YnIIQ4uZGQ0MlxMlb65e0JfLayLOIffo7vSPqqBDa
    6MY4qjqVuiJ7zW9/X9h+38Y76fHyEzde03cHihKnkW0smNKZcwYBLz5oa1D39zv5
    WTqQrq+2GXEGfHvArDc06azbAm3o55iRmFPhIWEJcX6oCs0nd5jLIpycy43ayIKv
    HvjEmChDnsrQDkMImFk0nDsMn0Leu0DAsyPopm3TIGqoPwZY4Sk+zn7ttjU/6VUI
    pndJDVd5AgMBAAECggEAH6mTd4XqWaYZ3JRsVJ/tiH7uYc2Bpwh6lXqOem3axkUv
    J+DkNRKMmOLM+LSozLpPztUF24seSvAW7tZ3fSx2zAQ1vK2TFGdUQDpabjqI+BS7
    BDLdXVTpg8Ux3VLhXl4zjceVorwWh5NUIOlM7KUMNrXd/se0iowzvFmcmO1PqWzU
    O6KI5EKz6LTUpEU/7RSl+wt/Ix4yTRYblkHlzWL1GXmQ50HYFZtC3iFEk4H4yDiQ
    Z4VI+gGSpQGKDBQdR9OIXc3seVPOPnSd5NjDXQU8IR36VWHE8xG6k9/+TeU8r9ue
    zNecjieWbFny4UE+uELXdeaRcmH+M8MTrKDApDj+QQKBgQDZ0WdOZ1O8QqILMwuR
    Up2+oT88A6JZjfUICpDlsXgCaitT4YyBXkBwQyyQiTVspo6+ENHSBS584JdmjRpe
    rqXazlwimY0vdINcm4O1279gmHOGaKffLzik1AKNSQEm52rNhle8xoXWD/cmLjvc
    NYgzpPPFIWwXG0dniCCnbfR8tQKBgQDLtXpuckotb8guCGThFn6nb01Hcsit9OfC
    QG9KXd8fpRV+YKqKF2wx1KeVgMoXMbmT78LRl0wArCQZsh16cqS/abH8S5k2v9jx
    L5q+YYVcXC1U7Oolekoddob8af0qp4FnVDjRU9GiMtv2UQoX4yoX4kHkdWZqqFNr
    q12VlksuNQKBgQCC6odq6lO7zVjT3mRPfhZto0D8czq7FMV3hdI9HAODgAh2rBPl
    FZ8pWlaIsM85dIpK1pUl5BNi3yJgcuKskdAByRI7gYsIQMFLgfUR8vf9uOOGn5R2
    Yk1rVDoMbRqSJXld+ib1wWRjmsjzW8qCunIYiEYz77il0rGCGqF1wHK4GQKBgQCN
    RCTLQua9667efWO31GmwozbsPWV9fUDbLOQApmh9AXaOVWruqJ+XTumIe++pdgpD
    1Rk9T7adIMNILoTSzX4CX8HWPHbbyN8hIuok7GwXSLUHF+SoaM3M8M1bbgTq9459
    oaJlR8MwwCRaBIkDV71xIq6fR+rmPCTdndEgU0F/oQKBgQCWC1K5FySXxaxzomsZ
    eM3Ey6cQ36QnidjuHAEiEcaJ+E/YmG/s9MPbLCRI8tn6KGvOW3zKzrHXOsoeXsMU
    SCmRUpB0J5PqVbbTdj12kggX3x6I7TIkXucopCA3Nparhlnqx7amski2EB/EVE0C
    YWkjEAMUCquUmJeEg2dELIiGOw==
    -----END PRIVATE KEY-----
  wildcard.my-domain.com.crt: |
    -----BEGIN CERTIFICATE-----
    MIIDNjCCAh4CCQCUtoVaGZH/NDANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJV
    UzELMAkGA1UECAwCTlkxCzAJBgNVBAcMAk5ZMQwwCgYDVQQKDANOL0ExDDAKBgNV
    BAsMA04vQTEYMBYGA1UEAwwPKi5teS1kb21haW4uY29tMB4XDTE4MDgyODA5Mzkz
    N1oXDTIyMDUyNDA5MzkzN1owXTELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAk5ZMQsw
    CQYDVQQHDAJOWTEMMAoGA1UECgwDTi9BMQwwCgYDVQQLDANOL0ExGDAWBgNVBAMM
    DyoubXktZG9tYWluLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
    AK1TdiT60pKc8UKX7Qi62ZrTEjv0yJYR/xna9pzJ+AxQ2+2GoXFN9Mh57JLPMWHU
    pXHylglt8vUaTHGIN9hlsrBDGGVXfrdvHwn6PZicghDi5kZDQyXEyVvrl7Ql8trI
    s4h9+ju9I+qoENroxjiqOpW6InvNb39f2H7fxjvp8fITN17TdweKEqeRbSyY0plz
    BgEvPmhrUPf3O/lZOpCur7YZcQZ8e8CsNzTprNsCbejnmJGYU+EhYQlxfqgKzSd3
    mMsinJzLjdrIgq8e+MSYKEOeytAOQwiYWTScOwyfQt67QMCzI+imbdMgaqg/Bljh
    KT7Ofu22NT/pVQimd0kNV3kCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAI+G44qo6
    BPTC+bLm+2SAlr6oEC09JZ8Q/0m8Se1MLJnzhIXrWJZIdvEB1TtXPYDChz8TPKTd
    QQCh7xNPZahMkVQWwbsknNCPdaLp0SAHMNs3nfTQjZ3cE/RRITqFkT0LGSjXkhtj
    dTZdzKvcP8YEYnDhNn3ZBK04djEsAoIyordRATFQh1B7/0I3BsUAwItDEwH+Mv5G
    rvSYkoi+yw7/koijxJHDbH0+WXYdcsmbWrMEh6H92Z64TMOFS+N6ZQRsNvzfiSwZ
    KM2yEtU9c74CPKS+UleQLjDufk8epmNHx6+80aHj7R9z3mbw4dL7yKwlbGws2GAW
    TE+Fk0HB+9W7fw==
    -----END CERTIFICATE-----
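For completeness, the proper Secret shape this should really take would look roughly like the sketch below (hypothetical name; the values under data must be the base64 of the whole PEM files, e.g. from base64 -w0 wildcard.my-domain.com.crt):

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: ns-example
  name: secret-wildcard-certificate-my-domain-com   # hypothetical name
type: kubernetes.io/tls
data:
  # base64-encoded PEM contents (placeholders, not real values)
  tls.crt: <base64 of wildcard.my-domain.com.crt>
  tls.key: <base64 of wildcard.my-domain.com.key>
```

Note that a kubernetes.io/tls Secret uses the fixed keys tls.crt and tls.key, so when mounting it the ssl_certificate paths in nginx.conf would need to point at those filenames instead.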

Now we need a service around nginx.

There is more than one way to skin a cat, but here, again for simplicity's sake, the easiest is taken - the NodePort approach. You could use an ingress, an externalIP, or whatnot, but this is an example, so NodePort it is.
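(For reference, the ingress-based variant mentioned above would look roughly like this sketch - hypothetical host, secret name, and resource names, and it assumes an nginx ingress controller is installed in the cluster; the ssl-redirect annotation is what gives you the http-to-https redirect without any nginx config of your own. The API version matches the era of the rest of this answer.)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ns-example
  name: ing-jupyterhub                  # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - jupyterhub.my-domain.com
    secretName: tls-wildcard-my-domain-com   # hypothetical TLS secret
  rules:
  - host: jupyterhub.my-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-jupyterhub
          servicePort: 8000
```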

Manifest file: svc-nginx.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: ns-example
  name: svc-nginx
  labels:
    name: nginx
spec:
  type: NodePort
  selector:
    name: nginx
  ports:
  - protocol: TCP
    name: http-port
    port: 80
    targetPort: 80
  - protocol: TCP
    name: ssl-port
    port: 443
    targetPort: 443

Finally, after everything is created, we can fire up our nginx deployment. Again, nothing fancy, just gluing together all the ConfigMaps with the official nginx image (yes, it is a bad idea to use "latest" or omit tags for a docker image as done here but, again, this is an example; keep that in mind so you don't get bitten by it in a production deployment...)

Manifest file: dep-nginx.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: ns-example
  name: dep-nginx
  labels:
    name: nginx
  annotations:
    ingress.kubernetes.io/secure-backends: "true"
    kubernetes.io/tls-acme: "true"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: /etc/nginx/conf.d
            name: nginx-conf
          - mountPath: /etc/nginx/ssl
            name: wildcard-certificate
      volumes:
      - name: nginx-conf
        configMap:
          name: cm-nginx
          items:
          - key: nginx.conf
            path: nginx.conf
      - name: wildcard-certificate
        configMap:
          name: cm-wildcard-certificate-my-domain-com

Final notes:

  • As mentioned before, these are not meant to be used in production; lots of tiny details, from resource handling to versioning, can bite you back. This is just an example.
  • The certificates are self-signed, and browsers will complain about this if you navigate to the nginx.
  • Everything was pasted from a tested setup on DockerCE Edge Version 18.06.0-ce-mac69 (26398) with k8s 1.9.3, so it should be more or less error-free.
  • Typing kubectl get cm,deploy,svc,pod -n ns-example -o wide should display all the info about the created resources (the ports to point your browser at for svc-nginx will be of particular interest).
  • Finally, since everything is encased in yaml manifest files, cleanup is simply a question of orderly deletion of the resources (taking care to delete the namespace last).
