Google Cloud Kubernetes accessing private Docker Hub hosted images


Problem description


Is it possible, to pull private images from Docker Hub to a Google Cloud Kubernetes cluster? Is this recommended, or do I need to push my private images also to Google Cloud?

I read the documentation, but I found nothing that explains this clearly. It seems that it is possible, but I don't know if it's recommended.

Solution

There is no restriction on using any registry you want. If you just use the image name (e.g., image: nginx) in the pod specification, the image will be pulled from the public Docker Hub registry, with the tag assumed to be :latest.
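
As a minimal sketch of that behaviour (the pod and container names are placeholders), applying the manifest below pulls the image from the public Docker Hub registry and assumes the :latest tag:

# Hypothetical example: bare image name, no registry and no tag specified
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web            # placeholder pod name
spec:
  containers:
  - name: web
    image: nginx       # resolved as docker.io/library/nginx:latest
EOF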

As mentioned in the Kubernetes documentation:

The image property of a container supports the same syntax as the docker command does, including private registries and tags. Private registries may require keys to read images from them.

Using Google Container Registry

Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag). All pods in a cluster will have read access to images in this registry.
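
A short sketch of the same pod with a fully qualified GCR image name (my_project and image:tag are placeholders taken from the text above); the same pattern applies to ECR or any other registry:

# Hypothetical example: fully qualified image reference
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gcr-example                      # placeholder pod name
spec:
  containers:
  - name: app
    image: gcr.io/my_project/image:tag   # on GCE/GKE, read access to the project's GCR is handled natively
EOF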

Using AWS EC2 Container Registry

Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances. Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition. All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.

Using Azure Container Registry (ACR)

When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.
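
One common pattern, sketched below under the assumption that the admin user is enabled (the registry name "myregistry" and the secret name "acr-secret" are placeholders, not values from the answer): read the admin credentials with azure-cli and store them as a docker-registry secret that pods can reference via imagePullSecrets.

# Assumed names; adjust to your registry and namespace
ACR_NAME=myregistry                         # placeholder registry name
ACR_SERVER="${ACR_NAME}.azurecr.io"
ACR_USER=$(az acr credential show -n "$ACR_NAME" --query username -o tsv)
ACR_PASS=$(az acr credential show -n "$ACR_NAME" --query 'passwords[0].value' -o tsv)

# Create an image pull secret from the admin credentials
kubectl create secret docker-registry acr-secret \
  --docker-server="$ACR_SERVER" \
  --docker-username="$ACR_USER" \
  --docker-password="$ACR_PASS" \
  --docker-email=someone@example.com   # placeholder; older kubectl versions required this flag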

Configuring Nodes to Authenticate to a Private Repository

Here are the recommended steps to configure your nodes to use a private registry (a consolidated sketch follows the list). In this example, run these on your desktop/laptop:

  1. Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
  2. View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
  3. Get a list of your nodes, for example:
    • if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
    • if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
  4. Copy your local .docker/config.json to the home directory of root on each node.
    • for example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
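
Put together, the four steps amount to something like the sketch below (the registry address is a placeholder, and the external-IP and root-SSH assumptions are carried over from the steps above):

# Consolidated sketch of steps 1-4
docker login registry.example.com     # placeholder registry; updates $HOME/.docker/config.json

# Step 2: confirm the file holds only the credentials you intend to distribute
cat "$HOME/.docker/config.json"

# Step 3: collect the external IPs of all nodes
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')

# Step 4: copy the Docker credentials to root's home directory on every node
for n in $nodes; do
  scp "$HOME/.docker/config.json" "root@$n:/root/.docker/config.json"
done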

Use cases:

There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.

  1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
    • Use public images on the Docker hub.
      • No configuration required.
      • On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
  2. Cluster running some proprietary images which should be hidden from those outside the company, but visible to all cluster users.
    • Use a hosted private Docker registry.
      • It may be hosted on the Docker Hub, or elsewhere.
      • Manually configure .docker/config.json on each node as described above.
    • Or, run an internal private registry behind your firewall with open read access.
      • No Kubernetes configuration is required.
    • Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
      • It will work better with cluster autoscaling than manual node configuration.
    • Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets (a sketch follows after this list).
  3. Cluster with proprietary images, a few of which require stricter access control.
    • Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
    • Move sensitive data into a "Secret" resource, instead of packaging it in an image.
  4. A multi-tenant cluster where each tenant needs its own private registry.
    • Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
    • Run a private registry with authorization required.
    • Generate registry credentials for each tenant, put them into a secret, and populate the secret into each tenant's namespace.
    • The tenant adds that secret to imagePullSecrets of each namespace.
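
Since the question is specifically about private images on Docker Hub, here is a minimal imagePullSecrets sketch (the secret name "dockerhub-cred", the repository name, and the credential placeholders are assumptions, not values from the answer); the same mechanism works on GKE as on any other cluster:

# Create a docker-registry secret for Docker Hub; https://index.docker.io/v1/ is the server value commonly used for Docker Hub
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-docker-hub-username> \
  --docker-password=<your-docker-hub-password> \
  --docker-email=<your-email>

# Reference the secret from the pod that uses the private image
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-app                                      # placeholder pod name
spec:
  imagePullSecrets:
  - name: dockerhub-cred                                 # must match the secret created above
  containers:
  - name: app
    image: <your-docker-hub-username>/private-image:1.0  # placeholder private repository
EOF

In other words, the images can stay on Docker Hub; the cost is managing the pull secret yourself instead of relying on the native GCR integration described above.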

Consider reading the Pull an Image from a Private Registry document if you decide to use a private registry.
