Should I edit the salt tar files after a Kubernetes cluster is running on Google Compute Engine?


Problem Description

I've used curl -sS https://get.k8s.io | bash to create a cluster on Google Compute Engine using Kubernetes 1.2.4. This worked great. I wanted to enable ABAC authorization mode by adding a few flags to the kube-apiserver command specified in the kube-apiserver pod spec.
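
For context, enabling ABAC on that version generally comes down to two extra kube-apiserver flags plus a policy file. A minimal sketch follows; the policy file path and the example policy line are assumptions (the ABAC policy schema changed between releases, so verify against the docs for your version):

    # Extra flags on the kube-apiserver command in its pod manifest (sketch;
    # the policy file path is an assumption, not taken from the question)
    --authorization-mode=ABAC
    --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl

    # abac-policy.jsonl: one JSON policy object per line (illustrative only)
    {"user": "admin", "namespace": "*", "resource": "*", "readonly": false}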

I'm unclear if I should adjust the salt files once they're tar/gzipped. The salt file that the pod spec is generated from is here, but editing this after the cluster is stood up has a few additional requirements (sketched as commands after the list):

  • I have to unpack the salt tarball that the install script uploaded to Google Cloud Storage for me
  • Edit the salt files
  • Tar/gzip them back up, generate a new checksum file
  • Push these to GCS
  • Update all of the instances' kube-metadata so that SALT_TAR_HASH is now correct
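
Spelled out as commands, those steps look roughly like the following. This is only a sketch: the bucket, file names, instance name, and zone are placeholders, and the exact tarball name and hash algorithm should be checked against what kube-up actually uploaded:

    # Placeholder bucket/paths; substitute what kube-up uploaded for your cluster
    gsutil cp gs://my-kube-bucket/devel/kubernetes-salt.tar.gz .
    tar xzf kubernetes-salt.tar.gz
    # ... edit the salt files ...
    tar czf kubernetes-salt.tar.gz kubernetes/
    sha1sum kubernetes-salt.tar.gz          # new value for SALT_TAR_HASH
    gsutil cp kubernetes-salt.tar.gz gs://my-kube-bucket/devel/
    # Then, for every instance, update its kube-env metadata so SALT_TAR_HASH matches
    gcloud compute instances add-metadata my-node-1 --zone us-central1-b \
        --metadata-from-file kube-env=kube-env.yaml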

It feels like I'm going down the wrong path with this, as this also will collide with upgrades.

Is there a better way to configure pods, services, etc that are baked into the install script without having to do all of this?

Solution

The customization that is built into the install script is in the environment variables that you can set to change behavior (see cluster/gce/config-default.sh). If overriding one of these variables doesn't work (which I believe is the case for ABAC), then your only option is to manually modify the salt files.
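
For the variables that are exposed, overriding is just a matter of exporting them before running the installer. A sketch, assuming the variable names below exist in cluster/gce/config-default.sh for your release (verify before relying on them):

    # Example overrides only; confirm the names in cluster/gce/config-default.sh
    export NUM_NODES=5
    export NODE_SIZE=n1-standard-2
    curl -sS https://get.k8s.io | bash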

If you are comfortable building Kubernetes from source, your easiest path would be to clone the github repository at the desired release version, modify the salt files locally, and then run make quick-release followed by ./cluster/kube-up.sh. This will build a release (from source), bundle in your locally modified salt files, generate a checksum, upload the salt files to Google Cloud Storage, and then launch a cluster with the correct salt files & checksum.
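
In command form, that flow is roughly the following; the manifest path is an assumption about where the kube-apiserver salt template lives in that source tree:

    git clone https://github.com/kubernetes/kubernetes.git
    cd kubernetes
    git checkout v1.2.4
    # Edit the salt sources, e.g. (assumed path for this release):
    #   cluster/saltbase/salt/kube-apiserver/kube-apiserver.manifest
    make quick-release
    ./cluster/kube-up.sh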

If you don't want to build from source, rather than adjusting the kube-env metadata entry on all instances, you can fix it in the instance template and then delete each instance. They will get automatically replaced by new instances which will inherit the changes you made to the instance template.
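
A rough sketch of that second option (the group, template, node name, and zone below are placeholders; note that GCE instance templates are immutable, so "fixing" the template in practice means creating a new one and pointing the managed instance group at it):

    # Point the node group at a template that carries the fixed kube-env
    gcloud compute instance-groups managed set-instance-template \
        kubernetes-minion-group --template my-fixed-template --zone us-central1-b
    # Delete a node; the managed instance group replaces it from the new template
    gcloud compute instances delete kubernetes-minion-abcd --zone us-central1-b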

Your current mechanism won't really mess with upgrades, because upgrades create a new instance template at the new version. Any changes that you've made to the old instance template (or old nodes directly) won't be carried forward to the new nodes (for better or worse).
