Sounds quite nice, right? It can simplify your release process a lot, but sometimes it will give you a hard time instead. That's life!
Recently Helm was promoted to an official top-level @CloudNativeFdn project and is widely used in the community. That certainly means something, but I would still like to briefly share my concerns around Helm.
Someone compared Tiller to “a giant sudo server”. For me it’s just another authorization layer with a lack of access control and additional TLS certs to maintain. Why not leverage the Kubernetes API and rely on its existing security model, with proper audit logging and RBAC?
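For comparison, a plain kubectl-based flow runs under the caller's own identity, so the cluster's existing RBAC and audit log apply directly. A minimal sketch (the namespace and manifest directory are only illustrative):

# check what your own identity is allowed to do before deploying
$ kubectl auth can-i create deployments --namespace staging
# the apply itself is then authorized and audited as you, not as a cluster-wide Tiller
$ kubectl apply -f manifests/ --namespace staging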
It’s all about rendering and linting go-template files using configuration from values.yaml, then applying the rendered Kubernetes manifests along with the corresponding release metadata, which is stored in a ConfigMap.
All of that can be replaced by a few simple commands:
$ # render go-template files using golang or python script
$ kubectl apply --dry-run -f .
$ kubectl apply -f .
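If you still want charts purely for templating, client-side rendering works without Tiller as well. A rough sketch using Helm 2-style helm template (the ./mychart directory and values file names are assumptions):

# render the chart locally, no Tiller involved
$ helm template --name myapp -f values.yaml ./mychart > rendered.yaml
# validate and apply through the regular Kubernetes API with your own credentials
$ kubectl apply --dry-run -f rendered.yaml
$ kubectl apply -f rendered.yaml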
I have observed that teams tend to keep a values.yaml per environment, or even render it from a values.yaml.tmpl before using it.
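In practice that setup tends to look something like this (file and release names are made up); later -f flags override earlier ones, so each environment file only carries the diff:

$ ls
values.yaml  values-dev.yaml  values-staging.yaml  values-prod.yaml
# render with the shared defaults plus the per-environment overrides
$ helm template --name myapp -f values.yaml -f values-prod.yaml ./mychart | kubectl apply -f -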
This doesn’t work well for Kubernetes Secrets, which are often encrypted and versioned in the repository. You can either use the helm-secrets plugin for that or override values using --set key=value, but either way it adds another layer of complexity.
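Both workarounds look roughly like this (release, chart, and key names are made up, and the first command assumes the sops-based helm-secrets plugin is installed):

# option 1: helm-secrets decrypts an encrypted values file on the fly
$ helm secrets upgrade --install myapp ./mychart -f secrets.prod.yaml
# option 2: keep the secret out of values files and inject it at deploy time
$ helm upgrade --install myapp ./mychart --set db.password="${DB_PASSWORD}"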
Forget about it. It won’t work, especially for core Kubernetes components like kube-dns, a CNI provider, the cluster autoscaler, etc. These components have a different lifecycle, and Helm doesn’t fit there.
My experience with Helm shows that it works fine for simple deployments using basic Kubernetes resources which can be easily recreated from scratch and don’t have a complex release process.
Sadly, Helm doesn’t handle more advanced and more frequent deployments well, especially ones that include Namespace, RBAC, NetworkPolicy, ResourceQuota, or PodSecurityPolicy resources.
I know it may offend someone who is super obsessed with Helm, but this is a sad truth.
The Tiller server stores release information in ConfigMaps inside Kubernetes; it does not need its own database.
Unfortunately, a ConfigMap is restricted to 1 MB because of etcd’s value size limit.
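You can see this for yourself by inspecting the release ConfigMaps Tiller keeps in its namespace (kube-system by default; the release revision name myapp.v1 is made up):

# list the ConfigMaps Tiller uses to store release revisions
$ kubectl get configmaps --namespace kube-system -l OWNER=TILLER
# rough size check for a single release revision, with 1 MB as the ceiling
$ kubectl get configmap myapp.v1 --namespace kube-system -o yaml | wc -c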
Hopefully, someone will come up with an idea to extend the ConfigMap storage driver to compress the serialized release before storing it, but for me, this still doesn’t solve the actual issue.
I really like the idea of a Tiller-less architecture, but I’m not sure about Lua scripting, because it could add additional complexity to the charts.
Recently I have observed a big shift towards operators which are more Kubernetes native than Helm charts.
I really hope that the community will sort this out (with our help, of course) sooner rather than later, but in the meantime, I prefer to use Helm as little as possible.
Don’t get me wrong — it’s just my personal point of view after spending some time building a hybrid-cloud deployment platform on top of Kubernetes.
@Edit
You might be interested in a follow-up written by my colleague, where he explains an alternative deployment process without Helm.