I have a project that contains the deployment descriptor files for Kubernetes. This project has a folder structure that looks like this:
> project-deployment
> - base
> - dev
> - production
Inside the base folder, I have the Kubernetes deployment files (Deployment, Service, Namespace, etc.). In the dev and production folders, I have a kustomization.yaml that composes everything from the base folder. So far so good. I now want to introduce Helm into this so that I can manage my releases much better. My question now is: how should I structure my folders?
Should I move everything (the base, dev and production folders) into templates and just have one Chart.yaml and values.yaml? Any thoughts?
The configuration values that you push into your charts should be kept separate per environment. Build simple, extendable charts that allow overrides per environment.
For example, a good workflow would have different value files per environment with specific differences in configuration:
~/myapp
└── config
    ├── production.yml
    └── staging.yml
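As an illustration (these keys are hypothetical, not from the original answer), staging.yml might override only what differs from production:

# config/staging.yml (hypothetical keys)
replicaCount: 1
database:
  host: staging-db.internal
ingress:
  host: staging.myapp.example.com

You would then pass the file matching your target environment at install time, e.g. helm install myapp ./myapp -f config/staging.yml.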
There are tools that can help you manage that particular use case. For example, consider using Orca:
What Orca does best is manage environments. An Environment is a
Kubernetes namespace with a set of Helm charts installed on it. There
are a few use cases you will probably find useful right off the bat.
There are also some examples provided with it.
I also recommend going through the official Chart Best Practices Guide.
Related
This is a slightly strange question, but maybe you can advise on the right way to implement it.
I read about Helm dependencies, where you can define a list of required charts and have them installed along with your "main" chart.
Is it possible to have this list of dependencies (with versions) without a "main"/root chart?
For example, I want to install rabbit, redis, postgres and a few of my own custom charts into my k8s cluster.
I don't want to run "helm install ..." several times; I want one file with the list of charts and versions, and to install them all with one command.
I also want to upgrade charts easily using that same one file: change a version, run the one command again, and have only the necessary charts (those with different versions) updated.
Is this possible, or should I use something else for this?
There's no actual requirement that a Helm chart contain any templates of its own. It's possible to have a Helm chart that only contains dependencies; then running helm install on that "parent" chart would install all of the dependencies.
It would be valid to run helm create to scaffold a new chart, then delete the values.yaml file and templates directory it generates, and fill in the dependencies in Chart.yaml.
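A minimal sketch of such a dependency-only Chart.yaml, assuming Helm 3 and the Bitnami chart repository (the versions below are illustrative):

# Chart.yaml
apiVersion: v2
name: my-stack
description: Dependency-only chart bundling shared infrastructure services
version: 0.1.0
dependencies:
  - name: rabbitmq
    version: 10.3.2
    repository: https://charts.bitnami.com/bitnami
  - name: redis
    version: 17.0.0
    repository: https://charts.bitnami.com/bitnami
  - name: postgresql
    version: 12.1.0
    repository: https://charts.bitnami.com/bitnami

Running helm dependency update followed by helm install my-stack . then installs all three in one go.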
In your setup you're just installing infrastructure services. Occasionally you will see a dependency-only chart like this used to install several application components as well; this is sometimes called an umbrella chart. There are potential problems with umbrella charts because Helm combines the dependencies together: if different application components depend on different major versions of some chart there can be a conflict, and if you have multiple components that each depend on, say, Redis, an umbrella-chart installation will typically install just one shared Redis rather than an isolated Redis per component.
There are a number of ways to group multiple helm install commands into a single execution. Just writing a simple shell script will often work, depending on your needs. General-purpose automation tools (Ansible, Salt Stack, Chef) may have ways to run Helm, or if not, then to run arbitrary commands. There are also a couple of Helm-specific tools here: Helmsman is simpler and Helmfile more complex, but both let you "install" a collection of related charts without necessarily building an umbrella chart.
What you describe at the beginning of your question is called "Umbrella Charts". These are charts that are used to bundle other charts and manage them as one unit.
If this is not what you want to do (which is how I understand your question), then you will need other tools. One such tool is Helmfile, which allows you to define a list of Helm charts, their versions and Helm values, and then install/upgrade/uninstall all referenced charts in one go.
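A minimal helmfile.yaml sketch for the charts from the question (the release names, namespace and versions are illustrative):

# helmfile.yaml
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: my-rabbit
    namespace: infra
    chart: bitnami/rabbitmq
    version: 10.3.2
  - name: my-redis
    namespace: infra
    chart: bitnami/redis
    version: 17.0.0
  - name: my-postgres
    namespace: infra
    chart: bitnami/postgresql
    version: 12.1.0

Running helmfile apply then installs or upgrades whatever differs from this desired state, which matches the "change a version, run one command" workflow.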
Another option is to use kluctl (disclaimer: I'm the main developer of it). It basically allows you to do the same as Helmfile, but in a different way/style. Kluctl focuses on Kustomize deployments while allowing you to easily pull in third-party Helm charts.
Having the mentioned Helm charts in a kluctl deployment would require a kluctl deployment project that looks like this:
my_project
├── third-party
│   ├── rabbit
│   │   ├── kustomization.yaml
│   │   ├── helm-chart.yaml
│   │   └── helm-values.yaml
│   ├── redis
│   │   ├── kustomization.yaml
│   │   ├── helm-chart.yaml
│   │   └── helm-values.yaml
│   └── deployment.yaml
├── deployment.yaml
└── .kluctl.yaml
my_project/deployment.yaml
vars:
  - values:
      # this is arbitrary yaml
      my:
        ns: my-namespace
  # this is also possible, loading vars from a file...
  # many other sources are supported as well
  # - file: my-config.yaml
deployments:
  - include: third-party
commonLabels:
  my.example.com/deployment: my-example-deployment
  my.example.com/target: {{ target.name }}
my_project/third-party/deployment.yaml
deployments:
  - path: rabbit
  - path: redis
my_project/third-party/rabbit/kustomization.yaml
resources:
  # this file is auto-generated by the helm-integration
  - deploy.yaml
my_project/third-party/rabbit/helm-chart.yaml
helmChart:
  repo: https://charts.bitnami.com/bitnami
  chartName: rabbitmq
  chartVersion: 10.3.2
  releaseName: my-rabbit
  # my.ns comes from the 'vars' defined in the root deployment.yaml
  namespace: "{{ my.ns }}"
  output: deploy.yaml
my_project/third-party/rabbit/helm-values.yaml
auth:
  username: my-user
  password: you-would-of-course-never-do-this
my_project/third-party/redis/helm-chart.yaml and helm-values.yaml
Basically the same as for rabbitmq, but with redis-specific settings.
my_project/.kluctl.yaml
targets:
  - name: dev
    context: my-dev-cluster-context
Using kluctl
Based on the above example, you would then run kluctl helm-pull from the root project directory, which pre-pulls all involved Helm charts and writes their contents beside the helm-chart.yaml files. These pre-pulled charts are meant to be added to your version control (this might change in the future).
After that, you can run commands like kluctl diff -t dev, kluctl deploy -t dev and kluctl prune -t dev to work with the deployment.
kluctl helm-upgrade will help you while upgrading the pre-pulled Helm Charts.
Fully working example
A fully working example can be found here. The YAMLs shown above are only meant to give an idea of the Helm-related parts.
I am currently working on a project based on a microservice architecture. It consists of 20 Spring Boot microservice projects, and in every project's root folder I placed a Dockerfile for image building. I am deploying to a Kubernetes cluster through Helm charts.
My confusion is that when I created a Helm chart, it generated service.yaml and deployment.yaml inside the templates directory.
If I am deploying these 20 microservices, do I need to create 20 separate Helm charts, or can I define the services for all 20 within one chart?
I am new to Kubernetes and Helm, so I am unsure about the standard way of organising yaml files within charts. Do I need 20 separate charts, or can I include everything in one?
What is the standard way of creating charts for microservice projects like mine?
What I ended up doing (working with a similar stack) is creating one generic microservice chart, which is stored in an internal chart repository. Inside the Helm chart, I exposed enough configuration options that teams have the flexibility to control their own deployments, but I made sure to set sensible defaults (e.g. the Deployment uses a RollingUpdate strategy and readiness probes are configured with sensible defaults).
These configuration options are passed via the values.yaml file. Teams deploy their microservice through a CI/CD pipeline, passing their values.yaml file to the helm command (with the -f flag).
I would certainly recommend reading the Helm Template Developer guide before making the decision. It really depends on how similar your microservices are, but I recommend going for one Helm chart if you have a homogeneous environment (which was also the case for me).
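As an illustration of such a per-service values file (the service name, registry and keys below are hypothetical, not from the original answer):

# values-orders.yaml (hypothetical)
image:
  repository: registry.example.com/orders-service
  tag: "1.4.2"
replicaCount: 3
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080

The pipeline would then deploy it with something like helm upgrade --install orders my-repo/microservice -f values-orders.yaml.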
Say that I have 5 APIs that I want to deploy in a Kubernetes cluster; my question is simply what the best practice is for storing the yaml files related to Kubernetes.
In projects I've seen online, Kubernetes yaml files are just added to the API project itself. I wonder if it makes sense to decouple all files related to Kubernetes into an entirely separate "project", managed in VCS as a completely separate entity from the API projects themselves.
This question arises because I'm currently reading a book about Kubernetes that covers namespaces, and I considered it might be a good idea to have a separate namespace per environment (DEV / UAT / PROD); it might then make sense to have these files in a centralized "Kubernetes" project (unless it would be better to have a separate cluster per environment?).
Whether to put the yaml in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time, or if you just find it clearer to see everything in one place. You might separate them if you mostly work on the yaml separately, if you find it less cluttered, or if you want different visibility for it (e.g. for different teams to look at it). If things get more sophisticated then you'll actually want to generate the yaml from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters) - see Best practices for storing kubernetes configuration in source control for more discussion on this.
From Production k8s experience for CI/CD:
One cluster per environment, such as dev, stage, prod (optionally per data centre)
One namespace per project
One git deployment repo per project
One branch in git deployment repo per environment
Use ConfigMaps for configuration aspects (a minimal sketch follows this list)
Use a secret management solution to store and use secrets
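A minimal ConfigMap sketch for the one-namespace-per-project pattern above (the names and keys are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myproject
data:
  LOG_LEVEL: info
  FEATURE_X_ENABLED: "false"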
Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment which is configured almost identically to a production deployment, but has env vars pointing to a different database backend. I have a celery deployment which is configured the same as the production deployment but has a different run command.
Helm is what many people are using for this. It lets you create templates for Kubernetes descriptors and pass in parameters to generate descriptors from the templates.
There are other tools that can be used to generate variations on Kubernetes deployment descriptors by injecting parameters into templates. Ansible is also popular. But Helm is closely connected to Kubernetes and the CNCF community, and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single docker image then that's probably best handled with a single chart. You can create different values files for each deployment and supply the chosen values file to helm install with the --values parameter. If there are parts of the chart that are only sometimes applicable then they can be wrapped in if conditions to turn them on/off.
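A sketch of that pattern (the file names, keys and template below are hypothetical): each deployment gets its own values file, and the optional part of the chart is wrapped in a condition:

# values-dev.yaml (hypothetical)
image: myapp:dev
databaseUrl: postgres://dev-db.internal:5432/app
celery:
  enabled: false

# templates/celery-deployment.yaml (hypothetical)
{{- if .Values.celery.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-celery
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-celery
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-celery
    spec:
      containers:
        - name: celery
          image: {{ .Values.image }}
          command: ["celery", "worker"]
{{- end }}

The deployment is then selected with helm install myapp ./chart --values values-dev.yaml.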
On the subject of inheritance specifically, there's an example in the Helm documentation of how to take another chart as a parent/dependency and override its values, and I created a chart earlier that you can see on GitHub that includes several other charts and overrides parts of all of them via the values.yml. It also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than to join multiple apps, then it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
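To illustrate the override mechanism (the subchart name and keys are illustrative): in a parent chart's values.yaml, keys nested under a dependency's name override that subchart's defaults, while global values are visible to the parent and every subchart:

# parent chart values.yaml (illustrative)
postgresql:            # overrides values of the 'postgresql' subchart
  auth:
    database: myapp
global:                # shared with the parent and all subcharts
  storageClass: fast-ssd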
EDIT (180119): The alternative of Kustomize may soon become available in kubectl
You may also want to check Kustomize. It provides some support to write your yaml manifests in hierarchical form, so that you don't have to repeat yourself.
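A minimal sketch of that hierarchical layout (the paths and patch file are illustrative): a base directory holds the shared manifests, and each environment overlay references the base and patches only what differs:

# overlays/dev/kustomization.yaml (illustrative)
resources:
  - ../../base
patches:
  - path: deployment-env.yaml   # e.g. swaps in dev-specific env vars

It can be built with kubectl kustomize overlays/dev, or applied directly with kubectl apply -k overlays/dev.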
I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm-client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
    .git/
    k8s/
        postgres/
            Deployment.yaml
            Service.yaml
            Secret.mustache.yaml  # Needs to be rendered by the dev before use
        express/
            Deployment.yaml
            Service.yaml
        nginx/
            Deployment.yaml
            Service.yaml
    updates/
        0.1__0.2/
            Job.yaml   # postgres schema migration
            update.sh  # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual git stuff can apply now. Every time I make a change, I update the spec files, test them, write the update scripts that help move from the last version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does Helm have the Tiller server? Isn't it simpler to do the templating on the client side? Of course, if you want to separate the activity of deployment from knowledge of the application (like secrets), the templating would have to happen on the server; but otherwise, why?
Seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production-ready - at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now, I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you could plan for more goals, including:
Choose a configuration management system beyond plain k8s declarative app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are purely client-side.
Avoid a custom release process by replacing update.sh with standard tools like kubectl apply or helm install.
Drive change delivery from git tags/branches using a CI/CD engine like Argo CD, Travis CI or GitHub Actions (a minimal workflow sketch follows this list).
Use a branching strategy so that you can try changes in test/staging environments before delivering them to production.
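As a sketch of the tag-driven delivery idea (the workflow, chart path and values file are hypothetical, and the cluster-credentials step is omitted for brevity):

# .github/workflows/deploy.yaml (hypothetical)
name: deploy-on-tag
on:
  push:
    tags: ["v*"]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # assumes kubeconfig/cluster credentials were configured in an earlier step
      - run: helm upgrade --install myapp ./chart -f values-prod.yaml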