Different deployment configurations using Helm - kubernetes-helm

I would like to have a slightly different deployment configuration in different environments. That is, in Prod and Ver, I don't want all containers to be deployed.
With docker-compose we solve that by having incremental docker-compose files that we combine, like: docker-compose -f docker-compose.yml -f docker-compose-prod.yml up
How can that be done using Helm charts?
We have a structure with Chart.yaml and values.yaml in the top, and then one yaml file per container in a subfolder. The naive solution would be to copy that structure and leave out some of the chart files, but I would prefer to have only one file (at most one file!) per service.
We deploy to AKS using CircleCI.
To summarize:
Today, each service has its own yaml file, and on every deploy, all of them get deployed. I want to configure my charts so that only a subset of the services gets deployed in certain environments.
EDIT:
kubectl has the possibility to use selectors, like kubectl create -f cfg.yaml --selector=tier=frontend or kubectl create -f cfg.yaml --selector=environment=prod, and I already tag my containers, so that would have been simple. But helm install does not accept a similar flag to pass on to kubectl.

Just create one values file for each environment and target those:
helm install . -f values.production.yaml
helm install . -f values.development.yaml
You can use a condition to toggle deployments. Imagine you have a something.yaml which you want conditionally deployed:
{{ if .Values.something }}
something.yaml original content goes here
{{ end }}
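As a concrete sketch (the backoffice service name and enabled flag below are made up for illustration), the per-environment values files enable or disable each optional service, and the corresponding template is wrapped in that flag:
# values.development.yaml
backoffice:
  enabled: true

# values.production.yaml
backoffice:
  enabled: false

# templates/backoffice.yaml
{{- if .Values.backoffice.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backoffice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backoffice
  template:
    metadata:
      labels:
        app: backoffice
    spec:
      containers:
        - name: backoffice
          image: myrepo/backoffice:latest
{{- end }}
With that in place, helm install . -f values.production.yaml simply skips rendering the backoffice manifest.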

Kubernetes single deployment yaml file for spinning up the application

I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.
After the entire configuration of pods and deployments, is there any way to create a single master deployment yaml file which will create the entire set of services, replicas etc. for the entire application?
Note: I will be using multiple deployment yaml files, statefulsets etc. for all of the above mentioned services.
You can use this script:
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"

# For every resource type, list the objects in the namespace
# and dump each one into its own yaml file under <namespace>/<resource>/
for resource in ${RESOURCES}; do
  rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource} | jq '.items[].metadata.name' | sed "s/\"//g")
  for r in ${rsrcs}; do
    dir="${NAMESPACE}/${resource}"
    mkdir -p "${dir}"
    kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} > "${dir}/${r}.yaml"
  done
done
Remember to specify what resources you want exported in the script.
More info here
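Usage is straightforward (assuming kubectl and jq are installed and your kubeconfig points at the right cluster; the file name here is arbitrary): save the script as export-resources.sh, set NAMESPACE, and run it.
bash export-resources.sh
ls your_namespace/deployment/
The exported manifests can then be cleaned up and reused as the basis for your own yaml files or Helm templates.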
Is there any way to create a single master deployment yaml file which will create the entire set of services, replicas etc. for the entire application?
Since you already mentioned kubernetes-helm, why don't you actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes, some say similar to yum or apt. It deploys charts, which you can think of as packaged applications: a bundle of all your pre-configured applications that can be deployed as one unit. It's not literally one file, but rather a collection of files that together build a so-called Helm chart.
What are Helm charts?
They are basically Kubernetes yaml manifests combined into a single package that can be installed into your cluster. Installing the package is as simple as running a single command such as helm install. Once created, the charts are highly reusable, which reduces the time needed to create dev, test and prod environments.
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Basically, once deployed without any custom config, this chart will deploy 2 replicas for each component of StackStorm as well as backends like RabbitMQ, MongoDB and Redis.
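For reference, a chart is just a small directory tree; a minimal sketch (the file names are the conventional ones, the content is up to you) looks like:
mychart/
  Chart.yaml          # chart name, version and description
  values.yaml         # default configuration values
  templates/          # templated Kubernetes manifests
    deployment.yaml
    service.yaml
Running helm install ./mychart renders everything under templates/ with the values and applies it to the cluster as a single release.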

Is there a Helm built-in function that would allow me to write a file within my chart to an external directory?

Our Helm charts create the appropriate k8s manifests and install them into our cluster. Associated with (or packaged within) our Helm chart is a rule file which I would like to be able to write to some external directory.
What we would like is for an integrator to install a number of our Helm charts. Each Helm chart they install would have a rule that gets written to some external directory. After installing all of the associated Helm charts, the integrator would then inspect the rules directory and create a configmap containing those rules.
We can't create the configmap until we have installed all the Helm charts that contribute rules. Is there a Helm built-in that would allow me to write one (or more) files to an external directory?
No. The only outputs of a Helm chart are Kubernetes resources (that are installed in the cluster) and the plain-text rendered output of the NOTES.txt file.
However, you could install this metadata in the cluster. The easiest way would be to create a ConfigMap with the content you need, and then have the integrator process look for those ConfigMaps and combine them.
kubectl get configmap --all-namespaces -l type=rule -o name
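For example, each chart could render a small ConfigMap along these lines (a sketch; the type=rule label, the rules.yaml key and the rules/rules.yaml path inside the chart are assumptions that only need to match whatever the integrator looks for):
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-rules
  labels:
    type: rule
data:
  rules.yaml: |-
{{ .Files.Get "rules/rules.yaml" | indent 4 }}
The integrator then lists those ConfigMaps with the kubectl command above and merges their contents into the combined ConfigMap.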
If you want to do this programmatically, you could write a Kubernetes operator that runs in the cluster, and instead of writing out ConfigMaps, write some kind of custom resource; a controller would watch for those resources appearing using the Kubernetes API, and combine them appropriately.

Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

In my CI/CD, I am:
generating a new image with a unique tag, foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter?
Then, when I re-apply my deployment object as a whole, it will not be in sync with the current replica set: the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
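For example (a sketch; the file name is arbitrary), the CI job could generate a small settings file and reference it at install time:
# ci-values.yaml, written by the CI job
tag: "20191211.01"

helm install myapp --name myapp -f ci-values.yaml
Quoting the tag keeps YAML from parsing it as a number.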
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
  - name: myname/myapp
    newTag: "20191211.01"
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
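A sketch of that CI step (assuming the kustomize CLI is available and the commands run in the directory containing kustomization.yaml; the commit message is illustrative):
kustomize edit set image myname/myapp:20191211.01
git commit -am "Deploy myapp 20191211.01"
kubectl apply -k .
kustomize edit set image rewrites the images: block shown above, so the checked-in configuration always reflects what is running.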
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is good for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or more interestingly kubectl apply -k <dir>, is an example of this workflow. See Declarative Management using Config files or, more interestingly, Declarative Management using Kustomize.
CI/CD for building the image and deployment
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately there is no solution, either from the command line or through the yaml files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger Deployment's rollout.
An approach could be to update the Deployment configuration yaml (or json) under version control in the source repo and apply the changed Deployment configuration from the version control to the cluster.
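One possible shape for that pipeline step (a sketch; the sed-based tag bump, file path and variable name are illustrative, and any templating tool would do) is:
sed -i "s|image: foo:.*|image: foo:${NEW_TAG}|" deployment.yaml
git commit -am "Deploy foo:${NEW_TAG}"
kubectl apply -f deployment.yaml
Because the tag change is committed before kubectl apply, re-applying the file later no longer reverts the image.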

How to bind a Kubernetes resource to a Helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the stateful set to a previous helm release? (eg by specifying some tags in the yaml file)
As far as I know, you cannot do it.
You can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for binding an already-existing resource to a release.

Accessing a Docker Hub registry image from a Helm chart using a deployment YAML file

I am trying to implement a CI/CD pipeline for my microservice using Jenkins, Kubernetes and Kubernetes Helm. I am using a Helm chart to package the YAML files and deploy them into the Kubernetes cluster. While learning how the Helm chart and deployment fit together, I found the image name definition in the deployment YAML file.
I have two questions:
If we only define the image name, will it automatically be pulled from Docker Hub? Or do we need to define anything additional in the deployment chart YAML file for pulling?
How does the Helm Tiller communicate with the Docker Hub registry?
Docker image names in Kubernetes manifests follow the same rules as everywhere else. If you have an image name like postgres:9.6 or myname/myimage:foo, those will be looked up on Docker Hub like normal. If you're using a third-party repository (Google GCR, Amazon ECR, quay.io, ...) you need to include the repository name in the image name. It's the exact same string you'd give to docker run or docker build -t.
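In a pod spec that looks like this (hypothetical names):
containers:
  - name: app
    # Docker Hub image: the same string you would pass to `docker run myname/myimage:foo`
    image: myname/myimage:foo
  - name: sidecar
    # third-party registry: the registry host is part of the image name
    image: quay.io/myname/myimage:foo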
Helm doesn't directly talk to the Docker registry. The Helm flow here is:
The local Helm client sends the chart to the Helm Tiller.
Tiller applies any templating in the chart, and sends it to the Kubernetes API.
This creates a Deployment object with an embedded Pod spec.
Kubernetes creates Pods from the Deployment, which have image name references.
So if your Helm chart names an image that doesn't exist, all of this flow will run normally, until it creates Pods that wind up in ImagePullBackOff state.
P.S.: if you're not already doing this, you should make the image tag (the part after the colon) configurable in your Helm chart, and declare your image name as something like myregistry.io/myname/myimage:{{ .Values.tag }}. Your CD system can then give each build a distinct tag and pass it into helm install. This makes it possible to roll back fairly seamlessly.
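A sketch of that (the tag value and key name are arbitrary):
# values.yaml
tag: latest

# templates/deployment.yaml (image line)
image: myregistry.io/myname/myimage:{{ .Values.tag }}

# CD step
helm install mychart --name myapp --set tag=dev-1339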
Run the command below. It will generate a blank chart with a values.yaml; add key-value pairs inside values.yaml and use them in your deployment.yaml file as variables.
helm create mychart
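For example (a sketch; the key names are illustrative, and helm create also scaffolds its own defaults you can adapt):
# values.yaml
image:
  repository: myname/myimage
  tag: dev-1339

# templates/deployment.yaml (image line)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
You can then override a value per install, e.g. helm install ./mychart --set image.tag=dev-1340.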