I am involved in several projects building CI/CD structures for deployment to Kubernetes following GitOps principles.
Some of the projects started before I joined them, so I could not have much influence on those; in others I was involved from the start, but I was not really happy with the end results, so I went searching for what an ideal delivery pipeline for Kubernetes should look like.
After reading several people's proposals and designs, I arrived at a solution like the following.
I try to use best practices on which several sources agree, together with the principles of the Twelve-Factor App.
It starts with the Git-repository-per-service principle: a Service Pipeline produces an executable, builds a Docker image and pushes it to a Docker registry, then packages a Helm chart (containing the Docker image id and the configuration that is valid for all environments) and pushes it to a Helm repository.
So with every commit to a Service Git Repository, the Service Pipeline triggers and produces a new Docker image and Helm chart. (There is only one convention: the Helm chart version is increased only if there is an actual change to the structure of the Helm templates; merely placing a new Docker image id into the chart does not bump the chart version.)
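To make that convention concrete, here is a minimal Chart.yaml sketch (the chart name and image id are placeholders, and using appVersion for the image id is one possible way to express this): Helm already separates the chart version from the application version, so the image id can change on every commit without bumping the chart version.

apiVersion: v2
name: service-s1          # placeholder chart name
version: 1.4.0            # bumped only when the Helm template structure actually changes
appVersion: "3f9c2ab"     # the Docker image id; updated on every commit, does not bump the chart version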
A commit to the Service Git Repository also triggers the Environment Pipeline for the Dev environment (this is oversimplified to keep the size of the diagram in check; for feature and bugfix branches the Environment Pipeline can also create additional namespaces under the Dev k8s cluster).
At this point comes one big change from my previous production implementations of similar pipelines, and the reason for this question. In those implementations, the Environment Pipeline would get all service Helm charts from the Helm repository via a Helm umbrella chart (unlike the diagram below) and execute 'helm upgrade --install appXXX -n dev -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml', which works, but with the disadvantage of poor auditability.
We could identify what was deployed at a later point in time by inspecting the k8s cluster, but it would be painful. So my idea, following GitOps principles (and many sources agree with me), is to render the manifests from the Helm umbrella chart with 'helm template' during the Environment Pipeline and commit them to the Environment Repository. That way, first, the manifests can be audited much more easily, and second, I can deploy them with a continuous deployment tool like ArgoCD.
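As a sketch, the render-and-commit step of the Environment Pipeline could look like this (GitLab CI syntax is assumed; the chart, repo and file names are placeholders):

render-dev:
  stage: render
  script:
    - git clone https://git.example.com/team/environment-repo.git          # the Environment Git Repository
    - helm dependency update umbrella-chart/                               # pulls the service charts from the Helm repository
    - helm template appXXX umbrella-chart/ -n dev -f values-s1.yaml -f values-s2.yaml > environment-repo/dev/manifests.yaml
    - cd environment-repo
    - git add dev/manifests.yaml
    - git commit -m "render dev manifests for $CI_COMMIT_SHORT_SHA"
    - git push                                                             # ArgoCD syncs the cluster from this repo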
Now that I have explained the preconditions, we arrive at my actual question. The same sources I mentioned also recommend 'helmfile', which looks like an awesome tool from its documentation, with really nice features for preventing boilerplate. But considering that I am planning to synchronise the state in the Environment Git Repository with ArgoCD, I will not use 'helmfile sync', and 'helm template' does basically what 'helmfile template' does. So is using 'helmfile' in this workflow overkill? Additionally, I think helmfile's 'environment.yaml' concept collides with what I try to achieve with the Environment Git Repository.
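For context, this is the helmfile environments concept I mean, as a minimal helmfile.yaml sketch (release and file names are placeholders); the per-environment values files are exactly what would overlap with the Environment Git Repository:

environments:
  dev:
    values:
      - environments/dev.yaml    # per-environment configuration lives here
releases:
  - name: s1
    chart: myrepo/s1             # chart pulled from a named Helm repository
    version: 1.4.0
    values:
      - values-s1.yaml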
And secondly, if I do decide to use 'helmfile', mainly because of the awesome extra templating functions that prevent boilerplate, how should I integrate it with ArgoCD? It seems it could previously be integrated via...
data:
  configManagementPlugins: |
    - name: helmfile
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helmfile -q template --include-crds --skip-tests"]
but it seems 'configManagementPlugins' is now deprecated. How should I integrate it with ArgoCD?
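For reference, the replacement documented by ArgoCD (v2.4+) is a sidecar-based Config Management Plugin; a minimal sketch, with the image and resource names as placeholders:

# plugin.yaml, shipped to the repo-server sidecar via a ConfigMap
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: helmfile
spec:
  generate:
    command: ["/bin/sh", "-c"]
    args: ["helmfile -q template --include-crds --skip-tests"]
---
# patch for the argocd-repo-server Deployment: run the plugin as a sidecar
spec:
  template:
    spec:
      containers:
        - name: helmfile-plugin
          image: example/helmfile-cmp:latest            # any image containing helmfile
          command: [/var/run/argocd/argocd-cmp-server]  # provided through the shared var-files volume
          securityContext:
            runAsNonRoot: true
            runAsUser: 999
          volumeMounts:
            - name: var-files
              mountPath: /var/run/argocd
            - name: plugins
              mountPath: /home/argocd/cmp-server/plugins
            - name: helmfile-plugin-config
              mountPath: /home/argocd/cmp-server/config/plugin.yaml
              subPath: plugin.yaml
      volumes:
        - name: helmfile-plugin-config
          configMap:
            name: helmfile-plugin-config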
Thx for answers.
I have a lot of cronjobs I need to run on Kubernetes.
I want a single file to manage them all and apply them to Kubernetes on deployment; if I remove a cron from that file, it should be removed from Kubernetes too.
Basically, I want to handle the crons the way I handle them today on a machine (from a cron file that I would deploy): add, remove and change crons.
I couldn't find a way of doing so. Does someone have an idea?
A library or framework I could use, like Helm? Or any other solution.
I highly recommend using GitOps with ArgoCD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea, because it is hard to monitor your job results (CronJob results can be collected by the kube-state-metrics exporter).
The idea is: package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.) -> put them in Git -> ArgoCD makes sure your configuration is deployed correctly.
The advantages of GitOps include:
centralize your configuration
versioning your configuration
git authentication & authorization
traceable
multi-cluster deployment with ArgoCD
automated deployment & sync
...
GitOps is not difficult and is the modern way to do Kubernetes configuration management. Give it a try!
I used Helm to do so. I built a template that iterates over all the crons, which I supply as values to the Helm chart (very similar to crontab, but more structured) - see the example.
Then all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also add a namespace to your cronjobs to make them more encapsulated.
Here is a very good and easy-to-understand example I used. And its git repo
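A minimal sketch of that pattern (all names are illustrative): a values file listing the crons, and a template that renders one CronJob per entry, so removing an entry and running helm upgrade removes the CronJob from the cluster.

# values.yaml
crons:
  - name: cleanup
    schedule: "0 3 * * *"
    image: busybox:1.36
    command: ["/bin/sh", "-c", "echo cleaning up"]

# templates/cronjobs.yaml
{{- range .Values.crons }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .name }}
spec:
  schedule: {{ .schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: {{ .name }}
              image: {{ .image }}
              command: {{ toJson .command }}
{{- end }}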
I would like to implement continuous deployment on my Kubernetes-based infrastructure and I'm looking for advice. I already use a CI tool. All the manifests are currently stored in git, the same way one would store them to use GitOps.
From my research, I see 3 ways to implement continuous deployment:
write and maintain homemade scripts (basically run kubectl apply -f or helm install)
use a comprehensive CI/CD tool (like GitLab)
use a dedicated CD tool (like Spinnaker, ArgoCD, ...)
Could you explain which option you chose and why? And are you satisfied with it, or do you think you will change in the future?
Thank you very much for your answers 🙂
There is not much difference between your options. CI/CD services are triggered from your Git repository service with a hook. Your CI/CD pipeline, e.g. GitLab CI/CD or ArgoCD, will then apply your config with e.g. kubectl apply -k somepath/, using Kustomize for environment parameters (or alternatively with Helm).
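A minimal sketch of the Kustomize layout that command assumes (paths and image names are placeholders):

# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

# overlays/dev/kustomization.yaml -- applied with: kubectl apply -k overlays/dev/
resources:
  - ../../base
images:
  - name: myapp          # override the image tag per environment
    newTag: "1.2.3"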
As I understand it, all of these tools (Draft, Helm and Ksonnet) have overlapping functionality, such as creating a chart as well as deploying Kubernetes configurations.
I understand that the purpose of these tools is to describe and configure the application as well as the k8s environments.
With Draft we can create a Dockerfile and a chart. Nevertheless, we can do the same thing with Helm and Ksonnet.
My question is: if these components form a pipeline in CI/CD, what would the order be?
For example,
draft -> ksonnet -> helm
or
draft -> helm -> ksonnet
In short, Draft and Helm are more or less complementary, and Ksonnet is orthogonal, specifically providing an alternative to Helm.
In elaborating I will split my answer up into three major sections, the first of which describes how draft and helm interact, the second describing how ksonnet is orthogonal to the others, and finally a section explaining how I understand these with respect to CI/CD.
Helm & Draft
Helm and Draft are complementary in the sense that Helm, which can be considered a package management system for Kubernetes, provides a portion of the functionality offered by Draft, which itself is essentially a Kubernetes application development tool.
The relationship between Draft and Helm can be summarized by pointing out that, in pursuit of its goal of simplifying Kubernetes application development, Draft produces a Helm chart using metadata inferred from your current application type (more about that below) if one does not already exist, or uses an existing one, in order to deploy/update a development version of your application without you having to know anything about how it does that.
Helm for Kubernetes Package Management
As mentioned previously, Helm is a package management system for Kubernetes-based applications. It provides the following features:
A templating approach for defining Kubernetes manifests (called "charts")
Package management, including a basic package repository service to host released packages.
Application lifecycle management including deploy, update, and purging of Helm applications
Package dependencies
Helm takes a templated YAML approach to parameterizing Kubernetes manifests and allows values to be shared and overridden between dependent packages. I.e., suppose Package A depends on Package B; Package A can re-use configuration values set on Package B, and it can override those parameters with values of its own. Values for all packages in a given deployment can also be overridden using the Helm command line tool.
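A minimal sketch of that override mechanism, using current Helm 3 syntax (Helm 2 kept dependencies in requirements.yaml); the package names are illustrative:

# package-a/Chart.yaml -- declares the dependency
dependencies:
  - name: package-b
    version: 1.0.0
    repository: https://charts.example.com

# package-a/values.yaml -- overrides Package B's defaults under its name
package-b:
  replicaCount: 3

# and the command line can override everything at deploy time:
#   helm install my-app ./package-a --set package-b.replicaCount=5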
Also worth mentioning is the fact that Helm depends on the availability of its cluster-side component named "Tiller" to actually do the work of reifying templates and deploying the generated Kubernetes manifests to the cluster.
Draft for Kubernetes Application Development
The aim of Draft is to dramatically simplify development of Kubernetes applications by quickly building and deploying the Helm charts/packages and corresponding Docker images necessary to run a project -- provided that the following exist:
A Kubernetes cluster
Helm's Tiller pod installed in the Kubernetes cluster
A Docker registry
The draft installation guide provides details for getting these pieces set up to try it out yourself.
Draft also builds on Helm by providing a high-level "packaging" format that includes both the application helm chart and the Dockerfile, the latter giving it the ability to build docker images.
Finally, it has built-in support for specific programming languages and will to a limited extent attempt to infer which programming language and framework(s) you are using when initially creating a new Draft project using draft create.
Ksonnet for Kubernetes Package Management
As mentioned previously, Ksonnet is orthogonal in many ways to Helm, providing essentially the same features with respect to package management wrapped in different terminology -- see its core concepts documentation. It's worth noting that it is not compatible with nor does it address the same concerns as Draft.
I say that Ksonnet and Helm are orthogonal because they take mutually incompatible approaches to generating and deploying Kubernetes manifests. Whereas Helm uses templated YAML, Ksonnet generates Kubernetes manifests using a "data templating" language called Jsonnet. Also, rather than conceiving of "dependent" packages as is the case with Helm, Ksonnet blurs the line between dependent services by representing them as composable "prototypes". Finally, rather than depending on a cluster-side application that reifies and deploys manifest templates, Ksonnet has an apply subcommand analogous to kubectl apply.
CI/CD
So where do these pieces fit into a CI/CD workflow? Well since there are essentially two mutually incompatible toolsets, let's consider them on a case-by-case basis:
Draft + Helm
According to the Draft design Q&A section, it is meant only as a developer tool intended to abstract much of the complexity of dealing with kubernetes, helm, and docker from developers primarily interested in seeing their application run in a development cluster.
With this in mind, any CD approach involving this set of tools would have to do the following:
Build docker image(s) using the docker CLI if necessary
Build Helm package(s) using the helm CLI
Deploy Helm package(s) to Helm repository using the helm CLI
Install/update Helm package(s) on the appropriate staging/prod Kubernetes cluster(s) using the helm CLI
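Those four steps could be wired up roughly like this (a sketch in GitLab CI syntax using current Helm 3 OCI commands; registry, chart and release names are placeholders):

release:
  stage: deploy
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA
    - helm package myapp-chart/ --version 1.0.0-$CI_COMMIT_SHORT_SHA
    - helm push myapp-chart-1.0.0-$CI_COMMIT_SHORT_SHA.tgz oci://registry.example.com/charts
    - helm upgrade --install myapp oci://registry.example.com/charts/myapp-chart --version 1.0.0-$CI_COMMIT_SHORT_SHA -n prod --set image.tag=$CI_COMMIT_SHA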
Ksonnet
The Ksonnet CD workflow is somewhat abbreviated compared to the helm workflow above:
Build docker image(s) using the docker CLI if necessary
Apply the Ksonnet manifest using the ks CLI
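As a sketch, that abbreviated flow is just two steps in CI (GitLab CI syntax; names are placeholders, and "prod" is assumed to be a ksonnet environment configured with ks env add):

deploy:
  stage: deploy
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA
    - ks apply prod    # renders the Jsonnet manifests and applies them to the "prod" environment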
Whereas with Helm you would deploy your application's package to a Helm registry for re-use, if your Ksonnet manifest contains re-usable prototypes that might be of use to another Ksonnet-based application, you would want to ensure it is available in a git repo as described in the Ksonnet registry documentation.
This means that how Ksonnet definitions are dealt with in CI/CD is largely dependent on which git repo(s) you decide to store them in and how they are structured.
I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml  # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml    # postgres schema migration
      update.sh   # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual git stuff applies now. Every time I make a change, I modify the spec files, test them, write the update scripts to help move from the last version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does Helm have the Tiller server? Isn't it simpler to do the templating on the client side? Of course, if you want to separate the activity of deployment from knowledge of the application (like secrets), the templating would have to happen on the server; but otherwise, why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it will be production-ready, at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (with references to container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is the right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you can plan for more goals, including:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (like the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are pure client-side tools.
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install.
Drive change delivery from git tags/branches using a CI/CD engine like ArgoCD, Travis CI or GitHub Actions (see the sketch after this list).
Use a branching strategy so that you can try changes in test/staging environments before delivering them to production directly.
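For the third point, a minimal sketch of an ArgoCD Application that watches a git repo and keeps the k8s/ directory in sync automatically (the repo URL and names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: product-release
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/product-release.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true       # remove cluster objects that were deleted from git
      selfHeal: true    # revert manual drift in the cluster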
I am hoping to find a good way to automate the process of going from code to a deployed application on my kubernetes cluster.
In order to build and deploy my app I need to first build the docker image, tag it, and then push it to ECR. I then need to update my deployment.yaml with the new tag for the docker image and run the deployment with kubectl apply -f deployment.yaml.
This will perform a rolling deployment on the Kubernetes cluster, updating the pods to the new version of the container image. Once this deployment has completed, I may need to do other application-specific things, such as running database migrations or cache clearing/warming, which may or may not need to run for a given deployment.
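Concretely, the steps described above amount to something like this (a sketch in CI YAML; the repository variable, container name and migration script are placeholders):

deploy:
  stage: deploy
  script:
    - docker build -t $ECR_REPO:$CI_COMMIT_SHA .
    - docker push $ECR_REPO:$CI_COMMIT_SHA
    - kubectl set image deployment/myapp app=$ECR_REPO:$CI_COMMIT_SHA   # equivalent to editing deployment.yaml and re-applying it
    - kubectl rollout status deployment/myapp                           # block until the rolling update completes
    - ./run-migrations.sh                                               # placeholder for app-specific post-deploy steps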
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
As I was writing this question, I noticed Stack Overflow recommended this question: Kubernetes Deployments. One of the answers to it seems to imply that at least some of what I am looking for is coming soon to Kubernetes, but I want to make sure that if there is a better solution I could be using now, I at least know about it.
My colleague has a good blog post about this topic:
http://blog.jonparrott.com/building-a-paas-on-kubernetes/
Basically, Kubernetes is not a Platform-as-a-Service, it's a toolkit on which you can build your own Platform-as-a-Service. It's not very opinionated by design; instead it focuses on solving some tricky problems with scheduling, networking, and coordinating containers, and lets you layer in your opinions on top of it.
One of the simplest ways to automate the workflows you're describing is using a Makefile.
A step up from that, you can design your own miniature PaaS, which the author of the first blog post did here:
https://github.com/jonparrott/noel
Or, you could get involved in more sophisticated efforts to build an open source PaaS on Kubernetes, like OpenShift:
https://www.openshift.com/
or Deis, which is building a Heroku-like platform on Kubernetes:
https://deis.com/
or Redspread, which is building "Git for Kubernetes cluster":
https://redspread.com/
and there are many other examples of people building PaaS on top of Kubernetes. But I think it will be a long time, if ever, that there is an "industry standard" way to deploy to Kubernetes, since half the purpose is to enable multiple deployment workflows for different use cases.
I do want to note that as far as building container images, Google Cloud Container Builder can be a useful tool, since you can do things like use it to automatically build an image any time you push to a repository which could then get deployed. Alternatively, Jenkins is a popular way to automate CI/CD flows with Kubernetes.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
The company I work for (Weaveworks) and other folks in the space have been advocating an approach that we call GitOps; please take a look at our series of blog posts covering the topic:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
Storing Secure Sealed Secrets using GitOps
The gist of it is that you push images from CI and keep your checked-in YAML manifests in git (usually in a different repo from the app code). This repo with manifests is then applied to each of your clusters (dev/prod) by a reconciliation operator. You can automate it all yourself quite easily, but also do take a look at what we have built.
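As an illustration of such a reconciliation operator, here is a minimal sketch using the current Flux (v2) API, which postdates these posts; the repo URL and paths are placeholders:

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-manifests    # the config repo, separate from app code
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-manifests
  path: ./clusters/prod
  prune: true      # delete cluster objects that were removed from git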
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We're working on an open source project called Jenkins X which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).