Draft and Helm vs Ksonnet? [closed] - kubernetes

As I understand it, all of these tools -- Draft, Helm and Ksonnet -- have overlapping functionality, such as creating a chart as well as deploying Kubernetes configurations.
I understand that the purpose of these tools is to describe and configure the application as well as the k8s environments.
By using Draft we can create a Dockerfile and a Chart; nevertheless, we can do the same thing with Helm and Ksonnet.
My question is: if these components form a pipeline in CI/CD, what would the order be?
for example,
draft -> ksonnet -> helm
or
draft -> helm -> ksonnet

In short, Draft and Helm are more or less complementary, and Ksonnet is orthogonal, essentially providing an alternative to Helm.
In elaborating I will split my answer up into three major sections, the first of which describes how draft and helm interact, the second describing how ksonnet is orthogonal to the others, and finally a section explaining how I understand these with respect to CI/CD.
Helm & Draft
Helm and Draft are complementary in the sense that Helm, which can be considered a package management system for Kubernetes, provides a portion of the functionality offered by Draft, which is itself essentially a Kubernetes application development tool.
The relationship between Draft and Helm can be summarized as follows: in pursuit of its goal of simplifying Kubernetes application development, Draft produces a Helm chart using metadata inferred from your current application type (more about that below) if one does not already exist, or uses an existing one, in order to deploy/update a development version of your application without you having to know anything about how it does that.
Helm for Kubernetes Package Management
As mentioned previously, Helm is a package management system for Kubernetes-based applications. It provides the following features:
A templating approach for defining Kubernetes manifests (called "charts")
Package management, including a basic package repository service to host released packages.
Application lifecycle management including deploy, update, and purging of Helm applications
Package dependencies
Helm takes a templated YAML approach to parameterizing Kubernetes manifests and allows values to be shared and overridden between dependent packages. I.e., suppose Package A depends on Package B; Package A can re-use configuration values set on Package B, and it can override those parameters with values of its own. Values for all packages in a given deployment can also be overridden using the Helm command line tool.
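For illustration, here is a minimal sketch of that value sharing, with hypothetical chart names ("package-a" depending on "package-b") and a hypothetical "replicaCount" value. In the parent chart's values file, values placed under a key named after the subchart override that subchart's own defaults:

# package-a/values.yaml
package-b:
  replicaCount: 3      # overrides package-b's default replicaCount

The same value can be overridden once more at deploy time from the command line, e.g. helm install ./package-a --set package-b.replicaCount=5 (the exact install invocation differs slightly between Helm v2 and v3).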
Also worth mentioning is the fact that Helm depends on the availability of its cluster-side component named "Tiller" to actually do the work of reifying templates and deploying the generated Kubernetes manifests to the cluster.
Draft for Kubernetes Application Development
The aim of Draft is to dramatically simplify development of Kubernetes applications by quickly building and deploying the Helm charts/packages and corresponding docker images necessary to run a project -- provided that the following exist:
A Kubernetes cluster
Helm's Tiller pod installed in the Kubernetes cluster
A Docker registry
The draft installation guide provides details for getting these pieces set up to try it out yourself.
Draft also builds on Helm by providing a high-level "packaging" format that includes both the application helm chart and the Dockerfile, the latter giving it the ability to build docker images.
Finally, it has built-in support for specific programming languages and will, to a limited extent, attempt to infer which programming language and framework(s) you are using when initially creating a new Draft project using draft create.
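As a minimal sketch of that developer loop, assuming the prerequisites above are already in place (the application directory name here is hypothetical):

cd my-node-app/    # an existing application
draft create       # detects the language, generates a Dockerfile and a chart
draft up           # builds and pushes the image, then installs/updates the chart via Helm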
Ksonnet for Kubernetes Package Management
As mentioned previously, Ksonnet is orthogonal in many ways to Helm, providing essentially the same features with respect to package management wrapped in different terminology -- see its core concepts documentation. It's worth noting that it is not compatible with Draft, nor does it address the same concerns.
I say that Ksonnet and Helm are orthogonal because they take mutually incompatible approaches to generating and deploying Kubernetes manifests. Whereas Helm uses templated YAML, Ksonnet generates Kubernetes manifests using a "data templating" language called Jsonnet. Also, rather than conceiving of "dependent" packages as is the case with Helm, Ksonnet blurs the line between dependent services by representing them as composable "prototypes". Finally, rather than depending on a cluster-side application that reifies and deploys manifest templates, Ksonnet has an apply subcommand analogous to kubectl apply.
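A rough sketch of that flow, assuming the ks CLI is installed and a kubeconfig context exists (the application, component, and image names here are placeholders):

ks init my-app --context my-cluster       # scaffold a ksonnet application directory
cd my-app
ks generate deployed-service web --image registry.example.com/web:1.0.0
ks apply default                          # reify the Jsonnet and apply it, much like kubectl apply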
CI/CD
So where do these pieces fit into a CI/CD workflow? Well since there are essentially two mutually incompatible toolsets, let's consider them on a case-by-case basis:
Draft + Helm
According to the Draft design Q&A section, it is meant only as a developer tool intended to abstract much of the complexity of dealing with kubernetes, helm, and docker from developers primarily interested in seeing their application run in a development cluster.
With this in mind, any CD approach involving this set of tools would have to do the following (a rough shell sketch follows the list):
Build docker image(s) using the docker CLI if necessary
Build Helm package(s) using the helm CLI
Deploy Helm package(s) to Helm repository using the helm CLI
Install/update Helm package(s) on the appropriate staging/prod Kubernetes cluster(s) using the helm CLI
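A rough shell sketch of those four steps; the registry/repository URLs, names, and versions are placeholders, and the chart upload step varies with the repository product (a ChartMuseum-style API is assumed here):

docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3
helm package ./charts/myapp --version 1.2.3                                   # produces myapp-1.2.3.tgz
curl --data-binary "@myapp-1.2.3.tgz" https://charts.example.com/api/charts   # ChartMuseum-style upload
helm repo add myrepo https://charts.example.com && helm repo update
helm upgrade --install myapp myrepo/myapp --version 1.2.3 --namespace staging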
Ksonnet
The Ksonnet CD workflow is somewhat abbreviated compared to the helm workflow above:
Build docker image(s) using the docker CLI if necessary
Apply the Ksonnet manifest using the ks CLI
Whereas with Helm you would deploy your application's package to a Helm repository for re-use, with Ksonnet, if your manifest contains re-usable prototypes that might be of use to another Ksonnet-based application, you would want to ensure it is available in a git repo as described in the Ksonnet registry documentation.
This means that how Ksonnet definitions are dealt with in CI/CD is largely dependent on which git repo(s) you decide to store them in and how they are structured.

Related

helm template - Helmfile, which way to go?

I am involved in several projects building CI/CD structures for deployment to Kubernetes with GitOps principles.
Some of the projects started before I joined them, so I could not have much influence on those; on some others I was involved from the start but was not really happy with the end results, so I went searching for what an ideal delivery pipeline for Kubernetes should look like.
After reading several people's proposals and designs, I arrived at a solution like the following.
I try to use best practices on which there is consensus across several sources, along with the principles of the 12 Factor App.
It starts with a Git Repository per Service principle and a Service Pipeline that produces an executable and a Docker Image, pushes the image to a Docker Registry, and builds a Helm Chart (containing the Docker Image id and the configuration that is valid for all environments), which it pushes to a Helm Repository.
So with every commit to a Service Git Repository, the Service Pipeline triggers and produces a new Docker Image and Helm Chart (with only one convention: the Helm Chart version is only increased if there is an actual change to the structure of the Helm templates; placing only a new Docker Image id into the Helm Chart will not bump the Chart version).
A commit to the Service Git Repository would also trigger the Environment Pipeline for the Dev Environment (this is oversimplified to keep the size of the diagram in check; for Feature and Bugfix branches the Environment Pipeline can also create additional Namespaces under the Dev k8s cluster).
At this point comes one big change from my previous production implementations of similar pipelines, and the reason for this question. In those implementations, the Environment Pipeline would get all Service Helm Charts from the Helm Repository with the help of a Helm Umbrella Chart (unlike the diagram below) and execute 'helm upgrade --install appXXX -n dev -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml -f values-s4.yaml -f values-s5.yaml', which works, but the disadvantage is auditing.
We could identify what we deployed at a later point in time by inspecting the k8s cluster, but it would be painful. So my idea is to follow GitOps principles (and many sources agree with me): render the manifests from the Helm Umbrella Chart with 'helm template' during the Environment Pipeline and commit them to the Environment Repository. That way they can be audited much more easily, and I can deploy them with the help of a continuous deployment tool like ArgoCD.
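A minimal sketch of that render-and-commit step inside the Environment Pipeline; the chart, values-file, and repository names follow the example above and are otherwise placeholders:

helm template appXXX ./umbrella-chart -n dev \
  -f values-s1.yaml -f values-s2.yaml -f values-s3.yaml \
  -f values-s4.yaml -f values-s5.yaml > environment-repo/rendered/dev/manifests.yaml
cd environment-repo
git add rendered/dev/manifests.yaml
git commit -m "Render dev manifests for appXXX"
git push          # ArgoCD then syncs the Dev cluster from this repository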
Now that I have explained the preconditions, we arrive at my actual question. I also read, from the same sources mentioned above, that 'helmfile' is an awesome tool; the documentation shows really nice features for preventing boilerplate code. But considering that I plan to synchronise the state in the Environment Git Repository with ArgoCD, I will not use 'helmfile sync', and 'helm template' does basically what 'helmfile template' does, so is using 'helmfile' in this workflow overkill? Additionally, I think the concept of Helmfile's 'environment.yaml' collides with what I try to achieve with the Environment Git Repository.
And secondly, if I do decide to use 'helmfile', mainly because of the awesome extra templating functions that prevent boilerplate, how should I integrate it with ArgoCD? It seems it could previously be integrated via:
data:
  configManagementPlugins: |
    - name: helmfile
      generate:
        command: ["/bin/sh", "-c"]
        args: ["helmfile -q template --include-crds --skip-tests"]
but it seems 'configManagementPlugins' is now deprecated, so how should I integrate it with ArgoCD?
Thx for answers.

Helm Chart: How do I install dependencies first?

I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my helm chart and would like the dependency to be explicitly installed first. I followed the helm documentation and added the following to my Chart.yaml
dependencies:
  - name: strimzi-kafka-operator
    version: 0.16.2
    repository: https://strimzi.io/charts/
I ran:
$ helm dep up ./prototype-chart
$ helm install ./prototype-chart
> Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta1"
which shows that it's trying to deploy my chart before my dependency. What is the correct way to install dependencies first and then my parent chart?
(For reference, here is the question I opened on GitHub directly with Strimzi where they informed me they aren't sure how to use their helm as a dependency:
https://github.com/strimzi/strimzi-kafka-operator/issues/2552
)
Regarding CRD's: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present; but it won't modify or delete existing CRD's. The previous version of Helm (v2) does, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRD's, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider a team that has a "library" of CRDs shared between several charts, where you want to uninstall one chart: formerly, with v2, Helm would happily let you modify or even delete those CRDs at will, with no checks on if/how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such: you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a "cluster administrator". At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, with confidence that it won't break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete/modify a CRD and break things in production :)
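A minimal sketch of that split, in terms of the chart from the question; the CRD manifest path is hypothetical, so use whatever manifests the operator project publishes (or place them under the chart's crds/ directory, which Helm 3 installs before rendering templates):

kubectl apply -f ./strimzi-crds/            # cluster admin installs/updates the CRDs first
helm dep up ./prototype-chart
helm install prototype ./prototype-chart    # the release can now rely on KafkaTopic existing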
See also the extensive discussion here for more context behind this decision.
Hope this helps!

Is there a concept of inheritance for Kubernetes deployments?

Is there a way to create a tree of inheritance for Kubernetes deployments? I have a number of deployments which are similar but not identical. They share many ENV vars but not all. They all use the same image.
For example, I have a dev deployment which is configured almost identically to a production deployment but has env vars pointing to a different database backend. I have a celery deployment which is configured the same as the production deployment; however, it has a different run command.
Helm is what many people are using for this. It lets you create templates for kubernetes descriptors and pass parameters in to generate descriptors from the templates.
There are other tools out there which can be used to generate variations on kubernetes deployment descriptors by injecting parameters into templates. Ansible is also popular. But Helm is closely connected with the Kubernetes CNCF and community and there's a good selection of official charts available.
EDIT: If the aim is to enable different deployments (e.g. for dev and prod) using a single docker image then that's probably best handled with a single chart. You can create different values files for each deployment and supply the chosen values file to helm install with the --values parameter. If there are parts of the chart that are only sometimes applicable then they can be wrapped in if conditions to turn them on/off.
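As a minimal sketch of that per-deployment values approach, with hypothetical file and value names:

# values-dev.yaml
databaseUrl: postgres://dev-db:5432/app
celery:
  enabled: false

# values-prod.yaml
databaseUrl: postgres://prod-db:5432/app
celery:
  enabled: true     # e.g. the celery Deployment template is wrapped in {{ if .Values.celery.enabled }}

Then helm install ./mychart --values values-dev.yaml and helm install ./mychart --values values-prod.yaml produce the two deployments from the same chart.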
On the subject of inheritance specifically, there's an example in the helm documentation of how to take another chart as a parent/dependency and override its values, and I created a chart earlier that you can see on github that includes several other charts and overrides parts of all of them via the values.yml. It also shares some config between the included charts with globals. If you're looking to use a parent to reduce duplication rather than join multiple apps, then it is possible to create a base/wrapper chart, but it may turn out to be better to just duplicate config.
EDIT (180119): The alternative of Kustomize may soon become available in kubectl
You may also want to check Kustomize. It provides some support to write your yaml manifests in hierarchical form, so that you don't have to repeat yourself.
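A minimal Kustomize sketch of that hierarchy, with illustrative directory and file names (older Kustomize versions spell the base reference as 'bases:' rather than 'resources:'):

# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

# overlays/dev/kustomization.yaml
resources:
  - ../../base
patchesStrategicMerge:
  - env-vars.yaml       # only the env vars that differ from the base

Rendering is then e.g. kustomize build overlays/dev | kubectl apply -f - (or kubectl apply -k overlays/dev once the kubectl integration mentioned above is available).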

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just a helm-client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml  # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml     # postgres schema migration
      update.sh    # k8s API server scripts to patch/replace existing k8s objects, and runs the state change job
The usual git stuff applies now. Every time I make a change, I update the spec files, test them, write the update scripts to help move from the last version to this current version, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (with references to container images built for our component releases).
In other words, the Helm package definitions and its dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Checkout this answer to see details on why it needs tiller in Helm v2 and why it's removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a right way (performing releases from a git repo). However, if you want to push it further and make it more general, you can plan for more goals, including:
Choose a configuration management system beyond k8s app declarative definition only. E.g., Helm (like above answer https://stackoverflow.com/a/42053983/914967), Kustomize. They're pure client-side only.
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install (a minimal sketch follows this list).
Drive change delivery from git tags/branches by using a CI/CD engine like Argo CD, Travis CI or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging/production environments before delivering them directly.
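As an illustration of the second point, a minimal sketch of what a CI job triggered by a git tag could run instead of update.sh (the release, path, and namespace names are placeholders):

kubectl apply -f k8s/ --namespace staging
# or, if the manifests are packaged as a chart:
helm upgrade --install product-release ./chart -f values-staging.yaml --namespace staging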

How should I manage deployments with kubernetes

I am hoping to find a good way to automate the process of going from code to a deployed application on my kubernetes cluster.
In order to build and deploy my app I need to first build the docker image, tag it, and then push it to ECR. I then need to update my deployment.yaml with the new tag for the docker image and run the deployment with kubectl apply -f deployment.yaml.
This will perform a rolling deployment on the kubernetes cluster, updating the pods to the new version of the container image. Once this deployment has completed, I may need to do other application-specific things such as running database migrations, or cache clearing/warming, which may or may not need to run for a given deployment.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
As I was writing this question I noticed Stack Overflow recommended this question: Kubernetes Deployments. One of the answers to it seems to imply that at least some of what I am looking for is coming soon to kubernetes, but I want to make sure that if there is a better solution I could be using now, I at least know about it.
My colleague has a good blog post about this topic:
http://blog.jonparrott.com/building-a-paas-on-kubernetes/
Basically, Kubernetes is not a Platform-as-a-Service, it's a toolkit on which you can build your own Platform-as-a-Service. It's not very opinionated by design; instead it focuses on solving some tricky problems with scheduling, networking, and coordinating containers, and lets you layer in your opinions on top of it.
One of the simplest ways to automate the workflows you're describing is using a Makefile.
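For example, a minimal shell sketch of such an automation step (a Makefile target would look much the same); the registry, image, and file names, and the IMAGE_TAG placeholder assumed to exist in deployment.yaml, are hypothetical:

IMAGE=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp
TAG=$(git rev-parse --short HEAD)

docker build -t "$IMAGE:$TAG" .
docker push "$IMAGE:$TAG"
sed "s|IMAGE_TAG|$TAG|" deployment.yaml | kubectl apply -f -    # substitute the new image tag
kubectl rollout status deployment/myapp                         # wait for the rolling update to finish
kubectl apply -f migration-job.yaml                             # optional post-deploy step, e.g. schema migration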
A step up from that, you can design your own miniature PaaS, which the author of the first blog post did here:
https://github.com/jonparrott/noel
Or, you could get involved in more sophisticated efforts to build an open source PaaS on Kubernetes, like OpenShift:
https://www.openshift.com/
or Deis, which is building a Heroku-like platform on Kubernetes:
https://deis.com/
or Redspread, which is building "Git for Kubernetes cluster":
https://redspread.com/
and there are many other examples of people building PaaS on top of Kubernetes. But I think it will be a long time, if ever, that there is an "industry standard" way to deploy to Kubernetes, since half the purpose is to enable multiple deployment workflows for different use cases.
I do want to note that as far as building container images, Google Cloud Container Builder can be a useful tool, since you can do things like use it to automatically build an image any time you push to a repository which could then get deployed. Alternatively, Jenkins is a popular way to automate CI/CD flows with Kubernetes.
I suppose I could just write a shell script that runs all of these commands, and run it whenever I want to start up a new deployment, but I am hoping there is a better/industry standard way to solve these problems that I have missed.
The company I work for (Weaveworks) and other folks in the space have been advocating for an approach that we call GitOps; please take a look at our series of blog posts covering the topic:
GitOps - Operations by Pull Request
The GitOps Pipeline - Part 2
GitOps Part 3 - Observability
Storing Secure Sealed Secrets using GitOps
The gist of it is that you push images from CI and keep your checked-in YAML manifests in git (usually in a different repo from the app code). This repo with manifests is then applied to each of your clusters (dev/prod) by a reconciliation operator. You can automate it all yourself quite easily, but also do take a look at what we have built.
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We're working on an open source project called Jenkins X which is a proposed sub project of the Jenkins foundation aimed at automating CI/CD on Kubernetes using Jenkins and GitOps for promotion.
When you merge a change to the master branch, Jenkins X creates a new semantically versioned distribution of your app (pom.xml, jar, docker image, helm chart). The pipeline then automates the generation of Pull Requests to promote your application through all of the Environments via GitOps.
Here's a demo of how to automate CI/CD with multiple environments on Kubernetes using GitOps for promotion between environments and Preview Environments on Pull Requests - using Spring Boot and nodejs apps (but we support many languages + frameworks).