Difference between namespace install vs managed namespace install in Argo Workflows? - argo-workflows

I am trying to install Argo Workflows, and looking at the documentation I can see three different types of installation: https://argoproj.github.io/argo-workflows/installation/.
Can anybody give some clarity on the namespace install vs. the managed namespace install? If it's a managed namespace install, how do I specify the managed namespace? Should I edit the Kubernetes manifests for the deployments? And what benefit does it provide compared to a simple namespace install?

A namespace install allows Workflows to run only in the namespace where Argo Workflows is installed.
A managed namespace install allows Workflows to run only in one namespace besides the one where Argo Workflows is installed.
Using a managed namespace install might make sense if you want some users/processes to be able to run Workflows without granting them any privileges in the namespace where Argo Workflows is installed.
For example, if I only run CI/CD-related Workflows that are maintained by the same team that manages the Argo Workflows installation, it's probably reasonable to use a namespace install. But if all the Workflows are run by a separate data science team, it probably makes sense to give them a data-science-workflows namespace and run a "managed namespace install" of Argo Workflows from another namespace.
To configure a managed namespace install, edit the workflow-controller and argo-server Deployments to pass the --managed-namespace argument.
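For illustration, a minimal sketch of the relevant fragment of the workflow-controller Deployment (the argo-server Deployment takes the same flag); the managed namespace name here is just an example, and it assumes you started from the namespace-install manifests, which already pass --namespaced:

    # Sketch only: args for the workflow-controller container, managing a
    # hypothetical "data-science-workflows" namespace.
    spec:
      template:
        spec:
          containers:
            - name: workflow-controller
              args:
                - --namespaced
                - --managed-namespace
                - data-science-workflows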
You can currently only configure one managed namespace, but in the future it may be possible to manage more than one.

Related

Kubernetes - Handle cronjobs like crontab

I have a lot of cronjobs I need to set on Kubernetes.
I want a file to manage them all and set them to Kubernetes on deployment. I wish that if I remove a cron from that file it will be removed from Kubernetes too.
Basically, I want to handle the crons the way I handle them today on the machine (from a cron file that I would deploy): add, remove, and change crons.
I couldn't find a way of doing so. Does someone have an idea?
Is there a library or framework I can use, like Helm? Or any other solution?
I highly recommend using GitOps with Argo CD as a solution for Kubernetes configuration management. Running crontab inside a Deployment is a bad idea because it is hard to monitor the job results (CronJob results can be collected by the kube-state-metrics exporter).
The idea is to package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.), put them in Git, and let Argo CD make sure your configuration is deployed correctly.
The advantages of GitOps include:
centralized configuration
versioned configuration
Git authentication & authorization
traceability
multi-cluster deployment with Argo CD
automated deployment & sync
...
GitOps is not difficult and is the modern way to manage Kubernetes configuration. Give it a try.
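As a rough sketch of what the Argo CD side can look like (the repo URL, path, and names below are made up for illustration), an Application resource points Argo CD at the Git directory holding your CronJob manifests and keeps the cluster in sync with it:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cronjobs
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/k8s-config.git   # illustrative repo
        targetRevision: main
        path: cronjobs                                       # directory holding the CronJob manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: batch-jobs
      syncPolicy:
        automated:
          prune: true      # removing a manifest from Git also removes it from the cluster
          selfHeal: true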
I used Helm to do this. I built a template that loops over all the crons, which I insert as values into the Helm chart (very similar to a crontab, but more structured) - see the example.
Then all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also put your cronjobs in their own namespace to keep them more encapsulated.
Here is a very good and easy-to-understand example I used, and its Git repo.
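As a minimal sketch of that approach (the names, images, and schedules below are made up), the values file lists the crons and a single template loops over them to emit one CronJob each:

    # values.yaml -- one entry per cron, similar in spirit to a crontab
    cronjobs:
      - name: cleanup
        schedule: "0 3 * * *"
        image: busybox:1.36
        command: ["sh", "-c", "echo running cleanup"]
      - name: report
        schedule: "*/30 * * * *"
        image: busybox:1.36
        command: ["sh", "-c", "echo sending report"]

    # templates/cronjobs.yaml -- renders one CronJob per values entry
    {{- range .Values.cronjobs }}
    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: {{ .name }}
    spec:
      schedule: {{ .schedule | quote }}
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: {{ .name }}
                  image: {{ .image }}
                  command: {{ toJson .command }}
    {{- end }}

Running helm upgrade with an updated values file then adds, changes, or removes the corresponding CronJobs.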

Use existing resources for a new Kustomize installation? (kubeflow)

I am trying to install kubeflow pipelines (KFP) for kubeflow on AWS, as shown here. I am using an overlay for some simple labeling and other cosmetic changes. Installing KFP in the way shown in the documentation will also deploy instances of argo and other necessary services. I already have an instance of argo running on the cluster, so how can I point KFP at that installation of argo instead of deploying a duplicate instance?

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry run. kubectl --dry-run and kubectl diff don't really help, since they only produce a client-side diff; server-side diff functionality is currently in alpha.
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scaleable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question, so I'll answer with an opinion. If you want to manage configuration, you can try some of these tools:
If you want to keep using your existing YAML files (configurations) and add something at a higher level, you can try Kustomize (see the sketch after this list).
If you want to manage Kubernetes configurations using Jsonnet, you should take a look at Ksonnet. Keep in mind that Ksonnet will not be supported in the future.
If you just want to run helm update automatically, there is no tool for that yet; you will have to build something yourself to orchestrate everything. For example, we ended up creating an in-house tool that does this.
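For the Kustomize option, a minimal sketch (the file and label names are illustrative): a kustomization.yaml lists your existing manifests and layers changes on top of them without modifying the originals:

    # kustomization.yaml
    namespace: my-app                 # applied to all listed resources
    commonLabels:
      app.kubernetes.io/part-of: my-app
    resources:
      - deployment.yaml               # your existing, unmodified YAML files
      - service.yaml
    images:
      - name: my-app                  # swap the image tag per environment
        newTag: "1.2.3"

It can then be applied with kubectl apply -k (or kustomize build piped into kubectl apply).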

Draft and Helm vs Ksonnet? [closed]

As I understand it, these tools (Draft, Helm, and Ksonnet) have overlapping functionality, such as creating a chart as well as deploying Kubernetes configurations.
I understand that the purpose of these tools is to describe and configure the application as well as the k8s environments.
With Draft we can create a Dockerfile and a Chart. Nevertheless, we can do the same thing with Helm and Ksonnet.
My question is If these components create a pipeline in CI/CD then what will be the order?
for example,
draft -> ksonnet -> helm
or
draft -> helm -> ksonnet
In short, draft and helm are more or less complementary and ksonnet is orthogonal, specifically providing an alternative to helm.
In elaborating I will split my answer up into three major sections, the first of which describes how draft and helm interact, the second describing how ksonnet is orthogonal to the others, and finally a section explaining how I understand these with respect to CI/CD.
Helm & Draft
Helm and Draft are complementary in the sense that Helm, which can be considered a package management system for Kubernetes, provides a portion of the functionality offered by Draft, which itself is essentially a Kubernetes application development tool.
The relationship between Draft and Helm can be summarized as follows: in pursuit of its goal of simplifying Kubernetes application development, Draft produces a Helm chart using metadata inferred from your current application type (more about that below) if one does not already exist, or uses an existing one, in order to deploy/update a development version of your application without you having to know anything about how it does that.
Helm for Kubernetes Package Management
As mentioned previously, Helm is a package management system for Kubernetes-based applications. It provides the following features:
A templating approach for defining Kubernetes manifests (called "charts")
Package management, including a basic package repository service to host released packages.
Application lifecycle management including deploy, update, and purging of Helm applications
Package dependencies
Helm takes a templated YAML approach to parameterizing Kubernetes manifests and allows values to be shared and overridden between dependent packages. I.e., suppose Package A depends on Package B; Package A can re-use configuration values set on Package B, and it can override those parameters with values of its own. Values for all packages in a given deployment can also be overridden using the Helm command line tool.
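A small sketch of that sharing and overriding (chart and key names are invented): in Package A's values.yaml, anything under the dependency's name is passed down to Package B and overrides its defaults, and the CLI can override either at install time:

    # package-a/values.yaml
    replicaCount: 2          # Package A's own value
    package-b:               # values passed down to the dependency "package-b"
      replicaCount: 3        # overrides package-b's default

    # Overriding from the command line for a given deployment:
    #   helm install my-release ./package-a --set package-b.replicaCount=5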
Also worth mentioning is the fact that Helm depends on the availability of its cluster-side component named "Tiller" to actually do the work of reifying templates and deploying the generated Kubernetes manifests to the cluster.
Draft for Kubernetes Application Development
The aim of Draft is to dramatically simplify development of Kubernetes applications by quickly building and deploying the Helm charts/packages and corresponding Docker images necessary to run a project -- provided that the following exist:
A Kubernetes cluster
Helm's Tiller pod installed in the Kubernetes cluster
A Docker registry
The draft installation guide provides details for getting these pieces set up to try it out yourself.
Draft also builds on Helm by providing a high-level "packaging" format that includes both the application helm chart and the Dockerfile, the latter giving it the ability to build docker images.
Finally, it has built-in support for specific programming languages and will to a limited extent attempt to infer which programming language and framework(s) you are using when initially creating a new Draft project using draft create.
Ksonnet for Kubernetes Package Management
As mentioned previously, Ksonnet is orthogonal in many ways to Helm, providing essentially the same features with respect to package management, wrapped in different terminology -- see its core concepts documentation. It's worth noting that it is not compatible with Draft, nor does it address the same concerns.
I say that Ksonnet and Helm are orthogonal because they take mutually incompatible approaches to generating and deploying Kubernetes manifests. Whereas Helm uses templated YAML, Ksonnet generates Kubernetes manifests using a "data templating" language called Jsonnet. Also, rather than conceiving of "dependent" packages as Helm does, Ksonnet blurs the line between dependent services by representing them as composable "prototypes". Finally, rather than depending on a cluster-side application that reifies and deploys manifest templates, Ksonnet has an apply subcommand analogous to kubectl apply.
CI/CD
So where do these pieces fit into a CI/CD workflow? Well since there are essentially two mutually incompatible toolsets, let's consider them on a case-by-case basis:
Draft + Helm
According to the Draft design Q&A section, it is meant only as a developer tool intended to abstract much of the complexity of dealing with kubernetes, helm, and docker from developers primarily interested in seeing their application run in a development cluster.
With this in mind, any CD approach involving this set of tools would have to do the following:
Build docker image(s) using the docker CLI if necessary
Build Helm package(s) using the helm CLI
Deploy Helm package(s) to Helm repository using the helm CLI
Install/update Helm package(s) on the appropriate staging/prod Kubernetes cluster(s) using the helm CLI
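As a hedged sketch only (the registry, chart, and release names are made up, and it assumes Helm 3 with OCI registry support rather than the Helm 2 era tooling this answer was written against), those four steps in a generic GitHub Actions job might look like:

    on:
      push:
        tags: ["v*"]
    jobs:
      release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build and push image            # step 1
            run: |
              docker build -t registry.example.com/myapp:${GITHUB_SHA} .
              docker push registry.example.com/myapp:${GITHUB_SHA}
          - name: Package chart                   # step 2
            run: helm package charts/myapp --app-version ${GITHUB_SHA}
          - name: Push chart to repository        # step 3
            run: helm push myapp-*.tgz oci://registry.example.com/charts
          - name: Deploy to cluster               # step 4
            run: |
              helm upgrade --install myapp oci://registry.example.com/charts/myapp \
                --namespace myapp --create-namespace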
Ksonnet
The Ksonnet CD workflow is somewhat abbreviated compared to the helm workflow above:
Build docker image(s) using the docker CLI if necessary
Apply the Ksonnet manifest using the ks CLI
Whereas with Helm you would deploy your application's package to a Helm registry for re-use, if your Ksonnet manifest contains re-usable prototypes that might be of use to another Ksonnet-based application, you would want to ensure it is available in a Git repo as described in the Ksonnet registry documentation.
This means that how Ksonnet definitions are dealt with in CI/CD is largely dependent on which git repo(s) you decide to store them in and how they are structured.

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just a helm-client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
    .git/
    k8s/
        postgres/
            Deployment.yaml
            Service.yaml
            Secret.mustache.yaml  # Needs to be rendered by the dev before use
        express/
            Deployment.yaml
            Service.yaml
        nginx/
            Deployment.yaml
            Service.yaml
    updates/
        0.1__0.2/
            Job.yaml     # postgres schema migration
            update.sh    # k8s API server scripts to patch/replace existing k8s objects, and run the state change job
The usual Git workflow can apply now. Every time I make a change, I update the spec files, test them, write the update scripts to help move from the last version to the current version, then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping YAML files in Git together with the deployed application for now, I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (with references to container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Helm v2 needs Tiller and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a reasonable way (performing releases from a Git repo). However, if you want to push it further and make it more standard, you can aim for a few more goals:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (like the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize. They're purely client-side.
Avoid a custom release process by replacing update.sh with standard tools such as kubectl apply or helm install.
Drive change delivery from Git tags/branches by using a CI/CD engine such as Argo CD, Travis CI, or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging environments before delivering them to production (a per-environment layout is sketched below).
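For that last point, a rough sketch of a common per-environment layout (all names are illustrative): a shared base plus one Kustomize overlay per environment, where each overlay only patches what differs:

    # Repo layout (illustrative):
    #   base/                 kustomization.yaml, deployment.yaml, service.yaml
    #   overlays/test/        kustomization.yaml
    #   overlays/staging/     kustomization.yaml
    #   overlays/production/  kustomization.yaml
    #
    # overlays/production/kustomization.yaml
    resources:
      - ../../base
    images:
      - name: my-app
        newTag: "1.2.3"        # production image tag
    patches:
      - target:
          kind: Deployment
          name: my-app
        patch: |-
          - op: replace
            path: /spec/replicas
            value: 4

Each environment is then applied (or pointed at by a CI/CD engine or Argo CD) independently, e.g. kubectl apply -k overlays/production.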