How to Automatically Update Istio Resources in Cluster? - kubernetes

I have a kubernetes cluster, with two nodes running.
I have Argo CD handling any changes to my microservice (one microservice currently, but I will be adding more).
My application is built as a Helm chart. So when my repo changes, I update my Helm chart, and then Argo CD sees that the Helm chart has changed and applies those changes to the cluster.
I'm looking to add Istio as the service mesh for my cluster. Using Istio brings quite a few YAML configuration files with it.
My question is: how can I have my cluster auto-update my Istio configuration the same way Argo CD updates when my Helm chart changes?
Of course, I could put the Istio configuration files in the Helm chart, but my thoughts on that were:
1. Do I want my Istio configurations tied to my application?
2. Even if I did do #1, which I am not opposed to, there are many Istio configurations that apply cluster-wide, not just to my one microservice, and those definitely wouldn't make sense to tie into my specific single-microservice Argo CD application. So how would I handle auto-updating cluster-wide Istio files?
Another option could be to use the Argo CD app-of-apps pattern, but from what I've read that doesn't seem to have the greatest support yet.

In my opinion, you should package Istio components like VirtualService, RequestAuthentication, etc. with the application if they "belong" to the application. You could even add Gateways and Certificates to the app if that fits your development model (i.e., there is no separate team which manages these concerns). Using a tool like Crossplane, you could even include database provisioning or other infrastructure in your app. That way, the app is self-contained and its configuration is not spread across multiple places.
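As a minimal sketch of the app-scoped part, a VirtualService template could sit next to the app's other chart templates so Argo CD rolls it out together with the app (the "myapp" helper names, the istio.* value keys, and the port are hypothetical placeholders):

    # templates/virtualservice.yaml -- ships inside the app chart, so it is versioned and synced with the app
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: {{ include "myapp.fullname" . }}
    spec:
      hosts:
        - {{ .Values.istio.host }}          # e.g. myapp.example.com
      gateways:
        - {{ .Values.istio.gateway }}       # e.g. istio-system/public-gateway
      http:
        - route:
            - destination:
                host: {{ include "myapp.fullname" . }}
                port:
                  number: {{ .Values.service.port }}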
For the cluster-wide pieces, you could create an "infrastructure" chart. This could live in its own Argo CD app, or even be deployed before your apps (perhaps in the same phase in which Argo CD itself is deployed).

It depends on how you choose to install Istio. If you are installing it using Helm, then I believe you can do something similar; otherwise you'll have to create some automation scripts that run istioctl every time you make changes to your configs.
1. "Do I want my Istio configurations tied to my application?"
What do you mean by this? There is a data plane and a control plane. You have multiple ways to attach a sidecar proxy to your app and also to deploy any other custom resources like VirtualService, DestinationRule, PeerAuthentication, etc.
2. "Even if I did do #1, which I am not opposed to, there are many Istio configurations that apply cluster-wide, not just to my one microservice, and those definitely wouldn't make sense to tie into my specific single-microservice Argo CD application. So how would I handle auto-updating cluster-wide Istio files?"
Again, what do you mean by this? Whenever you update the Istio control plane configuration, the data plane proxies sync automatically and reload the new config (it is pushed to the Envoy sidecars dynamically). It's another story if you bump the Istio version, in which case you'll have to restart your application pods so the sidecars pick up the new proxy.
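To make the Helm install option above concrete, one hedged sketch is a dedicated Argo CD Application pointed at the istiod chart, so changes committed to its values roll out the same way as your app does (the chart version and the value override below are only illustrative):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istiod
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://istio-release.storage.googleapis.com/charts   # Istio's Helm chart repository
        chart: istiod
        targetRevision: 1.17.2            # illustrative version
        helm:
          values: |
            pilot:
              autoscaleEnabled: true      # example value override
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true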

Did you look at using the Istio Operator to deploy your service mesh?
I already do this today with Argo CD and the "app of apps" pattern. The Istio Operator is one application, and I created another one for the custom resource (kind: IstioOperator) that deploys Istio's control plane (istiod and gateways).
If your service mesh configuration changes, it should happen through that custom resource.
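As a rough sketch (the profile and settings shown are just examples, not necessarily what that setup uses), the custom resource in the second application could look like:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: istio-controlplane
      namespace: istio-system
    spec:
      profile: default                 # installs istiod plus an ingress gateway
      components:
        ingressGateways:
          - name: istio-ingressgateway
            enabled: true
      meshConfig:
        accessLogFile: /dev/stdout     # example mesh-wide setting; edits here roll out via Argo CD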

Related

Will helmfile sync redeploy all existing Helm charts?

I have a few services running on a Kubernetes cluster, and I use a single Helm chart in which I placed all my services. However, I was asked to migrate the Helm charts to Helmfile.
If I use
    helmfile import myrepo/mychart
    helmfile sync
Will it redeploy and replace the existing running pods, or will it just deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
Keep a directory of chart value files and maintain changes in version control.
Apply CI/CD to configuration changes.
Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
As @DavidMaze suggested, run helmfile diff first to determine the changes, and then run helmfile sync to apply them.
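As a minimal sketch of that declarative spec (the repository URL, chart names, and values files are placeholders), a helmfile.yaml might look like:

    repositories:
      - name: myrepo
        url: https://charts.example.com          # placeholder chart repository

    releases:
      - name: myservice                          # already-deployed release
        namespace: default
        chart: myrepo/mychart
        values:
          - values/myservice.yaml                # value files tracked in version control
      - name: extra-service                      # newly added release
        namespace: default
        chart: myrepo/otherchart

helmfile diff renders every release and compares it against what is currently deployed, so you can see up front whether a sync would change the existing releases or only install the new one.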

Kubernetes - Handle cronjobs like crontab

I have a lot of cron jobs I need to set up on Kubernetes.
I want a single file to manage them all and apply them to Kubernetes on deployment. I'd also like that, if I remove a cron from that file, it gets removed from Kubernetes too.
Basically, I want to handle the crons like I handle them today on the machine (from a cron file that I would deploy): add, remove, and change crons.
I couldn't find a way of doing so. Does someone have an idea?
A library or framework I can use, like Helm? Or any other solution?
I highly recommend GitOps with Argo CD as a solution for Kubernetes configuration management. Running a crontab inside a Deployment is a bad idea because it is hard to monitor the job results (CronJob results can be collected via the kube-state-metrics exporter).
The idea is to package your manifests (plain Kubernetes manifests, Kustomize, Helm, etc.), put them in Git, and let Argo CD make sure your configuration is deployed correctly.
The advantages of GitOps include:
centralized configuration
versioned configuration
Git authentication & authorization
traceability
multi-cluster deployment with Argo CD
automated deployment & sync
...
GitOps is not difficult and is the modern way to manage Kubernetes configuration. Give it a try.
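As a sketch of that flow (the repo URL and path are placeholders), an Argo CD Application can watch a Git directory of CronJob manifests and prune anything you delete from it:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: cronjobs
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-config   # placeholder Git repository
        targetRevision: main
        path: cronjobs                  # directory holding one CronJob manifest per job
      destination:
        server: https://kubernetes.default.svc
        namespace: cron
      syncPolicy:
        automated:
          prune: true                   # a cron removed from Git is removed from the cluster
          selfHeal: true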
I used Helm to do this. I built a template that iterates over all the crons, which I supplied as values to the Helm chart (very similar to a crontab, but more structured); see the example.
Then, all I need to do is run a helm upgrade with a new cron (values) file and it updates everything accordingly. If I update, remove, or add a cron, everything happens automatically and with versioning. You can also put your CronJobs in their own namespace to keep them more encapsulated.
Here is a very good and easy-to-understand example I used, and its Git repo.
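A minimal sketch of that pattern (the value keys and job names below are made up for illustration): a crons list in values.yaml plus a range loop in a chart template.

    # values.yaml -- one entry per cron, similar to a crontab but structured
    crons:
      - name: cleanup
        schedule: "0 3 * * *"
        image: busybox:1.36
        command: ["sh", "-c", "echo cleaning up"]
      - name: report
        schedule: "*/30 * * * *"
        image: busybox:1.36
        command: ["sh", "-c", "echo sending report"]

    # templates/cronjobs.yaml -- renders one CronJob per entry; removing an entry
    # and running `helm upgrade` deletes the corresponding CronJob from the cluster
    {{- range .Values.crons }}
    ---
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: {{ .name }}
    spec:
      schedule: {{ .schedule | quote }}
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: {{ .name }}
                  image: {{ .image }}
                  command: {{ toJson .command }}
    {{- end }}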

Deploy both front and backend using helm charts

I have a monorepo Node.js/React app that I want to deploy to GKE using Helm charts. I added two Dockerfiles, one for the frontend and the other for the backend.
I'm using Helm charts to deploy my microservices to the Kubernetes cluster, but this time I don't know how to configure them so that I can deploy both the backend and the frontend simultaneously to GKE.
Should I configure a values.yaml file for each service and keep the other templates as they are (ingress, service, deployment, hpa), or should I work on each service independently?
Posting this as an answer for better visibility since it's a good solution:
David suggested that you can
probably put both parts into the same Helm chart, probably with different templates/*.yaml files for the front- and back-end parts.
If you had a good argument that the two parts are separate (maybe different development teams work on them and you have a good public API contract), it's fine to deploy them separately.
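One sketch of the single-chart approach is a values.yaml with a section per component, read by separate frontend and backend templates (all names, images, and ports here are placeholders):

    # values.yaml for one chart that deploys both halves of the monorepo
    frontend:
      image: gcr.io/my-project/frontend:1.0.0    # placeholder image
      replicaCount: 2
      service:
        port: 80
      ingress:
        host: app.example.com
    backend:
      image: gcr.io/my-project/backend:1.0.0     # placeholder image
      replicaCount: 2
      service:
        port: 3000

The deployment, service, hpa, and ingress templates are then duplicated (or parameterized) per component, e.g. a frontend deployment template reading .Values.frontend and a backend one reading .Values.backend.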

What is the right way to manage changes in kubernetes manifests?

I've been using terraform for a while and I really like it. I also set up Atlantis so that my team could have a "GitOps" flow. This is my current process:
Add or remove resources from Terraform files
Push changes to GitHub and create a pull request
Atlantis picks up changes and creates a terraform plan
When the PR is approved, Atlantis applies the changes
I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:
Add the resource to an existing file or create a new file
Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file
If I'm using a helm chart, add a line with helm template and then kubectl apply (I didn't really like using tiller, and helm3 is getting rid of it anyway)
If I want to delete a resource, I do it manually with kubectl delete
This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:
There's no real dry run. Using kubectl --dry-run or kubectl diff doesn't really work; it's only a client-side diff. Server-side diff functionality is currently in alpha.
There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.
No clear way to achieve gitops. I've looked at Weaveworks Flux but that seems to be geared more towards deploying applications.
The makefile is getting more and more complicated. It doesn't feel like this is scaleable.
I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.
Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?
This is more of an opinion question so I'll answer with an opinion. If you like to manage configuration you can try some of these tools:
If you want to use existing YAML files (configurations) and use something at a higher level you can try kustomize.
If you want to manage Kubernetes configurations using Jsonnet, you should take a look at ksonnet. Keep in mind that ksonnet will not be supported in the future.
If you just want to do a helm upgrade in an automated way, there is no tool for that yet. You will have to build something at this point to orchestrate everything. For example, we ended up creating an in-house tool that does this.
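For the kustomize option, a minimal sketch of reusing the YAML files you already have, with an overlay per environment (file and directory names are illustrative):

    # base/kustomization.yaml -- points at the plain manifests you already maintain
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - deployment.yaml
      - service.yaml
      - ingress.yaml

    # overlays/prod/kustomization.yaml -- environment-specific tweaks layered on the base
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: prod
    resources:
      - ../../base
    patchesStrategicMerge:
      - replica-count.yaml               # e.g. a patch that bumps replicas for production

kubectl apply -k overlays/prod (or kustomize build piped into kubectl apply) then applies the rendered result, and the same overlays work well as the source for a GitOps tool like Flux or Argo CD.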

Kubernetes Deployments

While working on creating a platform that will do microservice deployments using Kubernetes, we want to take a dependency on the Kubernetes Deployment object. However, we saw that the documentation http://kubernetes.io/v1.1/docs/user-guide/deployments.html says the following: "Note that Deployment objects effectively have API version v1alpha1. Alpha objects may change or even be discontinued in future software releases"
I am wondering if we should go ahead and use the Deployment concept for our deployments (essentially rolling updates), or, since it could be discontinued or changed, whether we should just reimplement the same concepts ourselves: create an RC with new labels, create new pods with labels different from both the old and the new RC, scale down the old RC by slowly removing pods from it, and slowly add new pods to the new RC.
What is the plan or what changes are proposed for Deployment, or is that concept going away in favor of a better one?
Also, I am wondering why OpenShift did not use the Deployment object. Was it not ready at that time?
OpenShift's deployment object preceded the upstream Kube object (it was feature-complete in the March 2015 time frame). Once Kube Deployments support the remaining features of OpenShift deployments, we'll automatically migrate them. Some things OpenShift deployments support that are not upstream yet:
Automatic deployment when Docker registry tags change
Custom deployments (run your own deployment logic in a pod)
Deployment hooks - execute "bundle exec rake db:migrate" before or after deploying your app
Recreate deployment strategy
Ability to pause or "hold" a deployment so it does not automatically run (so admins can choose to deploy).
Ability for deployments to "fail" and be recorded (so that end users know that the code they pushed failed to start).
It will take time to add those remaining options.
As of now, the Deployment concept has been moved to "v1beta1". The concept will most probably be continued, because it is a declarative approach (vs. the imperative approach of the older ReplicationController, etc.).
Can't tell anything about OpenShift but on GKE it works for me pretty well!
Deployment is planned to graduate to beta in the 1.2 release. See the related issue #15313 for the changes to be made. We will also have new kubectl commands for rolling updates which use Deployment; see issue #17168 and the proposal.
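For context, the declarative rolling update described in these answers is expressed as a single Deployment manifest; a minimal sketch (names and image are illustrative, and the apps/v1 API shown here is the current stable version rather than the alpha/beta versions discussed above):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-service
      strategy:
        type: RollingUpdate               # replaces the manual old-RC/new-RC shuffle
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
            - name: my-service
              image: registry.example.com/my-service:1.2.3   # placeholder image
              ports:
                - containerPort: 8080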