Deploy both frontend and backend using Helm charts - Kubernetes

I have a monorepo Node.js/React app that I want to deploy to GKE using Helm charts. I added two Dockerfiles, one for the frontend and the other for the backend.
I'm using Helm charts to deploy my microservices to the Kubernetes cluster, but this time I don't know how to configure them so that I can deploy both the backend and the frontend simultaneously to GKE.
Should I configure a values.yaml file for each service and keep the other templates as they are (ingress, service, deployment, hpa) or should I work on each service independently?

Posting this as an answer for better visibility since it's a good solution:
David suggested that you can probably put both parts into the same Helm chart, probably with different templates/*.yaml files for the front- and back-end parts.
If you have a good argument that the two parts are separate (maybe different development teams work on them and you have a good public API contract), it's fine to deploy them separately.
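As an illustration, a single chart could be laid out roughly like this (the chart name, file names and values below are hypothetical placeholders, not a prescribed structure):

mychart/
  Chart.yaml
  values.yaml
  templates/
    frontend-deployment.yaml
    frontend-service.yaml
    backend-deployment.yaml
    backend-service.yaml
    ingress.yaml

# values.yaml (illustrative) - one block per component
frontend:
  image: gcr.io/my-project/frontend:1.0.0
  replicas: 2
backend:
  image: gcr.io/my-project/backend:1.0.0
  replicas: 2

Each template then reads only its own block, e.g. {{ .Values.frontend.image }} in frontend-deployment.yaml and {{ .Values.backend.image }} in backend-deployment.yaml, so a single helm upgrade --install rolls out both parts together.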

Related

Will helmfile sync redeploy all existing Helm charts?

I have a few services running on a Kubernetes cluster, and I use a Helm chart where I placed all my services. However, I was asked to migrate the Helm charts to Helmfile.
If I use
helmfile import myrepo/mychart
helmfile sync
Will it redeploy and replace the existing running pods, or will it just deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
Keep a directory of chart value files and maintain changes in version control.
Apply CI/CD to configuration changes.
Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
Like @DavidMaze suggested, use the helmfile diff command first to determine the changes and then use the helmfile sync command to apply them.
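For reference, a minimal helmfile.yaml could look something like this (the repository URL, release names and chart paths are placeholders):

# helmfile.yaml (illustrative)
repositories:
  - name: myrepo
    url: https://charts.example.com
releases:
  - name: service-a
    namespace: default
    chart: myrepo/service-a
    values:
      - values/service-a.yaml
  - name: service-b
    namespace: default
    chart: myrepo/service-b
    values:
      - values/service-b.yaml

helmfile diff   # preview what would change (needs the helm-diff plugin)
helmfile sync   # upgrade every release; pods are only replaced where the rendered manifests actually changed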

How to Automatically Update Istio Resources in Cluster?

I have a kubernetes cluster, with two nodes running.
I use Argo CD to handle pulling in any changes to my microservice (one microservice currently, but I will be adding to that).
My application is built as a Helm chart. So when my repo changes, I update my Helm chart, and then Argo CD sees that the Helm chart has changed and applies those changes to the cluster.
I'm looking to add Istio as the service mesh for my cluster. With Istio there will be quite a few YAML configuration files.
My question is, how can I have my cluster auto update my istio configurations like how argocd updates when my helm chart changes?
Of course, I could put the Istio configuration files in the Helm chart, but my thoughts on that were:
1. Do I want my Istio configurations tied to my application?
2. Even if I did do #1, which I am not opposed to, there are many Istio configurations that apply cluster-wide, not just to my one microservice, and those definitely wouldn't make sense to tie into my specific one-microservice Argo CD application. So how would I handle auto-updating cluster-wide Istio files?
Another option could be to use the argocd app of apps pattern, but from what I've read that doesn't seem to have the greatest support yet.
In my opinion, you should package Istio components like VirtualService, RequestAuthentication, etc. with the application if they "belong" to the application. You could even add Gateways and Certificates to the app if it fits your development model (i.e., no separate team manages these concerns). Using a tool like Crossplane, you could even include database provisioning or other infrastructure in your app. That way, the app is self-contained and its configuration is not spread across multiple places.
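For example, a VirtualService can live in the application's own chart as just another template (the host, gateway and port below are placeholders):

# templates/virtualservice.yaml (illustrative)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {{ .Release.Name }}
spec:
  hosts:
    - myapp.example.com
  gateways:
    - istio-system/public-gateway
  http:
    - route:
        - destination:
            host: {{ .Release.Name }}
            port:
              number: 80

Because it is rendered with the rest of the chart, Argo CD picks up changes to it exactly like any other application manifest.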
You could create an "infrastructure" chart for the cluster-wide pieces. This is something that could live in its own Argo app or even be deployed before your apps (maybe in the same phase at which Argo CD itself is deployed).
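A sketch of such an "infrastructure" Argo CD Application, assuming the chart lives in a Git repository of your own (the repo URL, path and namespaces are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: infrastructure
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infrastructure-chart.git
    targetRevision: main
    path: chart
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true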
It depends on how you choose to install Istio. If you are installing it using Helm, then I believe you can do something similar; otherwise you'll have to create some automation scripts that run istioctl every time you make changes to your configs (a minimal sketch of such a script follows the two points below).
1. "Do i want my istio configurations tied to my application?"
What do you mean by this? There is a Data Plane and a Control Plane. You have multiple ways to attach a sidecar-proxy to your app and also deploy any other CRDs like VirtualService, DestinationRule, PeerAuthentication Policy etc.
2. "Even if I did do #1, which I am not opposed to, there are many istio configurations that will apply cluster-wide, not just to my one microservice, and those definitely wouldn't make sense to tie into my specific one microservice, argo-cd application. So how would I handle auto updating cluster-wide istio files?"
Again, what do you mean by this? Whenever you update Istio Control Plane, the Data Plane proxies will sync automatically and will reload the new config using Envoy Hot-Restarts. It's another story if you bump up the version in which case you'll have to restart your application pods to pick up the new changes.
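A minimal version of such an automation step could be as simple as the following (the file and directory names are assumptions):

istioctl install -f istio/operator.yaml -y    # reconcile the control-plane configuration
kubectl apply -f istio/mesh-config/           # apply mesh-wide resources (Gateways, PeerAuthentication, etc.)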
Did you look at using the Istio operator to deploy your service mesh?
I already do this today with Argo CD and the "app of apps" pattern. The Istio operator is one application, and I created another one for the custom resource (kind: IstioOperator) that deploys Istio's control plane (istiod and gateways).
If your service mesh configuration changes, it should happen through that custom resource.
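With that pattern, the second Argo app tracks a custom resource along these lines (the profile and gateway settings are illustrative):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
  namespace: istio-system
spec:
  profile: default
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true

Any change committed to this resource is picked up by Argo CD and reconciled by the operator, which keeps the control plane in sync with Git.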

Integration of Kubernetes Helm templates for a project deployment

Currently I am working on a project based on a microservice architecture. The project consists of 20 Spring Boot microservice projects, and in every root folder I placed a Dockerfile for image building. I am using a Kubernetes cluster for deployment through Helm charts.
My confusion is that when I created a Helm chart, it gave me a service.yaml and a deployment.yaml inside the templates directory.
If I am deploying these 20 microservices, do I need to create 20 separate Helm charts, or can I create the services for all 20 within one chart?
I am new to Kubernetes and Helm, so I am confused about the standard way of using YAML files with a chart. Do I need to create 20 separate charts, or can I include everything in one chart?
How can I follow the standard way of chart creation for my microservice projects?
What I ended up doing (working with a similar stack) is creating one microservice chart, which is stored in an internal chart repository. Inside the Helm chart, I exposed enough configuration options so teams have the flexibility to control their own deployments, but I made sure to set sensible defaults (e.g. the Deployment uses a RollingUpdate strategy and readiness probes are configured with sensible defaults).
These configuration options are passed via a values.yaml file. Teams deploy their microservice via a CI/CD pipeline, passing their values.yaml file to the helm command (with the -f flag).
I would certainly recommend you read the Helm template developer guide before making the decision. It really depends on how similar your microservices are, but I recommend going for one Helm chart if you have a homogeneous environment (which was also the case for me).
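In practice each pipeline run then boils down to a command along these lines (the repository, release and namespace names are placeholders):

helm upgrade --install my-service internal-repo/microservice \
  -f my-service/values.yaml \
  --namespace my-team --create-namespace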

Create custom helm charts

I'm using Helm charts to deploy microservices. Executing helm create creates a basic chart with a deployment, service and ingress, but I have a few other configurations such as a horizontal pod autoscaler and a pod disruption budget.
What I currently do is copy the YAML and change it accordingly, but this takes a lot of time and I don't see it as the correct way / best practice.
helm create <chartname>
I want to know how you can create Helm charts and include your extra configurations as well.
Bitnami's guide to creating your first helm chart describes helm create as "the best way to get started" and says that "if you already have definitions for your application, all you need to do is replace the generated YAML files for your own". The approach is also suggested in the official helm docs and the chart developer guide. So you are acting on best advice.
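In practice that means running helm create once and then dropping your extra templates next to the generated ones. For instance, a minimal HPA template (the value names under .Values.autoscaling are an assumption, mirroring the naming style helm create uses) could look like:

# templates/hpa.yaml (illustrative)
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}

A pod disruption budget template can be added the same way, gated behind its own values flag.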
It would be cool if there were a wizard you could use to take existing kubernetes yaml files and make a helm chart from them. One tool like this that is currently available is chartify. It is listed on helm's related projects page (and I couldn't see any others that would be relevant).
You can try using Move2Kube. You will have to put all your yamls (if the source is kubernetes yamls) or other source artifacts in a directory (say src) and do move2kube translate -s src/.
In the wizard that comes up, you can choose helm instead of yamls and it will create a helm chart for you.

How to manage more than 200 microservices with Helm?

I would like to know how you manage your services with Helm.
I already know that we are going to have more than 200 microservices. How can I manage them easily?
Should each microservice have its own YAML files (deployment, service, ingress, values, etc.),
or should there be one set of large YAML files (deployment, ingress, etc.) for all microservices, to which I push a values YAML file with the specific params for each application?
I'd suggest aiming for an umbrella chart that includes lots of subcharts for the individual services. You can deploy each chart individually but using a single umbrella makes it easier to deploy the whole setup consistently to different environments.
Perhaps some microservices will be similar enough that for them you could use the same chart with different parameters (maybe including the Docker image parameter), but you'll have to work through them to see whether you can do that. You can include the same chart as a dependency multiple times within an umbrella chart to represent different services.
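An umbrella chart can declare the same service chart several times under different aliases, roughly like this (all names and the repository URL are placeholders):

# Chart.yaml of the umbrella chart (illustrative)
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: microservice
    version: 1.x.x
    repository: https://charts.example.com
    alias: orders
  - name: microservice
    version: 1.x.x
    repository: https://charts.example.com
    alias: payments

The values for each instance then go under the alias key (orders:, payments:) in the umbrella chart's values.yaml.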
Ideally you also want a chart for a service to be individually-deployable so you can deploy and check that service in isolation. To do this you would give each chart its own resources including its own Ingress. But you might decide that for the umbrella chart you prefer to disable the Ingresses in the subcharts and put in a single fan-out Ingress for everything - that comes down to what works best for you.