Helm chart deployment with Spinnaker

I am using Spinnaker to deploy Helm charts, using the Bake (Manifest) stage to create the artifact and the Deploy (Manifest) stage to deploy the chart.
I couldn't find any option in these Spinnaker stages for the release name used by helm install. I even spun up a Helm pod in the k8s cluster and tried to list the releases: even after a successful chart deployment through Spinnaker, no release name showed up.
How can I control the Helm release name using the above Spinnaker stages?

Spinnaker doesn't install Helm charts using the standard helm install/upgrade commands.
It takes the Helm chart as input; the Bake (Manifest) stage renders the chart into a single manifest file, and the Deploy (Manifest) stage then applies that manifest directly to Kubernetes.
So, to answer your question: you can't control the standard Helm release name or chart version, because the k8s cluster has no context of the Helm chart.
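A rough sketch of what this amounts to, assuming Helm 3 syntax (the chart path, release name, and values file below are placeholders):

```shell
# Bake (Manifest): render the chart to plain manifests, no release recorded
helm template my-release ./mychart -f values.yaml > baked.yaml

# Deploy (Manifest): apply the rendered manifests directly
kubectl apply -f baked.yaml

# Because no Helm release metadata is ever written to the cluster,
# this deployment will not appear in the output of:
helm list
```

This is why listing releases from a Helm pod shows nothing even after a successful Spinnaker deployment.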

Related

How can I migrate an ingress from a Helm chart to Terraform without deleting the resource during deployment

I have a custom application Helm chart with an ingress object which is deployed in production.
Now I need to migrate the ingress object's source code from the Helm chart to Terraform, to hand control of the object over to another team.
Technically, accepting a downtime is no problem.
But I want to keep the ingress object from being undeployed by the Helm chart during deployment, as there is a Let's Encrypt certificate attached to it.
So is there a way to tell Helm to keep the ingress object when I remove the ingress from the Helm chart's source during helm upgrade?
I found the answer myself in the Helm annotations: https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource
That means you deploy the ingress again via the Helm chart with the annotation "helm.sh/resource-policy": keep.
Then you remove the ingress from the Helm chart and redeploy.
Now the ingress is still deployed in Kubernetes, but no longer under the control of the Helm release.
The next step is to model the ingress in Terraform and import the resource via terraform import.
The last step is to check with terraform plan that the imported resource corresponds completely to the ingress as coded in Terraform.
That's it.
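The command side of those steps can be sketched roughly like this (all resource names, the release name, and the Terraform resource address are placeholders; the annotation would normally be added in the chart template before the first redeploy):

```shell
# 1. Make sure the live ingress carries the keep policy (either via the
#    chart template or directly on the object):
kubectl annotate ingress my-app-ingress "helm.sh/resource-policy"=keep

# 2. Remove the ingress template from the chart source, then upgrade;
#    Helm now leaves the annotated ingress in place instead of deleting it:
helm upgrade my-release ./my-chart

# 3. Import the now-orphaned object into Terraform state
#    (resource type from the hashicorp/kubernetes provider):
terraform import kubernetes_ingress_v1.my_app default/my-app-ingress

# 4. Verify the Terraform code matches the live resource;
#    a clean migration should report no changes:
terraform plan
```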
Alternatively, you can keep the Helm chart as it is and add the details into Terraform; I think it will work.
Terraform will run the plan and apply the Helm release, and if there are no changes, no update will be applied to resources like the ingress, deployments, etc.
With terraform, you can use the Helm provider: https://registry.terraform.io/providers/hashicorp/helm/latest/docs

Is atomic deployment possible with Helm subcharts in Kubernetes

Suppose I have a deployment defined in a Helm chart with subcharts.
Does "helm install --atomic ..." roll back the deployment of all charts and subcharts in case any chart or subchart deployment fails?
In other words, is the whole deployment, including subcharts, atomic?
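For reference, the flag in question looks like this (release and chart names are placeholders); since subcharts are rendered into the same release as the parent chart, they should be covered by the same rollback:

```shell
# --atomic deletes the install (or rolls back an upgrade) if any resource
# fails to become ready within --timeout. Subcharts are part of the same
# release, so the release is rolled back as a whole.
helm install my-release ./parent-chart --atomic --timeout 5m
```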

Run istioctl in Spinnaker pipeline

Currently we deploy the custom Istio ingress gateways (g/w) through Helm using a Spinnaker pipeline (a one-time activity for every k8s namespace).
Istio 1.6 is deprecating the Helm way of creating custom user gateways; instead it asks you to deploy them using the istioctl command.
But Spinnaker supports only Helm 2 or Helm 3 as a rendering engine.
My specific ask is: how can I now deploy the custom Istio user gateway through the Spinnaker pipeline using the istioctl command?
Since I didn't get much response, let me answer it myself.
Here's what I did:
I took a Bitnami kubectl Docker base image.
Bundled one of the Istio releases, say 1.5.8: https://github.com/istio/istio/releases/download/1.5.8/istio-1.5.8-linux.tar.gz
Got the default manifest using istioctl manifest generate.
Modified it accordingly to define a custom ingress gateway.
Ran the following command in the entrypoint.sh of the Docker image:
istioctl manifest generate -f manifest.yaml | kubectl apply -f -
Created a Docker image including all the steps.
In the Spinnaker pipeline, created a stage which deploys based on a k8s file.
In that file, defined a Job that runs the Docker image created.
This way, once the Job starts running, it creates a k8s pod which internally creates the custom user Istio ingress gateway.
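A minimal sketch of what such an entrypoint.sh could look like, assuming the Istio 1.5.8 release archive was unpacked under /istio and the customized manifest.yaml was copied into the image (both paths are assumptions):

```shell
#!/bin/sh
set -e

# istioctl comes from the bundled Istio release inside the image
export PATH="/istio/istio-1.5.8/bin:$PATH"

# Render the customized operator manifest and apply it using the
# cluster credentials provided by the Job's service account.
istioctl manifest generate -f /manifest.yaml | kubectl apply -f -
```

Running this as a Kubernetes Job keeps Spinnaker's side to a plain Deploy (Manifest) stage while istioctl does the actual gateway creation inside the cluster.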

Role of Helm install command vs kubectl command in Kubernetes cluster deployment

I have a Kubernetes cluster with 1 master node and 2 worker nodes, and another machine where I installed Helm. I am trying to create Kubernetes resources using a Helm chart and deploy them to the remote Kubernetes cluster.
When reading about the helm install command, I found that we need both the helm and kubectl commands for deploying.
My confusion here is: when we use helm install, the created chart is deployed to Kubernetes, and we can also push it to a chart repo. So for deploying we use Helm. But why do we use the kubectl command with Helm?
Helm 3: no Tiller. helm install deploys things to the cluster directly, using the same kubeconfig that kubectl uses. So to use Helm, you also need a configured kubectl.
Helm 2:
Helm/Tiller are client/server; helm needs to connect to Tiller to initiate the deployment. Because Tiller is not publicly exposed, helm uses kubectl underneath to open a tunnel to Tiller. See here: https://github.com/helm/helm/issues/3745#issuecomment-376405184
So to use Helm, you also need a configured kubectl. More details: https://helm.sh/docs/using_helm/
Chart repo: this is a different concept (the same for Helm 2 and Helm 3), and it's not mandatory to use one. Repos are like artifact storage; for example, in the quay.io application registry you can audit who pushed and who used a chart. More details: https://github.com/helm/helm/blob/master/docs/chart_repository.md. You can always bypass a repo and install from source like: helm install /path/to/chart/src
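Both install paths side by side, in Helm 3 syntax (repo, chart, and release names are placeholders; under Helm 2 the release name was given with --name instead of as the first argument):

```shell
# Install from a chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/nginx

# Or bypass any repo and install straight from chart source on disk:
helm install my-release /path/to/chart/src
```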

Deploying images from GitLab to a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it builds the code from GitLab into the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the build deployed to the dev or production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, Helm charts or Kubernetes YAML files install into your current namespace unless specified otherwise.
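Changing the current namespace is a matter of updating the kubeconfig context (the namespace name is a placeholder):

```shell
# Point the current context at the dev namespace; subsequent
# kubectl and helm commands will default to it.
kubectl config set-context --current --namespace=dev
```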