How do I diff a Helm template against an existing deployment/release? - kubernetes

It looks like Helm 3 is making this more difficult: https://github.com/databus23/helm-diff/issues/176
But whether I use the helm-diff plugin or just run helm template releaseName chart | kubectl diff -f - | bat -l diff -, I'm seeing ALL resources as new, with a "+" next to them. Why is this?
I'm running these commands:
# upgrade
helm upgrade --install --create-namespace \
--namespace derps -f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart
# make no changes and try to diff
helm template \
--namespace derps -f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart | kubectl diff -f - | bat -l diff -
I get output showing the ENTIRE manifest as new. Why is this?

Did you try:
helm template \
--namespace derps --no-hooks --skip-tests \
-f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart | kubectl diff --namespace derps -f - | bat -l diff -

You probably need -n derps on the kubectl diff too. If memory serves, helm template --namespace doesn't actually inject the namespace into the rendered manifests, so kubectl diff compares them against your current default namespace, finds nothing there, and reports every resource as new.
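If you have the helm-diff plugin installed, a roughly equivalent check is to reuse the upgrade flags with helm diff upgrade (a sketch, same release, chart and values as above):
helm diff upgrade myapp helm/mychart \
--namespace derps -f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf'
The plugin diffs against the manifest stored with the release rather than live objects, so it sidesteps the kubectl namespace issue.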

Related

Grafana Helm Chart on AWS EKS, using --set for multiple service annotations

The grafana helm chart spawns a service on a Classic Load Balancer. I have the AWS load balancer webhook installed, and I'd like to overwrite the annotations on the Grafana service. I'm attempting the following:
helm install grafana grafana/grafana \
--namespace grafana \
--set persistence.storageClassName="gp2" \
--set persistence.enabled=true \
--set adminPassword='abc' \
--values grafana.yaml \
--set service.type=LoadBalancer \
--set nodeSelector.app=prometheus \
--set nodeSelector.k8s-app=metrics-server \
--set service.annotations."service\.beta.kubernetes\.io/aws-load-balancer-nlb-target-type"=ip \
--set service.annotations."service\.beta.kubernetes\.io/aws-load-balancer-type"=external
but, after trying multiple permutations, I continue to get:
Error: INSTALLATION FAILED: YAML parse error on grafana/templates/service.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string
What is the correct way of doing this?
There is an issue in the annotation keys: you are missing the escape character before the dot in beta\.kubernetes.
Try this and it should work:
--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-nlb-target-type"=ip \
--set service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=external
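Alternatively, since you are already passing --values grafana.yaml, you can avoid the escaping altogether by putting the annotations in that values file; a sketch, assuming the standard grafana chart's service values:
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-type: external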

Passing boolean values during helm install

I am trying to deploy FusionAuth using helm install and need to set environment variables whose values are the booleans true or false. The deployment succeeds, but FusionAuth complains that it cannot understand the value "true". How do I set this as FUSIONAUTH_APP_SILENT_MODE: true instead of FUSIONAUTH_APP_SILENT_MODE: "true"?
helm upgrade --install \
fusionauth fusionauth \
--create-namespace \
--namespace fusionauth \
--set environment[0].name=FUSIONAUTH_APP_SILENT_MODE \
--set "environment[0].value="\true"\" \
--repo https://fusionauth.github.io/charts
Any help would be highly appreciated!
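A sketch, assuming the chart passes environment entries through verbatim: drop the extra quoting and escaping so Helm parses the value as a YAML boolean rather than a string (with --set, an unquoted true is a boolean, while --set-string would force the quoted form):
helm upgrade --install \
fusionauth fusionauth \
--create-namespace \
--namespace fusionauth \
--set environment[0].name=FUSIONAUTH_APP_SILENT_MODE \
--set environment[0].value=true \
--repo https://fusionauth.github.io/charts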

Helm3 Upgrade dry run

I am trying to do a helm upgrade dry run.
1.
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 \
--tiller-namespace=$(oc project -q) \
--namespace $(oc project -q) \
--debug \
--dry-run
But I get the error below:
Error: unknown flag: --tiller-namespace helm.go:81: [debug] unknown flag: --tiller-namespace
2.
I think --tiller-namespace was removed in Helm 3, so I tried the below:
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 \
--namespace $(oc project -q) \
--debug \
--dry-run
But now I am getting below error:
Error: unknown shorthand flag: 'q' in -q) helm.go:81: [debug] unknown shorthand flag: 'q' in -q)
Can someone help me with the correct command here?
Without -q, when I try the below:
helm upgrade -i $xyz-abc-ms xyz-abc-exe/target/classes/helm/xyz-abc \
--set jobs.helmServiceAccount=jenkins,csbEnabledLocal=false,jacoco.enabled=true,containerinfo.imageTag=${DOCKER_BUILD_NUMBER},pki.sslenabled=false,pki.kafkaEnabled=true,runtimeContainerInfo.image=fnd-base-images/ocp-os-java-msnext,couchbase.serviceName=oc-cb-02 ) \
--namespace $(oc project) \
--debug \
--dry-run
It fails with below Error:
Error: "helm upgrade" requires 2 arguments
Usage: helm upgrade [RELEASE] [CHART] [flags]
helm.go:81: [debug] "helm upgrade" requires 2 arguments
What's the proper command for this?
Yeah, Tiller is not even used by Helm 3.
This article talks about why it was needed in Helm 2 and why they eventually removed it, but here is a very short summary:
Helm takes your YAML and template files and has to apply the resulting objects to Kubernetes. In Helm 2, Tiller did that job, but to do so it needed close to maximum permissions in the cluster. In Helm 3, Tiller was dropped and Helm relies on the authorization that comes with your Kubernetes credentials.
Now back to your problem. You should drop the --tiller-namespace flag, as you have already done. As for the q flag, helm upgrade doesn't take one at all, so it seems the $(oc project -q) command substitution is the part that's failing.
I was able to do the dry run with this command:
helm upgrade -i xyz-abc xyz-abc-exe/target/classes/helm/xyz-abc --debug --dry-run
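If you still want to take the namespace from oc, quoting the command substitution so the shell hands Helm a single argument may help (a sketch, assuming oc project -q prints just the project name; add your --set flags back as before):
helm upgrade -i xyz-abc xyz-abc-exe/target/classes/helm/xyz-abc \
--namespace "$(oc project -q)" \
--debug \
--dry-run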

Helm - not able to fetch jetstack/cert-manager

Fetching the deprecated cert-manager Helm chart from GitHub into an untar directory is easy:
helm fetch \
--version v0.5.2 \
--untar \
--untardir charts \
stable/cert-manager
I have been trying to fetch the up-to-date Helm chart from Jetstack in the same manner:
helm fetch \
--repo https://charts.jetstack.io \
--untar \
--untardir charts \
jetstack/cert-manager
Error: chart "jetstack/cert-manager" not found in https://charts.jetstack.io repository
If you specify the --repo flag, you should be able to fetch the chart without the jetstack/ prefix in the chart name.
helm fetch \
--repo https://charts.jetstack.io \
--untar \
--untardir charts \
cert-manager
The chart name prefix refers to a repository configured locally, like stable/.
After first running helm repo add jetstack https://charts.jetstack.io, you will be able to fetch the chart without the --repo flag.
helm fetch \
--untar \
--untardir charts \
jetstack/cert-manager
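Putting that together, the full sequence with a locally added repo would be something like:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm fetch \
--untar \
--untardir charts \
jetstack/cert-manager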

How can I generate cert.pem and key.pem to deploy in a Kubernetes cluster with helm?

When I try to install DB2 with the command:
helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev --tls \
--set db2inst.instname=db2inst1 --set db2inst.password=start1a \
--set options.databaseName=STRADER --set persistence.useDynamicProvisioning=true \
--set dataVolume.size=20Gi --set dataVolume.storageClassName=ibmc-block-gold
I get the following error message:
could not read x509 key pair (cert: "/Users/name/.helm/cert.pem", key:
"/Users/name/.helm/key.pem"): can't load key pair from cert
/Users/name/.helm/cert.pem and key /Users/name/.helm/key.pem: open
/Users/name/.helm/cert.pem: no such file or directory
=> What is the default directory for the files cert.pem and key.pem?
I think you are following their README.md; the installation instructions there assume you have Tiller set up in your cluster with TLS enabled.
If you remove the --tls flag from the command (helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev --set db2inst.instname=db2inst1 --set db2inst.password=start1a --set options.databaseName=STRADER --set persistence.useDynamicProvisioning=true --set dataVolume.size=20Gi --set dataVolume.storageClassName=ibmc-block-gold), it will not attempt to find the certificates.
If you need TLS between Helm and Tiller, follow this link. Also, per this link, copy the certificates into Helm's home directory:
$ cp ca.cert.pem $(helm home)/ca.pem
$ cp helm.cert.pem $(helm home)/cert.pem
$ cp helm.key.pem $(helm home)/key.pem
Then, run the helm install --name stocktrader-db2 ... command.
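For completeness, the Tiller side also has to be deployed with TLS enabled for --tls to work; from memory of the Helm 2 securing guide, the initialization looks roughly like this (treat the flag names as a sketch and check the linked docs):
helm init \
--tiller-tls \
--tiller-tls-cert tiller.cert.pem \
--tiller-tls-key tiller.key.pem \
--tiller-tls-verify \
--tls-ca-cert ca.cert.pem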
For reference, this is the command I removed --tls from:
helm install --name stocktrader-db2 ibm-charts/ibm-db2oltp-dev \
--tls \
--set db2inst.instname=db2inst1 \
--set db2inst.password=ThisIsMyPassword \
--set options.databaseName=STRADER \
--set persistence.useDynamicProvisioning=true \
--set dataVolume.size=20Gi \
--set dataVolume.storageClassName=glusterfs
If TLS is needed, the Helm configuration can be done via the following installation procedure:
https://helm.sh/docs/using_helm/#securing-your-helm-installation