How to download a helm chart as a file for templating? - kubernetes-helm

I can successfully push a helm chart to my harbor registry:
helm registry login -u robot$myrobo -p ... https://myregistry.mycompany.com
helm chart save mychart.gz myregistry.mycompany.com/myrepo/mychart:0.0.0-1
helm chart push myregistry.mycompany.com/myrepo/mychart:0.0.0-1
I can pull the chart as well:
helm registry login -u robot$myrobo -p ... https://myregistry.mycompany.com
helm chart pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1
Both commands succeed, but now I want to run
helm template name path/to/the/chart/I/just/downloaded -f another_file | further_processing
But path/to/the/chart/I/just/downloaded does not exist. It did with Helm 2 and another registry, but with Helm 3 the chart does not seem to be physically downloaded anywhere,
except into the cache (https://helm.sh/docs/topics/registries/#where-are-my-charts), where I could probably parse index.json and somehow get to my data, but that is not desired. Is there a convenient way to access my files in the template command?
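In the Helm 3 versions that ship these experimental helm chart commands (roughly 3.0 through 3.6), there is also a helm chart export subcommand that writes a chart from the local cache to a directory on disk. A sketch, reusing the registry reference from above:

```shell
# Pull the chart into the local OCI cache, then export it to ./chart
helm chart pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1
helm chart export myregistry.mycompany.com/myrepo/mychart:0.0.0-1 --destination ./chart

# The exported directory is named after the chart and can be templated directly
helm template name ./chart/mychart -f another_file
```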
Answer by Rafał Leszko:
I tried:
$ helm pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1
Error: repo myregistry.mycompany.com not found
$ helm pull myrepo/mychart:0.0.0-1
Error: repo myrepo not found
I know there are no typos because helm chart pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1 succeeds.

You can try this command:
helm pull hazelcast/hazelcast --untar --destination ./chart

With Helm 3, you can use the helm pull command. It downloads the given helm chart to the current directory.
Try the following commands; they work fine.
helm pull hazelcast/hazelcast
helm template hazelcast hazelcast-*.tgz

Try the commands below. You will need to add the Hazelcast repository before pulling the chart.
helm repo add hazelcast https://hazelcast-charts.s3.amazonaws.com/
helm repo update
helm pull hazelcast/hazelcast
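The Hazelcast commands above assume a classic chart repository. For the OCI registry in the original question, Helm 3.7 and later fold OCI support into the regular commands, so helm pull can fetch and unpack the chart directly. A sketch, assuming Helm >= 3.7 and the registry from the question:

```shell
# Quote the robot account name so the shell does not expand $myrobo
helm registry login -u 'robot$myrobo' myregistry.mycompany.com

# Pull and unpack the chart from the OCI registry, then template it
helm pull oci://myregistry.mycompany.com/myrepo/mychart --version 0.0.0-1 --untar --destination ./chart
helm template name ./chart/mychart -f another_file
```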

Related

pushing the helm chart to azure container registry fails

I get the error below when I try to push the chart to ACR. Can you suggest what needs to be done here?
"This command is implicitly deprecated because command group 'acr helm' is deprecated and will be removed in a future release. Use 'helm v3' instead."
I followed this article to create helm chart
https://cloudblogs.microsoft.com/opensource/2018/11/27/tutorial-azure-devops-setup-cicd-pipeline-kubernetes-docker-helm/
These articles also describe the issue, but I don't understand what needs to be done to fix it.
https://github.com/Azure/azure-cli/issues/14498
https://gitanswer.com/azure-cli-az-acr-helm-commands-not-working-python-663770738
https://github.com/Azure/azure-cli/issues/14467
Here is the YAML script that throws the error:
- bash: |
    cd $(projectName)
    chartPackage=$(ls $(projectName)-$(helmChartVersion).tgz)
    az acr helm push \
      -n $(registryName) \
      -u $(registryLogin) \
      -p '$(registryPassword)' \
      $chartPackage
Chart.yaml
apiVersion: v1
description: first helm chart create
name: helmApp
version: v0.3.0
Azure has deprecated support for managing Helm charts with the Azure CLI, so you will need Helm client version 3.7.1 or later to push Helm charts to ACR.
To push the Helm charts to ACR, follow the next steps:
Enable OCI support
export HELM_EXPERIMENTAL_OCI=1
Save your chart to a local archive
cd chart-dir
helm package .
Authenticate with the registry using the helm registry login command
helm registry login $ACR_NAME.azurecr.io \
--username $USER_NAME \
--password $PASSWORD
Push chart to the registry as OCI artifact
helm push chart-name-0.1.0.tgz oci://$ACR_NAME.azurecr.io/helm
You can use the above steps in the Azure DevOps pipeline and it will work as expected. For more info on pushing helm charts to ACR, refer to this doc.
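Put together, the failing pipeline step can be rewritten without az acr helm. A sketch, assuming Helm >= 3.7 is available on the agent and reusing the pipeline variables from the question:

```shell
# $(registryName) etc. are Azure DevOps pipeline variables, substituted before the script runs.
# The env variable is only needed on Helm 3.7.x and earlier; OCI support is GA from 3.8.
export HELM_EXPERIMENTAL_OCI=1

# Authenticate against ACR and push the packaged chart as an OCI artifact
helm registry login "$(registryName).azurecr.io" \
  --username "$(registryLogin)" \
  --password "$(registryPassword)"
helm push "$(projectName)-$(helmChartVersion).tgz" "oci://$(registryName).azurecr.io/helm"
```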
Export the variable HELM_EXPERIMENTAL_OCI=1 as part of the bash script. Helm repositories in ACR are OCI registries and therefore need this environment variable set in order to push (it is no longer needed from Helm 3.8, where OCI support is GA).
Upon closer examination of the question: you should use the built-in task for this:
- task: HelmDeploy@0
  displayName: Helm save
  inputs:
    command: save
    chartNameForACR: '<chart_name>:<tag>'
    chartPathForACR: <chart_dir>
    azureSubscriptionEndpointForACR: $(SERVICE_CONNECTION)
    azureResourceGroupForACR: $(REGISTRY_RESOURCE_GROUP)
    azureContainerRegistry: $(REGISTRY_NAME)

Issue in helm command execution, helm show command doesn't work

I am using the command below to export the default values of the chart helm-zabbix to the file $HOME/zabbix_values.yaml, as I am trying to install Zabbix on a Kubernetes cluster.
helm show values cetic/zabbix > $HOME/zabbix_values.yaml
But I am getting the error below:
Error: unknown command "show" for "helm"
Run 'helm --help' for usage.
But helm --help doesn't list show among its commands.
Do I need to install a helm plugin, or can you suggest an alternative?
You should use inspect instead of the show command: helm version 2 uses inspect where version 3 uses show.
Try using
helm inspect values cetic/zabbix
You can run the command below to get the values of a release that is already deployed:
helm get values <release_name> -n <namespace>
The namespace flag is needed for helm 3.
helm show is used to show the information on a chart.
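The two variants can be combined into a version-portable snippet. A sketch, assuming the cetic repo has been added (its usual URL is https://cetic.github.io/helm-charts):

```shell
# Helm 3 uses `show`, Helm 2 uses `inspect`; pick based on the client version.
# Helm 3 prints e.g. "v3.9.0+g...", Helm 2 prints "Client: v2.16.1+g...".
if helm version --short 2>/dev/null | grep -q '^v3'; then
  helm show values cetic/zabbix > "$HOME/zabbix_values.yaml"
else
  helm inspect values cetic/zabbix > "$HOME/zabbix_values.yaml"
fi
```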

What helm repository should I add?

I am following these instructions (https://docs.rocket.chat/installation/paas-deployments/eks) which use helm.
However helm seems to have changed since these instructions were written and a 'repository' is now required.
My question is:
What repo should I add for helm v3 that is the equivalent of the helm v2 default?
Here is what I have done
I tried the command helm init --service-account tiller
but received the error Error: unknown command "init" for "helm".
I read here that init is no longer required for helm.
So I tried the next command to install traefik, helm install stable/traefik --name traefik --namespace kube-system --set rbac.enabled=true.
and that says Error: unknown flag: --name, which is also a change for v3.
So I adjust the command to be helm install traefik stable/traefik --namespace kube-system --set rbac.enabled=true.
And now I get Error: failed to download "stable/traefik" (hint: running helm repo update may help).
helm repo update returns Error: no repositories found. You must add one before updating
I tried helm repo add stable but got Usage: helm repo add [NAME] [URL] [flags]
In the documentation I am not finding anything about what that NAME or URL should be.
So, another way to ask my question is:
What values should [NAME] and [URL] be for the helm repo add command?
The command is:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
- https://github.com/helm/charts
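Note that the storage.googleapis.com URL above has since been decommissioned; the stable charts (now deprecated, but still archived) moved to charts.helm.sh. On a current Helm install the equivalent would be:

```shell
# Add the archived stable repo at its current location and refresh the index
helm repo add stable https://charts.helm.sh/stable
helm repo update
```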

Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)

Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears --name is no longer needed, so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers: the command above now works; however, I cannot connect to the database using SQL Server Management Studio from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
mymssql-mssql-linux-service   NodePort   10.107.98.68   <none>   1433:32489/TCP   7s
3) Then try to connect to the database using SQL Server Management Studio 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Server Management Studio.
Check whether the stable repo is added:
helm repo list
If not, add it:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
Then run the command below to install mssql-linux:
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm in version 3 does not have any repository added by default (helm v2 had the stable repository added by default), so you need to add it manually.
Update:
First of all, if you are using helm, keep everything in helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and helm (I am referring to exposing the service via kubectl).
Ad. 1, 2. You have to read some docs to understand Kubernetes services.
With the expose command and type NodePort you are exposing your MSSQL server on port 32489 on the Kubernetes nodes. You can check the IPs of the Kubernetes nodes with kubectl get nodes -owide, so your database is available on <node-ip>:32489. This approach is very tricky: it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3. For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to MSSQL on localhost:1433.
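On the password question: charts like this one typically generate the sa password into a Kubernetes Secret (the b64enc quote the asker tried is template residue, not the password). A sketch for retrieving it; the secret name mymssql-mssql-linux-secret and the key sapassword are assumptions based on the chart's naming conventions, so check kubectl get secrets first:

```shell
# Read the SA password stored by the chart and base64-decode it
# (secret name and key are assumptions; verify with `kubectl get secrets`)
kubectl get secret mymssql-mssql-linux-secret \
  -o jsonpath='{.data.sapassword}' | base64 --decode
```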
In case the chart you want to use is not published to a hub, you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer

How to deploy a release after changing the configurations?

I have successfully released jhub in my cluster. I then changed the config to pull a different docker image, as stated in the documentation.
This time, while running the same old command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.2 \
--values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub, and tried the command again, in vain: the same error.
I read a few GH issues and found that either the YAML file was invalid or the --force flag worked. However, in my case, neither of these applies.
I expect to make this release and also learn how to edit the current releases.
Note: as described in the aforementioned documentation, a PVC is created.
I had the same issue when I was trying to update my config.yaml file in GKE. What actually worked for me was to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] Run helm version to verify that you get output similar to the documentation's
Add repo
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
Run upgrade
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
After changing my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
This works with kubectl 1.10.0 and helm 2.3.0. I guess it upgrades tiller to a compatible helm version.
Don't forget to set the KUBECONFIG variable before using this command; this step by itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
In my case the cluster.server field in the config had changed, but I left the context.name and current-context fields the same as in the previous config; I am not sure if that matters. I faced the same issue on the first try to deploy a new release with helm, but after the first successful deploy it is enough to change the KUBECONFIG variable.
I hope it helps.
I added the following to my gcloud setup. I run it every time I update my config.yaml file. Make sure to be connected to the correct Kubernetes cluster before running.
update.sh
# Installs Helm.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# Make Helm aware of the JupyterHub Helm chart repo.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# Re-installs the chart configured by your config.yaml.
RELEASE=jhub
JUPYTERHUB_VERSION=0.9.0
helm upgrade $RELEASE jupyterhub/jupyterhub \
--version=${JUPYTERHUB_VERSION} \
--values config.yaml