We have one ECR repository on AWS that contains all our Helm charts.
This ECR is protected, and I have been assigned a role.
This role allows me to pull all the images from the AWS CLI.
Now I am using Helm to deploy a chart, for which I used the following dependencies.
When I run helm dep update, only the postgresql chart is pulled; the test-chart request fails with error 401.
I understand that I need to supply the AWS credentials somewhere, but I don't know where.
One more thing: it would be nice if someone could tell me how to access this with an AWS access token.
dependencies:
  - name: postgresql
    version: 9.2.1
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: createdb
    version: latest
    repository: https://111.ecr.eu-central-1.amazonaws.com/test-chart
Helm client version 3 does support ECR as a Helm chart repository now, although OCI-based registry support is currently considered experimental in the official Helm documentation.
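Because OCI support is experimental on these Helm versions (anything before 3.8), the OCI-related commands below assume the experimental flag has already been enabled in the shell:

export HELM_EXPERIMENTAL_OCI=1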
Assuming you have the required permissions and have already pushed the helm chart to ECR (follow documentation here if you haven't), you can (optionally) do a quick aws ecr describe-images to get the list of available tags of your helm-chart on the ECR repo.
aws ecr describe-images --repository-name <your-repo-name> --region <region> --profile <profile>
{
    "imageDetails": [
        {
            "registryId": "************",
            "repositoryName": "<your-repo-name>",
            "imageDigest": "sha256:******************************************",
            "imageTags": [
                "0.1.6"
            ],
            "imageSizeInBytes": 3461,
            "imagePushedAt": "2021-04-07T00:16:31+01:00",
            "imageManifestMediaType": "application/vnd.oci.image.manifest.v1+json",
            "artifactMediaType": "application/vnd.cncf.helm.config.v1+json"
        }
    ]
}
Get ECR token & login to helm repo:
aws ecr get-login-password --region <region> | helm registry login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
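If you specifically want to work with the token itself (the asker's question about an AWS access token), the same get-login-password output can be captured in a variable and passed explicitly; a small sketch, with the region and account ID left as placeholders:

TOKEN=$(aws ecr get-login-password --region <region>)
helm registry login --username AWS --password "$TOKEN" <aws_account_id>.dkr.ecr.<region>.amazonaws.com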
Once you have the required details, you can run a helm chart pull command to pull the chart from ECR.
helm chart pull <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>:0.1.6
0.1.6: Pulling from <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>
ref: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>:0.1.6
digest: d16af8672604ebe54********************************************
size: 3.2 KiB
name: <your-chart-name>
version: 0.1.6
Status: Chart is up to date for <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>:0.1.6
Verify:
helm chart list
REF NAME VERSION DIGEST SIZE CREATED
<aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>... <your-chart-name> 0.1.6 50f03e4 3.2 KiB 22 second
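On these experimental releases (Helm 3.0-3.6) the pulled chart only lives in the local registry cache, so to use it with helm template or helm install you would typically export it to a directory first; a sketch assuming the same repo and tag as above:

helm chart export <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<your-repo-name>:0.1.6 --destination ./charts
helm install createdb ./charts/<your-chart-name>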
Storing, pulling, and installing a chart from any OCI-compatible registry is no longer experimental as of Helm 3.7+.
With ECR, once you've logged in:
$ aws ecr get-login-password --region eu-west-1 | helm registry login \
--username AWS --password-stdin 12345678910.dkr.ecr.eu-west-1.amazonaws.com
you can pull the chart via:
$ helm pull \
oci://12345678910.dkr.ecr.eu-west-1.amazonaws.com/my/helm/chart --version 0.1.19
Pulled: 12345678910.dkr.ecr.eu-west-1.amazonaws.com/my/helm/chart:0.1.19
Digest: sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Tested with helm 3.9.0
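Since the original question was about helm dependency update: on recent Helm versions (3.8+, where OCI support is GA) you can also reference the ECR chart as an OCI dependency directly in Chart.yaml after the helm registry login above. A sketch, assuming a hypothetical account ID and an ECR repository named createdb:

dependencies:
  - name: createdb
    version: 0.1.0   # OCI dependencies need a concrete version; "latest" will not resolve
    repository: oci://111122223333.dkr.ecr.eu-central-1.amazonaws.com

helm dependency update will then pull the chart from ECR alongside the Bitnami postgresql dependency.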
I get the below error when I try to push the chart to ACR. Can you suggest the steps to be done here?
"This command is implicitly deprecated because command group 'acr helm' is deprecated and will be removed in a future release. Use 'helm v3' instead."
I followed this article to create helm chart
https://cloudblogs.microsoft.com/opensource/2018/11/27/tutorial-azure-devops-setup-cicd-pipeline-kubernetes-docker-helm/
These articles also describe the issue, but I don't understand what needs to be done to fix it.
https://github.com/Azure/azure-cli/issues/14498
https://gitanswer.com/azure-cli-az-acr-helm-commands-not-working-python-663770738
https://github.com/Azure/azure-cli/issues/14467
Here is the YAML script that throws the error:
- bash: |
    cd $(projectName)
    chartPackage=$(ls $(projectName)-$(helmChartVersion).tgz)
    az acr helm push \
      -n $(registryName) \
      -u $(registryLogin) \
      -p '$(registryPassword)' \
      $chartPackage
Chart.yaml
apiVersion: v1
description: first helm chart create
name: helmApp
version: v0.3.0
Azure has deprecated support for managing Helm charts using the Azure CLI, so you will need Helm client version 3.7.1 or later to push Helm charts to ACR.
To push the Helm charts to ACR, follow the next steps:
Enable OCI support
export HELM_EXPERIMENTAL_OCI=1
Save your chart to a local archive
cd chart-dir
helm package .
Authenticate with the registry using helm registry login command
helm registry login $ACR_NAME.azurecr.io \
--username $USER_NAME \
--password $PASSWORD
Push chart to the registry as OCI artifact
helm push chart-name-0.1.0.tgz oci://$ACR_NAME.azurecr.io/helm
You can use the above steps in the Azure DevOps pipeline and it will work as expected. For more info on pushing helm charts to ACR, refer to this doc.
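As a sketch of how the failing step from the question could be adapted (assuming $(registryName) holds the bare registry name and the other pipeline variables are unchanged):

- bash: |
    export HELM_EXPERIMENTAL_OCI=1   # only needed on Helm < 3.8
    cd $(projectName)
    chartPackage=$(ls $(projectName)-$(helmChartVersion).tgz)
    helm registry login $(registryName).azurecr.io \
      --username $(registryLogin) \
      --password '$(registryPassword)'
    helm push $chartPackage oci://$(registryName).azurecr.io/helm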
Export the variable HELM_EXPERIMENTAL_OCI=1 as part of the bash script. Helm repositories in ACR are OCI registries and therefore need this environment variable set in order to push.
Upon closer examination of the question, you should use the built-in task for this:
- task: HelmDeploy@0
  displayName: Helm save
  inputs:
    command: save
    chartNameForACR: '<chart_name>:<tag>'
    chartPathForACR: <chart_dir>
    azureSubscriptionEndpointForACR: $(SERVICE_CONNECTION)
    azureResourceGroupForACR: $(REGISTRY_RESOURCE_GROUP)
    azureContainerRegistry: $(REGISTRY_NAME)
I can successfully push a helm chart to my harbor registry:
helm registry login -u robot$myrobo -p ... https://myregistry.mycompany.com
helm chart save mychart.gz myregistry.mycompany.com/myrepo/mychart:0.0.0-1
helm chart push myregistry.mycompany.com/myrepo/mychart:0.0.0-1
I can pull the chart as well:
helm registry login -u robot$myrobo -p ... https://myregistry.mycompany.com
helm chart pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1
Both commands succeed, but now I want to run
helm template name path/to/the/chart/I/just/downloaded -f another_file | further_processing
But path/to/the/chart/I/just/downloaded does not exist. It used to with helm 2 and another registry but now (with helm3) the file does not seem to be physically downloaded somewhere.
Except into the cache (https://helm.sh/docs/topics/registries/#where-are-my-charts), where I could probably parse the index.json and somehow get to my data, but that is not desired. Is there a convenient way to access my files in the template command?
Proceedings:
Answer by Rafał Leszko:
I tried:
$ helm pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1
Error: repo myregistry.mycompany.com not found
$ helm pull myrepo/mychart:0.0.0-1
Error: repo myrepo not found
I know there are no typos because helm chart pull myregistry.mycompany.com/myrepo/mychart:0.0.0-1 succeeds.
you can try this command:
helm pull hazelcast/hazelcast --untar --destination ./chart
With Helm 3, you can use the helm pull command. It downloads the given helm chart to the current directory.
Try the following commands. They work fine.
helm pull hazelcast/hazelcast
helm template hazelcast hazelcast-*.tgz
Try the commands below. You will need to add the Hazelcast repository before pulling the chart.
helm repo add hazelcast https://hazelcast-charts.s3.amazonaws.com/
helm repo update
helm pull hazelcast/hazelcast
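For the asker's OCI (Harbor) registry rather than a classic chart repository, the equivalent on Helm 3.7+ would be along these lines, reusing the registry and chart reference from the question (on 3.0-3.6 the experimental helm chart export command serves the same purpose):

helm pull oci://myregistry.mycompany.com/myrepo/mychart --version 0.0.0-1 --untar --destination ./chart
helm template name ./chart/mychart -f another_file | further_processing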
Please see the command below:
helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
which I got from here: https://github.com/helm/charts/tree/master/stable/mssql-linux
After just one month it appears the --name is no longer needed so I now have (see here: Helm install unknown flag --name):
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
The error I see now is:
Error: failed to download "stable/mssql-linux" (hint: running `helm repo update` may help)
What is the problem?
Update
Following on from the answers, the command above now works; however, I cannot connect to the database using SQL Studio Manager from my local PC. The additional steps I have followed are:
1) kubectl expose deployment mymssql-mssql-linux --type=NodePort --name=mymssql-mssql-linux-service
2) kubectl get service - the below service is relevant here
mymssql-mssql-linux-service NodePort 10.107.98.68 1433:32489/TCP 7s
3) Then try to connect to the database using SQL Studio Manager 2019:
Server Name: localhost,32489
Authentication: SQL Server Authentication
Login: sa
Password: I have tried: b64enc quote and MyStrongPassword1234
I cannot connect using SQL Studio Manager.
Check if the stable repo is added or not
helm repo list
If not then add
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
And then run below to install mssql-linux
helm install mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
Try:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
and then run your helm command.
Explanation:
Helm in version 3 does not have any repository added by default (Helm v2 had the stable repository added by default), so you need to add it manually.
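Note that the kubernetes-charts.storage.googleapis.com URL used above has since been decommissioned; on current Helm installations the equivalent would be:

helm repo add stable https://charts.helm.sh/stable
helm repo update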
Update:
First of all, if you are using Helm, keep everything in Helm values; it makes things cleaner and easier to find later, rather than mixing kubectl and Helm - I am referring to exposing the service via kubectl.
Ad. 1, 2. You have to read some docs to understand Kubernetes services.
With the expose command and type NodePort you are exposing your MSSQL server on port 32489 on the Kubernetes nodes. You can check the IP of the Kubernetes nodes with kubectl get nodes -owide, so your database is available on <node-ip>:32489. This approach is very tricky; it might work fine for PoC purposes, but it is not a recommended way, especially on cloud-hosted Kubernetes. You can achieve the same result by appending --set service.type=NodePort to your helm command.
Ad. 3. For debugging purposes you can use kubectl port-forward to forward traffic from the container to your local machine. kubectl port-forward deployment/mymssql-mssql-linux 1433 should do the trick, and you should be able to connect to MSSQL on localhost:1433.
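A minimal sketch of the port-forward approach described above, using the deployment name from the question's output (the local port is arbitrary):

kubectl port-forward deployment/mymssql-mssql-linux 1433:1433
# then connect from SQL Studio Manager with Server Name: localhost,1433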
In case the chart you want to use is not published to a hub, you can install the package directly using the path to the unpacked chart directory.
For example (works for helm v3.2.4):
git clone https://github.com/helm/charts/
cd charts/stable
helm install mymssql ./mssql-linux --set acceptEula.value=Y --set edition.value=Developer
I have successfully released jhub in my cluster. I then changed the config to pull another docker image, as stated in the documentation.
This time, while running the same old command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.2 \
--values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml file is:
proxy:
  secretToken: "<a secret token>"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: 177037d09156
I get the following problem:
UPGRADE FAILED
ROLLING BACK
Error: "jhub" has no deployed releases
Error: UPGRADE FAILED: "jhub" has no deployed releases
I then deleted the namespace via kubectl delete ns/jhub and the release via helm delete --purge jhub. I ran the command again, in vain, and got the same error.
I read a few GitHub issues and found that either the YAML file was invalid or that the --force flag helped. However, in my case, neither applies.
I expect to make this release and also learn how to edit the current releases.
Note: As you would find in the aforementioned documentation, there is a pvc created.
I had the same issue when I was trying to update my config.yaml file in GKE. Actually what worked for me is to redo these steps:
run curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --service-account tiller --history-max 100 --wait
[OPTIONAL] helm version to verify that you have a similar output to the documentation
Add repo
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
Run upgrade
RELEASE=jhub
NAMESPACE=jhub
helm upgrade $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
After changing my kubeconfig, the following solution worked for me:
helm init --tiller-namespace=<ns> --upgrade
Works with kubectl 1.10.0 and helm 2.3.0. I guess this upgrades tiller to a compatible helm version.
Don't forget to set the KUBECONFIG variable before using this command - this step itself may solve your issue if you didn't do it after changing your kubeconfig.
export KUBECONFIG=<*.kubeconfig>
In my case the cluster.server field had changed in the config, but I left the context.name and current-context fields the same as in the previous config; not sure if it matters. I faced the same issue on the first try to deploy a new release with helm, but after the first successful deploy it's enough to change the KUBECONFIG variable.
I hope it helps.
I added the following script to my gcloud setup. I run it every time I update my config.yaml file. Make sure you are connected to the correct Kubernetes cluster before running it.
update.sh
# Installs Helm.
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
# Make Helm aware of the JupyterHub Helm chart repo.
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# Re-installs the chart configured by your config.yaml.
RELEASE=jhub
JUPYTERHUB_VERSION=0.9.0
helm upgrade $RELEASE jupyterhub/jupyterhub \
--version=${JUPYTERHUB_VERSION} \
--values config.yaml
I'm new to using Spinnaker and Halyard. I'm following this guide by Google.
When installing Spinnaker, they use Helm and pass in a spinnaker-config.yaml file that looks like this:
./helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 \
--version 1.1.6 --wait
spinnaker-config.yaml:
export SA_JSON=$(cat spinnaker-sa.json)
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'

dockerRegistries:
- name: gcr
  address: https://gcr.io
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com

# Disable minio as the default storage backend
minio:
  enabled: false

# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.10.2
  image:
    tag: 1.12.0
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project [project_guid] \
          --message-format GCR
EOF
I need to add another pubsub with a different name than gcr-triggers and noticed that anything I try to add in a pipeline won't persist. I suspect this is because it needs to be added with hal like so:
note: I've already created and verified gcloud subscriptions and add-iam-policy-binding.
hal config pubsub google subscription add [new_trigger] \
--subscription-name [new_trigger] \
--json-path /opt/gcs/key.json \
--project $PROJECT \
--message-format GCR
I suspect installing Spinnaker like this is somewhat unconventional (correct me if I'm wrong). I've never run a hal binary from the master machine where kubectl is run, and this was not necessary in the guide. Spinnaker's architecture has a bunch of pods I can see; I've poked around in them and didn't find hal.
My question is: with this guide, how am I suppose to hal config new things? What's the normal way this is done?
Helm is a package manager similar to apt in some linux distros.
Because this is a microservice architecture running in Kubernetes, you must access the Halyard pod (actually it should be a StatefulSet).
Get the Halyard pod:
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
Access the Halyard pod in the spinnaker namespace: kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
Test access by running hal config; you should get the full config of Spinnaker.
After you apply the changes you need, don't forget to run hal deploy apply.
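Putting the steps above together as a sketch (the namespace and label are taken from this answer, the subscription placeholder from the question):

export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- /bin/bash
# inside the pod:
hal config pubsub google subscription add [new_trigger] \
  --subscription-name [new_trigger] \
  --json-path /opt/gcs/key.json \
  --project $PROJECT \
  --message-format GCR
hal deploy apply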