I had written the role task for adding the jumpstart repo. Next, how can I pull the chart from the jumpstart repo?
helm repo add jumpstart https://project.github.io/jumpstart
- name: Add jumpstart chart repo
  community.kubernetes.helm_repository:
    name: jumpstart
    repo_url: "https://charts.jumpstart.io"
(How can I run the pull command below through kubernetes.core.helm in Ansible?)
helm pull jumpstart/patchfile
And after pulling, I want to install the chart:
helm install patchfile -f patchfile.yaml jumpstart/patchfile -n test --create-namespace
Note: the shell and command modules are not to be used.
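A minimal sketch of how this could look with kubernetes.core.helm, assuming the kubernetes.core collection is installed and that patchfile.yaml sits next to the playbook; a separate pull step is normally unnecessary, because the module installs straight from the repo/chart reference added by the task above:
- name: Install patchfile chart from the jumpstart repo
  kubernetes.core.helm:
    name: patchfile                   # release name
    chart_ref: jumpstart/patchfile    # repo/chart added by the helm_repository task
    release_namespace: test
    create_namespace: true
    values_files:
      - patchfile.yaml                # assumed to be next to the playbook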
Related
I'm new to Helm charts and want to customise an existing public chart. Is there any way to do it?
We are okay with hosting it at our end; I just wanted to check how to dump the existing Helm chart.
You can do the following if you want to pull the Helm chart locally to customise it:
helm pull [chart URL | repo/chartname] [...] [flags]
Example -
helm repo add <name> <url>
helm pull repo-name/chart-name
Then you can customise it as per your needs.
If any further explanation is needed, let me know.
I am trying to add JFrog Artifactory to Spinnaker so that Spinnaker will be able to fetch the Helm chart and make the deployment. I am trying this command but it's not working:
hal config artifact helm account add my-helm-account \
--username-password-file $USERNAME_PASSWORD_FILE
When I run the pipeline it shows me this error:
Status: 500, URL: http://spin-clouddriver.spinnaker:7002/artifacts/fetch/, Message: Failed to download index.yaml file in
You'll need to provide the --repository flag as well. I'm guessing that spin-clouddriver URL is the default if a repository isn't specified.
The final hal command may look like this:
hal config artifact helm account add my-helm-account --username-password-file $USERNAME_PASSWORD_FILE --repository https://my.artifactory.com/artifactory/helm
Reference for the command: https://spinnaker.io/docs/reference/halyard/commands/#hal-config-artifact-helm-account-add
I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. In order to alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands that I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question 1: is it possible to include kubectl commands in my Helm chart?
Question 2: is it possible to add a dependency on a chart that is only available remotely, i.e. the postgres dependency above?
Question 3: if you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
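On Question 2 specifically: a chart that only exists in a remote repository can be declared as a dependency in the umbrella chart's Chart.yaml and then fetched with helm dependency update. A minimal sketch, assuming a Helm 3 (apiVersion: v2) umbrella chart; the umbrella name and the version constraint are illustrative:
# Chart.yaml of the umbrella chart
apiVersion: v2
name: my-umbrella
version: 0.1.0
dependencies:
  - name: postgresql                               # chart name inside the remote repo
    version: ">=11.0.0"                            # illustrative version constraint
    repository: https://charts.bitnami.com/bitnami
Values for that dependency then go under a postgresql: key in the umbrella chart's values.yaml.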
The CDK EKS module has a Helm chart construct, making it possible to point at a Helm repo, the chart, and so forth.
I keep my Helm charts in a private Bitbucket repository, and it is possible to use command-line Helm to add that repo using credentials, like this:
helm repo add my-helm https://api.bitbucket.org/2.0/repositories/myaccount/my-helm/src/master/ --username my#email.com --password mypass
How can I provide my credentials for my CDK stack to process the Helm repo correctly?
I use TypeScript.
Just hit the same limitation. It is not supported at the moment; see here: https://github.com/aws/aws-cdk/issues/11031
I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I connected the local Docker daemon to Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yml to:
image:
  repository: app-development
  tag: latest
  pullPolicy: Never
I build the image locally and install/upgrade the chart with Helm:
docker build . -t app-development
helm upgrade --install example ./chart
Now, this works perfectly the first time, but if I make changes to the application I would like to run the above two commands to upgrade the image. Is there any way to get this working?
Workaround
To get the expected behaviour I can delete the chart from Minikube and install it again:
docker build . -t app-development
helm del --purge example
helm install example ./chart
When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).
The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a time stamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:
image: app-development:{{ .Values.tag | default "latest" }}
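If you keep the image: block from the values.yml shown in the question instead of a single top-level tag value, the same idea might look like the sketch below, with the template line living in the container spec of templates/deployment.yaml:
# values.yaml (sketch)
image:
  repository: app-development
  tag: latest

# templates/deployment.yaml, container spec (sketch)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
With that layout, the upgrade command below would pass --set image.tag=$TAG instead of --set tag=$TAG.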
This sort of build sequence would look a little more like
TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.
One way you could solve this problem is by using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root location. You can put the Helm chart for the same project in the same code base. Go ahead and edit this configuration to match the folder location of the Helm chart:
deploy:
  helm:
    releases:
      - name: <chart_name>
        chartPath: <folder path relative to this file>
Now, when you click on Cloud Code at the bottom of your Visual Studio Code editor (or any supported editor), it should give you the following options:
(Screenshot of the Cloud Code run options: https://i.stack.imgur.com/vXK4U.png)
Select "Run on Kubernetes" from the list.
The only change you'll have to make for your Helm chart is to read the image URL from the Skaffold YAML using a profile.
profiles:
  - name: prod
    deploy:
      helm:
        releases:
          - name: <helm_chart_name>
            chartPath: helm
            skipBuildDependencies: true
            artifactOverrides:
              image: <url_production_image_url>
This will read the image from the configured URL, whereas locally it should read from the Docker daemon. Cloud Code also provides hot update / redeployment when you make changes to any file, so there is no need to always specify an image tag while testing locally. Once you're happy with the code, update the image to the latest version number, which should trigger a deployment in your integration / dev environment.
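For reference, a minimal complete skaffold.yaml combining the build and Helm deploy pieces above might look like the sketch below; the schema version, image name, release name, and chart path are assumptions, not values from the original setup:
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: app-development           # built from the local Dockerfile
deploy:
  helm:
    releases:
      - name: example
        chartPath: chart               # folder containing the Helm chart
        artifactOverrides:
          image: app-development       # Skaffold injects the freshly built image reference here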