Helm: use packaged values.dev.yaml for install

Given the chart structure:
└── myChart
    ├── Chart.yaml
    ├── templates
    │   ├── ...
    │   └── service.yaml
    ├── values.dev.yaml
    └── values.yaml
values.dev.yaml gets packaged into the chart .tgz. Is it possible to use values.dev.yaml for values (-f) when installing?
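As far as I know, -f/--values only accepts local files or URLs, not a path inside a packaged .tgz, so one workaround is to extract the bundled file first. A minimal sketch, assuming the packaged chart is named myChart-0.1.0.tgz (the version in the filename is a placeholder):

# Extract the bundled values.dev.yaml to a temporary file, then pass it to -f:
tar -zxOf myChart-0.1.0.tgz myChart/values.dev.yaml > /tmp/values.dev.yaml
helm install my-release myChart-0.1.0.tgz -f /tmp/values.dev.yaml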

Related

kustomize on Ingress on kubernetes (minikube) cluster

I successfully used kustomize for multiple environments,
with a file tree like:
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── service.yaml
├── overlays
│   ├── prod
│   │   ├── application.properties
│   │   ├── kustomization.yaml
│   └── uat
│       ├── application.properties
│       ├── kustomization.yaml
but I'd now like to add auto-generation of an Ingress resource,
in a tree like this:
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── service.yaml
│   ├── ingress.yaml <============ added template
├── overlays
│   ├── prod
│   │   ├── application.properties
│   │   ├── kustomization.yaml
│   └── uat
│       ├── application.properties
│       ├── kustomization.yaml
Is it possible?
Does anybody have a snippet or trick to do that?
In my prod overlay, my kustomization.yaml file is:
bases:
- ../../base
namespace: prod
patches:
- replicas.yaml
configMapGenerator:
- name: application.properties
  files:
  - application.properties=application.properties
but when I launch my command
kubectl apply -k .
it does not create my Ingress resource (only the ConfigMap, Service and Deployment).
So: how does it work?
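A likely cause, assuming the base kustomization.yaml lists its resources explicitly: kustomize only builds the files that are declared, so the new ingress.yaml must also be added to the base's resources list. A minimal sketch of base/kustomization.yaml:

# base/kustomization.yaml (sketch; assumes the existing entries look like this)
resources:
- deployment.yaml
- service.yaml
- ingress.yaml   # the newly added template has to be declared here, otherwise it is ignored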

Flux v2 - How to Deploy Same Helm Chart, Multiple Times, Into Different Namespaces

We are building out a small cluster for a dev team.
I've been working through this repo: https://github.com/fluxcd/flux2-kustomize-helm-example
The infrastructure part went fine.
Now, instead of apps, I need to create a way for each developer to deploy/maintain their own version of the application they are working on.
├── clusters
│   └── qa
│       ├── deploys.bak
│       ├── flux-system
│       │   ├── gotk-components.yaml
│       │   ├── gotk-sync.yaml
│       │   └── kustomization.yaml
│       └── infrastructure.yaml
├── deploys
│   ├── base
│   ├── dev1
│   ├── dev2
│   ├── dev3
│   └── staging
In deploys/base it would be great to specify a Namespace, a HelmRelease, and a Kubernetes Secret.
Then in deploys/dev1 it would be great if we could include the base but have a way of overriding the namespace everything goes into.
So you would have namespaces app-dev1, app-dev2 etc.
This would allow us to only really have to override the ingress information, and the image tag for the app.
Thanks for any information on this.
You need to add a patch to your kustomization.
patches:
  - target:
      kind: HelmRelease
      name: .*-helm-release
      version: v2beta1
    patch: |-
      - op: add
        path: /spec/targetNamespace
        value: dev1
      - op: replace
        path: /metadata/namespace
        value: dev1
Add this to every env that you want.
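For context, a sketch of how deploys/dev1/kustomization.yaml could look with that patch in place; the ../base path and the .*-helm-release name pattern are assumptions based on the layout above:

# deploys/dev1/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - target:
      kind: HelmRelease
      name: .*-helm-release
      version: v2beta1
    patch: |-
      - op: add
        path: /spec/targetNamespace
        value: dev1
      - op: replace
        path: /metadata/namespace
        value: dev1

The same file, with dev2, dev3 or staging substituted for dev1, would go into each of the other overlays.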

Helmfile: how to move an existing resource to another chart

I want to move the official-web templates out of the sw-api chart into an independent chart, moving those files out of sw-api's templates directory.
Then running helmfile apply gives me this error:
Error: Failed to render chart: exit status 1:
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: ConfigMap "official-web-config" in namespace "develop" exists and cannot be imported into the current release: invalid ownership metadata;
annotation validation error: key "meta.helm.sh/release-name" must equal "official-web": current value is "sw-api"
This is my helm folder now:
── sw-api
│   ├── Chart.yaml
│   ├── templates
│   │   ├── micros-worker
│   │   │   ├── configMap.yaml
│   │   │   ├── deployment.yaml
│   │   │   └── hpa.yaml
│   │   └── official-web
│   │       ├── certificate.yaml
│   │       ├── configMap.yaml
│   │       ├── deployment.yaml
│   │       ├── hpa.yaml
│   │       ├── ingress.yaml
│   │       ├── ip.yaml
│   │       └── svc.yaml
│   ├── values-dev.yaml
├── sw-api-values.yaml.gotmpl
── swagger
│   ├── Chart.yaml
│   ├── templates
│   │   ├── _helpers.tpl
│   │   ├── deployment.yaml
│   │   └── svc.yaml
│   ├── values-dev.yaml
After the move it becomes this:
── official-web
│   ├── Chart.yaml
│   ├── templates
│   │   ├── _helpers.tpl
│   │   ├── certificate.yaml
│   │   ├── configMap.yaml
│   │   ├── deployment.yaml
│   │   ├── hpa.yaml
│   │   ├── ingress.yaml
│   │   ├── ip.yaml
│   │   └── svc.yaml
│   ├── values-dev.yaml
├── official-web-values.yaml.gotmpl
├── sw-api
│   ├── Chart.yaml
│   ├── templates
│   │   └── micros-worker
│   │       ├── configMap.yaml
│   │       ├── deployment.yaml
│   │       └── hpa.yaml
│   ├── values-dev.yaml
│   └── values-test.yaml
├── sw-api-values.yaml.gotmpl
And I run the helmfile apply command (in namespace "develop"):
helmfile -f helmfile.yaml --log-level=debug --debug -e dev apply
This error shows up when running helmfile apply:
Error: Failed to render chart: exit status 1:
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: ConfigMap "official-web-config" in namespace "develop" exists and cannot be imported into the current release: invalid ownership metadata;
annotation validation error: key "meta.helm.sh/release-name" must equal "official-web": current value is "sw-api"
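One common way to resolve this kind of ownership error, sketched under the assumption that the existing resources should be adopted by the new official-web release rather than deleted and recreated: re-annotate and re-label them so Helm (3.2+) accepts them into the new release. The ConfigMap is just the first conflict reported; the other moved resources (Deployment, Service, Ingress, etc.) may need the same treatment.

# Sketch: adopt the existing ConfigMap into the official-web release (Helm 3.2+)
kubectl -n develop annotate configmap official-web-config \
  meta.helm.sh/release-name=official-web \
  meta.helm.sh/release-namespace=develop --overwrite
kubectl -n develop label configmap official-web-config \
  app.kubernetes.io/managed-by=Helm --overwrite
# Alternatively, if the ConfigMap is safe to recreate, deleting it also clears the error:
# kubectl -n develop delete configmap official-web-config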

Helm 3 installing charts that are not enabled

I have an umbrella chart with some other charts in it. The problem is, there is a particular chart that is being installed even when it is disabled.
Here is the Chart.yaml for the umbrella chart:
- name: casa
  condtion: persistence.configmap.gluuCasaEnabled
  version: 1.0.0
Here is the persistence block:
persistence:
  configmap:
    # Auto install other services. If enabled the respective service chart will be installed
    gluuCasaEnabled: false
    gluuPassportEnabled: false
And here is the casa service's Chart.yaml:
apiVersion: v1
appVersion: "4.0.0_01"
description: A Helm chart for casa
name: casa
version: 1.0.0
maintainers:
- name: Gluu
  home: https://www.gluu.org/ee
  email: support#gluu.org
Here is how the directories look for the services:
├── Chart.yaml
├── README.md
├── charts
│   ├── casa
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── _helpers.tpl
│   │   │   ├── configmap.yaml
│   │   │   ├── deployment.yaml
│   │   │   ├── jobs.yaml
│   │   │   ├── pvc.yaml
│   │   │   ├── secrets.yaml
│   │   │   ├── service.yaml
│   │   │   └── storageclass.yaml
│   │   └── values.yaml
│   ├── config
│   │   ├── Chart.yaml
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── _helpers.tpl
│   │   │   ├── configmaps.yaml
│   │   │   └── load-init-config.yml
│   │   ├── tls_generator.py
│   │   └── values.yaml
Funny thing is, others are working as expected when disabled or enabled.
What could be the issue?
I had a typo in the condition field of the dependencies.
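For reference, the corrected dependency entry; with the key misspelled as "condtion", Helm sees no condition at all, and a dependency without a condition is enabled by default, which is why casa was always installed:

dependencies:
- name: casa
  condition: persistence.configmap.gluuCasaEnabled
  version: 1.0.0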

How do I make jupyter-hub access my private docker image repository?

I want to deploy my own image on JupyterHub. However, I need to push it to some registry so that the image puller of JHub can pull it from there. In my case, the registry is private. Although I am able to push the image to my registry, I don't know how to make the JupyterHub release and deployment able to pull the image.
I tried reading this doc (https://github.com/jupyterhub/jupyterhub-deploy-docker) but it did not help me understand how to add authentication to the JupyterHub deployment.
I deploy jhub with this command:
# Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub
helm upgrade --install $RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.8.0 \
--values jupyter-hub-config.yaml
where the jupyter-hub-config.yaml is as follows:
proxy:
  secretToken: "abcd"
singleuser:
  image:
    name: jupyter/datascience-notebook
    tag: some_tag
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
The helm chart is available here: https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz
And the tree of this helm chart is:
.
├── Chart.yaml
├── jupyter-hub-config.yaml
├── requirements.lock
├── schema.yaml
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── hub
│   │   ├── configmap.yaml
│   │   ├── deployment.yaml
│   │   ├── image-credentials-secret.yaml
│   │   ├── netpol.yaml
│   │   ├── pdb.yaml
│   │   ├── pvc.yaml
│   │   ├── rbac.yaml
│   │   ├── secret.yaml
│   │   └── service.yaml
│   ├── image-puller
│   │   ├── _daemonset-helper.yaml
│   │   ├── daemonset.yaml
│   │   ├── job.yaml
│   │   └── rbac.yaml
│   ├── ingress.yaml
│   ├── proxy
│   │   ├── autohttps
│   │   │   ├── _README.txt
│   │   │   ├── configmap-nginx.yaml
│   │   │   ├── deployment.yaml
│   │   │   ├── ingress-internal.yaml
│   │   │   ├── rbac.yaml
│   │   │   └── service.yaml
│   │   ├── deployment.yaml
│   │   ├── netpol.yaml
│   │   ├── pdb.yaml
│   │   ├── secret.yaml
│   │   └── service.yaml
│   ├── scheduling
│   │   ├── _scheduling-helpers.tpl
│   │   ├── priorityclass.yaml
│   │   ├── user-placeholder
│   │   │   ├── pdb.yaml
│   │   │   ├── priorityclass.yaml
│   │   │   └── statefulset.yaml
│   │   └── user-scheduler
│   │       ├── _helpers.tpl
│   │       ├── configmap.yaml
│   │       ├── deployment.yaml
│   │       ├── pdb.yaml
│   │       └── rbac.yaml
│   └── singleuser
│       ├── image-credentials-secret.yaml
│       └── netpol.yaml
├── test-99.py
├── validate.py
└── values.yaml
All I want to do is make JupyterHub access my private repo using secrets; I do not know how to make these credentials available to it.
An image pull secret can be used to pull an image from a private registry.
Append the following block to jupyter-hub-config.yaml:
imagePullSecret:
  enabled: true
  registry:
  username:
  email:
  password:
With these values for ECR:
username: AWS
password: the output of aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6
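Putting that together, a sketch of the relevant part of jupyter-hub-config.yaml for an image hosted in ECR; the registry URL, email and image name are placeholders, not values from the question:

imagePullSecret:
  enabled: true
  registry: <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com
  username: AWS
  email: anything@example.com
  password: "<token printed by: aws ecr get-login --region <REGION> --registry-ids <ACCOUNT> | cut -d' ' -f6>"
singleuser:
  image:
    name: <ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/my-notebook
    tag: some_tag

The chart then renders the image-credentials-secret.yaml templates visible in the tree above, which should let the image puller and single-user pods authenticate against the registry.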