Helm 3 install for resources that already exist - Kubernetes

When running helm install (Helm 3.0.2)
I got the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PodSecurityPolicy, namespace: , name: po-kube-state-metrics
But I can't find that resource, and the error doesn't tell me the namespace. How can I remove it?
When running kubectl get all --all-namespaces I see all the resources, but not po-kube-state-metrics. The same thing happens with other resources. Any idea?
I get the same error for the monitoring-grafana entity, and the result of
kubectl get PodSecurityPolicy --all-namespaces is:
monitoring-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,do

First of all, make sure you've successfully uninstalled the Helm release before reinstalling.
To list all the releases, use:
$ helm list --all --all-namespaces
To uninstall a release, use:
$ helm uninstall <release-name> -n <namespace>
You can also use --no-hooks to skip running hooks for the command:
$ helm uninstall <release-name> -n <namespace> --no-hooks
If uninstalling doesn't solve your problem, you can try the following command to clean up:
$ helm template <NAME> <CHART> --namespace <NAMESPACE> | kubectl delete -f -
Sample:
$ helm template happy-panda stable/mariadb --namespace kube-system | kubectl delete -f -
Now, try installing again.
Update:
Let's say your chart name is mon and your release name is po, and you are in the charts directory (.) as shown below:
.
├── mon
│   ├── Chart.yaml
│   ├── README.md
│   ├── templates
│   │   ├── one.yaml
│   │   ├── two.yaml
│   │   ├── three.yaml
│   │   ├── _helpers.tpl
│   │   └── NOTES.txt
│   └── values.yaml
Then you can skip the helm repo name (i.e. stable) in the helm template command. Helm will use your mon chart from the directory.
$ helm template po mon --namespace mon | kubectl delete -f -
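Also note that PodSecurityPolicy is a cluster-scoped resource, which is why the namespace field in the error is empty and why it doesn't show up in kubectl get all --all-namespaces (get all only covers a small set of namespaced resource types). If the conflicting object is a leftover from an earlier deployment, you can inspect and delete it directly; for example, using the name from the error:
$ kubectl get podsecuritypolicy po-kube-state-metrics
$ kubectl delete podsecuritypolicy po-kube-state-metrics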

I had the same issue while deploying Istio, so I ran:
kubectl get clusterrole
kubectl get clusterrolebinding
kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
kubectl delete validatingwebhookconfiguration istio-galley
kubectl delete namespace <istio-namespace>
After deleting everything and starting again, it worked.

I had the same error with CRD objects. I was using this chart from GitHub, and to prevent this error I used the --skip-crds flag. Maybe the chart you are using has something similar:
https://github.com/helm/charts/tree/master/incubator/sparkoperator#configuration

Neither --force nor the other options helped. Here is the error I was getting:
Release "develop-myrelease" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: , name: develop-myrelease, existing_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding, new_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding
So I just deleted the ClusterRoleBinding and it worked:
kubectl get clusterrolebinding | grep develop-myrelease
kubectl delete clusterrolebinding develop-myrelease
and run the deployment again.

In my case I was able to successfully upgrade my build with --force:
Mulhasans-MacBook-Pro:helm-tuts mulhasan$ helm upgrade --install --force api-streamingserver ./api-streamingserver
This helps when you are reusing the same release. If you are deploying a different release, choose different names for the conflicting resources. As of now, Helm 3.x doesn't have an option for CRDs; --skip-crds was removed in Helm 3.x.

If you are upgrading to Helm 3, ensure you can run Helm 2 and Helm 3 separately. For example:
helm2 list
helm3 list
After this, if you try to install a Helm chart with Helm 3, that error will pop up because the release already exists in Helm 2.
Use the helm-2to3 plugin to migrate to Helm 3:
https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3
I followed this exactly and had no issues.
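For reference, the migration flow from that blog post looks roughly like this (treat the linked post as authoritative; release names are placeholders):
$ helm3 plugin install https://github.com/helm/helm-2to3
$ helm3 2to3 move config            # migrate Helm v2 configuration and repositories
$ helm3 2to3 convert <release-name> # convert a single Helm v2 release to v3
$ helm3 2to3 cleanup                # remove Helm v2 data once everything is migrated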

I spent many hours on bugs that are related to the error:
Error: rendered manifests contain a resource that already exists...
I have 3 simple conclusions:
1 ) Resources from previous deployments (via kubectl or helm) might exist in the cluster.
2 ) Use an advanced administrative/debugging tool like k9s or Lens to view ALL cluster resources (instead of kubectl get / helm ls).
3 ) Usually, the resource names specified in the error are meaningful - search for them directly and see if they can be deleted (see the sketch below).
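A generic way to hunt for such leftovers, assuming the old release used the standard Helm labels (adjust the label selector to whatever your chart sets):
$ kubectl api-resources --verbs=list -o name \
    | xargs -n1 kubectl get --all-namespaces --ignore-not-found \
        -l app.kubernetes.io/instance=<release-name>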

Related

Include configmap with non-managed helm chart

I was wondering if it is possible to include a ConfigMap, with its own values.yml file, alongside a Helm chart repository that I am not managing locally. This way, I can uninstall the resource together with the chart's release.
Example:
I am using New Relic's Helm chart repository and installing the Helm charts using their repo name. I want to include a ConfigMap used for infrastructure settings in the same Helm deployment, without having to add it independently with kubectl apply.
I also want to avoid having to manage the repo locally, as I am pinning the version and other values separately from the helm upgrade --install --set triggers.
What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.
I'm using the kube-prometheus-stack helm chart, but add some more custom resources like a SecretProviderClass.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
- name: kube-prometheus-stack
  repo: https://prometheus-community.github.io/helm-charts
  version: 39.11.0
  releaseName: prometheus
  namespace: prometheus
  valuesFile: values.yaml
  includeCRDs: true

resources:
- secretproviderclass.yaml
I can then build the Kustomize yaml by running kustomize build . --enable-helm from within the same folder as where my kustomization.yaml file is.
I use this with my gitops setup, but you can use this standalone as well.
My folder structure would look something like this:
.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml
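For a standalone (non-GitOps) workflow, the rendered output can simply be piped to kubectl; a minimal example:
$ kustomize build . --enable-helm | kubectl apply -f -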
Using only Helm, without any third-party tools like Kustomize, there are two solutions:
Depend on the configurability of the chart you are using, as described by @Akshay in the other answer
Declare the chart you want to add a ConfigMap to as a dependency
You can manage the Chart dependencies in the Chart.yaml file:
# Chart.yaml
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "https://example.com/charts"
With the dependency in place, you can add your own resource files (e.g., the ConfigMap) to the chart. During Helm install, all dependencies and your custom files will be merged into a single Helm deployment.
my-nginx-chart/
├── values.yaml       # defines all values, including those for the dependencies
├── Chart.yaml        # declares the dependencies
└── templates/        # custom resources to be added on top of the dependencies
    └── configmap.yaml  # the ConfigMap you want to add
To configure values for a dependency, you need to prefix the parameters in your values.yaml:
my-configmap-value: Hello World
nginx: # <- refers to the "nginx" dependency
  image: ...
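If the ConfigMap template needs to pick up that custom value, a minimal templates/configmap.yaml might look like this (a sketch; note that the hyphenated key requires the index function in Go templates, and the ConfigMap name is just an example):
# templates/configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-settings
data:
  greeting: {{ index .Values "my-configmap-value" | quote }}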

kubectl create secret with directory structure

I've created a secret with this command for one pair of public/private keys:
kubectl create secret generic my-keys-secret --from-file=./public.key --from-file=./private.key
And used it in my pod configuration file:
spec:
  volumes:
  - name: my-keys
    secret:
      secretName: my-keys-secret
  containers:
  - ...
    volumeMounts:
    - name: my-keys
      readOnly: true
      mountPath: "/keys"
So the pod can access keys/public.key and keys/private.key.
But our new requirement is to support multiple pairs of public/private keys in this structure:
.
└── keys
    ├── 1
    │   ├── private.key
    │   └── public.key
    ├── ...
    └── n
        ├── private.key
        └── public.key
Is it possible to create the secret with the kubectl create secret generic command in the above structure? (The pod should be able to access keys/n/public.key and keys/n/private.key.)
Yes, definitely. You need what is called a generator. The best one is Kustomize. You can use it either as a standalone binary, or integrate it with kubectl.
You will simply create a kustomization.yaml file that will take certain resource directories and templates as input and generate a whole bunch of manifests as output. For multi-level directories, you will have a kustomization.yaml file per directory. These will all be consumed by the program in a sequence to generate a complete set of manifests for you. This you can directly apply to your cluster with:
kubectl apply -k .
This command assumes your current directory has a kustomization.yaml file that you want to use and it will first generate all the manifests and then apply them. If you only want to generate them and not apply them, you can --dry-run your instruction and get -o yaml output and save it to a file, like this:
kubectl apply -k . --dry-run=client -o yaml > my-secrets.yaml
This will put all of your generated secret manifests in my-secrets.yaml which you can check for correctness when templating.
You can read more about templatization options on the kustomize docs here. It's very intuitive and simple to use :)
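As a minimal sketch for the key-pair layout above, assuming the pairs live under keys/1 and keys/2 and you generate one Secret per pair (each Secret can then be mounted at keys/1, keys/2, ... in the pod spec):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

secretGenerator:
- name: my-keys-1
  files:
  - keys/1/public.key
  - keys/1/private.key
- name: my-keys-2
  files:
  - keys/2/public.key
  - keys/2/private.key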

How to upgrade at once multiple releases with Helm for the same chart

I have multiple apps based on the same chart deployed with Helm. Let's imagine you deploy your app multiple times with different configurations:
helm install myapp-01 mycharts/myapp
helm install myapp-02 mycharts/myapp
helm install myapp-03 mycharts/myapp
And after I update the chart files, I want to update all the releases, or maybe a certain range of releases. I managed to create a PoC script like this:
helm list -f myapp -o json | jq -r '.[].name' | while read i; do helm upgrade ${i} mycharts/myapp; done
While this works, I would need to do a lot more to get full functionality and error control.
Is there any CLI tool or something I can use in a CI/CD environment to update a big number of releases (say hundreds of them)? I've been investigating Rancher and Autohelm, but I couldn't find such functionality.
Thanks to the tip provided by @Jonas, I've managed to create a simple structure to deploy and update lots of pods with the same image base.
I created a folder structure like this:
├── kustomization.yaml
├── base
│ ├── deployment.yaml
│ ├── kustomization.yaml
│ ├── namespace.yaml
│ └── service.yaml
└── overlays
├── one
│ ├── deployment.yaml
│ └── kustomization.yaml
└── two
├── deployment.yaml
└── kustomization.yaml
So the main trick here is to have a kustomization.yaml file in the main folder that points to every app:
resources:
- overlays/one
- overlays/two
namePrefix: winnp-
Then in the base/kustomization.yaml I point to the base files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
- namespace.yaml
And then in each app I use namespaces, suffixes and commonLabels for the deployments and services, and a patch to rename the base namespace:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ns-one
nameSuffix: "-one"
commonLabels:
  app: vbserver-one

bases:
- ../../base

patchesStrategicMerge:
- deployment.yaml

patches:
- target:
    version: v1 # apiVersion
    kind: Namespace
    name: base
  patch: |-
    - op: replace
      path: /metadata/name
      value: ns-one
Now, with a simple command I can deploy or modify all the apps:
kubectl apply -k .
So to update the image I only have to change the deployment.yaml file with the new image and run the command again.
I uploaded a full example of what I did in this GitHub repo
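As an alternative to editing deployment.yaml by hand, Kustomize can also rewrite the image tag via an images entry in the top-level kustomization.yaml; a sketch, assuming the base deployment uses an image called myregistry/myapp:
images:
- name: myregistry/myapp   # image name as it appears in the base deployment
  newTag: "2.0.1"          # tag to roll out to every overlay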

How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes

I've found a documentation about how to configure your NginX ingress controller using ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Unfortunately, I have no idea, and couldn't find it anywhere, how to load that ConfigMap from my Ingress controller.
My ingress controller:
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
My config map:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-configmap
data:
  proxy-read-timeout: "86400s"
  client-max-body-size: "2g"
  use-http2: "false"
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - my.endpoint.net
    secretName: ingress-tls
  rules:
  - host: my.endpoint.net
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 443
      - path: /api
        backend:
          serviceName: api
          servicePort: 443
How do I make my Ingress load the configuration from the ConfigMap?
I've managed to display what YAML gets applied by Helm by adding the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with: --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller.
In order to load your ConfigMap you need to override it with your own (note the namespace):
kind: ConfigMap
apiVersion: v1
metadata:
  name: {name-of-the-helm-chart}-nginx-ingress-controller
  namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"
  use-http2: "false"
The list of config properties can be found here.
One can pass config map properties at the time of installation too:
helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'
NOTE: for non-string values I had to use single quotes around double quotes to get it working.
If you used helm install to install the ingress-nginx chart and no explicit value was passed for which ConfigMap the nginx controller should look at, the default value seems to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by https://github.com/helm/charts/blob/1e074fc79d0f2ee085ea75bf9bacca9115633fa9/stable/nginx-ingress/templates/controller-deployment.yaml#L67 (see similar if it's a dead link).
To verify for yourself, try to find your command that you installed the ingress-nginx chart with, and add --dry-run --debug to the command. This will show you the yaml files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller deployment which has an arg of --configmap=. The value of this arg is what needs to be the name of the ConfigMap for the controller to sense, and use to update its own .conf file. This could be passed explicitly, but if it is not, it will have a default value.
If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.
This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.
I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.
Cheers, thanks for your post.
If you want to give your own configuration while deploying nginx-ingress-controller, you can have a wrapper Helm chart over the original nginx-ingress Helm chart and provide your own values.yaml which can have custom configuration.
Using Helm 3 here.
Create a chart:
$ helm create custom-nginx
$ tree custom-nginx
So my chart structure looks like this:
custom-nginx/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
There are a few extra things here. Specifically, I don't need the complete templates/ directory and its contents, so I'll just remove those:
$ rm custom-nginx/templates/*
$ rmdir custom-nginx/templates
Now, the chart structure should look like this:
custom-nginx/
├── Chart.yaml
├── charts
└── values.yaml
Since we have to include the original nginx-ingress chart as a dependency, my Chart.yaml looks like this:
$ cat custom-nginx/Chart.yaml
apiVersion: v2
name: custom-nginx
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.39.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.32.0
dependencies:
  - name: nginx-ingress
    version: 1.39.1
    repository: https://kubernetes-charts.storage.googleapis.com/
Here, appVersion is the nginx-controller docker image version and version matches with the nginx-ingress chart version that I am using.
The only thing left is to provide your custom configuration. Here is a stripped-down version of my custom configuration:
$ cat custom-nginx/values.yaml
# Default values for custom-nginx.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nginx-ingress:
  controller:
    ingressClass: internal-nginx
    replicaCount: 1
    service:
      externalTrafficPolicy: Local
    publishService:
      enabled: true
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: "80"
      targetMemoryUtilizationPercentage: "80"
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 1
        memory: 2Gi
    metrics:
      enabled: true
    config:
      compute-full-forwarded-for: "true"
We can check the keys that are available to use as configuration (config section in values.yaml) in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
And the rest of the configuration can be found here: https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration
Once configurations are set, just download the dependency of your chart:
$ helm dependency update <path/to/chart>
It's a good practice to do basic checks on your chart before deploying it:
$ helm lint <path/to/chart>
$ helm install --debug --dry-run --namespace <namespace> <release-name> <path/to/chart>
Then deploy your chart (which will deploy your nginx-ingress-controller with your own custom configurations).
Also, since you've a chart now, you can upgrade and rollback your chart.
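For example, upgrades and rollbacks of the wrapper chart look roughly like this:
$ helm upgrade <release-name> <path/to/chart> --namespace <namespace>
$ helm history <release-name> --namespace <namespace>
$ helm rollback <release-name> <revision> --namespace <namespace>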
When installing the chart through terraform, the configuration values can be set as shown below:
resource "helm_release" "ingress_nginx" {
name = "nginx"
repository = "https://kubernetes.github.io/ingress-nginx/"
chart = "ingress-nginx"
set {
name = "version"
value = "v4.0.2"
}
set {
name = "controller.config.proxy-read-timeout"
value = "86400s"
}
set {
name = "controller.config.client-max-body-size"
value = "2g"
}
set {
name = "controller.config.use-http2"
value = "false"
}
}
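If many keys need to be set, the Terraform Helm provider also accepts whole values files via the values argument; a sketch, assuming a local values.yaml next to the module:
resource "helm_release" "ingress_nginx" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx/"
  chart      = "ingress-nginx"

  # pass an entire values file instead of individual set {} blocks
  values = [file("${path.module}/values.yaml")]
}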
Just to confirm @NeverEndingQueue's answer above, the name of the config map is present in the nginx-controller pod spec itself, so if you inspect the yaml of the nginx-controller pod: kubectl get po release-name-nginx-ingress-controller-random-sequence -o yaml, under spec.containers, you will find something like:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=default/release-name-nginx-ingress-default-backend
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx
  - --configmap=default/release-name-nginx-ingress-controller
For example here, a config map named release-name-nginx-ingress-controller in the namespace default needs to be created.
Once done, you can verify if the changes have taken place by checking the logs. Normally, you will see something like:
I1116 10:35:45.174127 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"release-name-nginx-ingress-controller", UID:"76819abf-4df0-41e3-a3fe-25445e754f32", APIVersion:"v1", ResourceVersion:"62559702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/release-name-nginx-ingress-controller
I1116 10:35:45.184627 6 controller.go:141] Configuration changes detected, backend reload required.
I1116 10:35:45.396920 6 controller.go:157] Backend successfully reloaded.
When you apply a ConfigMap with the needed key-value data, the Ingress controller picks up this information and inserts it into the nginx-ingress-controller Pod's original configuration file /etc/nginx/nginx.conf. It is therefore easy to verify whether the ConfigMap's values have been successfully reflected by checking the actual nginx.conf inside the corresponding Pod.
You can also check the logs from the relevant nginx-ingress-controller Pod to see whether the ConfigMap data has been reloaded into the backend nginx.conf, or, if not, to investigate the reason.
Using enable-underscores-in-headers=true worked for me, not enable-underscores-in-headers='"true"':
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.config.enable-underscores-in-headers=true
I managed to update large-client-header-buffers in nginx via a ConfigMap. Here are the steps I followed.
Find the ConfigMap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a configmap yaml
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: replace the namespace and ConfigMap name with what you found in step 1.
Deploy the ConfigMap YAML:
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then you will see the change reflected in the nginx controller pod after a few minutes, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Based on NeverEndingQueue's answer, I want to provide an update for Kubernetes v1.23 / Helm 3.
This is my installation command + --dry-run --debug part:
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --dry-run --debug
This is the part we need from the generated output of the command above:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        ...
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --...
        - --configmap=${POD_NAMESPACE}/ingress-nginx-controller
        - --...
      ...
We need this part: --configmap=${POD_NAMESPACE}/ingress-nginx-controller.
As you can see, the name of the ConfigMap must be ingress-nginx-controller and the namespace must be the one you used during chart installation (i.e. {POD_NAMESPACE}; in my example above this is --namespace ingress-nginx).
# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
Then run kubectl apply -f nginx-config.yaml to apply the ConfigMap, and nginx's pod(s) will automatically reload the updated config.
To check that the nginx config has been updated, find the name of an nginx pod (any one will do if you have several nodes): kubectl get pods -n ingress-nginx (or kubectl get pods -A)
and then check the config: kubectl exec -it ingress-nginx-controller-{generatedByKubernetesId} -n ingress-nginx cat /etc/nginx/nginx.conf
UPDATE:
The correct name (i.e. name: ingress-nginx-controller) is shown in the official docs. Conclusion: no need to reinvent the wheel.
What you have is an Ingress YAML, not an Ingress controller deployment YAML. The Ingress controller is the Pod that actually does the work and is usually an nginx container itself. An example of such a configuration can be found in the documentation you shared.
UPDATE
Using the example provided, you can also load config into nginx from a ConfigMap in the following way:
volumeMounts:
- name: nginx-config
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf
volumes:
- name: nginx-config
  configMap:
    name: nginx-config
nginx-config contains your nginx configuration as part of the ConfigMap.
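Such a ConfigMap can be created directly from an nginx.conf file, for example:
$ kubectl create configmap nginx-config --from-file=nginx.conf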
I read the above answers but could not make it work.
What worked for me was the following:
release_name=tcp-udp-ic
# add the helm repo from NginX and update the chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
echo "- Installing -${release_name}- into cluster ..."
# delete the config map if it already exists
kubectl delete cm tcp-udp-ic-cm
helm del --purge ${release_name}
helm upgrade --install ${release_name} \
--set controller.image.tag=1.6.0 \
--set controller.config.name=tcp-udp-ic-cm \
nginx-stable/nginx-ingress --version 0.4.0 #--dry-run --debug
# update the /etc/nginx/nginx.conf file with my attributes, via the config map
kubectl apply -f tcp-udp-ic-cm.yaml
and the tcp-udp-ic-cm.yaml is :
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-udp-ic-cm
  namespace: default
data:
  worker-connections: "10000"
Essentially I need to deploy the release with Helm and set the name of the ConfigMap that it is going to use. Helm creates the ConfigMap, but empty. Then I apply the ConfigMap file in order to update the ConfigMap resource with my values. This sequence is the only one I could make work.
An easier way of doing this is just modifying the values that are deployed through Helm. The values that need to be changed to end up in the ConfigMap are now under controller.config.entries. Show the latest values with helm show values nginx-stable/nginx-ingress and look for the format of the version you are running.
I had tons of issues with this, since all references online said controller.config, until I checked with the command above.
After you've entered the values, upgrade with:
helm upgrade -f <PATH_TO_FILE>.yaml <NAME> nginx-stable/nginx-ingress
The nginx ingress controller may cause issues with forwarding. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted.
Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
Keycloak v18 with --proxy edge
annotations:
  kubernetes.io/ingress.class: haproxy
  ...

Adjusting Kubernetes configurations depending on environment

I want to describe my services in Kubernetes template files. Is it possible to parameterise values like the number of replicas, so that I can set this at deploy time?
The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.
I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.
Helm
Helm is becoming the standard for templating Kubernetes deployments. A Helm chart is a directory of YAML files with Go template placeholders:
---
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
You define the default for a value in the values.yaml file:
replicaCount: 1
You can optionally override the value on the command line using --set:
helm install foo --set replicaCount=42
Helm can also point to an external values file:
helm install foo -f ./dev.yaml
helm install foo -f ./prod.yaml
dev.yaml
---
replicaCount: 1
prod.yaml
---
replicaCount: 42
Another advantage of Helm over simpler solutions like envsubst is that Helm supports plugins. One powerful plugin is helm-secrets, which lets you encrypt sensitive data using PGP keys. https://github.com/futuresimple/helm-secrets
If you use Helm + helm-secrets, your setup may look like the following, where your code is in one repo and your data is in another.
git repo with helm charts
stable
|__mysql
|__Values.yaml
|__Charts
|__apache
|__Values.yaml
|__Charts
incubator
|__mysql
|__Values.yaml
|__Charts
|__apache
|__Values.yaml
|__Charts
Then in another git repo that contains the environment specific data
values
|__ mysql
|__dev
|__values.yaml
|__secrets.yaml
|__prod
|__values.yaml
|__secrets.yaml
You then have a wrapper script that references the values and the secrets files
helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml
envsubst
As mentioned in other answers, envsubst is a very powerful yet simple way to make your own templates. An example from kiminehart
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: ${GOOS}
GOOS=amd64 envsubst < mytemplate.tmpl > mydeployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: amd64
Kubectl
There is a feature request to allow kubectl to support some of the same features as Helm and allow for variable substitution. There is a background document that strongly suggests the feature will never be added, and that it is instead up to external tools like Helm and envsubst to manage templating.
(edit)
Kustomize
Kustomize is a newer project developed by Google that is very similar to Helm. Basically you have two folders, base and overlays. You then run kustomize build someapp/overlays/production and it will generate the YAML for that environment.
someapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── configMap.yaml
│   └── service.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── staging/
        ├── kustomization.yaml
        └── cpu_count.yaml
It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine kustomize with sops or envsubst to manage secrets.
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
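For illustration, a production overlay that only bumps the replica count could look like this (a sketch; the Deployment name someapp is a placeholder and must match the name used in the base):
# someapp/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- replica_count.yaml

# someapp/overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp
spec:
  replicas: 5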
I'm hoping someone will give me a better answer, but in the meantime, you can feed your configuration through envsubst (see gettext and this for mac).
Example config, test.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${NUM_REPLICAS}
  ...
Then run:
$ NUM_REPLICAS=2 envsubst < test.yaml | kubectl apply -f -
deployment "test" configured
The final dash is required. This doesn't solve the problem with volumes of course, but it helps a little. You could write a script or Makefile to automate this for each environment; see the sketch below.
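A tiny wrapper script keeps this manageable per environment, for example:
#!/bin/sh
# deploy.sh <num-replicas> - substitute and apply (assumes the test.yaml above)
NUM_REPLICAS="$1" envsubst < test.yaml | kubectl apply -f -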