Adjusting Kubernetes configurations depending on environment - kubernetes

I want to describe my services in Kubernetes template files. Is it possible to parameterise values like the number of replicas, so that I can set them at deploy time?
The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.
I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.

Helm
Helm is becoming the standard for templating Kubernetes deployments. A Helm chart is a directory of YAML files containing Go template placeholders:
---
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
You define the default for each value in the chart's values.yaml file:
replicaCount: 1
You can optionally override the value with the --set flag on the command line:
helm install foo --set replicaCount=42
Helm can also read values from an external values file:
helm install foo -f ./dev.yaml
helm install foo -f ./prod.yaml
dev.yaml
---
replicaCount: 1
prod.yaml
---
replicaCount: 42
Another advantage of Helm over simpler solutions like envsubst is that Helm supports plugins. One powerful plugin is helm-secrets, which lets you encrypt sensitive data using PGP keys. https://github.com/futuresimple/helm-secrets
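Day-to-day use of helm-secrets looks roughly like this (a sketch; the file path is just an illustration, and it assumes sops is already configured with your PGP key, typically via a .sops.yaml file):
helm secrets enc ./values/mysql/dev/secrets.yaml   # encrypt the file in place
helm secrets view ./values/mysql/dev/secrets.yaml  # print the decrypted contents
helm secrets dec ./values/mysql/dev/secrets.yaml   # write a decrypted copy for editing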
If using helm + helm-secrets your setup may look like the following where your code is in one repo and your data is in another.
git repo with helm charts
stable
|__mysql
|  |__values.yaml
|  |__charts
|__apache
|  |__values.yaml
|  |__charts
incubator
|__mysql
|  |__values.yaml
|  |__charts
|__apache
|  |__values.yaml
|  |__charts
Then, in another git repo, you keep the environment-specific data:
values
|__mysql
   |__dev
   |  |__values.yaml
   |  |__secrets.yaml
   |__prod
      |__values.yaml
      |__secrets.yaml
You then have a wrapper script that references the values and the secrets files
helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml
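A minimal wrapper script might look like this (a sketch; the chart location is hypothetical, and the app/environment layout is assumed from the example above):
#!/usr/bin/env bash
# usage: ./deploy.sh <app> <environment>   e.g. ./deploy.sh mysql dev
set -euo pipefail
app="$1"
environment="$2"
# chart location is hypothetical; point it at your chart repo or a local path
helm secrets upgrade "$app" "./charts/$app" --install \
  -f "./values/$app/$environment/values.yaml" \
  -f "./values/$app/$environment/secrets.yaml"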
envsubst
As mentioned in other answers, envsubst is a very powerful yet simple way to make your own templates. An example from kiminehart
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: ${GOOS}
GOOS=amd64 envsubst < mytemplate.tmpl > mydeployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: amd64
Kubectl
There is a feature request to allow kubectl to offer some of the same features as Helm and allow for variable substitution. There is a background document that strongly suggests the feature will never be added, and that templating is instead left to external tools like Helm and envsubst.
(edit)
Kustomize
Kustomize is a newer project developed by Google that is very similar to Helm. Basically you have two folders, base and overlays. You then run kustomize build someapp/overlays/production and it will generate the YAML for that environment.
someapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── configMap.yaml
│   └── service.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── staging/
        ├── kustomization.yaml
        └── cpu_count.yaml
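For illustration, the production overlay files above might contain something like this (a sketch; the Deployment name and replica count are made up):
# someapp/overlays/production/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replica_count.yaml

# someapp/overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp   # must match the Deployment name in base/deployment.yaml (hypothetical here)
spec:
  replicas: 10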
It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine kustomize with sops or envsubst to manage secrets.
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/

I'm hoping someone will give me a better answer, but in the meantime, you can feed your configuration through envsubst (see gettext and this for mac).
Example config, test.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${NUM_REPLICAS}
...
Then run:
$ NUM_REPLICAS=2 envsubst < test.yaml | kubectl apply -f -
deployment "test" configured
The final dash is required. This doesn't solve the problem with volumes, of course, but it helps a little. You could write a script or makefile to automate this per environment.
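Such a per-environment wrapper might look like this (a sketch; the variable name and file layout follow the example above, everything else is an assumption):
#!/usr/bin/env bash
# usage: ./deploy.sh dev|staging|prod
set -euo pipefail
case "$1" in
  dev)     export NUM_REPLICAS=1 ;;
  staging) export NUM_REPLICAS=3 ;;
  prod)    export NUM_REPLICAS=10 ;;
  *) echo "unknown environment: $1" >&2; exit 1 ;;
esac
envsubst < test.yaml | kubectl apply -f -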

Related

Include configmap with non-managed helm chart

I was wondering if it is possible to include a ConfigMap, with its own values.yml file, alongside a Helm chart from a repository that I am not managing locally. That way, I can uninstall the resource together with the chart release.
Example:
I am using New Relic's Helm chart repository and installing the charts using their repo name. I want to include a ConfigMap used for infrastructure settings in the same Helm deployment, without having to add it independently with kubectl apply.
I also want to avoid having to manage the repo locally, as I am pinning the chart version and other values separately from the helm upgrade --install --set triggers.
What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.
I'm using the kube-prometheus-stack helm chart, but add some more custom resources like a SecretProviderClass.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: kube-prometheus-stack
  repo: https://prometheus-community.github.io/helm-charts
  version: 39.11.0
  releaseName: prometheus
  namespace: prometheus
  valuesFile: values.yaml
  includeCRDs: true
resources:
- secretproviderclass.yaml
I can then build the Kustomize yaml by running kustomize build . --enable-helm from within the same folder as where my kustomization.yaml file is.
I use this with my gitops setup, but you can use this standalone as well.
My folder structure would look something like this:
.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml
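If you are not using a GitOps tool and want to apply the output directly, a pipeline like this works (a sketch of the same build command piped into kubectl):
kustomize build . --enable-helm | kubectl apply -f -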
Using only Helm, without any 3rd-party tools like Kustomize, there are two solutions:
Depend on the configurability of the Chart you are using as described by #Akshay in the other answer
Declare the Chart you are looking to add a ConfigMap to as a dependency
You can manage the Chart dependencies in the Chart.yaml file:
# Chart.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "https://example.com/charts"
With the dependency in place, you can add your own resource files (e.g., the ConfigMap) to the chart. During Helm install, all dependencies and your custom files will be merged into a single Helm deployment.
my-nginx-chart/
  values.yaml        # defines all values including the dependencies
  Chart.yaml         # declares the dependencies
  templates/         # custom resources to be added on top of the dependencies
    configmap.yaml   # the configmap you want to add
To configure values for a dependency, you need to prefix the parameters in your values.yaml:
my-configmap-value: Hello World
nginx: # <- refers to the "nginx" dependency
  image: ...
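The ConfigMap template in templates/ can then consume the top-level value (a sketch; the ConfigMap name and data key are made up, and index is used because the value name contains hyphens):
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-infra-settings
data:
  greeting: {{ index .Values "my-configmap-value" | quote }}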

How to upgrade multiple releases of the same chart at once with Helm

I have multiple apps based on the same chart deployed with Helm. Let's imagine you deploy your app multiple times with different configurations:
helm install myapp-01 mycharts/myapp
helm install myapp-02 mycharts/myapp
helm install myapp-03 mycharts/myapp
And after I update the chart files, I want to update all the releases, or maybe a certain range of releases. I managed to create a PoC script like this:
helm list -f myapp -o json | jq -r '.[].name' | while read i; do helm upgrade ${i} mycharts/myapp; done
While this works, I would need to do a lot more to get full functionality and error control.
Is there any CLI tool or something I can use in a CI/CD environment to update a big number of releases (say hundreds of them)? I've been investigating Rancher and Autohelm, but I couldn't find such functionality.
Thanks to the tip provided by #Jonas I've managed to create a simple structure to deploy and update lots of pods with the same image base.
I created a folder structure like this:
├── kustomization.yaml
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   └── service.yaml
└── overlays
    ├── one
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── two
        ├── deployment.yaml
        └── kustomization.yaml
So the main trick here is to have a kustomization.yaml file in the main folder that points to every app:
resources:
- overlays/one
- overlays/two
namePrefix: winnp-
Then in the base/kustomization.yaml I point to the base files:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
- namespace.yaml
And then in each app I use namespaces, suffixes, and commonLabels for the deployments and services, plus a patch to rename the base namespace:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns-one
nameSuffix: "-one"
commonLabels:
  app: vbserver-one
bases:
- ../../base
patchesStrategicMerge:
- deployment.yaml
patches:
- target:
    version: v1 # apiVersion
    kind: Namespace
    name: base
  patch: |-
    - op: replace
      path: /metadata/name
      value: ns-one
Now, with a simple command I can deploy or modify all the apps:
kubectl apply -k .
So to update the image I only have to change the deployment.yaml file with the new image and run the command again.
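Alternatively, instead of editing deployment.yaml by hand, the image tag can be bumped for all overlays with Kustomize's images field in the top-level kustomization.yaml (a sketch; the image name is made up):
images:
- name: myrepo/vbserver
  newTag: "2.0.1"
You can also set this from the command line with something like kustomize edit set image myrepo/vbserver:2.0.1 run inside that folder.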
I uploaded a full example of what I did in this GitHub repo

Exclude Resource in kustomization.yaml

I have a kustomize base that I'd like to re-use without editing it. Unfortunately, it creates a namespace I don't want to create. I'd like to simply remove that resource from consideration when compiling the manifests and add my own resource instead, since I can't patch a namespace to change its name.
Can this be done? How?
You can omit a specific resource by using the $patch: delete directive of a strategic merge patch, like this.
Folder structure
$ tree .
.
├── base
│   ├── kustomization.yaml
│   └── namespace.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        ├── delete-ns-b.yaml
        └── kustomization.yaml
File content
$ cat base/kustomization.yaml
resources:
- namespace.yaml
$ cat base/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
$ cat overlays/prod/delete-ns-b.yaml
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ cat overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- delete-ns-b.yaml
Behavior of kustomize
$ kustomize build overlays/dev
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
$ kustomize build overlays/prod
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
In this case, we have two namespaces in the base folder. In dev, kustomize produces two namespaces because there is no patch. But in prod, kustomize produces only one namespace, because the delete patch removes namespace ns-b.
I found that my assumption that a namespace name cannot be changed was incorrect. Using the patch capability, you can in fact change the name of a resource, including namespaces.
This is what I ended up using:
patches:
- target:
    kind: Namespace
    name: application
  patch: |-
    - op: replace
      path: /metadata/name
      value: my-application
I encountered this problem and eventually took a different approach to solving it. It's worth thinking back through your requirements and asking yourself why you want kustomize to omit a resource. In my case, and I would imagine this is the most common use case, I wanted kustomize to omit a resource because I didn't want to apply it to the target Kubernetes cluster, but kustomize doesn't provide an easy way to do this. Would it not be better for the filtering to take place when applying the resources to the cluster rather than when generating them? The solution I eventually applied was to filter the resources by label when applying to the cluster: you can add an exclusion label in an overlay to prevent the resource from being applied.
e.g.
$ kustomize build . | kubectl apply -l apply-resource!=no -f -
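For instance, a strategic merge patch in the overlay could attach the exclusion label to the unwanted resource (a sketch, reusing the ns-b namespace and prod overlay from the earlier answer; the file name and label are made up):
# overlays/prod/exclude-ns-b.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b
  labels:
    apply-resource: "no"

# overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- exclude-ns-b.yaml
The kubectl apply -l apply-resource!=no filter above then skips anything carrying that label.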

Helm 3 install for resources that already exist

When running helm install (Helm 3.0.2) I got the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PodSecurityPolicy, namespace: , name: po-kube-state-metrics
But I can't find that resource, and the error doesn't tell me its namespace. How can I remove it?
When running kubectl get all --all-namespaces I see all the resources, but not po-kube-state-metrics... It also happens with other resources. Any idea?
I got the same error for the monitoring-grafana entity, and the result of kubectl get PodSecurityPolicy --all-namespaces is:
monitoring-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,do
First of all, you need to make sure you've successfully uninstalled the Helm release before reinstalling.
To list all the releases, use:
$ helm list --all --all-namespaces
To uninstall a release, use:
$ helm uninstall <release-name> -n <namespace>
You can also use --no-hooks to skip running hooks for the command:
$ helm uninstall <release-name> -n <namespace> --no-hooks
If uninstalling doesn't solve your problem, you can try the following command to cleanup:
$ helm template <NAME> <CHART> --namespace <NAMESPACE> | kubectl delete -f -
Sample:
$ helm template happy-panda stable/mariadb --namespace kube-system | kubectl delete -f -
Now, try installing again.
Update:
Let's say your chart name is mon and your release name is po, and that you are in the charts directory (.), like below:
.
├── mon
│   ├── Chart.yaml
│   ├── README.md
│   ├── templates
│   │   ├── one.yaml
│   │   ├── two.yaml
│   │   ├── three.yaml
│   │   ├── _helpers.tpl
│   │   └── NOTES.txt
│   └── values.yaml
Then you can skip the helm repo name (i.e. stable) in the helm template command. Helm will use your mon chart from the directory.
$ helm template po mon --namespace mon | kubectl delete -f -
I've got the same issue while deploying Istio. So I did:
kubectl get clusterrole
kubectl get clusterrolebinding
kubectl delete mutatingwebhookconfiguration istio-sidecar-injector
kubectl delete validatingwebhookconfiguration istio-galley
kubectl delete namespace <istio-namespace>
After deleting them all and starting over, it worked.
I had the same error with CRD objects. I was using this chart from GitHub, and to prevent this error I used the --skip-crds flag. Maybe the project that you are using has something like this:
https://github.com/helm/charts/tree/master/incubator/sparkoperator#configuration
Neither --force nor the other options helped. Here is the error I was getting:
Release "develop-myrelease" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: , name: develop-myrelease, existing_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding, new_kind: rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding
So I just deleted the ClusterRoleBinding and it worked:
kubectl get clusterrolebinding | grep develop-myrelease
kubectl delete clusterrolebinding develop-myrelease
and run the deployment again.
In my case I was able to successfully upgrade my build with --force:
Mulhasans-MacBook-Pro:helm-tuts mulhasan$ helm upgrade --install --force api-streamingserver ./api-streamingserver
This helps when reinstalling the same release. If you are installing a different release, choose a different name for the conflicting resources. As of now, Helm v3.x doesn't have an option for CRDs; --skip-crds is removed in Helm v3.x.
If you are upgrading to Helm 3, ensure you can run Helm 2 and Helm 3 separately. Example:
helm2 list
helm3 list
After this, if you try to install a chart with Helm 3, that error will pop up because the release already exists in Helm 2.
Use the helm 2to3 plugin to upgrade to Helm 3:
https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3
I followed this exactly and had no issues.
I spent many hours on bugs related to the error:
Error: rendered manifests contain a resource that already exists...
I have 3 simple conclusions:
1) Resources from previous deployments (via kubectl or helm) might still exist in the cluster.
2) Use an advanced administrative/debugging tool like k9s or Lens to view ALL cluster resources (instead of kubectl get / helm ls).
3) Usually, the resource names specified in the error are meaningful - search directly for them and see if they can be deleted.
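For example, a quick way to hunt for the leftover resource named in the error (a sketch; the resource name is taken from the error above, and the second command sweeps every API resource type, which is slow but thorough):
kubectl get podsecuritypolicy,clusterrole,clusterrolebinding | grep po-kube-state-metrics
kubectl api-resources --verbs=list -o name | xargs -n1 kubectl get -A --ignore-not-found --show-kind | grep po-kube-state-metrics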

How to override a namespace override

In the following scenario I have my containers defined in ../base/.
In this /dev/ directory I want to start all the deployments and statefulsets in namespace dev.
The rub is that I also want to run the local-path-storage CSI in the local-path-storage namespace. kustomize will override it and create it in the "dev" namespace.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base
resources:
- local-path-storage.yaml
How can I undo the namespace override for just local-path-storage.yaml?
This functionality doesn't exist in Kustomize yet. There's an open issue addressing this, but no open PRs at the time of this writing.
The quickest solution here is to remove the namespace setting from dev/kustomization.yaml and hand-set the namespace in all resources in dev.
Another option, shamelessly copied from the issue I cited earlier, is to create a transformer to get around this:
#!/usr/bin/env /usr/bin/python3
import sys
import yaml

with open(sys.argv[1], "r") as stream:
    try:
        data = yaml.safe_load(stream)
    except yaml.YAMLError as exc:
        print("Error parsing NamespaceTransformer input", file=sys.stderr)

# See kubectl api-resources --namespaced=false
denylist = [
    "ComponentStatus",
    "Namespace",
    "Node",
    "PersistentVolume",
    "MutatingWebhookConfiguration",
    "ValidatingWebhookConfiguration",
    "CustomResourceDefinition",
    "APIService",
    "MeshPolicy",
    "TokenReview",
    "SelfSubjectAccessReview",
    "SelfSubjectRulesReview",
    "SubjectAccessReview",
    "CertificateSigningRequest",
    "ClusterIssuer",
    "BGPConfiguration",
    "ClusterInformation",
    "FelixConfiguration",
    "GlobalBGPConfig",
    "GlobalFelixConfig",
    "GlobalNetworkPolicy",
    "GlobalNetworkSet",
    "HostEndpoint",
    "IPPool",
    "PodSecurityPolicy",
    "NodeMetrics",
    "ClusterRoleBinding",
    "ClusterRole",
    "ClusterRbacConfig",
    "PriorityClass",
    "StorageClass",
    "VolumeAttachment",
]

try:
    for yaml_input in yaml.safe_load_all(sys.stdin):
        # Only set the namespace on namespaced resources that don't already have one
        if yaml_input["kind"] not in denylist:
            if "namespace" not in yaml_input["metadata"]:
                yaml_input["metadata"]["namespace"] = data["namespace"]
        print("---")
        print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
    print("Error parsing YAML input\n\n%s\n\n" % input, file=sys.stderr)
Unfortunately it is not possible: the namespace override in the kustomization assumes all resources belong to the same namespace.
Your alternatives are:
Create a separate kustomization for resources that do not belong to the same namespace.
Deploy resources that do not need kustomization with a plain kubectl apply -f.
Use an alternative replacement approach like the one suggested by Eric Staples.
I generally create one kustomization per set of resources that are deployed together in a namespace, to keep the kustomization simple and independent of any other resources.
It is possible since kustomize 4.5.6 by adding a namespaceTransformer. You want to set the field unsetOnly to true.
Here is an example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
resources:
- local-path-storage.yaml
transformers:
- |-
  apiVersion: builtin
  kind: NamespaceTransformer
  metadata:
    name: notImportantHere
    namespace: dev
  unsetOnly: true
This should set the namespace to dev for all resources that DO NOT have a namespace set.
Link to namespaceTransformer spec: https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_namespacetransformer_
I am faced with the same problem. My approach is to break the deployment up into multiple steps, with stepone and steptwo folders.
tree ./project/
./project/
├── stepone
│   ├── base
│   └── overlay
└── steptwo
    ├── base
    └── overlay
Now I can move the part of the deployment that should not have the namespace override into steptwo, or vice versa, depending on your deployment needs.
I am working on complex transitions from a Helm template that outputs over 200 files.
I am simply breaking the deployment up into different steps and using kustomize at each step to manage just the portion of the deployment where isolation is required.
It does add some effort, but it still gives the isolation I need until kustomize finds a good way to handle this complexity of the namespace override. This takes #Diego-mendes's answer and encapsulates the different parts into their own folders.