Kubernetes deployment is missing Kustomize's hash suffixes

I'm new to Kubernetes. In my project I'm trying to use Kustomize to generate configMaps for my deployment. Kustomize adds a hash after the configMap name, but I can't get it to also change the deployment to use that new configMap name.
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-receiver-deployment
  labels:
    app: env-receiver-app
    project: env-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-receiver-app
  template:
    metadata:
      labels:
        app: env-receiver-app
        project: env-project
    spec:
      containers:
        - name: env-receiver-container
          image: eu.gcr.io/influxdb-241011/env-receiver:latest
          resources: {}
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: env-receiver-config
          args: [ "-port=$(ER_PORT)", "-dbaddr=$(ER_DBADDR)", "-dbuser=$(ER_DBUSER)", "-dbpass=$(ER_DBPASS)" ]
kustomize.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: env-receiver-config
    literals:
      - ER_PORT=8080
      - ER_DBADDR=http://localhost:8086
      - ER_DBUSER=writeuser
      - ER_DBPASS=writeuser
Then I run Kustomize, apply the deployment, and check whether the environment was applied.
$ kubectl apply -k .
configmap/env-receiver-config-258g858mgg created
$ kubectl apply -f k8s/deployment.yml
deployment.apps/env-receiver-deployment unchanged
$ kubectl describe pod env-receiver-deployment-76c678dcf-5r2hl
Name: env-receiver-deployment-76c678dcf-5r2hl
[...]
Environment Variables from:
env-receiver-config ConfigMap Optional: false
Environment: <none>
[...]
But it still gets its environment variables from: env-receiver-config, not env-receiver-config-258g858mgg.
My current workaround is to disable the hash suffixes in the kustomize.yml.
generatorOptions:
  disableNameSuffixHash: true
It looks like I'm missing a step to tell the deployment the name of the new configMap. What is it?

It looks like the problem comes from the fact that you generate the ConfigMap through Kustomize but apply the deployment with kubectl directly, without going through Kustomize.
Basically, Kustomize looks for every reference to env-receiver-config in your resources and replaces it with the hash-suffixed name.
For this to work, all of your resources have to go through Kustomize.
To do so, add your deployment to the resources list in your kustomization.yml:
resources:
  - yourDeployment.yml
and then just run kubectl apply -k . again. It should create both the ConfigMap and the Deployment, with the Deployment referencing the hash-suffixed ConfigMap name.
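For instance, a complete kustomization.yml combining the generator and the deployment might look like the sketch below (assuming the deployment file from the question lives at k8s/deployment.yml; adjust the path to your layout):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - k8s/deployment.yml
configMapGenerator:
  - name: env-receiver-config
    literals:
      - ER_PORT=8080
      - ER_DBADDR=http://localhost:8086
      - ER_DBUSER=writeuser
      - ER_DBPASS=writeuser
With this in place, a single kubectl apply -k . creates the suffixed ConfigMap and rewrites the configMapRef in the Deployment to match, so the separate kubectl apply -f k8s/deployment.yml step is no longer needed.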

Related

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my goal, then show what I have done to achieve it. My goal is to:
create a ConfigMap that holds the path to a properties file
create a Deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
        - name: my-client-container
          image: {{ .Values.image.client }}
          imagePullPolicy: {{ .Values.pullPolicy.client }}
          ports:
            - containerPort: 80
          env:
            - name: MY_PROPERTIES_FILE_NAME
              valueFrom:
                configMapKeyRef:
                  name: my-configmap
                  key: my_properties_file_name
          volumeMounts:
            - name: config
              mountPath: "/etc/config"
              readOnly: true
      imagePullSecrets:
        - name: secret-private-registry
      volumes:
        # You set volumes at the Pod level, then mount them into containers inside that Pod
        - name: config
          configMap:
            # Provide the name of the ConfigMap you want to mount.
            name: my-configmap
            # An array of keys from the ConfigMap to create as files
            items:
              - key: "my_properties_file_name"
                path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the value given in the ConfigMap), not the content of the properties file as it actually exists on my local disk.
How can I mount that file, using its path as configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Alternatively, you can use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are actually passing a snapshot of the file's content on your local disk to Kubernetes to store. Any changes you make to the file on your local disk won't be reflected unless you re-create the ConfigMap.
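A common way to refresh that snapshot after editing the local file is to regenerate the manifest and re-apply it (a sketch; kubectl create refuses to overwrite an existing ConfigMap, hence the dry-run/apply combination):
kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -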
Kubernetes is designed so that kubectl can run against a cluster on the other side of the globe, so you can't simply mount a file on your local disk and have the cluster read it in real time. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server, as sketched below.
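For illustration only, a Pod-level NFS volume might look roughly like this (the server address, export path, and image are placeholders; the NFS export must already exist and be reachable from the cluster nodes):
spec:
  containers:
    - name: my-client-container
      image: my-client:latest        # placeholder image
      volumeMounts:
        - name: shared-config
          mountPath: /etc/config
          readOnly: true
  volumes:
    - name: shared-config
      nfs:                           # built-in NFS volume type
        server: nfs.example.com      # placeholder NFS server
        path: /exports/config        # placeholder export path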

How to use dynamic/variable image tag in a Kubernetes deployment?

In our project, which also uses Kustomize, our base deployment.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:IMAGE_TAG # <------------------------------
          ports:
            - containerPort: 80
Then we use sed to replace IMAGE_TAG with the version of the image we want to deploy.
Is there a more sophisticated way to do this than editing the raw YAML file with sed?
There is a specific transformer for this called the images transformer.
You can keep your deployment as it is, with or without a tag:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
and then in your kustomization file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: nginx
    newTag: MYNEWTAG
Do keep in mind that this will replace the tag of all the nginx images across all the resources included in your kustomization file. If you need to run multiple versions of nginx, you can replace the image name in each deployment with a placeholder and add a separate entry per placeholder in the transformer, as sketched below.
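For example (a sketch with made-up placeholder names), each deployment would reference its own placeholder image, and the transformer would map every placeholder to the real name and tag:
# In the deployments:  image: nginx-frontend  /  image: nginx-sidecar  (placeholders)
images:
  - name: nginx-frontend   # placeholder used in the first deployment
    newName: nginx
    newTag: "1.25"
  - name: nginx-sidecar    # placeholder used in the second deployment
    newName: nginx
    newTag: "1.24"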
It is possible to use an image tag from an environment variable, without having to edit files for each different tag. This is useful if your image tag needs to vary without changing version-controlled files.
Standard kubectl is enough for this purpose. In short, use a configMapGenerator with data populated from environment variables. Then add replacements that refer to this ConfigMap data to replace relevant image tags.
Example
Continuing with your example deployment.yaml, you could have a kustomization.yaml file in the same folder that looks like so:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Generate a ConfigMap based on the environment variables in the file `.env`.
configMapGenerator:
  - name: my-config-map
    envs:
      - .env

replacements:
  - source:
      # Replace any matches by the value of environment variable `MY_IMAGE_TAG`.
      kind: ConfigMap
      name: my-config-map
      fieldPath: data.MY_IMAGE_TAG
    targets:
      - select:
          # In each Deployment resource …
          kind: Deployment
        fieldPaths:
          # … match the image of container `nginx` …
          - spec.template.spec.containers.[name=nginx].image
        options:
          # … but replace only the second part (image tag) when split by ":".
          delimiter: ":"
          index: 1

resources:
  - deployment.yaml
In the same folder, you need a file .env with the environment variable name only (note: just the name, no value assigned):
MY_IMAGE_TAG
Now MY_IMAGE_TAG from the local environment is integrated as the image tag when running kubectl kustomize, kubectl apply --kustomize, etc.
Demo:
MY_IMAGE_TAG=foobar kubectl kustomize .
This prints the generated image tag, which is foobar as desired:
# …
spec:
  # …
  template:
    # …
    spec:
      containers:
        - image: nginx:foobar
          name: nginx
          ports:
            - containerPort: 80
Alternatives
Keep the following in mind from the configMapGenerator documentation:
Note: It's recommended to use the local environment variable population functionality sparingly - an overlay with a patch is often more maintainable. Setting values from the environment may be useful when they cannot easily be predicted, such as a git SHA.
If you are simply looking to share a fixed image tag between multiple files, see the already suggested images transformer.
@evolutics' answer is probably the "proper" way to do it, but if you are only customizing the image tag on every deployment, you could consider putting an $IMAGE_TAG variable in deployment.yaml and using the envsubst command.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:$IMAGE_TAG
          ports:
            - containerPort: 80
IMAGE_TAG=my_image_tag envsubst '${IMAGE_TAG}' < deployment.yaml > final_deployment.yaml

Kubernetes "kubectl apply" does not update existing deployments

I have a .NET Core web application. Its image is pushed to an Azure Container Registry. I deploy it to my Azure Kubernetes Service using
kubectl apply -f testdeployment.yaml
with the YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: mycontainerregistry.azurecr.io/myweb:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: my-registry-key
This works splendidly, but when I change some code, push a new image to the registry, and run
kubectl apply -f testdeployment.yaml
again, the AKS website does not get updated until I remove the deployment with
kubectl delete deployment myweb
What should I do to make it overwrite whatever is deployed? I would like to add something to my YAML file. (I'm trying to use this for continuous delivery in Azure DevOps.)
I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means that the latest version will not be pulled.
https://kubernetes.io/docs/concepts/containers/images/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: mycontainerregistry.azurecr.io/myweb
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: my-registry-key
To ensure that the pod is recreated, run the following instead:
kubectl delete -f testdeployment.yaml && kubectl apply -f testdeployment.yaml
kubectl does not see any changes in your deployment YAML file, so it will not make any changes. That's one of the problems with using the latest tag.
Tag your image with an incremental version or build number and replace latest with that tag in your CI pipeline (for example with envsubst or similar). This way kubectl knows the image has changed, and you also know which version of the image is running; the latest tag could point to any image version.
Simplified example for Azure DevOps:
# <snippet>
image: mycontainerregistry.azurecr.io/myweb:${TAG}
# </snippet>
Pipeline YAML:
stages:
  - stage: Build
    jobs:
      - job: Build
        variables:
          - name: TAG
            value: $(Build.BuildId)
        steps:
          - script: |
              envsubst '${TAG}' < deployment-template.yaml > deployment.yaml
            displayName: Replace Environment Variables
Alternatively you could also use another tool like Replace Tokens (different syntax: #{TAG}#).
First delete the deployment by running the command below from the relative path of the deployment file:
kubectl delete -f .\deployment-file-name.yaml
Earlier I used to get
deployment.apps/deployment-file-name unchanged
meaning the previously applied deployment stayed in effect.
This happens while you're fixing errors/typos in the deployment YAML: the old configuration remains even after the error is cleared.
Only a kubectl delete -f .\deployment-file-name.yaml removed that stale state.
Afterwards you can deploy again with
kubectl apply -f .\deployment-file-name.yaml
A sample YAML file follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-file-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice   # must match the template labels below
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: /platformservice:latest

Kube / create deployment with config map

I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
        - name: mydeploy-1
          image: mydeploy:tag-latest
          envFrom:
            - configMapRef:
                name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I run kubectl describe deployments, I see, among other things:
Environment Variables from:
map-mydeploy ConfigMap Optional: false
kubectl describe configmaps map-mydeploy also gives me the right results.
The issue is that my container is in CrashLoopBackOff. When I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
This log is from my container, saying that my_var is not defined in the environment variables.
What am I doing wrong?
I think you are missing your key in the command
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are only using one value, I highly recommend creating your ConfigMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then referencing the ConfigMap in your deployment as you are currently doing (see the sketch below for the difference between the two commands).
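To see why the environment variable was missing, it may help to compare what the two commands generate (a sketch; the exact key names depend on the flags used). Without a key name, --from-file uses the file name as the key and stores the whole file as its value, so the generated ConfigMap data looks roughly like:
data:
  map-mydeploy: |
    apiVersion: v1
    kind: ConfigMap
    ...
whereas --from-literal=my_var=10.240.12.1 produces the key the container actually expects:
data:
  my_var: 10.240.12.1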

How to dynamically populate values into Kubernetes yaml files

I would like to pass some of the values in my Kubernetes YAML files at deploy time, for example by reading them from a config/properties file.
What is the best way to do that?
In the example below, I do not want to hardcode the port value; instead I want to read the port number from a config file.
Ex:
logstash.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: 33044 # (looking to read this port from config file)
          env:
            - name: INPUT_PORT
              value: "5044"
config.yaml
logstash_port: 33044
This sounds like a perfect use case for Helm (www.helm.sh).
Helm charts help you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc.) or create your own chart.
Charts are structured like:
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  ...
In the templates folder, you can include your ReplicationController files (and any others). In the values.yaml file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).
The values file can be as simple or complex as you require. An example of a values file:
myTestService:
  containerPort: 33044
  image: "logstash"
You can then reference these values in your template file using:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  namespace: test
spec:
  replicas: 1
  selector:
    app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: logstash
          ports:
            - containerPort: {{ .Values.myTestService.containerPort }}
          env:
            - name: INPUT_PORT
              value: "5044"
Once finished, you can package it into a Helm chart using helm package mychart. To deploy it to your Kubernetes cluster you can use helm install mychart-VERSION.tgz, which deploys your chart to the cluster. The version number is set within the Chart.yaml file, sketched below.
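For reference, a minimal Chart.yaml could look roughly like this (a sketch; the name, description, and version are placeholders, and the apiVersion field depends on your Helm version):
apiVersion: v2            # Helm 3 charts; Helm 2-era charts use v1
name: mychart
description: Example chart wrapping the logstash ReplicationController
version: 0.1.0            # this is the VERSION in mychart-VERSION.tgz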
You can use Kubernetes ConfigMaps for this. ConfigMaps are designed to hold external configuration files such as property files.
First, create a ConfigMap out of your properties file as follows:
kubectl create configmap my-config --from-file=db.properties
Then in your Deployment YAML you can consume it either as a volume binding or as environment variables.
Volume binding:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      volumeMounts:
        - name: config-volume
          mountPath: /etc/creds   # <mount path>
  volumes:
    - name: config-volume
      configMap:
        name: my-config
Here, under mountPath, you provide the location inside your container where the property file should reside, and under configMap the name field should hold the name of the ConfigMap you created.
Environment variable way:
apiVersion: v1
kind: ReplicationController
metadata:
  name: test
  labels:
    app: test
spec:
  containers:
    - name: test
      image: test
      ports:
        - containerPort: 33044
      env:
        - name: DB_PROPERTIES
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: <property name>
Here, under the configMapKeyRef section, name should hold the name of the ConfigMap you created (e.g. my-config), and key should name the entry in the ConfigMap whose value you want; Kubernetes will resolve the value of that property automatically.
You can find more about ConfigMap here.
https://kubernetes-v1-4.github.io/docs/user-guide/configmap/
There are some parameters you can't change once a pod is created, and containerPort is one of them.
You can, however, add a new container to a pod and open a new port.
For the parameters you CAN change, you can either dynamically create or modify the original deployment (say with sed) and run kubectl replace -f FILE, or use kubectl edit DEPLOYMENT, which applies the changes automatically. A sketch of the sed approach is shown below.
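For illustration (a sketch; the placeholder token and file names are made up), the sed approach could look like this:
# logstash-template.yaml contains the literal token PORT_PLACEHOLDER where the container port goes;
# substitute it and replace the live object by piping the result to kubectl.
sed "s/PORT_PLACEHOLDER/33044/" logstash-template.yaml | kubectl replace -f -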