How to change container name and image in kustomization.yaml - kubernetes

I want to change all occurrences of dev-app to demo-app using kustomize.
In my base deployment.yaml I have the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
      restartPolicy: Always
In my overlays/demo kustomization.yaml I have the following:
bases:
- ../../base
resources:
- deployment.yaml
namespace: demo-app
images:
- name: the-my-app
  newName: my.docker.registry.com/my-project/my-app
  newTag: test
When I run:
kubectl apply -k my-kustomization-dir
the result looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: my.docker.registry.com/my-project/my-app:test
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
      restartPolicy: Always
I also want to change the container name to demo-app:
containers:
- name: dev-app
If possible, please help me with the best way to replace every dev-app name, label, and service tag with demo-app.

One way to do this is to replicate your deployment file in your overlays folder and change what you need, for example:
Path structure:
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    ├── dev
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── prod
        ├── deployment.yaml
        └── kustomization.yaml
deployment file of base/ folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: dev-app
  name: dev-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-app
  template:
    metadata:
      labels:
        service: dev-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: dev-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 500M
deployment file of overlays/prod folder:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: demo-app
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        service: demo-app
    spec:
      imagePullSecrets:
      - name: my-docker-secret
      containers:
      - name: demo-app
        image: the-my-app
        imagePullPolicy: Always
        ports:
        - containerPort: 1234
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 500M
Kustomization file of overlays/prod:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: deployment.yaml
  target:
    kind: Deployment
  options:
    allowNameChange: true
namespace: prod-demo-app
images:
- name: the-my-app
  newName: my.docker.registry.com/my-project/my-app
  newTag: test
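If all you need is to rename the container itself rather than the whole Deployment, a smaller alternative to copying the file is a JSON 6902 patch in the overlay. A minimal sketch, assuming the base Deployment is still named dev-app (the file name rename-container.yaml is illustrative):
patchesJson6902:
- target:
    group: extensions
    version: v1beta1
    kind: Deployment
    name: dev-app
  path: rename-container.yaml
and rename-container.yaml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: demo-app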
Alternatively, I suggest Helm, a much more robust templating engine. You can also combine Helm and Kustomize: Helm for templating, and Kustomize for resource management, environment-specific patches, and overlays.
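A rough sketch of that combination, rendering with Helm first and then letting Kustomize apply the overlays (the release name, chart path, and base/rendered.yaml file are hypothetical; the base kustomization would list rendered.yaml as a resource):
helm template my-release ./my-chart > base/rendered.yaml
kustomize build overlays/demo | kubectl apply -f -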

Related

Kubernetes error while creating mount source path: file exists

After re-deploying my Kubernetes StatefulSet, the pod is now failing due to an error while creating the mount source path:
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of the StatefulSet: the data persists and can simply be mounted again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
      - name: foo
        image: blahblah
        imagePullPolicy: Always
        volumeMounts:
        - name: foo-data
          mountPath: "foo"
        - name: stuff
          mountPath: "here"
        - name: config
          mountPath: "somedata"
      volumes:
      - name: stuff
        persistentVolumeClaim:
          claimName: stuff-pvc
      - name: config
        configMap:
          name: myconfig
  volumeClaimTemplates:
  - metadata:
      name: foo-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "foo-storage"
      resources:
        requests:
          storage: 2Gi

Create multiple containers with templating

I have a running k8s deployment with one container.
I want to deploy 10 more containers, with a few differences in the deployment manifest (e.g. the command launched, the container name, ...).
Rather than creating 10 more .yml files with the whole deployment, I would prefer to use templating. What can I do to achieve this?
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myname
  labels:
    app.kubernetes.io/name: myname
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: myname
        spec:
          serviceAccountName: myname
          containers:
          - name: myname
            image: 'mynameimage'
            imagePullPolicy: IfNotPresent
            command: ["/my/command/to/launch"]
          restartPolicy: OnFailure
Kustomize seems to be the go-to tool for templating, composition, and multi-environment overriding of Kubernetes configs, and it's now built directly into kubectl as well.
Specifically, I think you can achieve what you want by using the bases and overlays feature: set up a base that contains the common structure and overlays that contain the specific overrides, as sketched below.
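A minimal sketch of that layout for a CronJob like the one above (the paths and the -2 suffix are illustrative; each overlay overrides only the name suffix and the command):
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cronjob.yaml
# overlays/job-2/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
nameSuffix: -2
patches:
- target:
    kind: CronJob
  patch: |-
    - op: replace
      path: /spec/jobTemplate/spec/template/spec/containers/0/command
      value: ["/my/other/command"]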
You can either specify a set of containers to be created directly, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container1
        image: your-image
      - name: container2
        image: your-image
      - name: container3
        image: your-image
You can repeat that container definition as many times as you want.
Or you can use a templating engine like Helm or Kustomize, as mentioned above.
Using Helm, a templating engine for Kubernetes manifests, you can create your own template by following along.
If you have never worked with Helm, you can check the official docs.
Make sure you have Helm installed before you start!
- Create a new chart:
helm create cowboy-app
This will generate a new project for you.
- Delete everything within the templates directory.
- Remove all values.yaml content.
- Create a new file deployment.yaml in the templates directory and paste this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
  labels:
    chart: {{ .Values.appName }}
spec:
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
{{ toYaml .Values.images | indent 8 }}
- In values.yaml, paste this:
appName: cowboy-app
images:
- name: app-1
  image: image-1
- name: app-2
  image: image-2
- name: app-3
  image: image-3
- name: app-4
  image: image-4
- name: app-5
  image: image-5
- name: app-6
  image: image-6
- name: app-7
  image: image-7
- name: app-8
  image: image-8
- name: app-9
  image: image-9
- name: app-10
  image: image-10
If you are familiar with Helm, you can tell that {{ toYaml .Values.images | indent 8 }} in deployment.yaml renders the data specified in values.yaml as YAML. Running helm install release-name /path/to/chart will then generate and deploy a manifest that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cowboy-app
  labels:
    chart: cowboy-app
spec:
  selector:
    matchLabels:
      app: cowboy-app
  replicas: 1
  template:
    metadata:
      labels:
        app: cowboy-app
    spec:
      containers:
      - image: image-1
        name: app-1
      - image: image-2
        name: app-2
      - image: image-3
        name: app-3
      - image: image-4
        name: app-4
      - image: image-5
        name: app-5
      - image: image-6
        name: app-6
      - image: image-7
        name: app-7
      - image: image-8
        name: app-8
      - image: image-9
        name: app-9
      - image: image-10
        name: app-10
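If you only want to inspect the rendered manifest without deploying anything, helm template renders it locally:
helm template release-name /path/to/chart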
You can use either Helm or Kustomize; both are templating tools that will help you achieve your goal.

How to deal with a namespace different from the globally set one?

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
resources:
- r1a.yaml
- r1b.yaml
- r1c.yaml
- r1d.yaml
- r1e.yaml
- r2.yaml # needs to be placed in namespace ns2
Let's assume the situation above. The problem is that the objects specified in r2.yaml would be placed in ns1 even if ns2 is explicitly referenced in metadata.namespace.
How do I deal with this? Or how can I solve it (I assume there are multiple options)?
I've looked into this and I came up with one idea.
├── base
│   ├── [nginx1.yaml] Deployment nginx ns: default
│   ├── [nginx2.yaml] Deployment nginx ns: default
│   ├── [nginx3.yaml] Deployment nginx ns: default
│   ├── [nginx4.yaml] Deployment nginx ns: default
│   ├── [nginx5.yaml] Deployment nginx ns: nginx
│   └── [kustomization.yaml] Kustomization
└── prod
    ├── [kustomization.yaml] Kustomization
    └── [patch.yaml] patching namespace
You need two directories; in this setup they are base and prod. The base directory holds your base YAMLs and a kustomization.yaml file. In my scenario I have five deployment YAMLs: nginx1/2/3/4.yaml, based on the Kubernetes documentation, and nginx5.yaml, which looks the same but with an additional metadata.namespace: nginx.
In the base directory:
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- nginx1.yaml
- nginx2.yaml
- nginx3.yaml
- nginx4.yaml
- nginx5.yaml
Plus the five nginx YAMLs themselves.
In the prod directory:
You should have two files: kustomization.yaml and patch.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns1
bases:
- ../base
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: nginx-deployment-5
  path: patch.yaml
$ cat patch.yaml
- op: replace
  path: /metadata/namespace
  value: nginx
When you run kustomize build . in the prod directory, nginx-deployment/-2/-3/-4 will be in namespace ns1 and nginx-deployment-5 will be in namespace nginx.
~/prod (project)$ kustomize build .
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-5
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: ns1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  namespace: ns1
spec:
  ...
Useful links:
Kustomize Builtin Plugins
Customizing
Kustomization Patches
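Note that recent kustomize releases deprecate patchesJson6902 in favor of the unified patches field; the overlay above could be written roughly like this, with the same patch.yaml:
patches:
- path: patch.yaml
  target:
    group: apps
    version: v1
    kind: Deployment
    name: nginx-deployment-5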

kustomize, secretGenerator & patchesStrategicMerge: envFrom.secretRef not reading hashed secret name

In my kustomization.yaml I have:
...
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml
And then in my app.yaml (the patch) I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
      - name: server
        envFrom:
        - secretRef:
            name: db-env
When I build this via kustomize build k8s/development I get:
apiVersion: apps/v1
kind: Deployment
...
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: db-env
        name: server
When it should be:
- envFrom:
  - secretRef:
      name: db-env-4g95hhmhfc
How do I get the secretGenerator name hashing to apply to patchesStrategicMerge too?
Or alternatively, what's the proper way to inject some environment vars into a deployment for a specific overlay?
This is for the development overlay.
My file structure is like:
❯ tree k8s
k8s
├── base
│   ├── app.yaml
│   └── kustomization.yaml
├── development
│   ├── app.yaml
│   ├── golinks.sql
│   ├── kustomization.yaml
│   ├── mariadb.yaml
│   ├── my.cnf
│   └── my.env
└── production
    ├── ingress.yaml
    └── kustomization.yaml
Where base/kustomization.yaml is:
namespace: go-mpen
resources:
- app.yaml
images:
- name: server
  newName: reg/proj/server
and development/kustomization.yaml is:
resources:
- ../base
- mariadb.yaml
configMapGenerator:
- name: mariadb-config
  files:
  - my.cnf
- name: initdb-config
  files:
  - golinks.sql # TODO: can we mount this w/out a config file?
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml
This works fine for me with kustomize v3.8.4. Please check your version, and whether disableNameSuffixHash is perhaps set to true.
Here are the manifests used by me to test this:
➜ app.yaml deployment.yaml kustomization.yaml my.env
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      containers:
      - name: server
        envFrom:
        - secretRef:
            name: db-env
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
and my kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
patchesStrategicMerge:
- app.yaml
resources:
- deployment.yaml
And here is the result:
apiVersion: v1
data:
  ASD: MTIz
kind: Secret
metadata:
  name: db-env-f5tt4gtd7d
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
      - envFrom:
        - secretRef:
            name: db-env-f5tt4gtd7d
        name: server
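For reference, the disableNameSuffixHash option mentioned above lives in the generator's options block in recent kustomize versions; if it were set like this, the -f5tt4gtd7d style suffix would not be appended:
secretGenerator:
- name: db-env
  behavior: create
  envs:
  - my.env
  options:
    disableNameSuffixHash: true # leave this out (or false) to keep the hash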

How to mount jar file to tomcat container

I have a folder in my project which contains one properties file and one jar file (a DB driver).
I need to copy both of these files to the /usr/local/tomcat/lib directory in my pod, and I am not sure how to achieve this in a Kubernetes YAML file. Below is my YAML where I try to do this with a ConfigMap, but pod creation fails with the error "configmap references non-existent config key: app.properties".
The target /usr/local/tomcat/lib already contains other jar files, so I am using a ConfigMap to avoid overriding the entire directory and just add the two files specific to my application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: appvolume
          mountPath: /usr/local/data
        - name: config
          mountPath: /usr/local/tomcat/lib
          subPath: ./configuration
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: appvolume
      - name: config
        configMap:
          name: config-map
          items:
          - key: app.properties
            path: app.properties
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  key: app.properties
Current directory structure:
.
├── configuration
│   ├── app.properties
│   └── mysql-connector-java-5.1.21.jar
├── deployment.yaml
└── service.yaml
Please share your valuable feedback on how to achieve this.
Regards.
Please try this:
kubectl create configmap config-map --from-file=app.properties --from-file=mysql-connector-java-5.1.21.jar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config
          mountPath: /usr/local/tomcat/lib/conf
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: config-map
or
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcatdeployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat3
        image: tomcat:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config
          mountPath: /usr/local/tomcat/lib/app.properties
          subPath: app.properties
        - name: config
          mountPath: /usr/local/tomcat/lib/mysql-connector-java-5.1.21.jar
          subPath: mysql-connector-java-5.1.21.jar
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: config-map
          items:
          - key: app.properties
            path: app.properties
          - key: mysql-connector-java-5.1.21.jar
            path: mysql-connector-java-5.1.21.jar
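Once the pod is running, you can verify that both files landed next to the existing jars:
kubectl exec deploy/tomcatdeployment -- ls /usr/local/tomcat/lib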
It's normal to get this error: in the volume declaration you referenced the key app.properties, but the ConfigMap entry is key: app.properties, i.e. the key is key and the value is app.properties. So in the volume declaration you must change:
volumes:
- name: appvolume
- name: config
  configMap:
    name: config-map
    items:
    - key: app.properties
      path: app.properties
to:
volumes:
- name: appvolume
- name: config
  configMap:
    name: config-map
    items:
    - key: key
      path: app.properties
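For reference, if you declare the ConfigMap in YAML instead of via kubectl create, each key must be a filename and its value the file contents (the properties content below is illustrative; the jar is binary, so it belongs in binaryData or in the kubectl create configmap --from-file approach above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  app.properties: |
    db.url=jdbc:mysql://example:3306/mydb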
For more, you can refer to add-configmap-data-to-a-volume.