Kubernetes: init container spec in YAML format

Currently, I'm writing my init container specs inside:
metadata:
  annotations:
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "sdf",
            "image": "sdf"
            ...
So it forces me to write init container specs in JSON format.
My question is: is there any way to write init container specs other than this way?

From Kubernetes 1.6 on, there's a new syntax available. It is the same format as the normal pod spec; just use initContainers instead of the annotation.
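A minimal sketch of the new syntax (the pod name, images, and echo command below are placeholders of mine, not from the question):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  # runs to completion before the app container is started
  - name: init-step
    image: busybox
    command: ["sh", "-c", "echo initializing"]
  containers:
  - name: app
    image: nginx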

Since 1.6, you can write it in YAML. Here is an example that we used to build up a Galera cluster.
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: install
        image: gcr.io/google_containers/galera-install:0.1
        imagePullPolicy: Always
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: config
          mountPath: /etc/mysql/conf.d
      - name: bootstrap
        image: debian:jessie
        command:
        - "hello world"
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        volumeMounts:
        - name: workdir
          mountPath: "/hello"
      containers:
      - name: mysql
        xxxxxx

Related

How to use git-sync image as a sidecar in Kubernetes that runs git pull periodically

I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it for a one-time sync, but I want it to run periodically, say every 10 minutes. When I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "10"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
  - image: nginx
    name: nginx-helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an initContainer, which runs only during init (once in the pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So use it as a regular container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "20"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      volumes:
      - name: www-data
        emptyDir: {}

My pod is always in 'ContainerCreating' state

When I SSH into Minikube and pull the image from Docker Hub, it pulls the image successfully:
$ docker pull mysql:5.7
So I understand the network is not an issue.
But when I deploy it using the following command, the pod stays in 'ContainerCreating' endlessly.
$ kubectl apply -f my-depl.yaml
#my-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-depl
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          subPath: "mysql"
          name: mysql-data
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: ROOT_PASSWORD
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-data-disk
Please let me know if there is anything wrong with the above YAML file, or share any debugging tips that could help pull the image successfully from Docker Hub.
I do not know the reason, but my container was created without any problems once I restarted Minikube.
It would help if someone could add the reason behind this behavior.
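For anyone else stuck in an endless 'ContainerCreating', a generic first debugging step (the pod name below is a placeholder) is to look at the pod's events, which usually name the blocking condition such as an image pull, PVC binding, or volume attach problem:
$ kubectl describe pod mysql-depl-<pod-suffix>
$ kubectl get events --sort-by=.metadata.creationTimestamp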

RabbitMQ configuration file is not copied in the Kubernetes deployment

I'm trying to deploy RabbitMQ on a Kubernetes cluster and am using an init container to copy a file from a ConfigMap. However, the file is not copied after the pod is in a running state.
Initially, I tried without an init container, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system."
kind: Deployment
metadata:
  name: broker01
  namespace: s2sdocker
  labels:
    app: broker01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker01
  template:
    metadata:
      name: broker01
      labels:
        app: broker01
    spec:
      initContainers:
      - name: configmap-copy
        image: busybox
        command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
        volumeMounts:
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
        - name: pre-install
          mountPath: /etc/rabbitmq
      containers:
      - name: broker01
        image: rabbitmq:3.7.17-management
        envFrom:
        - configMapRef:
            name: broker01-rabbitmqenv-cm
        ports:
        volumeMounts:
        - name: broker01-data
          mountPath: /var/lib/rabbitmq
        - name: broker01-log
          mountPath: /var/log/rabbitmq/log
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
      volumes:
      - name: pre-install
        emptyDir: {}
      - name: broker01-data
        persistentVolumeClaim:
          claimName: broker01-data-pvc
      - name: broker01-log
        persistentVolumeClaim:
          claimName: broker01-log-pvc
      - name: broker01-definitions
        configMap:
          name: broker01-definitions-cm
The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "Kubernetes deployment read-only filesystem error". But issue did not fix.
After making changes in the "containers volumeMount section," I was able to copy the file on to /etc/rabbitmq folder.
Please find a modified code here.
- name: broker01
  image: rabbitmq:3.7.17-management
  envFrom:
  - configMapRef:
      name: broker01-rabbitmqenv-cm
  ports:
  volumeMounts:
  - name: broker01-data
    mountPath: /var/lib/rabbitmq
  - name: broker01-log
    mountPath: /var/log/rabbitmq/log
  - name: pre-install
    mountPath: /etc/rabbitmq
Can you check the permissions on /etc/rabbitmq/?
Does the user have permission to copy the file to that location?
- name: pre-install
  mountPath: /etc/rabbitmq
I see that /etc/rabbitmq is a mount point. It is a read-only file system, and hence the file copy fails.
Can you update the permissions on the 'pre-install' mount point?
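One way to verify this (a sketch; the pod name is a placeholder) is to exec into the running pod and inspect the ownership and permissions of the mount point:
$ kubectl exec -it broker01-<pod-suffix> -n s2sdocker -- ls -ld /etc/rabbitmq
$ kubectl exec -it broker01-<pod-suffix> -n s2sdocker -- ls -l /etc/rabbitmq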

Kubernetes timezone in a pod with command and arguments

I want to change the timezone with a command.
I know how to do it with a hostPath.
Do you know how to apply this command?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
It works well inside the container.
But I don't know how to apply it with command and arguments in the YAML file.
You can change the timezone of your pod by using a hostPath volume that points to the specific timezone file you want. Your YAML file will look something like:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
If you want it across all pods or deployments, you need to add the volume and volumeMounts to all your deployment files and change the path value in the hostPath section to the timezone you want to set.
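If you instead want to apply the ln -snf command from the question via command and args, a sketch along these lines should work, assuming the image ships tzdata under /usr/share/zoneinfo (the pod name and image here are placeholders of mine):
apiVersion: v1
kind: Pod
metadata:
  name: tz-command-demo
spec:
  containers:
  - name: app
    image: debian:jessie
    env:
    - name: TZ
      value: Europe/Prague
    command: ["sh", "-c"]
    # re-link /etc/localtime in the container's writable layer, then start the real process
    args: ["ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && exec sleep 1000000"]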
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: demo
        image: gcr.io/project/image:master
        imagePullPolicy: Always
        env:
        - name: TZ
          value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
In a Deployment, you can do it by creating a volumeMount at /etc/localtime and setting its value. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        ports:
        - containerPort: 3306
          name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
In order to add a "hostPath" volume in the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail with:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
Add allowedHostPaths: {} next to volumes.
Add a TZ environment variable, for example: TZ = Asia/Jerusalem
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash").
For me: setting up volumes and volumeMounts didn't help. Setting the TZ environment variable alone works in my case.

Kubernetes/Openshift: how to use envFile to read from a filesystem file

I've configured two containers in a pod.
The first of them creates a file like:
SPRING_DATASOURCE_USERNAME: username
SPRING_DATASOURCE_PASSWORD: password
I want the second container to read from this location in order to initialize its environment variables.
I'm using envFrom, but I can't quite figure out how to use it.
This is my spec:
metadata:
  annotations:
    configmap.fabric8.io/update-on-change: ${project.artifactId}
  labels:
    name: wsec
  name: wsec
spec:
  replicas: 1
  selector:
    name: wsec
    version: ${project.version}
    provider: fabric8
  template:
    metadata:
      labels:
        name: wsec
        version: ${project.version}
        provider: fabric8
    spec:
      containers:
      - name: sidekick
        image: quay.io/ukhomeofficedigital/vault-sidekick:latest
        args:
        - -cn=secret:openshift/postgresql:env=USERNAME
        env:
        - name: VAULT_ADDR
          value: "https://vault.vault-sidekick.svc:8200"
        - name: VAULT_TOKEN
          value: "34f8e679-3fbd-77b4-5de9-68b99217cc02"
        volumeMounts:
        - name: sidekick-backend-volume
          mountPath: /etc/secrets
          readOnly: false
      - name: wsec
        image: ${docker.image}
        env:
        - name: SPRING_APPLICATION_JSON
          valueFrom:
            configMapKeyRef:
              name: wsec-configmap
              key: SPRING_APPLICATION_JSON
        envFrom:
          ???
      volumes:
      - name: sidekick-backend-volume
        emptyDir: {}
It looks like you are packaging the environment variables with the first container and reading them in the second container.
If that's the case, you can use initContainers.
Create a volume mapped to an emptyDir. Mount it on the initContainer (propertiesContainer) and the main container (springBootAppContainer) as a volumeMount. This directory is then visible to both containers.
- image: properties/container/path
  command: [bash, -c]
  args: ["cp -r /location/in/properties/container /propsdir"]
  volumeMounts:
  - name: propsdir
    mountPath: /propsdir
This will put the properties in /propsdir. When the main container starts, it can read properties from /propsdir
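Putting it together, a sketch of the relevant part of the pod spec might look like this (container names, image, and paths are illustrative placeholders, not the asker's actual values):
spec:
  initContainers:
  - name: properties-container
    image: properties/container/path
    command: [bash, -c]
    args: ["cp -r /location/in/properties/container /propsdir"]
    volumeMounts:
    - name: propsdir
      mountPath: /propsdir
  containers:
  - name: spring-boot-app-container
    image: ${docker.image}
    # the app reads its properties from the shared emptyDir at startup
    volumeMounts:
    - name: propsdir
      mountPath: /propsdir
  volumes:
  - name: propsdir
    emptyDir: {}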