I want to change the timezone with a command. I know how to do it with a hostPath volume, but could you tell me how to do it with a command instead?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
This works well when I run it inside the container, but I don't know how to apply it with command and args in the YAML file.
You can change the timezone of your pod by mounting the desired timezone file from the host with a hostPath volume. Your YAML file will look something like:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
If you want this across all pods or deployments, you need to add the same volume and volumeMounts to every deployment file and change the path value in the hostPath section to the timezone you want to set.
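For example, a Deployment carrying the same tz-config volume could look roughly like this (the name, image, and timezone below are illustrative placeholders, not part of the original answer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-sleep
  template:
    metadata:
      labels:
        app: busybox-sleep
    spec:
      containers:
      - name: busybox
        image: busybox
        args: ["sleep", "1000000"]
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Prague
          type: File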
Setting the TZ environment variable as below works fine for me on Kubernetes on GCP.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: demo
        image: gcr.io/project/image:master
        imagePullPolicy: Always
        env:
        - name: TZ
          value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
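If you want to verify that the setting took effect, a quick check with a reasonably recent kubectl (assuming the image ships tzdata, which the TZ variable relies on) is:
kubectl exec deployment/demo -- date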
In a Deployment, you can do it by creating a volumeMount at /etc/localtime and backing it with a hostPath volume. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        ports:
        - containerPort: 3306
          name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
In order to use a hostPath volume in the deployment config, as suggested in previous answers, you'll need to be a privileged user. Otherwise your deployment may fail with:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
Option 1: Add allowedHostPaths: {} next to volumes in your pod security policy (a rough sketch follows below).
Option 2: Add a TZ environment variable, for example TZ=Asia/Jerusalem.
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash").
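For option 1, a rough sketch of a PodSecurityPolicy that permits hostPath volumes might look like this (the policy name and path prefix are illustrative; adapt the rest of the policy to your cluster's requirements):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-tz-hostpath   # hypothetical name
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - hostPath
  allowedHostPaths:
  - pathPrefix: /usr/share/zoneinfo   # only the timezone data needs to be exposed
    readOnly: true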
For me, setting up volumes and volumeMounts didn't help; setting the TZ environment variable alone worked in my case.
I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try, I used hostPath as the storage backend. With that, I had to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change the host directory's mode/ownership every time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
      - name: lh-directus-uploads-volume
        persistentVolumeClaim:
          claimName: lh-directus-uploads-pvc
      - name: lh-directus-dbdata-volume
        persistentVolumeClaim:
          claimName: lh-directus-dbdata-pvc
      containers:
      # Redis Cache
      - name: redis
        image: redis:6
      # Database
      - name: database
        image: postgres:12
        volumeMounts:
        - name: lh-directus-dbdata-volume
          mountPath: /var/lib/postgresql/data
      # Directus
      - name: directus
        image: directus/directus:latest
        securityContext:
          fsGroup: 1000
        volumeMounts:
        - name: lh-directus-uploads-volume
          mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I have read about initContainers, but please kindly tell me how to fix this problem without an initContainer and without manually setting/changing the host path's ownership/mode.
Sincerely,
-bino-
I have a deployment.yml file where I'm mounting the service's logging folder to a folder on the host machine.
The issue is that when I run multiple instances from the same deployment.yml file (i.e. scaling up), all the instances log to the same file. Is there a way to solve this by dynamically creating a folder on the host machine based on the container ID or something similar? Any suggestions are appreciated.
My current deployment.yml file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: "/etc/logstash/"
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
There are some fields in Kubernetes that you can get dynamically, such as the node name, pod name, pod IP, etc. Refer to this doc (https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) for examples.
Here is an example that exposes the node name as an environment variable:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
You can change your deployment so that it appends the node name to the log file name; that way you get a different file name on each node. An even better option is to create a DaemonSet instead of a Deployment, which spawns one pod on each selected node (selection can be done with a nodeSelector), as sketched below.
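A rough DaemonSet sketch along those lines (same image and mounts as your Deployment; treat it as a starting point rather than a drop-in replacement, and adjust the nodeSelector to your nodes):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash-daemonset
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      nodeSelector:
        kubernetes.io/os: linux   # pick the nodes you want
      containers:
      - name: logstash
        image: logstash:6.8.6
        env:
        - name: MY_NODE_NAME      # node name, usable in your logstash config for per-node file names
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: /etc/logstash/
      - name: logs
        hostPath:
          path: /var/logs/logstash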
You can use sed to substitute values dynamically. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
spec:
  selector:
    matchLabels:
      app: logstash
  replicas: 2
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: logstash:6.8.6
        volumeMounts:
        - mountPath: /usr/share/logstash/config/
          name: config
        - mountPath: /usr/share/logstash/logs/
          name: logs
      volumes:
      - name: config
        hostPath:
          path: {path}
      - name: logs
        hostPath:
          path: "/var/logs/logstash"
Now, if I want to add the path dynamically, I can simply run:
sed -i 's|{path}|"/etc/logstash/"|g' deployment.yml
In this way, you can substitute as many values as you want before deploying the file.
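An equivalent sketch that keeps the template file untouched and pipes the rendered manifest straight to kubectl (the path value here is just an example):
sed 's|{path}|"/etc/logstash/"|g' deployment.yml | kubectl apply -f -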
Below is my YAML. After deploying it, I can access the pod and see the mountPath "/usr/share/nginx/html", but I cannot find "/work-dir", which should be created by the initContainer.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone" too. The application (nginx) container mounts the same volume (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
Mounting the same volume at /work-dir/ and at /usr/share/nginx/html/ allows the two containers to share its contents, but it does not mean the nginx container will have a /work-dir folder. Given this, you may think you could just mount the path /, which would share all folders underneath; however, / does not work as a mountPath.
So, how do you solve your problem? You could have another container mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with the mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
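Once that Deployment is running, a quick way to check that both containers see the shared volume (replace POD with the pod name from kubectl get pods):
kubectl exec POD -c alpine-c -- ls /work-dir/
kubectl exec POD -c nginx-c -- ls /var/www/static/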
In a simple Postgres Deployment, I wish to choose the volume depending on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want MAKE_THIS_DEPENDENT_ON_NAMESPACE to depend on the environment (or namespace) this Deployment is used in.
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres:9.6
        name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgres-storage
        gcePersistentDisk:
          pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced, so each namespace can bind to its own disk while the Deployment file stays the same.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
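For completeness, a minimal sketch of a claim that claimName could refer to (the storage size is illustrative); you would create one of these in each namespace and keep the Deployment file identical:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi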
I need to provide access to the file /var/docker.sock on the Kubernetes host (actually, a GKE instance) to a container running on that host.
To do this, I'd like to mount it into the container by configuring the mount in the deployment.yaml for the container's Deployment.
How would I specify this in the deployment configuration?
Here is the current configuration, I have for the deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: appd-sa-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: appd-sa-agent
    spec:
      containers:
      - name: appd-sa-agent
        image: docker.io/archbungle/appd-sa-agent:latest
        ports:
        - containerPort: 443
        env:
        - name: APPD_HOST
          value: "https://graffiti201707132327203.saas.appdynamics.com"
How would I specify mounting the host file path to a mount point in the container?
Thanks!
T.
You need to define a hostPath volume.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: appd-sa-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: appd-sa-agent
    spec:
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      containers:
      - name: appd-sa-agent
        image: docker.io/archbungle/appd-sa-agent:latest
        volumeMounts:
        - name: docker-socket
          mountPath: /var/run/docker.sock
        ports:
        - containerPort: 443
        env:
        - name: APPD_HOST
          value: "https://graffiti201707132327203.saas.appdynamics.com"
You need to use the hostPath volume type. Here is the documentation with a sample YAML file:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
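A minimal sketch in the spirit of the doc's example, adapted to the Docker socket case (the pod name, image, and volume name are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: docker-socket-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket   # fail fast if the socket does not exist on the node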