I am trying to update my pod's time to the Asia/Kolkata zone, as per "kubernetes timezone in POD with command and argument". However, the time still remains UTC; only the time zone name is updated from UTC to Asia.
I was able to fix it using volume mounts as below: create a ConfigMap and apply the deployment YAML.
kubectl create configmap tz --from-file=/usr/share/zoneinfo/Asia/Kolkata -n <required namespace>
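For reference, a ConfigMap created this way could be mounted over /etc/localtime roughly like the sketch below (the key name Kolkata is derived from the file name passed to --from-file):
volumeMounts:
- name: tz
  mountPath: /etc/localtime
  subPath: Kolkata
volumes:
- name: tz
  configMap:
    name: tz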
Why is the environment variable method not working? And if the pod is evicted from one host to another while using the volume-mounted time, will the eviction affect the mounted timezone?
The environment-variable deployment YAML, which does not update the time, is below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: connector
labels:
app: connector
namespace: clients
spec:
replicas: 1
selector:
matchLabels:
app: connector
template:
metadata:
labels:
app: connector
spec:
containers:
- image: connector
name: connector
resources:
requests:
memory: "32Mi" # "64M"
cpu: "250m"
limits:
memory: "64Mi" # "128M"
cpu: "500m"
ports:
- containerPort: 3307
protocol: TCP
env:
- name: TZ
value: Asia/Kolkata
volumeMounts:
- name: connector-rd
mountPath: /home/mongobi/mongosqld.conf
subPath: mongosqld.conf
volumes:
- name: connector-rd
configMap:
name: connector-rd
items:
- key: mongod.conf
  path: mongosqld.conf
The volume mount YAML is below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: connector
labels:
app: connector
namespace: clients
spec:
replicas: 1
selector:
matchLabels:
app: connector
template:
metadata:
labels:
app: connector
spec:
containers:
- image: connector
name: connector
resources:
requests:
memory: "32Mi" # "64M"
cpu: "250m"
limits:
memory: "64Mi" # "128M"
cpu: "500m"
ports:
- containerPort: 3307
protocol: TCP
volumeMounts:
- name: tz-config
mountPath: /etc/localtime
- name: connector-rd
mountPath: /home/mongobi/mongosqld.conf
subPath: mongosqld.conf
volumes:
- name: connector-rd
configMap:
name: connector-rd
items:
- key: mongod.conf
path: mongosqld.conf
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Kolkata
In this scenario you need to set the type attribute to File for the hostPath volume in the deployment configuration. The below configuration should work for you:
- name: tz-config
hostPath:
path: /usr/share/zoneinfo/Asia/Kolkata
type: File
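Once the pod restarts you can verify that the container clock shows IST, e.g. (the pod name is a placeholder):
kubectl exec -it <connector-pod> -n clients -- date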
Simply setting the TZ env variable in the deployment works for me.
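A possible reason TZ alone did not work for the OP: TZ is only honored if the tzdata/zoneinfo database is present inside the image, and minimal images sometimes omit it. A quick check (the pod name is again a placeholder):
kubectl exec -it <connector-pod> -n clients -- ls /usr/share/zoneinfo/Asia/Kolkata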
Related
I have installed Nexus on my K3s Raspberry Pi cluster for Kubernetes learning purposes, with the following setup. First I created a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nexus
namespace: dev-ops
spec:
serviceName: "nexus"
replicas: 1
selector:
matchLabels:
app: nexus-server
template:
metadata:
labels:
app: nexus-server
spec:
containers:
- name: nexus
image: klo2k/nexus3:latest
env:
- name: MAX_HEAP
value: "800m"
- name: MIN_HEAP
value: "300m"
resources:
limits:
memory: "4Gi"
cpu: "1000m"
requests:
memory: "2Gi"
cpu: "500m"
ports:
- containerPort: 8081
volumeMounts:
- name: nexusstorage
mountPath: /sonatype-work
volumes:
- name: nexusstorage
persistentVolumeClaim:
claimName: nexusstorage
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nexusstorage
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "30"
fsType: "ext4"
diskSelector: "ssd"
nodeSelector: "ssd"
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nexusstorage
namespace: dev-ops
spec:
accessModes:
- ReadWriteOnce
storageClassName: nexusstorage
resources:
requests:
storage: 50Gi
Service
apiVersion: v1
kind: Service
metadata:
name: nexus-server
namespace: dev-ops
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8081'
spec:
selector:
app: nexus-server
type: LoadBalancer
ports:
- port: 8081
targetPort: 8081
nodePort: 32000
This setup will spin up Nexus, but if I restart the pod the data does not persist and I have to recreate all the settings and users from scratch.
What am I missing in this case?
UPDATE
I got it working: Nexus needs permissions on the mounted directory. The working StatefulSet looks as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nexus
namespace: dev-ops
spec:
serviceName: "nexus"
replicas: 1
selector:
matchLabels:
app: nexus-server
template:
metadata:
labels:
app: nexus-server
spec:
securityContext:
runAsUser: 200
runAsGroup: 200
fsGroup: 200
containers:
- name: nexus
image: klo2k/nexus3:latest
env:
- name: MAX_HEAP
value: "800m"
- name: MIN_HEAP
value: "300m"
resources:
limits:
memory: "4Gi"
cpu: "1000m"
requests:
memory: "2Gi"
cpu: "500m"
ports:
- containerPort: 8081
volumeMounts:
- name: nexus-storage
mountPath: /nexus-data
volumes:
- name: nexus-storage
persistentVolumeClaim:
claimName: nexus-storage
The important snippet to get it working:
securityContext:
runAsUser: 200
runAsGroup: 200
fsGroup: 200
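If you want to confirm the fix, you can check that the mounted volume is now owned by the Nexus user (uid/gid 200, matching the securityContext above); the pod name nexus-0 is assumed from the StatefulSet:
kubectl exec -it nexus-0 -n dev-ops -- ls -ld /nexus-data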
I'm not familiar with that image, although checking Docker Hub, they mention using a Dockerfile similar to Sonatype's. I would then change the mountpoint for your volume to /nexus-data.
This is the default path for storing data (they set this env var, then declare a VOLUME), which we can confirm by looking at the repository that most likely produced your arm-capable image.
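Roughly these lines in the Dockerfile (an illustrative reconstruction, not quoted from the repository):
ENV NEXUS_DATA=/nexus-data
VOLUME /nexus-data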
And following up on your last comment, let's try to also mount it in /opt/sonatype/sonatype-work/nexus3...
In your StatefulSet, change volumeMounts to this:
volumeMounts:
- name: nexusstorage
mountPath: /nexus-data
- name: nexusstorage
mountPath: /opt/sonatype/sonatype-work/nexus3
volumes:
- name: nexusstorage
persistentVolumeClaim:
claimName: nexusstorage
Although the second volumeMount entry should not be necessary, as far as I understand. Maybe something's wrong with your storage provider?
Are you sure your PVC is writable? Reverting back to your initial configuration, enter your pod (kubectl exec -it) and try to write a file at the root of your PVC.
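Something like this, with the pod name assumed from the StatefulSet naming convention and the mount path taken from your initial configuration:
kubectl exec -it nexus-0 -n dev-ops -- sh
# inside the pod, try writing at the root of the PVC mount:
touch /sonatype-work/test-write && ls -l /sonatype-work/test-write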
I'm learning k8s and found an example in the MS docs. The problem I'm having is that I want to switch which GitHub repo is being used, but I haven't been able to figure out the path within this YAML example.
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-back
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-back
template:
metadata:
labels:
app: azure-vote-back
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-back
image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 6379
name: redis
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-back
spec:
ports:
- port: 6379
selector:
app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
selector:
matchLabels:
app: azure-vote-front
template:
metadata:
labels:
app: azure-vote-front
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: azure-vote-front
image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: REDIS
value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
name: azure-vote-front
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: azure-vote-front
This YAML example doesn't have a GitHub repo field at all; that's why you can't find a path.
If you're trying to change the container image source, it has to be from a container registry (or your own filesystem), which is located at
containers: image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
where mcr.microsoft.com is the container registry.
You won't be able to connect this directly to a GitHub repository, but any container registry will work, and I believe GitHub has one at https://ghcr.io (that link itself will redirect you back to GitHub).
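For example, if you published your own build to GitHub's container registry, the image line might change to something like this (the names are placeholders):
containers:
- name: azure-vote-front
  image: ghcr.io/<your-github-user>/azure-vote-front:v1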
I am using the below command to restart Pods in a statefulset
kubectl rollout restart statefulset ts
If I have to introduce a delay between pod rotations, is there an argument or any other method to achieve it? I am using a sidecar that writes the Pod IP address to a configuration file; if a Pod restarts before the IP address is updated in the config file, the service is not healthy. I am looking for a way to introduce a delay between pod restarts/rotations.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: typesense
namespace: typesense
labels:
service: typesense
app: typesense
spec:
serviceName: ts
podManagementPolicy: Parallel
replicas: 3
selector:
matchLabels:
service: typesense
app: typesense
template:
metadata:
labels:
service: typesense
app: typesense
spec:
serviceAccountName: typesense-service-account
securityContext:
fsGroup: 2000
runAsUser: 10000
runAsGroup: 3000
runAsNonRoot: true
terminationGracePeriodSeconds: 300
containers:
- name: typesense
envFrom:
# - configMapRef:
# name: typesense-config
- secretRef:
name: typesense-secret
image: typesense/typesense:0.23.0.rc43
command:
- "/opt/typesense-server"
- "-d"
- "/usr/share/typesense/data"
- "--api-port"
- "8108"
- "--peering-port"
- "8107"
- "--nodes"
- "/usr/share/typesense/nodes"
ports:
- containerPort: 8108
name: http
resources:
requests:
memory: 100Mi
cpu: "100m"
limits:
memory: 1Gi
cpu: "1000m"
volumeMounts:
- name: nodeslist
mountPath: /usr/share/typesense
- name: data
mountPath: /usr/share/typesense/data
- name: typesense-node-resolver
image: alasano/typesense-node-resolver
command:
- "/opt/tsns"
- "-namespace=typesense"
volumeMounts:
- name: nodeslist
mountPath: /usr/share/typesense
volumes:
- name: nodeslist
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs
resources:
requests:
storage: 1Gi
You can find the full manifest here.
Maybe use an init container?
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: typesense
namespace: typesense
labels:
service: typesense
app: typesense
spec:
serviceName: ts
podManagementPolicy: Parallel
replicas: 3
selector:
matchLabels:
service: typesense
app: typesense
template:
metadata:
labels:
service: typesense
app: typesense
spec:
serviceAccountName: typesense-service-account
securityContext:
fsGroup: 2000
runAsUser: 10000
runAsGroup: 3000
runAsNonRoot: true
terminationGracePeriodSeconds: 300
initContainers:
- name: init-myservice
image: busybox:1.34.1
command: [
"echo",
"ip",
"to",
"config",
"file",,
]
containers:
- name: typesense
envFrom:
# - configMapRef:
# name: typesense-config
- secretRef:
name: typesense-secret
image: typesense/typesense:0.23.0.rc43
command:
- "/opt/typesense-server"
- "-d"
- "/usr/share/typesense/data"
- "--api-port"
- "8108"
- "--peering-port"
- "8107"
- "--nodes"
- "/usr/share/typesense/nodes"
ports:
- containerPort: 8108
name: http
resources:
requests:
memory: 100Mi
cpu: "100m"
limits:
memory: 1Gi
cpu: "1000m"
volumeMounts:
- name: nodeslist
mountPath: /usr/share/typesense
- name: data
mountPath: /usr/share/typesense/data
- name: typesense-node-resolver
image: alasano/typesense-node-resolver
command:
- "/opt/tsns"
- "-namespace=typesense"
volumeMounts:
- name: nodeslist
mountPath: /usr/share/typesense
volumes:
- name: nodeslist
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs
resources:
requests:
storage: 1Gi
Maybe you could try terminationGracePeriodSeconds and a preStop hook with the following command:
command: [ "/bin/bash", "-c", "sleep 30" ]
It introduces a 30-second delay after your container receives the stop signal from the control plane. With terminationGracePeriodSeconds, the control plane won't force-kill your container within that period; note that the preStop hook runs inside, and counts toward, the grace period, so the grace period must be at least as long as the sleep.
The complete example should look like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: test
labels:
app: test
spec:
replicas: 1
serviceName: "test"
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- image: nginx
name: test
lifecycle:
preStop:
exec:
command: [ "/bin/bash", "-c", "sleep 60" ]
terminationGracePeriodSeconds: 60
Environment:
kubernetes provider: gke
kubernetes version: v1.13.12-gke.25
grafana version: 6.6.2 (official image)
grafana deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
name: grafana
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:6.6.2
ports:
- name: grafana
containerPort: 3000
# securityContext:
# runAsUser: 104
# allowPrivilegeEscalation: true
resources:
limits:
memory: "1Gi"
cpu: "500m"
requests:
memory: "500Mi"
cpu: "100m"
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-storage
volumes:
- name: grafana-storage
persistentVolumeClaim:
claimName: grafana-pvc
Problem
When I deployed this Grafana dashboard the first time, it worked fine. After some time I restarted the pod to check whether the volume mount was working or not. After restarting, I get the error below:
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
What I understand from this error is that the user cannot create these files. How can I give this user the appropriate permissions so that Grafana starts successfully?
I recreated your deployment with an appropriate PVC and noticed that the grafana pod was failing.
Output of command: $ kubectl get pods -n monitoring
NAME READY STATUS RESTARTS AGE
grafana-6466cd95b5-4g95f 0/1 Error 2 65s
Further investigation showed the same errors as yours:
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
This error showed up on the first creation of the pod and the deployment; there was no need to recreate any pods.
What I did to make it work was edit your deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: grafana
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
name: grafana
labels:
app: grafana
spec:
securityContext:
runAsUser: 472
fsGroup: 472
containers:
- name: grafana
image: grafana/grafana:6.6.2
ports:
- name: grafana
containerPort: 3000
resources:
limits:
memory: "1Gi"
cpu: "500m"
requests:
memory: "500Mi"
cpu: "100m"
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-storage
volumes:
- name: grafana-storage
persistentVolumeClaim:
claimName: grafana-pvc
Please take a specific look at this part:
securityContext:
runAsUser: 472
fsGroup: 472
It is a setting described in the official documentation: Kubernetes.io: set the security context for a pod.
Please take a look at this GitHub issue, which is similar to yours and pointed me to the solution that allowed the pod to spawn correctly:
https://github.com/grafana/grafana-docker/issues/167
Grafana had some major updates starting from version 5.1. Please take a look: Grafana.com: Docs: Migrate to v5.1 or later
Please let me know if this helps.
On v8.0, I fixed it by setting runAsUser: 0 (i.e. running the container as root). It works.
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- name: grafana-tcp
port: 3000
protocol: TCP
targetPort: 3000
selector:
project: grafana
type: LoadBalancer
status:
loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
project: grafana
name: grafana
spec:
replicas: 1
selector:
matchLabels:
project: grafana
strategy:
type: RollingUpdate
template:
metadata:
labels:
project: grafana
name: grafana
spec:
securityContext:
runAsUser: 0
containers:
- image: grafana/grafana
name: grafana
ports:
- containerPort: 3000
protocol: TCP
resources: {}
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-volume
volumes:
- name: grafana-volume
hostPath:
# directory location on host
path: /opt/grafana
# this field is optional
type: DirectoryOrCreate
restartPolicy: Always
status: {}
I have the following file using which I'm setting up Prometheus on my Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-deployment
namespace: plant-simulator-monitoring
spec:
replicas: 1
selector:
matchLabels:
name: prometheus-server
template:
metadata:
labels:
app: prometheus-server
spec:
containers:
- name: prometheus
image: prom/prometheus:latest
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
emptyDir: {}
When I apply this to my Kubernetes cluster, I see the following error:
ts=2020-03-16T21:40:33.123641578Z caller=sync.go:165 component=daemon err="plant-simulator-monitoring:deployment/prometheus-deployment: running kubectl: The Deployment \"prometheus-deployment\" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{\"app\":\"prometheus-server\"}: `selector` does not match template `labels`"
I could not see anything wrong with my yaml file. Is there something that I'm missing?
As I mentioned in the comments, you have an issue with mismatched labels.
In spec.selector.matchLabels you have name: prometheus-server and in spec.template.metadata.labels you have app: prometheus-server. The values there need to be the same. Below is what I get when I used your YAML:
$ kubectl apply -f deploymentoriginal.yaml
The Deployment "prometheus-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"prometheus-server"}: `selector` does not match template `labels`
And the output when I used the below YAML with matching labels:
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-deployment
namespace: plant-simulator-monitoring
spec:
replicas: 1
selector:
matchLabels:
name: prometheus-server
template:
metadata:
labels:
name: prometheus-server
spec:
containers:
- name: prometheus
image: prom/prometheus:latest
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
emptyDir: {}
$ kubectl apply -f deploymentselectors.yaml
deployment.apps/prometheus-deployment created
More detailed info about selectors and labels can be found in the official Kubernetes docs.
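After applying, you can also double-check that the pods carry the label the selector expects:
kubectl get pods -n plant-simulator-monitoring --show-labels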
There is a mismatch between the label in the selector (name: prometheus-server) and in the template metadata (app: prometheus-server). The below should work:
selector:
matchLabels:
app: prometheus-server
template:
metadata:
labels:
app: prometheus-server