The Pod is in Running state, but logging into the container and running capsh --print gives this error:
sh: capsh: not found
Running the same image as a Docker container with --cap-add SYS_ADMIN or --privileged gives the desired output.
What changes to the deployment, or what extra permissions, are needed for this to work inside a Kubernetes container?
Deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: alpine:3.17
          command:
            - sh
            - -c
            - while true; do echo Hello World; sleep 10; done
          env:
            - name: NFS_EXPORT_0
              value: /var/opt/backup
            - name: NFS_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - name: backup
              mountPath: /var/opt/backup
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: sample-pvc
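Two things are worth separating here: whether the capability was granted, and whether the capsh binary exists in the image. sh: capsh: not found only says the binary is missing; the alpine base image does not ship it. As a sketch, the granted capabilities can be inspected without capsh at all (the package name below is an assumption; on Alpine, capsh comes from libcap or libcap-utils depending on the release):

# Capabilities are a kernel bitmask readable from /proc even without capsh:
kubectl exec -n sample deploy/sample-deployment -- grep Cap /proc/1/status
# CAP_SYS_ADMIN is capability bit 21; it should be set in CapEff
# if the securityContext above took effect.

# To use capsh itself, install it first (package name is an assumption):
kubectl exec -n sample deploy/sample-deployment -- apk add libcap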
I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data into a shared volume.
Everything works fine when I configure it for a one-time sync, but I want it to run periodically, e.g. every 10 minutes. When I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-helloworld
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www-data
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync:v3.1.3
          volumeMounts:
            - name: www-data
              mountPath: /data
          env:
            - name: GIT_SYNC_REPO
              value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
            - name: GIT_SYNC_BRANCH
              value: "master" ##repo-branch
            - name: GIT_SYNC_ROOT
              value: /data
            - name: GIT_SYNC_DEST
              value: "hello" ##path-where-you-want-to-clone
            - name: GIT_SYNC_PERIOD
              value: "10"
            - name: GIT_SYNC_ONE_TIME
              value: "false"
          securityContext:
            runAsUser: 0
      volumes:
        - name: www-data
          emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
    - image: nginx
      name: nginx-helloworld
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an init container, which runs only during pod initialization (once in the Pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So run it as a regular container (a sidecar) instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: git-sync
          image: k8s.gcr.io/git-sync:v3.1.3
          volumeMounts:
            - name: www-data
              mountPath: /data
          env:
            - name: GIT_SYNC_REPO
              value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
            - name: GIT_SYNC_BRANCH
              value: "master" ##repo-branch
            - name: GIT_SYNC_ROOT
              value: /data
            - name: GIT_SYNC_DEST
              value: "hello" ##path-where-you-want-to-clone
            - name: GIT_SYNC_PERIOD
              value: "20"
            - name: GIT_SYNC_ONE_TIME
              value: "false"
          securityContext:
            runAsUser: 0
        - name: nginx-helloworld
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www-data
      volumes:
        - name: www-data
          emptyDir: {}
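A quick way to confirm the sidecar keeps syncing after this change (names taken from the manifest above; just a sketch):

# Follow the sidecar's sync loop; a new sync should be logged every GIT_SYNC_PERIOD:
kubectl logs deploy/nginx-deployment -c git-sync -f
# The clone should appear in the shared volume under GIT_SYNC_DEST:
kubectl exec deploy/nginx-deployment -c nginx-helloworld -- ls /usr/share/nginx/html/hello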
I have a problem with Kubernetes on my local machine. I want to create a pod with a database, so I prepared a deployment file together with a service:
apiVersion: v1
kind: Service
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    app: bid-service-db
    tier: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  selector:
    matchLabels:
      app: bid-service-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bid-service-db
        tier: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: mydb
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: postgres
          image: postgres:13
          imagePullPolicy: Never
          name: bid-service-db
          ports:
            - containerPort: 5432
              name: bid-service-db
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: postgres-persistance-storage
          persistentVolumeClaim:
            claimName: bid-service-db-volume
status: {}
I am applying this file with k apply -f bid-db-deployment.yaml. k get all shows that only the Service was created; no Pod was started. What can I do in this case? How can I troubleshoot it?
If you didn't get any errors from the apply, you can find the failure reason with:
kubectl describe deployment/DEPLOYMENT_NAME
Also, you can put just the Deployment part in a separate YAML file and see whether you get errors applying it on its own.
Since restarting the cluster worked for you, a good idea next time would be to check the logs of the kube-apiserver and kube-controller-manager pods with:
kubectl logs -n kube-system <kube-apiserver/controller-manager_pod_name>
To list your deployments across all namespaces, use:
kubectl get deployments -A
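A Deployment that creates no Pods usually records the reason on its ReplicaSet or in the event log, so a fuller troubleshooting pass might look like this (names taken from the manifest above):

kubectl describe deployment bid-service-db       # conditions often name the blocker directly
kubectl describe rs -l app=bid-service-db        # the ReplicaSet's events say why no Pod is created
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get pvc bid-service-db-volume            # the claim must exist and be Bound
# Note: imagePullPolicy: Never requires postgres:13 to already be present on the node.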
I have this manifest. A Pod has two containers, one for the main application and the other for logging. I want the logging container to sleep to help me debug an issue.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: codingjediweb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: codingjediweb
  template:
    metadata:
      labels:
        app: codingjediweb
    spec:
      volumes:
        - name: shared-logs
          emptyDir: {}
      containers:
        - name: codingjediweb
          image: docker.io/manuchadha25/codingjediweb:03072020v2
          volumeMounts:
            - name: shared-logs
              mountPath: /deploy/codingjediweb-1.0/logs/
          env:
            - name: db.cassandraUri
              value: cassandra://xx.yy.xxx.yyy:9042
            - name: db.password
              value: 9__
            - name: db.keyspaceName
              value: somei
            - name: db.username
              value: supserawesome
          ports:
            - containerPort: 9000
        - name: logging
          image: busybox
          volumeMounts:
            - name: shared-logs
              mountPath: /deploy/codingjediweb-1.0/logs/
          command: ["tail -f /deploy/codingjediweb-1.0/logs/*.log"]
Before running tail -f ..., I want to add a sleep/delay to avoid a race condition: the application takes some time before it starts logging, and in the meantime tail -f fails because the log file doesn't exist yet. Alternatively, I am OK with running a loop like while true; do sleep 86400; done.
How can I do that?
Got it - I had to use command: ['sh', '-c', 'while true; do sleep 86400; done'].
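That works because command: is an exec array, not a shell command line: ["tail -f ..."] asks the runtime to execute a single binary whose name is literally tail -f .... Wrapping it in sh -c restores shell parsing, which the *.log glob needs anyway. A sketch combining the delay with the original intent (the 30-second delay is an assumed value; tune it to the app's startup time):

command:
  - sh
  - -c
  # wait for the app to create its log files, then follow them;
  # the shell expands the *.log glob, which the exec form never would
  - sleep 30; exec tail -f /deploy/codingjediweb-1.0/logs/*.log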
I want to use command: with wget to download a file into a volume and gunzip it, since it is in .gz format. But the container fails as soon as I run kubectl apply -f, and the Pod status shows Error. What could I be doing wrong?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: docker.source.co.za/azp/example-app:1
          imagePullPolicy: Always
          command:
            - wget
            - "-O"
            - http://confluence.source.co.za/download/attachments/627674073/refpolicies.tar.gz
          volumeMounts:
            - name: example-app
              mountPath: /config/
              readOnly: true
      volumes:
        - name: example-app
          emptyDir: {}
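No answer is recorded here, but two things in this manifest stand out as likely causes. First, wget's -O flag consumes the next argument as the output filename, so the URL ends up as the filename and wget exits with a missing-URL error. Second, the volume is mounted readOnly: true, so nothing could be written to /config anyway; and overriding command: replaces the image's entrypoint, so the container exits as soon as wget returns. One way to keep the app running and still fetch and unpack the archive is an init container; a sketch, with busybox as an assumed helper image:

    spec:
      initContainers:
        - name: fetch-config
          image: busybox:1.36    # assumed helper image providing wget and tar
          command:
            - sh
            - -c
            # give -O an explicit target ("-" = stdout) and unpack the .tar.gz into the volume
            - wget -qO- http://confluence.source.co.za/download/attachments/627674073/refpolicies.tar.gz | tar -xz -C /config
          volumeMounts:
            - name: example-app
              mountPath: /config/    # writable here
      containers:
        - name: example-app
          image: docker.source.co.za/azp/example-app:1
          imagePullPolicy: Always
          volumeMounts:
            - name: example-app
              mountPath: /config/
              readOnly: true    # the app only reads the unpacked files
      volumes:
        - name: example-app
          emptyDir: {}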
I am trying to deploy a simple nginx in Kubernetes using a hostPath volume. I use the following YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
    volumes:
      - name: hostvol
        hostPath:
          path: /home/docker/vol
When I deploy it with kubectl create -f webserver.yaml, it throws the following error:
error: error validating "webserver.yaml": error validating data: ValidationError(Deployment.spec.template): unknown field "volumes" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false
I believe you have wrong indentation: volumes ended up directly under template (which is what the error says), but it belongs inside the pod spec, at the same level as containers.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: hostvol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: hostvol
          hostPath:
            path: /home/docker/vol
Look at this WordPress example from the documentation to see how it's done.
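As a quick sanity check, a client-side dry run catches this kind of indentation/schema error without creating anything:

kubectl apply --dry-run=client -f webserver.yaml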