I'm moving from Kubernetes v1.17.4 to v1.18.3.
The command kubectl -n nameSpace cp file pod-xxxx-yyyy:file will not write to the pod.
There are no error messages generated.
Copying from a pod works just fine.
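A couple of checks that may be relevant here (kubectl cp relies on a tar binary being present in the target container, and the exec fallback below assumes the image provides a shell; the target path is only illustrative):

kubectl -n nameSpace exec pod-xxxx-yyyy -- which tar                              # confirm tar exists in the container
cat file | kubectl -n nameSpace exec -i pod-xxxx-yyyy -- sh -c 'cat > /tmp/file'  # stream the file without kubectl cp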
YAML for creating the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: acadmin
namespace: iiabe3h
labels:
app: acadmin
spec:
replicas: 1
selector:
matchLabels:
app: acadmin
template:
metadata:
labels:
app: acadmin
spec:
schedulerName: stork
terminationGracePeriodSeconds: 1800
securityContext:
supplementalGroups: [12312, 65432, 3399, 1000001]
fsGroup: 1000
nodeSelector:
node-role.kubernetes.io/worker: "true"
volumes:
- name: acadmin-logs
persistentVolumeClaim:
claimName: acadmin-logs
- name: acadmin-config
persistentVolumeClaim:
claimName: acadmin-config
containers:
- name: acadmin
image: acadmin:latest
env:
- name: spring.profiles.active
value: "e3h"
- name: AccessControlFilter.props
value: "/opt/jboss/wildfly/standalone/configuration/AccessControlFilter.props"
volumeMounts:
- name: acadmin-logs
mountPath: /opt/jboss/wildfly/standalone/log
- name: acadmin-config
mountPath: /opt/jboss/wildfly/standalone/configuration
Related
I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it for a one-time sync. I want to run it periodically, e.g. every 10 minutes. However, when I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my configuration that is failing.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-helloworld
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: www-data
initContainers:
- name: git-sync
image: k8s.gcr.io/git-sync:v3.1.3
volumeMounts:
- name: www-data
mountPath: /data
env:
- name: GIT_SYNC_REPO
value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
- name: GIT_SYNC_BRANCH
value: "master" ##repo-branch
- name: GIT_SYNC_ROOT
value: /data
- name: GIT_SYNC_DEST
value: "hello" ##path-where-you-want-to-clone
- name: GIT_SYNC_PERIOD
value: "10"
- name: GIT_SYNC_ONE_TIME
value: "false"
securityContext:
runAsUser: 0
volumes:
- name: www-data
emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx-helloworld
name: nginx-helloworld
spec:
containers:
- image: nginx
name: nginx-helloworld
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
You are using git-sync as an init container, which runs only during pod initialization (i.e. once in the pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So use it as a regular (sidecar) container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: git-sync
image: k8s.gcr.io/git-sync:v3.1.3
volumeMounts:
- name: www-data
mountPath: /data
env:
- name: GIT_SYNC_REPO
value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
- name: GIT_SYNC_BRANCH
value: "master" ##repo-branch
- name: GIT_SYNC_ROOT
value: /data
- name: GIT_SYNC_DEST
value: "hello" ##path-where-you-want-to-clone
- name: GIT_SYNC_PERIOD
value: "20"
- name: GIT_SYNC_ONE_TIME
value: "false"
securityContext:
runAsUser: 0
- name: nginx-helloworld
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: www-data
volumes:
- name: www-data
emptyDir: {}
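If it helps to confirm the fix, tailing the git-sync container's log should show a new sync cycle roughly every GIT_SYNC_PERIOD seconds, and the clone should appear in the shared volume (names below come from the manifest above; <pod-name> is whatever pod the Deployment creates):

kubectl logs deploy/nginx-deployment -c git-sync -f                        # one sync cycle per GIT_SYNC_PERIOD
kubectl get pods -l app=nginx                                              # find <pod-name>
kubectl exec <pod-name> -c nginx-helloworld -- ls /usr/share/nginx/html    # the "hello" clone should be listed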
After re-deploying my Kubernetes StatefulSet, the pod is now failing due to error while creating mount source path
'/var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount': mkdir /var/lib/kubelet/pods/1559ef17-9c48-401d-9a2f-9962a4a16151/volumes/kubernetes.io~csi/pvc-6b9ac265-d0ec-4564-adb2-1c7b3f6631ca/mount: file exists
I'm assuming this is because the persistent volume/PVC already exists and so it cannot be created, but I thought that was the point of a StatefulSet: the data persists and you can simply mount it again. How should I fix this?
Thanks.
apiVersion: v1
kind: Service
metadata:
name: foo-service
spec:
type: ClusterIP
ports:
- name: http
port: 80
selector:
app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: foo-statefulset
namespace: foo
spec:
selector:
matchLabels:
app: foo-app
serviceName: foo-app
replicas: 1
template:
metadata:
labels:
app: foo-app
spec:
serviceAccountName: foo-service-account
containers:
- name: foo
image: blahblah
imagePullPolicy: Always
volumeMounts:
- name: foo-data
mountPath: "foo"
- name: stuff
mountPath: "here"
- name: config
mountPath: "somedata"
volumes:
- name: stuff
persistentVolumeClaim:
claimName: stuff-pvc
- name: config
configMap:
name: myconfig
volumeClaimTemplates:
- metadata:
name: foo-data
spec:
accessModes: [ "ReadWriteMany" ]
storageClassName: "foo-storage"
resources:
requests:
storage: 2Gi
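A few diagnostics that may help narrow this down, assuming the failing pod is the first replica of the StatefulSet above (generated name foo-statefulset-0, namespace foo):

kubectl -n foo describe pod foo-statefulset-0    # Events section shows the full mount error reported by the kubelet/CSI driver
kubectl -n foo get pvc                           # the PVC from the volumeClaimTemplate should still be Bound
kubectl -n foo delete pod foo-statefulset-0      # the StatefulSet controller recreates the pod; data on the bound PVC is kept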
I get the error below when deploying a k8s Deployment. I tried running as the root user via the security context, but it didn't help. Any idea how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.
The error I get is:
30: line 1: /scripts/wrapper.sh: Permission denied
stream closed
The deployment is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cluster-autoscaler-grok-exporter
labels:
app: cluster-autoscaler-grok-exporter
spec:
replicas: 1
selector:
matchLabels:
app: cluster-autoscaler-grok-exporter
sidecar: cluster-autoscaler-grok-exporter-sidecar
template:
metadata:
labels:
app: cluster-autoscaler-grok-exporter
sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
securityContext:
runAsUser: 1001
fsGroup: 2000
serviceAccountName: flux
imagePullSecrets:
- name: id-docker
containers:
- name: get-data
# 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12
image: dtzar/helm-kubectl:3.5.0
command: ["sh", "-c", "/scripts/wrapper.sh"]
args:
- cluster-autoscaler
- "90"
# - cluster-autoscaler
- "30"
- /scripts/get_data.sh
- /logs/data.log
volumeMounts:
- name: logs
mountPath: /logs/
- name: scripts-volume-get-data
mountPath: /scripts/get_data.sh
subPath: get_data.sh
- name: scripts-wrapper
mountPath: /scripts/wrapper.sh
subPath: wrapper.sh
- name: export-data
image: ippendigital/grok-exporter:1.0.0.RC3
imagePullPolicy: Always
ports:
- containerPort: 9148
protocol: TCP
volumeMounts:
- name: grok-config-volume
mountPath: /grok/config.yml
subPath: config.yml
- name: logs
mountPath: /logs
volumes:
- name: grok-config-volume
configMap:
name: grok-exporter-config
- name: scripts-volume-get-data
configMap:
name: get-data-script
defaultMode: 0777
defaultMode: 0700
- name: scripts-wrapper
configMap:
name: wrapper-config
defaultMode: 0777
defaultMode: 0700
- name: logs
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: cluster-autoscaler-grok-exporter-sidecar
labels:
sidecar: cluster-autoscaler-grok-exporter-sidecar
spec:
type: ClusterIP
ports:
- name: metrics
protocol: TCP
targetPort: 9144
port: 9148
selector:
sidecar: cluster-autoscaler-grok-exporter-sidecar
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app.kubernetes.io/name: cluster-autoscaler-grok-exporter
app.kubernetes.io/part-of: grok-exporter
name: cluster-autoscaler-grok-exporter
spec:
endpoints:
- port: metrics
selector:
matchLabels:
sidecar: cluster-autoscaler-grok-exporter-sidecar
From what I can see, your script does not have execute permissions.
Remove this line from your ConfigMap volume definitions:
defaultMode: 0700
Keep only:
defaultMode: 0777
Also, I see a missing leading / in your script path:
- /bin/sh scripts/get_data.sh
So, change it to:
- /bin/sh /scripts/get_data.sh
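Put together, the relevant part of the volumes section would look roughly like this (only defaultMode changes; all names are from the original manifest):

- name: scripts-volume-get-data
  configMap:
    name: get-data-script
    defaultMode: 0777      # a single defaultMode entry; the duplicate 0700 line is gone
- name: scripts-wrapper
  configMap:
    name: wrapper-config
    defaultMode: 0777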
I created a Kubernetes cluster on Amazon. Then I ran my pod (container) and a volume in this cluster. Now I want to run a Samba server on the volume and connect my pod to the Samba server. Is there any tutorial on how I can solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
labels:
app : application
spec:
replicas: 2
selector:
matchLabels:
project: k8s
template:
metadata:
labels:
project: k8s
spec:
containers:
- name : k8s-web
image: mine/flask:latest
volumeMounts:
- mountPath: /test-ebs
name: my-volume
ports:
- containerPort: 8080
volumes:
- name: my-volume
persistentVolumeClaim:
claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0004
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: [my-Id-volume]
You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
name: smb-server
labels:
app: smb-server
spec:
type: LoadBalancer
selector:
app: smb-server
ports:
- port: 445
name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: smb-server
spec:
replicas: 1
selector:
matchLabels:
app: smb-server
template:
metadata:
name: smb-server
labels:
app: smb-server
spec:
containers:
- name: smb-server
image: dperson/samba
env:
- name: PERMISSIONS
value: "0777"
args: ["-u", "username;test","-s","share;/smbshare/;yes;no;no;all;none","-p"]
volumeMounts:
- mountPath: /smbshare
name: data-volume
ports:
- containerPort: 445
volumes:
- name: data-volume
hostPath:
path: /smbshare
type: DirectoryOrCreate
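Once that is applied, the share is reachable inside the cluster via the Service's DNS name. A quick connectivity test from any pod that has smbclient installed could look like this (share name, user and password come from the args above; the default namespace and smbclient being present in the client image are assumptions):

smbclient //smb-server.default.svc.cluster.local/share -U 'username%test' -c 'ls'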
My deployment is working fine. I just tried to use a local persistent volume for storing my application's data locally, and after that I am getting the error below.
error: error validating "xxx-deployment.yaml": error validating data: ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: apps/v1
kind: Deployment
metadata:
name: xxx
namespace: xxx
spec:
selector:
matchLabels:
app: xxx
replicas: 3
template:
metadata:
labels:
app: xxx
spec:
containers:
- name: xxx
image: xxx:1.xx
imagePullPolicy: "Always"
stdin: true
tty: true
ports:
- containerPort: 80
imagePullPolicy: Always
imagePullSecrets:
- name: xxx
volumeMounts:
- mountPath: /data
name: xxx-data
restartPolicy: Always
volumes:
- name: xx-data
persistentVolumeClaim:
claimName: xx-xx-pvc
You need to move imagePullSecrets further down; it's breaking the container spec. imagePullSecrets is defined at the pod spec level, while volumeMounts belongs to the container spec:
apiVersion: apps/v1
kind: Deployment
metadata:
name: xxx
namespace: xxx
spec:
selector:
matchLabels:
app: xxx
replicas: 3
template:
metadata:
labels:
app: xxx
spec:
containers:
- name: xxx
image: xxx:1.xx
imagePullPolicy: "Always"
stdin: true
tty: true
ports:
- containerPort: 80
imagePullPolicy: Always
volumeMounts:
- mountPath: /data
name: xxx-data
imagePullSecrets:
- name: xxx
restartPolicy: Always
volumes:
- name: xx-data
persistentVolumeClaim:
claimName: xx-xx-pvc
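As a side note, this kind of schema problem can be caught before touching the cluster with a client-side dry run (flag syntax assumes a reasonably recent kubectl; older releases use plain --dry-run):

kubectl apply --dry-run=client --validate=true -f xxx-deployment.yaml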
You have an indentation typo in your YAML: volumeMounts is under imagePullSecrets, when it should be at the same level:
imagePullSecrets:
- name: xxx
volumeMounts:
- mountPath: /data
name: xxx-data
volumeMounts: is a child of the container spec, and volumes: is a child of the pod spec.
Also, the volumeMounts name and the volume name should be the same.
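Concretely, in the manifest above the mount references xxx-data while the volume is declared as xx-data; both must use the same name, for example:

volumeMounts:
  - mountPath: /data
    name: xxx-data              # must match the volume name below
volumes:
  - name: xxx-data              # was xx-data; renamed so the mount resolves
    persistentVolumeClaim:
      claimName: xx-xx-pvc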