How to solve AzureFile mount error(22): Invalid argument inside container? - kubernetes

I've got an error mounting an Azure Files share inside an AKS (1.18.2) container (built on top of Ubuntu 18.04 with cifs-utils installed):
Warning FailedMount 0s kubelet, aks-nodepool1-37397744-vmss000001 MountVolume.SetUp failed for volume "myapplication-logs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5e19e1d0-0bfd-4760-a87a-00cb0f3e573a/volumes/kubernetes.io~azure-file/crawler-logs --scope -- mount -t cifs -o file_mode=0777,dir_mode=0777,vers=3.0,<masked> //myazurestorage.file.core.windows.net/crawler-logs /var/lib/kubelet/pods/5e19e1d0-0bfd-4760-a87a-00cb0f3e573a/volumes/kubernetes.io~azure-file/myapplication-logs
Output: Running scope as unit run-r403b463e326d4562a7e44dc8fe018b4b.scope.
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Here is my yaml config:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: myapplication-logs
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapplication-logs
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: myapplication-logs
  azureFile:
    secretName: azurefilesharesecretname
    shareName: myapplication-logs
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapplication-logs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: myapplication-logs
  resources:
    requests:
      storage: 3Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapplication-deployment
  labels:
    app: myapplication
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapplication
  template:
    metadata:
      labels:
        app: myapplication
    spec:
      containers:
        - name: myapplication
          image: myappacr.azurecr.io/myapplication:1.0.391
          ports:
            - containerPort: 21602
            - containerPort: 5000
          readinessProbe:
            httpGet:
              path: /probes/ready
              port: 5000
            timeoutSeconds: 60
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /probes/healthy
              port: 5000
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 30
          volumeMounts:
            - readOnly: true
              name: secrets-volume
              mountPath: /usr/bin/myapp/Secrets
            - name: configuration-volume
              mountPath: /usr/bin/myapp/Configuration
            - name: myapplication-logs
              mountPath: /usr/bin/myapp/logs
      imagePullSecrets:
        - name: acr-dev-regcred
      volumes:
        - name: secrets-volume
          secret:
            secretName: myapplication-secrets
        - name: configuration-volume
          configMap:
            name: myapplication-configuration
        - name: myapplication-logs
          persistentVolumeClaim:
            claimName: myapplication-logs
The StorageClass, PersistentVolume and PersistentVolumeClaim are deployed successfully without any errors or events.
I can't figure out where the problem is.
Any ideas on what is happening?

There are two ways to consume an Azure Files share as a volume from a container in AKS.
Manually create and use a volume with an Azure Files share. Docs here.
In this case the PV needs to specify mountOptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  azureFile:
    secretName: azure-secret
    shareName: aksshare
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
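For the manual approach the PV is then consumed through a PVC; a minimal sketch, assuming the names above, with volumeName used to pin the claim to this specific PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  volumeName: azurefile   # assumed: bind explicitly to the PV defined above
  resources:
    requests:
      storage: 5Gi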
Dynamically create and use a persistent volume with Azure Files. Docs here.
In this case the StorageClass needs to have mountOptions:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
parameters:
  skuName: Standard_LRS
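With the dynamic approach you then only create a PVC against that StorageClass, and the file share and PV are provisioned for you; a minimal sketch, using a hypothetical claim name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile-pvc   # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi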
Now, looking at your YAML, it seems you are mixing the manual and dynamic modes, because you are creating both a PersistentVolume and a StorageClass. I suggest following whichever of the two approaches suits you, and specifying mountOptions properly, which is mandatory in both modes.

Problem solved!
There was a problem with the secret I created.
It was created with kubectl apply -f secret.json, and the storage account key wasn't encoded in base64.
Thanks to Azure Support!
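For reference, letting kubectl do the base64 encoding avoids this problem entirely; a sketch with placeholder values, where the secret name must match secretName in the PV and the azure-file plugin expects the keys azurestorageaccountname and azurestorageaccountkey:
kubectl create secret generic azurefilesharesecretname \
  --from-literal=azurestorageaccountname=myazurestorage \
  --from-literal=azurestorageaccountkey='<storage-account-key>'
If you write the Secret manifest by hand instead, values under data: must be base64-encoded, while stringData: accepts plain text.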

Related

Persistent Volume Claim is not getting bound for AWS EBS volume

I'm new to AWS EKS (Elastic Kubernetes Service). I'm trying to run SonarQube on EKS using a Fargate profile (serverless).
Following are my Kubernetes manifest files:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
  namespace: cicd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:8.6.1-community
          resources:
            requests:
              cpu: 500m
              memory: 1024Mi
            limits:
              cpu: 2000m
              memory: 2048Mi
          volumeMounts:
            - mountPath: "/opt/sonarqube/data/"
              name: sonar-data
            - mountPath: "/opt/sonarqube/extensions/"
              name: sonar-extensions
          env:
            - name: "SONARQUBE_JDBC_USERNAME"
              valueFrom:
                configMapKeyRef:
                  name: configuration
                  key: USERNAME
            - name: "SONARQUBE_JDBC_URL"
              valueFrom:
                configMapKeyRef:
                  name: configuration
                  key: URL
            - name: "SONARQUBE_JDBC_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: db-password
                  key: password
          ports:
            - containerPort: 9000
              protocol: TCP
      volumes:
        - name: sonar-data
          persistentVolumeClaim:
            claimName: ebs-claim-sonar-data
        - name: sonar-extensions
          persistentVolumeClaim:
            claimName: ebs-claim-sonar-extensions
Storage Classes for Sonar data and Sonar extensions:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc-sonar-data
  namespace: cicd
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc-sonar-extensions
  namespace: cicd
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
My Persistent Volume Claims (PVCs):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim-sonar-data
  namespace: cicd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc-sonar-data
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim-sonar-extensions
  namespace: cicd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc-sonar-extensions
  resources:
    requests:
      storage: 50Gi
Please ignore the config maps and secrets. The problem is that the PVCs are not getting bound to the Storage Classes, hence the deployment is failing with the error below:
Pod not supported on Fargate: volumes not supported: sonar-data not supported because PVC ebs-claim-sonar-data not bound, sonar-extensions not supported because: PVC ebs-claim-sonar-extensions not bound
I tried to debug a lot but was not able to find the root cause. I would really appreciate it if someone could help me with this.
Also, just to check: if I create only my StorageClass object with kubectl create -f <sc file>, then I guess I should be able to see EBS volumes getting provisioned in the AWS console, right?
Please assist, thanks.
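When a PVC stays unbound like this, its events usually say why; a quick check, assuming the names and namespace above:
kubectl get pvc -n cicd
kubectl describe pvc ebs-claim-sonar-data -n cicd   # the Events section shows binding/provisioning errors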

Pod/Container directory used to mount nfs path gets empty after deployment

The following is my Kubernetes/OpenShift deployment configuration template, along with its persistent volume and persistent volume claim:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pythonApp
  creationTimestamp: null
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  replicas: 1
  strategy:
    type: Recreate
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: pythonApp
      creationTimestamp: null
    spec:
      hostAliases:
        - ip: "127.0.0.1"
          hostnames:
            - "backend"
      containers:
        - name: backend
          imagePullPolicy: IfNotPresent
          image: <img-name>
          command: ["sh", "-c"]
          args: ['python manage.py runserver']
          resources: {}
          volumeMounts:
            - mountPath: /pythonApp/configs
              name: configs
      restartPolicy: Always
      volumes:
        - name: configs
          persistentVolumeClaim:
            claimName: "configs-volume"
status: {}
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "configs-volume"
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/k8sMount/configs
    server: <server-ip>
---------------------------------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "configs-volume-claim"
  creationTimestamp: null
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: "configs-volume"
After the deployment, when I exec into the container (using oc exec or kubectl exec) and check the /pythonApp/configs folder, it is empty, while it is really supposed to contain some configuration files from the image used.
Is this issue caused by the fact that /pythonApp/configs is mounted onto the persistent NFS volume path /mnt/k8sMount/configs, which will initially be empty?
How could this be solved?
Environment
Kubernetes version: 1.11
OpenShift version: 3.11
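One way to confirm that the empty NFS export is simply shadowing the directory baked into the image is to start the same image without the volume mount and list the path; a sketch, reusing the image placeholder from the manifest:
kubectl run configs-check --rm -it --restart=Never --image=<img-name> -- ls -la /pythonApp/configs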

Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>

I'm trying to install a Redis cluster (StatefulSet) outside of GKE, and when the PVC is being provisioned I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 10s persistentvolume-controller Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error <nil>
I have already added "--cloud-provider=gce" in /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-apiserver.yaml and restarted, but it's still the same.
Can anyone help me, please? What's the trick to making k8s work on GCP?
My manifest, taken from here:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: "us-west2-a"
  reclaimPolicy: Retain
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  ports:
    - port: 6379
      targetPort: 6379
      name: client
    - port: 16379
      targetPort: 16379
      name: gossip
  clusterIP: None
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 5
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:5.0-rc
          ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
          command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"]
          args:
            - --cluster-announce-ip
            - "$(POD_IP)"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 15
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "redis-cli -h $(hostname) ping"
            initialDelaySeconds: 20
            periodSeconds: 3
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
      volumes:
        - name: conf
          configMap:
            name: redis-cluster
            defaultMode: 0755
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          name: redis-cluster
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: slow
        resources:
          requests:
            storage: 5Gi
Please verify your "StorageClass: slow", it seems there is an indentation problem (starting with reclaimPolicy)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: "us-west2-a"
reclaimPolicy: Retain
Update:
Please add --cloud-provider=gce to kube-apiserver.yaml, kube-controller-manager.yaml and the kubelet's KUBELET_KUBECONFIG_ARGS. You can also enable the DefaultStorageClass admission plugin (--enable-admission-plugins=DefaultStorageClass).
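Roughly what that looks like in the controller-manager static pod manifest; a sketch only, since the exact set of flags differs per cluster:
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-controller-manager
        - --cloud-provider=gce   # added flag; keep the existing flags as they are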
Verify the "Cloud API access scopes" permissions in your "VM instance details".
Verify that your StorageClass, PV and PVC are working properly:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: slow
  resources:
    requests:
      storage: 1Gi
Google offers two main types of persistent disk, which are provisioned automatically on kubernetes:
Standard storage (labeled pd-standard)
SSD storage (labeled pd-ssd)
By default, GKE will provision standard storage persistent disks. In fact, that’s the only storage class even available at first.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
You can tell a persistent volume claim to use the new ssd storage class with the following key/value pair: storageClassName: ssd.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ssd-storageclass
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 1Gi
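Applying the class and the claim and watching the claim go from Pending to Bound is a quick sanity check; the file names here are hypothetical:
kubectl apply -f ssd-storageclass.yaml -f ssd-pvc.yaml
kubectl get pvc ssd-storageclass   # should report Bound once the disk is provisioned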

NFS PersistentVolume data lost when deleting K8s deployment

I create a PersistentVolume using NFS as below; when I delete the deployment I lose my data. If I exec into the postgres container, the DB that was created before is not there anymore.
Using AWS EKS, I managed to delete a deployment without losing any data.
Any idea as to why this happens?
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/pv001
    server: 164.10.0.1
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metabase-postgres-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Deployment
...
spec:
  volumes:
    - name: metabase-postgres-storage
      persistentVolumeClaim:
        claimName: metabase-postgres-persistent-volume-claim
...
I had the wrong mountPath; it must be /var/lib/postgresql/data:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: edw-pg
spec:
  serviceName: postgres-cluster-ip-service
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: postgres
          image: postgres:10.7
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: edw-persistent-storage-claim
              mountPath: /var/lib/postgresql/data
              readOnly: false
              subPath: postgres
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
  volumeClaimTemplates:
    - metadata:
        name: edw-persistent-storage-claim
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 50Gi
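A quick way to check that postgres is really writing to the mounted volume rather than the container's writable layer, assuming the pod name produced by the StatefulSet above:
kubectl exec -it edw-pg-0 -- df -h /var/lib/postgresql/data   # should show the volume mount, not overlay
kubectl exec -it edw-pg-0 -- ls /var/lib/postgresql/data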

Multiple Volume mounts with Kubernetes: one works, one doesn't

I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
    - image: my-username/my-project
      name: my-project
      ports:
        - containerPort: 80
          name: nginx-http
        - containerPort: 443
          name: nginx-ssl-https
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /home/projects/my-project/media/upload
          name: pd-data
        - mountPath: /home/projects/my-project/backups
          name: pd2-data
  imagePullSecrets:
    - name: vpregistrykey
  volumes:
    - name: pd-data
      persistentVolumeClaim:
        claimName: pd-claim
    - name: pd2-data
      persistentVolumeClaim:
        claimName: pd2-claim
I am using Persistent Volumes and Persistent Volume Claims, as such:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I have initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
Same goes for the second disk and the second PV and PVC. Everything seems to work fine when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is being mounted correctly (and is therefore persistent) and the other one is being erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name: my-project
...
Volumes:
pd-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd-claim
ReadOnly: false
pd2-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd2-claim
ReadOnly: false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other volumes.
I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
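One reading of that is that one mountPath was nested inside the other; a hypothetical sketch of the problematic layout and a fixed variant with sibling paths:
# overlapping: the second mount sits inside the first
volumeMounts:
  - mountPath: /var/data
    name: vol-a
  - mountPath: /var/data/backups
    name: vol-b
# non-overlapping sibling paths
volumeMounts:
  - mountPath: /var/data
    name: vol-a
  - mountPath: /var/backups
    name: vol-b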
I do not see any direct problem that would cause the behaviour explained above. But what I can suggest you try is using a "Deployment" instead of a "Pod", as suggested by many here, especially when using PVs and PVCs. A Deployment takes care of many things to maintain the "Desired State". I have attached my code below for your reference; it works, and both volumes remain persistent even after deleting/terminating/restarting, as this is managed by the Deployment's desired state.
There are two differences you will find in my code compared to yours:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volume.
Deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
        - name: nginx
          image: vip-intOAM:5001/nginx:1.15.3
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/etc/nginx/conf.d/"
              name: nginx-confd
            - mountPath: "/var/www/"
              name: nginx-web-content
      volumes:
        - name: nginx-confd
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-confd-pvc
        - name: nginx-web-content
          persistentVolumeClaim:
            claimName: glusterfsvol-nginx-web-content-pvc
One of my PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi