Kubernetes cannot mount a volume to a folder

I am following these docs on how to set up a sidecar proxy to my Cloud SQL database. They refer to a manifest on GitHub that, as I find it all over the place in GitHub repos etc., seems to work for 'everyone', but I run into trouble. The proxy container apparently cannot mount /secrets/cloudsql, since it does not start successfully. When I run kubectl logs [mypod] cloudsql-proxy:
invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
So the secret seems to be the problem.
Relevant part of the manifest:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=pqbq-224713:europe-west4:osm=tcp:5432",
            "-credential_file=/secrets/cloudsql/mysecret.json"]
  securityContext:
    runAsUser: 2
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
- name: cloudsql-instance-credentials   # must match the volumeMount name above
  secret:
    secretName: mysecret
To test/debug the secret I mounted the volume into another container that does start, but there the path and file /secrets/cloudsql/mysecret.json do not exist either. However, when I mount the secret to an already EXISTING folder, I find in that folder not the mysecret.json file (as I expected) but the two secrets it contains, i.e. /existingfolder/password and /existingfolder/username. Apparently this is how it works: each key of the secret becomes a file. When I cat these files they give the proper strings, so they seem fine.
So it looks like the path cannot be created by the system. Is this a permission issue? I tried simply mounting to the root ('/') in the proxy container, so no folder, but that gives an error saying it is not allowed. As the image gcr.io/cloudsql-docker/gce-proxy:1.11 is from Google and I cannot get it running, I cannot see which folders it has.
My questions:
Is the mountPath created from the manifest, or does it have to already exist in the container?
How can I get this working?

I solved it. I was using the same secret for the cloudsql-proxy as the one used by the app (env), but it needs to be a key you generate from a service account and then turn into a secret. Then it works. This tutorial helped me through.
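For reference, a minimal sketch of creating such a secret, assuming you downloaded the service-account key from the GCP console as key.json (the file name here is a placeholder):

# create a secret whose single key is the service-account JSON file
kubectl create secret generic mysecret \
    --from-file=mysecret.json=./key.json

The secret then holds a single key, mysecret.json, which the volume mount materializes as /secrets/cloudsql/mysecret.json, exactly the path passed to -credential_file.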

Related

Mounting /etc/default directory problem with SOLR image

I'm deploying a basic "solr:8.9.0" image to a local Kubernetes env.
When I mount the pod's "/var/solr" directory, it works well:
I can see the files inside /var/solr in the mounted directory.
spec:
  containers:
  - image: solr:8.6.0
    imagePullPolicy: IfNotPresent
    name: solr
    ports:
    - name: solrport
      containerPort: 8983
    volumeMounts:
    - mountPath: /var/solr/
      name: solr-volume
  volumes:
  - name: solr-volume
    persistentVolumeClaim:
      claimName: solr-pvc
But somehow I can't mount the "/etc/default/" directory; that doesn't work.
I know there are files inside that directory, but they disappear.
Any idea why?
Thanks!
This is because of how volumeMounts work.
A standard volumeMount mounts the volume at the supplied directory, hiding everything that was originally inside that directory.
You want to specify a subPath for the data you actually want to mount. By doing this the original contents of the directory won't get overridden; see the sketch below.
See here for more information regarding the usage of subPaths.
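A minimal sketch of the subPath approach; the file name solr.in.sh is an assumption (it ships in the Solr image's /etc/default), so adjust it to the file you actually need:

volumeMounts:
- mountPath: /etc/default/solr.in.sh   # only this one file is shadowed
  name: solr-volume
  subPath: solr.in.sh                  # path of the file inside the volume

With this, the rest of /etc/default keeps the contents shipped in the image, because the volume is mounted over a single file instead of the whole directory.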

How to add encryption-provider-config option to kube-apiserver?

I am using Kubernetes version 1.15.7.
I am trying to follow https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration to enable the 'encryption-provider-config' option on 'kube-apiserver'.
I edited the file '/etc/kubernetes/manifests/kube-apiserver.yaml' and provided the option below:
- --encryption-provider-config=/home/rtonukun/secrets.yaml
But after that I get the error below with every kubectl command (e.g. 'kubectl get no'):
The connection to the server 171.69.225.87:6443 was refused - did you specify the right host or port?
Mainly, how do I do these two steps below?
3. Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
4. Restart your API server.
I've reproduced your scenario exactly, and I'll try to explain how I fixed it.
Reproducing the same scenario
Create the encryption config file at /home/koopakiller/encryption.yaml:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: r48bixfj02BvhhnVktmJJiuxmQZp6c0R60ZQBFE7558=
  - identity: {}
Edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and set the --encryption-provider-config flag:
- --encryption-provider-config=/home/koopakiller/encryption.yaml
Save the file and exit.
When I checked the pods' status, I got the same error:
$ kubectl get pods -A
The connection to the server 10.128.0.62:6443 was refused - did you specify the right host or port?
Troubleshooting
Since kubectl was not working anymore, I looked directly at the running containers using the docker command, and saw that the kube-apiserver container had recently been recreated:
$ docker ps
CONTAINER ID   IMAGE                  COMMAND    CREATED        STATUS        PORTS   NAMES
54203ea95e39   k8s.gcr.io/pause:3.1   "/pause"   1 minute ago   Up 1 minute           k8s_POD_kube-apiserver-lab-1_kube-system_015d9709c9881516d6ecf861945f6a10_0
...
Kubernetes stores the logs of created pods in the /var/log/pods directory. I checked the kube-apiserver log file there and found a valuable piece of information:
{"log":"Error: error opening encryption provider configuration file "/home/koopakiller/encryption.yaml": open /home/koopakiller/encryption.yaml: no such file or directory\n","stream":"stderr","time":"2020-01-22T13:28:46.772768108Z"}
Explanation
Taking a look at the manifest file kube-apiserver.yaml, you can see the kube-apiserver command; it runs inside a container, so the encryption.yaml file needs to be mounted into that container.
If you check the volumeMounts in this file, you can see that only the paths below are mounted into the container by default:
/etc/ssl/certs
/etc/ca-certificates
/etc/kubernetes/pki
/usr/local/share/ca-certificates
/usr/share/ca-certificates
...
volumeMounts:
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
- mountPath: /etc/ca-certificates
  name: etc-ca-certificates
  readOnly: true
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
- mountPath: /usr/local/share/ca-certificates
  name: usr-local-share-ca-certificates
  readOnly: true
- mountPath: /usr/share/ca-certificates
  name: usr-share-ca-certificates
  readOnly: true
...
Based on the facts above, we can conclude that the apiserver failed to start because /home/koopakiller/encryption.yaml isn't actually mounted into the container.
How to solve
I can see 2 ways to solve this issue:
1st - Copy the encryption file to /etc/kubernetes/pki (or any of the paths above) and change the path in /etc/kubernetes/manifests/kube-apiserver.yaml:
- --encryption-provider-config=/etc/kubernetes/pki/encryption.yaml
Save the file and wait for the apiserver to restart.
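That copy step, using the paths from earlier in this answer:

sudo cp /home/koopakiller/encryption.yaml /etc/kubernetes/pki/

Since /etc/kubernetes/pki is already listed in the pod's volumeMounts, the file becomes visible inside the container at the same path without touching the volumes at all.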
2nd - Create a new volumeMount in the kube-apiserver.yaml manifest to mount a custom directory from the node into the container.
Let's create a new directory at /etc/kubernetes/secret (the home folder isn't a good location to leave config files =)).
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
...
    - --encryption-provider-config=/etc/kubernetes/secret/encryption.yaml
...
    volumeMounts:
    - mountPath: /etc/kubernetes/secret
      name: secret
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/secret
      type: DirectoryOrCreate
    name: secret
...
After you save the file, Kubernetes will mount the node path /etc/kubernetes/secret at the same path inside the apiserver container. Wait for it to start completely, then try to list your nodes again.
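A couple of checks to confirm recovery (the component=kube-apiserver label is standard on kubeadm clusters, but that's an assumption about your setup):

kubectl get nodes
kubectl -n kube-system get pods -l component=kube-apiserver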
Please let me know if that helped!

Kubernetes init-container file permission mismatch

I have an init container that copies files onto the volume.
I have implemented a security policy that the user id is not relevant and all rights to files are set by group (0), basically the same as the default in OpenShift.
After creating a test instance with emptyDir instead of PVCs, the container crashed. After inspecting the image I found out that the file permissions are broken: only the owner can write.
I double-checked the init container. The files there have write permission for owner and other. I copy them with cp, but the final pod sees these files as writable only by the owner.
To make things worse, the owner has been changed to root, although initially it was another user.
Is this a bug or a feature of emptyDir? Or am I using it in the wrong way?
This is how I declare the volume:
containers:
- name: container
  volumeMounts:
  - name: storage
    mountPath: /var/storage
initContainers:
- name: container-init
  volumeMounts:
  - name: storage
    mountPath: /storage-mount
volumes:
- name: storage
  emptyDir: {}
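For context on the cp step: a plain cp creates the destination files owned by the user the init container runs as (root here) with umask-derived modes, which matches the symptoms described. A sketch of a copy that keeps the group-0 convention, assuming the source files live in /data inside the init image (the path and command are illustrative, not from the question):

initContainers:
- name: container-init
  # image: ... (your init image, as before)
  command: ["sh", "-c", "cp -r /data/. /storage-mount/ && chgrp -R 0 /storage-mount && chmod -R g=u /storage-mount"]
  volumeMounts:
  - name: storage
    mountPath: /storage-mount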

Mount Kubernetes secret as a file with rw permission

I am trying to create a file within a Pod from a Kubernetes secret, but I am facing an issue: I am not able to change the permissions of the deployed files.
I am getting the error below:
chmod: changing permissions of '/root/.ssh/id_rsa': Read-only file system
I have already applied defaultMode & mode for the secret, but it is still not working.
volumes:
- name: gitsecret
  secret:
    secretName: git-keys
volumeMounts:
- mountPath: "/root/.ssh"
  name: gitsecret
  readOnly: false
Thank you!
As you stated, your version of Kubernetes is 1.10 and the documentation for it is available here.
You can have a look at the GitHub link @RyanDawson provided; there you will find that this read-only flag for configMap and secret volumes was intentional. It can be disabled using the feature gate ReadOnlyAPIDataVolumes.
You can follow this guide on Disabling Features Using Feature Gates.
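For illustration, ReadOnlyAPIDataVolumes is a kubelet-level gate, so it would be switched off wherever your kubelet flags are set; this is a sketch, not a drop-in line, since the location depends on how your nodes run the kubelet:

kubelet ... --feature-gates=ReadOnlyAPIDataVolumes=false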
As a workaround, you can try this approach:
containers:
- name: apache
  image: apache:2.4
  lifecycle:
    postStart:
      exec:
        command: ["chown", "www-data:www-data", "/var/www/html/app/etc/env.php"]
You can find the explanation in the Kubernetes docs: Attach Handlers to Container Lifecycle Events.
There has been some back and forth over this, but presumably you are on a k8s version where configMap and secret volumes are read-only no matter how you set the flag; the issue is https://github.com/kubernetes/kubernetes/issues/62099. I think you'll need to follow the advice on there and create an emptyDir volume to copy the relevant files into, as sketched below.
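A minimal sketch of that emptyDir workaround, reusing the names from the question above; the busybox image and the chmod mode are assumptions:

initContainers:
- name: copy-ssh
  image: busybox
  command: ["sh", "-c", "cp /secret/* /ssh/ && chmod 600 /ssh/id_rsa"]
  volumeMounts:
  - name: gitsecret    # the read-only secret volume
    mountPath: /secret
  - name: ssh-dir      # the writable emptyDir
    mountPath: /ssh
containers:
- name: app
  volumeMounts:
  - name: ssh-dir
    mountPath: /root/.ssh
volumes:
- name: gitsecret
  secret:
    secretName: git-keys
- name: ssh-dir
  emptyDir: {}

The main container then sees /root/.ssh as an ordinary writable directory.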

Mounting client.crt, client.key, ca.crt with a service-account or otherwise?

Has anyone used service accounts to mount SSL certificates to access the AWS cluster from within a running job before? How do we do this? I created the job, and this is from the output of the failing container, which is causing the Pod to be in an error state:
Error in configuration:
* unable to read client-cert /client.crt for test-user due to open /client.crt: no such file or directory
* unable to read client-key /client.key for test-user due to open /client.key: no such file or directory
* unable to read certificate-authority /ca.crt for test-cluster due to open /ca.crt: no such file or directory
The solution is to create a Secret containing the certs, and then have the job reference it.
Step 1. Create secret:
kubectl create secret generic job-certs --from-file=client.crt --from-file=client.key --from-file=ca.crt
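If you want to double-check what went into it (the key names become the file names at the mount path):

kubectl describe secret job-certs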
Step 2. Reference the secret in the job's manifest. You have to insert the volumes and volumeMounts into the job:
spec:
  volumes:
  - name: ssl
    secret:
      secretName: job-certs
  containers:
  - name: job   # hypothetical name; keep your existing container spec here
    volumeMounts:
    - mountPath: "/etc/ssl"
      name: "ssl"