I am trying to create a file within a Pod from a Kubernetes Secret, but I am not able to change the permissions of the deployed files.
I am getting the following error:
chmod: changing permissions of '/root/.ssh/id_rsa': Read-only file system
I have already tried defaultMode and mode on the volume, but it still does not work.
volumes:
  - name: gitsecret
    secret:
      secretName: git-keys

volumeMounts:
  - mountPath: "/root/.ssh"
    name: gitsecret
    readOnly: false
thank you
As you stated, your version of Kubernetes is 1.10, and the documentation for it is available here.
You can have a look at the GitHub link @RyanDawson provided; there you will find that this read-only flag for ConfigMaps and Secrets was intentional. It can be disabled using the feature gate ReadOnlyAPIDataVolumes.
You can follow this guide on Disabling Features Using Feature Gates.
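For illustration only: on clusters where the gate still exists, it would be passed to the kubelet roughly like this (a sketch; the exact component and flag placement depend on how your cluster was provisioned, and this gate was removed in later releases, where these volumes are always read-only):

# Sketch: disable the read-only behaviour of Secret/ConfigMap volumes (k8s ~1.8-1.10)
kubelet --feature-gates=ReadOnlyAPIDataVolumes=false ...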
As a workaround, you can try this approach:
containers:
  - name: apache
    image: apache:2.4
    lifecycle:
      postStart:
        exec:
          command: ["chown", "www-data:www-data", "/var/www/html/app/etc/env.php"]
You can find an explanation in the Kubernetes docs: Attach Handlers to Container Lifecycle Events.
There has been some back and forth over this, but presumably you are on a k8s version where ConfigMap and Secret volumes are read-only no matter how you set the flag; the issue is https://github.com/kubernetes/kubernetes/issues/62099. I think you'll need to follow the advice there and create an emptyDir volume to copy the relevant files into.
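For illustration, a minimal sketch of that workaround: mount the Secret read-only in an init container, copy its files into an emptyDir with the permissions you need, and mount the emptyDir in the main container (the container names, image, and paths here are hypothetical):

initContainers:
  - name: copy-ssh-keys
    image: busybox
    # Copy from the read-only Secret mount into the writable emptyDir, then chmod
    command: ["sh", "-c", "cp /secret/* /ssh/ && chmod 600 /ssh/id_rsa"]
    volumeMounts:
      - name: gitsecret
        mountPath: /secret
      - name: ssh-dir
        mountPath: /ssh
containers:
  - name: app
    image: your-app-image   # hypothetical
    volumeMounts:
      - name: ssh-dir
        mountPath: /root/.ssh
volumes:
  - name: gitsecret
    secret:
      secretName: git-keys
  - name: ssh-dir
    emptyDir: {}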
I need to copy a file into my Pod at creation time. I don't want to use ConfigMaps or Secrets. I am trying to create a volume mount and copy the source file using the kubectl cp command. My manifest looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: copy
  labels:
    app: hello
spec:
  containers:
    - name: init-myservice
      image: bitnami/kubectl
      command: ['kubectl', 'cp', './test.json', 'init-myservice:./data']
      volumeMounts:
        - name: my-storage
          mountPath: data
    - name: init-myservices
      image: nginx
      volumeMounts:
        - name: my-storage
          mountPath: data
  volumes:
    - name: my-storage
      emptyDir: {}
But I am getting a CrashLoopBackOff error. Any help or suggestion is highly appreciated.
It's not possible.
Let me explain: you need to think of it as two different machines. Your local machine is where the file exists, and you want to copy it to another machine with cp; that is exactly what you are attempting here, copying a file from your machine into the Pod's filesystem. But a container inside the Pod has no access to files on your machine, so the copy cannot work this way.
What you can do instead is build your own Docker image for the init container and copy the file you want into it before building. The init container can then copy that file into the shared volume where you want it (see the sketch below).
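For illustration, a minimal sketch of that approach (the image name and file paths are hypothetical):

# Dockerfile: bake the file into a custom init-container image
FROM busybox
COPY test.json /config/test.json

# Pod fragment: the init container copies the baked-in file into the shared volume
initContainers:
  - name: copy-config
    image: my-registry/config-copier:latest   # built from the Dockerfile above
    command: ["cp", "/config/test.json", "/data/test.json"]
    volumeMounts:
      - name: my-storage
        mountPath: /data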
I agree with the answer provided by H.R. Emon; it explains why you can't just run kubectl cp inside the container. I also think there are some resources worth adding to show how you can tackle this particular setup.
For this particular use case it is recommended to use an initContainer.
initContainers - specialized containers that run before app containers in a Pod. Init containers can contain utilities or setup scripts not present in an app image.
Kubernetes.io: Docs: Concepts: Workloads: Pods: Init-containers
You could use the example from the official Kubernetes documentation (assuming that downloading your test.json is feasible):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://info.cern.ch
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
-- Kubernetes.io: Docs: Tasks: Configure Pod Initialization: Create a pod that has an initContainer
You can also modify the above example to your specific needs.
Also, referring to your particular example, there are some things that you will need to be aware of:
To use kubectl inside of a Pod, you will need the required permissions to access the Kubernetes API. You can get them by using a serviceAccount with appropriate permissions (see the RBAC sketch after this list). More can be found in these links:
Kubernetes.io: Docs: Reference: Access authn authz: Authentication: Service account tokens
Kubernetes.io: Docs: Reference: Access authn authz: RBAC
Your bitnami/kubectl container will run into CrashLoopBackOff errors because you're passing a single command that runs to completion. After that, the container reports status Completed and is restarted, resulting in the aforementioned CrashLoopBackOff. To avoid that, you would need to use an initContainer.
You can read more about what is happening in your setup by following this answer (connected with the previous point):
Stackoverflow.com: Questions: What happens one of the container process crashes in multiple container POD?
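For illustration, a minimal RBAC sketch for the first point (all names are hypothetical; kubectl cp needs the pods/exec permission because it streams files over exec):

# ServiceAccount plus a Role allowing exec into Pods, bound together
apiVersion: v1
kind: ServiceAccount
metadata:
  name: copier
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec"]
    verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: copier-pod-exec
subjects:
  - kind: ServiceAccount
    name: copier
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io

The Pod would then reference it with serviceAccountName: copier in its spec.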
Additional resources:
Kubernetes.io: Pod lifecycle
A side note!
I also consider it important to include the reason why Secrets and ConfigMaps cannot be used in this particular setup.
I have a simple StatefulSet with two containers. I just want to share a path between them using an emptyDir volume:
volumes:
  - name: shared-folder
    emptyDir: {}
The first container is a busybox:
- image: busybox
  name: test
  command:
    - sleep
    - "3600"
  volumeMounts:
    - mountPath: /cache
      name: shared-folder
The second container creates a file on /cache/<POD_NAME>. I want to mount both paths within the emptyDir volume to be able to share files between containers.
volumeMounts:
  - name: shared-folder
    mountPath: /cache/$(HOSTNAME)
Problem: the second container doesn't resolve /cache/$(HOSTNAME), so instead of mounting /cache/pod-0 it mounts the literal path /cache/$(HOSTNAME). I have also tried getting the POD_NAME and setting it as an env variable, but that doesn't resolve either.
Does anybody know if it is possible to use a path like this (with env variables) in the mountPath attribute?
To use an env variable in the mount path, you can use subPathExpr, which expands environment variables (k8s v1.17+).
In your case it would look like the following:
containers:
  - env:
      - name: MY_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - mountPath: /cache
        name: shared-folder
        subPathExpr: $(MY_POD_NAME)
I tested this, and with plain Kubernetes (k8s < 1.16) and env variables it isn't possible to achieve what you want: the variable only becomes accessible after the pod gets deployed, and you're referencing it before that happens.
You can use Helm to define your mountPath and StatefulSet name from the same value in the values.yaml file: read that value and set it both as the mountPath and as the StatefulSet name. You can see more about this here.
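For illustration, a rough sketch of that Helm approach (the value name and chart layout are hypothetical; note that StatefulSet pods get an ordinal suffix such as -0, so this keeps the base name in sync rather than the exact pod name):

# values.yaml
baseName: my-app

# templates/statefulset.yaml fragment: the same value feeds both fields
metadata:
  name: {{ .Values.baseName }}
spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: shared-folder
              mountPath: /cache/{{ .Values.baseName }}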
Edit:
Follow Matt's answer if you are using k8s 1.17 or higher.
The problem is that YAML configuration files are POSTed to Kubernetes exactly as they are written. This means that you need to create a templated YAML file in which you can replace the referenced variables with concrete values before submitting it.
As this is a known "quirk" of Kubernetes, there already exist tools to circumvent the problem; Helm is one of those tools and is very pleasant to use.
I have an init container that copies files onto the volume.
I have implemented a security policy where the user ID is not relevant and all rights to files are granted via the group (0), basically the same as the default in OpenShift.
After creating a test instance with emptyDir instead of PVCs, the container crashed. After inspecting the image, I found out that the file permissions are broken: only the owner can write.
I have double-checked the init container. The files there have write permission for owner and other; I copy them with cp. But the final pod sees these files as writable only by the owner.
To make things worse, the owner has been changed to root, although initially it was another user.
Is this a bug or a feature of emptyDir? Or am I using it in the wrong way?
This is how I declare the volume:
containers:
  - name: container
    volumeMounts:
      - name: storage
        mountPath: /var/storage
initContainers:
  - name: container-init
    volumeMounts:
      - name: storage
        mountPath: /storage-mount
volumes:
  - name: storage
    emptyDir: {}
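A note on the likely mechanism: plain cp creates the target files as the copying user (root, if the init container runs as root) with the mode filtered through the umask, so group-write bits can be lost. A sketch of a copy step that preserves modes and then re-asserts the group policy (the source path is hypothetical):

initContainers:
  - name: container-init
    # -a preserves mode/ownership where possible; chgrp/chmod enforce the group-0 policy
    command: ["sh", "-c", "cp -a /source/. /storage-mount/ && chgrp -R 0 /storage-mount && chmod -R g=u /storage-mount"]
    volumeMounts:
      - name: storage
        mountPath: /storage-mount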
I am following these docs on how to set up a sidecar proxy for my Cloud SQL database. They refer to a manifest on GitHub that (as I find it all over the place in GitHub repos etc.) seems to work for 'everyone', but I run into trouble. The proxy container apparently cannot mount /secrets/cloudsql, as it cannot successfully start. When I run kubectl logs [mypod] cloudsql-proxy:
invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
So the secret seems to be the problem.
Relevant part of the manifest:
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=pqbq-224713:europe-west4:osm=tcp:5432",
            "-credential_file=/secrets/cloudsql/mysecret.json"]
  securityContext:
    runAsUser: 2
    allowPrivilegeEscalation: false
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
volumes:
  - name: cloudsql-instance-credential
    secret:
      secretName: mysecret
To test/debug the secret, I mounted the volume into another container that does start, but the path and file /secrets/cloudsql/mysecret.json do not exist there either. However, when I mount the secret into an already existing folder, I find in that folder not the mysecret.json file (as I expected) but the two secrets it contains, so I get /existingfolder/password and /existingfolder/username. (Apparently this is how it works! When I cat these secrets they give the proper strings, so they seem fine.)
So it looks like the path cannot be created by the system. Is this a permission issue? I tried mounting in the proxy container to the root ('/'), so no folder, but that gives an error saying it is not allowed. As the image gcr.io/cloudsql-docker/gce-proxy:1.11 is from Google and I cannot get it running, I cannot see what folders it has.
My questions:
Is the mountPath created from the manifest, or does it have to exist in the container already?
How can I get this working?
I solved it. I was using the same secret for the cloudsql-proxy as the one used by the app (for env variables), but it needs to be a key you generate for a service account, turned into a secret of its own. Then it works. This tutorial helped me through it.
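For reference, creating such a secret from a downloaded service-account key file typically looks like this (the file and secret names here are just examples):

# Create the secret from the JSON key; the key name becomes the file name on mount
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=mysecret.json=./service-account-key.json

With --from-file the secret gets a single key named mysecret.json, which is why it shows up as /secrets/cloudsql/mysecret.json when mounted; a secret created from separate username/password keys mounts as separate files instead, matching what you observed with your app secret.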
I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of ReplicationController creation through Kubernetes, e.g. the way we use the ADD command in a Dockerfile.
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in the way which you are already familiar but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod (see the sketch after this list).
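For illustration, a minimal sketch of that secret-based option (the names are hypothetical; kubectl handles the base64 encoding for you when you use --from-file):

# Store the config file as a secret
kubectl create secret generic nginx-config --from-file=nginx.conf=./nginx.conf

# Pod fragment: mount the secret where the application expects its config
containers:
  - name: nginx
    image: nginx
    volumeMounts:
      - name: nginx-config
        mountPath: /etc/nginx/conf.d
        readOnly: true
volumes:
  - name: nginx-config
    secret:
      secretName: nginx-config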
I believe you can also download the config during container initialization.
See the example below; you could download your config instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}