I have an application which saves a user-uploaded file to disk, does some processing, creates a new file on disk with the processed data, and returns it to the user.
I am migrating this application to Kubernetes, and when I deploy it, it errors out when trying to save the file to local disk.
Any suggestions?
Save your file to an emptyDir volume for this kind of temporary storage.
See the configuration example in the documentation, which uses this kind of volume for a cache:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
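Applied to your application, you would mount the emptyDir at whatever directory the app uses for the uploaded and processed files. A minimal sketch, where the image name and the /app/scratch path are only placeholders for your own values:
apiVersion: v1
kind: Pod
metadata:
  name: file-processor
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder: your application image
    volumeMounts:
    - mountPath: /app/scratch   # placeholder: the path your app writes files to
      name: scratch-volume
  volumes:
  - name: scratch-volume
    emptyDir: {}
An emptyDir lives only as long as the pod, so data written there is gone when the pod goes away, which is fine for this kind of per-request temporary file.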
I have a pod that reads from an image that contains data within /var/www/html. I want this data to be stored in a persistent volume. This is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      containers:
      - name: app
        image: my/toolkit-app:working
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html
          name: toolkit-volume
          subPath: html
      volumes:
      - name: toolkit-volume
        persistentVolumeClaim:
          claimName: azurefile
      imagePullSecrets:
      - name: my-cred
However, when I look into the pod, I can see the directory is empty.
If I comment out the persistent volume:
#volumeMounts:
#  - mountPath: /var/www/html
#    name: toolkit-volume
#    subPath: html
I can see that the image data is there.
So it seems like the persistent volume is overwriting the existing directory. Is there a way around this? Ideally I want /var/www/html to be stored in a separate volume, and for any existing files within the image to be stored there too.
This is more a problem of visibility: if you mount an empty volume at a specific path, you can no longer see what was placed there in the container image.
From your question I assume that you want to be able to roll out updates by means of a new container image, but at the same time retain variable data that your application created in that same directory.
You could achieve this with the following method:
Use an init container with the same image and mount your persistent directory at a different path, for example /data.
As the command for the init container, copy the contents of /var/www/html to /data.
In the regular container use the mount you already have; it will then contain your variable data plus the updated data from the init container (see the sketch below).
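A minimal sketch of that init container, reusing the image, volume, and claim names from your Deployment and assuming the image has a POSIX shell and cp available (only the added initContainers block is shown; the app container, volumes and imagePullSecrets stay as they are):
    spec:
      initContainers:
      - name: copy-html
        image: my/toolkit-app:working
        # copy the files baked into the image onto the persistent volume
        command: ["sh", "-c", "cp -a /var/www/html/. /data/"]
        volumeMounts:
        - mountPath: /data
          name: toolkit-volume
          subPath: html
      containers:
      # ...unchanged from the Deployment above
On the first rollout the init container seeds the volume; on later rollouts it refreshes the copied files, while anything else your application wrote to the volume is kept.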
I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create the volume mount in a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      containers:
      - name: k8s-webapp-test
        image: dockerstore/k8s-webapp-test:1.0.4
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/secrets"
          readOnly: false
So, after the deployment, I have 2 pods with the volume mounted at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, which is located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the mount with readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to the desired location, where you might be able to modify it.
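A sketch of what that could look like in your Deployment (the writable-secrets emptyDir is an addition of this example, and busybox with a POSIX shell is used for brevity; on your Windows nodes you would use a Windows base image and a PowerShell copy command instead):
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      - name: writable-secrets
        emptyDir: {}
      initContainers:
      - name: copy-secret
        image: busybox
        # copy the read-only secret files into the writable emptyDir
        command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
        volumeMounts:
        - name: secret-volume
          mountPath: /secrets-ro
        - name: writable-secrets
          mountPath: /secrets
The main container would then mount writable-secrets at /secrets instead of mounting the secret volume directly.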
As an alternative to the init container you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created:
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/sh"
      - "-c"
      - >
        cp -r /secrets ~/secrets;
You can create secrets from within a Pod but it seems you need to utilize the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
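A hedged sketch of how that could look from inside a pod, assuming the pod's service account has been granted RBAC permission to create secrets (the secret name and payload below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: secret-writer
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
    - >
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token);
      NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace);
      curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -H "Authorization: Bearer $TOKEN"
      -H "Content-Type: application/json"
      -X POST "https://kubernetes.default.svc/api/v1/namespaces/$NS/secrets"
      -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"generated-secret"},"type":"Opaque","stringData":{"key":"value"}}'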
I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed once the volume is mounted over it.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirect into /data/config actually works
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath: {}  # note: on a real cluster hostPath requires a path field
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
I'm trying to deploy Nexus3 as a Kubernetes pod in the IBM Cloud service. I am getting this error, probably because the PVC is mounted as read-only for that user. I have had this problem before, with Postgres for example, but I can't recall how to solve it:
mkdir: cannot create directory '../sonatype-work/nexus3/log': Permission denied
mkdir: cannot create directory '../sonatype-work/nexus3/tmp': Permission denied
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file ../sonatype-work/nexus3/log/jvm.log due to No such file or directory
Warning: Cannot open log file: ../sonatype-work/nexus3/log/jvm.log
Warning: Forcing option -XX:LogFile=/tmp/jvm.log
Unable to update instance pid: Unable to create directory /nexus-data/instances
/nexus-data/log/karaf.log (No such file or directory)
Unable to update instance pid: Unable to create directory /nexus-data/instances
These are the PVC and Pod yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "ibmc-file-retain-bronze"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
apiVersion: v1
kind: Pod
metadata:
  name: nexus
  labels:
    name: nexus
spec:
  containers:
  - name: nexus
    image: sonatype/nexus3
    ports:
    - containerPort: 8081
    volumeMounts:
    - name: nexus-data
      mountPath: /nexus-data
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: nexus-data
    persistentVolumeClaim:
      claimName: nexus-pvc
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Madrid
The nexus3 Dockerfile is structured such that it runs as a non-root user. However, the NFS file storage requires the root user to access and write to it. There are a couple of ways to fix this. One, you can restructure your Dockerfile to temporarily add the non-root user to root and change the volume mount permissions. Here are the instructions for that: https://console.bluemix.net/docs/containers/cs_storage.html#nonroot
Another option is to run an initContainer (https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that changes the mount path ownership before the main container runs. The initContainer would look something like this:
initContainers:
- name: permissionsfix
  image: ubuntu:latest
  command: ["/bin/sh", "-c"]
  args:
  - >
    chown 1000:1000 /mount;
  volumeMounts:
  - name: volume
    mountPath: /mount
File storage has these permission issues. Instead of using file-based volumes, use block-based volumes.
Install the block storage plugin and update your resources to use the newly available storage classes. An example of usage:
storage:
  type: persistent-claim
  size: 100Gi
  deleteClaim: false
  class: "ibmc-block-retain-bronze"
I need to be able to run a shell script (my script initializes my db cluster) to initialize my pods in Kubernetes.
I don't want to create the script inside my Dockerfile, because I get my image directly from the web and I don't want to touch it.
So I want to know if there is a way to get my script into one of my volumes so I can execute it like this:
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["./init.sh"]
  restartPolicy: OnFailure
It depends on what exactly your init script does, but init containers should be helpful in such cases. Init containers run before the main application container is started and can do preparation work such as creating configuration files.
You would still need your own Docker image, but it doesn't have to be the same image as the database one.
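A minimal sketch of that idea, using an emptyDir that an init container fills with the script before the main container runs (the busybox image and the script contents below are only placeholders; in practice the init container would be your own image that already ships the script):
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  restartPolicy: OnFailure
  volumes:
  - name: scripts
    emptyDir: {}
  initContainers:
  - name: provide-script
    image: busybox
    # write (or fetch) the init script into the shared volume
    command: ["sh", "-c", "echo 'echo initializing db cluster' > /scripts/init.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
  containers:
  - name: command-demo-container
    image: debian
    # run the script through sh so no shebang or chmod is needed
    command: ["sh", "/scripts/init.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts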
I finally decided to take the approach of creating a ConfigMap that holds the script we want to run, and then mounting that ConfigMap as a volume.
This is a short explanation:
In my pod.yaml file there is a volumeMount at /pgconf, which is the directory the Docker image reads SQL scripts from and runs them when the pod starts.
Inside volumes I put the ConfigMap name (postgres-init-script-configmap), which is the name defined inside the configmap.yaml file.
There is no need to create the ConfigMap by hand; the pod will pick up the configuration from the ConfigMap file as long as you place it in the same directory as the pod.yaml.
My pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: "{{.Values.container.name.primary}}"
  labels:
    name: "{{.Values.container.name.primary}}"
spec:
  securityContext:
    fsGroup: 26
  restartPolicy: {{default "Always" .Values.restartPolicy}}
  containers:
  - name: {{.Values.container.name.primary}}
    image: "{{.Values.image.repository}}/{{.Values.image.container}}:{{.Values.image.tag}}"
    ports:
    - containerPort: {{.Values.container.port}}
    env:
    - name: PGHOST
      value: /tmp
    - name: PG_PRIMARY_USER
      value: primaryuser
    - name: PG_MODE
      value: primary
    resources:
      requests:
        cpu: {{ .Values.resources.cpu }}
        memory: {{ .Values.resources.memory }}
    volumeMounts:
    - mountPath: /pgconf
      name: init-script
      readOnly: true
  volumes:
  - name: init-script
    configMap:
      name: postgres-init-script-configmap
My configmap.yaml (which contains the SQL script that will initialize the DB):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-script-configmap
data:
  setup.sql: |-
    CREATE USER david WITH PASSWORD 'david';