With Okteto Cloud, in order to let different pods/deployments access a shared PersistentVolumeClaim, I tried setting the PersistentVolumeClaim's accessModes to "ReadWriteMany":
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "pv-claim-cpdownloads"
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    }
  }
}
Applying my deployment with kubectl succeeds, but the deployment itself times out in the Okteto web UI with the error:
pod has unbound immediate PersistentVolumeClaims (repeated 55 times)
Now, the same PersistentVolumeClaim with accessModes set to "ReadWriteOnce" deploys just fine.
Is the accessMode "ReadWriteMany" disallowed on Okteto Cloud?
If it is, how could I get several pods/deployments to access the same volume data?
To be precise, in my case I think I technically only need one pod to write to the volume and another one to read from it.
My use case is to have one container save files to a folder, and another container watch for changes and load files from that same folder.
Okteto Cloud only supports the "ReadWriteOnce" access mode.
If you share the volume between pods/deployments they will all be scheduled to the same node, which is equivalent to having a single reader/writer. But it is not a recommended practice.
What is your use case? Why do you need to share volumes?
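For the file-drop use case described in the question, one workaround that stays within ReadWriteOnce is to run the writer and the reader as two containers in the same pod, sharing a single volume mount backed by the existing claim. A minimal sketch (the image names and mount path are placeholders, not from the original setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-drop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: file-drop
  template:
    metadata:
      labels:
        app: file-drop
    spec:
      volumes:
        - name: shared-files
          persistentVolumeClaim:
            claimName: pv-claim-cpdownloads
      containers:
        - name: writer             # saves files into the shared folder
          image: my-writer:latest  # placeholder image
          volumeMounts:
            - name: shared-files
              mountPath: /data
        - name: watcher            # watches the same folder and loads new files
          image: my-watcher:latest # placeholder image
          volumeMounts:
            - name: shared-files
              mountPath: /data

Because both containers run in the same pod, they always land on the same node and the ReadWriteOnce restriction never gets in the way.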
Related
I have a Kubernetes deployment (Apache Flume, to be exact) which needs to store persistent data. It has a PVC set up and bound to a path, which works without problems.
When I simply increase the scale of the deployment through the Kubernetes dashboard, it gives me an error saying multiple pods are trying to attach the same persistent volume. My deployment description is something like this (I tried to remove irrelevant parts):
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "myapp-deployment",
    "labels": {
      "app": "myapp",
      "name": "myapp-master"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "myapp",
        "name": "myapp-master"
      }
    },
    "template": {
      "spec": {
        "volumes": [
          {
            "name": "myapp-data",
            "persistentVolumeClaim": {
              "claimName": "myapp-pvc"
            }
          }
        ],
        "containers": [
          {
            "name": "myapp",
            "resources": {},
            "volumeMounts": [
              {
                "name": "myapp-data",
                "mountPath": "/data"
              }
            ]
          }
        ]
      }
    },...
Each pod should get its own persistent space (but with the same pathname), so one doesn't mess with the others'. I tried adding a new volume to the volumes array above and a volume mount to the volumeMounts array, but it didn't work (I guess that meant "bind two volumes to a single container").
What should I change to have 2 pods with separate persistent volumes? What should I change to have N pods and N PVCs so I can freely scale the deployment up and down?
Note: I saw a similar question here which explains that N pods cannot be done using deployments. Is it possible to do what I want with only 2 pods?
You should use a StatefulSet for that. This is for pods with persistent data that should survive a pod restart. Replicas have a defined order and are named accordingly (my-app-0, my-app-1, ...). They are stopped and restarted in this order and will mount the same volume after a restart/update.
With a StatefulSet you can use volumeClaimTemplates to dynamically create a new PersistentVolumeClaim for each new pod. So every time a pod is created, a volume gets provisioned by your storage class.
From docs:
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner
volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
See docs for more details:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components
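Putting this together for the Deployment in the question, a sketch of a StatefulSet where each replica gets its own PVC mounted at the same path could look like the following (the image and storage size are assumptions; the headless Service referenced by serviceName would need to exist as well):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp              # headless Service providing stable pod identities
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest     # placeholder image
          volumeMounts:
            - name: myapp-data
              mountPath: /data    # same pathname in every pod
  volumeClaimTemplates:
    - metadata:
        name: myapp-data          # each pod gets its own PVC: myapp-data-myapp-0, myapp-data-myapp-1, ...
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi

Scaling the StatefulSet up creates a new PVC for the new pod; scaling down leaves the existing PVCs in place, so the data survives.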
Each pod should get its own persistent space (but with same pathname), so one doesn't mess with the others'.
For this reason, use a StatefulSet instead. Most things will work the same way, except that each Pod will get its own unique Persistent Volume.
In my Docker setup, I maintain a targets.json file which is dynamically updated with targets to probe. The file starts empty but is appended with targets during certain use cases.
Sample targets.json:
[
  {
    "targets": [
      "x.x.x.x"
    ],
    "labels": {
      "app": "testApp1"
    }
  },
  {
    "targets": [
      "x.x.x.x"
    ],
    "labels": {
      "app": "testApp2"
    }
  }
]
This file is then provided to the Prometheus configuration as file_sd_configs. Everything works fine: targets get added to the targets.json file by some event in the application, and Prometheus starts monitoring them, along with Blackbox for health checks.
scrape_configs:
  - job_name: 'test-run'
    metrics_path: /probe
    params:
      module: [icmp]
    file_sd_configs:
      - files:
          - targets.json
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox:9115
Inside my Node.js application I am able to append data to the targets.json file, but now I am trying to replicate this in Kubernetes on minikube. I tried adding it as a ConfigMap as follows and it works, but I don't want to populate the targets in the configuration; I'd rather maintain a JSON file.
Can this be done using Persistent Volumes? The pod running Prometheus will always read the targets file, and the pod running the application will write to the targets file.
kind: ConfigMap
apiVersion: v1
metadata:
  name: prometheus-cm
data:
  targets.json: |-
    [
      {
        "targets": [
          "x.x.x.x"
        ],
        "labels": {
          "app": "testApp1"
        }
      }
    ]
Simply put, what strategy is recommended in Kubernetes so that one pod can read a JSON file and another pod can write to that file?
In order to achieve your goal you need to use a PVC:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
The JSON file needs to be persisted if one pod has to write to it and another one has to read it. There is an official guide describing that concept in steps:
Create a PersistentVolume
Create a PersistentVolumeClaim
Create a Pod that uses your PersistentVolumeClaim as a volume
I also recommend reading this: Create ReadWriteMany PersistentVolumeClaims on your Kubernetes Cluster as a supplement.
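As a hedged sketch of those steps for the targets.json case (the claim name, image names, mount path, and size are assumptions; on a single-node cluster such as minikube a ReadWriteOnce claim is enough because both pods land on the same node):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: targets-pvc
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
---
# Writer: the Node.js app appends targets to /etc/prometheus-targets/targets.json
apiVersion: v1
kind: Pod
metadata:
  name: targets-writer
spec:
  containers:
    - name: app
      image: my-node-app:latest   # placeholder image
      volumeMounts:
        - name: targets
          mountPath: /etc/prometheus-targets
  volumes:
    - name: targets
      persistentVolumeClaim:
        claimName: targets-pvc
---
# Reader: Prometheus picks up the same file via file_sd_configs
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  containers:
    - name: prometheus
      image: prom/prometheus
      volumeMounts:
        - name: targets
          mountPath: /etc/prometheus-targets
  volumes:
    - name: targets
      persistentVolumeClaim:
        claimName: targets-pvc

Both pods mount the same PVC, so whatever the writer appends to targets.json is visible to Prometheus on its next file_sd scan.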
Is there a way to have a service dynamically deployed in all namespaces of Kubernetes?
Right now, the GlusterFS endpoint (namespace-dependent) is deleted by Kubernetes if the port is not in use anymore.
Example:
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "172.0.0.1"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}
So I made a Service for port 1 to be used all the time, so I don't end up with a missing/deleted endpoint in any namespace.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
spec:
  ports:
    - port: 1
It would be interesting to have the above service deployed dynamically every time someone creates a new namespace.
A DaemonSet is used to deploy exactly one replica per node.
Coming to your question: why do you need to create the same service across namespaces?
It is not supported out of the box, though. However, you can create a custom script to achieve it.
K8s doesn't have any replication of services, pods, deployments, secrets, etc. across namespaces out of the box.
Introducing... the Kubernetes controller/operator pattern.
Deploy a controller pod that has read/list permissions on the namespaces resource. This controller will "watch" the namespaces and deploy whatever resources you want when they show up or change.
To get started building your own operator or controller, have a look at kubebuilder: https://book.kubebuilder.io/
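As a starting point, the permissions such a controller needs could be granted with a ClusterRole like the following sketch (the role name is illustrative; the watch/reconcile logic itself would be scaffolded with kubebuilder):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-watcher
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch"]     # watch namespaces as they are created or changed
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "update"]  # create the glusterfs Service/Endpoints in each new namespace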
According to this post:
https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/
You can't use/mount an NFS share in a pod if the pod does not have a privileged security context.
I am running a pod with an external NFS mount, but I have not specified any security context other than uid/gid. It works fine read/write.
How can I check whether my pod is a normal one or privileged?
You can check this with kubectl get pod yourpod -o json, under .spec.containers[].securityContext or in the metadata.
As an example I created 2 nginx pods:
nginx (with privileged: true)
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.3/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}],\"securityContext\":{\"privileged\":true}}]}}\n",
"securityContext": {
"privileged": true
and
nginx-nonprivileged
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.4/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-nonprivileged\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}\n",
I have 3 different Kubernetes Secrets and I want to mount each one into its own Pod managed by a StatefulSet with 3 replicas.
Is it possible to configure the StatefulSet such that each Secret is mounted into its own Pod?
Not really. A StatefulSet (and any workload controller, for that matter) allows only a single pod definition template (which could have multiple containers). The issue with this is that a StatefulSet is designed to have N replicas, so how can you have N different secrets? It would have to be a "SecretStatefulSet": a different controller.
Some solutions:
You could define a single Kubernetes secret that contains all your required secrets for all of your pods. The downside is that you will have to share the secret between the pods. For example:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  pod1: xxx
  pod2: xxx
  pod3: xxx
  ...
  podN: xxx
Use something like HashiCorp's Vault and store your secrets remotely with keys such as pod1, pod2, pod3, ... podN. You can also use an HSM. This seems to be the more solid solution IMO, but it might take longer to implement.
In all cases, you will have to make sure that the number of secrets matches your number of pods in your StatefulSet.
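For the single-shared-secret option above, one way for each replica to pick out only its own entry is to mount the whole secret and expose the pod name via the downward API, then have the container read the key that matches it. A sketch of the relevant fragment of the StatefulSet pod template, assuming the secret's keys are named after the pods (myapp-0, myapp-1, ...) and the image, path, and command are placeholders:

containers:
  - name: myapp
    image: myapp:latest               # placeholder image
    env:
      - name: POD_NAME                # resolves to myapp-0, myapp-1, ...
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    command: ["sh", "-c", "exec myapp --secret-file=/etc/secrets/$(POD_NAME)"]
    volumeMounts:
      - name: secrets
        mountPath: /etc/secrets       # one file per key of the shared secret
volumes:
  - name: secrets
    secret:
      secretName: mysecret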
This is exactly what you're looking for, I guess: https://github.com/spoditor/spoditor
Essentially, it uses a custom annotation on the PodSpec template, like:
annotations:
  spoditor.io/mount-volume: |
    {
      "volumes": [
        {
          "name": "my-volume",
          "secret": {
            "secretName": "my-secret"
          }
        }
      ],
      "containers": [
        {
          "name": "nginx",
          "volumeMounts": [
            {
              "name": "my-volume",
              "mountPath": "/etc/secrets/my-volume"
            }
          ]
        }
      ]
    }
Now, the nginx container in each Pod of the StatefulSet will try to mount its own dedicated secret, following the pattern my-secret-{pod ordinal}.
You just need to make sure my-secret-0, my-secret-1, and so on exist in the same namespace as the StatefulSet.
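For example, the secret consumed by the first pod would be an ordinary Secret whose name carries the ordinal (the key and value here are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret-0          # mounted by pod 0 of the StatefulSet
type: Opaque
stringData:
  password: placeholder-value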
There are more advanced usages of the annotation in the project's documentation.