According to this post:
https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/
You can't use/mount an NFS share in a pod unless the pod's security context is privileged.
I am running a pod with an external NFS share mounted, but I have not specified any security context other than uid/gid, and it works fine read/write.
How can I check whether my pod is a normal one or a privileged one?
You can check this with kubectl get pods yourpod -o json, under .spec.containers[].securityContext, or in the last-applied-configuration annotation in metadata.
As an example, I created 2 nginx pods:
nginx (with privileged: true)
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.3/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}],\"securityContext\":{\"privileged\":true}}]}}\n",
"securityContext": {
"privileged": true
and
nginx-nonprivileged
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.4/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-nonprivileged\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}\n",
In Azure Kubernetes I want to have a pod with Jenkins in the default namespace that needs to read a secret from my application's namespace.
When I tried, I got the following error:
Error from server (Forbidden): secrets "myapp-mongodb" is forbidden: User "system:serviceaccount:default:jenkinspod" cannot get resource "secrets" in API group "" in the namespace "myapp"
How can I give this Jenkins pod access to read secrets in the 'myapp' namespace?
A Secret is a namespaced resource and can be accessed with the proper RBAC permissions; however, overly broad RBAC permissions may lead to leakage.
You must bind a role to the pod's associated service account. Here is a complete example. I created a new service account for the role binding in this example, but you can use the default service account if you want.
step-1: create a namespace called demo-namespace
kubectl create ns demo-namespace
step-2: create a secret in demo-namespace:
kubectl create secret generic other-secret -n demo-namespace --from-literal foo=bar
secret/other-secret created
step-3: Create a service account (my-custom-sa) in the default namespace.
kubectl create sa my-custom-sa
step-4: Validate that, by default, the service account you created in the last step has no access to the secrets in demo-namespace.
kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
no
step-5: Create a ClusterRole with get and list permissions on secrets.
kubectl create clusterrole role-for-other-user --verb get,list --resource secret
clusterrole.rbac.authorization.k8s.io/role-for-other-user created
step-6: Create a RoleBinding in demo-namespace to bind the ClusterRole created in the last step to the service account.
kubectl create rolebinding role-for-other-user -n demo-namespace --serviceaccount default:my-custom-sa --clusterrole role-for-other-user
rolebinding.rbac.authorization.k8s.io/role-for-other-user created
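For reference, the ClusterRole and RoleBinding created imperatively in steps 5 and 6 are roughly equivalent to applying manifests like these (a declarative sketch of the same permissions):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-for-other-user
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-for-other-user
  namespace: demo-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-for-other-user
subjects:
- kind: ServiceAccount
  name: my-custom-sa
  namespace: default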
step-7: Validate that the service account in the default namespace now has access to the secrets of demo-namespace (note the difference from step 4).
kubectl auth can-i get secret -n demo-namespace --as system:serviceaccount:default:my-custom-sa
yes
step-8: Create a pod in the default namespace and mount the service account you created earlier.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-pod
  name: my-pod
spec:
  serviceAccountName: my-custom-sa
  containers:
  - command:
    - sleep
    - infinity
    image: bitnami/kubectl
    name: my-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
step-9: Validate that you can read the secrets of demo-namespace from inside the pod in the default namespace.
curl -sSk -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/demo-namespace/secrets
{
"kind": "SecretList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "668709"
},
"items": [
{
"metadata": {
"name": "other-secret",
"namespace": "demo-namespace",
"uid": "5b3b9dba-be5d-48cc-ab16-4e0ceb3d1d72",
"resourceVersion": "662043",
"creationTimestamp": "2022-08-19T14:51:15Z",
"managedFields": [
{
"manager": "kubectl-create",
"operation": "Update",
"apiVersion": "v1",
"time": "2022-08-19T14:51:15Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:data": {
".": {},
"f:foo": {}
},
"f:type": {}
}
}
]
},
"data": {
"foo": "YmFy"
},
"type": "Opaque"
}
]
}
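Since the pod runs the bitnami/kubectl image, you could also verify from inside it with kubectl, which picks up the mounted service account token automatically (a sketch, assuming the pod name above):
kubectl exec my-pod -- kubectl get secret other-secret -n demo-namespace -o yaml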
I have a Kubernetes deployment (Apache Flume, to be exact) which needs to store persistent data. It has a PVC set up and bound to a path, which works without problem.
When I simply increase the scale of the deployment through the Kubernetes dashboard, it gives me an error saying multiple pods are trying to attach the same persistent volume. My deployment description is something like this (I tried to remove the irrelevant parts):
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "myapp-deployment",
"labels": {
"app": "myapp",
"name": "myapp-master"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "myapp",
"name": "myapp-master"
}
},
"template": {
"spec": {
"volumes": [
{
"name": "myapp-data",
"persistentVolumeClaim": {
"claimName": "myapp-pvc"
}
}
],
"containers": [
{
"name": "myapp",
"resources": {},
"volumeMounts": [
{
"name": "ingestor-data",
"mountPath": "/data"
}
]
}
]
}
},...
Each pod should get its own persistent space (but with the same path name), so one doesn't mess with the others'. I tried to add a new volume to the volumes array above, and a volume mount to the volumeMounts array, but it didn't work (I guess that meant "bind two volumes to a single container").
What should I change to have 2 pods with separate persistent volumes? What should I change to have N pods and N PVCs, so I can freely scale the deployment up and down?
Note: I saw a similar question here which explains that N pods cannot be done using deployments. Is it possible to do what I want with only 2 pods?
You should use a StatefulSet for that. It is meant for pods with persistent data that should survive a pod restart. Replicas have a defined order and are named accordingly (my-app-0, my-app-1, ...). They are stopped and restarted in this order and will mount the same volume after a restart/update.
With a StatefulSet you can use volumeClaimTemplates to dynamically create a new PersistentVolumeClaim for each new pod, so every time a pod is created a volume gets provisioned by your storage class.
From docs:
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "my-storage-class"
    resources:
      requests:
        storage: 1Gi
See docs for more details:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components
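Putting it together, a minimal StatefulSet for the deployment above might look roughly like this (a sketch; the image name, storage class, and size are placeholders to adapt):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp-image        # placeholder image
        volumeMounts:
        - name: myapp-data        # mounted from the claim template below
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
Each replica then gets its own claim (myapp-data-myapp-0, myapp-data-myapp-1, ...) mounted at the same path.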
Each pod should get its own persistent space (but with same pathname), so one doesn't mess with the others'.
For this reason, use a StatefulSet instead. Most things will work the same way, except that each Pod will get its own unique Persistent Volume.
With Okteto Cloud, in order to let different pods/deployments access a shared PersistentVolumeClaim, I tried setting the PersistentVolumeClaim's accessModes to "ReadWriteMany":
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "pv-claim-cpdownloads"
},
"spec": {
"accessModes": [
"ReadWriteMany"
],
"resources": {
"requests": {
"storage": "10Gi"
}
}
}
}
Applying my deployment with kubectl succeeds, but the deployment itself times out in the Okteto web UI, with the error:
pod has unbound immediate PersistentVolumeClaims (repeated 55 times)
Now, the same PersistentVolumeClaim with accessModes set to "ReadWriteOnce" deploys just fine.
Is the accessMode "ReadWriteMany" disallowed on Okteto Cloud?
If it is, how could I get several pods/deployments to access the same volume data?
To be precise, in my case I think I technically only need one pod to write to the volume and the other one to read from it.
My use case is to have one container save files to a folder, and another container watches changes and loads files from that same folder.
Okteto Cloud only supports the "ReadWriteOnce" access mode.
If you share the volume between pods/deployments, they will all be scheduled to the same node, which is equivalent to having a single reader/writer. But it is not a recommended practice.
What is your use case? why do you need to share volumes?
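For the use case described above (one container saving files and another watching the same folder), one way to stay within "ReadWriteOnce" is to run both containers in a single pod sharing the claim. A rough sketch, with placeholder image names:
apiVersion: v1
kind: Pod
metadata:
  name: downloads-pipeline
spec:
  volumes:
  - name: downloads
    persistentVolumeClaim:
      claimName: pv-claim-cpdownloads
  containers:
  - name: writer                 # saves files to the shared folder
    image: my-writer-image       # placeholder
    volumeMounts:
    - name: downloads
      mountPath: /downloads
  - name: watcher                # watches and loads files from the same folder
    image: my-watcher-image      # placeholder
    volumeMounts:
    - name: downloads
      mountPath: /downloads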
Is it possible to have a service dynamically deployed in all namespaces of k8s?
Right now, the GlusterFS endpoint (which is namespace-dependent) is deleted by k8s if the port is not in use anymore.
Ex:
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "glusterfs"
},
"subsets": [
{
"addresses": [
{
"ip": "172.0.0.1"
}
],
"ports": [
{
"port": 1
}
]
}
]
}
So I made a svc for port 1 to be used all the time, so I don't end up with a missing/deleted endpoint in any ns.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
spec:
  ports:
  - port: 1
It would be interesting to have the above service deployed dynamically every time someone creates a new namespace.
A DaemonSet is used to deploy exactly one replica per node.
Coming to your question: why do you need to create the same service across namespaces?
It is not supported out of the box, though. However, you can create a custom script to achieve it.
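A rough sketch of such a script, assuming the Service manifest above is saved as glusterfs-svc.yaml; it has to be re-run (e.g. from a cron job or CI) whenever a namespace is created:
# apply the same Service into every existing namespace
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -n "$ns" -f glusterfs-svc.yaml
done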
K8s doesn't have any replication of services, pods, deployments, secrets, etc. across namespaces... out of the box.
Introducing... the Kubernetes controller/operator pattern.
Deploy a controller pod that has read/list permissions on the namespaces resource. This controller will "watch" the namespaces and deploy whatever resources you want when they show up or change.
To get started building your own operator or controller, have a look at kubebuilder: https://book.kubebuilder.io/
I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out.
First, I've seen this ticket. Restarting the controller manager doesn't appear to help. As you can see here, the other kube processes were all started after the apiserver, and the apiserver has '--runtime-config=extensions/v1beta1=true' set.
kube 31398 1 0 08:54 ? 00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true
kube 12976 1 0 09:49 ? 00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider=
kube 29489 1 0 11:34 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
However, api-versions only shows v1:
$ kubectl api-versions
Available Server Api Versions: v1
Kubernetes version is 1.2:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled).
$ kubectl get ds -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "ds-test",
"namespace": "dvlp",
"selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test",
"uid": "2d948b18-fa7b-11e5-8a55-00163e245587",
"resourceVersion": "2657499",
"generation": 1,
"creationTimestamp": "2016-04-04T15:37:45Z",
"labels": {
"app": "ds-test"
}
},
"spec": {
"selector": {
"app": "ds-test"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ds-test"
}
},
"spec": {
"containers": [
{
"name": "ds-test",
"image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"currentNumberScheduled": 0,
"numberMisscheduled": 0,
"desiredNumberScheduled": 0
}
}
]
}
Here is my yaml file to create the DaemonSet
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-test
spec:
  selector:
    app: ds-test
  template:
    metadata:
      labels:
        app: ds-test
    spec:
      containers:
      - name: ds-test
        image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp
        ports:
        - containerPort: 8080
Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created:
$ kubectl get pods -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": []
}
(I would have posted this as a comment, if I had enough reputation)
I am confused by your output.
kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled.
But kubectl get ds should fail if extensions/v1beta1 is not enabled. So I can not figure out if extensions/v1beta1 is enabled on your server or not.
Can you try GET masterIP/apis and see if extensions is listed there?
You can also go to masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there.
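For example, from the master itself (a sketch, assuming the insecure local port 8080 that your controller-manager --master flag points at):
$ curl http://127.0.0.1:8080/apis
$ curl http://127.0.0.1:8080/apis/extensions/v1beta1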
Also, I see kubectl version says 1.2, but then kubectl api-versions should not print out the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796).
I had this issue in my cluster (k8s version: 1.9.7).
DaemonSets are controlled by the DaemonSet controller, not the scheduler, so I restarted the controller manager and the problem was solved.
But I think this is an issue in Kubernetes; some related info:
Bug 1469037 - Sometime daemonset DESIRED=0 even this matched node
v1.7.4 - Daemonset DESIRED 0 (for node-exporter) #51785
I was facing a similar issue, then tried searching for the daemonset in the kube-system namespace, as mentioned here: https://github.com/kubernetes/kubernetes/issues/61342
I actually did get the output properly as well.
In any case where the current state of pods does not match the desired state (whether they were created by a DaemonSet, ReplicaSet, Deployment, etc.), I would first check the kubelet on the relevant node:
$ sudo systemctl status kubelet
Or:
$ sudo journalctl -u kubelet
In many cases pods weren't created in my cluster because of errors like:
Couldn't parse as pod (Object 'Kind' is missing in 'null')
This might occur after editing a resource's yaml in an editor like vim.
Try:
$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
The master node cannot accept pods while it carries the NoSchedule taint; this command removes that taint from all nodes.