Kubernetes Ceph Storage Classes for Dynamic Provisioning: executable file not found in $PATH

I am running local Ceph (version 10.2.7) and Kubernetes v1.6.5 in separate clusters. Using a PV and PV Claim I was able to mount the RBD device into a pod.
When I configure Ceph Storage Classes for Dynamic Provisioning, the PV claim gives the error below:
E0623 00:22:30.520160 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.513291 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.513308 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.516768 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.516830 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
I have installed the ceph-common package on all the Kubernetes cluster nodes. All the nodes run CentOS 7.
How can I fix this error?
Thanks,
SR

Well, the internal kubernetes.io/rbd provisioner does not work, which has been known for a very long time and is discussed here, for example.
One should use an external provisioner like the one mentioned here.

The kubelet is trying to run rbd create ....
The rbd command needs to be on the PATH of the kubelet binary.
The kubelet usually runs as root. Check whether you can run rbd create as root. If not, add it to root's PATH, or to the environment of whatever script (systemd?) starts the kubelet.
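A minimal sketch of those checks, assuming CentOS 7 with ceph-common installed and the kubelet managed by systemd (the test image and pool name are assumptions):

# Confirm rbd is visible to root and actually works:
sudo which rbd                                    # e.g. /usr/bin/rbd from ceph-common
sudo rbd create test-image --size 128 --pool rbd  # assumes a pool named "rbd" exists
sudo rbd rm test-image --pool rbd                 # clean up the test image
# Inspect the environment (including PATH) the kubelet unit actually sees:
systemctl show kubelet --property=Environment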

You need to define a new provisioner, rbd-provisioner. See this issue.
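For illustration, a sketch of a StorageClass targeting the external rbd-provisioner (the ceph.com/rbd provisioner name comes from the external-storage project; the monitor address, pool, and secret names are assumptions you must replace with your own):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.1.10:6789        # assumed Ceph monitor address
  pool: kube                         # assumed pool
  adminId: admin
  adminSecretName: ceph-admin-secret # assumed secret names
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret
EOF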

Related

Why does buildah fail running inside a kubernetes container?

Hey, I'm creating a GitLab pipeline and I have a runner in Kubernetes.
In my pipeline I am trying to build the application as a container.
I'm building the container with buildah, which is running inside a Kubernetes pod. While the pipeline is running, kubectl get pods --all-namespaces shows the buildah pod:
NAMESPACE NAME READY STATUS RESTARTS AGE
gitlab-runner runner-wyplq6-h-project-6157-concurrent-0qc9ns 2/2 Running 0 7s
The pipeline runs
buildah login -u gitlab-ci-token -p ${CI_BUILD_TOKEN} ${CI_REGISTRY} and
buildah bud -t ${CI_REGISTRY_IMAGE}/${CI_COMMIT_BRANCH}:${CI_COMMIT_SHA} .
with the Dockerfile using FROM parity/parity:v2.5.13-stable.
buildah bud however fails and prints:
Login Succeeded!
STEP 1: FROM parity/parity:v2.5.13-stable
Getting image source signatures
Copying blob sha256:d1983a67e104e801fceb1850a375a71fe6b62636ba7a8403d9644f308a6a43f9
Copying blob sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83
Copying blob sha256:49ac0bbe6c8eeb959337b336ceaa5c3bbbae81e316025f9b94ede453540f2377
Copying blob sha256:72d77d7d5e84353d77d8a8f97d250120afe3650b85010137961560bce3a327d5
Copying blob sha256:1a0f3a523f04f61db942018321ae122f90d8e3303e243b005e8de9817daf7028
Copying blob sha256:4aae9d2bd9a7a79a688ccf753f0fa9bed5ae66ab16041380e595a077e1772b25
Copying blob sha256:8326361ddc6b9703a60c5675d1e9cc4b05dbe17473f8562c51b78a1f6507d838
Copying blob sha256:92c90097dde63c8b1a68710dc31fb8b9256388ee291d487299221dae16070c4a
Copying config sha256:36be05aeb6426b5615e2d6b71c9590dbc4a4d03ae7bcfa53edefdaeef28d3f41
Writing manifest to image destination
Storing signatures
time="2022-02-08T10:40:15Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: permission denied"
error creating build container: The following failures happened while trying to pull image specified by "parity/parity:v2.5.13-stable" based on search registries in /etc/containers/registries.conf:
* "localhost/parity/parity:v2.5.13-stable": Error initializing source docker://localhost/parity/parity:v2.5.13-stable: pinging docker registry returned: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "docker.io/parity/parity:v2.5.13-stable": Error committing the finished image: error adding layer with blob "sha256:3386e6af03b043219225367632569465e5ecd47391d1f99a6d265e51bd463a83": ApplyLayer exit status 1 stdout: stderr: permission denied
...
I am thinking of two possible causes:
First, the container is built and then stored inside the Kubernetes pod before being transferred to the container registry. Since the pod does not have any persistent storage, the write fails, hence this error.
The second is that the container is built and pushed to the container registry, but for some reason it has no permission to push and fails.
Which one is it? And how do I fix it?
If it is the first reason, do I need to add persistent volume rights to the service account running the pod?
The GitLab runner needs root privileges; add this line under [runners.kubernetes] in the runner configuration:
privileged = true
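For context, a minimal sketch of where that flag lives in the runner's config.toml (the runner name and the file path, typically /etc/gitlab-runner/config.toml, are assumptions):

[[runners]]
  name = "kubernetes-runner"   # assumed name
  executor = "kubernetes"
  [runners.kubernetes]
    privileged = true          # lets buildah apply image layers
Restart the runner afterwards, e.g. with gitlab-runner restart.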

ERROR: failed to create cluster while running kind create cluster

When I run kind create cluster on Ubuntu 20.04, I get this error:
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) đŸ–ŧ
✓ Preparing nodes đŸ“Ļ
✓ Writing configuration 📜
✗ Starting control-plane 🕹ī¸
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Complete logs: https://paste.debian.net/1207493/
What can be the reason for this? I cannot find any relevant solution in the docs or existing GitHub issues.
I had the same problem. After increasing the memory allocated to Docker, the cluster was created successfully.
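If you want to confirm the limit first, a sketch of the checks (on Docker Desktop the value is raised under Settings -> Resources; kind's --retain flag and export logs command keep the failed node around for inspection):

docker info --format '{{.MemTotal}}'   # memory visible to the Docker daemon, in bytes
kind delete cluster
kind create cluster --retain           # keep the node container if it fails again
kind export logs                       # collect kubeadm logs for inspection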

Unable to mount volume to spark.kubernetes.executor

I am trying to read a file from a server in Spark cluster mode on Kubernetes, so I put my file on all workers and I mount the driver volume using
val conf = new SparkConf().setAppName("sparksetuptest")
  .set("spark.kubernetes.driver.volumes.hostPath.host.mount.path", "/file-directory")
Everything works fine here, but when I execute it, it shows that the file is not found at the specified location.
So I mount the directory on the executor with .set("spark.kubernetes.executor.volumes.hostPath.host.mount.path", "/file-directory")
But now I am not able to execute the program; it gets stuck in a never-ending process while fetching data.
Please suggest something so that I can mount my directory on the executor and read that file.
This is an example from nfs-example:
spark.kubernetes.driver.volumes.nfs.images.options.server=example.com
spark.kubernetes.driver.volumes.nfs.images.options.path=/data
I think you need to declare the path that you want to mount under options.path, while spark.kubernetes.driver.volumes.[VolumeType].[VolumeName].mount.path is the mount path inside your container.
For example:
If I want to mount /home/lemon/data on the k8s node to the path /data of the Docker container, with VolumeName exepv, then
conf.set("spark.kubernetes.executor.volumes.hostPath.exepv.mount.path", "/data")
conf.set("spark.kubernetes.executor.volumes.hostPath.exepv.options.path", "/home/lemon/data")
After this, you can access the path /data in your executor container.
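The same settings can also be passed on the command line; a sketch of the equivalent spark-submit form, with the master URL and application jar as assumptions (note that with hostPath the file must exist on every node an executor can land on):

spark-submit \
  --master k8s://https://<api-server>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.volumes.hostPath.exepv.mount.path=/data \
  --conf spark.kubernetes.driver.volumes.hostPath.exepv.options.path=/home/lemon/data \
  --conf spark.kubernetes.executor.volumes.hostPath.exepv.mount.path=/data \
  --conf spark.kubernetes.executor.volumes.hostPath.exepv.options.path=/home/lemon/data \
  local:///path/to/app.jar             # assumed application location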

Why does my autofs service not run on my Linux container?

When I deploy my system using Kubernetes, the autofs service is not running in the container.
Running service autofs status returns the following error:
[FAIL] automount is not running ... failed!
Running service autofs start returns the following error:
[....] Starting automount.../usr/sbin/automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required.
failed (no valid automount entries defined.).
The /etc/fstab file does exist in my file system.
You probably didn't load the kernel module for it. Official documentation: autofs.
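A sketch of that check, run on the node rather than in the container (the module name is autofs on recent kernels, autofs4 on older ones):

lsmod | grep autofs        # is the module already loaded?
sudo modprobe autofs       # load it; try autofs4 on older kernels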
Another possible reason for this error is that the /tmp directory is not present, or its permissions/ownership are wrong.
Check if your /etc/fstab file exists.
Useful blog: nfs-autofs.
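Quick checks inside the container for the last two suggestions, as a sketch:

ls -ld /tmp        # should exist as drwxrwxrwt, owned by root
test -f /etc/fstab && echo "fstab present" || echo "fstab missing"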

Minikube won't start on Mac

Trying to start minikube on Mac. Virtualization is provided by VirtualBox.
$ minikube start
😄 minikube v1.1.0 on darwin (amd64)
đŸ”Ĩ Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
đŸŗ Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
❌ Unable to load cached images: loading cached images: loading image /Users/paul/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.2: Docker load /tmp/kube-proxy_v1.14.2: command failed: docker load -i /tmp/kube-proxy_v1.14.2
stdout:
stderr: open /var/lib/docker/image/overlay2/layerdb/tmp/write-set-542676317/diff: read-only file system
: Process exited with status 1
đŸ’Ŗ Failed to setup certs: pre-copy: command failed: sudo rm -f /var/lib/minikube/certs/ca.crt
stdout:
stderr: rm: cannot remove '/var/lib/minikube/certs/ca.crt': Input/output error
: Process exited with status 1
đŸ˜ŋ Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
Trying minikube delete followed by minikube start produces the same issue.
Docker is running and is signed in.
I also deleted all machines in VirtualBox after minikube delete and I still get the same result.
According to What if I answer a question in a comment?, I am adding this as an answer as well, since many people don't read comments.
You can try deleting the local config in MINIKUBE_HOME before starting minikube:
rm -rf ~/.minikube
Try
minikube delete
and
minikube start
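Putting the two suggestions together, a sketch of a full reset (the path assumes the default MINIKUBE_HOME; --vm-driver matches the flag name used by minikube v1.1.0 as shown above):

minikube delete                        # remove the VM and cluster state
rm -rf ~/.minikube                     # clear cached images, certs, and config
minikube start --vm-driver=virtualbox  # recreate from scratch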