Why doesn't my autofs service run in my Linux container? - kubernetes

When deploying my system using Kubernetes, the autofs service is not running in the container.
Running service autofs status returns the following error:
[FAIL] automount is not running ... failed!
Running service autofs start returns the following error:
[....] Starting automount.../usr/sbin/automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required.
failed (no valid automount entries defined.).
The /etc/fstab file does exist in my file system.

You probably didn't load the kernel module for it (a quick check is sketched below). Official documentation: autofs.
Another possible cause of this error is that the /tmp directory is missing or has the wrong permissions/ownership.
Check that your /etc/fstab file exists.
Useful blog: nfs-autofs.
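Because containers share the host's kernel, the autofs module has to be loaded on the node, not in the container. A minimal sketch of the check, assuming shell access to the host (the module is called autofs4 on older kernels):
lsmod | grep autofs    # is the module already loaded?
sudo modprobe autofs   # load it on the host (try autofs4 on older kernels)
Note that an ordinary unprivileged container cannot load kernel modules itself.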

Related

Lens (K8s client) gives error: exec: executable oci not found

I spent several hours looking at this Lens error for K8s. I installed Python and OCI-CLI for Windows 10 (I downloaded the oci-cli offline installation and ran python install.py) and configured cluster access. Using CMD everything works fine:
kubectl commands work fine, even the get pods command works
But when connecting with Lens it gives me the error
Error getting Credentials: exec: executable oci not found
What am I missing?
I finally found the solution: it was to download kubectl.exe from
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/windows/amd64/kubectl.exe
put it in a folder on the disk, for example c:\kubernetes,
and add that folder to the PATH environment variable.
Then restart the PC. Without a reboot it didn't work.
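For reference, a sketch of the PATH step in PowerShell (assuming the folder is C:\kubernetes, my placeholder; setx writes the user-level PATH and only affects newly opened sessions):
setx PATH "$env:PATH;C:\kubernetes"
Open a new terminal, or reboot as above, so Lens picks up the updated PATH.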

Solaris 11 smbfs mount utility fails to mount SMB 2.0 and newer Windows shares

We are using the smbfs mount utility on Solaris 11 to mount a Windows SMB2 share, but it fails as shown below:
Command (executed as root):
sudo mount -F smbfs -o user=administrator,uid=oracle //win-t370714v98p/TestMnt /mnt/TestMnt
Explanation:
//win-t370714v98p/TestMnt - Windows SMB 2.0 share
/mnt/TestMnt - local mountpoint on the Solaris 11 server
Error: /usr/lib/fs/smbfs/mount: //win-t370714v98p: login failed: syserr = Connection reset by peer
For an SMB 1.0 share the smbfs mount utility mounts successfully, but it fails for SMB 2.0 and newer shares.
Does the Solaris 11 smbfs mount support SMB 2.0?
No, it does not. The maximum protocol version for the client side is SMB1.0, as documented in https://docs.oracle.com/cd/E88353_01/html/E37852/smb-5.html.
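If you control the Windows server, one possible workaround (my suggestion, not part of the original answer) is to re-enable SMB 1.0 on the server so the Solaris client can connect, keeping in mind that SMB 1.0 is deprecated for security reasons:
Set-SmbServerConfiguration -EnableSMB1Protocol $true
On recent Windows versions the SMB 1.0 server feature may also need to be installed before this setting takes effect.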

Minikube Start Error (Kubernetes) When Using hyperv Driver on Windows Server 2016

I am trying to install Kubernetes on Windows Server 2016.
I tried to install minikube, and got some errors.
This is the tutorial that I followed:
https://www.assistanz.com/installing-minikube-on-windows-2016-server/
This is the command + error that I got:
PS C:\Windows\system32> minikube start –vm-driver=hyperv –hyperv-virtual-switch=Minikube
Starting local Kubernetes v1.10.0 cluster...
Starting VM... Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
E1106 19:29:10.616564 11852 start.go:168] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path.
Retrying.
E1106 19:29:10.689675 11852 start.go:174] Error starting host: Error creating host: Error executing step: Running precreate checks.
: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Does anyone know how to solve it?
I googled it, but no luck.
Thanks!
I was never able to get the config parameters to work with minikube start.
I was able to get past this error using the minikube config commands in PowerShell (should also work at a command prompt):
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch ExternalSwitch
minikube config view
minikube delete
minikube start
For more information on the command, run: minikube config -h
Looking at the documentation you have provided, I have noticed that the screenshot shows a slight difference from the command they've quoted.
I have also found this command in another piece of documentation from Kubernetes here, showing the same command as the one in the screenshot.
I suggest you try the following command;
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube
It is true that the OP has pasted an incorrect command, because there is - instead of --. I tried to pass these arguments to minikube and all you get is an instant error, so the issue must be somewhere else. I remember having a similar issue, and it got resolved after deleting the .kube and .minikube folders and trying to run it again; a sketch of that cleanup is below.
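In PowerShell the cleanup looks roughly like this (assuming the folders are in their default locations under the user profile; this discards local cluster state and cached images):
Remove-Item -Recurse -Force $HOME\.kube, $HOME\.minikube
minikube start --vm-driver=hyperv --hyperv-virtual-switch=Minikube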
After taking a closer look, this tutorial is intended for installing minikube inside a Windows Server 2016 virtual machine, so you have to have hardware capable of nested virtualization:
Prerequisites: The Hyper-V host and guest must both be Windows Server 2016/Windows 10 Anniversary Update or later. VM configuration version 8.0 or greater. An Intel processor with VT-x and EPT technology -- nesting is currently Intel-only. There are some differences with virtual networking for second-level virtual machines. See "Nested Virtual Machine Networking".
So the main question is: is that true in your scenario? Are you trying to perform your steps in a Windows Server Hyper-V virtual machine with the nested virtualization feature enabled (a sketch for enabling it is below)?
If you confirm that, I have the technical means to check it in that scenario.
Otherwise I recommend the "traditional way" of running minikube on Windows, following for example this tutorial.
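For completeness, nested virtualization is switched on from the physical Hyper-V host while the guest VM is powered off. A hedged sketch (WinServer2016 is a placeholder VM name):
Set-VMProcessor -VMName WinServer2016 -ExposeVirtualizationExtensions $true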

kubernetes ceph Storage Classes for Dynamic Provisioning executable file not found in $PATH

I am running local Ceph (version 10.2.7) and Kubernetes v1.6.5 in separate clusters. Using a PV and PV Claim I was able to mount the rbd device in the pod.
When I configure Ceph Storage Classes for dynamic provisioning, it gives the below error for the PV claim.
E0623 00:22:30.520160 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.513291 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.513308 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
W0623 00:22:45.516768 1 rbd_util.go:364] failed to create rbd image, output
E0623 00:22:45.516830 1 rbd.go:317] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
I have installed the ceph-common package on all the Kubernetes cluster nodes. All the nodes run CentOS 7.
How can I fix this error message?
Thanks
SR
Well, the internal kubernetes.io/rbd provisioner does not work; this has been known for a very long time and is discussed e.g. here.
One should use an external provisioner like the one mentioned here.
Kubelet is trying to run rbd create ....
The rbd command needs to be in the PATH of the kubelet binary.
Kubelet usually runs as root. Check whether you can run rbd create as root. If not, add it to root's PATH, or to the environment of whatever script (systemd?) starts kubelet. A quick check is sketched below.
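A minimal sketch of that check on a node (the image and pool names are placeholders I chose for illustration):
sudo which rbd                                     # is rbd on root's PATH?
sudo rbd create test-image --size 128 --pool rbd   # can root create an image?
sudo rbd rm test-image --pool rbd                  # clean up the test image
If which finds nothing, install ceph-common on the node or extend PATH in the kubelet systemd unit.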
You need to define a new provisioner, rbd-provisioner. Ref this issue. A sketch of a StorageClass wired to it follows.
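For illustration, a hedged sketch of such a StorageClass (ceph.com/rbd is the provisioner name used by the external rbd-provisioner; the monitor address, pool, and secret names are placeholders, and the provisioner pod itself must already be deployed with a Ceph admin secret):
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.0.0.1:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
EOF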

Can we create volumes inside docker container

I am trying to create logical volumes (like /dev/sdb or so) inside a running CentOS Docker container. If anyone has tried doing so successfully, please help!
After installing lvm2 and running lvmetad, when I tried creating an LV, I got the below error:
bash-4.2# lvcreate -L 2G stackit
/dev/mapper/control: open failed: Operation not permitted
Failure to communicate with kernel device-mapper driver.
Check that device-mapper is available in the kernel.
striped: Required device-mapper target(s) not detected in your
kernel.
Run `lvcreate --help' for more information.
I'm not sure exactly what you are trying to do, but Docker containers by default run with restricted privileges.
Try adding (old way)
--privileged=true
Or (new way)
--cap-add=ALL
to give the container full privileges. Then you can narrow down which capabilities you actually need to give the container; a sketch of both variants is below.
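A minimal sketch of both, using centos:7 purely as an example image:
docker run -it --privileged=true centos:7 bash   # old way: full privileges, host devices visible
docker run -it --cap-add=ALL centos:7 bash       # new way: all capabilities, still no host devices
In practice LVM also needs the host's device nodes (e.g. /dev/mapper/control), so the --privileged run is the one most likely to get lvcreate past this error.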