I have tried desperately to apply a simple pod specification without any luck, even with this previous answer: Mount local directory into pod in minikube
The yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - image: httpd
    name: hostpath-pod
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /tmp/data/
I started the minikube cluster with minikube start --mount-string="/tmp:/tmp" --mount, and there are 3 files in /tmp/data:
ls /tmp/data/
file2.txt file3.txt hello
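A quick way to confirm the mount actually reached the minikube VM, before the pod starts, is to list the directory from inside the node (a diagnostic step, not from the original post):
minikube ssh -- ls -la /tmp/data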
However, this is what I get when I do kubectl describe pods:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m26s default-scheduler Successfully assigned default/hostpath-pod to minikube
Normal Pulled 113s kubelet, minikube Successfully pulled image "httpd" in 32.404370125s
Normal Pulled 108s kubelet, minikube Successfully pulled image "httpd" in 3.99427232s
Normal Pulled 85s kubelet, minikube Successfully pulled image "httpd" in 3.906807762s
Normal Pulling 58s (x4 over 2m25s) kubelet, minikube Pulling image "httpd"
Normal Created 54s (x4 over 112s) kubelet, minikube Created container hostpath-pod
Normal Pulled 54s kubelet, minikube Successfully pulled image "httpd" in 4.364295872s
Warning Failed 53s (x4 over 112s) kubelet, minikube Error: failed to start container "hostpath-pod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/tmp/data" to rootfs at "/data" caused: stat /tmp/data: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning BackOff 14s (x6 over 103s) kubelet, minikube Back-off restarting failed container
Not sure what I'm doing wrong here. If it helps I'm using minikube version v1.23.2 and this was the output when I started minikube:
😄 minikube v1.23.2 on Darwin 11.5.2
▪ KUBECONFIG=/Users/sachinthaka/.kube/config-ds-dev:/Users/sachinthaka/.kube/config-ds-prod:/Users/sachinthaka/.kube/config-ds-dev-cn:/Users/sachinthaka/.kube/config-ds-prod-cn
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
📁 Creating mount /tmp:/tmp ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.18.0, which may have incompatibilites with Kubernetes 1.22.2.
▪ Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Anything I can try? :'(
Update 1
Changing from minikube to microk8s helped, but I'm still not seeing anything inside /data/ in the pod.
Also, changing from /tmp/ to a different folder helped in minikube. Something to do with macOS.
The OP has said that the problem is solved:
changing from /tmp/ to a different folder helped in minikube. Something to do with MacOs
For some reason minikube doesn't like /tmp/
An explanation of this problem:
You cannot mount /tmp to /tmp. The problem isn't with macOS, but with the way you do it. I tried to recreate this problem in several ways. I used Docker and got a very interesting error:
docker: Error response from daemon: Duplicate mount point: /tmp.
This error makes it clear what the problem is. If you mount your directory elsewhere, everything should work (which was confirmed):
Do I understand correctly that when you changed the mount point to a different folder, it worked?
That is correct. For some reason minikube doesn't like /tmp/.
I know you are using hyperkit, whereas in my case I used Docker, but the only difference will be in the message you get on the screen. In the case of Docker, it is very clear.
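For illustration, a minimal sketch of the workaround: mount the host directory at a target path that does not collide with the VM's own /tmp (the path /mnt/data below is an assumption; any path outside /tmp should do):
minikube start --mount-string="/tmp/data:/mnt/data" --mount
# ...and point the hostPath volume at the new location:
#   hostPath:
#     path: /mnt/data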
Related
I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb'
--driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME READY STATUS RESTARTS AGE
controller-5fd6788656-jvj4m 0/1 ImagePullBackOff 0 26m
speaker-ctdmw 0/1 ImagePullBackOff 0 37m
After running eval $(minikube -p myprofile docker-env) and manually pulling through docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified that ping google.com does not return a result.
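One way to narrow this down is to test name resolution from inside the node itself, since the image pull happens there rather than on the host (diagnostic commands; nslookup is typically available in the minikube VM):
minikube ssh -p myprofile
# inside the VM:
nslookup registry-1.docker.io
cat /etc/resolv.conf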
When starting my Minikube profile, I had the following output:
😄 [myprofile] minikube v1.28.0 on Darwin 12.3.1
🆕 Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node myprofile in cluster myprofile
🔄 Restarting existing hyperkit VM for "myprofile" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image metallb/speaker:v0.9.6
▪ Using image metallb/controller:v0.9.6
🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
▪ Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default
Kubernetes cannot pull a public image. Standard images like nginx download successfully, but my pet project does not. I'm using minikube to launch the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-deploumnet
  labels:
    app: api-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
      - name: api-gateway
        image: creatorsprodhouse/api-gateway:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
When I try to create the deployment, I get an error that Kubernetes cannot download my public image.
$ kubectl get pods
result:
NAME READY STATUS RESTARTS AGE
api-gateway-deploumnet-599c784984-j9mf2 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-qzklt 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-csxln 0/1 ImagePullBackOff 0 13m
$ kubectl logs api-gateway-deploumnet-599c784984-csxln
result
Error from server (BadRequest): container "api-gateway" in pod "api-gateway-deploumnet-86f6cc5b65-xdx85" is waiting to start: trying and failing to pull image
What could be the problem? The standard images are downloading but my public one is not. Any help would be appreciated.
EDIT 1
$ kubectl describe pod api-gateway-deploumnet-599c784984-csxln
result:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m22s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-mq4td to minikube
Warning Failed 3m8s kubelet Failed to pull image "creatorsprodhouse/api-gateway:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3m8s kubelet Error: ErrImagePull
Normal BackOff 3m7s kubelet Back-off pulling image "creatorsprodhouse/api-gateway:latest"
Warning Failed 3m7s kubelet Error: ImagePullBackOff
Normal Pulling 2m53s (x2 over 8m21s) kubelet Pulling image "creatorsprodhouse/api-gateway:latest"
EDIT 2
If I pull the image directly with Docker, it works fine:
$ docker pull creatorsprodhouse/api-gateway:latest
result:
Digest: sha256:e664a9dd9025f80a3dd60d157ce1464d4df7d0f8a00538e6a137d44f9f9f12aa
Status: Downloaded newer image for creatorsprodhouse/api-gateway:latest
docker.io/creatorsprodhouse/api-gateway:latest
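Since the pull succeeds on the host, a useful next check is whether it also succeeds from inside the minikube node, which uses its own Docker daemon and network path (a diagnostic sketch, not from the original post):
minikube ssh -- docker pull creatorsprodhouse/api-gateway:latest
If this times out the same way the kubelet does, the problem is the node's network or DNS rather than the image itself.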
EDIT 3
After the advice to restart minikube:
$ minikube stop
$ minikube delete --purge
$ minikube start --cni=calico
I started the pods.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m28s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-bkr28 to minikube
Warning FailedCreatePodSandBox 4m27s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to set up pod "api-gateway-deploumnet-849899786d-bkr28_default" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to teardown pod "api-gateway-deploumnet-849899786d-bkr28_default" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-57e7da7379b524635074e6d0 -m comment --comment name: "crio" id: "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-57e7da7379b524635074e6d0':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
I could not solve the problem in the ways that were suggested. However, it worked when I ran minikube with a different driver:
$ minikube start --driver=none
--driver=none means that the cluster will run directly on your host instead of the standard --driver=docker, which runs the cluster inside Docker.
It is better to run minikube with --driver=docker, as it is safer and easier, but that didn't work for me because I could not download my images. For me personally it is OK to use --driver=none, although it is a bit dangerous.
In general, if anyone knows what the problem is, please answer my question. In the meantime you can try to run minikube cluster on your host with the command I mentioned above.
In any case, thank you very much for your attention!
I'm following the documentation procedure and enabling the registry add-on in minikube.
So I'm running
minikube start --addons registry
kamel install
to start the cluster and install Camel K into it.
But when I run kubectl get pod I get CrashLoopBackOff as the camel-k-operator status.
kubectl get events gave me the following:
LAST SEEN TYPE REASON OBJECT MESSAGE
7m9s Normal Scheduled pod/camel-k-operator-848fd8785b-cr9pp Successfully assigned default/camel-k-operator-848fd8785b-cr9pp to minikube
7m5s Normal Pulling pod/camel-k-operator-848fd8785b-cr9pp Pulling image "docker.io/apache/camel-k:1.9.2"
2m23s Normal Pulled pod/camel-k-operator-848fd8785b-cr9pp Successfully pulled image "docker.io/apache/camel-k:1.9.2" in 4m45.3178036s
42s Normal Created pod/camel-k-operator-848fd8785b-cr9pp Created container camel-k-operator
42s Normal Started pod/camel-k-operator-848fd8785b-cr9pp Started container camel-k-operator
43s Normal Pulled pod/camel-k-operator-848fd8785b-cr9pp Container image "docker.io/apache/camel-k:1.9.2" already present on machine
55s Warning BackOff pod/camel-k-operator-848fd8785b-cr9pp Back-off restarting failed container
7m9s Normal SuccessfulCreate replicaset/camel-k-operator-848fd8785b Created pod: camel-k-operator-848fd8785b-cr9pp
7m9s Normal ScalingReplicaSet deployment/camel-k-operator Scaled up replica set camel-k-operator-848fd8785b to 1
Running kubectl logs [podname] -p I get
{
"level": "error",
"ts": 1658235623.4016757,
"logger": "cmd",
"msg": "failed to set GOMAXPROCS from cgroups",
"error": "path \"/docker/ec4a100d598f3529dbcc3a9364c8caceb32abd8c11632456d58c7948bb756d36\" is not a descendant of mount point root \"/docker/ec4a100d598f3529dbcc3a9364c8caceb32abd8c11632456d58c7948bb756d36/kubelet\" and cannot be exposed from \"/sys/fs/cgroup/rdma/kubelet\"",
"stacktrace": "github.com/apache/camel-k/pkg/cmd.(*operatorCmdOptions).run\n\tgithub.com/apache/camel-k/pkg/cmd/operator.go:57\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v1.4.0/command.go:860\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v1.4.0/command.go:974\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v1.4.0/command.go:902\nmain.main\n\tcommand-line-arguments/main.go:47\nruntime.main\n\truntime/proc.go:225"
}
Formatting the stack trace, we get:
github.com/apache/camel-k/pkg/cmd.(*operatorCmdOptions).run
github.com/apache/camel-k/pkg/cmd/operator.go:57
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra#v1.4.0/command.go:860
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra#v1.4.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra#v1.4.0/command.go:902
main.main
command-line-arguments/main.go:47
runtime.main
runtime/proc.go:225
Camel K Client 1.9.2
minikube v1.25.2
It's probably a bug with the docker driver.
A workaround is to use the hyperv driver instead:
minikube start --addons registry --driver hyperv
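After recreating the cluster with the hyperv driver, reinstalling and watching the pods should show the operator reach Running (an illustrative follow-up, not from the original answer):
kamel install
kubectl get pods -w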
I'm creating KubeVirt in minikube. Initially kubevirt-operator.yaml fails with ImagePullBackOff. After I added the secret in the yaml:
imagePullSecrets:
- name: regcred
containers:
all my virt-operator* pods started to run. The virt-api* pods still show ImagePullBackOff.
The error comes out as
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned kubevirt/virt-api-787487d9cd-t68qf to minikube
Normal Pulling 25m (x4 over 27m) kubelet Pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Warning Failed 25m (x4 over 27m) kubelet Failed to pull image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for us-ashburn-1.ocir.io/xxx/virt-api, repository does not exist or may require 'docker login': denied: Anonymous users are only allowed read access on public repos
Warning Failed 25m (x4 over 27m) kubelet Error: ErrImagePull
Warning Failed 25m (x6 over 27m) kubelet Error: ImagePullBackOff
Normal BackOff 2m26s (x106 over 27m) kubelet Back-off pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Manually, I can pull the same image with docker login. Any help would be much appreciated. Thanks
This Docker image looks like it is in a private registry (and from Oracle), and I assume the regcred is not correct. Can you log in there with docker login? If so, you can create the regcred secret like this:
$ kubectl create secret docker-registry regcred --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>'
Also check this Oracle tutorial, where you can find the steps for adding the secret to the cluster: https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html
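Since the virt-api pods are created by the operator rather than applied by hand, another option is to attach the secret to the service account those pods run under, so it is used automatically. A sketch, assuming the default service account in the kubevirt namespace; operator-managed pods may use a dedicated service account, which you can read from the pod spec:
kubectl patch serviceaccount default -n kubevirt \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'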
If you are using a private registry, check that your secret exists and is correct. The secret should also be in the same namespace as the pod.
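For example, you can verify that the secret exists in the pod's namespace and inspect the credentials it actually contains (the namespace kubevirt is taken from the events above):
kubectl get secret regcred -n kubevirt
kubectl get secret regcred -n kubevirt \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode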
Your minikube is a VM, not your localhost. Try this:
Open Terminal
eval $(minikube docker-env)
docker build .
kubectl create -f deployment.yaml
This is only valid for the current terminal session; if you close the terminal, open a new one and run eval $(minikube docker-env) again. That command points your Docker client at minikube's Docker daemon, so docker build creates the image inside minikube.
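One detail worth adding when building straight into minikube's daemon: the deployment must reference the locally built tag and must not force a pull from a registry, otherwise the kubelet will still try Docker Hub. A minimal sketch with illustrative names:
containers:
- name: myapp
  image: myapp:local        # the tag passed to `docker build -t myapp:local .`
  imagePullPolicy: Never    # use the image already present in minikube's Docker daemon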
Also, try logging in to Docker on all nodes by using docker login.
There is also a lengthy blog post describing how to debug ImagePullBackOff in depth here.
If you look here, I think you can find some helpful documentation.
What they are doing is uploading the dockerconfig file, which has the login credentials, as a secret and then referring to that in the deployment.
You could try to follow those steps and do something similar. Let me know if it works.
I am using minikube on Linux to get started with Kubernetes. Going with the examples in the readme and using the none vm-driver, I do the following.
$ minikube start --vm-driver=none
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks
When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions. An example of this is below:
sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
sudo chown -R $USER $HOME/.kube
sudo chgrp -R $USER $HOME/.kube
sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
sudo chown -R $USER $HOME/.minikube
sudo chgrp -R $USER $HOME/.minikube
This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
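In other words, instead of moving the files by hand afterwards, the none driver can be started with the variable set up front (an equivalent sketch based on the warning above; sudo -E preserves the variable for the root process):
export CHANGE_MINIKUBE_NONE_USER=true
sudo -E minikube start --vm-driver=none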
Loading cached images from config file.
$ kubectl get nodes
No resources found.
$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
deployment "hello-minikube" created
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube-c6c6764d-h64t8 0/1 Pending 0 3m
Now, the problem is that this pod continues to remain pending. It looks like there are no nodes to run it on but I do not know why. Where am I going wrong?
EDIT: Here is the output of describing the pod.
$ kubectl describe pod hello-minikube-c6c6764d-h64t8
Name: hello-minikube-c6c6764d-h64t8
Namespace: default
Node: <none>
Labels: pod-template-hash=72723208
run=hello-minikube
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/hello-minikube-c6c6764d
Containers:
hello-minikube:
Image: k8s.gcr.io/echoserver:1.4
Port: 8080/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dw4j7 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-dw4j7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dw4j7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 20h (x4 over 20h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 20h (x4 over 20h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 20h (x5 over 20h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 19h (x5 over 19h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 19h (x5 over 19h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 18h (x5 over 18h) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 1s (x5 over 16s) default-scheduler no nodes available to schedule pods
Try running minikube status. If you get the error shown below, then running this command fixed it for me:
sudo chown -R $USER /home/docker/.minikube/machines/minikube/config.json
Error with minikube status:
"X Error getting host status: load: filestore: open /home/docker/.minikube/machines/minikube/config.json: permission denied
*
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new/choose"
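After fixing the ownership, re-running the checks should confirm the cluster is reachable again (illustrative):
minikube status
kubectl get nodes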