I've successfully pushed my Docker container image to gcr.io with the following command:
$ gcloud docker push gcr.io/project-id-123456/my-image
But when I try to create a new pod I get the following error:
$ kubectl run my-image --image=gcr.io/project-id-123456/my-image
CONTROLLER   CONTAINER(S)   IMAGE(S)                            SELECTOR       REPLICAS
my-image     my-image       gcr.io/project-id-123456/my-image   run=my-image   1
$ kubectl get pods
NAME             READY   STATUS                                                                                                   RESTARTS   AGE
my-image-of9x7   0/1     Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar exit status 1 unexpected EOF   0          5m
It doesn't pull on my local machine either:
$ docker rmi -f $(docker images -q) # Clear local image cache
$ gcloud docker pull gcr.io/project-id-123456/my-image:latest
…
Error pulling image (latest) from gcr.io/project-id-123456/my-image, Untar re-exec error: exit status 1: output: unexpected EOF
Can someone please suggest how to fix this?
Ok, after digging around in the Docker code base, I think I have found some similar reports of what you are seeing.
The way this error is displayed changed in 1.7, but this thread seems related:
https://github.com/docker/docker/issues/14792
This turned me onto this fix, which landed in 1.8:
https://github.com/docker/docker/pull/15040
In particular, see this comment:
https://github.com/docker/docker/pull/15040#issuecomment-125661037
The comment seems to indicate that this is only a problem for v1 layers, so our Beta support for v2 may work around this issue.
You can push to our v2 beta via:
gcloud docker --server=beta.gcr.io push beta.gcr.io/project-id-123456/...
You can then simply change the reference in your Pod to "beta.gcr.io/..." and it will pull via v2.
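For example, only the image reference in the Pod spec needs to change; a minimal sketch (pod and container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-image
spec:
  containers:
  - name: my-image
    image: beta.gcr.io/project-id-123456/my-image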
Related
I was trying to get into kubernetes-dashboard Pod, but I keep getting this error:
C:\Users\USER>kubectl exec -n kubernetes-dashboard kubernetes-dashboard-66c887f759-bljtc -it -- sh
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown
command terminated with exit code 126
The Pod is running normally and I can access the Kubernetes UI via the browser. But I had some issues getting it running earlier, and I wanted to get inside the pod to run some commands, but I always get the same error mentioned above.
When I try the same command with a pod running nginx for example, it works:
C:\Users\USER>kubectl exec my-nginx -it -- sh
/ # ls
bin home proc sys
dev lib root tmp
docker-entrypoint.d media run usr
docker-entrypoint.sh mnt sbin var
etc opt srv
/ # exit
Any explanation, please?
Prefix the command to run with /bin, so your updated command will look like:
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- /bin/sh
The reason you're getting that error is that Git Bash on Windows is built on MSYS, which slightly rewrites command arguments (its path conversion can mangle a bare sh). Using the full path /bin/sh or /bin/bash generally works.
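If MSYS path conversion is what's mangling the argument, two other commonly used workarounds (assuming Git for Windows) are disabling the conversion for the one invocation, or doubling the leading slash:
MSYS_NO_PATHCONV=1 kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- sh
kubectl exec -n kubernetes-dashboard <POD_NAME> -it -- //bin/sh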
That error message means literally what it says: there is no sh or any other shell in the container. There's no particular requirement that a container have a shell; if a Docker image is built FROM scratch (as the Kubernetes dashboard image is) or from a "distroless" base, it simply may not contain one.
In most cases you shouldn't need to "enter a container", and you should use kubectl exec (or docker exec) sparingly if at all. This is doubly true in Kubernetes: it's not just that changes you make manually will get lost when the container exits, but also that in Kubernetes you typically have multiple replicas that you can't manually edit all at once, and also that in some cases the cluster can delete and recreate a Pod outside of your control.
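If you really do need an interactive look inside a shell-less container and your cluster is recent enough to support ephemeral containers, kubectl debug can attach a throwaway busybox alongside it; a sketch, assuming the container is named kubernetes-dashboard:
kubectl debug -n kubernetes-dashboard -it <POD_NAME> --image=busybox --target=kubernetes-dashboard -- sh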
Sorry for the newbie question. I am trying to deploy an image into k3d (a dockerized version of k3s).
k3d image import -c my-cluster registry.gitlab.com/aaa/bbb/ccc/hello123
Now I can see the image on a node:
kubectl get node my-node -o json | grep hello123
However, the documentation doesn't say much about what "import" does. Is my image running? Is it allocated to a pod yet? Where can I find its logs?
If I knew what pod it's running in, I could do kubectl logs. The list of the cluster's pods doesn't show anything relevant.
I am beginning to think my image isn't running yet.
Edit: This is further confirmed by
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
showing nothing relevant.
What's the next step?
You have just loaded the image onto the cluster's nodes; it has not yet been assigned to any pod.
Once you create a pod that references the same image and tag, it will use the locally available copy instead of pulling it.
If you can SSH to the k8s node (kubectl get nodes -o wide and ssh user@nodeip), you can run Docker commands like:
docker images
You can expect to see the image that you imported in the list.
If none of the pods are running, docker ps will return an empty list.
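As a quick sanity check that the imported image is usable, you can start a pod from it directly and read its logs; a sketch with an illustrative pod name, where --image-pull-policy=Never forces use of the node-local copy:
kubectl run hello123 --image=registry.gitlab.com/aaa/bbb/ccc/hello123 --image-pull-policy=Never
kubectl logs hello123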
Trying to shell into the container with kubectl exec -it xxxxxx
but it returns
rpc error: code = 5 desc = open /var/run/docker/libcontainerd/containerd/faf3fd49262cc738e16368001eba5e1113abcb8a87e7b818cb84af3799906149/30fe901c16e0465aa15b596bf3e4f244fb12a7e4133b6e4da5aa35167a8dfb30/shim-log.json: no such file or directory
I tried rebooting the node, but it didn't help.
Thanks @Prafull Ladha
Eventually I restarted Docker (systemctl restart docker) on the node whose pods couldn't be shelled into, and it returned to normal.
The problem is with containerd. Once containerd restarts in the background, the docker daemon still tries to process event streams against the old socket handles. After that, the error handling when the client can't connect to containerd leads to a CPU spike on the machine.
This is an open issue with Docker, and the current workaround is to restart Docker.
sudo systemctl restart docker
It appears to be an issue with the docker daemon. It would help if you added the logs from the container so the root cause can be researched.
Deploy an alpine pod and see if you can get into its container. This isolates whether the problem is with the platform or with the pod that you are running:
kubectl run pingpong --image=alpine -- ping 8.8.8.8
kubectl exec -it <pingpong-pod-name> -- sh
Most likely something is wrong with the pod that you are running. Share the container logs for further help.
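If the alpine pod comes up fine, the next step would be gathering the evidence from your own pod (pod name is a placeholder):
kubectl describe pod <your-pod-name>
kubectl logs <your-pod-name>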
I ran the following commands, and when I check whether the pods are running I get the following errors:
Failed to pull image "tomcat": rpc error: code = Unknown desc = no
matching manifest for linux/amd64 in the manifest list entries
kubectl run tomcat --image=tomcat --port 8080
and
Failed to pull image "ngnix": rpc error: code = Unknown desc
= Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login'
kubectl run nginx3 --image ngnix --port 80
I've seen a post on GitHub about how to handle this when private repos cause the issue, but not public ones. Has anyone run into this before?
First Problem
From github issue
Sometimes, we'll have non-amd64 image build jobs finish before their amd64 counterparts, and due to the way we push the manifest list objects to the library namespace on the Docker Hub, that results in amd64-using folks (our primary target users) getting errors of the form "no supported platform found in manifest list" or "no matching manifest for XXX in the manifest list entries"
The Docker Hub manifest list is not up to date with the amd64 build of tomcat:latest.
Try another tag:
kubectl run tomcat --image=tomcat:9.0 --port 8080
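If you want to confirm which platforms a given tag actually provides before pulling it, docker manifest inspect prints the manifest list (on older Docker CLIs this subcommand may need experimental features enabled):
docker manifest inspect tomcat:latest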
Second Problem
Use nginx, not ngnix. It's a typo.
$ kubectl run nginx3 --image nginx --port 80
My pod spec contains:
command:
- bundle exec unicorn -c config/unicorn/production.rb -E production
The container can't start on k8s, and some errors occurred.
But when I run
docker run -d image [CMD]
the container works fine.
"command" is an array, so each argument has to be a separate element, not all on one line
For anyone else running into this problem:
make sure the gems (including unicorn) are actually installed in the volume used by the container. If not, do a bundle install.
Another reason for this kind of error could be that the directory specified under working_dir (in the docker-compose.yml) does not exist (see Misleading error message "ERROR: Container command not found or does not exist.").
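For reference, a hypothetical docker-compose.yml excerpt showing where the two pitfalls above would bite (service and image names are made up):
services:
  app:
    image: my-app
    working_dir: /app    # per the linked issue, this directory must exist in the image
    command: ["bundle", "exec", "unicorn", "-c", "config/unicorn/production.rb", "-E", "production"]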