Portainer no such image error when deploying stack, but image exists. How to remove pull-policy? - docker-compose

I have the same problem as in this post: Portainer no such image error when deploying stack, but image exists. I cannot ask in that post, though, because I don't have enough reputation. How do I remove the pull policy?
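In a Compose file, pull behaviour is controlled by the service-level pull_policy key, so a minimal sketch looks like this (the service and image names here are hypothetical):

services:
  app:
    image: myimage:local
    # Deleting the pull_policy line falls back to the default ("missing",
    # which only pulls when no local copy exists); alternatively, set it
    # explicitly so the locally built image is always used:
    pull_policy: never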

Related

Kubernetes imagePullPolicy:always behavior change?

I've been reading about the Kubernetes imagePullPolicy attribute when set to 'always', and it seems like something has changed:
Up through version 1.21 of the documentation, it said the following:
If you would like to always force a pull, you can do one of the following:
set the imagePullPolicy of the container to Always.
omit the imagePullPolicy and use :latest as the tag for the image to use; Kubernetes will set the policy to Always.
omit the imagePullPolicy and the tag for the image to use.
enable the AlwaysPullImages admission controller.
But starting with version 1.22 of the K8S documentation, it says imagePullPolicy works as follows when set to always:
Every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.
These are very different explanations: through 1.21, the docs say that 'always' forces the image to always be pulled from the registry; from 1.22 on, they say 'always' forces a digest check against the registry but uses a cached copy if nothing has changed.
I'm trying to understand if the behavior actually changed starting in 1.22, or was this simply a change to the explanation and documentation?
I think this change saves network bandwidth rather than changing the behaviour. In 1.21 and earlier versions, k8s always tried to pull the image all over again, without checking whether some image layers already exist on the node. In the new version, k8s checks the image layers and, if they exist, pulls only the missing layers. So yes, the behavior changed to some extent, but users and clusters should not be negatively affected by this change.
The lines below, from the same documentation, indicate that the ultimate behaviour does not change:
The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient, as long as the registry is reliably accessible. Your container runtime can notice that the image layers already exist on the node so that they don't need to be downloaded again.
Note:
You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
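Putting those two recommendations together, a minimal container spec might look like this (the registry, image name, and tag are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    # Pin a real tag instead of :latest...
    image: registry.example.com/app:1.2.3
    # ...and let the kubelet resolve the digest on every start; cached
    # layers are reused if the digest hasn't changed.
    imagePullPolicy: Always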

ImagePullBackOff - OpenShift

I keep getting an ImagePullBackOff error when pods are created.
The image tag is correct and I have deleted the pods but I get the same error. I tried all the other suggestions here but don't understand the issue.
Back-off pulling image "docker-registry.svc.us.com/f90-1/image:1.1.1"
Any suggestions for a fix?
The "oc describe pod " commands should give you more details as to way the image pull fails.

How to use local docker images in kubernetes deployments (NOT minikube)

I have a VM with Kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both Master and Worker node (as shown in the Kubernetes documentation).
I have saved, transferred, and loaded my app:test image into it. I can easily run a container with it using docker run.
It shows up when I run sudo docker images.
When I create a deployment/pod that uses this image and specify imagePullPolicy: IfNotPresent or Never, I still get the ImagePullBackOff error. The describe command shows me it tries to pull the image from Docker Hub...
Note that when I try to use a local image that was pulled as the result of creating another pod, the image pull policies seem to work with no problem, although the image doesn't appear when I run sudo docker images --all.
How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?
image doesn't appear when I run sudo docker images --all
Based on your comment, you are using K8s v1.22, which means your cluster is likely using the containerd container runtime instead of Docker (you can check with kubectl get nodes -o wide and look at the last column).
Try listing your images with crictl images and pulling with crictl pull <image_name> to preload the images on the node.
One can do so with a combination of crictl and ctr, if using containerd.
TL;DR: these steps, which are also described in the crictl GitHub documentation:
1- Once you get the image on the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the docker save or ctr image export commands.
2- Use sudo ctr -n=k8s.io images import myimage.tar while in the same directory as the archived image to add it to containerd in the namespace that Kubernetes uses to track its images. It should now appear when you run sudo crictl images.
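Put together, the whole workflow is a sketch like this (the file name is illustrative, and the docker save step assumes the image was built with Docker):

# On the machine that has the image:
docker save app:test -o app-test.tar

# Copy app-test.tar to the node, then import it into the k8s.io
# namespace that Kubernetes tracks its images in:
sudo ctr -n=k8s.io images import app-test.tar

# Verify that the container runtime now sees it:
sudo crictl images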
As suggested, I tried listing images with crictl, and my app:test did not appear. However, importing my local image through crictl didn't seem to work either. I used crictl pull app:test and it showed the following error message:
FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.
However, when following these steps (docker save followed by ctr images import, as described above), my image is finally recognized as an existing local image in Kubernetes. They are actually the same steps as suggested in the crictl GitHub documentation.
How does one explain this? How do images get "registered" in the Kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...
Your cluster is bottled up inside your VM, so what you call local will always be remote for the cluster in that VM. And the reason Kubernetes is trying to pull those images is that it can't find them in the VM.
Docker Hub is the default place to download images from, but you can set Kubernetes to pull from AWS (ECR), from Azure (ACR), from GitHub Packages (GHCR), and from your own private server.
You've got about 100 ways to solve this, none of them are easy or will just work.
1 - easiest: push your images to Docker Hub and let your cluster pull from it.
2 - set up a local private container registry and set your Kubernetes VM to pull from it (see this, and the sketch after this list)
3 - set up a private container registry in your Kubernetes cluster and set up scripts in your local env to push to it (see this)
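A minimal sketch of option 2, using the official registry image (the port is illustrative, and the node may additionally need an insecure-registry configuration for plain HTTP):

# Run a registry on the VM:
docker run -d -p 5000:5000 --name registry registry:2

# Tag the local image for that registry and push it:
docker tag app:test localhost:5000/app:test
docker push localhost:5000/app:test

The pod spec can then reference localhost:5000/app:test as the image.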

How to remove an image from Kubernetes (GKE) - Container image "<name/name>:latest" already present on machine

I have a failing public Docker Hub container, and if I kubectl apply -f ... with the same version, :latest in this case, I am getting:
Container image "<name/name>:latest" already present on machine
I don't see the image anywhere; in this case I am running on Google Kubernetes Engine, and it is not in the Google Container Registry.
The solution, or workaround, is of course to fix the code error in the Docker container, bump the version number, and push again - then it all works and gets pulled.
But is there no way to clear the image in Kubernetes, something like docker rmi <name/name>:latest in Docker?
I think using the latest tag is not the best practice. But if it is necessary, the official workaround is imagePullPolicy: Always.
Why is this not the best way? You can find more info in this.
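A sketch of that workaround in the container spec (the container name is hypothetical; the image name is the one from the question):

containers:
- name: myapp
  image: <name/name>:latest
  # Re-resolve the tag against Docker Hub on every pod start instead of
  # reusing whatever :latest is already cached on the node:
  imagePullPolicy: Always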

docker has the image, but kubernetes says it does not

I was trying to build with Kubernetes, and I wanted to use a local image that I had pulled earlier to save time (so it does not have to pull an image every time a container is created). The problem is that Kubernetes just ignored my local images.
For example, when I run docker images, I get this in the list:
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 7 months ago 52.36 MB
But when I tried to build a deployment with this config (just a part of the config file):
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
imagePullPolicy: Never
It said Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1" is not present with pull policy of Never
I wonder if there is anything I missed. Thanks for help!
If you are using a Kubernetes cluster, you have to make sure the Docker image you need is available on all nodes, as you don't really know which one the Pod will get assigned to.
So: pull or build the needed image on each cluster node.
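For example (the pod name is a placeholder; the image is the one from the question):

# Run on every node, so the image is present wherever the pod lands:
docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1

# To see which node a pod was actually scheduled on:
kubectl get pod <pod-name> -o wide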
If you look closely, you'll notice that the name of the image it's saying is missing is not the same as the one you pulled yourself. It looks like you're attempting to deploy kube-dns, which is made up of multiple images. You'll need to pull each one of them (or allow Kubernetes to pull them on your behalf) for the entire thing to work. Take a look at the configuration files you're using to deploy again—you should find other containers in the pod specifying the other images.