What is the difference between `minikube cache list` and `minikube image list` - minikube

These two commands appear to list the images that are available in minikube. Obviously one refers to images that are 'in cache' and the other does not; however, it is not clear where the images listed by minikube image list reside if not in the cache, or how images could appear in the list returned by minikube image list but not in the one returned by minikube cache list.

The difference between them is that `minikube image list` shows all images in the cluster, while `minikube cache list` shows only the images which you added manually. You can read more about Offline usage and about Pushing images.
minikube start caches all required Kubernetes images by default. This default may be changed by setting --cache-images=false. These images are not displayed by the minikube cache command.
To add a Docker image to Minikube you can use:
minikube image load <name-of-docker-image>
Here is an example:
user@minikube:~/myproject$ minikube cache list
user@minikube:~/myproject$ minikube image load helloapp:v1
user@minikube:~/myproject$ minikube cache list
helloapp:v1
user@minikube:~/myproject$
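To see the difference in practice, run both listing commands after the load above. A minimal sketch; the exact images and versions in your cluster will vary:
minikube image list   # every image in the cluster: the default Kubernetes images
                      # (kube-apiserver, kube-scheduler, coredns, pause, ...) plus helloapp:v1
minikube cache list   # only the manually added entries, here just helloapp:v1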

Related

Minikube automatically runs images from docker registry

I am trying to learn Kubernetes and installed Minikube on my local machine. I created a sample Python app and its corresponding image was pushed to a public Docker registry. I started the app with
kubectl apply -f <<my-app.yml>>
It got started as expected. I stopped Minikube and deleted all the containers and images and restarted my Mac.
My Questions
I start my Docker Desktop and as soon as I run
minikube start
Minikube goes and pulls the images from the public Docker registry and starts the container. Is there a configuration file that Minikube looks into to start my container that I had deleted from my local machine? I am not able to understand where Minikube is picking up my-app's configuration, which was defined in the manifest folder.
I have tried to look for config files and did find a cache folder, but it does not contain any information about my app.
I found that this is expected behavior:
minikube stop command should stop the underlying VM or container, but keep user data intact.
Since the cluster state survives minikube stop, the Deployment you created is still there when you run minikube start again, and Kubernetes pulls the images and recreates the containers. Only if you manually delete the already created resources (for example with kubectl delete) will they not start again automatically.
More information:
https://github.com/kubernetes/minikube/issues/13552
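A minimal sketch of that behaviour, using the my-app.yml manifest from the question:
kubectl get deployments       # the Deployment created from my-app.yml is still stored in the cluster
minikube stop                 # stops the underlying VM/container, but keeps the cluster state
minikube start                # Kubernetes sees the stored Deployment and pulls the image again
kubectl delete -f my-app.yml  # remove the Deployment so nothing is recreated on the next start
minikube delete               # or wipe the whole cluster, including all of its state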

How to use local docker images in kubernetes deployments (NOT minikube)

I have a VM with kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both Master and Worker node (as shown in the kubernetes documentation).
I have saved, transferred, and loaded my app:test image into it. I can easily run a container with it using docker run.
It shows up when I run sudo docker images.
When I create a deployment/pod that uses this image and specify imagePullPolicy: IfNotPresent or Never, I still get an ImagePullBackOff error. The describe command shows me it tries to pull the image from Docker Hub...
Note that when I try to use a local image that was pulled as the result of creating another pod, the imagePullPolicy values seem to work with no problem, although that image doesn't appear when I run sudo docker images --all.
How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?
image doesn't appear when I run sudo docker images --all
Based on your comment, you are using K8s v1.22, which means it is likely your cluster is using the containerd container runtime instead of Docker (you can check with kubectl get nodes -o wide and look at the last column).
Try listing your images with crictl images and pulling with crictl pull <image_name> to preload the images on the node.
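As a minimal sketch of those checks (the image name is just a placeholder):
kubectl get nodes -o wide       # the CONTAINER-RUNTIME column shows e.g. containerd://1.x
sudo crictl images              # images visible to the kubelet through the CRI
sudo crictl pull <image_name>   # preload an image; this only works for images a registry actually serves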
One can do so with a combination of crictl and ctr, if using containerd.
TLDR: these steps, which are also described in the crictl github documentation:
1- Once you get the image on the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the docker save or ctr image export commands.
2- Use sudo ctr -n=k8s.io images import myimage.tar while in the same directory as the archived image to add it to containerd in the namespace that kubernetes uses to track its images. It should now appear when you run sudo crictl images.
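Put together, and assuming the image from the question is named app:test, the two steps look roughly like this:
docker save -o app-test.tar app:test            # step 1: archive the local image (or: ctr image export app-test.tar app:test)
sudo ctr -n=k8s.io images import app-test.tar   # step 2: import it into the k8s.io containerd namespace
sudo crictl images                              # app:test should now be listed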
As suggested, I tried listing images with crictl and my app:test did not appear. However, trying to import my local image through crictl didn't seem to work either. I used crictl pull app:test and it showed the following error message:
FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.
However, when following these steps, my image is finally recognized as an existing local image in kubernetes. They are actually the same as suggested in the crictl github documentation.
How does one explain this? How do images get "registered" in the kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...
Your cluster is bottled up inside your VM, so what you call local will always be remote for the cluster in that VM. The reason that Kubernetes is trying to pull those images is that it can't find them in the VM.
Docker Hub is the default place to download images from, but you can set Kubernetes to pull from AWS (ECR), from Azure (ACR), from GitHub Packages (GHCR), and from your own private server.
You've got about 100 ways to solve this, none of them are easy or will just work.
1 - easiest: push your images to Docker Hub and let your cluster pull from there (see the sketch after this list).
2 - set up a local private container registry and set your Kubernetes VM to pull from it (see this)
3 - set up a private container registry in your Kubernetes cluster and set up scripts in your local env to push to it (see this)
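A minimal sketch of option 1, assuming a Docker Hub account; <your-user> is a placeholder:
docker login                               # authenticate against Docker Hub
docker tag app:test <your-user>/app:test   # retag the local image under your account
docker push <your-user>/app:test           # push it so the cluster can pull it
# then reference <your-user>/app:test in the image: field of your Deployment or Pod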

Is there a way to specify a tar file of docker image in manifest file for kubernetes?

Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact: say I have a mounted network drive on each node, is there a way, with just the manifest file, to instruct Kubernetes to load that image directly from the tar file without having to put it into a Docker registry?
In general, no, Kubernetes can only access container images from a registry, not from a network drive, see documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
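A minimal sketch of the pre-pulled-image approach; my-app:1.0 is a hypothetical image that must already be present in the node's container runtime:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:1.0
    imagePullPolicy: Never    # never contact a registry; fail if the image is not already on the node
EOF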
You have provided quite limited information about your environment and what it looks like.
Two things come to mind.
Use an initContainer to download this file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file is downloaded before your application starts (a sketch of this pattern follows after this answer). An example can be found here.
Use a mounted Volume
In your Deployment, StatefulSet, or Pod (not sure which you are using), you can mount a Volume into the pod. After that, the file from the volume will be available inside the pod at the path you specified. Please keep in mind that you have to use the proper access modes.
To work with the .tar file inside the pod you can use some bash commands like in this documentation.
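A minimal sketch of the initContainer idea above; the URL and image names are hypothetical, and note that this only makes the tar file available inside the pod, it does not make Kubernetes run the image contained in it:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tar-demo
spec:
  volumes:
  - name: workdir
    emptyDir: {}
  initContainers:
  - name: fetch-tar
    image: busybox:1.36
    command: ["wget", "-O", "/work/image.tar", "http://example.com/image.tar"]
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "ls -l /work && sleep 3600"]
    volumeMounts:
    - name: workdir
      mountPath: /work
EOF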

How do I update a service in the cluster to use a new docker image

I have created a new Docker image that I want to use to replace the current Docker image. The application is on Kubernetes Engine on Google Cloud Platform.
I believe I am supposed to use the gcloud container clusters update command, although I struggle to see how it works and how I'm supposed to replace the old Docker image with the new one.
You may want to use kubectl in order to interact with your GKE cluster. The method of updating the image depends on how the Pod / Container was created.
For some example commands, see https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources
For example, kubectl set image deployment/frontend www=image:v2 will do a rolling update of the "www" containers of the "frontend" Deployment, updating the image.
Getting up and running on GKE: https://cloud.google.com/kubernetes-engine/docs/quickstart
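As a minimal sketch, reusing the example names from the cheat sheet (your Deployment, container, and image names will differ):
kubectl set image deployment/frontend www=image:v2   # trigger a rolling update to the new image
kubectl rollout status deployment/frontend           # wait until the rollout has finished
kubectl rollout undo deployment/frontend             # roll back if the new image misbehaves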
You can use Container Registry[1] as a single place to manage Docker images.
Google Container Registry provides secure, private Docker repository storage on Google Cloud Platform. You can use gcloud to push[2] images to your registry, then you can pull images using an HTTP endpoint from any machine.
You can also use Docker Hub repositories[3], which allow you to share container images with your team, customers, or the Docker community at large.
[1]https://cloud.google.com/container-registry/
[2]https://cloud.google.com/container-registry/docs/pushing-and-pulling
[3]https://docs.docker.com/docker-hub/repos/
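A minimal sketch of pushing a new image to Container Registry before updating the Deployment; <project-id> and my-image are placeholders:
gcloud auth configure-docker                              # let docker push to gcr.io with your gcloud credentials
docker tag my-image:v2 gcr.io/<project-id>/my-image:v2    # tag the image for your project's registry
docker push gcr.io/<project-id>/my-image:v2               # push it
# then point the Deployment at gcr.io/<project-id>/my-image:v2, e.g. with kubectl set image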

docker has image, but kubernetes says not

I was trying to build with Kubernetes, and I wanted to use the local image which I had pulled earlier to save time (so it does not have to pull an image every time a container is created). The problem is that Kubernetes just ignored my local images.
For example, when I run docker images, I got this in the list:
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 7 months ago 52.36 MB
But when I tried to build a deployment with this config (just a part of the config file):
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
imagePullPolicy: Never
It said Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1" is not present with pull policy of Never
I wonder if there is anything I missed. Thanks for the help!
If you are using a Kubernetes cluster, you have to make sure the Docker image you need is available on all nodes, as you don't really know which node the Pod will be assigned to.
So: pull/build the needed image on each cluster node.
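As a minimal sketch, assuming the nodes run the Docker runtime and are reachable over SSH (<node> is a placeholder):
kubectl get nodes                                                                     # list all nodes in the cluster
ssh <node> sudo docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1   # repeat for every node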
If you look closely, you'll notice that the name of the image it's saying is missing is not the same as the one you pulled yourself. It looks like you're attempting to deploy kube-dns, which is made up of multiple images. You'll need to pull each one of them (or allow Kubernetes to pull them on your behalf) for the entire thing to work. Take another look at the configuration files you're using to deploy; you should find other containers in the pod specifying the other images.