I was trying to deploy my local docker image on Kubernetes, but it doesn't work for me.
I loaded the image into docker and tagged it as app:v1, then I ran it with kubectl like this: kubectl run app --image=app:v1 --port=8080.
When I look up my pods, I see the error "Failed to pull image "app:v1": rpc error: code = 2 desc = Error: image library/app not found".
What am I doing wrong?
Normally your Kubernetes cluster runs on a different machine than the one your docker build ran on, so it has no access to your local image (unless you are using minikube and you eval minikube's environment to run your docker commands against the docker daemon powering the minikube install).
To get it working you need to push the image to a registry available to the Kubernetes cluster.
By running your command as it is, you actually tell Kubernetes to pull app:v1 from the official Docker Hub hosted images.
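For the minikube case, a minimal sketch might look like the following (assuming a standard minikube install; app:v1 is the image name from the question):
# point your shell's docker client at minikube's docker daemon
eval $(minikube docker-env)
# rebuild (or re-tag) the image so it exists inside minikube
docker build -t app:v1 .
# run it without trying to pull from a remote registry
kubectl run app --image=app:v1 --image-pull-policy=Never --port=8080
Setting the pull policy to Never (or IfNotPresent) keeps the kubelet from reaching out to Docker Hub for an image that only exists locally.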
Related
I am trying to learn Kubernetes and installed Minikube on my local machine. I created a sample python app and pushed its image to the public docker registry. I started the cluster with
kubectl apply -f <<my-app.yml>>
It got started as expected. I stopped Minikube and deleted all the containers and images and restarted my Mac.
My Questions
I start Docker Desktop, and as soon as I run
minikube start
Minikube goes and pulls the images from the public docker registry and starts the container. Is there a configuration file that Minikube looks into to start the container that I had deleted from my local machine? I am not able to understand where Minikube is picking up my-app's configuration, which was defined in the manifest folder.
I have tried to look for config files and did find a cache folder, but it does not contain any information about my app.
I found that this is expected behavior:
minikube stop command should stop the underlying VM or container, but keep user data intact.
So when I manually delete the already created resources, they do not start automatically again.
More information:
https://github.com/kubernetes/minikube/issues/13552
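A quick way to see this behaviour for yourself (a sketch; my-app.yml stands for the manifest from the question):
# the cluster state survives a stop/start cycle
minikube stop
minikube start
kubectl get pods          # the pod comes back and its image is pulled again
# only deleting the resources themselves prevents that
kubectl delete -f my-app.yml
# or wipe the whole cluster state
minikube delete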
I have a VM with kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both master and worker node (as shown in the kubernetes documentation).
I have saved, transferred and loaded my app:test image into it. I can easily run a container with it using docker run.
It shows up when I run sudo docker images.
When I create a deployment/pod that uses this image and specify imagePullPolicy: IfNotPresent or Never, I still get the ImagePullBackOff error. The describe command shows me it tries to pull the image from Docker Hub...
Note that when I try to use a local image that was pulled as the result of creating another pod, the image pull policies seem to work, no problem. Although the image doesn't appear when I run sudo docker images --all.
How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?
image doesn't appear when I run sudo docker images --all
Based on your comment, you are using K8s v1.22, which means it is likely your cluster is using the containerd container runtime instead of docker (you can check with kubectl get nodes -o wide and look at the last column).
Try listing your images with crictl images and pulling with crictl pull <image_name> to preload the images on the node.
One can do so with a combination of crictl and ctr, if using containerd.
TLDR: these steps, which are also described in the crictl GitHub documentation:
1- Once you get the image onto the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the docker save or ctr image export commands.
2- Use sudo ctr -n=k8s.io images import myimage.tar while in the same directory as the archived image to add it to containerd in the namespace that kubernetes uses to track its images. It should now appear when you run sudo crictl images.
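As a concrete sketch of those two steps (app:test is the image from the question; the file name is a placeholder):
# on the machine where the image was built
docker save -o app-test.tar app:test
# copy app-test.tar to the node (scp, shared folder, ...), then on the node:
sudo ctr -n=k8s.io images import app-test.tar
# verify that the kubelet's runtime can now see it
sudo crictl images | grep app
The -n=k8s.io flag matters: containerd keeps images in separate namespaces, and the kubelet only looks in k8s.io, so an image imported into the default namespace (or loaded into the docker daemon) stays invisible to kubernetes.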
As suggested, I tried listing images with crictl and my app:test did not appear. However, trying to bring my local image in through crictl didn't seem to work either. I used crictl pull app:test and it showed the following error message:
FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.
However, when following these steps, my image is finally recognized as an existing local image in kubernetes. They are actually the same steps as suggested in the crictl GitHub documentation.
How does one explain this? How do images get "registered" in the kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...
Your cluster is bottled up inside your VM, so what you call local will always be remote for the cluster in that VM. And the reason kubernetes is trying to pull those images is that it can't find them in the VM.
Docker Hub is the default place to download containers from, but you can set kubernetes to pull from AWS (ECR), from Azure (ACR), from GitHub Packages (GHCR) and from your own private registry.
You've got about 100 ways to solve this; none of them is easy or will just work.
1 - easiest: push your images to Docker Hub and let your cluster pull from it.
2 - set up a local private container registry and configure your kubernetes VM to pull from it (see this); a sketch follows after this list.
3 - set up a private container registry in your kubernetes cluster and set up scripts in your local env to push to it (see this)
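A minimal sketch of option 2, assuming a plain docker daemon on both machines (app:test is the image from the question; the host name myhost and port 5000 are placeholders, and the VM's runtime still needs to trust the registry via an insecure-registry or TLS setup):
# on your local machine: run a throwaway registry and push the image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag app:test myhost:5000/app:test
docker push myhost:5000/app:test
In the deployment/pod spec, reference the image as myhost:5000/app:test so the kubelet on the VM pulls it from your registry instead of Docker Hub.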
Is it possible to take an image or a snapshot of a container running inside a pod using kubectl?
With docker, it is possible to use the docker commit command, which creates an image from a container so that we can spawn more containers from it. I wanted to understand if there is something similar that we could do with kubectl.
No, partially because that's not in the kubernetes mental model of anything one would wish to do to a cluster, and partially because docker is not the only container runtime kubernetes uses. Every runtime one could use underneath kubernetes would need to support that operation, and I doubt they do.
You are welcome to do your own docker commit either by getting a shell on the Node, or by running a privileged Pod, connecting to the docker.sock via a volumeMount, and running it that way.
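A sketch of the shell-on-the-Node route, assuming the node runs the docker engine (the pod, container and image names are placeholders):
# on the node, find the container backing the pod
docker ps | grep my-pod-name
# snapshot it into a new image
docker commit <container_id> my-snapshot:latest
# export it if you want to move it somewhere else
docker save -o my-snapshot.tar my-snapshot:latest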
I launched a MongoDB replica set on Kubernetes (GKE as well as kubeadm). I faced no problems with the pods accessing the storage.
However, when I used Helm to deploy the same, I face this problem.
When I run this command-
kubectl describe po mongodb-shard1-0 --namespace=kube-system
(Here mongodb-shard1-0 is the first and only pod (of the desired three) which was created)
I get the error-
Events:
Error: failed to start container "mongodb-shard1-container": Error response from daemon: error while creating mount source path '/mongo/data': mkdir /mongo: read-only file system
I noticed one major difference between the two ways of creating the MongoDB cluster (without Helm, and with Helm): when using Helm, I had to create a service account and install the Helm chart using that service account. Without Helm, I did not need that.
I tried different mongo docker images and faced the same error every time.
Can anybody help me understand why I am facing this issue?
Docker mounts volumes from the host filesystem into containers using the -v command line option, e.g. -v /var/tmp:/tmp.
Can you check whether the containers/pods are writing to shared volumes and not to the root filesystem?
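One way to check (a sketch; the pod and namespace names are taken from the question):
# list the volumes and mounts declared on the failing pod
kubectl get pod mongodb-shard1-0 -n kube-system -o jsonpath='{.spec.volumes}'
kubectl get pod mongodb-shard1-0 -n kube-system -o jsonpath='{.spec.containers[*].volumeMounts}'
The error above suggests a mount whose source path /mongo/data sits on a host whose root filesystem is read-only, so the docker daemon cannot create /mongo.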
I have an instance group of Container VMs running my app on a docker container.
I am trying to find a good strategy to manage the application logs for docker + MEAN + Google Cloud Compute Machines.
I can see the logs on individual containers by running docker logs [container_id].
However, if I stop and start the VM I lose those logs. I also have VMs dynamically added by the autoscaler and would like a convenient way to access the logs.
The stack is MEAN and the logging tool is bunyan.
Is it possible to centralize or combine the logs from all VMs in one persistent location?
Any suggestions?
UPDATES:
I installed the fluentd agent and now I can see logs when I manually run things in the shell: logger "some message for testing"
However, the logs from the docker container on my container VM never show up in Google Cloud Logging.
I still don't know how to get those docker logs to turn up in Google Cloud Logging. They are supposed to be collected automatically.
cheers
Leo
Here is a yaml, Dockerfile and conf for a fluentd pod inside kubernetes.
Adjust the yaml to mount a disk:
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/logging/fluentd-sidecar-gcp
Then adjust the config to log to the disk.
Build the container with the new configuration.
Deploy the new container.
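A rough sketch of those last two steps (the image tag, project and file names are placeholders; the yaml is the one from the linked fluentd-sidecar-gcp example):
# build the fluentd image with your adjusted configuration and push it
docker build -t gcr.io/my-project/fluentd-sidecar-gcp:custom .
docker push gcr.io/my-project/fluentd-sidecar-gcp:custom
# point the pod spec at the new image, then deploy it
kubectl apply -f fluentd-sidecar-gcp.yaml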