kubernetes load single image into cluster

Is there a way to load a single image into a Kubernetes Cluster without going through Container Registries?
For example, if I build an image locally on my laptop, can kubectl do something akin to docker save/load to make that image available in a remote Kubernetes cluster?

I don't think kubectl can make a node load an image that wasn't built on it, but you can achieve something similar with the Docker daemon CLI (have the remote worker node build the image from your local environment).
Something like:
$ docker -H tcp://<worker-node-ip>:2375 build <path to build context>
or set the Docker host as an environment variable in your local (laptop) environment:
$ export DOCKER_HOST="tcp://<worker-node-ip>:2375"
$ docker ps
Keep in mind that your remote worker node needs access to all the dependencies required to build the image.
See the documentation.
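For reference, a rough sketch of the worker-node side (exposing the Docker daemon over plain TCP is unauthenticated and insecure, so treat this as a lab-only assumption):
# On the worker node: make dockerd listen on TCP in addition to the local socket.
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375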
Also, I am not sure why you want to work around a remote repository, but if the reason is that you don't want to expose your image publicly, I suggest setting up a private Docker registry in the long term.
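That said, if you have SSH access to the node, you can move a single image without any registry at all. A minimal sketch, assuming a hypothetical image tag myapp:dev and node host worker-1:
# Build locally, then stream the image tarball into the node's Docker daemon.
$ docker build -t myapp:dev .
$ docker save myapp:dev | ssh user@worker-1 'docker load'
# In the Pod spec, set imagePullPolicy: Never (or IfNotPresent) so the kubelet
# uses the preloaded local image instead of trying to pull it from a registry.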

Kubernetes requires container images to be in a registry - public or private, running in the cluster itself as a pod/container, or remote with respect to the cluster. Even when the registry is on one of the cluster nodes - something oftentimes used with a local minikube - the registry is referenced by the node's IP/hostname.
In order for a remote Kubernetes cluster to pull an image from your local laptop, you'd have to run a registry locally (say, via docker run -d -p 5000:5000 --name registry registry:2) and have your laptop be reachable from the cluster nodes across the network.
Note that you will either need to secure the local registry with a trusted certificate and key, e.g. from Let's Encrypt or another reputable CA, or, if you run an insecure registry, configure Docker on the Kubernetes cluster nodes to trust your insecure registry.
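For the insecure-registry route, a minimal sketch (the laptop address 192.168.1.50:5000 is a placeholder): add the registry to /etc/docker/daemon.json on every cluster node, restart Docker, then push from the laptop and reference the image by that address.
# /etc/docker/daemon.json on each Kubernetes node:
{
  "insecure-registries": ["192.168.1.50:5000"]
}
$ sudo systemctl restart docker
# On the laptop:
$ docker tag myapp:dev 192.168.1.50:5000/myapp:dev
$ docker push 192.168.1.50:5000/myapp:dev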

Related

How to specify "master" and "worker" nodes when using one machine to run Kubernetes?

I am using an Ubuntu 22.04 machine to run and test Kubernetes locally. I need functionality like Docker Desktop's, where both the master and worker nodes/machines seem to be installed on the same machine. But when I try to install Kubernetes following instructions like this, at some point they say to run the following command on the master node:
sudo hostnamectl set-hostname kubernetes-master
Or run the following command on the worker node machine:
sudo hostnamectl set-hostname kubernetes-worker
I don't know how to specify master/worker nodes if I only have my local Ubuntu machine.
Should I run the join command after the kubeadm init command? I can't tell whether a command I run in my terminal is treated as a command for the master or for the worker machine.
I am a little bit confused by this master/worker node (or client/server machine) distinction while I am using just one machine for both.
Prerequisites for installing Kubernetes in a cluster:
Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)
Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)
It means you need to create 3 instances from any cloud provider, such as Google (GCP), Amazon (AWS), Atlantic.Net Cloud Platform, or CloudSigma, as per your convenience.
To create an instance in GCP, follow this guide. If you don't have an account, create one; new customers also get $300 in free credits to run, test, and deploy workloads.
After creating the instances you will get their IPs; using them, you can SSH into an instance from a terminal on your local machine with the command: ssh root@<ip address>
From there you can follow any guide for installing Kubernetes using the master and worker nodes.
example:
sudo hostnamectl set-hostname <host name>
The above should be executed in the SSH session on the master node; similarly, you need to execute it on the worker node with its own hostname.
The hostname has nothing to do with node roles.
If you do kubeadm init, the node will be a master node (currently called control plane).
This node can also be used as a worker node (currently called just a node), but by default, Pods cannot be scheduled on the control plane node.
You can turn off this restriction by removing its taints with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
and then you can use this node as both the control plane and a worker node.
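Putting it together, a minimal single-machine sketch (assuming kubeadm, kubelet, and a container runtime are already installed; the pod CIDR is just an example matching the Flannel default):
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# No kubeadm join is needed: untaint the control plane as shown above and it
# will schedule regular Pods as well.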
But I would guess a small Kubernetes distribution like k0s, k3s, or MicroK8s is a better option for your use case than kubeadm.

Docker Desktop k8s config file similar to kind config

I am using a kind Kubernetes cluster on my Mac, and I created it using a config file with mounts, as shown here.
The pain here is that I have to load all the images already on my machine (pulled using docker pull from my company's Docker registry, i.e. Artifactory) into the cluster. Could I use Docker Desktop and not have to load images separately using kind load docker-image ${imageTag}, and still mount volumes into the Docker Desktop Kubernetes cluster? And where is the config file used to create the cluster in Docker Desktop?
Solving this will hugely help!
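For context, the kind side of this looks roughly as follows (a sketch; the mount paths and image tag are placeholders):
$ cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/me/projects
    containerPath: /projects
EOF
$ kind create cluster --config kind-config.yaml
# kind nodes have their own image store, so images still have to be loaded in:
$ kind load docker-image my-company.jfrog.io/myapp:1.0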

Docker desktop kubernetes add node

I am running Docker Desktop with the Kubernetes option turned on. I have one node called docker-for-desktop.
Now I have created a new Ubuntu Docker container. I want to add this container to my Kubernetes cluster as a node. Can this be done? How can I do it?
As far as I'm aware, you cannot add a node to Docker for Desktop with Kubernetes enabled.
Docker for Desktop is a single-node Kubernetes or Docker Swarm cluster. You might try kubernetes-the-hard-way, which explains how to set up a cluster and add nodes manually without the use of kubeadm.
But I don't think this will work, as there will be a lot of issues with setting up the network correctly.
You can also use the instructions on how to install kubeadm with kubelet and kubectl on a Linux machine, and add a node using kubeadm join.
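For a cluster that was set up with kubeadm, adding a node boils down to something like this sketch (the token and hash come from your control plane; this is not possible for Docker Desktop's built-in node):
# On the control plane, print a ready-made join command:
$ kubeadm token create --print-join-command
# On the machine to be added:
$ sudo kubeadm join <control-plane-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>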

Is it possible to mount a local computer folder to Kubernetes for development, like docker run -v

Do you know if it is possible to mount a local folder into a running Kubernetes container?
Like docker run -it -v .:/dev some-image bash. I am doing this on my local machine and then remote-debugging into the container from VS Code.
Update: This might be a solution: telepresence
Link: https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
Do you know if it is possible to mount a local computer folder into Kubernetes? The container should also have access to a Cassandra IP address.
Kubernetes Volume
Using hostPath would be a solution: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
However, it will only work if your cluster runs on the same machine as your mounted folder.
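A minimal sketch of such a hostPath mount (the Pod name, image, and path are placeholders; the path must exist on the node that runs the Pod):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: src
      mountPath: /src
  volumes:
  - name: src
    hostPath:
      path: /Users/me/src
      type: Directory
EOF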
Another but probably slightly over-powered method would be to use a distributed or parallel filesystem and mount it into your container as well as to mount it on your local host machine. An example would be CephFS which allows multi-read-write mounts. You could start a ceph cluster with rook: https://github.com/rook/rook
Kubernetes Native Dev Tools with File Sync Functionality
A solution would be to use a dev tool that allows you to sync the contents of the local folder to the folder inside a kubernetes pod. There, for example, is ksync: https://github.com/vapor-ware/ksync
I have tested ksync and many Kubernetes-native dev tools (e.g. telepresence, skaffold, draft) but found them very hard to configure and time-consuming to use. That's why a colleague and I created an open source project called DevSpace: https://github.com/loft-sh/devspace
It allows you to configure a real-time two-way sync between local folders and folders within containers running inside k8s pods. It is the only tool that lets you use hot-reloading tools such as nodemon for Node.js. It works with volumes as well as with ephemeral/non-persistent folders, lets you directly enter the containers (similar to kubectl exec), and much more. It works with minikube and any other self-hosted or cloud-based Kubernetes cluster.
Let me know if that helps you and feel free to open an issue if you are missing something you need for your optimal dev workflow with Kubernetes. We will be happy to work on it.
As long as we are talking about something like docker -v, a hostPath volume type should do the trick. But that means the content you want to use must be stored on the node that the Pod will run on; in the case of GKE, the code would need to exist on the Google Compute Engine node, not on your workstation. If you have a local k8s cluster provisioned (minikube, kubeadm, ...) for local dev, that can be made to work as well.

how to execute docker diff command using kubernetes

I am using docker diff in my application in order to find all changed files in a container. Now my application manages containers through Kubernetes and doesn't have direct access to them. I found Kubernetes equivalents for several docker commands (like kubectl logs), but docker diff is missing.
Is there a way to execute docker diff for a pod through Kubernetes?
Many thanks
Kubernetes (kubectl) does not offer an equivalent command. Ideally, you should not be using this command at all outside your local development environment (which is Docker).
The best practice is to start containers with a read-only root filesystem, so that you avoid storing any important state in containers. Kubernetes can kill your pod and restart it on another node as a new container, so you should not care about the docker diff of what happens inside a container.
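A minimal sketch of that practice (the Pod name and image are placeholders; a writable emptyDir is mounted for scratch space):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir: {}
EOF
# Writes outside /tmp now fail, so there is no container-level drift for a
# docker diff to report in the first place.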