I am using a kind Kubernetes cluster on my Mac, which I created with a config file that defines extra mounts.
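Roughly, the config looks like this (the paths below are illustrative placeholders, not my actual ones):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/me/data        # folder on the Mac host (placeholder)
    containerPath: /data            # path inside the kind node container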
The pain here is that I have to load every image into the kind cluster separately, even though the images are already on my machine (pulled with docker pull from my company's Docker registry, i.e. Artifactory). Could I use Docker Desktop instead, avoid loading images with kind load docker-image ${imageTag}, and still mount volumes into the Docker Desktop Kubernetes cluster? And where is the config file that creates the cluster in Docker Desktop, anyway?
Solving this will hugely help!
Related
I have a Linux CentOS host running Docker CE with multiple containers running a few multi-container apps (web apps using docker-compose), and I would like to migrate those containers to Azure's serverless container platform (Azure Container Instances).
How can I migrate all those containers with their volumes?
Will creating an Azure Container Registry and pushing the containers to that registry also move the data volumes? Or what is the process to migrate?
Thanks
Will creating an Azure Container Registry and pushing the containers to that registry also move the data volumes?
No. Pushing the images to an Azure Container Registry pushes only the images, not the volumes that are mounted into the containers.
How can I migrate all those containers with their volumes?
You can deploy to ACI via docker-compose or a YAML file, but the local volumes will not come along. Instead, you can upload the files to an Azure File Share and mount that file share into each instance.
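For instance, an ACI YAML deployment that mounts an Azure File Share might look roughly like the sketch below (resource names, the registry path, and the share are placeholders; double-check the field names against the current ACI YAML reference):

apiVersion: '2019-12-01'
location: westeurope
name: my-app
properties:
  osType: Linux
  containers:
  - name: web
    properties:
      image: myregistry.azurecr.io/web:latest
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
      volumeMounts:
      - name: appdata
        mountPath: /data
  volumes:
  - name: appdata
    azureFile:
      shareName: appdata
      storageAccountName: mystorageaccount
      storageAccountKey: <storage-account-key>

You would then deploy it with something like az container create --resource-group my-rg --file aci-deploy.yaml.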
Instead of using Azure Container Instances, I used Azure WebApps for Containers.
Is there a way to load a single image into a Kubernetes Cluster without going through Container Registries?
For example, if I build an image locally on my laptop, can kubectl do something akin to docker save/load to make that image available in a remote Kubernetes cluster?
I don't think kubectl can make a node load an image that was not built on that node, but I think you can achieve something similar with the Docker daemon CLI (have the remote worker node build the image from your local environment).
Something like:
$ docker -H tcp://<worker-node-ip>:2375 build -t my-image:latest .
or set the Docker host as an environment variable in your local (laptop) environment:
$ export DOCKER_HOST="tcp://<worker-node-ip>:2375"
$ docker ps
Keep in mind that your remote worker node needs access to all the dependencies required to build the image.
See the documentation.
Also, I am not sure why you want to work around using a remote repository, but if the reason is that you don't want to expose your image publicly, I suggest setting up a private Docker registry in the long run.
Kubernetes requires container images to be in a registry - public or private, running in the cluster itself as a pod/container, or remote with respect to the cluster. Even when the registry is on one of the cluster nodes - something often used with a local minikube - the registry is referenced by the node's IP/hostname.
In order for a remote Kubernetes cluster to pull an image from your local laptop, you'd have to run a registry locally (say, via docker run -d -p 5000:5000 --name registry registry:2) and have your laptop reachable across the network.
Note that you will either have to secure the local registry with a trusted certificate and key (for example from Let's Encrypt or another reputable CA), or, if you run an insecure registry, configure Docker on the Kubernetes cluster nodes to trust your insecure registry.
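A minimal sketch of that workflow, assuming the registry started above is reachable from the cluster nodes at <laptop-ip>:5000 (image and pod names are placeholders):

# tag and push the locally built image to the registry running on your laptop
$ docker tag my-image:latest <laptop-ip>:5000/my-image:latest
$ docker push <laptop-ip>:5000/my-image:latest

# reference it from the cluster
$ kubectl run my-app --image=<laptop-ip>:5000/my-image:latest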
I'm running Docker Desktop with the Kubernetes option turned on. I have one node called docker-for-desktop.
Now I have created a new Ubuntu Docker container. I want to add this container to my Kubernetes cluster. Can this be done? How can I do it?
As far as I'm aware, you cannot add a node to Docker for Desktop with Kubernetes enabled.
Docker for Desktop is a single-node Kubernetes or Docker Swarm cluster. You might try kubernetes-the-hard-way, as it explains how to set up a cluster and add nodes manually without using kubeadm.
But I don't think this will work, as there will be a lot of issues with getting the networking set up correctly.
You can also follow the instructions for installing kubeadm, kubelet, and kubectl on a Linux machine and then add the node with kubeadm join, roughly as sketched below.
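A rough sketch of that kubeadm flow, assuming the new node can reach the control plane (all values are placeholders):

# on the existing control-plane node: print a join command with a fresh token
$ kubeadm token create --print-join-command

# on the new node: run the printed command, which looks like
$ kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>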
Do you know if it is possible to mount a local folder into a running Kubernetes container?
Something like docker run -it -v .:/dev some-image bash. I am doing this on my local machine and then remote-debugging into the container from VS Code.
Update: This might be a solution: telepresence
Link: https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
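For example, something along these lines with the older v1-style Telepresence CLI might swap the deployment for a local container with the folder mounted (untested on my side, and the flags may differ between Telepresence versions):

$ telepresence --swap-deployment my-deployment \
    --docker-run --rm -it -v $(pwd):/dev some-image bash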
Do you know if it is possible to mount a folder from the local computer into Kubernetes? The container should also have access to a Cassandra IP address.
Do you know if it is possible?
Kubernetes Volume
Using hostPath would be a solution: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
However, it will only work if your cluster runs on the same machine as your mounted folder.
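A minimal sketch of a pod spec using a hostPath volume (names and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
  - name: app
    image: some-image
    volumeMounts:
    - name: local-code
      mountPath: /dev-code          # path inside the container
  volumes:
  - name: local-code
    hostPath:
      path: /Users/me/project       # folder on the node, i.e. your local machine in this case
      type: Directory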
Another, but probably slightly overpowered, method would be to use a distributed or parallel filesystem and mount it into your container as well as on your local host machine. An example would be CephFS, which allows multiple read-write mounts. You could start a Ceph cluster with Rook: https://github.com/rook/rook
Kubernetes Native Dev Tools with File Sync Functionality
A solution would be to use a dev tool that lets you sync the contents of a local folder to a folder inside a Kubernetes pod. One example is ksync: https://github.com/vapor-ware/ksync
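The ksync quickstart is roughly the following (the selector and paths are placeholders; check the ksync README for the exact flags of your version):

$ ksync init                                        # installs the ksync DaemonSet into the cluster
$ ksync watch &                                     # runs the local sync daemon
$ ksync create --selector=app=my-app $(pwd) /code   # sync the current folder to /code in matching pods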
I have tested ksync and many Kubernetes-native dev tools (e.g. Telepresence, Skaffold, Draft) but found them very hard to configure and time-consuming to use. That's why a colleague and I created an open-source project called DevSpace: https://github.com/loft-sh/devspace
It allows you to configure a real-time two-way sync between local folders and folders within containers running inside k8s pods. It is the only tool that lets you use hot-reloading tools such as nodemon for Node.js. It works with volumes as well as with ephemeral / non-persistent folders, lets you directly enter the containers (similar to kubectl exec), and much more. It works with minikube and any other self-hosted or cloud-based Kubernetes cluster.
Let me know if that helps you and feel free to open an issue if you are missing something you need for your optimal dev workflow with Kubernetes. We will be happy to work on it.
As long as we are talking about something like docker -v, a hostPath volume type should do the trick. But that means the content you want to use has to be stored on the node that the pod will run on. In the case of GKE, the code would need to exist on the Google Compute Engine node, not on your workstation. If you have a local k8s cluster provisioned for local dev (minikube, kubeadm, ...), it can be set up to work as well.
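With minikube, for example, you can first make a workstation folder visible inside the node and then point a hostPath volume at it (paths are placeholders):

# keeps running in the foreground while the mount is active
$ minikube mount /Users/me/project:/minikube-host/project

# then use /minikube-host/project as the hostPath path in the pod spec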
I am not sure if this is possible, but if it is and someone has experience doing it: could we not create a Docker image for Windows that represents a node?
I imagine we would have a folder with configuration files that can be mounted with docker -v,
and then, if one needed a 5-node cluster, I would just run
docker run -v c:/dev/config:c:/config microsoft/servicefabric create-node --someOptions
for each node we wanted.
Are there any barriers to doing this? Has anyone created Docker images for this? It would really simplify setting up a cluster on premises.
Using the 6.1 release, you can run a cluster in a container for dev/test purposes.
I'm not sure if you can get it to work with multiple containers though.
Service Fabric Linux Clusters in a Container
We have provided a pre-configured Docker container image to run Service Fabric Linux clusters in a container. The main scenario is to provide a light-weight development experience for MacOS, but the container will also run on Linux and Windows, using Docker CE.
To get started, follow the directions here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
and
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows
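From memory, the quickstart in those docs boils down to something like the docker run below; treat the image name and ports as assumptions and use whatever the linked pages currently specify:

$ docker run -d --name sftestcluster \
    -p 19080:19080 -p 19000:19000 -p 25100-25200:25100-25200 \
    mcr.microsoft.com/service-fabric/onebox:latest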