I have a Linux CentOS host running Docker CE with multiple containers making up a few multi-container apps (web apps using docker-compose), and I would like to migrate those containers to Azure's serverless container platform.
How can I migrate all those containers together with their volumes?
Will creating an Azure container registry and pushing the containers to that registry also move the data volumes, or what is the process to migrate?
Thanks
Will creating an Azure container registry and pushing the containers to that registry also move the data volumes?
No. Pushing the images to Azure Container Registry only pushes the images, not the volumes that are mounted into the containers.
How can I migrate all those containers together with their volumes?
You can use docker-compose with an ACI context to deploy the compose file to Azure Container Instances, but the local volumes will not come along. Instead, you can upload the files to an Azure File Share and mount the file share into each instance, roughly as sketched below.
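A minimal sketch of that approach, assuming the Docker ACI integration and placeholder names for the registry, storage account, and share (the contents of each local volume would first be copied into the corresponding Azure File Share, e.g. with az storage file upload-batch):

```yaml
# docker-compose.yml deployed with an ACI context (docker context create aci ... && docker compose up)
# Registry, share, storage account, and mount path below are placeholders.
services:
  webapp:
    image: myregistry.azurecr.io/webapp:latest
    ports:
      - "80:80"
    volumes:
      - appdata:/var/lib/app        # same path the container used on the CentOS host

volumes:
  appdata:
    driver: azure_file              # backed by the Azure File Share that now holds the old volume data
    driver_opts:
      share_name: appdata
      storage_account_name: mystorageaccount
```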
Instead of using Azure Container Instances, I ended up using Azure Web Apps for Containers.
Related
I am using a kind Kubernetes cluster on my Mac, and I created it using a config file with mounts as shown here.
The pain here is that I have to load all images onto the cluster myself: I pull them with docker pull from my company's Docker registry (Artifactory) and then load them with kind load docker-image ${imageTag}. Could I use Docker Desktop instead, so I don't have to load images separately, and still mount volumes into the Docker Desktop Kubernetes cluster? And where is the config file to create the cluster in Docker Desktop?
Solving this would help hugely!
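For reference, since the config mentioned above isn't shown, a kind cluster config with host mounts typically looks something like this (the paths are placeholders):

```yaml
# kind-config.yaml -- illustrative only; host and container paths are placeholders
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /Users/me/projects/data   # folder on the Mac
        containerPath: /data                # path inside the kind node, usable by hostPath volumes
```

Docker Desktop's built-in Kubernetes shares the image store of the local Docker daemon, which is why it typically doesn't need a separate kind load docker-image step; it is not, however, created from a kind-style config file.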
I have a Helm + Kubernetes setup. I need to store a large file (~30-80 MB) in the cluster and mount it into pods. How do I achieve this so that I don't have to manually upload the file to every environment?
You can share common files using NFS. There are many ways to use NFS with Kubernetes, such as this one. If your cluster is managed by a cloud provider such as AWS, you can consider EFS, which is NFS-compatible; NFS-compatible storage is very common on cloud platforms today. This way you never need to manually upload files to worker nodes. Your Helm chart then only needs to create the necessary PersistentVolume/PersistentVolumeClaim and the volume mount to access the shared files, for example:
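A sketch of what the chart could create, assuming an NFS server reachable at a placeholder address:

```yaml
# Illustrative only -- server address, export path, and names are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-files-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 10.0.0.10            # NFS server or EFS mount target
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  volumeName: shared-files-pv
  resources:
    requests:
      storage: 1Gi
# Pods then mount the claim wherever the file is expected:
#   volumes:
#     - name: shared-files
#       persistentVolumeClaim:
#         claimName: shared-files
#   volumeMounts:
#     - name: shared-files
#       mountPath: /data/shared
```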
One way to do this is to use a Helm install/upgrade hook combined with an init container.
Set a Helm hook that creates a Kubernetes Job which downloads the file onto the mounted volume.
An init container on the pod then waits (indefinitely, if need be) until the download is complete.
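A rough sketch of that pattern, reusing the shared PVC from the sketch above; the image names, URL, and file name are placeholders:

```yaml
# Illustrative only -- download URL, file name, and images are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: download-large-file
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: downloader
          image: curlimages/curl
          command: ["sh", "-c", "curl -fsSL -o /data/big-file.bin https://example.com/big-file.bin"]
          volumeMounts:
            - name: shared-files
              mountPath: /data
      volumes:
        - name: shared-files
          persistentVolumeClaim:
            claimName: shared-files
# In the application pod, an init container blocks until the file exists:
#   initContainers:
#     - name: wait-for-file
#       image: busybox
#       command: ["sh", "-c", "until [ -f /data/big-file.bin ]; do sleep 5; done"]
#       volumeMounts:
#         - name: shared-files
#           mountPath: /data
```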
I have a Corda example and a Spring web server built and deployed on an Azure VM.
Now I would like to try running each node in Kubernetes containers. Any references?
Yes, this is totally doable. Take a look at this example repo I found online on running Corda Docker containers in a docker-compose cluster along with the individual processes.
Hope this helps:
https://github.com/EricMcEvoyR3/corda-docker-compose
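For orientation, a single node service in such a compose file looks roughly like this; the image tag, ports, and mount paths are assumptions on my part, so treat the linked repo as the authoritative layout:

```yaml
# Illustrative sketch of one Corda node service -- image tag, ports, and mount paths are assumptions.
version: "3"
services:
  party-a:
    image: corda/corda-zulu-java1.8-4.4
    ports:
      - "10200:10200"   # P2P
      - "10201:10201"   # RPC
    volumes:
      - ./party-a/node.conf:/etc/corda/node.conf
      - ./party-a/certificates:/opt/corda/certificates
      - ./party-a/persistence:/opt/corda/persistence
      - ./shared/cordapps:/opt/corda/cordapps
```

Each node (and the notary) gets its own service, and the same containers can later be translated into Kubernetes Deployments or StatefulSets, with the bind mounts becoming volumes.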
Do you know if it is possible to mount a local folder into a container running in Kubernetes?
Something like docker run -it -v .:/dev some-image bash, which I am doing on my local machine and then remote-debugging into the container from VS Code.
Update: this might be a solution: Telepresence
Link: https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
Do you know if it is possible to mount a folder from a local computer into Kubernetes? The container would need access to a Cassandra IP address.
Kubernetes Volume
Using hostPath would be a solution: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
However, it will only work if your cluster runs on the same machine as your mounted folder.
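A minimal sketch of a pod using such a hostPath mount (names and paths are placeholders; the host path must exist on the node that runs the pod):

```yaml
# Illustrative only -- some-image and the paths are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
    - name: app
      image: some-image
      stdin: true
      tty: true
      volumeMounts:
        - name: local-src
          mountPath: /dev-src          # where the folder appears inside the container
  volumes:
    - name: local-src
      hostPath:
        path: /home/me/project         # folder on the Kubernetes node, not on a remote workstation
        type: Directory
```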
Another, though probably slightly over-powered, method would be to use a distributed or parallel filesystem and mount it both into your container and on your local host machine. An example would be CephFS, which allows multiple read-write mounts. You could start a Ceph cluster with Rook: https://github.com/rook/rook
Kubernetes Native Dev Tools with File Sync Functionality
A solution would be to use a dev tool that syncs the contents of a local folder to a folder inside a Kubernetes pod. One example is ksync: https://github.com/vapor-ware/ksync
I have tested ksync and many Kubernetes-native dev tools (e.g. Telepresence, Skaffold, Draft), but I found them very hard to configure and time-consuming to use. That's why I created an open-source project called DevSpace together with a colleague: https://github.com/loft-sh/devspace
It allows you to configure a real-time two-way sync between local folders and folders within containers running inside Kubernetes pods. It is the only tool that lets you use hot-reloading tools such as nodemon for Node.js. It works with volumes as well as with ephemeral/non-persistent folders, lets you enter containers directly (similar to kubectl exec), and much more. It works with minikube and any other self-hosted or cloud-based Kubernetes cluster.
Let me know if that helps you, and feel free to open an issue if something you need for your optimal dev workflow with Kubernetes is missing. We will be happy to work on it.
As long as we are talking about something like docker -v, a hostPath volume type should do the trick. But that means the content you want to use has to be stored on the node that the pod will run on. In the case of GKE, the code would need to exist on the Google Compute Engine node, not on your workstation. If you have a local Kubernetes cluster provisioned for local development (minikube, kubeadm, ...), that could be made to work as well.
I have an app that I deploy as part of a stream with Spring Cloud Data Flow on a Kubernetes cluster. The Docker image for the app contains a VOLUME instruction, and I'd like to specify a directory on the host to mount the volume to. (This is network-attached storage that all hosts in the cluster can access.)
I didn't see anything in KubernetesDeployerProperties.
Is this possible?
Sorry, there is no built-in support for volumes. Feel free to raise an issue here: https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues