how to execute docker diff command using kubernetes

I am using docker diff in my application in order to find all changed files in a container. Now my application manages containers through Kubernetes and doesn't have direct access to them. I found Kubernetes equivalents for several docker commands (like kubectl logs), but docker diff is missing.
Is there a way to execute docker diff for a pod through Kubernetes?
Many thanks

Kubernetes (kubectl) does not offer an equivalent command. Ideally you should not be using this command at all outside your local development environment (which is Docker).
The best practice is to start containers with a read-only root filesystem, so that you avoid storing any important state in containers. Kubernetes can kill your pod and restart it on another node as a new container, so you should not care about the diff that accumulates inside a container.
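For illustration, here is a minimal pod manifest sketch of that practice (the image name and mount path are placeholders, not from the question): the container gets a read-only root filesystem plus an emptyDir for the few paths it genuinely needs to write.

apiVersion: v1
kind: Pod
metadata:
  name: readonly-example
spec:
  containers:
    - name: app
      image: my-app:latest               # placeholder image
      securityContext:
        readOnlyRootFilesystem: true     # root filesystem becomes immutable
      volumeMounts:
        - name: scratch
          mountPath: /tmp                # writable scratch space only
  volumes:
    - name: scratch
      emptyDir: {}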

Related

Is there a way to mount docker socket from one container to another?

I'm looking to mount the docker socket from one container to another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of Docker, so I set up Docker within a container, which works okay. Now I need other Docker containers to use the socket from that base container and not the host. Is there any way to achieve this (in Kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
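A rough sketch of what that could look like, assuming the daemon's socket ends up at a node-visible path such as /var/run/docker.sock (the image name and paths below are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-consumer
spec:
  containers:
    - name: client
      image: docker:cli                  # placeholder client image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock       # socket path on the node
        type: Socket                     # fails if no socket exists there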
Even if it works, you will end up with "other containers" launching containers within your "newer docker" container, which is not a good practice. I would suggest spinning up another node with a newer Docker, connecting it to your master, and scheduling the part of the load that requires access to the Docker socket there. You can use a nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
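For example (a hedged sketch; the label key/value and image name are made up for illustration): label the newer-Docker node once, then pin the socket-dependent pods to it with a matching nodeSelector.

# one-time step: kubectl label nodes <node-name> docker=new
apiVersion: v1
kind: Pod
metadata:
  name: docker-dependent-work
spec:
  nodeSelector:
    docker: new                          # matches the label above
  containers:
    - name: worker
      image: my-worker:latest            # placeholder image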
You can take this further on k8s by turning your control container into an operator https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017 (use the k8s API instead of the Docker socket).

Is it possible to mount a local computer folder to Kubernetes for development, like docker run -v

Do you know if it is possible to mount a local folder into a running Kubernetes container?
Like docker run -it -v .:/dev some-image bash. I am doing this on my local machine and then remote debugging into the container from VS Code.
Update: This might be a solution: telepresence
Link: https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
Do you know if it is possible to mount a local computer folder to Kubernetes? This container should have access to a Cassandra IP address.
Do you know if it is possible?
Kubernetes Volume
Using hostPath would be a solution: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
However, it will only work if your cluster runs on the same machine as your mounted folder.
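A minimal sketch of the hostPath approach, roughly the Kubernetes counterpart of docker run -v (all names and paths below are placeholders; the source path must exist on the node the pod lands on):

apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
    - name: app
      image: some-image:latest           # placeholder image
      volumeMounts:
        - name: src
          mountPath: /workspace          # path inside the container
  volumes:
    - name: src
      hostPath:
        path: /home/me/project           # path on the node, not your workstation
        type: Directory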
Another, but probably slightly over-powered, method would be to use a distributed or parallel filesystem and mount it into your container as well as on your local host machine. An example would be CephFS, which allows multiple read-write mounts. You could start a Ceph cluster with Rook: https://github.com/rook/rook
Kubernetes Native Dev Tools with File Sync Functionality
A solution would be to use a dev tool that syncs the contents of a local folder to a folder inside a Kubernetes pod. One example is ksync: https://github.com/vapor-ware/ksync
I have tested ksync and many kubernetes native dev tools (e.g. telepresence, skaffold, draft) but I found them very hard to configure and time-consuming to use. That's why I created an open source project called DevSpace together with a colleague: https://github.com/loft-sh/devspace
It allows you to configure a real-time two-way sync between local folders and folders within containers running inside k8s pods. It is the only tool that is able to let you use hot reloading tools such as nodemon for nodejs. It works with volumes as well as with ephemeral / non-persistent folders and lets you directly enter the containers similar to kubectl exec and much more. It works with minikube and any other self-hosted or cloud-based kubernetes clusters.
Let me know if that helps you and feel free to open an issue if you are missing something you need for your optimal dev workflow with Kubernetes. We will be happy to work on it.
As long as we are talking about something like docker -v, a hostPath volume type should do the trick. But that means the content you want to use has to be stored on the node that the Pod will run on. In the case of GKE that would mean the code needs to exist on the Google Compute Engine node, not on your workstation. If you have a local k8s cluster provisioned (minikube, kubeadm, ...) for local dev, that could be set up to work as well.

how to run an on-premises service fabric cluster in docker containers for windows?

I am not sure if this is possible, but if it is and someone has the experience to do so: could we not create a Docker image for Windows that represents a node?
I imagine that we would have a folder with configuration files that can be mounted with docker -v.
Then, if one needed a 5-node cluster, I would just run
docker run -v c:/dev/config:c:/config microsoft/servicefabric create-node --someOptions
for each node we wanted.
Are there any barriers to doing this? Has anyone created Docker images for doing so? This would really simplify setting up a cluster on premises.
Using the 6.1 release you can run a cluster in a container, for dev/test purposes.
I'm not sure if you can get it to work with multiple containers though.
Service Fabric Linux Clusters in a Container
We have provided a pre-configured Docker container image to run Service Fabric Linux clusters in a container. The main scenario is to provide a light-weight development experience for MacOS, but the container will also run on Linux and Windows, using Docker CE.
To get started, follow the directions here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started-mac
and
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-local-linux-cluster-windows

Kubernetes: How to run a Bash command on Docker container B from Docker container A

I've set up a simple Kubernetes test environment based on Docker-Multinode. I've also set up a fairly large Docker image based on Ubuntu, which contains several of my tools. Most of them are written in C++ and have quite a lot of dependencies on libraries installed on the system and elsewhere.
My goal is to distribute legacy batch tasks between multiple nodes. I liked the easy setup of Docker-Multinode, but now I wonder if this is the right thing for me - since I have my actual legacy applications in the other Docker image.
How can I run a Bash command on Docker container B (the Ubuntu Docker container with my legacy tools) from Docker container A (the multinode worker Docker container)? Or is this not advisable at all?
To clarify, Docker container A (the multinode worker Docker container) and Docker container B (the legacy tools Ubuntu Docker container) run on the same host (each machine will have both of them).
Your question is really not clear:
Kubernetes runs Docker containers; any Docker container.
Kubernetes itself runs in Docker, and in 'multi-node' the dependencies needed by Kubernetes run in Docker, but in what is called bootstrapped Docker.
Now, it's not clear in your question where Docker A runs, vs. Docker B.
Furthermore, if you want to 'distribute' batch jobs, then each job should run independently in its own container, and container A should not depend on container B.
If you need the dependencies (libraries) in Docker container B to run your job, then you really only need to use Docker container B's image as the base image for your job containers A. A Docker image is layered, so even if it is big, it is only stored once on a node; it's not a problem to have five containers of type A with B as the base image, because the base image is only pulled once.
If you really need a container to communicate with another, then you should build an API to pass commands from one to the other (a RESTful API of some sort, that can communicate via HTTP calls to pass requests for a process to run on one container and return a result).

How to manifest a container with /dev/console from a pod definition with Kubernetes?

We use systemd in our container to manage the processes running in the container.
We configure journald in the container so that it sends all logs to /dev/console.
In order to have /dev/console in a container we have to use "-t" option of Docker when we deploy the container.
I would like to ask, what the equivalent way is with Kubernetes. Where can we state in the pod manifest that we need /dev/console in the containers?
I understand that with kubectl it is possible (with "--tty" or "-t"), but we do not want to start containers with kubectl.
We do support TTY containers in kubernetes v1.1, but not a tty without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice", but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
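For reference, a hedged sketch of a pod spec that requests a TTY via the container-level tty and stdin fields (the image name is a placeholder); whether this gives journald the /dev/console behaviour you need is something to verify in your environment.

apiVersion: v1
kind: Pod
metadata:
  name: systemd-pod
spec:
  containers:
    - name: app
      image: my-systemd-image:latest     # placeholder image
      stdin: true                        # keep stdin open, like docker run -i
      tty: true                          # allocate a TTY, like docker run -t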
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a pod. So it would not help you deploy pods with /dev/console defined.
As far as I can see there's no way in current Kubernetes to cause pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has the most value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per pod, and/or multiple pods as replicas, including monitoring, logging, restart, etc. Having a separate launch/init and logging scheme inside each container doesn't fit the usual Kubernetes use case.