The documentation for etcd says that in order to connect to etcd from a job running inside a container, you need to do the following:
[...]you must use the IP address assigned to the docker0 interface on the CoreOS host.
$ curl -L http://172.17.42.1:2379/v2/keys/
What's the best way of passing this IP address to all of my container jobs? Specifically I'm using docker-compose to run my container jobs.
The documentation you reference makes a few assumptions without stating them.
I think the big assumption is that you want to connect to an etcd that is running on the host from a container. If you're running a project with docker-compose you should run etcd in a container as part of the project. This makes it very easy to connect to etcd. Use the name you gave to the etcd service in the Compose file as the hostname. If you named it etcd, you would use something like this:
http://etcd:2379/v2/keys/
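A minimal sketch of such a Compose file (the image tag, the ETCD_* environment settings, and the app service are assumptions; adjust them to the etcd version you actually run, and note this assumes the image starts etcd by default):

version: "2"
services:
  etcd:
    image: quay.io/coreos/etcd:v2.3.8
    environment:
      # etcd reads ETCD_* environment variables as configuration
      - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379
  app:
    build: .
    depends_on:
      - etcd
    environment:
      # hypothetical variable your own code would read
      - ETCD_ENDPOINT=http://etcd:2379

Any container in the app service can then reach etcd with, for example, curl http://etcd:2379/v2/keys/.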
Related
Is there a way to load a single image into a Kubernetes Cluster without going through Container Registries?
For example, if I build an image locally on my laptop, can kubectl do something akin to docker save/load to make that image available in a remote Kubernetes cluster?
I don't think kubectl can make a node load an image that was not built on it, but I think you can achieve something similar with the Docker daemon's remote API: point your local Docker CLI at the remote worker node and build the image there.
Something like:
$ docker -H tcp://<remote-node-ip>:2375 build -t myimage .
or set the remote Docker host as an environment variable in your local (laptop) environment:
$ export DOCKER_HOST="tcp://<remote-node-ip>:2375"
$ docker ps
Keep in mind that your remote worker node needs access to all the dependencies required to build the image.
See documentation
Also, I am not sure why you want to avoid a remote repository, but if the reason is that you don't want to expose your image publicly, I suggest setting up a private Docker registry in the long run.
Kubernetes requires container images to be in a registry - public or private, running in the cluster itself as a pod/container, or remote with respect to the cluster. Even when the registry is on one of the cluster nodes - something often done with a local minikube - the registry is referenced by the node's IP/hostname.
In order for a remote Kubernetes cluster to pull an image from your laptop, you'd have to run a registry locally (say, via docker run -d -p 5000:5000 --name registry registry:2) and have your laptop be reachable from the cluster nodes across the network.
Note that you will either need to secure the local registry with a trusted certificate and key (for example from Let's Encrypt or another reputable CA), or, if you run an insecure registry, configure Docker on the Kubernetes cluster nodes to trust that insecure registry.
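A rough sketch of that approach (the registry address, image name, and tag are placeholders, and this assumes the trust/insecure-registry concerns above have been addressed):

# On the laptop: run a registry and push the locally built image to it
$ docker run -d -p 5000:5000 --name registry registry:2
$ docker tag myimage:latest <laptop-ip>:5000/myimage:latest
$ docker push <laptop-ip>:5000/myimage:latest

# From the cluster: reference the image by the registry address
$ kubectl run myapp --image=<laptop-ip>:5000/myimage:latest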
I'm looking to mount the Docker socket from one container into another without involving the host. Is it possible? I searched around and couldn't find an example of such a situation. The issue is that the host uses a very old version of Docker, so I set up Docker within a container, which works okay. Now I need other containers to use the socket from that base container and not the host. Is there any way to achieve this (in Kubernetes)?
The only way that comes to mind is to use a hostPath volume with type Socket and mount it into multiple containers:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
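A minimal pod sketch for that, assuming your "newer docker" container exposes its socket on a path on the host (for example by mounting a hostPath directory itself); the socket path and image here are placeholders, and the hostPath type field needs Kubernetes 1.8 or newer:

apiVersion: v1
kind: Pod
metadata:
  name: docker-client
spec:
  containers:
  - name: client
    image: docker:17.09            # any image that ships the docker CLI
    command: ["docker", "ps"]      # just to show the socket is usable
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/newer-docker/docker.sock   # wherever the in-container daemon writes its socket on the host
      type: Socket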
Even if it works, you will end up with the "other containers" launching containers inside your "newer docker" container, which is not a good practice. I would suggest spinning up another node with a newer Docker, connecting it to your master, and running the part of the load that requires access to the Docker socket there. You can use nodeSelector to schedule it properly:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-two-add-a-nodeselector-field-to-your-pod-configuration
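A sketch of that, with a made-up node name and label:

# Label the node that runs the newer Docker
$ kubectl label nodes <new-node-name> docker-version=new

# ...and pin the pods that need the Docker socket to it in their spec:
spec:
  nodeSelector:
    docker-version: "new"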
You could take this further on Kubernetes by turning your control container into an operator (see https://www.slideshare.net/Jakobkaralus/the-kubernetes-operator-pattern-containerconf-nov-2017) and using the Kubernetes API instead of the Docker socket.
I have a weird situation where one of my service names, let's say 'myservice', in Docker Swarm shares its name with an actual host on my network. Sometimes the resolution of 'myservice' picks up that host's IP and things fail, since that host has nothing to do with what I am running. Is there a way to reference 'myservice' in a fashion that forces Docker to resolve it to its own service? Is that 'tasks.myservice', or is there something better?
Docker swarm CE 17.09 is the version in use
The easiest thing to do is change your Swarm service name... or keep the service name and give it a custom hostname that's different from the service name, using the --hostname option.
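For example (the service, network, and image names here are placeholders):

# Create the service under a name that cannot collide with the real host
$ docker service create --name myservice-internal --network my-overlay myimage:latest

# Other services on the same overlay network can then resolve either
#   myservice-internal          (the service VIP)
# or
#   tasks.myservice-internal    (one DNS A record per task)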
I would think the docker internal DNS would always resolve bridge/overlay network hostnames first before searching external resolvers.
Note that containers on Docker virtual networks will never resolve the container hostname of a container on a different bridge/overlay network, so in those cases they would correctly fall through to external DNS.
I have set up a multi-container pod consisting of multiple interrelated microservices. With docker-compose, if I wanted to access another container in the Compose project, I just used the name of the service.
I am trying to do the same thing with Kube, without having to create a pod per microservice.
I tried the name of the container, and also the name suffixed with .local; neither worked and I got an UnknownHostException.
My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried having each microservice run on its own port and connecting via localhost. That didn't work either; it simply said connection refused (as opposed to Unknown Host).
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
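A minimal sketch of such a pod (names, images, and ports are placeholders): each container must listen on its own port, and they reach each other over localhost.

apiVersion: v1
kind: Pod
metadata:
  name: my-microservices
spec:
  containers:
  - name: users
    image: example/users-service:latest
    ports:
    - containerPort: 8080
  - name: orders
    image: example/orders-service:latest
    ports:
    - containerPort: 8081

Inside the pod, orders reaches users at http://localhost:8080 and users reaches orders at http://localhost:8081; two containers cannot both bind port 80.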
I am using docker diff in my application in order to find all changed files in a container. Now, my application manages containers through Kubernetes and doesn't have direct access to them. I found Kubernetes equivalents of several docker commands (like kubectl logs), but docker diff is missing.
Is there a way to execute docker diff for a pod through Kubernetes?
Many thanks
Kubernetes (kubectl) does not offer an equivalent command. Ideally you should not be using this command at all outside your local development environment (which is plain Docker).
The best practice is to start containers with a read-only root filesystem, so that you avoid storing any important state in the container. Kubernetes can kill your pod and restart it on another node as a new container, so you should not care about the docker diff of any particular container.
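A sketch of how to enforce that per container via its securityContext (pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: readonly-example
spec:
  containers:
  - name: app
    image: example/app:latest
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp            # writable scratch space, if the app needs one
  volumes:
  - name: tmp
    emptyDir: {}

With this in place there is nothing meaningful for docker diff to report, because the container cannot accumulate changes on its root filesystem.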