Docker swarm service resolution with same name as another host - service

I have a weird situation where one of the service names, let's say 'myservice', in Docker swarm shares its name with an actual host on my network. Sometimes the resolution of 'myservice' picks up that host's IP and things fail, since it's not related to anything I am running. Is there a way to reference 'myservice' in a fashion that forces Docker to resolve it against its own services? Is that 'tasks.myservice', or is there something better?
Docker swarm CE 17.09 is the version in use

The easiest thing to do is change your Swarm service name, or give the service a custom DNS name that's different from the service name using the --hostname option.
I would think the Docker internal DNS would always resolve bridge/overlay network hostnames first before searching external resolvers.
Note that containers on Docker virtual networks will never resolve the container hostname of a container on a different bridge/overlay network, so in those cases they would correctly fall back to external DNS.
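For example, a minimal sketch of renaming the service so its DNS entry no longer collides with the LAN host (the service, network, and image names here are assumptions):

```shell
# Create the service under a name that does not collide with the LAN host
docker network create --driver overlay my-overlay
docker service create --name myservice-swarm --network my-overlay myimage

# From another container attached to my-overlay, Docker's internal DNS
# resolves both of these before falling back to external resolvers:
#   myservice-swarm        -> the service's virtual IP (VIP)
#   tasks.myservice-swarm  -> DNS round-robin over the individual task IPs
```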

Related

Internal and External reverse proxy network using kubernetes and traefik, how?

I am trying to learn Kubernetes and Rancher. Here is what I want to accomplish:
I have a few docker containers which I want to serve only to my internal network using x.mydomain.com
I have the same as above, but those containers will be accessible from the internet on x.mydomain.com
What I have at the moment is the following:
Rancher server
RancherOS to be used for the cluster and as one node
I have made a cluster, added the node from 2., and disabled the nginx controller.
Installed the traefik app
I have forwarded ports 80 and 443 to my node.
Added a few containers
Added ingress rules
So at the moment it works with the external network. I can open app1.mydomain.com from the internet and everything works as it should.
Now my problem is: how can I add the internal network?
Do I create another cluster? Another node on the same host? Should I install two Traefik instances and then use a class in the ingress for the internal stuff?
My idea was to add another IP to the same interface on the RancherOS host, then add another node on the same host but with the other IP, but I can't get it to work. Rancher sees both nodes with the same name and doesn't use the information I give it (I mean --address) when creating the node. Of course, even when I do this, it would require that I set up a DNS server internally so it knows which domains are served internally, but I haven't done that yet since I can't seem to figure out how to handle the two IPs on the host and use them in two different nodes. I am unsure what is required; maybe I am going down the wrong route.
I would appreciate it if somebody had some ideas.
Update :
I thought I had made clear above what I want. There is no YAML at the moment since I don't know how to do it. In my head what I want is simple. Let me try to boil it down to an example:
I want 2 docker containers with web servers accessible from the internet on web1.mydomain.com and web2.mydomain.com, and at the same time 2 docker containers with web servers that I can access only from the internal network on web3.mydomain.com and web4.mydomain.com.
Additional info :
- I only have one host that will be hosting the services.
- I only have one public IPv4 address.
- I can add an additional IP alias to the one host I have.
- I can, if needed, configure an internal DNS server.
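To illustrate the two-Traefik / ingress-class idea raised above, here is a hypothetical sketch of an internal-only ingress; the hostname, class name, and service name are assumptions, and it presumes a second Traefik instance started with a matching --kubernetes.ingressclass value:

```yaml
# Hypothetical ingress picked up only by a Traefik instance configured
# with an "internal" ingress class; the external Traefik ignores it.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web3-internal
  annotations:
    kubernetes.io/ingress.class: traefik-internal   # assumed class name
spec:
  rules:
  - host: web3.mydomain.com
    http:
      paths:
      - backend:
          serviceName: web3      # assumed service name
          servicePort: 80
```

An internal DNS server would then point web3.mydomain.com at the internal Traefik's address only.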
/donnib

How hazelcast get overlay network ip in Docker Swarm

In my 3-node Docker Swarm environment, running a Spring Cloud JHipster application that uses Hazelcast, Hazelcast picks up my docker_gwbridge address, but I want it to use my cluster's overlay network IP address.
At startup there is a warning, "Could not find a matching address to start with! Picking one of non-loopback addresses.", and Hazelcast then takes the docker_gwbridge IP address, which does not match the JHipster microservice's IP address.
I have found the solution: https://github.com/bitsofinfo/hazelcast-docker-swarm-discovery-spi,
which provides two ways to solve these network errors.
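Alternatively, if the overlay network's subnet is known in advance, Hazelcast's interfaces configuration can force it to bind to the overlay address instead of docker_gwbridge; this is a sketch, and the 10.0.*.* subnet below is an assumption to be replaced with your swarm's actual overlay range:

```xml
<!-- Fragment of hazelcast.xml: restrict member binding to the overlay subnet -->
<hazelcast>
  <network>
    <interfaces enabled="true">
      <!-- Assumed overlay subnet; adjust to your swarm's range -->
      <interface>10.0.*.*</interface>
    </interfaces>
  </network>
</hazelcast>
```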

How to access another container in a pod

I have set up a multi-container pod consisting of multiple interrelated microservices. With docker-compose, if I wanted to access another container in the same compose project, I would just use the name of the service.
I am trying to do the same thing with Kube, without having to create a pod per microservice.
I tried the name of the container, and the name suffixed with .local; neither worked, and I got an UnknownHostException.
My preference is also to have all the microservices running on port 80, but in case that does not work within a single pod, I also tried having each microservice run on its own port and using localhost. That didn't work either; it simply said connection refused (as opposed to Unknown Host).
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports.
https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication
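A minimal sketch of such a pod (the names and images are hypothetical): both containers share one IP, so each must listen on a distinct port, and they reach each other via localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-service
spec:
  containers:
  - name: svc-a
    image: example/svc-a       # hypothetical image
    ports:
    - containerPort: 8080
  - name: svc-b
    image: example/svc-b       # hypothetical image; would call http://localhost:8080
    ports:
    - containerPort: 8081
```

Running every microservice on port 80 cannot work here, because two containers in one pod cannot both bind the same port.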

How should docker-compose jobs discover etcd?

The documentation for etcd says that in order to connect to etcd from a job running inside a container, you need to do the following:
[...]you must use the IP address assigned to the docker0 interface on the CoreOS host.
$ curl -L http://172.17.42.1:2379/v2/keys/
What's the best way of passing this IP address to all of my container jobs? Specifically I'm using docker-compose to run my container jobs.
The documentation you reference is making a few assumptions without stating those assumptions.
I think the big assumption is that you want to connect to an etcd that is running on the host from a container. If you're running a project with docker-compose you should run etcd in a container as part of the project. This makes it very easy to connect to etcd. Use the name you gave to the etcd service in the Compose file as the hostname. If you named it etcd, you would use something like this:
http://etcd:2379/v2/keys/
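A minimal docker-compose.yml sketch of this setup (the app image and the environment variable name are assumptions):

```yaml
version: "2"
services:
  etcd:
    image: quay.io/coreos/etcd
    command: >-
      etcd
      --listen-client-urls http://0.0.0.0:2379
      --advertise-client-urls http://etcd:2379
  app:
    image: myapp                  # hypothetical application image
    environment:
      # The service name "etcd" resolves via Compose's internal DNS
      ETCD_ENDPOINT: http://etcd:2379
    depends_on:
      - etcd
```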

What's a workable way setup an Akka cluster in a multi-node Docker environment?

Assume the picture below. Each Docker container belongs to a single Akka cluster "foo", and each container runs one cluster node. The IP address assigned by Docker (inside the container) is given in green. All the internal ports are 9090 but are mapped to various external ports on the host.
What is the Akka URI for the node in, say, Docker 5? Would it be akka.tcp://foo#10.0.0.195:9101?
I've read some blogs on Akka and Docker that involve linking but this doesn't seem workable (?) for
a multi-node deployment and I'm not sure how linking scales to 100s of nodes.
I need some way for Akka to know the address of its cluster. Left to its own devices, Docker 5 might
decide it's reachable at akka.tcp://foo#192.178.1.2:9090, which is useless/unreachable outside of its own container.
At this point I'm thinking I pass the host's IP and port (e.g. 10.0.0.195:9101) to the Docker container
as a parameter on start-up for Akka to use when it configures itself.
Would this work, or is there a better way to go?
Indeed! New Akka (a snapshot at the time of posting) has nifty new bind settings that solve this problem. There is an example of their use here: https://github.com/gzoller/docker-exp/tree/cluster
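A sketch of those bind settings with classic remoting configuration; the concrete IP and ports are taken from the question's example, and in practice they would be passed in at container start-up rather than hard-coded:

```hocon
akka.remote.netty.tcp {
  # Address other cluster nodes use to reach this node (host IP + mapped port)
  hostname = "10.0.0.195"
  port     = 9101
  # Address the node actually binds to inside the container
  bind-hostname = "0.0.0.0"
  bind-port     = 9090
}
```

This separates the advertised (external) address from the bound (internal) one, which is exactly the mismatch the question describes.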