How to allow one zone of Eureka services to talk to multiple zones, while services in another zone are only allowed to talk with each other - netflix-eureka

I have been trying to configure my Eureka clients/servers to get this behavior, but I can't make it work:
services in zone0 will only talk to each other and never with zone1 or zone2
services in zone1 will prefer to talk to each other but can also fall back to zone0 if a service is not available, and never talk with zone2
services in zone2 will prefer to talk to each other but can also fall back to zone0 if a service is not available, and never talk with zone1
I would expect the following client configurations to work; I don't know if I'm missing something or if I have just misunderstood the whole thing :)
Clients in zone0:
eureka.instance.metadataMap.zone=zone0
eureka.client.zone0.availabilityZones=zone0
eureka.client.serviceUrl.zone0=http://zone0.localdomain/eureka
eureka.client.serviceUrl.defaultZone=http://zone0.localdomain/eureka
eureka.instance.preferIpAddress=true
eureka.client.preferSameZone=true
eureka.client.register-with-eureka=true
Clients in zone1:
eureka.instance.metadataMap.zone=zone1
eureka.client.zone1.availabilityZones=zone1,zone0
eureka.client.serviceUrl.defaultZone=http://zone1.localdomain/eureka
eureka.client.serviceUrl.zone1=http://zone1.localdomain/eureka
eureka.client.serviceUrl.zone0=http://zone0.localdomain/eureka
eureka.instance.preferIpAddress=true
eureka.client.preferSameZone=true
eureka.client.register-with-eureka=true
Clients in zone2:
eureka.instance.metadataMap.zone=zone2
eureka.client.zone2.availabilityZones=zone2,zone0
eureka.client.serviceUrl.defaultZone=http://zone2.localdomain/eureka
eureka.client.serviceUrl.zone2=http://zone2.localdomain/eureka
eureka.client.serviceUrl.zone0=http://zone0.localdomain/eureka
eureka.instance.preferIpAddress=true
eureka.client.preferSameZone=true
eureka.client.register-with-eureka=true
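For comparison, here is a hedged sketch of how Spring Cloud Netflix expects zones to be declared: the availability-zones map is keyed by the region (not by a zone name, as the snippets above attempt with eureka.client.zone0.availabilityZones), and the region itself must be set. The region name below is an assumption, not a value from the question; the hostnames are the ones used above.

```properties
# Sketch for a zone1 client, assuming a region named "myregion".
# availability-zones is keyed by the REGION and lists zones in preference order.
eureka.client.region=myregion
eureka.client.availability-zones.myregion=zone1,zone0
eureka.client.service-url.zone1=http://zone1.localdomain/eureka
eureka.client.service-url.zone0=http://zone0.localdomain/eureka
eureka.client.prefer-same-zone-eureka=true
eureka.instance.metadata-map.zone=zone1
```

Note that restricting which zones a client may *fall back to* is expressed by listing only the allowed zones for that client's region, as above.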

Related

Eureka and Consul

I am implementing service discovery and I am evaluating two options: Eureka and Consul.
Help me decide! I am leaning towards Eureka, but I need to clear up one main technical problem. My infrastructure is based on OpenShift. I can have multiple containers running Eureka Servers behind a load balancer. As far as I know, each server needs to communicate with the others. Also, Eureka is mainly used with AWS...
(Newbie) questions:
1) How can I configure each Eureka Server to communicate with the others? I have a single (load-balanced) URL. My fear is that each server may become desynchronized.
2) Am I missing something?
You're right, each of the Eureka Servers must communicate with the others. You can also play with regions, depending on your approach.
To make it work (without zones), you must configure the property:
eureka.client.service-url.defaultZone: http://1st-eureka-server-ip-or-hostname:port/eureka/,http://2nd-eureka-server-hostname:port/eureka/
The property above accepts a comma-delimited list of the IPs/hostnames of all the Eureka Servers.
If you want to have a multi-zone configuration, I recommend you to read this blog post.
To configure each Eureka Server to communicate with the others, try creating a separate profile-specific configuration file per zone in the resources folder:
application-zone1.yml
server.port: 8001
eureka:
  instance:
    hostname: localhost
    metadataMap.zone: zone1
application-zone2.yml
server.port: 8002
eureka:
  instance:
    hostname: 127.0.0.1
    metadataMap.zone: zone2
application.yml
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
    region: region-1
    service-url:
      zone1: http://localhost:8001/eureka/
      zone2: http://127.0.0.1:8002/eureka/
    availability-zones:
      region-1: zone1,zone2
spring.profiles.active: zone1
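With these files on the classpath, one peer per zone can then be started by switching the active profile. The jar name below is an assumption about how the server is packaged, not something from the answer:

```shell
# Start one Eureka peer per zone by activating the matching profile
# (eureka-server.jar is an assumed artifact name).
java -jar eureka-server.jar --spring.profiles.active=zone1
java -jar eureka-server.jar --spring.profiles.active=zone2
```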
Follow this tutorial

Register micro-services using spring-eureka from more than 1 server

I have a set of micro-services which need to communicate with each other.
The total number of micro-services does not fit on a single physical server, so I need to spread them across 2 different servers.
My idea (not sure if it is correct) is to have one spring-eureka instance per server, to which all services on that particular server register. So:
Services (A,B) register to Eureka on Server 1.
Services (C,D) register to Eureka on Server 2.
After that, the Eureka instances will exchange their knowledge (Peer Awareness).
The questions are:
Is the described idea a correct approach? Or should there rather be just a single Eureka instance on a single server to which all services from both servers register (i.e. Eureka exists only on Server 1)?
If the described idea is correct, then as I understand it, port 8761 should be opened on Server 1 and Server 2 to allow communication between the Eurekas? And the configuration should be as follows:
Eureka on Server 1:
eureka.client.serviceUrl.defaultZone: http[s]://server2address:8761/eureka/
Eureka on Server 2:
eureka.client.serviceUrl.defaultZone: http[s]://server1address:8761/eureka/
1) Normally you would have a server for each service (A, B, C, D, eureka1 and eureka2).
2) eureka.client.serviceUrl.defaultZone is a comma-separated list, so for each service it is more like: eureka.client.serviceUrl.defaultZone: http[s]://server1address:8761/eureka/,http[s]://server2address:8761/eureka/
Hope that helps, cheers
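Concretely, a hedged sketch of what each micro-service's client configuration could look like, using the placeholder hostnames from the question:

```properties
# application.properties for every service (A, B, C, D):
# list BOTH Eureka peers so a service can still register and
# fetch the registry if one peer is down.
eureka.client.service-url.defaultZone=http://server1address:8761/eureka/,http://server2address:8761/eureka/
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
```

With peer awareness, it does not matter which peer a service registers with first; the registration is replicated to the other peer.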

Marathon Service Ports

I know that 'servicePort' is used by marathon-lb to identify an app. Is there any other user of this setting besides marathon-lb?
If the answer is no, why is it mandatory (omitting it will generate one for me)? I have many Marathon apps which are not managed by marathon-lb, and they all take up service ports by default.
From the documentation: "servicePort is a helper port intended for doing service discovery using a well-known port per service. The assigned servicePort value is not used/interpreted by Marathon itself but supposed to be used by the load balancer infrastructure."
So service ports seem to have no use other than for marathon-lb.
When you don't specify a servicePort, it's as if you put in "servicePort": 0.
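As a sketch, an app definition that explicitly asks Marathon for an auto-assigned service port looks roughly like this. The app id and image are made-up examples, and the exact placement of portMappings varies between Marathon versions:

```json
{
  "id": "/my-internal-app",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "example/app:latest" },
    "portMappings": [
      { "containerPort": 8080, "servicePort": 0 }
    ]
  }
}
```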
See closed issue here.
Here's a discussion about the re-architected networking API.
If you look at the Jira ticket, you will see that the new API model lets you define services without servicePorts at all.

Pingfederate SSO on port 9031

Why do SSO providers like Ping Federate run on ports that aren't well-known, like 9031? Does this enhance security? It seems like it just increases connectivity issues in organizations with strict firewall rules.
That's just a semi-random default port, chosen so that it doesn't clash with existing services on the same machine; it is a high port so that the server can run under a non-privileged user account.
For production usage one would typically change it to 443 and/or run a reverse proxy/load balancer in front of the SSO server (on port 443).
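As an illustration of the reverse-proxy option, a minimal sketch assuming nginx in front of a PingFederate instance on localhost:9031 (the server name and certificate paths are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name sso.example.com;              # assumed hostname
    ssl_certificate     /etc/nginx/tls/sso.crt;  # assumed paths
    ssl_certificate_key /etc/nginx/tls/sso.key;

    location / {
        # Terminate TLS on 443 and forward to PingFederate's default port
        proxy_pass https://127.0.0.1:9031;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```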
Generally, security is managed at the perimeter of a network. For the deployments I have been involved in, port 443 is predominantly used for SSO (e.g. PingFederate) at the perimeter. For the internal network, I have seen two models: (i) change the HTTPS port in PingFederate to 443, or (ii) use load balancer port forwarding from 443 to 9031. I usually see (i) for Windows deployments and (ii) for Linux deployments, where reserved ports are avoided. There really isn't a true security enhancement in either pattern.
As Hans points out, PingFederate uses 9031 as a default so that conflicts with other processes on a server are avoided when first deploying the technology. As the SSO capability matures in an environment, the proper port for the service can be managed. The default port avoids issues during the first install that can be frustrating to folks new to the technology.

Is a server farm abstracted on both sides?

I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server IP), making it possible to add a simple firewall rule. However, if the application is moved to a server farm, I do not know which machine in the farm will make a request to the FTP server.
Just like the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it invokes, so that regardless of which machine in the farm makes the request to the FTP server, it always sees the same IP?
Are all server farms implemented the same way, or would this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Cloud.
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
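As an illustration of allowing a small address range rather than a single IP, a firewall rule of roughly this shape could be used on the FTP server. The subnet is a made-up example, not one from the question:

```shell
# Allow FTP control connections only from the web farm's /28 range,
# drop everything else on that port (203.0.113.16/28 is an example subnet).
iptables -A INPUT -p tcp --dport 21 -s 203.0.113.16/28 -j ACCEPT
iptables -A INPUT -p tcp --dport 21 -j DROP
```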
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies maintaining them. You can also send all traffic through a load balancer or proxy. That's essentially how any cloud/cluster/farm service works:
many client ips <-> load balancer <-> many servers