Register micro-services using spring-eureka from more than 1 server - spring-cloud

I have a set of micro-services which need to communicate with each other.
The total number of micro-services does not fit on a single physical server, so I need to spread them across 2 different servers.
My idea (I do not know if it is correct) is to have one spring-eureka instance per server, to which all services on that particular server register. So:
Services (A,B) register to Eureka on Server 1.
Services (C,D) register to Eureka on Server 2.
After that eureka instances will exchange their knowledge (Peer Awareness).
The questions are:
Is the described idea a correct approach? Or should there rather be just a single Eureka instance on a single server to which all services from both servers register (i.e. Eureka exists only on Server 1)?
If the described idea is correct then, as I understand it, port 8761 should be open on Server 1 and Server 2 to allow communication between the "Eurekas"? And the configuration should be as follows:
Eureka on Server 1:
eureka.client.serviceUrl.defaultZone: http[s]://server2address:8761/eureka/
Eureka on Server 2:
eureka.client.serviceUrl.defaultZone: http[s]://server1address:8761/eureka/

1) Normally you would have a server for each service (A, B, C, D, eureka1 and eureka2).
2) eureka.client.serviceUrl.defaultZone is a comma-separated list, so for each service it is more like "eureka.client.serviceUrl.defaultZone: http[s]://server1address:8761/eureka/,http[s]://server2address:8761/eureka/" (see the sketch below).
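For example, a minimal sketch of the peer-aware setup (the host names are the placeholders from the question; register-with-eureka and fetch-registry are standard Spring Cloud Netflix properties, shown here with their default values):

# Eureka on Server 1 -- registers with its peer on Server 2
eureka.client.serviceUrl.defaultZone: http[s]://server2address:8761/eureka/
eureka.client.register-with-eureka: true
eureka.client.fetch-registry: true

# Every client service (A, B, C, D) lists BOTH peers for failover
eureka.client.serviceUrl.defaultZone: http[s]://server1address:8761/eureka/,http[s]://server2address:8761/eureka/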
Hope that helps, cheers

Related

Trying to use AWS EC2 node.js app to talk to AWS Mongo Linux instance via AWS ELB

I have 2 x AWS EC2 instances with a node.js app. Out of the box, they come with a local mongod instance that works fine. Given the criticality of the app, I decided to spin up 2 x EC2 front ends (node js) to talk to a mongo db in another availability zone using the AWS ELB.
Full IP connectivity on port 27017 exists between all three nodes.
When only one server talks to the mongo server, it works just fine. When I add both front-end servers to the ELB target group, I get random 504 gateway errors.
Removing a server from the group fixes the issue.
Any suggestions on what I should look for?
In terms of how the node.js server connects to mongo, there is a config.json file that simply specifies the required IP and DB name.
Thanks!
The AWS load balancer uses a round-robin mechanism to route users' requests. Does your application have a way to manage user sessions? If not, then your first request goes to server 1 and your second request to server 2, which has no knowledge of the first request, and that may result in an error. That explains why it works fine when you have only one server.
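If rewriting the session handling is not an option, enabling stickiness on the target group makes the ELB pin each client to one instance. A CLI sketch (the target group ARN is hypothetical):

# pin each client to one instance via a load-balancer cookie
aws elbv2 modify-target-group-attributes \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123 \
    --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie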
the server (the app server) uses redis and the following components:
Node.js - Server-side Javascript-framework
Express.js - Web application framework for Node.js
Nginx - Web server & reverse proxy
MongoDB - NoSQL database
redis - Session Manager & data structure server
Socket.IO - Bi-directional communication between web clients and servers
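Since redis is already in the stack, the cleaner fix is to make sure both front ends point at the same redis-backed session store, so either instance can serve any request. A minimal sketch, assuming express-session and connect-redis are in use (neither is confirmed in the question, the redis host below is made up, and the store options vary by connect-redis version):

const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);

const app = express();
app.use(session({
  // both EC2 front ends must point at the SAME redis host,
  // not at a redis running on each instance's own localhost
  store: new RedisStore({ host: 'redis.internal.example', port: 6379 }),
  secret: 'replace-me',
  resave: false,
  saveUninitialized: false
}));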

Gatling with load balanced IP hash Nginx

I'm load testing a Tomcat web application with 4 nodes. Those nodes are configured through Nginx with ip_hash:
ip_hash;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
server example:8888 weight=4 max_fails=3 fail_timeout=10s;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
server example:8888 weight=2 max_fails=3 fail_timeout=10s;
Anyway, I use Gatling for load and performance testing, but every time I start a test all traffic is routed to one node. Only when I change the load-balancing method to least_conn or round robin is the traffic divided. But this application needs a persistent node to do the work.
Is there any way to let Gatling route the traffic to all 4 nodes during a run? Maybe with a setup configuration? I'm using this setUp right now:
setUp(
  scenario1.inject(
    atOnceUsers(50),
    rampUsers(300) over (1800 seconds)
  ).protocols(httpConf)
)
Thank you!
ip_hash;
Specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses.
Since all Gatling traffic originates from a single load-injector machine, every request shares the same client IP, so ip_hash maps the entire test onto one upstream node.
You should use sticky instead:
Enables session affinity, which causes requests from the same client to be passed to the same server in a group of servers.
Edit:
Right, I didn't see that it's for nginx plus only :(
I found this post (maybe it helps...):
https://serverfault.com/questions/832790/sticky-sessions-with-nginx-proxy
Reference to: https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng
There is also a version of the module for older versions of nginx:
http://dgtool.treitos.com/2013/02/nginx-as-sticky-balancer-for-ha-using.html
Reference to: https://code.google.com/archive/p/nginx-sticky-module/
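With one of those third-party modules compiled in, the upstream block might look roughly like this (a sketch; the upstream name is made up and directive details vary by module version):

upstream tomcat_nodes {
    sticky;  # cookie-based affinity instead of hashing the client IP
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
    server example:8888 weight=4 max_fails=3 fail_timeout=10s;
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
    server example:8888 weight=2 max_fails=3 fail_timeout=10s;
}

Because affinity is then carried in a cookie rather than derived from the source IP, Gatling's virtual users (each of which keeps its own cookie jar) are spread across all four nodes, while each real client still sticks to one node.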

Multi-tenancy in the matrix.org for single homeserver with multi domain

I have deployed a single home-server (synapse) instance with multiple domains attached to it, e.g. example.com and example1.com.
I want to create users like b1#example.com and b1#example1.com.
Is this possible?
Please let me know. Thanks in advance!
You can install multiple instances of synapse using python virtual environments. Configure each instance to listen only on localhost, on different ports. Then use an nginx reverse proxy to direct traffic to the correct instance based on the domain name requested.
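A sketch of the proxy part (the ports are illustrative; synapse listens on 8008 by default, and the TLS certificate directives are omitted for brevity):

# route each domain to its own synapse instance on localhost
server {
    listen 443 ssl;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8008;
    }
}
server {
    listen 443 ssl;
    server_name example1.com;
    location / {
        proxy_pass http://127.0.0.1:8009;
    }
}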
As far as my knowledge goes, this is currently not possible.
Whenever you set up a Matrix Synapse homeserver, you define a unique name (= domain). See the Synapse docs:
The server name determines the "domain" part of user-ids for users on your server: these will all be of the format #user:my.domain.name. It also determines how other matrix servers will reach yours for Federation. ... Beware that the server name cannot be changed later.

How to define standard port numbers for different apps

We have several enterprise applications to deploy on a WebLogic server. As you know, for each domain we can define a specific port, deploy an application on it, and clients can access the server application through that port. My question is about port number standards.
Is there any standard for assigning server port numbers to different applications? If not, what is your suggestion? Is a simple counting method (for example, incrementing the port for each application) good, or something else?
Thanks for your replies.
You can deploy your applications in different managed servers. What you have to do is create a managed server per application under your WebLogic domain. During the configuration of each managed server you will be asked to give a port number. After setting up the managed servers, your applications can be deployed on them, using the ports you specified earlier. More information about WebLogic managed servers can be found in the WebLogic administration guide.
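Creating one managed server per application can also be scripted with WLST (a sketch; the host, credentials, server names and ports are all illustrative):

# WLST sketch -- run with: java weblogic.WLST create_servers.py
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
# one managed server per application, each with its own listen port
for name, port in [('app1Server', 7101), ('app2Server', 7102)]:
    server = cmo.createServer(name)
    server.setListenPort(port)
save()
activate()
disconnect()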

Is a server farm abstracted on both sides?

I am trying to understand how a solution will behave if deployed in a server farm. We have a Java web application which will talk to an FTP server for file uploads and downloads.
It is also desirable to protect the FTP server with a firewall, such that it will allow incoming traffic only from the web server.
At the moment, since we do not have a server farm, all requests to the FTP server come from the same IP (the web server's IP), making it possible to add a simple rule to the firewall. However, if the application is moved to a server farm, then I do not know which machine in the farm will make a request to the FTP server.
Just as the farm is hidden behind a facade for its clients, is it hidden behind a facade for the services it might invoke, so that regardless of which machine in the farm makes the request to the FTP server, it always sees the same IP?
Are all server farms implemented the same way, or would this behavior depend on the type of server farm? I am thinking of using Amazon Elastic Compute Cloud.
It depends very much on how your web cluster is configured. If your cluster is behind a NAT firewall, then yes, all outgoing connections will appear to come from the same address. Otherwise, the IP addresses will be different, but they'll almost certainly all be in a fairly small range of addresses, and you should be able to add that range to the firewall's exclude list, or even just list the IP address of each machine individually.
Usually you can enter CNAMEs or subnets when setting up firewall rules, which simplifies their maintenance. You can also send all traffic through a load balancer or proxy. That's essentially how any cloud/cluster/farm service works:
many client ips <-> load balancer <-> many servers
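On EC2 specifically you can avoid tracking IP addresses altogether: a security group rule can name the web tier's security group as its source, so it keeps matching no matter which farm instance opens the connection. A CLI sketch (both group IDs are hypothetical, and passive-mode FTP would need additional port rules):

# allow the FTP control port only from instances in the web tier's group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0ftp1111111111111 \
    --protocol tcp --port 21 \
    --source-group sg-0web2222222222222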