I have two microservices, MS1 and MS2, both registered with a Eureka server. MS1 invokes MS2. I found that even after I shut down the Eureka server, I am able to run MS1 without any error. If the Eureka server is down, how is MS1 able to figure out the details of MS2?
In fact, every Eureka client keeps a local cache of the registry information held by the Eureka server, so the Ribbon load balancer inside the client does not have to query the Eureka server on every request; by default this cache is refreshed every 30 seconds.
That's why your MS1 can still work even after you shut down the Eureka server.
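As a quick way to see the cache in action, here is a minimal sketch assuming Spring Cloud's DiscoveryClient is available in MS1 (the bean and method names are illustrative):

    import java.util.List;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.client.ServiceInstance;
    import org.springframework.cloud.client.discovery.DiscoveryClient;
    import org.springframework.stereotype.Component;

    @Component
    public class Ms2Lookup {

        @Autowired
        private DiscoveryClient discoveryClient;

        // The instances are read from the client's local registry cache, which is
        // refreshed from the Eureka server every 30 seconds by default
        // (eureka.client.registry-fetch-interval-seconds), so this keeps returning
        // MS2 instances for a while even if the Eureka server goes down.
        public List<ServiceInstance> ms2Instances() {
            return discoveryClient.getInstances("MS2");
        }
    }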
Related
A Node.js server on Kubernetes gets many WebSocket connections. All is fine, but from time to time an abrupt disconnect occurs (code 1006).
Then every few minutes, the server disconnects from all clients (all disconnects have code 1006).
Important to note that this happens to all replicas at the same time, indicating the cause is external to the servers (and the clients). Could it be the application gateway?
How can I debug this further?
Changing from the default Azure Application Gateway to nginx solved this problem.
I am trying to set up an environment with Docker containers in which I run Spring Cloud applications. I am using Zuul as the gateway and Eureka for service discovery.
Now, what I am trying to do is this: when I send a SIGTERM signal through the docker stop command and the Java process starts shutting down, I need to somehow catch this event, mark the service OUT_OF_SERVICE in the Eureka registry, wait some time, and only then shut it down, as mentioned by @spencergibb here: Make Spring Cloud app to wait for eureka clients to remove it before fully shutting down
Do you happen to know how to do this?
You can use Actuator's shutdown endpoint, /actuator/shutdown, to gracefully shut down a Spring Cloud application; consider triggering it from a JVM shutdown hook.
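A minimal sketch of that idea, assuming the Netflix ApplicationInfoManager bean from eureka-client is available for injection (the 30-second wait is illustrative):

    import com.netflix.appinfo.ApplicationInfoManager;
    import com.netflix.appinfo.InstanceInfo.InstanceStatus;
    import javax.annotation.PreDestroy;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Component;

    @Component
    public class GracefulEurekaShutdown {

        @Autowired
        private ApplicationInfoManager applicationInfoManager;

        // Runs while the Spring context is closing, e.g. after SIGTERM from docker stop
        @PreDestroy
        public void markOutOfServiceAndWait() throws InterruptedException {
            // Tell Eureka this instance should no longer receive traffic
            applicationInfoManager.setInstanceStatus(InstanceStatus.OUT_OF_SERVICE);
            // Give the other clients time to refresh their local registry caches
            Thread.sleep(30_000);
        }
    }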
I have a micro-service written in Spring Boot and registered with Eureka. All I need to do is test the circuit breaker using Hystrix, i.e. I have an endpoint that fetches some data from the DB, and in case anything goes wrong I have a fallback method that returns a mocked response. Everything works fine with the code I have written.
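For reference, a minimal sketch of that kind of endpoint, assuming Hystrix's Javanica annotations; the DataRepository type and method names here are illustrative:

    import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
    import java.util.Collections;
    import java.util.List;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class DataController {

        // Illustrative data-access dependency backed by the database
        interface DataRepository {
            List<String> findAll();
        }

        private final DataRepository repository;

        public DataController(DataRepository repository) {
            this.repository = repository;
        }

        // If the DB call fails, Hystrix invokes the fallback instead
        @HystrixCommand(fallbackMethod = "fallbackData")
        @GetMapping("/data")
        public List<String> getData() {
            return repository.findAll();
        }

        // Mocked response returned when the DB is unreachable or the circuit is open
        public List<String> fallbackData() {
            return Collections.singletonList("mocked response");
        }
    }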
The problem I face is that during startup my application successfully registers with Eureka with status UP. The moment I shut down my database to test the circuit breaker, Eureka marks the service as DOWN, but the service still works fine.
I really cannot understand why Eureka reports the service as DOWN.
I'm trying to configure a reliable configuration service that uses the bus to update clients when a config change happens. I started two config servers that monitor the local file system, and two Eureka servers so that clients can discover the config service at startup (i.e. the Eureka-first config style). I used RabbitMQ as the AMQP bus.
The current behavior is the following: if I update a config file and POST to http://config-server1/bus/refresh, the config server sends a notification and only one client picks it up, so to update 3 clients I need to make 3 POSTs.
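For reference, the POST described above can also be triggered from code; a minimal sketch with Spring's RestTemplate, using the config-server1 host name from this setup:

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.client.RestTemplate;

    public class BusRefreshTrigger {
        public static void main(String[] args) {
            // Host name taken from the setup above; adjust to your config server
            String url = "http://config-server1/bus/refresh";
            ResponseEntity<Void> response = new RestTemplate().postForEntity(url, null, Void.class);
            System.out.println("Refresh triggered, status: " + response.getStatusCode());
        }
    }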
Question: how can I configure the bus so that one POST to /bus/refresh will update all clients?
Thank you in advance.
When I run 3 mesos-masters with QUORUM=2, they fail 1 minute after being elected as the leader, giving errors:
E1015 11:50:35.539562 19150 socket.hpp:174] Shutdown failed on fd=25: Transport endpoint is not connected [107]
E1015 11:50:35.539897 19150 socket.hpp:174] Shutdown failed on fd=24: Transport endpoint is not connected [107]
They keep electing one another in a loop, consistently failing and re-electing.
If I set QUORUM=1, everything works well. What could be the reason for this?
One problem was that the AWS firewall was blocking access to the servers' public IPs while ZooKeeper was broadcasting the public IP (set in advertise_ip), so nothing could connect to anything else. Slaves also couldn't connect to the masters, with the same error.
When I set the local IP in advertise_ip (so that ZooKeeper broadcast local IPs), the masters could communicate and QUORUM=2 worked. When I removed the firewall rule, the slaves could connect to the masters as well.
We had a similar problem yesterday; Marathon was acting a little weird because some applications were not being deployed. The strange part was that the application came up but the health check never turned green, so nixy wasn't updating nginx.
After a lot of investigation we came to this very same error:
E0718 18:51:05.836688 5049 socket.hpp:107] Shutdown failed on fd=46: Transport endpoint is not connected [107]
In the end we discovered that the problem was in the leader election: even though our QUORUM=1 (we have 2 masters), it somehow got lost and one master wasn't communicating with the other.
To solve this we triggered a new election using the Marathon API's DELETE /v2/leader method, and everything worked fine after that.
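For reference, a minimal sketch of that call over plain HTTP; the Marathon host and port below are placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MarathonReelect {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; point it at your Marathon instance
            URL url = new URL("http://marathon.example.com:8080/v2/leader");
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            // DELETE /v2/leader makes the current leader abdicate and triggers a new election
            connection.setRequestMethod("DELETE");
            System.out.println("Response code: " + connection.getResponseCode());
            connection.disconnect();
        }
    }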
We had the same problem, with the mesos-master log flooded with messages like:
mesos-master[27499]: E0616 14:29:39.310302 27523 socket.hpp:174] Shutdown failed on fd=67: Transport endpoint is not connected [107]
It turned out to be the load balancer's health check against /stats.json.