Renews threshold and Renews (last min) - spring-cloud

I am testing a spring-cloud eureka server and client.
I have a simple question about the default configuration (server & client).
On the server side, the renew threshold is equal to 3.
On the client side, it sends a heartbeat every 30 seconds (a maximum of 2 per minute).
When I look at the registry dashboard, once waitTimeInMsWhenSyncEmpty has elapsed, I see the following warning message:
EMERGENCY! EUREKA MAY BE INCORRECTLY CLAIMING INSTANCES ARE UP WHEN THEY'RE NOT. RENEWALS ARE LESSER THAN THRESHOLD AND HENCE THE INSTANCES ARE NOT BEING EXPIRED JUST TO BE SAFE
When I look at the code, the check getNumOfRenewsInLastMin() <= numberOfRenewsPerMinThreshold is always true (2 <= 3).
Why is this the default configuration? It seems weird because it constantly generates a warning!
Can anyone give me an explanation? I think I've missed something…

I have the same problem and investigated it a little bit. The root cause of the warning message is that the renewals are exactly 1 below the threshold.
It occurs when you start a plain Eureka server and do not register any clients: the server starts with a renews threshold of 1.
When you then register a client, the client's 2 expected renewals per minute are simply added to the 1 that is already there, making the renews threshold 3, which is higher than the renewals can ever get with a single client. Wait a few minutes (about 4) and the warning will appear.
My application.yml is:
spring:
  application:
    name: service-registry
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
I'm using Brixton.RC1.
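If the goal is only to silence the warning on a standalone dev registry, one option is to relax the server's self-preservation settings. This is a sketch for development only, assuming the usual spring-cloud-netflix property names (they map onto EurekaServerConfigBean):
eureka:
  server:
    # Dev only: do not protect instances from expiry when renewals fall below the threshold
    enable-self-preservation: false
    # Alternatively, keep self-preservation on and lower the expected-renewals percentage (default 0.85)
    # renewal-percent-threshold: 0.5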
I found two other SO questions on the same topic:
Understanding Spring Cloud Eureka Server self preservation and renew threshold
Spring Eureka server shows RENEWALS ARE LESSER THAN THE THRESHOLD

Here are some details:
You can find the check that triggers the message in the file below:
https://github.com/spring-cloud/spring-cloud-netflix/blob/master/spring-cloud-netflix-eureka-server/src/main/resources/templates/eureka/navbar.ftl
The value of "isBelowRenewThresold" comes from the code below:
model.put("isBelowRenewThresold", registry.isBelowRenewThresold() == 1);
The invoked method can be found in the following file:
https://github.com/Netflix/eureka/blob/master/eureka-core/src/main/java/com/netflix/eureka/registry/PeerAwareInstanceRegistryImpl.java
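For reference, the check in that class looks roughly like the following. This is a paraphrased sketch, not a verbatim copy; the linked file is the authoritative source:
@Override
public int isBelowRenewThresold() {
    // Returns 1 ("below threshold") when the renewals received in the last minute
    // do not exceed the expected threshold and the startup wait time has elapsed
    if ((getNumOfRenewsInLastMin() <= numberOfRenewsPerMinThreshold)
            && (this.startupTime > 0)
            && (System.currentTimeMillis() > this.startupTime + serverConfig.getWaitTimeInMsWhenSyncEmpty())) {
        return 1;
    } else {
        return 0;
    }
}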
Thank you for your help.
Regards,
Stephane

I had the same problem; I tried this in application.properties:
eureka.client.lease.duration=10
eureka.instance.leaseRenewalIntervalInSeconds=5
eureka.instance.leaseExpirationDurationInSeconds=2
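For comparison, the expiration duration is normally kept larger than the renewal interval, so that an instance is not expired between two heartbeats. A hedged variant of the above:
# Sketch: heartbeat every 5 s, expire an instance only after 15 s without a heartbeat
eureka.instance.leaseRenewalIntervalInSeconds=5
eureka.instance.leaseExpirationDurationInSeconds=15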

Related

Hyperledger fabric v1.4 certificate renewal gives peer.blocksprovider warning at peer

Entities: Org1 with 2 peers (peer0 & peer1), 1 Orderer, 1 IntCA.
Both peers join a single channel.
I won't be able to add files, logs, or code, as it's not allowed. I hope that's understood.
The network was initially built with peer0 + CA + orderer, and peer1 was later added to Org1.
Recently we renewed the certificates before the expiry date. peer0 and peer1 both allow transactions in, but peer1 also throws a warning/error:
[peer.blocksprovider] func1 -> WARN 4c87 Encountered an
error reading from deliver stream: rpc error: code = Canceled desc =
context canceled channel=mobileid orderer-address=orderer.xyz.com
What could be the cause of this error (peer.blocksprovider)? Could there be a mistake in the cert renewal? If yes, what part could it be?
This issue was due to disabling the gossip protocol within an org. Both peers were leaders in this case, and peer1 was failing to disseminate blocks to peer0.
The same is not the case with peer0, which makes me reconsider this.
I still have no idea why this issue was triggered after the certificate renewal.
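For reference, leader election within an org is controlled by the peer's gossip settings. A common setup looks like the sketch below, assuming the standard core.yaml keys (the same values can be supplied via the CORE_PEER_GOSSIP_* environment variables):
# core.yaml (sketch)
peer:
  gossip:
    useLeaderElection: true   # let the org elect a single leader dynamically
    orgLeader: false          # do not force every peer to act as a static leader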

How to disable sniffing of elasticsearch in Monstache

I'm getting this error while using Monstache:
Unable to create Elasticsearch client: health check timeout: no Elasticsearch node available
I applied these lines to Monstache configuration:
elasticsearch-validate-pem-file = false
elasticsearch-healthcheck-timeout-startup = 200
elasticsearch-healthcheck-timeout = 200
However, I still encounter the mentioned error. When I searched for it, I found that the problem is due to sniffing in the Elasticsearch client, but I don't know where and how exactly I should change it.
I should note that I followed this tutorial for this problem, but I'm still unclear about it.
The problem was solved when I installed Monstache on the same local server on which the ELK stack was installed. Also, the MongoDB database on the remote server was changed to a single-node replica set so that it could connect to Monstache.
Try using:
elastic.SetSniff(false)

I see errors using node-rdkafka but it seems to be working ok

I have a Bluemix Node.js (6.1.0) application that uses node-rdkafka 1.0.3. It seems to be working ok but there are tons of error events like Error: Local: Broker Transport Failure or Error: Local: Authentication failure.
The producer options I have set are:
var producer_opts = {
    "metadata.broker.list": env.messagehub.brokers,
    "security.protocol": "sasl_ssl",
    "ssl.ca.location": env.messagehub.calocation,
    "sasl.mechanisms": "PLAIN",
    "sasl.username": env.messagehub.user,
    "sasl.password": env.messagehub.password,
    "api.version.request": true,
    "socket.timeout.ms": 10000,
    "dr_msg_cb": true
};
The consumer has similar settings plus the group.id property.
I wonder whether I should be worried about these errors and whether there is a way to eliminate them.
Thanks!
You are probably hitting https://github.com/edenhill/librdkafka/issues/1218.
In many cases, as you've noticed, these errors are harmless. The library node-rdkafka is based on, librdkafka, always connects to all brokers in the cluster. Brokers your application doesn't interact with will close the idle connections after a while, leading to these error messages in your clients.
Unfortunately we don't have a recommended way to eliminate them at the moment. We are currently working on a potential solution to at least reduce their rate and maybe get rid of them.
Update:
With the most recent releases of node-rdkafka (>2.2), you can get rid of all the noisy logs by setting the following properties when creating clients:
'broker.version.fallback': '0.10.2.1',
'log.connection.close' : false

"host not allowed" error when deploying a play framework application to Amazon AWS with Boxfuse

I am trying to deploy a simple web application written using Play Framework in Scala to Amazon web service.
The web application runs OK in development mode and production mode on my local machine, and I've changed its default port to 80.
I used Boxfuse to deploy to AWS as suggested.
I first ran "sbt dist",
then "boxfuse run -env=prod".
Things went well as desired. The image is fused and pushed to AWS. AMI is created. Instance was started and my application was running.
i-0f696ff22df4a2b71 => 2017-07-13 01:28:23.940 [info] play.api.Play - Application started (Prod)
Then came the error message:
WARNING: Healthcheck (http://35.156.38.90/) returned 400 instead of 200. Retrying for the next 300 seconds ...
i-0f696ff22df4a2b71 => 2017-07-13 01:28:24.977 [info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:80
i-0f696ff22df4a2b71 => 2017-07-13 01:28:25.512 [warn] p.f.h.AllowedHostsFilter - Host not allowed: 35.156.38.90
The instance was terminated after repeated retries after 3 minutes. It gave a warning like:
Ensure your application responds with an HTTP 200 at / on port 80
But I've made sure the application responds on my local machine; I tried both Windows and Ubuntu, and everything works well.
Also, when running "boxfuse run" on my local machine, I can connect to it using "http://localhost", but I still get the error.
Hope someone with experience can give me some suggestions. Thanks in advance.
PS: not sure if relevant, but I added these settings to application.conf:
http {
  address = 0.0.0.0
  port = 80
}
Judging from the error message, it looks like the problem might be related to play.filters.hosts.allowed not being set up in application.conf. The filter lets you configure which hosts can access your application. More details about the Play filter are available here.
Here's a configuration example:
play.filters.hosts {
  allowed = ["."]
}
Note that allowed = ["."] matches all hosts, so it is not recommended in a production environment.
As stated in the Boxfuse Play Documentation:
If your application uses the allowed hosts filter, you must ensure play.filters.hosts.allowed in application.conf allows connections from anywhere, as this filter otherwise causes ELB health checks to fail. For example:
play.filters.hosts {
  allowed = ["."]
}
More info in the official Play documentation.
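Outside of the Boxfuse/ELB health-check case, a more restrictive production setup usually lists specific hosts or domain suffixes instead of the catch-all ".". A sketch, with placeholder host names:
play.filters.hosts {
  # A leading dot allows the domain itself and all of its subdomains
  allowed = [".example.com", "localhost"]
}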

Postman : socket hang up

I just started using Postman. I got the error "Error: socket hang up" when executing a collection runner. I've read a few posts regarding socket hang up; they mention sending a request and getting no response from the server side, probably a timeout. How do I extend the request timeout for the Postman Collection Runner?
For me it was because my application had switched to https and my Postman requests still had http in them. Changing Postman to https fixed it.
Socket hang up can be a port-related error. I am sharing my experience: when you use a port for the database connection that is already in use by another service, the "socket hang up" error comes up.
E.g., port 6455 is a dedicated port for some other service or connection; you cannot use the same port (6455) for making a database connection on the same server.
Sometimes this error arises when a client waits for a response for a very long time. This can be resolved using the 202 (Accepted) HTTP code: you tell the server to start the job you want it to do, and then, every so often, check whether it has finished.
If you are the one who wrote the server, this is relatively easy to implement. If not, check the documentation of the server you're using.
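A minimal sketch of that pattern, assuming a Spring Boot server (the endpoint names, class name, and in-memory job store below are made up for illustration):
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JobController {

    // In-memory job status store; a real implementation would persist this
    private final Map<String, Boolean> jobs = new ConcurrentHashMap<>();

    // Start the long-running work and return 202 immediately with a job id
    @PostMapping("/jobs")
    public ResponseEntity<String> startJob() {
        String id = UUID.randomUUID().toString();
        jobs.put(id, Boolean.FALSE);
        CompletableFuture.runAsync(() -> {
            doLongRunningWork();
            jobs.put(id, Boolean.TRUE);
        });
        return ResponseEntity.accepted().body(id); // 202 Accepted
    }

    // The client (e.g. Postman) polls this endpoint until the job is done
    @GetMapping("/jobs/{id}")
    public ResponseEntity<String> jobStatus(@PathVariable String id) {
        Boolean done = jobs.get(id);
        if (done == null) {
            return ResponseEntity.notFound().build();
        }
        return done
                ? ResponseEntity.ok("done")
                : ResponseEntity.status(HttpStatus.ACCEPTED).body("still running");
    }

    private void doLongRunningWork() {
        try {
            Thread.sleep(30_000); // stand-in for the slow task
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}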
Postman was giving "Could not get response" "Error: socket hang up".
I solved this problem by adding the Content-Length HTTP header to my request.
Are you using nodemon, or some other file-watcher? In my case, I was generating some local files, uploading them, then sending the URL back to my user. Unfortunately nodemon would see the "changes" to the project, and trigger a restart before a response was sent. I ignored the build directories from my file-watcher and solved this issue.
Here is the Nodemon readme on ignoring files: https://github.com/remy/nodemon#ignoring-files
I just faced the same problem and fixed it by closing my VPN, so I guess it was a network agent problem. You can check whether you have a network proxy turned on.
This happened when the client waited for a response for a long time.
Try to sync your API requests from Postman,
then make the login POST and you are done.
I defined an Authenticate method to generate a token and declared its return type as nullable string:
public string? Authenticate(string username, string password)
{
    if (!users.Any(u => u.Key == username && u.Value == password))
    {
        return null;
    }
    var tokenHandler = new JwtSecurityTokenHandler();
    var tokenKey = Encoding.ASCII.GetBytes(key);
    var tokenDescriptor = new SecurityTokenDescriptor()
    {
        Subject = new ClaimsIdentity(new Claim[]
        {
            new Claim(ClaimTypes.Name, username)
        }),
        Expires = DateTime.UtcNow.AddHours(1),
        SigningCredentials = new SigningCredentials(
            new SymmetricSecurityKey(tokenKey),
            SecurityAlgorithms.HmacSha256Signature)
    };
    var token = tokenHandler.CreateToken(tokenDescriptor);
    return tokenHandler.WriteToken(token);
}
Changing the nullable string return type to a plain string fixed the "socket hang up" issue for me!
If Postman doesn't get a response within the specified time, it will throw the error "socket hang up".
I was doing something like the below to achieve a 60-minute delay between each scenario in a collection:
GET https://postman-echo.com/delay/10
Pre-request script:
setTimeout(function(){}, [50000]);
I reduced the duration to 30 seconds:
setTimeout(function(){}, [20000]);
After that I stopped getting this error.
I solved this problem by disconnecting my VPN. You should check whether a VPN is connected.
What helped for me was replacing 'localhost' in the URL with http://127.0.0.1, or whatever other address your local machine has assigned to localhost.
The socket hang up error can also be due to a wrong URL for the API you are trying to access in Postman. Please check the URL carefully.
It's possible there are two things happening at the same time:
the URL contains a port which is not commonly used, AND
you are using a VPN or proxy that does not support that port.
I had this problem. My server port was 45860 and I was using the pSiphon anti-filter VPN. In that situation my Postman reported "connection hang-up" only when the server's reply was an error with a status code bigger than 0. (It was fine when some text was returned from the server with no error code.)
When I changed my web service port to 8080 on the server, it worked, even though the pSiphon VPN was connected.
Following on Abhay's answer: double check the scheme. A server that is secured may disconnect if you call an https endpoint with http.
This happened to me while debugging an ASP.NET Core API running on localhost using the local cert. Took me a while to figure out since it was inside a Postman environment and also it was a Monday.
In my case, adding the "Content-Length" header did the job.
My environment is:
Mac: [Terminal command: sw_vers]
ProductName: macOS
ProductVersion: 12.0.1 (Monterey)
BuildVersion: 21A559
MySQL: [Terminal command: mysql --version]
Ver 8.0.27 for macos11.6 on x86_64 (Homebrew)
Apache: [Terminal command: httpd -v]
Server version: Apache/2.4.48 (Unix)
Server built: Oct 1 2021 20:08:18
Laravel: [Terminal command: php artisan --version]
Laravel Framework 8.76.2
Postman:
Version 9.1.5 (9.1.5)
The socket hang up error can also occur due to backend API handling logic.
For example, I was trying to create an Nginx config file and restart the service using the incoming API request body. This caused a temporary disconnection of the Nginx service while handling the API request and resulted in a socket hang up.
If you have tried all the steps mentioned in the other comments and still face the issue, I suggest you check the API handler code thoroughly.
I handled the above-mentioned example by calling the Nginx reset method with a delay, plus a separate API to check the status of the previous reset request.
For me it was giving the socket hang up error only while running the Collection Runner, not with a single request.
Adding a small delay (100-300 ms) in the Collection Runner solved the issue for me.
In my case, I had to provide --ssl-client-key and --ssl-client-cert files to overcome these errors.
Great error: it is so general that something different helps for everyone.
In my case I was not able to fix it, and what is really funny is that I expect a multipart file on one endpoint. When I prepare the request in Postman I get "Error: socket hang up". If I switch to another endpoint (even a non-existent one) I get exactly the same error. But when I call any endpoint without a body, that request works, and after that all subsequent attempts work perfectly.
In my case this is purely a Postman issue; a request made with curl never gives that error.
For me the issue was related to a mismatch of the HTTP versions on the client and server.
The client was assuming HTTP/2 while the server (Spring Boot / Tomcat) in this case was HTTP/1.
When I configured the server for HTTP/2, the issue was resolved straight away.
In Spring Boot you can enable HTTP/2 as below:
server.http2.enabled=true
Note: the scenario also involved a client auth mechanism (i.e. mTLS).
Without client auth / mTLS it worked without issues, but with client auth the HTTP version setting in Spring Boot was the important rescue point.
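For context, a minimal Spring Boot (Tomcat) configuration along those lines might look like the sketch below. The keystore/truststore paths and passwords are placeholders, and HTTP/2 on Tomcat generally requires TLS to be configured as well:
# application.properties (sketch; file names and passwords are placeholders)
server.http2.enabled=true
server.ssl.enabled=true
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=changeit
# Require a client certificate (mTLS)
server.ssl.client-auth=need
server.ssl.trust-store=classpath:truststore.p12
server.ssl.trust-store-password=changeit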
"socket hang up" is proxy related issue. when we run same collection with the help of newman on jenkins then all test are passed.
Change the proxy setting:
https://docs.cloudfoundry.org/cf-cli/http-proxy.html
I had the same issue: "Error: socket hang up" when sending a request to store a file, and the backend logs mentioned a timeout as you described. In my case I was using MongoDB, and the real problem was that my collection's capacity was full. When I cleared the documents in that collection the error went away. Hope this helps someone who faces a similar scenario.
"Socket hang up" can sometimes also be an on-premise issue because of a bottleneck in the %temp% folder; try freeing up the "temp" folder and give it another try.
I fixed this issue by disabling the Postman token header.
I faced the same issue when calling a SOAP API with Postman.
Adding the following data to the header fixed my issue:
Key: Content-Length
Value: <calculated when request is sent>
In my case, I was incorrectly using a port reserved for https version of my api.
For example, I was supposed to use https://localhost:6211, but I was using http://localhost:6211.
It is a port-related error; I was trying to hit the API with an invalid port.
If it helps anybody... In my case, I just forgot to use the JSON parser (const jsonParser = express.json();) to have access to JSON objects sent to the server from the client. Be careful, don't waste your time =)
This happened to me while I was learning ASP.NET Web API.
In my case it was because of SSL certificate verification.
I was using VS Code, so I overlooked SSL certificate verification, and the API came up with the https protocol.
I solved this by testing my endpoints with the http protocol.
Another approach is to simply disable SSL certificate verification in the Postman settings.
This error was occurring for me because the request URL was not correct: my URL did not contain a colon after http.
The URL I was using was: http//localhost:9090/someApi
Solution:
after adding the colon, the new URL is http://localhost:9090/someApi
and the socket error no longer occurred.
This is just my case; your case may be totally different, as mentioned in the other answers. :)