Can I override Feign client ConnectTimeout & ReadTimeout using Ribbon configuration? - spring-cloud

First of all, I'm sorry for my bad English.
Can I override the Feign client ConnectTimeout & ReadTimeout using Ribbon configuration?
I don't know how to set the Feign client readTimeout from ribbon.ReadTimeout.
Below are my relevant settings.
<serviceId>:
  ribbon:
    ConnectTimeout: 100
    ReadTimeout: 500
    MaxAutoRetriesNextServer: 0

feign:
  client:
    config:
      <commandKey>:
        connectTimeout: 100
        readTimeout: 500
I hope you can give me an answer. 🙏

Use Spring Boot property placeholders:
feign:
  client:
    config:
      <commandKey>:
        connectTimeout: ${<serviceId>.ribbon.ConnectTimeout:100}
        readTimeout: ${<serviceId>.ribbon.ReadTimeout:500}
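If you want the same placeholders applied to every Feign client rather than a single <commandKey>, a minimal sketch using the feign.client.config.default key (assuming Spring Cloud OpenFeign's per-client configuration support) would be:
feign:
  client:
    config:
      default:
        # falls back to 100/500 ms when the Ribbon properties are not set
        connectTimeout: ${<serviceId>.ribbon.ConnectTimeout:100}
        readTimeout: ${<serviceId>.ribbon.ReadTimeout:500}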

Related

Haproxy agent-check DRAIN(agent) status still accepts new connections

I am trying to set up load balancing for my Node.js application, which uses websockets. I need haproxy to stop load-balancing new connections to a server that has reached its maximum number of connections, while keeping the existing ones intact.
I do this by performing an agent-check for each of my servers. If a server cannot accept new connections, it responds to the agent-check with "drain". If a server is able to accept a new connection, it responds with "ready".
Here is my haproxy.cfg configuration file:
global
    daemon
    maxconn 240000
    log /dev/log local0 debug
    log /dev/log local1 notice
    tune.ssl.default-dh-param 2048

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    option redispatch
    timeout connect 20000ms
    timeout http-request 1m
    timeout client 2100000ms
    timeout server 2100000ms
    timeout queue 30s
    timeout check 5s
    timeout http-keep-alive 180s
    timeout tunnel 3600s
    timeout tarpit 60s

frontend stats
    bind *:8084
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if TRUE

frontend test
    mode http
    bind *:5000
    default_backend ws

backend ws
    mode http
    fullconn 100000
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server 1 backend1:9999 check agent-check agent-port 8080 cookie 1 inter 500 fall 1 rise 2
And here is how I respond to haproxy agent-check in my Node.js app:
const net = require('net');

// Agent-check responder: HAProxy connects to this port and reads a one-line status.
const healthCheckServer = net.createServer((c) => {
  let data = '';
  // currentConn and MAX_CONN are tracked elsewhere in the application
  if (currentConn < MAX_CONN) {
    data += 'ready';
  } else {
    data += 'drain';
  }
  c.write(data + '\r\n');
  c.destroy();
});

healthCheckServer.listen(8080, '0.0.0.0');
When the number of connections to my application reaches its maximum, haproxy correctly changes the server status to DRAIN (agent) (I can observe this in the haproxy web dashboard). The problem is that new connections are still accepted by the application.
I am new to haproxy, so can someone point out where I am going wrong?
I found out that when a server is drained by the agent (status set to DRAIN (agent)) and it is the only server in the backend, it will still accept new connections.
When there are multiple servers present and each of them is drained, the behavior is as expected: haproxy returns HTTP 503.
UPDATE:
It turned out that I was looking in the wrong direction all the time.
First, I had to mark the backend which processes WebSocket connections as non-HTTP (remove the mode http line). My guess is that haproxy incorrectly counts current sessions for an HTTP backend when using WebSockets. Removing mode http solved my problem.
Second, returning maxconn:<conn> in the agent-check response looks like a much simpler and more idiomatic way to limit the number of concurrent connections.
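As a rough sketch of that second point (same net-based agent as in the question; MAX_CONN is a hypothetical value), the agent could simply report the limit and let HAProxy enforce it:
const net = require('net');

// Hypothetical limit for illustration; HAProxy applies it as the server's dynamic maxconn.
const MAX_CONN = 1000;

const agentServer = net.createServer((c) => {
  // Report the limit instead of toggling between "ready" and "drain".
  c.write(`maxconn:${MAX_CONN}\r\n`);
  c.end();
});

agentServer.listen(8080, '0.0.0.0');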
Sources:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check
https://www.haproxy.com/blog/websockets-load-balancing-with-haproxy/
I hope this will help someone.

haproxy closes connection after 1 minute

This is my config:
frontend https_frontend
    bind *:4055
    mode tcp
    maxconn 8192
    use_backend https_web

backend https_web
    mode tcp
    balance roundrobin
    option http-keep-alive
    server haproxy2 xxx.xxx.xxx.xxx:4055 send-proxy-v2
A new connection sends keep-alive packets every 30 seconds, but the connection drops after 1 minute.
I think this is because you're using mode tcp, but option http-keep-alive is a mode http option. In this case, it would most likely be using whatever value you have for timeout client or timeout server before dropping the connection.
For more details about option http-keep-alive and mode http, see:
https://www.haproxy.com/documentation/aloha/7-5/traffic-management/lb-layer7/http-modes/#http-modes-in-haproxy
frontend https_frontend
    bind *:4055
    mode tcp
    maxconn 8192
    use_backend https_web

backend https_web
    mode tcp
    balance roundrobin
    timeout client 600000
    timeout server 600000
    server haproxy2 147.78.65.172:4055 send-proxy-v2
Now I send keep-alive packets and real data every 30 seconds, but it still drops after 2 minutes.
It's not an HTTP/HTTPS query; it's simple TCP communication with random data. Maybe that is the problem?

haproxy heartbeat with backend based on http post

I want to create a configuration such that the heartbeat between haproxy and the backend is based on HTTP POST.
Does anyone have any idea about this?
I have tried the configuration below, but it only sends an HTTP HEAD to the backend server (I want HTTP POST):
backend mlp
    mode http
    balance roundrobin
    server mlp1 192.168.12.165:9210 check
    server mlp2 192.168.12.166:9210 check
Thanks for your help.
@Mohsin,
Thank you so much. It does indeed work.
But I want to specify the request message, and it seems my configuration doesn't work. I would appreciate your help with that too.
[root@LB_vAPP_1 tmp]# more /var/www/index.txt
POST / HTTP/1.1\r\nHost: 176.16.0.8:2234\r\nContent-Length: 653\r\n\r\n<?xml version=\"1.0\" encoding=\"gb2312\"?>\r\n<svc_init ver=\"3.2.0\">\r\n<hdr ver=\"3.2.0\">\r\n<client>\r\n<id>915948</id>\r\n<pwd>915948</pwd>\r\n<serviceid></serviceid>\r\n</client>\r\n<requestor><id>13969041845</id></requestor>\r\n</hdr>\r\n<slir ver=\"3.2.0\" res_type=\"SYNC\">\r\n<msids><msid enc=\"ASC\" type=\"MSISDN\">00000000000</msid></msids>\r\n<eqop>\r\n<resp_req type=\"LOW_DELAY\"/>\r\n<hor_acc>200</hor_acc>\r\n</eqop>\r\n<geo_info>\r\n<CoordinateReferenceSystem>\r\n<Identifier
>\r\n<code>4326</code>\r\n<codeSpace>EPSG</codeSpace>\r\n<edition>6.1</edition>\r\n</Identifier\r\n</CoordinateReferenceSystem>\r\n</geo_info>\r\n<loc_type type=\"CURRENT_OR_LAST\"/>\r\n<prio type=\"HIGH\"/>\r\n</slir>\r\n</svc_init>\r\n\r\n\r\n\r\n
My haproxy.conf file is as follows:
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log 127.0.0.1 local7
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    ulimit-n 65536
    daemon
    nbproc 1
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode tcp
    retries 3
    log global
    option redispatch
    # option abortonclose
    retries 3
    timeout queue 28s
    timeout connect 28s
    timeout client 28s
    timeout server 28s
    timeout check 1s
    maxconn 32000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend mlp
    mode tcp
    option persist
    # bind 10.68.97.42:9211 ssl crt /etc/ssl/server.pem
    #bind 10.68.97.42:9211
    bind 10.68.97.42:9210
    default_backend mlp

frontend supl
    mode tcp
    option persist
    bind 10.68.97.42:7275
    default_backend supl

#-------------
# option1 http check
#------------
backend mlp
    mode http
    balance roundrobin
    option httpchk POST / HTTP/1.1\r\nHost: 176.16.0.8:2234\r\nContent-Length: 653\r\n\r\n{<?xml version=\"1.0\" encoding=\"gb2312\"?>\r\n<svc_init ver=\"3.2.0\">\r\n<hdr ver=\"3.2.0\">\r\n<client>\r\n<id>915948</id>\r\n<pwd>915948</pwd>\r\n<serviceid></serviceid>\r\n</client>\r\n<requestor><id>13969041845</id></requestor>\r\n</hdr>\r\n<slir ver=\"3.2.0\" res_type=\"SYNC\">\r\n<msids><msid enc=\"ASC\" type=\"MSISDN\">00000000000</msid></msids>\r\n<eqop>\r\n<resp_req type=\"LOW_DELAY\"/>\r\n<hor_acc>200</hor_acc>\r\n</eqop>\r\n<geo_info>\r\n<CoordinateReferenceSystem>\r\n<Identifier>\r\n<code>4326</code>\r\n<codeSpace>EPSG</codeSpace>\r\n<edition>6.1</edition>\r\n</Identifier>\r\n</CoordinateReferenceSystem>\r\n</geo_info>\r\n<loc_type type=\"CURRENT_OR_LAST\"/>\r\n<prio type=\"HIGH\"/>\r\n</slir>\r\n</svc_init>\r\n\r\n\r\n\r\n}
    http-check expect rstring <result resid=\"4\">UNKNOWN SUBSCRIBER</result>
    server mlp1 192.168.12.165:9210 check
    server mlp2 192.168.12.166:9210 check
    #server mlp2 192.168.12.166:9210 check

backend supl
    mode tcp
    source 0.0.0.0 usesrc clientip
    balance roundrobin
    server supl1 192.168.12.165:7275 check
    server supl2 192.168.12.166:7275 check
    #server supl2 192.168.12.166:7275 check
@Mohsin,
Thanks for your answer; it gave me the critical clue to resolve this issue.
However, my message is as below. It now works the way I want (it sends the specified request and checks for the specified response). I'm posting it in the hope that it helps others too. One point: the Content-Length is very important.
backend mlp
    mode http
    balance roundrobin
    option httpchk POST / HTTP/1.1\r\nUser-Agent:HAProxy\r\nHost:176.16.0.8:2234\r\nContent-Type:\ text/xml\r\nContent-Length:516\r\n\r\n91594891594813969041845000000000003200
    http-check expect rstring <result resid=\"4\">UNKNOWN SUBSCRIBER</result>
    server mlp1 192.168.12.165:9210 check
    server mlp2 192.168.12.166:9210 check
I was able to get this working after a bit of experimenting.
This was my setup:
HAProxy -> NGINX -> Backend
I was sniffing the requests at the NGINX stage with tcpdump to see what was actually happening.
In order to change the health check request, we have to follow a hack described in the documentation to change the HTTP version and send headers:
It is possible to send HTTP headers after the string by concatenating them using \r\n and backslash-escaped spaces. This is useful to send Host headers when probing a virtual host.
This is the raw http check I want to send:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
{"body": "json"}
The big issue here is that HAProxy adds a new header by itself: Connection: close, so this is what NGINX gets:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
{"body": "json"}
Connection: close
This leads, at least in my case, to 400 errors due to a malformed request.
The fix is to add a Content-Length header:
POST ${ENDPOINT} HTTP/1.0
Content-Type: application/json
Content-Length: 16
{"body": "json"}
Connection: close
Since the declared Content-Length takes precedence over the actual length of what is sent, the trailing Connection: close line falls outside the body and is effectively ignored. This is what NGINX passes to the backend:
POST ${ENDPOINT} HTTP/1.0
Host: ~^(.+)$
X-Real-IP: ${IP}
X-Forwarded-For: ${IP}
Connection: close
Content-Length: 16
Content-Type: application/json
{"body": "json"}
This is my final check:
option httpchk POST ${ENDPOINT} HTTP/1.0\r\nContent-Type:\ application/json\r\nContent-Length:\ 16\r\n\r\n{\"body\":\"json\"}
If it's just JSON, you should be OK copying and pasting this and adjusting the Content-Length.
However, I do recommend that you follow the same procedure and sniff the actual health checks, because, with the characters one has to escape in the config file, creating the request properly can be tricky.
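As a side note, on newer HAProxy releases (2.2+) the same kind of POST health check can be written with the http-check send directive instead of escaping everything inside option httpchk. A rough sketch, assuming HAProxy 2.2 or later (the endpoint, host and body are placeholders, and HAProxy adds the Content-Length for you in this form):
backend app
    mode http
    option httpchk
    # build the health-check request declaratively instead of as one escaped string
    http-check send meth POST uri /endpoint ver HTTP/1.1 hdr Host example.local hdr Content-Type application/json body '{"body": "json"}'
    http-check expect status 200
    server app1 127.0.0.1:8080 check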
Open the haproxy/conf/haproxy.conf file. Go to the end of the file; you will see that there is a line 'option httpchk GET /'. Change GET to POST and you are done.
Let me know if you face any problems.
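Applied to the backend from the question, that change would look something like this (the URI / is just the example from the answer above):
backend mlp
    mode http
    balance roundrobin
    # POST health check instead of the default method
    option httpchk POST /
    server mlp1 192.168.12.165:9210 check
    server mlp2 192.168.12.166:9210 check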

Eureka based configuration server breaking actuator endpoints

I set up my services to use the Spring Cloud Eureka-based config server.
Version info: Spring Cloud 1.0.1.RELEASE
When I set it up with a fixed endpoint, I can see that it gets the right configuration file and that I can access actuator endpoints like health, info, etc., so .../manage/info returns the correct information.
However, when I set it up to use discovery, the same actuator endpoints time out when I try to access them.
In each case the configuration file is retrieved and downloaded (log included below).
Is there an issue with how I set up the config server and the bookmark service (the service which uses the config server)?
My configuration server settings are as follows:
server:
  port: 8888
  contextPath: /configurationservice

eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    leaseRenewalIntervalInSeconds: 10
    statusPageUrlPath: /configurationservice/info
    homePageUrlPath: /configurationservice/
    healthCheckUrlPath: /configurationservice/health
    preferIpAddress: true

spring:
  cloud:
    config:
      server:
        native:
          searchLocations: file:/Users/larrymitchell/libertas/configserver/configfiles
The service bootstrap.yml settings are:
spring:
  profiles:
    default: development
    active: development
  application:
    name: bookmarkservice
  cloud:
    config:
      enabled: true  # note this needs to be turned on if you want the config server to work
      # uri: http://localhost:8888/configurationservice
      label: 1.0.0
      discovery:
        enabled: true
        serviceId: configurationservice
The application.yml settings are:
# general spring settings
spring:
  application:
    name: bookmarkservice
  profiles:
    default: development
    active: development

# name of the service
service:
  name: bookmarkservice

# embedded web server settings
# some of these are specific to tomcat
server:
  port: 9001
  # the context path is the part after http://localhost:8080
  contextPath: /bookmarkservice
  tomcat:
    basedir: target/tomcat
    uri-encoding: UTF-8

management:
  context-path: /manage
  security:
    enabled: false

eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
  instance:
    statusPageUrlPath: /bookmarkservice/manage/info
    homePageUrlPath: /bookmarkservice/manage
    healthCheckUrlPath: /bookmarkservice/manage/health
    preferIpAddress: true
The startup log for bookmark service is as follows:
2015-06-24 17:52:49.806 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Created GET request for "http://10.132.1.56:8888/configurationservice/bookmarkservice/development/1.0.0"
2015-06-24 17:52:49.890 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Setting request Accept header to [application/json, application/*+json]
2015-06-24 17:52:50.439 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : GET request for "http://10.132.1.56:8888/configurationservice/bookmarkservice/development/1.0.0" resulted in 200 (OK)
2015-06-24 17:52:50.441 DEBUG 11234 --- [ main] o.s.web.client.RestTemplate : Reading [class org.springframework.cloud.config.environment.Environment] as "application/json;charset=UTF-8" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#2b07e607]
2015-06-24 17:52:50.466 INFO 11234 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource [name='configService', propertySources=[MapPropertySource [name='file:/Users/larrymitchell/libertas/configserver/configfiles/1.0.0/bookmarkservice-development.yml']]]
2015-06-24 17:52:50.503 INFO 11234 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#5fa23965: startup date [Wed Jun 24 17:52:50 EDT 2015]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#5cced717
2015-06-24 17:52:51.723 WARN 11234 --- [ main] .i.s.PathMatchingResourcePatternResolver : Skipping [/var/folders/kq/ykvl3t4n3l71p7s9ymywb4ym0000gn/T/spring-boot-libs/06f98804e83cf4a94380b46591b976b1d17c36b8-eureka-client-1.1.147.jar] because it does not denote a directory
2015-06-24 17:52:53.662 INFO 11234 --- [ main] o.s.b.f.config.PropertiesFactoryBean : Loading properties file from URL [jar:file:/Users/larrymitchell/libertas/vipaas/applicationservices/bookmarkservice/target/bookmarkservice.jar!/lib/spring-integration-core-4.1.2.RELEASE.jar!/META-INF/spring.integration.default.properties]
Ok, after talking it over with another coworker, I figured out what the actual issue is.
Part of the confusion is that I am using the Spring Cloud Dashboard (https://github.com/VanRoy/spring-cloud-dashboard), which is a great front end, by the way. When the service starts, we see it go to discovery, retrieve the correct configuration file, and load it. In the Spring Cloud console the service shows a status of UP, which means it is discovered and registered through discovery. There is a second status indicator, which the dashboard populates by taking the registered endpoint and calling its health endpoint. In my case that indicator was showing UNKNOWN.
If I then take the endpoint that shows up in the console and try the info actuator endpoint, the request times out. That was the nature of my problem.
Ok, so what was the issue?
Basically, because I defined the server port in application.yml, the service does not know the port yet when it registers during the bootstrap phase, so it picks the default of 8080 (my assumption, since that is what it does). The server port is set to 9001 in application.yml, but discovery sees a registration on 8080, so the Spring Cloud console cannot reach localhost:8080/bookmarkservice/manage/health; there is no service at that endpoint (it is actually at 9001). Other services also cannot find the service.
By moving server.port to bootstrap.yml, the correct endpoint is registered and the service is properly accessible.
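A minimal sketch of that fix, reusing the values from the question (whether anything else needs to move alongside server.port depends on your setup):
# bootstrap.yml
server:
  # declared here so the Eureka registration made during the bootstrap phase
  # advertises 9001 instead of the default 8080
  port: 9001

spring:
  application:
    name: bookmarkservice
  cloud:
    config:
      enabled: true
      label: 1.0.0
      discovery:
        enabled: true
        serviceId: configurationservice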

jBoss thread count increased after upgrading haproxy from 1.5dev21 to 1.5.1

I upgraded my haproxy from 1.5dev21 to the 1.5.1 stable version with the same configuration. At the backend, I am using jBoss.
As soon as we upgraded, I encountered a serious issue with jBoss thread counts: they increased tremendously.
After rolling back to 1.5dev21, everything works fine.
Please find my haproxy configuration file below. Kindly suggest any changes required to migrate/upgrade to 1.5.1.
global
    daemon
    maxconn 20000

defaults
    mode http
    timeout connect 15000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout queue 60s
    stats enable
    stats refresh 5s

backend backend_http
    mode http
    cookie JSESSIONID prefix
    balance leastconn
    option forceclose
    option persist
    option redispatch
    option forwardfor
    server server3 192.168.58.211:80 cookie server3_cokkie maxconn 1024 check
    server server4 192.168.58.212:80 cookie server4_cookie maxconn 1024 check
    acl force_sticky_server3 hdr_sub(server3_cookie) TEST=true
    force-persist if force_sticky_server3
    acl force_sticky_server4 hdr_sub(server4_cookie) TEST=true
    force-persist if force_sticky_server4
    rspidel ^Server:.*
    rspidel ^X-Powered-By:.*
    rspidel ^AMF-Ver:.*

listen frontend_http *:80
    mode http
    maxconn 20000
    default_backend backend_http

listen frontend_https
    mode http
    maxconn 20000
    bind *:443 ssl crt /opt/haproxy-ssl/conf/ssl/testsite.pem
    reqadd X-Forwarded-Proto:\ https
    reqadd X-Forwarded-Protocol:\ https
    reqadd X-Forwarded-Port:\ 443
    reqadd X-Forwarded-SSL:\ on
    acl valid_domains hdr_end(host) -i gateway.testsite.com www.testsite.com m.testsite.com
    redirect scheme http if !valid_domains
    default_backend backend_http if valid_domains
I found this in the haproxy manual; it may be of help:
Option "http-tunnel" disables any HTTP processing past the first request and
the first response. This is the mode which was used by default in versions
1.0 to 1.5-dev21. It is the mode with the lowest processing overhead, which
is normally not needed anymore unless in very specific cases such as when
using an in-house protocol that looks like HTTP but is not compatible, or
just to log one request per client in order to reduce log size. Note that
everything which works at the HTTP level, including header parsing/addition,
cookie processing or content switching will only work for the first request
and will be ignored after the first response.
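If you want to test whether the new default connection handling is what changed the jBoss behaviour, a minimal sketch of forcing the old mode might look like this (this is an assumption about the cause, not a confirmed fix, and you would probably also need to drop option forceclose from the backend so the tunnel mode actually takes effect):
defaults
    mode http
    # hypothetical: restore the 1.5-dev21 connection handling while investigating
    option http-tunnel
    timeout connect 15000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout queue 60s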