Setting the Spring Data Jedis connection pool using the application.yml file - spring-data

We are using Spring Data Redis. The properties below are what we use for the Sentinel configuration:
spring.redis.sentinel.master: globalsessions_dev
spring.redis.sentinel.nodes: sentinel01.stage.shutterfly.com:26379,sentinel02.stage.shutterfly.com:26379,sentinel03.stage.shutterfly.com:26379
We would like the connection pool to be configured in the same manner, but the Spring Data Redis documentation does not detail the connection-pool YAML properties.
Thanks in advance.

spring:
  profiles: live
  redis:
    sentinel:
      master: globalsessions_dev
      nodes: sentinel01.stage.shutterfly.com:26379,sentinel02.stage.shutterfly.com:26379,sentinel03.stage.shutterfly.com:26379
    host: 192.168.1.100
    port: 6379
    password:
    pool:
      max-wait: -1
      max-active: -1
      max-idle: -1
      min-idle: 16
You could also use Spring Tool Suite; it has nice autocompletion for YAML properties ;)
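If your Spring Boot version does not pick these pool properties up from YAML, a fallback is to wire the pool explicitly in Java config. A minimal sketch, assuming Spring Data Redis with Jedis on the classpath; the master name and sentinel hosts come from the question, while the pool values are illustrative:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import redis.clients.jedis.JedisPoolConfig;

@Configuration
public class RedisPoolConfiguration {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        // Sentinel topology, taken from the question's properties
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("globalsessions_dev")
                .sentinel("sentinel01.stage.shutterfly.com", 26379)
                .sentinel("sentinel02.stage.shutterfly.com", 26379)
                .sentinel("sentinel03.stage.shutterfly.com", 26379);

        // Pool settings, equivalent to the spring.redis.pool.* YAML keys
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(16);       // max-active
        poolConfig.setMaxIdle(16);        // max-idle
        poolConfig.setMinIdle(4);         // min-idle
        poolConfig.setMaxWaitMillis(-1);  // max-wait (-1 = block indefinitely)

        return new JedisConnectionFactory(sentinelConfig, poolConfig);
    }
}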

Related

Docker Compose Config Server unreachable by Spring Cloud Data Flow Microservices

The config server is reachable from localhost:8888 but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The application(s) deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the docker network. I'm using spring.config.import and am not using any bootstraps. This all works correctly when deployed locally but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled=true
spring.cloud.bus.env.enabled=true
server.port=8888
docker-compose.yml
version: '3.1'
services:
  h2:
    ...
  rabbitmq-container:
    image: rabbitmq:3.7.14-management
    hostname: dataflow-rabbitmq
    expose:
      - '5672'
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - scdfnet
  dataflow-server:
    ...
    networks:
      - scdfnet
  app-import:
    ...
    networks:
      - scdfnet
  skipper-server:
    ...
    networks:
      - scdfnet
  configserver-container:
    image: ...
    ports:
      - "8888:8888"
    expose:
      - '8888'
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
    depends_on:
      - rabbitmq-container
    networks:
      - scdfnet
networks:
  scdfnet:
    external:
      name: scdfnet
volumes:
  h2-data:
For anyone else having this problem, I have found two ways of solving it. The problem is that once the Spring Boot application is containerized, localhost in the properties file resolves inside the application container's virtual network, not to your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization has been deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For Example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hardwiring addresses is to reference the config server container running in docker compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services run on the same network (scdfnet in my case). Note that this address only resolves when running under Docker Compose, so if you are building the Maven project in Eclipse, you may need to remove or disable your tests to build successfully. That might be unnecessary; it could just be that some property I failed to copy to my local application.properties file is causing the context tests to fail. According to the documentation, the optional: prefix should allow the config client to run even if contact cannot be established with the config server.
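For example, the override can live in the Compose file itself, so the committed properties keep working locally. A sketch of the client service's entry, reusing the service and network names from the compose file above; the image name is a placeholder, and SPRING_CONFIG_IMPORT relies on Spring Boot's relaxed binding of environment variables:
  backend-service:
    image: backend-service:latest   # placeholder image name
    environment:
      - SPRING_CONFIG_IMPORT=optional:configserver:http://configserver-container:8888
    depends_on:
      - configserver-container
    networks:
      - scdfnet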

Concourse error - discovering any new version loop

Has anyone encountered an infinite "discovering any new versions" loop when building? I suspect there's an issue with one of the workers; does this worker config look OK? I installed from the binary; my worker config is shown below. fly workers returns nothing.
concourse:
  worker:
    config:
      name: ci_worker01
      bind-ip: 0.0.0.0
      bind-port: 7777
      tsa-host: 127.0.0.1
      tsa-port: 2222
      tsa-public-key: /opt/concourse/.ssh/id_web_rsa.pub
      tsa-worker-private-key: /opt/concourse/.ssh/id_worker_rsa
    service: True
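As a first diagnostic step, it may help to confirm whether the worker ever registered with the TSA; a hedged example, where the target name ci and the web node URL are placeholders:
fly -t ci login -c http://127.0.0.1:8080
fly -t ci workers
If the list stays empty, the worker is failing to connect to the TSA; with tsa-host: 127.0.0.1 the worker must run on the same host as the web node, and the worker's public key has to be in the web node's authorized worker keys.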

Spring Cloud - Registry Service port customization

I'd like to customize the Eureka port with Spring Cloud.
With the default port below, the service registry sees itself correctly (within the provided GUI).
spring:
  application:
    name: services-registry
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
    nonSecurePort: ${server.port}
  client:
    register-with-eureka: true
    fetch-registry: false
    service-url:
      default-zone: http://${eureka.instance.hostname}:${server.port}/eureka/
But if I just change server.port to 8787, no service can register itself, not even the services registry itself.
2017-01-09 16:18:21.584 WARN 17496 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failure
2017-01-09 16:18:21.584 WARN 17496 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_SERVICES-REGISTRY/xxx.org:services-registry:8787 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
...
2017-01-09 16:13:33.299 WARN 17496 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
Can someone explain this issue and save my day? Thanks!
OK, got it... the key under the service-url property (which can be aliased as serviceUrl in YAML) is a HashMap key, not a property name, so relaxed binding does not apply and it must be kept in camel case!
eureka.client.service-url.defaultZone=http://[myIP#]:8787/eureka
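The same fix in YAML form, applied to the configuration above; only the map key changes from default-zone to defaultZone:
eureka:
  client:
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/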

Spring cloud sidecar can not un-register nodeJS service once it is shut down

I suspect this is a bug; can anyone help check?
In my sidecar application, I have this application.yml:
server:
  port: 5678
spring:
  application:
    name: nodeservice
sidecar:
  port: ${nodeServer.instance.port:3000}
  health-uri: http://localhost:${nodeServer.instance.port:3000}/app/health.json
eureka:
  instance:
    hostname: ${host.instance.name:localhost}
    leaseRenewalIntervalInSeconds: 5 # default is 30; recommended to keep the default
    metadataMap:
      instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
And in my main spring config app, I have:
String url_node = "";
try {
    InstanceInfo instance = discoveryClient.getNextServerFromEureka("nodeservice", false);
    // InstanceInfo instance = discoveryClient.getNextServerFromEureka("foo", false);
    url_node = instance.getHomePageUrl();
} catch (Exception e) {
    // lookup failure is swallowed here, leaving url_node empty
}
Now when I start my NodeJS server, I see this in the Spring app:
url for nodeService is: http://SJCC02MT0NUFD58.local:3000/
This is perfect, but after I shut down my NodeJS server,
the http://localhost:3000/app/health.json URL is completely down, BUT in the main Java Spring app I still see the same output there.
So it seems that even though the NodeJS service is no longer available, Eureka still remembers it in memory.
Is anything wrong with my configuration?
Another question: why is the URL discovered by Spring http://SJCC02MT0NUFD58.local:3000/ and not http://localhost:3000? I already configured eureka.instance.hostname to be localhost.
Thanks
You are seeing the appropriate behavior. Eureka and Ribbon are built to be very resilient (AP in CAP terms). In the case you described, a service had at least one instance, then there were none; the Ribbon Eureka client keeps the last known list of servers around as a last resort. You're just printing the names: if you try to connect to that service, it will fail. This is where you use the Hystrix circuit breaker, which can provide a fallback in the case that no instances are up.
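A minimal sketch of that fallback pattern, assuming the Javanica @HystrixCommand annotation is available (spring-cloud-starter-hystrix on the classpath and @EnableCircuitBreaker on the application class); the client class and fallback payload are hypothetical, while the health URL comes from the question:
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class NodeServiceClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Wrapped in a Hystrix command: failures or timeouts trip the fallback.
    @HystrixCommand(fallbackMethod = "healthFallback")
    public String health() {
        return restTemplate.getForObject(
                "http://localhost:3000/app/health.json", String.class);
    }

    // Invoked by Hystrix when the NodeJS instance is no longer reachable.
    private String healthFallback() {
        return "{\"status\":\"DOWN\"}";
    }
}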

Zuul timing out in long-ish requests

I am using a front-end Spring Cloud application (microservice) acting as a Zuul proxy (@EnableZuulProxy) to route requests from an external source to other internal microservices written using Spring Cloud (Spring Boot).
The Zuul server is straight out of the applications in the samples section:
@SpringBootApplication
@Controller
@EnableZuulProxy
@EnableDiscoveryClient
public class ZuulServerApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(ZuulServerApplication.class).web(true).run(args);
    }
}
I ran this set of services locally and it all seems to work fine, but if I run it on a network with some load, or through a VPN, then I start to see Zuul forwarding errors, which appear as client timeouts in the logs.
Is there any way to change the timeout on the Zuul forwards so that I can eliminate this issue from my immediate concerns? What accessible parameter settings are there for this?
In my case I had to change the following property:
zuul.host.socket-timeout-millis=30000
The properties to set are: ribbon.ReadTimeout in general and <service>.ribbon.ReadTimeout for a specific service, in milliseconds. The Ribbon wiki has some examples. This javadoc has the property names.
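For example, in YAML the global and per-service variants look like this; users is a hypothetical service id:
ribbon:
  ReadTimeout: 30000    # global default, in milliseconds
users:
  ribbon:
    ReadTimeout: 60000  # overrides the global value for the users service only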
I have experienced the same problem: on long requests, Zuul's Hystrix command kept timing out after around a second, in spite of setting ribbon.ReadTimeout=10000.
I solved it by disabling timeouts completely:
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false
An alternative that also works is to change Zuul's Hystrix isolation strategy to THREAD:
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 10000
This worked for me; I had to set the connection and socket timeouts in application.yml:
zuul:
  host:
    connect-timeout-millis: 60000 # starting the connection
    socket-timeout-millis: 60000 # monitoring the continuous incoming data flow
I had to alter two timeouts to force Zuul to stop timing out long-running requests: even if Hystrix timeouts are disabled, Ribbon will still time out.
hystrix:
  command:
    default:
      execution:
        timeout:
          enabled: false
ribbon:
  ReadTimeout: 100000
  ConnectTimeout: 100000
If Zuul uses service discovery, you need to configure these timeouts with the ribbon.ReadTimeout and ribbon.SocketTimeout Ribbon properties.
If you have configured Zuul routes by specifying URLs, you need to use zuul.host.connect-timeout-millis and zuul.host.socket-timeout-millis
By routes I mean:
zuul:
  routes:
    dummy-service:
      path: /dummy/**
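For contrast, a URL-specified route, the case where the zuul.host.* timeouts apply, would look something like this; the backend URL is a placeholder:
zuul:
  routes:
    dummy-service:
      path: /dummy/**
      url: http://localhost:8081 # placeholder backend address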
I had a similar issue: I was trying to set the timeout globally, and the order in which the Hystrix and Ribbon timeouts are set also matters.
After spending plenty of time, I ended up with this solution. My service was taking up to 50 seconds because of a huge volume of data.
Points to consider before changing the default timeout values:
The Hystrix timeout should be greater than the combined Ribbon ReadTimeout and ConnectTimeout.
Set it for the specific service only; don't set it globally (that doesn't work).
I mean use this:
command:
  your-service-name:
instead of this:
command:
  default:
Working solution:
hystrix:
  command:
    your-service-name:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 95000
your-service-name:
  ribbon:
    ConnectTimeout: 30000
    ReadTimeout: 60000
    MaxTotalHttpConnections: 500
    MaxConnectionsPerHost: 100
Only these settings in application.yml worked for me:
ribbon:
  ReadTimeout: 90000
  ConnectTimeout: 90000
  eureka:
    enabled: true
zuul:
  host:
    max-total-connections: 1000
    max-per-route-connections: 100
  semaphore:
    max-semaphores: 500
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000000
Hope it helps someone!