Implement a retry mechanism using Mono.retry() within Spring WebFlux reactive code

I am using Java 8 and Spring WebFlux to call an external REST-based server which also has a backup server.
Currently, if a call to the primary server fails, the code falls back to the backup server with no retry mechanism:
public class ServiceImpl {

    @Autowired
    MyRestClient myRestClient;

    @Autowired
    MyRestClient myRestClientBackup;

    public Mono<ResponseOutput> getResponseOutput(ResponseInput responseInput) {
        return Mono.just(responseInput)
            .flatMap(input -> {
                Mono<ResponseOutput> mono = myRestClient.post(input)
                    .doOnSuccess(responseOutput ->
                        log.info("Successfully got responseOutput={}", responseOutput));
                return mono;
            })
            .onErrorResume(e -> {
                log.warn("Call to server failed, falling back to backup server...");
                Mono<ResponseOutput> mono = myRestClientBackup.post(responseInput)
                    .doOnSuccess(responseOutput ->
                        log.info("Successfully got backup responseOutput={}", responseOutput));
                return mono;
            });
    }
}
I am trying to implement a retry mechanism where, given a numRetries property in application.yml set to a specific number, the following should happen:
e.g. if numRetries = 2:
Use MyRestClient to hit the original server twice (since numRetries = 2) and, if that still fails, hit the backup server.
This has the numRetries configuration property set to a static value:
application.yml:
client:
  numRetries: 2
Class that loads config properties from the application.yml file:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClientConfig {

    @Value("${client.numRetries:0}")
    private int numRetries;

    // Getters and setters omitted for brevity
}
ClientConfig is used by MyRestClient:
public class MyRestClient {

    @Autowired
    ClientConfig config;

    // Getters and setters omitted for brevity
}
Having obtained the value of numRetries from MyRestClient, how can I change the original implementation inside ServiceImpl.getResponseOutput() to use numRetries to retry the original server before calling the backup server? As you can see, this is all originally done using Java 8 lambdas and streams...
I found the annotation @Retryable(value = RestClientException.class), but I don't know how to specify the "else" branch (in pseudocode) where, once numRetries is used up, the backup server is called.
Is there a lambda to apply numRetries within the:
return Mono.just(responseInput)
    .flatMap(input -> {
How do I use the Mono.retry() operator?
Also, the Reactor version used in this code seems to have deprecated the retryBackoff() method?
How could I use the value of numRetries (which would be an int) to keep retrying upon failure and then, when numRetries is exceeded, use .onErrorResume() to call the backup server?

Use Mono's retry(long) operator as below:
Mono.just(responseInput)
    .flatMap(input -> myRestClient.post(input)
        .retry(getNumRetries()) // this is the important part
        .doOnSuccess(responseOutput -> log.info("Successfully got responseOutput={}", responseOutput)))
    .onErrorResume(e -> {
        log.warn("Call to server failed, falling back to backup server...");
        return myRestClientBackup.post(responseInput)
            .doOnSuccess(responseOutput -> log.info("Successfully got backup responseOutput={}", responseOutput));
    });
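Since the question mentions that retryBackoff() is deprecated: from Reactor 3.3.4 onwards, the non-deprecated way to retry with backoff is retryWhen(Retry.backoff(...)) from reactor.util.retry.Retry. A minimal sketch along those lines (the 500 ms initial backoff is an arbitrary assumption):
import java.time.Duration;
import reactor.util.retry.Retry;

Mono.just(responseInput)
    .flatMap(input -> myRestClient.post(input)
        // retry numRetries times with exponential backoff before giving up
        .retryWhen(Retry.backoff(getNumRetries(), Duration.ofMillis(500)))
        .doOnSuccess(responseOutput -> log.info("Successfully got responseOutput={}", responseOutput)))
    .onErrorResume(e -> {
        log.warn("Call to server failed, falling back to backup server...");
        return myRestClientBackup.post(responseInput)
            .doOnSuccess(responseOutput -> log.info("Successfully got backup responseOutput={}", responseOutput));
    });
Note that both retry(n) and Retry.backoff(n, ...) count retries, so up to n + 1 requests are made to the primary server before onErrorResume switches to the backup.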

Related

Integration testing for spring cloud stream, how to ensure application resources are not used

I'm trying to do some integration testing for my cloud streaming application. One of the main issues I'm observing so far is that the TestChannelBinderConfiguration keeps picking up the configuration specified in src/main/resources/application.yml instead of treating it as blank (since there is no config file in src/test/resources/).
If I delete the application.yml file or remove all spring-cloud-stream related configuration, the test passes. How can I ensure that the TestChannelBinderConfiguration does not pick up the application.yml file?
@Test
public void echoTransformTest() {
    try (ConfigurableApplicationContext context =
            new SpringApplicationBuilder(
                    TestChannelBinderConfiguration.getCompleteConfiguration(DataflowApplication.class))
                .properties(new Properties())
                .run("--spring.cloud.function.definition=echo")) {
        InputDestination source = context.getBean(InputDestination.class);
        OutputDestination target = context.getBean(OutputDestination.class);
        GenericMessage<byte[]> inputMessage = new GenericMessage<>("hello".getBytes());
        source.send(inputMessage);
        assertThat(target.receive().getPayload()).isEqualTo("hello".getBytes());
    }
}
I resolved this by doing the following:
@SpringBootTest(properties = {"spring.cloud.stream.function.definition=reverse"})
@Import(TestChannelBinderConfiguration.class)
public class EchoTransformerTest {

    @Autowired private InputDestination input;
    @Autowired private OutputDestination output;

    @Test
    public void testTransformer() {
        this.input.send(new GenericMessage<byte[]>("hello".getBytes()));
        assertThat(output.receive().getPayload()).isEqualTo("olleh".getBytes());
    }
}
and adding an application.yml to src/test/resources; this ensures that we don't read the src/main/resources application properties.
Another way is to explicitly define
@TestPropertySource(locations = "test.yml")
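One caveat on that last option: out of the box, @TestPropertySource only reads .properties files, not YAML, so a sketch with an assumed test.properties on the test classpath would look like this:
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.binder.test.TestChannelBinderConfiguration;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.TestPropertySource;

@SpringBootTest
@Import(TestChannelBinderConfiguration.class)
@TestPropertySource(locations = "classpath:test.properties") // test.properties is an assumed file name
public class EchoTransformerTest {
    // test methods as above
}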

Why Is My Spring Cloud Function Attempting to Open Local HTTP Connections?

I'm deploying a rather simple Spring Cloud Function to AWS Lambda and am running into an issue with slow cold starts and occasional failures when calling the function once deployed.
First, here is my single class. (Eventually this function will do some domain record lookups against a database, so the name 'domain' is used here fairly liberally. I've also removed any of the actual data handling and am just returning strings.)
<< imports >>
@SpringBootConfiguration
public class DomainApplication implements ApplicationContextInitializer<GenericApplicationContext> {

    private static Log logger = LogFactory.getLog(DomainApplication.class);

    public static void main(String[] args) throws Exception {
        FunctionalSpringApplication.run(DomainApplication.class, args);
    }

    public Supplier<String> domains() {
        return () -> {
            logger.info("Return a List of Domains");
            return "All Domains";
        };
    }

    public Function<String, String> domain() {
        return value -> {
            logger.info("Return A Single Domains");
            return "This Domain" + value;
        };
    }

    @Override
    public void initialize(GenericApplicationContext context) {
        context.registerBean("domain", FunctionRegistration.class,
            () -> new FunctionRegistration<Function<String, String>>(domain())
                .type(FunctionType.from(String.class).to(String.class).getType()));
        context.registerBean("domains", FunctionRegistration.class,
            () -> new FunctionRegistration<Supplier<String>>(domains())
                .type(FunctionType.from(String.class).to(String.class).getType()));
    }
}
Here are the dependencies of the project:
...
set('springCloudVersion', '2.1.0.RELEASE')
...
implementation "org.springframework.cloud:spring-cloud-function-context:${springCloudVersion}"
implementation "org.springframework.cloud:spring-cloud-starter-function-webflux:${springCloudVersion}"
implementation "org.springframework.cloud:spring-cloud-function-adapter-aws:${springCloudVersion}"
implementation 'com.amazonaws:aws-lambda-java-core:1.2.0'
implementation 'com.amazonaws:aws-lambda-java-events:2.2.6'
testCompile("org.springframework.boot:spring-boot-starter-test:${springCloudVersion}")
Now, when I package and deploy a 'shadowJar' version of the app to AWS Lambda the startup logs show a connection refused failure:
2019-05-14 20:45:21.205 ERROR 1 --- [or-http-epoll-3] reactor.Flux.MonoRepeatPredicate.1 : onError(io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:80)
... is there a reason why the startup would be attempting to connect locally to port 80? (And as importantly - can I shut that off?)
I am facing the same issue; it has already been reported to the Spring Cloud team here:
https://github.com/spring-cloud/spring-cloud-function/issues/367

Pattern for properly using MongoClient in Vert.x

I feel quite uncomfortable with the MongoClient class, probably because I don't exactly understand what it is and how it works.
The first call to MongoClient.createShared will actually create the
pool, and the specified config will be used.
Subsequent calls will return a new client instance that uses the same
pool, so the configuration won’t be used.
Does that mean that the pattern should be:
In the startup function, to create the pool, we make the call
mc = MongoClient.createShared(vx, config, "poolname");
Is the returned value mc important for this first call if it succeeds? What is its value if the creation of the pool fails? The documentation doesn't say. There is a socket exception if mongod is not running, but what about the other cases?
In another place in the code (another verticle, for example), can we write mc = MongoClient.createShared(vx, new JsonObject(), "poolname"); to avoid systematically needing to access shared objects?
Again, in another verticle where we need to access the database, should we define MongoClient mc
as a class field, in which case it will be released to the pool only in the stop() method, or
should it be a variable populated with MongoClient.createShared(...) and de-allocated with mc.close() once we no longer need the connection, in order to release it back to the pool?
What I would write is as follows:
// Main startup Verticle
import ...

public class MainVerticle extends AbstractVerticle {
    ...
    @Override
    public void start(Future<Void> sf) throws Exception {
        ...
        try {
            MongoClient.createShared(vx, config().getJsonObject("mgcnf"), "pool");
        }
        catch(Exception e) {
            log.error("error error...");
            sf.fail("failure reason");
            return;
        }
        ...
        sf.complete();
    }
    ...some other methods
}
and then, in some other place:
public class SomeVerticle extends AbstractVerticle {

    public void someMethod(...) {
        ...
        // use the database:
        MongoClient mc = MongoClient.createShared(vx, new JsonObject(), "pool");
        mc.save(the_coll, the_doc, res -> {
            mc.close();
            if(res.succeeded()) {
                ...
            }
            else {
                ...
            }
        });
        ...
    }
    ...
}
Does that make sense? Yet this is not what I see in the examples I could find around the internet.
Don't worry about pools. Don't use them. They don't do what you think they do.
In your start method of any verticle, set a field (what you call a class field, but you really mean instance field) on the inheritor of AbstractVerticle to MongoClient.createShared(getVertx(), config). Close the client in your stop method. That's it.
The other exceptions you'll see are:
Bad username/password
Unhealthy cluster state
The Java driver has a limit of 500 or 1,000 connections (depending on version), you'll receive an exception if you exceed this connection count
These will be propagated up from the driver, wrapped in a VertxException.
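To make that concrete, here is a minimal sketch (the "mgcnf" config key, collection name, and document are assumptions carried over from the question) of a verticle that creates the shared client once in start() and closes it in stop():
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;

public class SomeVerticle extends AbstractVerticle {

    private MongoClient mc; // instance field: created once per verticle instance

    @Override
    public void start(Future<Void> sf) {
        mc = MongoClient.createShared(vertx, config().getJsonObject("mgcnf"), "pool");
        sf.complete();
    }

    @Override
    public void stop() {
        // releases this verticle's reference to the shared client
        mc.close();
    }

    public void someMethod() {
        // hypothetical usage: collection name and document are placeholders
        mc.save("the_coll", new JsonObject().put("key", "value"), res -> {
            if (res.succeeded()) {
                // handle success
            } else {
                // handle failure
            }
        });
    }
}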

How to Disable Ribbon and just use FeignClient in Spring Cloud

I am aware that we can force FeignClient to use OkHttp instead of Ribbon by providing the url, e.g. @FeignClient(url="serviceId", name="serviceId").
I want the OkHttpClient to be used even when just the name is provided, e.g. @FeignClient(name="serviceId").
As per the Spring Cloud documentation, "if Ribbon is enabled it is a LoadBalancerFeignClient, otherwise the default feign client is used."
How can I disable Ribbon so that the default feign client will be used?
None of the solutions on the internet worked for me.
Simply setting an absolute url in the url attribute resulted in load-balancing exceptions:
// this resulted in java.lang.RuntimeException: com.netflix.client.ClientException: Load balancer does not have available server for client: localhost
@Lazy
@Configuration
@Import(FeignClientsConfiguration.class)
public class MyConfig {

    @LocalServerPort
    private int port;

    @Bean
    public MyClient myClient(final Decoder decoder, final Encoder encoder, final Client client) {
        return Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .target(MyClient.class, "http://localhost:" + port);
    }
}
Setting spring.cloud.loadbalancing.ribbon.enabled=false resulted in application context problems. Additional settings need to be disabled for this to work; I did not probe further.
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'eurekaLoadBalancerClientConfiguration': Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:160)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:416)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1788)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
...
...
My working solution
Finally, after inspecting the source code in org.springframework.cloud.openfeign.ribbon.DefaultFeignLoadBalancedConfiguration, I came up with this solution:
@Lazy // required for @LocalServerPort to work in a @Configuration/@TestConfiguration
@TestConfiguration
@Import(FeignClientsConfiguration.class)
public class MyConfig {

    @LocalServerPort
    private int port;

    @Bean
    public MyClient myClient(Decoder decoder, Encoder encoder, Client client, Contract contract) {
        return Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .contract(contract)
                .target(MyClient.class, "http://localhost:" + port);
    }

    // provide a default Feign Client so that Spring will not automatically create its LoadBalancerFeignClient
    @Bean
    public Client feignClient(SpringClientFactory clientFactory) {
        return new Client.Default(null, null);
    }
}
I had the same question, but my setup is a bit different and I did not get it working in my case (using spring-cloud-starter-openfeign with Spring MVC style annotations).
FYI: I needed a custom client with an SSLSocketFactory and ended up just creating the bean for the client and keeping the url on @FeignClient:
@Bean
public Client myClient() {
    return new Client.Default(getSSLSocketFactory(), new NoopHostnameVerifier());
}
However, we do have projects using spring-cloud-starter-feign where the URL is not provided on the annotation. Not sure if the config below is complete (I did not set it up) but it might point you in the right direction...
dependencies
compile("org.springframework.cloud:spring-cloud-starter-feign") {
    exclude group: 'org.springframework.cloud', module: 'spring-cloud-starter-ribbon'
    exclude group: 'org.springframework.cloud', module: 'spring-cloud-starter-archaius'
}
config
@Configuration
@Import(FeignClientsConfiguration.class) // org.springframework.cloud.netflix.feign.FeignClientsConfiguration
public class MyConfig {

    @Value("${client.url}")
    private String url;

    @Bean
    public MyClient myClient(final Decoder decoder, final Encoder encoder, final Client client) {
        return Feign.builder().client(client)
                .encoder(encoder)
                .decoder(decoder)
                .target(MyClient.class, url);
    }
}
It has nothing to do with Ribbon.
Check this:
feign:
  httpclient:
    enabled: false
This will disable the Spring Cloud auto-configured httpclient and will look for a @Bean named httpClient in the context. So provide that @Bean definition in a @Configuration class and that's all.
Check the FeignAutoConfiguration class in Spring Cloud Feign.
https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html
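If the goal is specifically OkHttp rather than the Apache client, a minimal sketch of supplying the Feign Client bean yourself (assuming the io.github.openfeign:feign-okhttp module is on the classpath) would be:
import feign.Client;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignOkHttpConfig {

    // Wraps OkHttp as a plain (non-load-balanced) Feign Client.
    @Bean
    public Client feignClient() {
        return new feign.okhttp.OkHttpClient();
    }
}
Spring Cloud OpenFeign can also auto-configure this client via the feign.okhttp.enabled property when feign-okhttp is present, which may be simpler than defining the bean by hand.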

UnitTest FluentNhibernate using PostgreSQLConfiguration

When setting up our new architecture I followed a guide which used NHibernate with the MsSql2008 configuration.
We are not using MsSql2008; we are using PostgreSQL instead. The configuration all works great and it saves to the database etc.
I am trying to write a unit test to test the UoW, but I can't get the InMemory configuration to work.
The guide that I followed used this following Provider:
public class InMemoryNHibernateConfigurationProvider : NHibernateConfigurationProvider
{
    public override Configuration GetDatabaseConfiguration()
    {
        var databaseDriver = SQLiteConfiguration.Standard.InMemory().ShowSql();
        return CreateCoreDatabaseConfiguration(databaseDriver);
    }

    public static void InitialiseDatabase(Configuration configuration, ISession session)
    {
        new SchemaExport(configuration).Execute(true, true, false, session.Connection, Console.Out);
    }
}
My standard (Non UnitTest) configuration looks like this:
public abstract class NHibernateConfigurationProvider : INHibernateConfigurationProvider
{
    public abstract Configuration GetDatabaseConfiguration();

    public Configuration CreateCoreDatabaseConfiguration(
        IPersistenceConfigurer databaseDriver,
        Action<Configuration> databaseBuilder = null)
    {
        var fluentConfiguration =
            Fluently.Configure()
                .Database(databaseDriver)
                .Mappings(m => m.AutoMappings.Add(AutoMap.AssemblyOf<Organisation>(new DefaultMappingConfiguration())
                    //.Conventions.AddFromAssemblyOf<IdGenerationConvention>()
                    .UseOverridesFromAssemblyOf<OrganisationMappingOverride>()));

        if (databaseBuilder != null)
        {
            fluentConfiguration.ExposeConfiguration(databaseBuilder);
        }

        return fluentConfiguration.BuildConfiguration();
    }
}

public class PostgreSQLServerNHibernateConfigurationProvider : NHibernateConfigurationProvider
{
    private static readonly string NpgsqlConnectionString = ConfigurationManager.ConnectionStrings["ProdDBConnection"].ConnectionString;

    public override Configuration GetDatabaseConfiguration()
    {
        return CreateCoreDatabaseConfiguration(
            PostgreSQLConfiguration.Standard.ConnectionString(NpgsqlConnectionString)
                .Dialect("NHibernate.Dialect.PostgreSQL82Dialect").ShowSql(),
            BuildDatabase);
    }

    ....... // Other Methods etc
}
How do I write an InMemoryConfigurationProvider that tests using PostgreSQLConfiguration instead of SQLiteConfiguration? PostgreSQLConfiguration does not have an InMemory option.
Do I implement a configuration that creates another database and just drop it on teardown? Or is there perhaps another way of doing it?
Using SQLite works really well, and although it has some differences from SQL Server (which we use), they are so minor that it doesn't matter for testing purposes.
With that said, this is how we set up the tests:
All test cases where we want to write to or read from the db extend the SqLiteTestBaseclass. That way they all get access to a session created by the base setup method, and can set up the DAOs / repositories as needed.
Using this approach we also always get a fresh new db for each test case.
Update:
After trying this out a bit more I actually found that you have to modify it a bit to use InMemory (we had previously used sqlite backed by a file on disk instead). So the updated (complete) setup looks like this:
private Configuration _savedConfig;

[SetUp]
public void BaseSetup()
{
    FluentConfiguration configuration =
        Fluently.Configure()
            .Database(SQLiteConfiguration.Standard
                .InMemory())
            .ExposeConfiguration(
                x => x.SetInterceptor(new MultiTenancyInterceptor(ff)))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<IRepository>())
            .Mappings(m => m.FluentMappings.ExportTo("c:\\temp\\mapping"))
            .ExposeConfiguration(x => _savedConfig = x) // save the NHibernate configuration for use when creating the schema, in order to be able to use the same connection
            .ExposeConfiguration(x => ConfigureEnvers(x))
            .ExposeConfiguration(x => ConfigureListeners(x));

    ISessionFactory sessionFactory;
    try
    {
        sessionFactory = configuration.BuildSessionFactory();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.StackTrace);
        throw;
    }

    _session = sessionFactory.OpenSession();
    BuildSchema(_savedConfig, _session);
}

private void BuildSchema(Configuration config, ISession session)
{
    new SchemaExport(config)
        .Execute(false, true, false, session.Connection, null);
}
The reason why you have to jump through all these hoops in order to use the in-memory version of Sqlite is due to the db being tied to the connection. You have to use the same connection that creates the db to populate the schema, thus we have to save the Configuration object so that we can export the schema later when we've created the connection.
See this blogpost for some more details: http://www.tigraine.at/2009/05/29/fluent-nhibernate-gotchas-when-testing-with-an-in-memory-database/
N.B: This only shows the setup of the db. We have some code which also populates the db with standard values (users, customers, masterdata etc) but I've omitted that for brevity.