Ribbon: Unable to set default configuration using @RibbonClients(defaultConfiguration=...) - spring-cloud

The @RibbonClients annotation allows us to customise the Ribbon configuration per client. This process is described in the documentation at http://projects.spring.io/spring-cloud/spring-cloud.html#_customizing_the_ribbon_client
This is all fine. I tried to use the same approach to override the default configuration that should be applied to all my clients. So I defined the following configuration class and made sure it is picked up by component scanning:
@Configuration
@RibbonClients(defaultConfiguration = MyDefaultRibbonConfig.class)
public class MyRibbonAutoConfiguration {
}
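For reference, MyDefaultRibbonConfig is just an ordinary Ribbon configuration class in the style of the documentation; a minimal sketch, assuming the standard Netflix Ribbon types (the PingUrl and AvailabilityFilteringRule beans are only placeholders, not part of the original setup):

import com.netflix.loadbalancer.AvailabilityFilteringRule;
import com.netflix.loadbalancer.IPing;
import com.netflix.loadbalancer.IRule;
import com.netflix.loadbalancer.PingUrl;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyDefaultRibbonConfig {

    // Replace the default no-op ping with a real HTTP ping (placeholder choice)
    @Bean
    public IPing ribbonPing() {
        return new PingUrl();
    }

    // Load-balancing rule every client should pick up by default (placeholder choice)
    @Bean
    public IRule ribbonRule() {
        return new AvailabilityFilteringRule();
    }
}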
Unfortunately, it turns out that MyDefaultRibbonConfig is not taken into account when building the Ribbon client's application context. A quick look at (and trace through) RibbonClientConfigurationRegistrar leads me to think my @RibbonClients(defaultConfiguration=...) annotation is unconditionally overridden by the one provided by org.springframework.cloud.netflix.ribbon.eureka.RibbonEurekaAutoConfiguration.
However, it works if the @RibbonClients annotation is set on an inner class instead of a top-level class:
@Configuration
public class MyRibbonAutoConfiguration {

    @Configuration
    @RibbonClients(defaultConfiguration = MyDefaultRibbonConfig.class)
    static class SubConfig {
    }
}
This is a side effect of the strategy followed by RibbonClientConfigurationRegistrar to name the discovered configuration beans:
registerClientConfiguration(registry,
        "default." + metadata.getEnclosingClassName(),
        attrs.get("defaultConfiguration"));
The configuration for an annotation declared on a top-level class is then registered with a bean name of default.null.defaultConfiguration, so the next one overrides the previous (and I am not sure the order is predictable).
This behaviour looks strange to me. Did I miss something? Should I proceed differently?

This was an issue in Spring Cloud Netflix 1.0.1. See https://github.com/spring-cloud/spring-cloud-netflix/issues/374 for more information.

Related

VaadinServiceInitListener not picked up in a Quarkus app

I have a Quarkus application using current versions of Vaadin Flow and Quarkus (23.2.4 and 2.13.1.Final). I want to have a VaadinServiceInitListener to check access annotations on the views (@RolesAllowed(...)) using AccessAnnotationChecker. I believe annotating the implementation with @VaadinServiceEnabled should fix this, but I need to register it in META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener to have it activated, which is how to do it when not using a dependency injection framework. Then everything works as expected and I can use AccessAnnotationChecker on BeforeEnterEvent to see whether the user has access to that view.
I also notice the message "Can't find any @VaadinServiceScoped bean implementing 'I18NProvider'. Cannot use CDI beans for I18N, falling back to the default behavior." on startup. Strangely, implementing I18NProvider in a class and annotating it with @VaadinServiceEnabled and @VaadinServiceScoped makes that message go away, i.e. it is recognized by CDI.
Why isn't my VaadinServiceInitListener implementation recognized? Currently it is annotated with
@VaadinServiceEnabled
@VaadinServiceScoped
@Unremovable
My pom.xml includes
vaadin-quarkus-extension,
quarkus-oidc,
quarkus-keycloak-authorization,
vaadin-jandex
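For reference, the listener setup described above looks roughly like this; a minimal sketch, with the class name chosen for illustration and the annotation packages assumed from the vaadin-quarkus-extension and Quarkus Arc:

import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.VaadinServiceInitListener;
import com.vaadin.quarkus.annotation.VaadinServiceEnabled;  // package assumed
import com.vaadin.quarkus.annotation.VaadinServiceScoped;   // package assumed
import io.quarkus.arc.Unremovable;

@VaadinServiceEnabled
@VaadinServiceScoped
@Unremovable
public class AccessCheckingInitListener implements VaadinServiceInitListener {

    @Override
    public void serviceInit(ServiceInitEvent event) {
        // intended: register a BeforeEnterListener that runs AccessAnnotationChecker
    }
}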
Instead of using a listener, you can use a CDI event.
Quarkus's dependency injection solution is based on CDI, so you can use the same events. Here's an example:
public class BootstrapCustomizer {

    private void onServiceInit(@Observes ServiceInitEvent serviceInitEvent) {
        serviceInitEvent.addIndexHtmlRequestListener(this::modifyBootstrapPage);
    }

    private void modifyBootstrapPage(IndexHtmlResponse response) {
        response.getDocument().body().append("<p>By CDI add-on</p>");
    }
}
More information here: https://vaadin.com/docs/latest/integrations/cdi/events
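Applied to the access-checking use case from the question, a minimal sketch (class and method names are illustrative; I'm assuming the javax CDI namespace used by Quarkus 2.x) could register a BeforeEnterListener from the same kind of observer:

import javax.enterprise.event.Observes;

import com.vaadin.flow.router.BeforeEnterEvent;
import com.vaadin.flow.router.NotFoundException;
import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.auth.AccessAnnotationChecker;

public class AccessControlCustomizer {

    private final AccessAnnotationChecker accessChecker = new AccessAnnotationChecker();

    // Observing ServiceInitEvent replaces the META-INF/services registration
    private void onServiceInit(@Observes ServiceInitEvent event) {
        event.getSource().addUIInitListener(uiInit ->
                uiInit.getUI().addBeforeEnterListener(this::checkAccess));
    }

    private void checkAccess(BeforeEnterEvent event) {
        // Evaluate @RolesAllowed / @PermitAll / @AnonymousAllowed on the target view
        if (!accessChecker.hasAccess(event.getNavigationTarget())) {
            event.rerouteToError(NotFoundException.class);
        }
    }
}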

Where to put @OpenAPIDefinition?

The documentation for defining general API information using the quarkus-smallrye-openapi extension is extremely sparse, and does not explain how to use all the annotations for setting up the OpenAPI generation.
For some background, I am using a clean and largely empty project (Quarkus version 1.0.1.Final) generated from code.quarkus.io, with a single class defined as follows (with the attempted @OpenAPIDefinition annotation):
@OpenAPIDefinition(
    info = @Info(
        title = "Custom API title",
        version = "3.14"
    )
)
@Path("/hello")
public class ExampleResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}
Through much digging, I have eventually found that general API information (contact info, version, etc.) is defined using the @OpenAPIDefinition annotation, but when it is used on my existing endpoint definition, no changes are made to the generated OpenAPI specification. What am I doing wrong?
Try putting the annotation on the JAX-RS Application class. I realize you don't need one of those in a Quarkus application, but I think it doesn't hurt either. For reference, in the specification TCK:
https://github.com/eclipse/microprofile-open-api/blob/master/tck/src/main/java/org/eclipse/microprofile/openapi/apps/airlines/JAXRSApp.java
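A minimal sketch of what that could look like for the resource above (the class name is illustrative; Quarkus 1.x still uses the javax JAX-RS namespace):

import javax.ws.rs.core.Application;

import org.eclipse.microprofile.openapi.annotations.OpenAPIDefinition;
import org.eclipse.microprofile.openapi.annotations.info.Info;

// An otherwise empty JAX-RS Application class that only carries the OpenAPI metadata
@OpenAPIDefinition(
    info = @Info(
        title = "Custom API title",
        version = "3.14"
    )
)
public class ExampleApplication extends Application {
}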

How to override the main application.yml with a testing application.yml when testing a REST API in an @Autowired service class?

I'm writing automated tests using TestNG for the REST API of my application. The application has a RestController which contains an @Autowired service class. When the REST endpoint is called with an HTTP GET request, the service looks into a storage directory for XML files, transforms their contents into objects and stores them in a database. The important thing for my question is that the path to the storage directory is stored in /src/main/resources/application.yml (source.storage) and imported via a @Value annotation.
Now, I also have the source.storage property in src/test/resources/application.yml, pointing to a different directory within src/test where I store my testing XML files, and I import it into my test class with a @Value annotation again. My test calls the REST endpoint with an HTTP GET. However, it seems that the service still draws the source.storage property from the main application.yml, while I would like that value overridden by the one in the test application.yml file. In other words, the service tries to import XML files from the application storage directory rather than from my testing storage.
@ActiveProfiles and @TestPropertySource do not seem to work for me. Scanning the main application.yml for its storage property is not an option, as in the end the application.yml will be drawn from Spring Cloud Config, and I would not know where the main application.yml would be located.
Is there a way with which I could make the @Autowired service draw the source.storage property from the test application.yml rather than from the main one?
Any advice would be appreciated.
Thanks, Petr
Well, it really depends on what you're trying to build: some sort of unit test of the controller or, more likely, an integration test. Both approaches are explained in this tutorial.
If you're trying to write an integration test, which seems more likely from your question, then @ActiveProfiles or @TestPropertySource should work for you. I would suggest using profiles; in a growing application with a lot of properties it is a bit more convenient to replace just some of the properties for testing. Below is a setup which worked for me when writing integration tests for controller endpoints:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class AreaControllerTest {

    @Autowired
    TestRestTemplate rest;

    @MockBean
    private JobExecutor jobExecutor;

    @Test
    public void test01_List() {
        //
    }

    @Test
    public void test02_Get() {
        //
    }

    // ...
}
There are several important things here.
The testing properties are in src/test/resources/application-test.properties and are merged with the ones in application.properties, as the @ActiveProfiles("test") annotation suggests (a sketch for the source.storage case follows below).
Also essential is @RunWith(SpringRunner.class), which is JUnit specific; for the TestNG alternative please refer to this SO question.
Finally, the @SpringBootTest annotation will start the whole application context.
@FixMethodOrder and @DirtiesContext are further setup of the test case and are not strictly necessary.
Notice also the @MockBean annotation; in this case we did not want to use the real implementation of JobExecutor, so we replaced it with a mock.
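Applied to the source.storage property from your question, a minimal sketch under the @ActiveProfiles("test") setup above would be a profile-specific file in src/test/resources (YAML, since your question uses it; the directory path and field name are only placeholders):

# src/test/resources/application-test.yml, loaded when the "test" profile is active
source:
  storage: src/test/resources/storage

The @Value injection in the service then stays unchanged and picks up the overridden value:

@Value("${source.storage}")
private String storagePath;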
If you want to write unit tests where you just check the logic of the controller and service on their own, then you have to have two test classes, each testing the respective class. Testing the service should be a standard unit test; testing the controller is a bit trickier and is probably closer to a partial integration test. If this is your case, I would recommend the MockMvc approach explained in the above-mentioned tutorial. A small snippet from there:
@RunWith(SpringRunner.class)
@WebMvcTest(GreetingController.class)
public class WebMockTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private GreetingService service;

    @Test
    public void greetingShouldReturnMessageFromService() throws Exception {
        when(service.greet()).thenReturn("Hello Mock");
        this.mockMvc.perform(get("/greeting")).andDo(print()).andExpect(status().isOk())
                .andExpect(content().string(containsString("Hello Mock")));
    }
}
Notice the @MockBean annotation, which mocks the service and lets you specify your own behaviour for the mock. This point is critical, because this sort of test does not load the whole application context, only the MVC context, so the real services are not available. Again, as in the integration test, the @RunWith(SpringRunner.class) annotation is essential. Finally, @WebMvcTest(GreetingController.class) starts only the MVC context for the GreetingController class and not the whole application.
You can try supplying the property directly to the Spring Boot test:
@SpringBootTest(properties = {"source.storage=someValue"})
Regarding the application picking up the wrong property source, you should also check whether your application is being built properly.

SetExecutionStrategy to SqlAzureExecutionStrategy with DbMigrationsConfiguration?

I saw a post today about implementing SqlAzureExecutionStrategy:
http://romiller.com/tag/sqlazureexecutionstrategy/
However, all examples I can find of this use a Configuration that inherits from DbConfiguration. My project is using EF6 Code First Migrations, and the Configuration it created inherits from DbMigrationsConfiguration. This class doesn't contain a definition for SetExecutionStrategy, and I can find no examples that actually combine SqlAzureExecutionStrategy (or any SetExecutionStrategy) with DbMigrationsConfiguration.
Can this be done?
If anyone else comes across this question, this is what we figured out:
Create a custom class that inherits from DbConfiguration (which has SetExecutionStrategy):
public class DataContextConfiguration : DbConfiguration
{
    public DataContextConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy());
    }
}
Then add this attribute to your DataContext, specifying that it is to use your custom class:
[DbConfigurationType(typeof(DataContextConfiguration))]
public class DataContext : DbContext, IDataContext
{
    ...
}
After more investigation, I now think the correct answer is:
DbMigrationsConfiguration is completely separate and only configures the migration settings. That's why it doesn't inherit from or have the same options as DbConfiguration.
It is not loaded, and is irrelevant, for actual operation.
So you can (and should) declare a separate class based on DbConfiguration to configure the runtime behaviour.
I added some tracing and saw that the first time you use a DatabaseContext in an application, it runs the migration and the migration configuration.
But the first time the DatabaseContext is actually used (e.g. to load some data from the database), it will load your DbConfiguration class as well.
So I don't think there is any problem at all.

Castle windsor logging facility

I'm trying to remove some logging dependencies and stumbled across Castle Windsor's logging facility. However, I'm kind of skeptical about whether I should use it or not.
public class MyClass
{
    public Castle.Core.Logging.ILogger Logger { get; set; }
    ...
}
Windsor's logging facility requires that you expose your logger as a property. Is that really a good practice? I feel like I'm almost breaking encapsulation, because normally when I reuse a component I don't care about its logging mechanism and I don't normally want to see it exposed.
If I use a custom wrapper that uses a static class to create the log, I can keep it private. For example:
public class MyClass
{
    private static MyCustomWrapper.ILogger Logger = LogManager.GetLogger(typeof(MyClass));
    ...
}
I've searched the web for reasons why I should use the logging facility, but I'm only finding articles on how to use it, not why I should use it. I feel like I'm missing the point. Having the logging component exposed kind of scares me away.
Windsor's logging facility requires that you expose your logger as a property.
Not necessarily. You can also take your logger as a constructor (i.e. mandatory) dependency. The logger is usually declared as a property (i.e. an optional dependency) because there might be no logger; that is, the component should be able to function without one.
If I use a custom wrapper that uses a static class to create the log
That's a service locator; that code couples MyClass to LogManager, which is IMHO worse than what you were trying to get away from.