Integration testing for Spring Cloud Stream: how to ensure application resources are not used - spring-cloud

I'm trying to do some integration testing for my cloud streaming application. One of the main issues I'm observing so far is that the TestChannelBinderConfiguration keeps picking up the configuration specified in src/main/resources/application.yml instead of treating it as blank (since there is no config file in src/test/resources/).
If I delete the application.yml file or remove all spring-cloud-stream related configuration, the test passes. How can I ensure that the TestChannelBinderConfiguration does not pick up the application.yml file?
@Test
public void echoTransformTest() {
    try (ConfigurableApplicationContext context =
            new SpringApplicationBuilder(
                    TestChannelBinderConfiguration.getCompleteConfiguration(DataflowApplication.class))
                .properties(new Properties())
                .run("--spring.cloud.function.definition=echo")) {
        InputDestination source = context.getBean(InputDestination.class);
        OutputDestination target = context.getBean(OutputDestination.class);
        GenericMessage<byte[]> inputMessage = new GenericMessage<>("hello".getBytes());
        source.send(inputMessage);
        assertThat(target.receive().getPayload()).isEqualTo("hello".getBytes());
    }
}

I resolved this by doing the following:
@SpringBootTest(properties = {"spring.cloud.stream.function.definition=reverse"})
@Import(TestChannelBinderConfiguration.class)
public class EchoTransformerTest {

    @Autowired private InputDestination input;
    @Autowired private OutputDestination output;

    @Test
    public void testTransformer() {
        this.input.send(new GenericMessage<byte[]>("hello".getBytes()));
        assertThat(output.receive().getPayload()).isEqualTo("olleh".getBytes());
    }
}
Adding an application.yml to src/test/resources also ensures that we don't read the application properties from src/main/resources.
Another way was to explicitly define:
@TestPropertySource(locations = "test.yml")
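For illustration, a minimal sketch of that second approach (the file name, property key, and class names are assumptions; plain @TestPropertySource resolves .properties files, so the test-specific overrides are expressed as properties here rather than YAML):

@SpringBootTest
@Import(TestChannelBinderConfiguration.class)
@TestPropertySource(locations = "classpath:application-test.properties",
        properties = "spring.cloud.function.definition=echo")
public class EchoTransformTest {

    @Autowired private InputDestination input;
    @Autowired private OutputDestination output;

    @Test
    public void echoTransformTest() {
        // only the test-scoped properties are applied, and the test binder is used
        input.send(new GenericMessage<>("hello".getBytes()));
        assertThat(output.receive().getPayload()).isEqualTo("hello".getBytes());
    }
}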

Related

How can I have multiple instances of a Spring Boot repository (interface), to get complete test-state isolation?

1) Contextualization:
In order to have complete test-state isolation in all tests of my test class,
I would like to have a new repository (DAO) instance for each individual test;
my repository is an interface, which is why I cannot simply instantiate it.
My goal is:
Run all tests in parallel, meaning at the same time;
that is why I need individual/multiple instances of the repository (DAO) in each test;
those multiple instances will make sure that a test finishing does not interfere with the tests that are still running.
Below is the code for the above situation:
1.1) Code:
Current working status: working, BUT with the SAME repository instance;
Current behaviour:
The tests are not stable;
SOMETIMES they interfere with each other,
meaning the test that finishes early destroys the repository bean that is still being used by the test that is still running.
public class ServiceTests2 extends ConfigTests {

    private List<Customer> customerList;
    private Flux<Customer> customerFlux;

    @Lazy
    @Autowired
    private ICustomerRepo repo;

    private ICustomerService service;

    @BeforeEach
    public void setUp() {
        service = new CustomerService(repo);
        Customer customer1 = customerWithName().create();
        Customer customer2 = customerWithName().create();
        customerList = Arrays.asList(customer1, customer2);
        customerFlux = service.saveAll(customerList);
    }

    @Test
    @DisplayName("Save")
    public void save() {
        StepVerifier.create(customerFlux)
                    .expectNextSequence(customerList)
                    .verifyComplete();
    }

    @Test
    @DisplayName("Find: Objects")
    public void find_object() {
        StepVerifier
            .create(customerFlux)
            .expectNext(customerList.get(0))
            .expectNext(customerList.get(1))
            .verifyComplete();
    }
}
2) The error:
This error happens in the failed tests:
3) Question:
How can I create multiple instances of the repository,
even though it is an interface (which does not allow instantiation),
in order to have COMPLETE TEST ISOLATION,
meaning: one different repository instance in each test?
Thanks a lot for any help or ideas.
You can use the @DirtiesContext annotation on the test class that modifies the application context.
Java Doc
Spring documentation
By default, this will mark the application context as dirty after the entire test class is run. If you would like to mark the context as dirty after a single test method, then you can either annotate the test method instead or set the classMode property to AFTER_EACH_TEST_METHOD on your class-level annotation:
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
When an application context is marked dirty, it is removed from the testing framework's cache and closed; thus the underlying Spring container is rebuilt for any subsequent test that requires a context with the same set of resource locations.
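For example, a minimal sketch of applying this to the test class above (the assumption being that a freshly built context, and therefore a fresh repository bean, per test method is what is wanted):

@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
public class ServiceTests2 extends ConfigTests {
    // ... same tests as above; after each @Test method the context is marked dirty,
    // so the next test gets a freshly built Spring container and repository bean.
}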

How to write integration tests for spring-batch-integration?

I'm using spring-integration together with spring-batch and got stuck trying to write integration tests for the whole flow, not just a single config.
I've created an embedded SFTP server for these tests and am trying to send a message to sftpInboundChannel - the message is sent, but nothing happens; however, when I send the message to the next channel (after sftpInboundChannel) it works fine. Also, I'm not able to load the test source properties, even though I'm using the @TestPropertySource annotation.
These are my class annotations:
@TestPropertySource(properties = {
        // here go all the properties
})
@EnableConfigurationProperties
@RunWith(SpringRunner.class)
@Import({TestConfig.class, SessionConfig.class})
@ActiveProfiles("it")
@SpringIntegrationTest
@EnableIntegration
@SpringBootTest
@DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD)
This is my class body:
@Autowired
private PollableChannel sftpInboundChannel;

@Autowired
private SessionFactory<ChannelSftp.LsEntry> defaultSftpSessionFactory;

@Autowired
private EmbeddedSftpServer server;

@Test
public void shouldDoSmth() {
    RemoteFileTemplate<ChannelSftp.LsEntry> template;
    try {
        template = new RemoteFileTemplate<>(defaultSftpSessionFactory);
        SftpTestUtils.moveToRemoteFolder(template);
        final List<ChannelSftp.LsEntry> movedFiles =
                SftpTestUtils.listFilesFromDirectory("folder/subfolder", template);
        log.info("Moved file {}", movedFiles.size());
        final MessageBuilder<String> messageBuilder = MessageBuilder.withPayload("Sample.txt") // path to file
                .setHeader("file_Path", "Sample.txt");
        boolean wasSent = this.sftpInboundChannel.send(messageBuilder.build());
        log.info("Was sent to sftpInboundChannel channel {}", wasSent);
        log.info("message {}", messageBuilder.build());
    } finally {
        SftpTestUtils.cleanUp();
    }
}
For the case of the property file not being read, one solution is to add something like this to your test class:
@BeforeClass
public static void beforeClass() {
    System.setProperty("propertyfile", "nameOfFile.properties");
}
A second way is to create an XML (or class) config where you add the tag:
<context:property-placeholder
        location="nameOfFile.properties"
        ignore-resource-not-found="true" system-properties-mode="OVERRIDE" />
and your file will be found.
The property file should be inside the resources folder.
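For completeness, a hedged sketch of the class-based equivalent the answer mentions (the class name is an assumption; it mirrors the location and ignore-resource-not-found settings of the XML above):

@Configuration
public class TestPropertyConfig {

    // Static bean method so the placeholder configurer is registered early,
    // before the regular beans it post-processes are created.
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        PropertySourcesPlaceholderConfigurer configurer = new PropertySourcesPlaceholderConfigurer();
        configurer.setLocation(new ClassPathResource("nameOfFile.properties"));
        configurer.setIgnoreResourceNotFound(true);
        return configurer;
    }
}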

How to generate pretty print docs while using spring cloud contract

Without Spring Cloud Contract, I customized the configuration of REST Docs as below:
@Rule
public JUnitRestDocumentation restDocumentation = new JUnitRestDocumentation();

protected WebTestClient http;

@Autowired
private ApplicationContext context;

/**
 * setup.
 */
@Before
public void before() {
    this.http = WebTestClient.bindToApplicationContext(context)
            .configureClient()
            .baseUrl("http://theserver")
            .filter(WebTestClientRestDocumentation
                    .documentationConfiguration(this.restDocumentation)
                    .operationPreprocessors()
                    .withRequestDefaults(prettyPrint())
                    .withResponseDefaults(prettyPrint())
            )
            .build();
}
However, while using Spring REST Docs and Spring Cloud Contract together, I have to use the annotations to enable REST Docs and Cloud Contract:
@AutoConfigureRestDocs(uriHost = "theserver", uriPort = 80)
@AutoConfigureWebTestClient
public abstract class BaseTest {
Any advice on how to generate pretty-printed docs while generating Cloud Contract stubs?
What you can do is not use @AutoConfigureRestDocs, but instead use the API and pass the .snippets().withAdditionalDefaults(new WireMockSnippet()) line to the WebTestClientRestDocumentation.documentationConfiguration(...) setup. That way you will start producing WireMock snippets by default, and none of your previous configuration will be discarded.
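Roughly, a sketch of how that might look when folded into the existing before() method (the and() call used to step back to the parent configurer is an assumption about how the nested REST Docs configurers chain):

@Before
public void before() {
    this.http = WebTestClient.bindToApplicationContext(context)
            .configureClient()
            .baseUrl("http://theserver")
            .filter(WebTestClientRestDocumentation
                    .documentationConfiguration(this.restDocumentation)
                    .operationPreprocessors()
                    .withRequestDefaults(prettyPrint())
                    .withResponseDefaults(prettyPrint())
                    .and()
                    .snippets()
                    // produce WireMock stubs alongside the default snippets
                    .withAdditionalDefaults(new WireMockSnippet()))
            .build();
}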

Problems while connecting to two MongoDBs via Spring

I'm trying to connect to two different MongoDBs with Spring (1.5.2; we included Spring in an internal framework, therefore it is not the latest version yet), and this already works partially but not fully. More precisely, I found a strange behavior, which I will describe below after showing my setup.
So this is what I have done so far:
Project structure
backend
config
domain
customer
internal
repository
customer
internal
service
In config I have my Mongo configurations.
I created one base class which extends AbstractMongoConfiguration. This class holds fields for database, host etc., which are filled with the properties from an application.yml. It also holds a couple of methods for creating a MongoClient and a SimpleMongoDbFactory.
Furthermore, there are two custom configuration classes, one config for each MongoDB. Both extend the base class.
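As a rough sketch of what such a base class might look like (the field names and helper methods below are assumptions based on the description, not the poster's actual code; Spring Data MongoDB 1.x API):

public abstract class BaseMongoConfig extends AbstractMongoConfiguration {

    // filled from application.yml via @ConfigurationProperties on the subclasses
    private String host;
    private int port;
    private String database;
    private String username;
    private String password;

    @Override
    protected String getDatabaseName() {
        return database;
    }

    @Override
    public Mongo mongo() throws Exception {
        return getMongoClient(getAddress(), getCredentials());
    }

    protected ServerAddress getAddress() {
        return new ServerAddress(host, port);
    }

    protected List<MongoCredential> getCredentials() {
        return Collections.singletonList(
                MongoCredential.createCredential(username, database, password.toCharArray()));
    }

    protected MongoClient getMongoClient(ServerAddress address, List<MongoCredential> credentials) {
        return new MongoClient(address, credentials);
    }

    protected SimpleMongoDbFactory getSimpleMongoDbFactory(MongoClient client, String databaseName) {
        return new SimpleMongoDbFactory(client, databaseName);
    }

    // getters and setters for the properties omitted for brevity
}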
Here is how they are coded:
Primary Connection
@Primary
@EntityScan(basePackages = "backend.domain.customer")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.customer"},
        mongoTemplateRef = "customerDataMongoTemplate")
@ConfigurationProperties(prefix = "customer.mongodb")
public class CustomerDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "customerDataMongoTemplate";

    @Override
    @Bean(name = CustomerDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
The second configuration class looks pretty similar. Here it is:
@EntityScan(basePackages = "backend.domain.internal")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.internal"},
        mongoTemplateRef = InternalDataMongoConnection.TEMPLATE_NAME
)
@ConfigurationProperties(prefix = "internal.mongodb")
public class InternalDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "internalDataMongoTemplate";

    @Override
    @Bean(name = InternalDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
As you can see, I use @EnableMongoRepositories to define which repository should use which connection.
My repositories are defined just as described in the Spring documentation.
However, here is one example which is located in package backend.repository.customer:
public interface ContactHistoryRepository extends MongoRepository<ContactHistoryEntity, String> {
public ContactHistoryEntity findById(String id);
}
The problem is that with this setup my backend always uses only the primary connection. Interestingly, when I remove the bean name for the MongoTemplate (just @Bean), the backend then uses the secondary connection (InternalDataMongoConnection). This is true for all defined repositories.
My question is: how can I make my backend actually use both connections? Did I perhaps miss setting another parameter/configuration?
Since this is a pretty extensive post, I apologise if I forgot to mention something. Please ask for missing information in the comments.
I found the answer.
In my package structure there was an empty configuration class (from a colleague) with the annotations @Configuration and @EnableMongoRepositories. This triggered the automatic wiring process of Spring Data and therefore led to the problems I reported above.
I simply deleted the class and now it works as it should!

Overridden RabbitSourceConfiguration (app starters) does not work with Spring Cloud Edgware

I'm testing an upgrade of my Spring Cloud DataFlow services from Spring Cloud Dalston.SR4/Spring Boot 1.5.9 to Spring Cloud Edgware/Spring Boot 1.5.9. Some of my services extend source (or sink) components from the app starters. I've found this does not work with Spring Cloud Edgware.
For example, I have overridden org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration and bound my app to my overridden version. This has previously worked with Spring Cloud versions going back almost a year.
With Edgware, I get the following (whether the app is run standalone or within dataflow):
***************************
APPLICATION FAILED TO START
***************************
Description:
Field channels in org.springframework.cloud.stream.app.rabbit.source.RabbitSourceConfiguration required a bean of type 'org.springframework.cloud.stream.messaging.Source' that could not be found.
Action:
Consider defining a bean of type 'org.springframework.cloud.stream.messaging.Source' in your configuration.
I get the same behaviour with the 1.3.0.RELEASE and 1.2.0.RELEASE of spring-cloud-starter-stream-rabbit.
I override RabbitSourceConfiguration so I can set a header mapper on the AmqpInboundChannelAdapter, and also to perform a connectivity test prior to starting up the container.
My subclass is bound to the Spring Boot application with @EnableBinding(HeaderMapperRabbitSourceConfiguration.class). A cut-down version of my subclass is:
public class HeaderMapperRabbitSourceConfiguration extends RabbitSourceConfiguration {

    public HeaderMapperRabbitSourceConfiguration(final MyHealthCheck healthCheck,
                                                 final MyAppConfig config) {
        // ...
    }

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        final AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new NotificationHeaderMapper(config));
        return adapter;
    }

    @Bean
    @Override
    public SimpleMessageListenerContainer container() {
        if (config.performConnectivityCheckOnStartup()) {
            if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Attempting connectivity with ...");
            }
            final Health health = healthCheck.health();
            if (health.getStatus() == Status.DOWN) {
                LOGGER.error("Unable to connect .....");
                throw new UnableToLoginException("Unable to connect ...");
            } else if (LOGGER.isInfoEnabled()) {
                LOGGER.info("Connectivity established with ...");
            }
        }
        return super.container();
    }
}
You really should never do stuff like healthCheck.health(); within a @Bean definition. The application context is not yet fully baked or started; it may, or may not, work depending on the order in which beans are created.
If you want to prevent the app from starting, add a bean that implements SmartLifecycle, put the bean in a late phase (high value) so it's started after everything else, and then put your code in start(); autoStartup must be true (see the sketch below).
In this case, it's being run before the stream infrastructure has created the channel.
Some ordering might have changed from the earlier release but, in any case, performing activity like this in a @Bean definition is dangerous.
You just happened to be lucky before.
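For illustration, a hedged sketch of the suggested SmartLifecycle approach (the class name is an assumption; MyHealthCheck and UnableToLoginException are the poster's own types; in Spring 4.x all of these methods must be implemented explicitly):

@Component
public class ConnectivityCheck implements SmartLifecycle {

    private final MyHealthCheck healthCheck;
    private volatile boolean running;

    public ConnectivityCheck(MyHealthCheck healthCheck) {
        this.healthCheck = healthCheck;
    }

    @Override
    public void start() {
        // runs after the rest of the context, including the stream infrastructure,
        // has started, thanks to the late phase below
        if (healthCheck.health().getStatus() == Status.DOWN) {
            throw new UnableToLoginException("Unable to connect ...");
        }
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public void stop(Runnable callback) {
        stop();
        callback.run();
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public boolean isAutoStartup() {
        return true; // must be true so start() is invoked automatically
    }

    @Override
    public int getPhase() {
        return Integer.MAX_VALUE; // late phase: started after everything else
    }
}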
EDIT
I just noticed your @EnableBinding is wrong; it should be Source.class. I can't see how that would ever have worked - that's what creates the bean for the channels field of type Source.
This works fine for me after updating stream and the binder to 1.3.0.RELEASE...
@Configuration
public class MySource extends RabbitSourceConfiguration {

    @Bean
    @Override
    public AmqpInboundChannelAdapter adapter() {
        AmqpInboundChannelAdapter adapter = super.adapter();
        adapter.setHeaderMapper(new MyMapper());
        return adapter;
    }
}
and
@SpringBootApplication
@EnableBinding(Source.class)
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
If that doesn't work, please edit the question to show your POM.