I have been playing with Spring Cloud Contract. Here is my understanding of the workflow so far.
On the server side
Write the contract (in Groovy or YAML)
Auto-generate the tests (using the Gradle plugin)
Set up a base class that does the appropriate setup for the controller (see the sketch after this list)
Run the auto-generated tests
Publish the generated stubs jar to some local repository (the jar contains the WireMock request/response mappings)
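For the base class step, a minimal sketch might look like the following; UserController and InMemoryUserService are hypothetical placeholders for whatever controller and collaborators the contracts exercise, and the generated tests would extend this class:

```java
package com.example.producer;

import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.jupiter.api.BeforeEach;

// Hypothetical base class for the auto-generated contract tests.
public abstract class ContractVerifierBase {

    @BeforeEach
    void setup() {
        // Wire the controller (and any mocked collaborators) that the generated
        // tests will call through RestAssured's MockMvc support.
        RestAssuredMockMvc.standaloneSetup(new UserController(new InMemoryUserService()));
    }
}
```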
On the client side
Download the stub jar file
Write tests against this stubs jar, using Stub Runner to verify responses (see the sketch below)
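A minimal sketch of such a consumer-side test, assuming hypothetical producer coordinates, endpoint, and port:

```java
package com.example.consumer;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
import org.springframework.web.client.RestTemplate;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
@AutoConfigureStubRunner(
        ids = "com.example:producer-service:+:stubs:8090",  // placeholder coordinates
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)    // resolve stubs from the local repository
class UserClientContractTest {

    @Test
    void fetchesUserFromStubbedProducer() {
        // Stub Runner has started a WireMock server on port 8090 serving the
        // request/response pairs generated from the producer's contracts.
        String body = new RestTemplate()
                .getForObject("http://localhost:8090/users/1", String.class);

        // A real test would assert on the contracted payload.
        assertThat(body).isNotNull();
    }
}
```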
What I fail to understand is how this is consumer driven. The contracts seem to originate from the producer; the consumer seems to be passively testing what the producer has published (using the stubs jar). A producer could accidentally make breaking changes without updating the contracts, which could lead to tests on the client passing even though they should have failed. Is this true, or have I misunderstood a step where the contracts are created from the consumer side?
Thoughts?
Consumer Driven Contract (CDC) development is basically Test Driven Development (TDD) extended to producer-consumer applications. Since it's TDD, tests come first and then the implementation. And since it's consumer driven, the consumer creates the tests for the producer.
So let's assume that we have a producer, a consumer, and some new feature that needs to be implemented. In CDC, the workflow goes as follows (you can find more information in the official documentation).
On the Consumer side:
Write the missing implementation for the feature
Clone Producer repository locally
Define the contract locally in the repository of Producer (and auto-generate unit tests for it)
Run the integration tests (on the consumer's side)
File a pull request
On the Producer side:
Take over the pull request (the tests have already been generated here from the consumer's contract)
Write the missing implementation (TDD-style)
Deploy your app
Work online
It all makes sense now: since the consumer writes the contracts for the new feature (albeit in the producer's repository), we have a consumer-driven approach.
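To make the "define the contract in the producer's repository" step concrete, here is a rough sketch of what the consumer could add there. Contracts are usually written in Groovy or YAML; this sketch uses Spring Cloud Contract's Java DSL so all examples stay in one language, and the endpoint, class name, and payload are placeholders for the new feature:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.function.Supplier;

import org.springframework.cloud.contract.spec.Contract;

// Placed by the consumer in the producer's repository; the verifier plugin
// generates a producer-side test and a WireMock stub from it.
class shouldReturnUserById implements Supplier<Collection<Contract>> {

    @Override
    public Collection<Contract> get() {
        return Collections.singletonList(Contract.make(c -> {
            c.description("should return user 1");
            c.request(r -> {
                r.method("GET");
                r.url("/users/1");
            });
            c.response(r -> {
                r.status(200);
                r.headers(h -> h.header("Content-Type", "application/json"));
                r.body("{\"id\": 1, \"name\": \"jane\"}");
            });
        }));
    }
}
```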
Related
I need to write integration tests and mock a reactive Kafka consumer. I see there are ways to do it with blocking Kafka, like using @EmbeddedKafka, but I was not able to find information about the reactive variant.
As linked in the comments, the reactor-kafka project itself uses Testcontainers for integration tests. If you don't run tests in an environment with Docker, then spring-kafka-test's EmbeddedKafka or the junit-kafka project should still work with reactive Kafka clients, since you really only need the bootstrap.servers property to point at any valid broker.
Regarding mocks, you don't need an actual broker; that's the point of mocking. Their source code does include mock classes, too.
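To illustrate the "any valid broker works" point, here is a rough sketch assuming reactor-kafka and the Testcontainers Kafka module are on the test classpath; the topic name, group id, and image tag are arbitrary:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

import static org.assertj.core.api.Assertions.assertThat;

@Testcontainers
class ReactiveConsumerIntegrationTest {

    @Container
    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    @Test
    void consumesFromRealBroker() {
        // Publish one record so the reactive receiver has something to consume.
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("orders", "o-1", "created"));
        }

        // The only thing the reactive client really needs is a valid broker address.
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "it-group");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ReceiverOptions<String, String> options = ReceiverOptions.<String, String>create(consumerProps)
                .subscription(Collections.singleton("orders"));

        // Block for the first record or time out.
        ReceiverRecord<String, String> record = KafkaReceiver.create(options)
                .receive()
                .blockFirst(Duration.ofSeconds(30));

        assertThat(record.value()).isEqualTo("created");
    }
}
```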
You know the dependencies of a given microservice from its configuration (if you use centralized configuration).
According to the image, this .yml configuration file (for ms-port) declares multiple dependencies on other microservices.
In the example, urlRecapcha points to the microservice ms-recaptcha.
But how is it possible to know which microservices consume a certain microservice?
How does a dependency know its consumers?
How does the microservice ms-recaptcha know that ms-port is its consumer?
Is it possible?
You can use the Consumer Driven Contract approach (e.g. via Spring Cloud Contract - I'm a maintainer of that project, which is why I mention it, but you can use others like Pact), where you store the information about which application uses which, and you can generate tests out of it. You can check this part of the Spring Cloud Contract documentation, https://docs.spring.io/spring-cloud-contract/docs/3.1.4/reference/html/howto.html#how-to-common-repo-with-contracts, which describes how to create a centralized repository that contains all the contracts for your company.
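In that setup, the "who consumes whom" information comes from the layout of the contracts repository itself: each producer gets a folder per consumer, so ms-recaptcha can see exactly which applications depend on it. A rough illustration (all names and file names are placeholders):

```
contracts-repo/
└── com/example/ms-recaptcha/        # the producer
    ├── ms-port/                     # one folder per consumer
    │   └── shouldVerifyCaptcha.yml
    └── ms-login/
        └── shouldVerifyToken.yml
```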
I have a Spring Cloud Stream application for which I need to write an integration test (specifically, using Cucumber). The application communicates with other services through a Kafka message broker. From what I know, I could make this work using either Kafka Testcontainers or the Spring-provided embedded Kafka, but I don't know which one would be the better solution: is there anything the Testcontainer can do that the embedded broker can't, or the other way around? (Use cases or examples would be appreciated!)
P.S. This integration test should be able to run in a CI/CD pipeline.
It is called embedded for a reason: it can really only be accessed from the process that spawned it. With Testcontainers you can reuse an existing container and access it from another process, but that's probably too exotic.
I guess with a properly configured Testcontainer you can get as close as possible to the production environment you'd deploy your solution to. The embedded Kafka might be limited in some areas, e.g. SSL configuration.
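For the CI/CD angle, a rough sketch of wiring a Testcontainers broker into a Spring Cloud Stream / Spring Boot test; the binder only needs the broker address, and the image tag is a placeholder:

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
@Testcontainers
class StreamApplicationIT {

    @Container
    static KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    // Point Spring Kafka and the Kafka binder at the container before the context starts.
    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
        registry.add("spring.cloud.stream.kafka.binder.brokers", kafka::getBootstrapServers);
    }

    @Test
    void contextStartsAgainstRealBroker() {
        // Cucumber step definitions (or plain assertions) can now produce to and
        // consume from the container exactly as they would against a real cluster.
    }
}
```

With embedded Kafka the wiring is analogous: you point the same bootstrap-server properties at the broker list exposed by @EmbeddedKafka instead of at a container.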
I want to write tests for a Spring Kafka producer and consumer. I have tried multiple approaches:
EmbeddedKafka annotation
EmbeddedKafkaRule
EmbeddedKafkaBroker
etc...
Every time I get one error or another, and the examples posted on GitHub don't seem to run at all. I have checked the Spring Kafka versions for compatibility as well.
Can someone share an example code base that was written recently and has been seen running successfully?
There are hundreds of tests in the framework itself.
This is probably the most extensive one...
https://github.com/spring-projects/spring-kafka/blob/1b9a9451feea7cca16903f1c990c74c6be9b8ffb/spring-kafka/src/test/java/org/springframework/kafka/annotation/EnableKafkaIntegrationTests.java#L164-L176
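Those tests exercise the whole framework; as a smaller starting point, here is a minimal sketch of a round-trip test with @EmbeddedKafka, assuming spring-kafka-test is on the test classpath with a compatible spring-kafka version; topic and group names are arbitrary:

```java
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.junit.jupiter.api.Test;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.condition.EmbeddedKafkaCondition;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.kafka.test.utils.KafkaTestUtils;

import static org.assertj.core.api.Assertions.assertThat;

@EmbeddedKafka(partitions = 1, topics = "demo-topic")
class ProducerConsumerEmbeddedKafkaTest {

    @Test
    void roundTripsARecord() {
        EmbeddedKafkaBroker broker = EmbeddedKafkaCondition.getBroker();

        // Producer side: a KafkaTemplate pointed at the embedded broker.
        Map<String, Object> producerProps = KafkaTestUtils.producerProps(broker);
        KafkaTemplate<Integer, String> template =
                new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerProps));
        template.send("demo-topic", "hello");
        template.flush();

        // Consumer side: read the record back with the test utilities.
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("test-group", "false", broker);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        try (Consumer<Integer, String> consumer =
                     new DefaultKafkaConsumerFactory<Integer, String>(consumerProps).createConsumer()) {
            broker.consumeFromAnEmbeddedTopic(consumer, "demo-topic");
            ConsumerRecord<Integer, String> record =
                    KafkaTestUtils.getSingleRecord(consumer, "demo-topic");
            assertThat(record.value()).isEqualTo("hello");
        }
    }
}
```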
Currently, at my company, we are migrating from Kafka 0.8 to 0.11; the broker migration steps are clearly stated in the Kafka documentation here.
What I am stuck on is upgrading the Kafka clients (producers, consumers, Spark Streaming). I can't find any documentation or articles clearly listing the required changes or steps to follow to upgrade the clients; all I found is the Javadoc for the producer client.
What I did so far is change the Kafka client version in my Gradle build to kafka-clients-0.11.0.0, and everything went fine from a compilation point of view, with no code changes at all.
What I need help with is: are there any expected problems I should take care of, or any pointers for client changes other than the kafka-clients version?
I went through lots of experiments to get this done.
For the consumers and producers, I just used the 0.11.0 Kafka consumer and producer clients.
The tricky part was replacing Spark Streaming: the latest Spark Streaming version only supports up to Kafka 0.10.x, which doesn't contain any updates related to the new broker.
What I recommend here: if you are about to write an application from scratch and your main goal is real-time streaming, go for the Kafka Streams API, it is just AWESOME! If you already have a Spark Streaming app (which was my case), you have to judge which matters more: staying on the 0.10.x broker so you can keep Spark Streaming (whose Kafka integration was experimental at the time, by the way), or moving on.
The benefits of having the streaming inside Kafka rather than Spark are the following:
Kafka Streams is a normal jar that can be embedded in any Java application, so you don't have to care that much about deployment and environment (see the sketch after this list)
Auto-scaling is easy with Kafka Streams using any scale set provided by a cloud service provider, unlike scaling an HDP cluster.
Monitoring with something like Prometheus is much easier.
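As a sketch of that first point, here is a tiny topology embedded in a plain Java application using the current StreamsBuilder API (on very old versions such as 0.11 the builder class name differs); topic names, the application id, and the broker address are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStreamsApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app");      // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read from one topic, transform, write to another; the only infrastructure
        // the application needs is the Kafka brokers themselves.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```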