We are observing a behavior where the variables set in the "before-call" and "after-call" blocks of the MUnit Spy processor are not being recognized in the actual Mule flows. Is this expected behavior?
MUnit message processors can only be used in MUnit tests, not in Mule flows.
I need to write integration tests and mock a reactive Kafka consumer. I see there are ways to do it with blocking Kafka, like using @EmbeddedKafka, but I was not able to find information about the reactive one.
As linked in the comments, the reactor-kafka project itself uses TestContainers for integration tests. If you don't run tests in an environment with Docker, then spring-kafka-test's EmbeddedKafka or the junit-kafka project should still work with reactive Kafka clients, since you really only need the bootstrap.servers property to point at any valid broker server.
Regarding mocks, you don't need an actual broker; that's the point of mocking. Their source code does include mock classes, too.
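For the embedded-broker route, here is a minimal sketch of what a consumer-side integration test could look like, assuming JUnit 4, spring-kafka-test, reactor-kafka, and reactor-test are on the test classpath; the topic name, group id, and payload below are made up for illustration:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.junit.Assert;
    import org.junit.ClassRule;
    import org.junit.Test;
    import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
    import org.springframework.kafka.test.utils.KafkaTestUtils;

    import reactor.kafka.receiver.KafkaReceiver;
    import reactor.kafka.receiver.ReceiverOptions;
    import reactor.test.StepVerifier;

    public class ReactiveConsumerIT {

        // Spins up an in-memory Kafka broker; no Docker required.
        @ClassRule
        public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, "demo-topic");

        @Test
        public void receivesRecordFromEmbeddedBroker() {
            String brokers = embeddedKafka.getEmbeddedKafka().getBrokersAsString();

            // Publish one record with the plain (blocking) producer.
            Map<String, Object> producerProps = KafkaTestUtils.producerProps(embeddedKafka.getEmbeddedKafka());
            try (KafkaProducer<String, String> producer =
                    new KafkaProducer<>(producerProps, new StringSerializer(), new StringSerializer())) {
                producer.send(new ProducerRecord<>("demo-topic", "key", "hello"));
                producer.flush();
            }

            // Point the reactive consumer at the embedded broker purely via bootstrap.servers.
            Map<String, Object> consumerProps = new HashMap<>();
            consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
            consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
            consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

            ReceiverOptions<String, String> options = ReceiverOptions.<String, String>create(consumerProps)
                    .subscription(Collections.singleton("demo-topic"));

            // Consume the record reactively and assert on its value.
            StepVerifier.create(KafkaReceiver.create(options).receive().take(1))
                    .assertNext(record -> Assert.assertEquals("hello", record.value()))
                    .expectComplete()
                    .verify(Duration.ofSeconds(30));
        }
    }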
I am using REST Assured to automate my project. In the same project, I want to do performance testing of the API. How can I achieve this?
If you have an existing set of tests and want to run them in a multithreaded manner, the options are:
use ExecutorService to run them in parallel (see the sketch after this list)
"wrap" them into functions with JMH annotations
use a load testing tool capable of running JUnit tests (or whatever your xUnit framework is), like the JUnit Sampler of Apache JMeter
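As a rough illustration of the first option, here is a minimal sketch that fans existing REST Assured calls out over an ExecutorService and reports a crude average response time; the thread count, iteration count, base URI, and /health endpoint are all made-up values:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import io.restassured.RestAssured;

    public class ParallelSmokeLoad {

        public static void main(String[] args) throws Exception {
            int threads = 10;      // concurrent "virtual users" - illustrative value
            int iterations = 100;  // total requests to fire - illustrative value

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Callable<Long>> tasks = new ArrayList<>();
            for (int i = 0; i < iterations; i++) {
                tasks.add(() -> {
                    long start = System.nanoTime();
                    // Any existing REST Assured call can go here.
                    RestAssured.given()
                            .baseUri("http://localhost:8080")   // hypothetical API under test
                            .when().get("/health")
                            .then().statusCode(200);
                    return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
                });
            }

            // invokeAll blocks until every request has completed.
            long totalMs = 0;
            for (Future<Long> result : pool.invokeAll(tasks)) {
                totalMs += result.get();
            }
            pool.shutdown();
            System.out.println("Average response time: " + (totalMs / iterations) + " ms");
        }
    }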
However, the above approaches will only allow you to kick off your tests in parallel, and you won't be able to collect a lot of metrics, like:
number of active threads
number of hits per second
response time
HTTP-protocol-based metrics like response code, connect time, latency
So it makes sense to consider converting your REST Assured tests into "real" load tests driven by a "normal" load testing tool. The majority of load testing tools provide record-and-replay capability by exposing an HTTP proxy: if you run your REST Assured tests via this proxy, the load testing tool will capture them and convert them into corresponding HTTP requests.
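For instance, REST Assured can be routed through such a recording proxy with a couple of lines; the proxy host/port and the endpoint below are illustrative and depend on the tool you choose (JMeter, for example, exposes its proxy through the HTTP(S) Test Script Recorder):

    import io.restassured.RestAssured;
    import io.restassured.specification.ProxySpecification;

    public class ProxyRecordingExample {
        public static void main(String[] args) {
            // Route all REST Assured traffic through the load testing tool's recording proxy.
            // Host and port are illustrative - use whatever your tool exposes.
            RestAssured.proxy = ProxySpecification.host("localhost").withPort(8888);

            // Now run your existing tests; every request will be captured by the proxy.
            RestAssured.given()
                    .baseUri("http://localhost:8080")   // hypothetical API under test
                    .when().get("/health")
                    .then().statusCode(200);
        }
    }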
I have created an ESB application which fails to execute the flow because the library js-engine-1.1-jdk14.jar is present in Mule Runtime server 3.9.0 Community Edition.
So I want to know how I can remove this jar js-engine-1.1-jdk14.jar from the Mule runtime server in Anypoint Studio so that my flow can execute properly.
The error is:
com.sun.phobos.script.util.ExtendedScriptException:
org.mozilla.javascript.EcmaError: ReferenceError: "load" is not
defined. (#3) in at line number 3
You should never remove a jar from the runtime, at the risk of breaking it. The same goes for overriding or updating them. Instead, at least for Mule 3.x, you have to adapt your application to the libraries provided.
You didn't mention what the error or problem is that makes it fail. If the application is using a different and incompatible version than the one provided, then it needs to be modified to use the same one as the runtime.
In the EE you have the alternative of fine-grained class loading control, though it is not useful for every use case.
The lib can be found inside plugins/org.mule.tooling.server.3.9.0_6.4.0.201710051922\mule\lib\opt. We can remove it from there.
I have been playing with Spring Cloud Contracts. Here is my understanding of the workflow thus far.
On the server Side
Write the contract (in groovy or yaml)
Auto generate the tests (using gradle plugin)
Set up a BaseClass that does the appropriate setup for the Controller
Run the auto generated Tests
Publish the generated stubs jar file to some local repo (which contains a built-in WireMock server, with request/responses)
On the client side
Download the stub jar file
Write tests against this stub jar. Use Stub Runner to verify responses
What I fail to understand is how this is Consumer driven. The contracts seem to originate from the producer; the consumer seems to be passively testing what the producer has published (using the stubs jar file). A producer could accidentally not update the contracts but still make breaking changes. This could lead to tests on the client passing even though they should have failed. Is this true, or have I misunderstood a step where the contracts are created from the consumer side?
Thoughts?
Consumer Driven Contract (CDC) Development is basically a Test Driven Development (TDD) extended to the Producer-Consumer applications. Since it's TDD - tests should come first and then the implementation. And since it's Consumer Driven - the consumer creates tests for the producer.
So let's assume that we have a Producer and a Consumer and some new feature that needs to be implemented. In CDC the workflow would go as follows (you can find more information in the official documentation).
On the Consumer side:
Write the missing implementation for the feature
Clone Producer repository locally
Define the contract locally in the repository of Producer (and auto-generate unit tests for it)
Run the integration tests (on the consumer's side; see the sketch at the end of this answer)
File a pull request
On the Producer side:
Take over the pull request (tests are already generated here by the consumer)
Write the missing implementation (TDD-style)
Deploy your app
Work online
It all makes sense now: since the consumer writes the contracts for the new feature (but in the producer's repository), we have a Consumer Driven approach.
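To make the consumer-side step concrete, here is a minimal sketch of running an integration test on the consumer's side with Spring Cloud Contract Stub Runner, once the producer's stub jar has been published; the Maven coordinates com.example:producer-app, the port 8090, and the /orders/1 endpoint are made up for illustration:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
    import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;
    import org.springframework.http.ResponseEntity;
    import org.springframework.test.context.junit4.SpringRunner;
    import org.springframework.web.client.RestTemplate;

    @RunWith(SpringRunner.class)
    @SpringBootTest
    // Downloads the producer's published stub jar (group:artifact:version:classifier:port are illustrative)
    // and starts an embedded WireMock server with the stubbed request/response pairs on port 8090.
    @AutoConfigureStubRunner(
            ids = "com.example:producer-app:+:stubs:8090",
            stubsMode = StubRunnerProperties.StubsMode.LOCAL)
    public class ConsumerSideContractTest {

        @Test
        public void shouldReturnStubbedResponseDefinedByTheContract() {
            // Calls the stub, not a running producer - the contract is the only source of truth here.
            ResponseEntity<String> response =
                    new RestTemplate().getForEntity("http://localhost:8090/orders/1", String.class);

            assertEquals(200, response.getStatusCodeValue());
        }
    }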
I was trying to use remoting between different Akka versions. I have an application running Akka 2.2.1 on Scala 2.10.2 and an application running Akka 2.0.5 on Scala 2.9.2. The second app uses a library which is not available for Scala 2.10.2, so I cannot simply update that app, nor downgrade the other one. I get an error message saying that the message was not delivered.
To test it, I created a dummy Akka 2.2.1 application sending a String to an Akka 2.0.5 actor which prints it to the console. To avoid the missing sender, the 2.2.1 app sends a message to an actor which routes it to an actor in the other version.
Are there any known compatibility issues between the two versions?
I already took care of the conf files, changing Netty settings and such, so it should only be a matter of versions. The dummy apps work fine if they have the same Akka version.
I can provide the error logs if you need them.
The remote communication protocol of Akka is not (yet) compatible between versions, meaning that what you observe is intentional. We need to wait at least one more major release before we can start stabilizing and then freeze the protocol to allow future interoperability. We recommend decoupling components using REST APIs for now and using remoting only where lockstep updates are possible.