How can the JMSConfiguration now be set on EmbeddedActiveMQ? - activemq-artemis

EmbeddedJMS is deprecated in favor of org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ.
With EmbeddedJMS you can set a JMSConfiguration:
Configuration configuration = new ConfigurationImpl();
...
JMSConfiguration jmsConfig = new JMSConfigurationImpl();
...
EmbeddedJMS jmsServer = new EmbeddedJMS()
        .setConfiguration(configuration)
        .setJmsConfiguration(jmsConfig)
        .start();
How can the JMSConfiguration now be set on EmbeddedActiveMQ?

Server-side configuration should be expressed exclusively in terms of "core" resources (i.e. addresses, queues, & routing-types). See the documentation for details on how JMS queues and JMS topics map to core resources.
Also, there should be no need to configure any JNDI-related details from JMSConfiguration anymore since JNDI lookups are now handled by a client-side-only implementation. See the documentation for more details on that as well.
To be clear, JMS-specific configuration elements (both programmatic and XML) were deprecated after the scope of ActiveMQ Artemis was broadened by adding support for STOMP, AMQP, & MQTT. Like JMS, each of these protocols has its own quirks and conventions. However, we didn't want to add specific XML elements and APIs to support each protocol, and ultimately it no longer made sense to have the same for JMS either.
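For example, here is a minimal sketch of an embedded broker configured purely with core resources (the queue name, acceptor name, and settings are illustrative, not prescriptive):

import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

Configuration configuration = new ConfigurationImpl()
        .setPersistenceEnabled(false)
        .setSecurityEnabled(false)
        .addAcceptorConfiguration("in-vm", "vm://0");

// What used to be a JMS queue is now simply an ANYCAST queue on an
// address of the same name (a JMS topic maps to a MULTICAST address).
configuration.addQueueConfiguration(
        new QueueConfiguration("exampleQueue").setRoutingType(RoutingType.ANYCAST));

EmbeddedActiveMQ server = new EmbeddedActiveMQ();
server.setConfiguration(configuration);
server.start();

A JMS client connecting to this broker can then use "exampleQueue" directly; no server-side JMSConfiguration is needed.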

Difference between Opentracing and W3C Trace Context (with respect to headers)

The W3C trace context defines the traceparent and tracestate headers for enabling distributed tracing.
My questions then are:
How is it different from OpenTracing?
If W3C has already defined usage of these headers, is OpenTracing using some other headers?
OpenTracing, by design, did not define a format for propagating tracing headers. It was the responsibility of the libraries that implemented OpenTracing to provide their own format for serialization/de-serialization of the span context. This was mostly an effort to be as broadly compatible as possible. Generally, you'll find three popular header formats for OpenTracing - Zipkin (B3-*), Jaeger (uber-*), and the OpenTracing 'sample' headers (ot-*) - although some vendors have started to add W3C TraceContext as well.
OpenTelemetry has chosen to adopt W3C TraceContext as one of its core propagation formats (in addition to Zipkin's B3 format), which should alleviate this problem in the future.
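For reference, a W3C traceparent header carries four dash-delimited fields (version, trace-id, parent-id, and trace-flags); the example value from the specification looks like this:

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01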

How to use stubsPerConsumer with restdocs

How do I use the stubsPerConsumer feature when creating a stub from a producer with restdocs?
If this is not supported, is it possible to generate the asciidoc snippets from the groovy DSL contract?
Update
It looks like baseClassMappings is not supported when using spring-cloud-contract with restdocs. Has anyone found a clever way to get this to work using the assembly-plugin (that doesn't require a lot of manual setup for each consumer)?
Currently, it's not supported on the producer side with REST Docs out of the box. We treat REST Docs as a way to do the producer-contract approach. Theoretically, what you could do is create different output snippet directories: instead of, for example, target/snippets, you could use target/snippets/myconsumer. Then with the assembly plugin you would just pick target/snippets. At least that's how it would work in theory.
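A minimal sketch of that idea (class and directory names are hypothetical): one documented base class per consumer, each pointing Spring REST Docs at a consumer-specific snippet directory:

import org.junit.runner.RunWith;
import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

// Snippets produced by tests extending this class land in
// target/snippets/myconsumer instead of the default target/snippets.
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureRestDocs(outputDir = "target/snippets/myconsumer")
public abstract class MyConsumerContractBase {
    // common producer-side test setup for this consumer goes here
}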
As for the contracts and adocs, you can check out this: https://github.com/spring-cloud-samples/spring-cloud-contract-samples/blob/master/beer_contracts/src/test/java/docs/GenerateAdocsFromContractsTests.java . It's a poor man's version of going through all of the contracts and generating adoc documentation from them.

In Domain Driven Design is it okay to call an application service of another bounded context?

I'm reading Implementing Domain-Driven Design by Vaughn Vernon. In one of the examples he shows a Forum being created in the Collaboration bounded context. Before it's created, a Creator value object is instantiated. The information for the Creator object comes from a different bounded context. An HTTP request is made to a REST API to retrieve a User from the Identity and Access bounded context; it is then translated to a Creator object.
private Forum startNewForum(
        Tenant aTenant,
        String aCreatorId,
        String aModeratorId,
        String aSubject,
        String aDescription,
        String anExclusiveOwner) {

    Creator creator =
        this.collaboratorService().creatorFrom(aTenant, aCreatorId);

    Moderator moderator =
        this.collaboratorService().moderatorFrom(aTenant, aModeratorId);

    Forum newForum =
        new Forum(
            aTenant,
            this.forumRepository().nextIdentity(),
            creator,
            moderator,
            aSubject,
            aDescription,
            anExclusiveOwner);

    this.forumRepository().save(newForum);

    return newForum;
}
UserInRoleAdapter makes a call to a REST API in another bounded context and translates that into a Creator object.
public class TranslatingCollaboratorService implements CollaboratorService {

    private UserInRoleAdapter userInRoleAdapter;

    ...

    @Override
    public Creator creatorFrom(Tenant aTenant, String anIdentity) {
        Creator creator =
            this.userInRoleAdapter()
                .toCollaborator(
                    aTenant,
                    anIdentity,
                    "Creator",
                    Creator.class);
        return creator;
    }

    ...
}
JSON retrieved from the REST API is used to instantiate a Creator object.
private <T extends Collaborator> T newCollaborator(
        String aUsername,
        String aFirstName,
        String aLastName,
        String aEmailAddress,
        Class<T> aCollaboratorClass)
        throws Exception {

    Constructor<T> ctor =
        aCollaboratorClass.getConstructor(
            String.class, String.class, String.class);

    T collaborator =
        ctor.newInstance(
            aUsername,
            (aFirstName + " " + aLastName).trim(),
            aEmailAddress);

    return collaborator;
}
Is this call to the REST API equivalent to calling an application service method in the other bounded context directly? Or how is it different? Is calling application services directly in other bounded contexts and then translating the result allowed? Is having a REST API for communicating directly between bounded contexts common?
EDIT: My question is similar to this one, but I just want a bit more information. In that question it is said that it is preferable to use events and keep local copies of data rather than calling an application method in another bounded context. If these application methods are put behind REST interfaces, is it still preferable to use events? Is it always frowned upon to call application methods of other bounded contexts directly, or should it just be done as little as possible?
Bounded Context boundaries have nothing to do with the way messages are propagated. It's up to the architects to decide how to organize the application, whether to decompose it into microservices, etc. It's important to be explicit and strict about the interface; REST or RPC or a direct call, you name it, is just a transport.
Of course, almost by definition, concepts inside a bounded context are more tightly coupled than external things, which usually makes it possible to extract a separate service, but I doubt DDD insists on any specific way. The book in question mentions, for instance, the Open Host Service pattern among others in "Mapping Three Contexts", saying: "We generally think of Open Host Service as a remote procedure call (RPC) API, but it can be implemented using message exchange." That said, this may just be the book author's opinion.
I think that sometimes (and this is also brought up in the book) an Anticorruption Layer (of the Collaboration Context) is used for effective decoupling because, in the book's example, Creator is a Collaboration Context concept while User comes from another service, and only some aspects of it are needed for Collaboration (the book describes this at length, so I won't repeat it).
The only problem I see with direct calls is that making adapters "out of thin air" may feel like overkill to some, and User may end up being used directly. With REST it's psychologically easier to translate the external context's concept into what is consumed in the context in question.
While not directly relevant, the following question and its answers may provide more insight: Rest API and DDD .
Using a concrete message-passing technique is orthogonal to applying the DDD methodology. It depends more on the specific situation and the desired system characteristics, and it may also be a matter of opinion. For example, whether to save (cache?) information from another Bounded Context is up to the software designer. This is not from the DDD books, but how you resolve the CAP theorem for the distributed system you build depends on your requirements; there is no generic answer to guide you.
In another book by the same author, "Domain-Driven Design Distilled" (Chapter 4, "Strategic Design with Context Mapping"), you can find some clarification, to cite: "You may be wondering what specific kind of interface would be supplied to allow you to integrate with a given Bounded Context. That depends on what the team that owns the Bounded Context provides. It could be RPC via SOAP, or RESTful interfaces with resources, or it could be a messaging interface using queues or Publish-Subscribe. In the least favorable of situations you may be forced to use database or file system integration, but let’s hope that doesn’t happen..."

Java: Fast and generic gateway Data to Soap

I want to build a generic gateway from a nested map (generated from a binary data stream) to SOAP clients.
Background: a non-Java application which needs to call SOAP services can't generate JSON or SOAP/XML, but can easily generate a custom protocol (which is under our control).
So a proxy is needed. That proxy should not have to be rewritten on every change of the WSDL or rollout of the next web service.
My plan is:
to have url, port and service-name (url:port/service-name) as "strictly" defined parameters of that proxy,
to have the SOAP action as a "strictly" defined parameter,
to request the WSDL of url:port/service-name?wsdl (possibly cached) and initiate the stub call dynamically (also cached; see the JAX-WS sketch below),
to fill the values present in the nested map into that stub,
to call the SOAP service,
to convert the answer back to that binary protocol.
If some necessary values are missing, it should send the equivalent of a SOAP fault.
All that, of course, with small (affordable) latency, high stability, absolutely minimal deployment downtime (for updates), and under quite some load.
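One building block for the dynamic stub call: plain JAX-WS already supports stub-free invocation once the WSDL is known. A rough sketch (the URL, QNames, and message content below are placeholders, not a definitive implementation):

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

URL wsdl = new URL("http://host:port/service-name?wsdl");
QName serviceName = new QName("http://example.org/ns", "MyService");
QName portName = new QName("http://example.org/ns", "MyPort");

// No generated stub classes: the service model is read from the WSDL.
Service service = Service.create(wsdl, serviceName);
Dispatch<SOAPMessage> dispatch =
        service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

// In the real gateway, the body of this message would be filled from the
// nested map, and the response translated back into the binary protocol.
SOAPMessage requestMessage = MessageFactory.newInstance().createMessage();
SOAPMessage response = dispatch.invoke(requestMessage);

Caching the Service/Dispatch objects per (url, port, service-name) would address the latency concern.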
I see several possibilities:
a) Using an ESB like WSO2 ESB. There I would implement the stream format as a special input-format adapter, convert it to an internal XML stream (at least the JSON adapters seem to work that way) and send it to a mediator. That mediator would try something like "Creating a Dynamic Client" in http://today.java.net/pub/a/today/2006/12/13/invoking-web-services-using-apache-axis2.html and call the SOAP service directly.
b) using MOM middleware like Apache ActiveMQ with Camel,
c) reducing it to something like Apache Karaf and CXF.
I'm a bit lost among all those possibilities, and those are just more or less arbitrary samples of each kind.
Thoughts on a):
minus: it feels a bit odd to have no ESB target, since the mediator would call the given SOAP services directly
minus: I wonder whether internally converting into an XML stream would cost extra time and resources
minus: changing the code requires a restart of WSO2 ESB, as far as I understand it
plus: instead of url, port and service-name I could define symbolic names which are resolved using the ESB -- if that doesn't take extra milliseconds.
For b) I have not yet checked how easy those format conversions are in Camel and whether SOAP service requests fit into message sending and queueing.
I have already done some searching on this topic, but it's really confusing because of the overlapping scopes of quite different products. I thought it was a standard problem, but apparently there are no obvious solutions - at least I didn't find them.
I hope to get a clue as to which of those solutions could lead to trouble or a lot of work (and which to easy success), and I hope that there is some reason in my approach.
Thanks for any qualified comments!
Marco

Avoid namespace conflicts in Java MPI-Bindings

I am using the MPJ API for my current project. The two implementations I am using are MPJ-express and Fast-MPJ. However, since they both implement the same API, namely the MPJ API, I cannot support both implementations simultaneously due to namespace collisions.
Is there any way to wrap two different libraries with the same package and class-names such that both can be supported at the same time in Java or Scala?
So far, the only way I can think of is to move the module into separate projects, but I am not sure this would be the way to go.
If your code uses only a subset of MPI functions (like most of the MPI code I've reviewed), you can write an abstraction layer (traits or even the Cake pattern) which defines the ops you are actually using. You can then implement a concrete adapter for each implementation.
This approach will also work with non-MPI communication layers (think Akka, JGroups, etc.).
As a bonus, you could use the SLF4J approach: the correct implementation is chosen at runtime according to what's actually on the classpath.
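A rough sketch of that combination in Java (all names hypothetical): define the operations you actually use behind one interface, implement one adapter per MPJ implementation in its own module, and pick the adapter at runtime via java.util.ServiceLoader, much like SLF4J discovers its backend:

import java.util.ServiceLoader;

// The subset of message-passing operations the application actually uses.
public interface MessagePassing {
    int rank();
    int size();
    void send(Object message, int destination, int tag);
    Object receive(int source, int tag);
}

// In a separate file: each adapter module (one compiled against MPJ-express,
// one against Fast-MPJ) implements MessagePassing and registers itself in
// META-INF/services/MessagePassing.
final class MessagePassingProvider {
    static MessagePassing load() {
        // Take the first implementation found on the classpath.
        for (MessagePassing impl : ServiceLoader.load(MessagePassing.class)) {
            return impl;
        }
        throw new IllegalStateException("No MessagePassing adapter on the classpath");
    }
}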