The W3C Trace Context specification defines the traceparent and tracestate headers for enabling distributed tracing.
My questions are:
How is it different from OpenTracing?
If W3C has already defined usage of these headers, is OpenTracing using some other headers?
OpenTracing, by design, did not define a format for propagating tracing headers. It was the responsibility of the libraries that implemented OpenTracing to provide their own format for serialization/deserialization of the span context. This was mostly an effort to be as broadly compatible as possible. Generally, you'll find three popular header formats for OpenTracing - Zipkin (B3-*), Jaeger (uber-*), and the OpenTracing 'sample' headers (ot-*) - although some vendors have started to add W3C Trace Context as well.
OpenTelemetry has chosen to adopt W3C Trace Context as one of its core propagation formats (in addition to Zipkin's B3 format), which should alleviate this problem in the future.
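For example, with the OpenTelemetry Java SDK you can wire up both propagators explicitly. A rough sketch, assuming the opentelemetry-sdk and opentelemetry-extension-trace-propagators artifacts are on the classpath:

import io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.context.propagation.TextMapPropagator;
import io.opentelemetry.extension.trace.propagation.B3Propagator;
import io.opentelemetry.sdk.OpenTelemetrySdk;

// Emit and accept both W3C traceparent/tracestate and Zipkin B3 headers.
OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
        .setPropagators(ContextPropagators.create(
                TextMapPropagator.composite(
                        W3CTraceContextPropagator.getInstance(),
                        B3Propagator.injectingSingleHeader())))
        .build();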
EmbeddedJMS is deprecated in favor of org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ.
With EmbeddedJMS you can set a JMSConfiguration.
Configuration configuration = new ConfigurationImpl();
...
JMSConfiguration jmsConfig = new JMSConfigurationImpl();
...
EmbeddedJMS jmsServer = new EmbeddedJMS().setConfiguration(configuration).setJmsConfiguration(jmsConfig).start();
How can the JMSConfiguration now be set on EmbeddedActiveMQ?
Server-side configuration should be expressed exclusively in terms of "core" resources (i.e. addresses, queues, & routing-types). See the documentation for details on how JMS queues and JMS topics map to core resources.
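For example, where a JMS queue used to be declared via JMSConfiguration, with EmbeddedActiveMQ you configure an address plus an ANYCAST queue of the same name directly on the core Configuration. A rough, untested sketch (exact APIs vary between Artemis releases; older 2.x releases use CoreQueueConfiguration instead of QueueConfiguration):

import org.apache.activemq.artemis.api.core.QueueConfiguration;
import org.apache.activemq.artemis.api.core.RoutingType;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

Configuration configuration = new ConfigurationImpl();
configuration.setPersistenceEnabled(false);
configuration.setSecurityEnabled(false);
configuration.addAcceptorConfiguration("in-vm", "vm://0");

// A JMS queue named "orders" maps to an address "orders" with one ANYCAST queue "orders".
configuration.addQueueConfiguration(
        new QueueConfiguration("orders")
                .setAddress("orders")
                .setRoutingType(RoutingType.ANYCAST));

EmbeddedActiveMQ server = new EmbeddedActiveMQ();
server.setConfiguration(configuration);
server.start();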
Also, there should be no need to configure any JNDI-related details from JMSConfiguration anymore, since JNDI look-ups are now handled by a client-side-only implementation. See the documentation for more details on that as well.
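On the client side the JNDI objects are defined entirely in the client's JNDI properties (or programmatically, as in this rough sketch); nothing needs to be registered on the broker. The queue and connection factory names here are just examples:

import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
props.put("connectionFactory.ConnectionFactory", "tcp://localhost:61616");
props.put("queue.queue/orders", "orders"); // JNDI name "queue/orders" -> core queue "orders"

InitialContext ctx = new InitialContext(props);
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
Queue orders = (Queue) ctx.lookup("queue/orders");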
To be clear, JMS-specific configuration elements (both programmatic and XML) were deprecated after the scope of ActiveMQ Artemis was broadened by adding support for STOMP, AMQP, & MQTT. Like JMS, each of these protocols has its own quirks and conventions. However, we didn't want to add specific XML elements and APIs to support each protocol, and ultimately it no longer made sense to have the same for JMS either.
This is not a Java-specific question, but let's use an example in Java: it is standard practice in the Java world to add xmime:expectedContentTypes="*/*" to base64 elements to enable MTOM processing on the server side - it results in the @XmlMimeType annotation, the use of DataHandlers instead of byte arrays, etc. While this description is of course greatly simplified, xmime:expectedContentTypes="*/*" is usually recognized as 'MTOM ready' by developers (and, more importantly, by the implementing libraries) when seen in the schema. From what I've gathered from the examples, the situation is the same in the C# world.
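For illustration, the generated JAXB class typically ends up looking roughly like this (a sketch; the class and element names are made up):

import javax.activation.DataHandler;
import javax.xml.bind.annotation.XmlMimeType;

// Roughly what xjc/wsimport generates for a base64Binary element carrying
// xmime:expectedContentTypes.
public class UploadRequest {

    @XmlMimeType("*/*")
    protected DataHandler document; // a DataHandler instead of byte[]

    public DataHandler getDocument() { return document; }
    public void setDocument(DataHandler value) { this.document = value; }
}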
However, this makes no sense to me - the attribute specifies what kind of data we might actually expect in the XML, not that it can be used together with MTOM. I have also not found any direct connection between the expected content type and MTOM in any RFC or similar document for SOAP 1.1.
My question can be phrased in two ways:
How does the service make clear that it accepts / serves binary data as MTOM attachments in the request / response?
How does the client correctly recognize that the binary data can be sent / obtained by using MTOM attachments for the given service?
It seems you are slightly confused about attachments, SOAP with Attachments, and MTOM.
SOAP Messages with Attachments (SwA) was first introduced in December 2000 as a W3C note (not a specification) and defined an extension to the transport binding mechanisms defined in SOAP 1.1. In particular, this note defined:
a binding for a SOAP 1.1 message to be carried within a MIME multipart/related message in such a way that the processing rules for the SOAP 1.1 message are preserved. The MIME multipart mechanism for encapsulation of compound documents can be used to bundle entities related to the SOAP 1.1 message such as attachments.
In simple terms, it defined a mechanism for multiple documents (attachments) to be associated with a SOAP message in their native formats, using a multipart MIME structure for transport. This was achieved using a combination of "Content-Location" and "Content-ID" headers, along with a set of rules for interpreting the URIs referred to by "Content-Location" headers.
A SOAP message in this format is encapsulated as a multipart/MIME package: the root part carries the SOAP 1.1 envelope, and each attachment travels as a separate MIME part referenced from the envelope.
This is also the format that you might have worked with when you used SAAJ, but is not recommended anymore, unless you are working with legacy code. The W3C note was later revised to a "feature" level in 2004 (along with SOAP 1.2) and was eventually superseded by SOAP MTOM mechanism.
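For reference, this is roughly how such an attachment is built with SAAJ (a hedged sketch; the payload and Content-ID are made up):

import javax.xml.soap.AttachmentPart;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;

// Legacy SOAP-with-Attachments style: the attachment travels as its own MIME part.
SOAPMessage message = MessageFactory.newInstance().createMessage();
AttachmentPart attachment = message.createAttachmentPart("example payload", "text/plain");
attachment.setContentId("<invoice-1>");
message.addAttachmentPart(attachment);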
SOAP Message Transmission Optimization Mechanism (MTOM) is officially defined as not one, but three separate features that work together to deliver the functionality:
"Abstract SOAP Transmission Optimization Feature" describes an abstract feature for optimizing the transmission and/or wire format of the SOAP message by selectively encoding portions of the message, while still presenting an XML infoset to the SOAP application.
"An optimized MIME Multipart/Related serialization of SOAP Messages" describes an Optimized MIME Multipart/Related Serialization of SOAP Messages implementing the Abstract SOAP Transmission Optimization Feature in a binding independent way.
"HTTP SOAP Transmission Optimization Feature" describes an implementation of the Abstract Transport Optimization Feature for the SOAP 1.2 HTTP binding.
If you read the second document, you will realize that "attachments" have been replaced with XML-binary optimized "packages", or XOP (XML-binary Optimized Packaging).
A XOP package is created by placing a serialization of the XML Infoset inside of an extensible packaging format (such a MIME Multipart/Related, see [RFC 2387]). Then, selected portions of its content that are base64-encoded binary data are extracted and re-encoded (i.e., the data is decoded from base64) and placed into the package. The locations of those selected portions are marked in the XML with a special element that links to the packaged data using URIs.
In simple terms, this means that instead of encapsulating the data as an "attachment" in a multipart/MIME message, the binary data is placed in a separate package part and referred to from the XML by a "pointer" or link.
Now that we have the background, let us come back to your questions.
How does the service make clear that it accepts / serves binary data as MTOM attachments in the request / response?
It does not. There is no concept of an attachment with MTOM, and thus a server can't declare that it accepts attachments.
How does the client correctly recognize that the binary data can be sent / obtained by using MTOM attachments for the given service?
Like I said above, there is no way for a client to do this as "attachments" are not supported.
Having said that, there is yet another W3C spec on XML media types that states:
The xmime:contentType attribute information item allows Web services applications to optimize the handling of the binary data defined by a binary element information item and should be considered as meta-data. The presence of the xmime:contentType attribute does not change the value of the element content.
When you enable MTOM using xmime:expectedContentTypes="application/octet-stream" (a wildcard such as */* should not be used), the generated WSDL/schema will have an entry like this:
<element name="myImage" type="xsd:base64Binary" xmime:expectedContentTypes="application/octet-stream"/>
This is the server's way of declaring that it can receive an XML-binary optimized package (which can be broken down into a multipart MIME message).
When the client sees the above, it knows the server can accept XML-binary optimized packages and generates appropriate HTTP requests as defined in "Identifying XOP Documents":
XOP Documents, when used in MIME-like systems, are identified with the "application/xop+xml" media type, with the required "type" parameter conveying the original XML serialization's associated content type.
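In JAX-WS terms, whether XOP/MTOM is actually used on the wire is typically switched on explicitly on both sides (e.g. via the @MTOM annotation on the endpoint). A rough client-side sketch, where MyService and MyPort stand in for your generated stubs:

import javax.xml.ws.BindingProvider;
import javax.xml.ws.soap.MTOMFeature;
import javax.xml.ws.soap.SOAPBinding;

// Ask for MTOM when creating the port from the generated service class.
MyService service = new MyService();
MyPort port = service.getMyPort(new MTOMFeature());

// Or toggle it afterwards on the SOAP binding.
SOAPBinding binding = (SOAPBinding) ((BindingProvider) port).getBinding();
binding.setMTOMEnabled(true);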
Hope that helps!
I want to build a generic gateway from a nested map (generated from a binary data stream) to SOAP clients.
Background: a non-Java application which needs to call SOAP services can't generate JSON or SOAP/XML, but can easily generate a custom protocol (which is under our control).
So a proxy is needed. That proxy should not have to be rewritten on every change of the WSDL or rollout of the next web service.
My plan is:
to have URL, port and service name (url:port/service-name) as strictly defined parameters of that proxy,
to have the SOAP action as a strictly defined parameter,
to request the WSDL from url:port/service-name?wsdl (possibly cached) and initiate the stub call dynamically (also cached),
to fill the values present in the nested map into that stub,
to call the SOAP service,
to convert the answer back to the binary protocol.
If some necessary values are missing, it should send the equivalent of a SOAP fault.
All that, of course, with low (affordable) latency, high stability, absolutely minimal deployment downtime (for updates) and under quite some load.
I see several possibilities:
a) Using an ESB like WSO2 ESB. There I would implement the stream format as a special input format adapter, convert it to an internal XML stream (at least the JSON adapters seem to work that way) and send it to a mediator. That mediator would try something like the "Creating a Dynamic Client" approach from http://today.java.net/pub/a/today/2006/12/13/invoking-web-services-using-apache-axis2.html and call the SOAP service directly (see the sketch after this list).
b) Using MOM middleware like Apache ActiveMQ with Camel.
c) Reducing it to something like Apache Karaf and CXF.
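For illustration, the dynamic call I have in mind would look roughly like this with CXF's dynamic client factory (an untested sketch; the URL, operation name and arguments are placeholders):

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.jaxws.endpoint.dynamic.JaxWsDynamicClientFactory;

JaxWsDynamicClientFactory factory = JaxWsDynamicClientFactory.newInstance();
// Build (and cache) a client from the WSDL at runtime.
Client client = factory.createClient("http://host:port/service-name?wsdl");
// Fill the operation arguments from the nested map, then invoke.
Object[] response = client.invoke("someOperation", "arg1", 42);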
I'm a bit lost between all those possibilities, and those are just more or less arbitrary samples of each kind.
Thoughts on a):
minus: it feels a bit odd to have no ESB target, since the mediator would directly issue the given SOAP requests
minus: I wonder whether internally converting into an XML stream would cost extra time and resources
minus: as far as I understand it, changing the code requires a restart of WSO2 ESB
plus: instead of URL, port and service name I could define symbolic names which are resolved by the ESB -- if that doesn't cost extra milliseconds.
For b), I have not yet checked how easy those format conversions are in Camel and whether SOAP service requests fit into message sending and queueing.
I have already done some searching on this topic, but it's really confusing because of the overlapping scopes of quite different products. I thought this was a standard problem, but apparently there are no obvious solutions - at least I didn't find them.
I hope to get a clue as to which of those solutions could lead to trouble or a lot of work (and which to easy success), and I hope that there is some reason in my approach.
Thanks for any qualified comments!
Marco
I am using RESTEasy and GWT. For certain reasons (many others have similar motivations), I am not using GWT-RPC for some of the functionality of the software I am working on.
I need to pass POJOs between GWT client and server by marshalling/demarshalling the POJOs into/from JSON.
OK, easier said than done because I need the POJO-JSON converters on both sides to match.
Q1. Is there a standard POJO notation in JSON? Is there an IETF RFC, or an ISO or ECMA standard, that specifies the format of POJO notation in JSON? Or is it a free-for-all, libertarian anarchy?
Q2. Do Jettison and Jackson (when used with JAXB) and Autobeans produce the same JSON for POJOs?
Q3. This is the most crucial question. You can ignore the other questions above, but you MUST answer this one. Give me a combination of a server-side and a GWT client-side JSONizer/deJSONizer that work together. For example, can I use AutoBeans on the client side and JAXB-Jettison on the server side and expect the JSONized POJO notation to be the same?
Q4. Is it possible to use JAXB-Jettison or JAXB-Jackson on the GWT client side by including the Java source code for JAXB and Jettison/Jackson in the whatever.gwt.xml file? Are there parts of the JAXB or Jettison/Jackson source code that, for example, depend on reflection or are not serializable, and would therefore make it impossible to use JAXB + Jettison/Jackson in GWT client code? If it is possible, please explain how.
~
I should clarify concerning Q1:
I am not asking about the RFC for JSON. I am asking about a JSON POJO format. When a POJO is converted to JSON, everybody does it their own way - so I am thinking that there should be an RFC to standardise the way and format in which a POJO is converted to JSON. Is there a standard or not? Please do not just quote me the RFC for JSON!
~
What about BadgerFish on the GWT client? Someone also needs to tell me about GWT client-server matched JSON-RPC.
There is no standard for the mapping, but I would claim there is an obvious, simple mapping, given the simplicity of the JSON format and the de facto standard of Java Beans (i.e. the mapping of get/set methods to logical property names). One of the few exceptions is Jettison.
Jettison is not so much a JSON/POJO library as a JSON<->XML library: it converts JSON to XML API calls (and vice versa) to allow the use of XML tools, such as JAXB for XML data binding, on JSON. But the cost is that the JSON it produces and consumes carries extra complexity that is only needed to work with XML APIs. This is what makes it non-standard compared to the straightforward bindings used by Jackson, GSON, Flexjson and other "native" JSON libraries.
I would recommend not using Jettison unless you really, really must for some reason. Not even if you produce both XML and JSON -- usually you are better off mapping JSON to/from POJOs using JSON tools, and XML separately to/from POJOs (using JAXB etc.).
Jettison was intended to bridge the gap between (then) more mature XML tools and newish JSON format. But there isn't much benefit nowadays when there are dozens of mature JSON libraries available.
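To make that "obvious simple mapping" concrete, a minimal sketch with Jackson (assuming a current jackson-databind artifact; the bean is made up):

import com.fasterxml.jackson.databind.ObjectMapper;

// A plain bean with getters - the de facto Java Beans convention.
public class Person {
    private String name = "Ada";
    private int age = 36;
    public String getName() { return name; }
    public int getAge() { return age; }
}

// Jackson's default binding produces the straightforward form: {"name":"Ada","age":36}
// A BadgerFish-style Jettison mapping would instead wrap values, e.g. {"Person":{"name":{"$":"Ada"},...}}
String json = new ObjectMapper().writeValueAsString(new Person());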
JSON is just a subset of JavaScript; it was "invented" by Douglas Crockford. Here is the RFC for application/json: http://www.ietf.org/rfc/rfc4627.txt?number=4627. So any of your server-side solutions should create the same result.
We are using RestyGWT (http://restygwt.fusesource.org/) on the client side and it works like a charm. Its JSON encoding style is compatible with the default Jackson data binding, so it should work with Jackson as well.
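For example, a RestyGWT client interface looks roughly like this (a sketch; the path and the Person POJO are illustrative, and Person would be a plain bean shared with the server):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.fusesource.restygwt.client.MethodCallback;
import org.fusesource.restygwt.client.RestService;

// RestyGWT generates the Jackson-compatible JSON codec for Person at compile time.
@Path("/api/person")
public interface PersonService extends RestService {
    @GET
    void getPerson(MethodCallback<Person> callback);
}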
I was wondering if someone could explain the differences in the SOAP request/response of a web service with the following WSDL binding style/use combinations:
Document/literal
RPC/literal
wrapped document style
Thanks in advance
This article from IBM DeveloperWorks [Which style of WSDL should I use?] has an excellent explanation of the differences between these binding styles. In a nutshell, the only differences are the values of the SOAP binding "style" attribute ("rpc" or "document") in the WSDL file and the way the message arguments and return values are defined (and thus how they appear in the SOAP messages themselves):
[Note the reordering of items from the question to emphasize relationships]
RPC/literal - WSDL message element defines arguments and return value(s) of operations.
PROS: simple WSDL, operation name appears in SOAP message, WS-I compliant.
CONS: difficult to validate since arguments are defined in WSDL not XSD.
Document/literal - WSDL message parts are references to elements defined in XML Schema.
PROS: easily validated with XSD, WS-I compliant but allows breakage.
CONS: complicated WSDL, SOAP message does not contain operation name.
Document/literal wrapped (or "wrapped document style") - the WSDL input and output messages each have a single part that refers to an XSD element, and the input element has the same local name as the WSDL operation.
PROS: easily validated, SOAP message contains operation name, WS-I compliant.
CONS: most complicated WSDL (not an official style but a convention).
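In JAX-WS these map to the @SOAPBinding style/use/parameterStyle settings. A rough sketch making the document/literal wrapped default explicit (the service and operation names are made up):

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;

// Document/literal wrapped is the JAX-WS default; changing style to RPC or
// parameterStyle to BARE changes how parameters appear in the WSDL and SOAP body.
@WebService
@SOAPBinding(style = SOAPBinding.Style.DOCUMENT,
             use = SOAPBinding.Use.LITERAL,
             parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
public class OrderService {

    @WebMethod
    public double getPrice(@WebParam(name = "itemId") String itemId) {
        return 0.0; // stub implementation
    }
}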
In my experience, #3 (Document/literal Wrapped) is very common in large enterprise projects because it is both Microsoft and OSS friendly and it is well suited for a top-down development model (e.g. WSDL/XSD first, then generate code artifacts). Microsoft invented it [1] and popular Java/OSS tools (Axis2, JAX-WS) support it explicitly.
The "real world" difference likely comes down to which styles are supported — and how well — by the tools of your choice.