Can an API in SOAP/WSDL be kept backwards compatible easily? - soap

When using an IPC library, it is important that client and server can still communicate even when their versions of the API differ. As I'm considering using SOAP for our client/server application, I wonder whether a SOAP/WSDL solution can deal with API changes well.
For example:
Adding parameters to existing functions
Adding variables to existing structs that are used in existing functions
Removing functions
Removing parameters from existing functions
Removing variables from existing structs that are used in existing functions
Changing the type of a parameter used in an existing function
Changing the order of parameters in an existing function
Changing the order of composite parts in an existing struct
Renaming existing functions
Renaming parameters
Note: by "struct" I mean a composite type

As far as I know, the SOAP/WSDL standard itself does not address this. But tools exist to cope with such issues. For instance, in Glassfish you can specify an XSL stylesheet to transform the request/response of a web service. Other solutions, such as the Oracle SOA Suite, offer much more elaborate tools to manage web service versioning and component integration. Messages can be routed automatically to different versions of a web service and/or transformed. You will need to check what your target infrastructure offers.
EDIT:
XML and XSD are more flexible regarding schema evolution than types and serialization in object-oriented languages. Some changes can be made backward compatible simply by declaring elements as optional, e.g.
Adding parameters to existing functions - if a parameter is optional, you get a null value if the client doesn't send it
Adding variables to existing structs that are used in existing functions - if the value is optional, you get null if the client doesn't provide it
Removing functions - no magic here
Removing parameters from existing functions - parameters sent by the client will be superfluous according to the new definition and will be omitted
Removing variables from existing structs that are used in existing functions - I don't know in this case
Changing the type of a parameter used in an existing function - that depends on the change. For a simple type the serialization/deserialization may still work, e.g. String to int.
Note that I'm not 100% sure of the list. But a few tests can show you what works and what doesn't. The point is that XML is sent over the wire, so it gives some flexibility.
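To illustrate the "optional element" idea, here is a minimal JAX-WS/JAXB sketch. The OrderRequest type, its fields and the service are made-up names used only for illustration, not part of any standard:

import javax.jws.WebService;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical request type: "comment" was added in a later version of the
// contract. Because it maps to an optional element (minOccurs="0" in the
// generated XSD), old clients that never send it stay compatible and the
// server simply sees null.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class OrderRequest {
    @XmlElement(required = true)
    private String productId;   // present since version 1
    @XmlElement(required = false)
    private String comment;     // added in version 2, optional

    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }
    public String getComment() { return comment; }
    public void setComment(String comment) { this.comment = comment; }
}

@WebService
public class OrderService {
    public String placeOrder(OrderRequest request) {
        // old clients won't send a comment, so guard against null
        String comment = request.getComment() == null ? "" : request.getComment();
        return "ordered " + request.getProductId() + " " + comment;
    }
}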

It doesn't. You'll have to manage that manually somehow. Typically by creating a new interface as you introduce major/breaking changes.
More generally speaking, this is an architectural problem, rather than a technical one. Once an interface is published, you really need to think about how to handle changes.

Related

Runtime creation and persistence of executable model rules

We have the need to create and persist rules at runtime. The goal is to create the rules, persist them and then reload them at a later point in time. Using bits and pieces of code cobbled together from drools unit tests, I can successfully create rules from DRL strings and then persist them to a kjar. And using the new KieBuilder.buildAll overload, the kjar (presumably) is built using the new executable model. All of that seems to work.
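For reference, that working DRL-string path looks roughly like this. This is only a sketch against the 7.x KIE API (class and package names as they appear in the drools-model-compiler module; adjust to your version); the GAV coordinates, resource path and drlString are placeholders:

import org.drools.compiler.kie.builder.impl.InternalKieModule;
import org.drools.modelcompiler.ExecutableModelProject;
import org.kie.api.KieServices;
import org.kie.api.builder.KieBuilder;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.ReleaseId;

public static byte[] buildKjarFromDrl(String drlString) {
    KieServices ks = KieServices.Factory.get();
    ReleaseId releaseId = ks.newReleaseId("com.example", "dynamic-rules", "1.0.0"); // placeholder GAV
    KieFileSystem kfs = ks.newKieFileSystem();
    kfs.generateAndWritePomXML(releaseId);
    kfs.write("src/main/resources/rules/rules.drl", drlString); // DRL assembled at runtime
    // the buildAll overload that compiles the DRL down to the executable model
    KieBuilder kieBuilder = ks.newKieBuilder(kfs).buildAll(ExecutableModelProject.class);
    // the built module can be turned into kjar bytes and persisted
    InternalKieModule kieModule = (InternalKieModule) kieBuilder.getKieModule();
    return kieModule.getBytes(); // write these bytes wherever the kjar is stored
}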
But what I really want to do is eliminate the DRL strings entirely and create my rules at runtime using the flow or pattern DSL. Again, using example code, I can create those rules at runtime, and execute them in a session. What I can’t seem to do is actually persist them as a kjar (or any other form that I can devise). It seems that the end result of building a rule using flow or pattern DSL is a KieBase. And there seems to be no way to serialize or persist a KieBase. At some point in the process, I need to be able to getBytes() in order to persist the KieBase.
For example, I can create the KieBase like this:
// build a single executable-model rule programmatically (flow/pattern DSL)
Rule rule = getRule();
// wrap it in a model and turn that model into an in-memory KieBase
ModelImpl model = new ModelImpl().addRule( rule );
KieBase kieBase = KieBaseBuilder.createKieBaseFromModel( model );
But I then need to be able to persist that newly created kieBase so it can be reloaded later. And there doesn't seem to be a workable way to do that.
Any suggestions? I’m using 7.7.0 for my testing.
UPDATE 2018-07-23
Let me clarify my original question with additional information. There are really two use cases where I’d like to use the new executable model to author rules in Java: 1) at design time; 2) at run time. Each use case has slightly different requirements, and so far I’ve been unsuccessful in getting either one to work completely.
For the 1st use case, at design time I need the ability to write rules in Java (using the new pattern DSL) and then save those rules to a kjar. Once there, they can be loaded into a KieServer instance and executed. Purportedly the Kie Maven Plugin can do this, and I’ve attempted to follow the instructions given in the drools doc (for example section 2.2.1.4 of the 7.8.0 doc). But those instructions appear to be incomplete, and there just aren’t any examples of how to accomplish this. What file or files need to be added to the resources\META-INF folder to identify the rules? How are the rules actually exposed in the Java code? Do they need to be in a particular type of class? Are the rules returned from public methods? How are those methods identified as having rules? Are any Java annotations needed to make this work?
All of those questions would be answered for me if there was just one simple end-to-end example that demonstrated how to author a rule in Java, AND create the kjar containing that rule.
For the 2nd use case (actually the more important of the two for me), I need the ability to dynamically create rules at runtime. Based on configuration data within our application, multiple rules need to be programmatically created and ultimately loaded into a KieServer instance. My assumption was that the process would be similar to use case #1 where a kjar could be programmatically created and then loaded into the KieServer. And remember that in this case, the Maven Plugin isn’t in the picture since this is all being done at runtime, not design time. Using the examples for the executable model (primarily the unit tests), I can author the rules in Java, and I can execute them. But I’ve found no way to actually build a kjar from them, or to directly load them into a KieServer.
To execute the rules, they have to be in a specific Java file, and the kjar needs to have a file in the META-INF folder stating where the rules actually are.
Take a look at what the maven plugin is doing here:
https://github.com/kiegroup/droolsjbpm-integration/blob/master/kie-maven-plugin/src/main/java/org/kie/maven/plugin/GenerateModelMojo.java#L165
There will be probably an easier way in the future, but I can't tell you when.
Thank you for using the bleeding edge features, and good luck with that.

OCM or Nodes in JCR?

We are developing a CMS based on JCR/Sling/JSP/Felix/etc.
What I have found so far is that using Nodes is very straightforward and flexible. But my concern is that over time it could become too hard to maintain and manage.
So, is it wise to invest in using an OCM? Would it just be an extra layer of complexity? What's the real benefit of OCM, if there is any? Or would it be better for us to stick to Nodes instead?
And lastly, is Jackrabbit OCM the best option for us if we are to go down that path?
Thank you.
In my personal experience, whether OCM is a useful tool for your project depends heavily on your situation.
The real problem in using OCM (in my personal experience) arises when the definition of a class used for existing persisted data (as objects) in the repository has changed. For example: you found it necessary to change some members and methods of a class to match functionality changes, so the class definition of the persisted data object in the repository no longer matches the definition of the actual class. When persisted data is saved to the JCR repository, it is usually saved in a format that Java understands in terms of serialization. This means that when the definition of the class changes, the saved data in the repository can no longer be correctly interpreted by Java. This issue tends to lead to complex deployments where you need to convert old persisted data objects to the new definition and save them again in the repository, to make sure you can still use "old" but still required persisted data.
What does work (in my opinion) is using a framework that allows you to map nodes and node properties to Java objects directly (for example by using annotations), and the other way around (persist a Java object to the repository as a JCR node where the Java member fields become actual node properties). This way you stick to the data representation of JCR (nodes with properties) and can still map them to the members of a Java class.
I've used a framework like this before in a CMS called AEM (from Adobe), although I must mention this was in an OSGi context (but the principle still stands). The framework basically allowed maximum flexibility and persisted the Java object as a JCR node and the other way around. Because it mapped directly to the JCR definition, code changes in the class and its members just meant changing annotations, and old persisted data was still usable without much effort.
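As an illustration of that annotation-driven mapping, here is a minimal sketch in the style of Sling Models (the answer above doesn't name a framework, so treat Sling Models as my assumption; the class and property names are made up):

import javax.inject.Inject;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.Optional;

// Maps a JCR node (exposed as a Sling Resource) onto a plain Java object.
// Each @Inject field is read from the node property of the same name, so
// adding or removing fields only changes the class, not the stored data.
@Model(adaptables = Resource.class)
public class ArticleModel {

    @Inject
    private String title;        // node property "title"

    @Inject @Optional
    private String subtitle;     // optional property: null if the node lacks it

    public String getTitle() { return title; }
    public String getSubtitle() { return subtitle; }
}

// usage: ArticleModel article = resource.adaptTo(ArticleModel.class);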

Java Rest Client consuming JSON - how to create JAXB objects?

I need to write a rest client (in Java - using RestEasy) that can consume JSON responses. Regarding the need for the rest client (or wrapping service) to translate the JSON responses to a Java type, I see the following options:
1. map the response to a string and then use JsonParser tools to extract data and build types manually.
2. Use JAXB annotated POJOs - in conjunction with jackson - to automatically bind the json response to an object.
Regarding 2, is it desirable / correct to define an XSD to generate the JAXB annotated POJOs? I can see advantages to doing this, e.g. reuse by an XML client.
Thanks.
I'm a fan of #2.
The reasoning is that your JAXB annotated model objects essentially are the contract for the business/domain logic that you're trying to represent on a transport level, and POJOs obviously give you excellent control over getter/setter validation, and you can control your element names and namespaces with fine granularity.
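As a rough sketch of what such a transport-level POJO can look like (the class, field and method names here are invented, and this assumes Jackson's JAXB annotation module is on the classpath):

import java.io.IOException;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jaxb.JaxbAnnotationModule;

// Transport-level POJO: the JAXB annotations define the contract and let the
// same class be reused later by an XML client.
@XmlRootElement(name = "stock")
@XmlAccessorType(XmlAccessType.FIELD)
public class StockDto {
    @XmlElement(name = "symbol")
    private String symbol;
    @XmlElement(name = "price")
    private double price;

    public String getSymbol() { return symbol; }
    public double getPrice() { return price; }

    // Manual binding shown for clarity; with RESTEasy and a Jackson provider
    // registered, the same mapping happens automatically on the client proxy.
    public static StockDto fromJson(String json) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        mapper.registerModule(new JaxbAnnotationModule()); // honour the JAXB annotations
        return mapper.readValue(json, StockDto.class);
    }
}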
With that said, I like having an additional "inner" model of POJOs (if necessary, depending on problem complexity/project scope) to isolate the transport layer from the domain objects. Also, you get a nice warm feeling that you're not directly tied to your transport layer if things need to change internally in your business/domain object representation. A co-worker mentioned Dozer, a tool for mapping beans to beans, but I have no direct experience with it to comment further.
I'm not a fan of generating code from XSDs. Often the code is ugly or downright unreadable, and managing change, however subtle or insignificant, can introduce unexpected results. Maybe I'm wrong about that, but I require good unit tests on a proven model.
This is based on my personal experience writing a customer-facing SDK with a hairy XML-over-HTTP (we don't call it REST) API. JAXB/Jackson annotated POJOs made it relatively painless. Hope that helps.

Sending persisted JDO instances over GWT-RPC

I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC then one has to create two separate classes for that object: One with the JDO annotations for persisting it on the server and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes and I can theorise why:
Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc.
Also, there may be logic in the server-side class which doesn't apply to the client, and vice-versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }
and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try gilead (formerly hibernate4gwt), which takes care of some of the details involved in serializing enhanced objects.
Your assessment is correct. JDO replaces instances of Collections with their own implementations, in order to sniff when the object graph changes, I suppose. These implementations are not known by the GWT compiler, so it will not be able to serialize them. This happens often for classes that are composed of otherwise GWT compliant types, but with JDO annotations, especially if some of the object properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all, but for the listing do it this way:
// run the JDO query; the returned List is a JDO-specific implementation class
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
// copy it into a plain ArrayList so GWT-RPC can serialize it
return new ArrayList<YourCustomObject>(secureList);
The actual problem is not in serializing the object... the problem is serializing the Collection implementation returned by the query, which is implemented by Google's framework and cannot be serialized over RPC.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo) will remove all the JDO enhancements.
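A rough sketch of both tips together, using the standard JDO annotations plus the DataNucleus/App Engine encoded-key extension (the Stock class and its fields are made up):

import java.io.Serializable;
import javax.jdo.annotations.Extension;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.IdentityType;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
public class Stock implements Serializable {

    // String-encoded key instead of com.google.appengine.api.datastore.Key,
    // so the class stays compilable on the GWT client side.
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    @Extension(vendorName = "datanucleus", key = "gae.encoded-pk", value = "true")
    private String key;

    @Persistent
    private String symbol;
}

// When returning the object over GWT-RPC, detach it first so the JDO
// enhancements (state manager, managed collections) are stripped:
// stock = pm.detachCopy(stock);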
You don't have to create separate instances at all, in fact you're better off not doing it. Your JDO objects should be plain POJOs anyway, and should never contain business logic. That's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using and GWT should compile your class just fine. Also, you want to avoid using libraries that GWT can't compile (like things that use reflection, etc.), but in all the projects I've done this has never been a problem.
I think that a better format for sending objects through GWT is JSON. In this case the server sends a JSON string, which then has to be parsed in the client. The advantage is that the final JavaScript rendered in the browser is smaller, thus causing the page to load faster.
Secondly, to send objects through GWT, the objects should be serializable. This may not be the case for all objects.
Thirdly, GWT has built-in functions to handle JSON... so no issues on the client end.
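For what it's worth, parsing such a JSON string on the client with GWT's built-in JSON types looks roughly like this (the payload shape and the jsonFromServer parameter are made up; older GWT versions only have JSONParser.parse instead of parseStrict):

import com.google.gwt.json.client.JSONObject;
import com.google.gwt.json.client.JSONParser;
import com.google.gwt.json.client.JSONValue;

// Client-side: turn the JSON string returned by the server into values.
// parseStrict only accepts well-formed JSON, which is safer than eval().
public static void readStock(String jsonFromServer) {
    JSONValue parsed = JSONParser.parseStrict(jsonFromServer);
    JSONObject stock = parsed.isObject();
    if (stock != null) {
        String symbol = stock.get("symbol").isString().stringValue();
        double price = stock.get("price").isNumber().doubleValue();
        // ... build the client-side object from these values
    }
}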

Entity Framework and Encapsulation

I would like to experimentally apply an aspect of encapsulation that I read about once, where an entity object includes the domains for its attributes, e.g. for its CostCentre property, it contains the list of valid cost centres. This way, when I open an edit form for an Extension, I only need to pass the form one Extension object, whereas I would normally also access a CostCentre object when initialising the form.
This also applies where I have a list of Extensions bound to a grid (Telerik RadGrid) and I handle an edit command on the grid. I want to create an edit form and pass it an Extension object, whereas now I pass the edit form an ExtensionID and create my object in the form.
What I'm actually asking here is for pointers to guidance on doing things this way, or on the 'proper' way of achieving something similar to what I have described here.
It would depend on your data source. If you are retrieving the list of Cost Centers from a database, that would be one approach. If it's a short list of predetermined values (like Yes/No/Maybe So) then property attributes might do the trick. If it needs to be more configurable per-environment, then IoC or the Provider pattern would be the best choice.
I think your problem is similar to a custom ad-hoc search page we did on a previous project. We decorated our entity classes and properties with attributes that contained some predetermined 'pointers' to the lookup value methods, and their relationships. Then we created a single custom UI control (like your edit page described in your post) which used these attributes to generate the drop down and auto-completion text box lists by dynamically generating a LINQ expression, then executing it at run-time based on whatever the user was doing.
This was accomplished with basically three moving parts: A) the attributes on the data access objects, B) the 'attribute facade' methods at the middle tier compiling and generating dynamic LINQ expressions, and C) the custom UI control that called our middle-tier service methods.
Sometimes plans like these backfire, but in our case it worked great. Decorating our objects with attributes and then creating a single path of logic gave us just enough power to do what we needed while minimizing the amount of code required, and it completely eliminated any boilerplate. However, this approach was not very configurable. By compiling these attributes into the code, we tightly coupled our application to the data source. On this particular project it wasn't a big deal because it was a client's internal system and it fit the project timeline. However, on a "real product", implementing the logic with the Provider pattern or using something like the Castle Project's IoC container would have given us the same power with a great deal more configurability. The downside is that there is more to manage and more that can go wrong with deployments, etc.