I'm developing a SOAP service using JAX-WS and JAXB under Netbeans 6.8, and getting a little frustrated with Netbeans trashing my work every time the XSD schema my JAXB bindings are based upon changes.
To elaborate, the IDE automatically generates classes bound to the schema, which can then be (un)marshalled from/to XML using JAXB. To these classes I've added extra methods to (for example) convert to and from separate classes designed to be persisted to database with JPA. The problem is that whenever the schema changes and I rebuild, these classes are regenerated, and all my custom methods are deleted. I can manually replace them by copy-pasting from a backup file, but that is rather time-consuming and tedious. As I'm using an iterative design approach, the schema is changing rather frequently and I'm wasting an awful lot of time whenever it does, simply to reinstate my previous code.
While the IDE automatically regenerating the JAXB-bound classes is entirely reasonable and I don't mean to imply otherwise, I was wondering if anyone had any bright ideas as to how to prevent my extra work having to be manually reinstated every time my schema changes?
Making modifications to the XJC-generated source isn't really a good idea, for the reasons you've discovered. You should either use a binding customization or an XJC plugin to generate the additional code you need, or else move your additional code out of the XJC-generated code and into separate source files.
If your additional code is there to convert between JAXB and JPA class models, then it can probably stand on its own as a distinct translation layer. It's not very OO that way, but it'll get around your problem.
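For example, a standalone translation layer might look something like this (a minimal sketch; StockType and StockEntity are illustrative stand-ins for an XJC-generated class and a JPA entity):

// Lives outside the generated package, so schema regeneration never touches it.
public final class StockMapper {

    private StockMapper() { }

    // JAXB -> JPA
    public static StockEntity toEntity(StockType xml) {
        StockEntity entity = new StockEntity();
        entity.setSymbol(xml.getSymbol());
        entity.setPrice(xml.getPrice());
        return entity;
    }

    // JPA -> JAXB
    public static StockType toXml(StockEntity entity) {
        StockType xml = new StockType();
        xml.setSymbol(entity.getSymbol());
        xml.setPrice(entity.getPrice());
        return xml;
    }
}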
Alternatively, there's an XJC plugin that is supposed to let you preserve code that's been manually added to the generated source, but it's poorly documented (and I haven't used it myself). You might have to dig around on http://jaxb.dev.java.net/ to find out how to use it.
Instead of modifying the generated classes to add methods, extend the generated classes and put your additional methods in your own classes derived from them.
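A minimal sketch of that approach (StockType is an illustrative stand-in for one of the XJC-generated classes):

// Hand-written subclass in its own source file, so regeneration can't delete it.
public class Stock extends StockType {

    // An added method of the kind that used to live in the generated class.
    public StockEntity toEntity() {
        StockEntity entity = new StockEntity();
        entity.setSymbol(getSymbol());
        return entity;
    }
}

One caveat: the unmarshaller will still hand you StockType instances, not your subclass, unless you take extra steps (for example, a factory customization), so this works best for objects you construct yourself.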
We have the need to create and persist rules at runtime. The goal is to create the rules, persist them and then reload them at a later point in time. Using bits and pieces of code cobbled together from drools unit tests, I can successfully create rules from DRL strings and then persist them to a kjar. And using the new KieBuilder.buildAll overload, the kjar (presumably) is built using the new executable model. All of that seems to work.
But what I really want to do is eliminate the DRL strings entirely and create my rules at runtime using the flow or pattern DSL. Again, using example code, I can create those rules at runtime, and execute them in a session. What I can’t seem to do is actually persist them as a kjar (or any other form that I can devise). It seems that the end result of building a rule using flow or pattern DSL is a KieBase. And there seems to be no way to serialize or persist a KieBase. At some point in the process, I need to be able to getBytes() in order to persist the KieBase.
For example, I can create the KieBase like this:
Rule rule = getRule();
ModelImpl model = new ModelImpl().addRule(rule);
KieBase kieBase = KieBaseBuilder.createKieBaseFromModel(model);
But I then need to be able to persist that newly created kieBase so it can be reloaded later. And there doesn't seem to be a workable way to do that.
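For reference, the DRL-based flow that does work for me looks roughly like this (a sketch; drl holds the rule text, and the InternalKieModule cast is how I get at the bytes):

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
kfs.write("src/main/resources/rules.drl", drl);
KieBuilder kieBuilder = ks.newKieBuilder(kfs).buildAll(ExecutableModelProject.class);
// InternalKieModule exposes the raw kjar bytes, which is what gets persisted.
byte[] kjarBytes = ((InternalKieModule) kieBuilder.getKieModule()).getBytes();

What I'm missing is the equivalent of that last line when the starting point is a ModelImpl rather than DRL text.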
Any suggestions? I’m using 7.7.0 for my testing.
UPDATE 2018-07-23
Let me clarify my original question with additional information. There are really two use cases where I’d like to use the new executable model to author rules in Java: 1) at design time; 2) at run time. Each use case has slightly different requirements, and so far I’ve been unsuccessful in getting either one to work completely.
For the 1st use case, at design time I need the ability to write rules in Java (using the new pattern DSL) and then save those rules to a kjar. Once there, they can be loaded into a KieServer instance and executed. Purportedly the Kie Maven Plugin can do this, and I’ve attempted to follow the instructions given in the drools doc (for example section 2.2.1.4 of the 7.8.0 doc). But those instructions appear to be incomplete, and there just aren’t any examples of how to accomplish this. What file or files need to be added to the resources\META-INF folder to identify the rules? How are the rules actually exposed in the Java code? Do they need to be in a particular type of class? Are the rules returned from public methods? How are those methods identified as having rules? Are any Java annotations needed to make this work?
All of those questions would be answered for me if there was just one simple end-to-end example that demonstrated how to author a rule in Java, AND create the kjar containing that rule.
For the 2nd use case (actually the more important of the two for me), I need the ability to dynamically create rules at runtime. Based on configuration data within our application, multiple rules need to be programmatically created and ultimately loaded into a KieServer instance. My assumption was that the process would be similar to use case #1 where a kjar could be programmatically created and then loaded into the KieServer. And remember that in this case, the Maven Plugin isn’t in the picture since this is all being done at runtime, not design time. Using the examples for the executable model (primarily the unit tests), I can author the rules in Java, and I can execute them. But I’ve found no way to actually build a kjar from them, or to directly load them into a KieServer.
To execute the rules, they have to be in a specific Java file, and the kjar needs to have a file in the META-INF folder stating where the rules actually are.
Take a look at what the Maven plugin is doing here:
https://github.com/kiegroup/droolsjbpm-integration/blob/master/kie-maven-plugin/src/main/java/org/kie/maven/plugin/GenerateModelMojo.java#L165
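If I'm reading GenerateModelMojo correctly, the marker is a plain-text file at META-INF/kie/drools-model inside the kjar, listing the fully qualified names of the generated Model classes, one per line, e.g. (illustrative class name):

org.example.project.Rules

Treat the exact file name and format as something to verify against the linked source for your version.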
There will probably be an easier way in the future, but I can't tell you when.
Thank you for using the bleeding edge features, and good luck with that.
We are developing a CMS based on JCR/Sling/JSP/Felix/etc.
What I've found so far is that using Nodes is very straightforward and flexible. But my concern is that over time it could become too hard to maintain and manage.
So, is it wise to invest in using an OCM? Would it just be an extra layer of complexity? What's the real benefit of OCM, if there is any? Or is it better for us to stick to Nodes instead?
And lastly, is Jackrabbit OCM the best option for us if we are to go down that path?
Thank you.
In my personal experience, whether OCM is a useful tool for your project or not depends heavily on your situation.
The real problem in using OCM (in my personal experience) arises when the definition of a class used for existing persisted data (as objects) in the repository has changed. For example, you find it necessary to change some members and methods of a class to match functionality changes, so the class definition of the persisted data object in the repository no longer matches the definition of the actual class. When a persisted object is saved to the JCR repository, it is usually saved in a format that Java understands in terms of serialization, which means that when something changes in the definition of the class, the saved data in the repository can no longer be correctly interpreted by Java. This issue tends to lead to complex deployments, where you need to convert old persisted data objects to the new definition and save them again to the repository to make sure that "old" but still required persisted data stays usable.
What does work (in my opinion) is using a framework that lets you map nodes and node properties to Java objects directly (for example via annotations) and the other way around (persist a Java object to the repository as a JCR node whose properties are the Java member fields). This way you stick to the data representation of JCR (nodes with properties) and can still map it onto the members of a Java class.
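To make the idea concrete, here's a hypothetical annotation-based mapping in the style described above (the annotations are illustrative, not any particular framework's API):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative annotations only; the real ones come from whatever mapping framework you pick.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
@interface JcrNode { String nodeType(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface JcrProperty { String name(); }

@JcrNode(nodeType = "myapp:page")
public class Page {

    @JcrProperty(name = "jcr:title")
    private String title;

    @JcrProperty(name = "myapp:lastModified")
    private java.util.Calendar lastModified;
}

Because the class maps field by field onto node properties, a change to the class is mostly a change to annotation values, and existing nodes stay readable.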
I've used a framework like this before in a CMS called AEM (from Adobe), although I must mention that was in an OSGi context (the principle still stands, though). The framework allowed maximum flexibility and persists the Java object as a JCR node and the other way around. Because it mapped directly to the JCR definition, code changes to the class and its members meant just changing annotations, and old persisted data was still usable without much effort.
I have written a wrapper around ADO.NET's DbProviderFactory that I use extensively throughout my applications. I have also written a lot of code that maps IDataReader rows to POCOs. However, as I have tons of classes, the whole thing is getting to be a pain in the ass to maintain.
I have been looking at replacing the whole shebang with a micro-ORM like PetaPoco. I have a few queries, though:
I have lots of POCOs that contain other POCOs as properties. How well does PetaPoco support this?
Should I use an ORM like Massive or Simple.Data that returns a dynamic object, and map that to a POCO?
Are there any other approaches I can take to the whole mapping of rows to POCOs? I can't really use convention-based tools, as my database isn't particularly consistent in how it is designed.
How about using a text templating/code generator to build out a lightweight persistence layer? I have a battle-hardened open source project called TextMetal that generates the necessary persistence layer based on tried and true architectural decisions. The only thing lacking is object-to-object relations, but it does support query expressions and works well with poorly designed data schemas.
You can see a real-world project that uses the above tool, called Can Do It For.
Feel free to ask me about any design decisions once you take a look-see.
Simple.Data automagically casts its dynamic type to static types. It will map nested properties as long as they have been eager-loaded using the .With method. So, for example,
Customer customer = db.Customer.WithOrders().Get(42);
would populate the Orders property of the customer object.
Could you use QueryFirst, or modify it? It takes your SQL and wraps it in vanilla ADO code, generated at design time. You get fresh POCOs from your result schema every time you save your file. Additionally, you can choose to test all queries and regenerate all wrappers via the option in the Tools menu. It's dependent on SQL Server and SqlClient, so unless you do some modification, you'll lose DbProviderFactory.
I am looking to see if Extended Properties can be made to be part of Entities in EF 4.0, when the .edmx is generated or updated from the database. I also would like to see an example of running a stored procedure (function) from the .edmx in a T4 template, since I do have a procedure that returns the Extended Prop values.
Thanks
So, a few things to bear in mind here:
The designer is not really extensible, but the provider is. That doesn't really help much because writing an EF provider is not a walk in the park. It's really complex.
The designer-related code, including the bits that relate to metadata, is mostly sealed and internal and almost completely unusable by you.
However, the EDMX file (the XML file itself) is very well documented: http://msdn.microsoft.com/en-us/data/jj650889. You can freely modify the XML yourself (by hand or through some add-in or external utility), as long as you stick to the specification.
The general idea is that you can use your own tool to read the extended properties and change the EDMX XML.
You will be adding "Annotations" to the SSDL elements (the store metadata in the EDMX). These annotation values will be based on the extended properties of the corresponding entities in the DB.
Later on, when T4 executes, it receives the metadata collections based on the EDMX elements. This metadata will contain the annotations you previously wrote there; just about any element can have one or more annotations. You can then add custom code to the T4 template to handle the annotations based on your extended properties. The designer will not show the annotations, and you can't manipulate them in the designer, but it should preserve them (it won't overwrite them if they are present in the EDMX).
Of course, this would be a lot easier if the designer were extensible, or even if the designer-related code were usable by you. Right now, that's not the case. Most parts of EF are moving to open source, but the designer is still not there (yet). If the designer ever goes open source, you can probably make changes to start using that, and given that the community keeps asking for this kind of feature, I imagine the community will change the source to make it happen anyway. Until then, you have to edit the EDMX manually or write some tool to do it for you.
I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes, and I can theorise why: otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc. Also, there may be logic in the server-side class which doesn't apply to the client, and vice versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:

@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }

and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try Gilead (formerly hibernate4gwt), which takes care of some of the problems involved in serializing enhanced objects.
Your assessment is correct. JDO replaces instances of Collections with their own implementations, in order to sniff when the object graph changes, I suppose. These implementations are not known by the GWT compiler, so it will not be able to serialize them. This happens often for classes that are composed of otherwise GWT compliant types, but with JDO annotations, especially if some of the object properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all; for the listing, just do it this way:
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
return new ArrayList<YourCustomObject>(secureList);
The actual problem is not in serializing the object... the problem is serializing the Collection class, which is implemented by Google and is not allowed to be serialized out.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo)
...will remove all the JDO enhancements.
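A minimal sketch of that detach step in context (Stock and the PMF singleton are illustrative; any JDO-annotated class and PersistenceManagerFactory holder will do):

PersistenceManager pm = PMF.get().getPersistenceManager();
try {
    Stock stock = pm.getObjectById(Stock.class, key);
    // detachCopy returns a plain copy without the JDO state manager,
    // so it can be serialized over GWT-RPC.
    return pm.detachCopy(stock);
} finally {
    pm.close();
}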
You don't have to create separate classes at all; in fact, you're better off not doing it. Your JDO objects should be plain POJOs anyway, and should never contain business logic. That's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using, and GWT should compile your class just fine. Also, you want to avoid using libraries that GWT can't compile (like things that use reflection, etc.), but in all the projects I've done, this has never been a problem.
I think a better format for sending objects through GWT is JSON. In this case, the server sends a JSON string which then has to be parsed on the client. The advantage is that the final JavaScript rendered in the browser is smaller, thus causing the page to load faster.
Secondly, to send objects through GWT they have to be serializable. This may not be the case for all objects.
Thirdly, GWT has built-in functions to handle JSON, so there are no issues on the client end.
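For example, the client-side parsing step might look like this (a sketch using GWT's built-in JSON types; the field names are illustrative):

import com.google.gwt.json.client.JSONObject;
import com.google.gwt.json.client.JSONParser;

// json is the string returned by the server.
JSONObject stock = JSONParser.parseStrict(json).isObject();
String symbol = stock.get("symbol").isString().stringValue();
double price = stock.get("price").isNumber().doubleValue();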