I have encountered the following problem.
I'm currently working with a colleague on a GWT project.
We are using an RPC async service. We often need to send and receive a state object, which is a HashMap.
We have a bunch of service methods which always take the state as a parameter and return it:
HashMap<String, Serializable> fillAndGetUI(HashMap<String, Serializable> state) throws ProjectServiceException;
I'm telling him not to use this, because having the Serializable interface in the method declaration is not good for RPC and GWT compilation.
But: the HashMap is useful because we can use hotswap instead of restarting the server each time (it's enough to write put and get calls).
My suggestion was to use POJOs, but then we could lose the hotswap ability, which is critical.
What is the solution that avoids HashMap in the declarations and keeps the hotswap ability at the same time? Can RequestFactory solve this issue? (We are using GWT 2.1; a version change is not an option.)
The easiest solution is to use plain old RequestBuilder, JSON, and overlay types. RequestFactory will not help you here.
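For example, here is a minimal sketch of that approach. The "/service/state" URL, the "title" key, and the StateJso overlay class are made up for illustration; the assumption is that the server writes the state out as plain JSON.

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.core.client.JsonUtils;
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;

public class StateClient {

    // Overlay type over the JSON object the server sends back.
    public static class StateJso extends JavaScriptObject {
        protected StateJso() {
        }

        public final native String get(String key) /*-{
            return this[key];
        }-*/;
    }

    public void loadState() {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, "/service/state");
        try {
            builder.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request request, Response response) {
                    // read the JSON payload through the overlay type
                    StateJso state = JsonUtils.safeEval(response.getText());
                    GWT.log("title = " + state.get("title"));
                }

                public void onError(Request request, Throwable exception) {
                    GWT.log("state request failed: " + exception.getMessage());
                }
            });
        } catch (RequestException e) {
            GWT.log("could not send state request: " + e.getMessage());
        }
    }
}

Because nothing on the wire depends on GWT-RPC's generated serializers, adding a new key is just a change to the JSON the servlet writes, which keeps the hotswap workflow intact.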
Related
When using gwt-maven-plugin's generateAsync, is it possible to apply an annotation (or something) to an individual gwt-rpc service so that the corresponding async isn't auto-generated and can be written manually?
Alternatively, is there an annotation (or something) that makes the generated asyncs have the "Request" return type?
From the gwt-maven-plugin's documentation you need to adjust the servicePattern configuration property, or you can ask it to always generate methods returning Request.
Or, even better, don't use this goal!
(or only call it manually once in a while and copy the generated classes to your sources)
The GWT Generators will never create a class if one already exists with that name. This means you can ask GWT to compile and generate the code, then copy the classes into your sources and customize them, and later compiler runs will not attempt to generate sources.
This may have other side effects - if the proxy, typeserializer, or fieldserializer is prevented from being generated, then the RPC generators may assume that other dependencies have also all been correctly generated, so you may find yourself missing classes if you don't also copy those other classes. Likewise, of course, any changes that require your serializers to be modified or rebuilt will have to be done manually, such as changing a serializable type or modifying an RPC method.
Your async interface can always declare a return type of Request or RequestBuilder instead of void. If you declare RequestBuilder, then the request will not be sent automatically, and you must call send(), whereas a Request returned means that the request has been sent.
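For illustration, a hedged sketch of a hand-written async interface (the service and method names are hypothetical):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.user.client.rpc.AsyncCallback;

public interface ProjectServiceAsync {

    // Request return type: the call is sent immediately; the handle lets you cancel it.
    Request loadProject(String id, AsyncCallback<String> callback);

    // RequestBuilder return type: nothing is sent until you call send() yourself.
    RequestBuilder saveProject(String id, String data, AsyncCallback<Void> callback);
}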
I have a GWT project in Eclipse that throws a com.google.gwt.user.client.rpc.IncompatibleRemoteServiceException when using hosted mode because the code server RPC file hashcode does not match the server RPC file hashcode.
I've tracked this down to a couple classes that implement com.extjs.gxt.ui.client.data.BeanModelTag. These classes appear to be included in the code server generated RPC file incorrectly. Additionally, the class names appear mangled.
For example, instead of com.acme.beans.MyBean the class is referenced as com.acme.beans.BeanModel_com_acme_beans_MyBean.
I suspect this has something to do with the classpath for my debug target incorrectly including some jar, src dir, or other project, but I don't have a good feel for how to debug this further.
GXT 2 (current is 3, 4 should be in beta soonish) had a feature where it could generate BaseModelData types based on a Java bean or POJO, allowing for reflection-like features that GXT 2 used to render templates and grid cells (GXT 3 has compile-time features that work out that property access instead). The BeanModels are not meant to be sent over the wire - instead, you should be sending your original MyBean over the wire.
This generated BeanModel instance is designed to wrap the original MyBean, and is only available to the client code. To pass it back to the server again, unwrap the bean - use getBean() to get the underlying POJO.
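A minimal sketch (assuming GXT 2 and the com.acme.beans.MyBean POJO from the question; the helper class itself is made up):

import com.acme.beans.MyBean;
import com.extjs.gxt.ui.client.data.BeanModel;

public class BeanModelRpcHelper {

    public static MyBean unwrap(BeanModel model) {
        // getBean() returns the original POJO the BeanModel wraps; send this over RPC
        MyBean bean = model.getBean();
        return bean;
    }
}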
On startup of my application, I would like to make an RPC call from the client to the server. The call would result in the server creating a Properties object from a .properties file and passing it back to the client. However, this does not seem to be possible: when I do this I get the error "No source code is available for type java.util.Properties; did you forget to inherit a required module?". I then tried to use a GWT Dictionary instead, but doing so resulted in an error because a Dictionary object is not serializable. Any ideas how to fix either of these two errors, or another way of doing this?
You cannot pass java.util.Properties back to the client over RPC. The list of Java classes that GWT emulates is at http://www.gwtproject.org/doc/latest/RefJreEmulation.html
Instead, you should process the properties file into a serializable model/POJO class and pass that back over RPC. You could also use a JSON object to do the same.
In any case, you should process the properties file on the server side into a format that GWT can accept, whether via JSON, RequestFactory, or RPC.
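A hedged sketch of the server side (the file name, servlet class, and method name are hypothetical): copy the Properties into a HashMap<String, String>, which GWT-RPC can serialize.

import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Properties;

import com.google.gwt.user.server.rpc.RemoteServiceServlet;

public class ConfigServiceImpl extends RemoteServiceServlet {

    public HashMap<String, String> getConfig() throws IOException {
        Properties props = new Properties();
        InputStream in = getClass().getResourceAsStream("/app.properties");
        if (in != null) {
            try {
                props.load(in);
            } finally {
                in.close();
            }
        }
        // copy into an emulated collection type that GWT-RPC understands
        HashMap<String, String> result = new HashMap<String, String>();
        for (String name : props.stringPropertyNames()) {
            result.put(name, props.getProperty(name));
        }
        return result;
    }
}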
We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen as getId() can be ambiguous and conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator class allows for customization of ID and Version property lookup strategies.
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is - why does the ServiceLayerDecorator allow for customized ID and Version lookup strategies if RequestFactoryInterfaceValidator is hardcoding the convention of getId() and getVersion()?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes but at this point it seems like I'm fighting the framework too much...
A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types. The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception. (There is a sketch of this after these options.)
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre 2.4, but with the annotation processing that goes on now, it works less well, since it tries to do some of the work for you. It seems ServiceLayerDecorator has lost a lot of its teeth in the last few months - in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
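To make the Locator option concrete, here is a hedged sketch. It assumes getObjectId() returns Long and invents a getObjectVersion() accessor; adapt both to whatever your DomainObject interface actually exposes. With a Locator in place, RequestFactory stops looking for getId()/getVersion() on the entity itself.

import com.google.web.bindery.requestfactory.shared.Locator;

public class DomainObjectLocator extends Locator<DomainObject, Long> {

    @Override
    public DomainObject create(Class<? extends DomainObject> clazz) {
        try {
            return clazz.newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DomainObject find(Class<? extends DomainObject> clazz, Long id) {
        // look the entity up in your persistence layer, e.g. entityManager.find(clazz, id)
        return null;
    }

    @Override
    public Class<DomainObject> getDomainType() {
        return null; // currently unused by RequestFactory
    }

    @Override
    public Long getId(DomainObject domainObject) {
        // map RequestFactory's notion of an id onto your naming convention
        return domainObject.getObjectId();
    }

    @Override
    public Class<Long> getIdType() {
        return Long.class;
    }

    @Override
    public Object getVersion(DomainObject domainObject) {
        return domainObject.getObjectVersion(); // hypothetical version accessor
    }
}

Each proxy then points at the Locator, e.g. @ProxyFor(value = MyEntity.class, locator = DomainObjectLocator.class).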
Big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: Allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and ends up being a bad fit for many other problems/use-cases. You might be finding that it isn't worth it: if so, a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use-case.
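As a rough sketch of what using AutoBeans directly can look like (the Person bean and factory are made up; the package names are those used from GWT 2.4 onwards):

import com.google.gwt.core.client.GWT;
import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;

public class AutoBeanExample {

    public interface Person {
        String getName();
        void setName(String name);
    }

    public interface Beans extends AutoBeanFactory {
        AutoBean<Person> person();
    }

    public static void roundTrip() {
        Beans factory = GWT.create(Beans.class);

        AutoBean<Person> bean = factory.person();
        bean.as().setName("Alice");

        // serialize the bean to a JSON payload...
        String json = AutoBeanCodex.encode(bean).getPayload();

        // ...and decode it back into a Person
        Person copy = AutoBeanCodex.decode(factory, Person.class, json).as();
    }
}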
I'm trying to access a data source that is defined within a web container (JBoss) from a fat client outside the container.
I've decided to look up the data source through JNDI. Actually, my persistence framework (Ibatis) does this.
When performing queries I always end up getting this error:
java.lang.IllegalAccessException: Method=public abstract java.sql.Connection java.sql.Statement.getConnection() throws java.sql.SQLException does not return Serializable
Stacktrace:
org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.doStatementMethod(WrapperDataSourceService.java:411),
org.jboss.resource.adapter.jdbc.remote.WrapperDataSourceService.invoke(WrapperDataSourceService.java:223),
sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source),
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25),
java.lang.reflect.Method.invoke(Method.java:585),
org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:155),
org.jboss.mx.server.Invocation.dispatch(Invocation.java:94),
org.jboss.mx.server.Invocation.invoke(Invocation.java:86),
org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264),
org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:659),
My Datasource:
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/xxxxxDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@xxxxxxxxx:1521:xxxxxxx</connection-url>
    <use-java-context>false</use-java-context>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>xxxxxxxx</user-name>
    <password>xxxxxx</password>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>20</max-pool-size>
  </local-tx-datasource>
</datasources>
Does anyone have a clue where this could come from?
Maybe someone even knows a better way to achieve this.
Any hints are much appreciated!
Cheers,
Michael
Not sure if this is the same issue?
JBoss DataSource config
DataSource wrappers are not usable outside of the server VM
@Michael Well, java.sql.Connection is an interface - it might technically be possible for the concrete implementation you're getting from JBoss to be Serializable - but I don't think you're really going to have any options you can use. If it was possible, it would probably be easy :)
I think @toolkit might have said the right words with usable outside the VM - the JDBC drivers will be talking to native driver code running in the underlying OS, I guess, so that might explain why you can't just pass a connection over the network elsewhere.
My advice (if you don't get any better advice!) would be to find a different approach: if you have access to locate the resource in the JBoss directory, maybe implement a proxy object that you can locate and obtain from the directory and that allows you to use the connection remotely from your fat client. That's a design pattern called Data Transfer Object, I think (see the Wikipedia entry).
@toolkit:
Well, not exactly. Since I can access the data source over JNDI, it is actually visible and thus usable.
Or am I getting something totally wrong?
@Brabster:
I think you're on the right track. Isn't there a way to make the connection serializable? Maybe it's just a configuration issue...
I've read up on Ibatis now - maybe you can make your implementations of Dao etc. Serializable, post them into your directory and so retrieve them and use them in your fat client? You'd get reuse benefits out of that too.
Here's an example of something that looks similar for Wicket
JBoss wraps all DataSources with its own wrappers.
That lets it play tricks with autocommit to get the specified J2EE behaviour out of a JDBC connection. They are mostly serializable, but you shouldn't count on that.
I'd look carefully at its wrappers. I've written a surrogate for JBoss's J2EE JDBC wrappers that works with OOCJNDI to get my DAO code unit-testable standalone.
You just wrap java.sql.Driver, point OOCJNDI at your class, and run in JUnit.
The Driver wrapper can just directly create a SQL Driver and delegate to it.
Return a java.sql.Connection wrapper of your own devising from connect().
A ConnectionWrapper can just wrap the Connection your Oracle driver gives you; all it does specially is set autocommit to true.
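A minimal sketch of that delegating-driver idea (the class name is made up, and it assumes the Oracle thin driver from the datasource config above is on the classpath):

import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.Properties;
import java.util.logging.Logger;

public class DelegatingTestDriver implements Driver {

    // directly create the real driver and delegate to it, as described above
    private final Driver delegate = new oracle.jdbc.driver.OracleDriver();

    public Connection connect(String url, Properties info) throws SQLException {
        Connection connection = delegate.connect(url, info);
        if (connection != null) {
            // the only "special" behaviour: force autocommit on
            connection.setAutoCommit(true);
        }
        return connection; // could also be wrapped in a delegating Connection of your own
    }

    public boolean acceptsURL(String url) throws SQLException {
        return delegate.acceptsURL(url);
    }

    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException {
        return delegate.getPropertyInfo(url, info);
    }

    public int getMajorVersion() {
        return delegate.getMajorVersion();
    }

    public int getMinorVersion() {
        return delegate.getMinorVersion();
    }

    public boolean jdbcCompliant() {
        return delegate.jdbcCompliant();
    }

    // JDBC 4.1 (Java 7+); drop this method on older JDKs
    public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        return Logger.getLogger("DelegatingTestDriver");
    }
}

Point OOCJNDI (or whatever JNDI/JDBC setup your fat client uses) at this class and the rest of the delegation is mechanical.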
Don't forget Eclipse can write delegates for you. Add a member you need to delegate to, then select it, right-click, and choose Source -> Generate Delegate Methods.
This is great when you get paid by the line ;-)
Bada-bing, Bada-boom, JUnit out of the box J2EE testing.
Your problem is probably amenable to the same thing, with JUnit crossed out and FatClient written in crayon.
My FatClient uses RMI generated with xdoclet to talk to the J2EE server, so I don't have your problem.
I think the exception indicates that the SQLConnection object you're trying to retrieve doesn't implement the Serializable interface, so it can't be passed to you the way you asked for it.
From the limited work I've done with JNDI, if you're asking for an object via JNDI it must be serializable. As far as I know, there's no way round that - if I think of a better way I'll post it up...
OK, one obvious option is to provide a serializable object local to the datasource that uses it but doesn't have the datasource as part of its serializable object graph. The fat client could then look up that object and query it instead.
Or create a (web?) service through which access to the datasource is governed - again, your fat client would hit the service. This would probably be a better-encapsulated and more reusable approach if those are concerns for you.
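For instance, a hedged sketch of such a facade (all names hypothetical): the fat client looks this service up (via RMI, EJB, or a web service) instead of the DataSource, so only serializable DTOs ever cross the wire while all JDBC work stays inside the server VM.

import java.io.Serializable;
import java.util.List;

public interface CustomerQueryService {

    // Only serializable results are returned to the fat client, never Connections or Statements.
    List<CustomerDto> findCustomersByRegion(String region);

    // Simple serializable result object.
    class CustomerDto implements Serializable {
        private static final long serialVersionUID = 1L;
        public long id;
        public String name;
    }
}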