Can I use RequestFactory without getId() and getVersion() methods?

We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen because getId() can be ambiguous and conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator class allows for customization of the ID and version property lookup strategies:
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is - why does the ServiceLayerDecorator allow for customized ID and Version lookup strategies if RequestFactoryInterfaceValidator is hardcoding the convention of getId() and getVersion()?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes but at this point it seems like I'm fighting the framework too much...

A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types. The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception. (See the sketch after this list.)
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre 2.4, but with the annotation processing that goes on now, it works less well, since it tries to do some of the work for you. It seems ServiceLayerDecorator has lost a lot of its teeth in the last few months - in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
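For the Locator option, here is a minimal sketch of a single Locator for the DomainObject hierarchy. It assumes getObjectId() returns a Long and that your entities expose some getVersion() accessor; the find() body is left as a placeholder for whatever persistence lookup you actually use. Adjust the types to your model. Each proxy then points at the Locator via @ProxyFor(value = MyEntity.class, locator = DomainObjectLocator.class).
import com.google.web.bindery.requestfactory.shared.Locator;

public class DomainObjectLocator extends Locator<DomainObject, Long> {
    @Override
    public DomainObject create(Class<? extends DomainObject> clazz) {
        try {
            return clazz.newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DomainObject find(Class<? extends DomainObject> clazz, Long id) {
        // Look the object up in whatever persistence mechanism you use,
        // e.g. entityManager.find(clazz, id) with JPA.
        throw new UnsupportedOperationException("wire up your persistence lookup here");
    }

    @Override
    public Class<DomainObject> getDomainType() {
        return DomainObject.class; // currently unused by RequestFactory
    }

    @Override
    public Long getId(DomainObject domainObject) {
        return domainObject.getObjectId();
    }

    @Override
    public Class<Long> getIdType() {
        return Long.class;
    }

    @Override
    public Object getVersion(DomainObject domainObject) {
        return domainObject.getVersion(); // assumes a version accessor on your entities
    }
}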
Big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: Allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and ends up being a bad fit for many other problems/use-cases. You might be finding that it isn't worth it: if so, a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use case (see the sketch below).
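As a rough sketch of what using AutoBeans directly looks like (Person, MyFactory and the field names are made up for illustration; on the server you would use AutoBeanFactorySource.create instead of GWT.create):
public interface Person {
    String getName();
    void setName(String name);
}

public interface MyFactory extends AutoBeanFactory {
    AutoBean<Person> person();
}

// Client side:
MyFactory factory = GWT.create(MyFactory.class);
AutoBean<Person> bean = factory.person();
bean.as().setName("Alice");

// Serialize to JSON and back:
String json = AutoBeanCodex.encode(bean).getPayload();
Person roundTripped = AutoBeanCodex.decode(factory, Person.class, json).as();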

Proper way to modify public interface

Let's assume we have a function that returns a list of apples in our warehouse:
List<Apple> getApples();
After some lifetime of the application we've found a bug: in rare cases clients of this function get sick because some of the apples returned are not ripe yet.
However, another set of clients absolutely does not care about ripeness; they use this function simply to know about all available apples.
The naive way of solving this problem would be to add a 'ripeness' member to Apple, then find all the places where ripeness can cause problems and add checks:
const auto apples = getApples();
for (const auto& apple : apples)
    if (apple.isRipe())
        consume(apple);
However, if we correlate this new requirement of having ripe apples with the way class interfaces are usually designed, we might find that we need a new interface which is a subset of the more generic one:
List<Apple> getRipeApples();
which basically narrows the getApples() interface by filtering out the ones that are not ripe.
So the questions are:
Is this the correct way of thinking?
Should the old interface (getApples) remain unchanged?
How will it handle scaling if later on we figure out that some customers are allergic to red/green/yellow apples (getRipeNonRedApples)?
Are there any other alternative ways of modifying the API?
One constraint, though: how do we minimize the probability of an inexperienced/inattentive developer calling getApples instead of getRipeApples? Subclass Apple with RipeApple? Make a downcast in getRipeApples?
A pattern found often with Java people is the idea of versioned capabilities.
You have something like:
interface Capability ...
interface AppleDealer {
    List<Apple> getApples();
}
and in order to retrieve an AppleDealer, there is some central service like
public <T> T getCapability(Class<T> type);
So your client code would be doing:
AppleDealer dealer = service.getCapability(AppleDealer.class);
When the need for another method comes up, you go:
interface AppleDealerV2 extends AppleDealer { ...
And clients that want V2 just do a getCapability(AppleDealerV2.class) call. Those that don't care don't have to modify their code!
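Putting the pieces together, a sketch of what the V2 extension and its call sites might look like (getRipeApples is just an example of the added method):
interface AppleDealerV2 extends AppleDealer {
    List<Apple> getRipeApples(); // new in V2
}

// Clients that need the new method ask for the V2 capability:
AppleDealerV2 dealerV2 = service.getCapability(AppleDealerV2.class);
List<Apple> ripe = dealerV2.getRipeApples();

// Old clients keep working unchanged:
AppleDealer dealer = service.getCapability(AppleDealer.class);
List<Apple> all = dealer.getApples();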
Please note: of course, this only works for extending interfaces. You can't use this approach to change signatures or to remove methods from existing interfaces.
Regarding your questions 3/4: I go with MaxZoom there, but to be precise: I would very much recommend for "flags" to be something like List<String>, or List<Integer> (for 'real' int-like flags), or even Map<String, Object>. In other words: if you really don't know what kind of conditions might come over time, go for interfaces that work for everything, like one where you can pass a map of "keys" and "expected values" for the different keys. If you go for pure enums there, you quickly run into similar "versioning" issues.
Alternatively, consider allowing your clients to do the filtering themselves; with Java 8 you can use Predicates, lambdas and all that stuff.
Example:
Predicate<Apple> applePredicate = new Predicate<Apple>() {
    @Override
    public boolean test(Apple a) {
        return a.getColour() == AppleColor.GoldenPoisonFrogGolden;
    }
};
List<Apple> myApples = dealer.getApples(applePredicate);
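With Java 8 the same call shrinks to a lambda (assuming the dealer.getApples(Predicate<Apple>) overload sketched above):
List<Apple> myApples = dealer.getApples(a -> a.getColour() == AppleColor.GoldenPoisonFrogGolden);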
IMHO creating a new class/method for every possible Apple combination will result in code pollution. The situation described in your post could be gracefully handled by introducing a flags parameter:
List<Apple> getApples(); // keep for backward compatibility
List<Apple> getApples(FLAGS); // use flag as a filter
Possible flags:
RED_FLAG
GREEN_FLAG
RIPE_FLAG
SWEET_FLAG
So a call like the one below becomes possible (bit flags are combined with bitwise OR):
List<Apple> apples = getApples(RIPE_FLAG | RED_FLAG | SWEET_FLAG);
That will produce a list of apples that are ripe, red, and sweet.
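A type-safe variant of the same idea, sketched with an EnumSet (the AppleFilter enum is made up for illustration):
enum AppleFilter { RED, GREEN, RIPE, SWEET }

List<Apple> getApples(Set<AppleFilter> filters);

// Call site:
List<Apple> apples = getApples(EnumSet.of(AppleFilter.RIPE, AppleFilter.RED, AppleFilter.SWEET));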

GWT Async generation, turn off in some cases?

When using gwt-maven-plugin's generateAsync, is it possible to apply an annotation (or something) to an individual gwt-rpc service so that the corresponding async isn't auto-generated and can be written manually?
Alternatively, is there an annotation (or something) that makes the generated asyncs have the "Request" return type?
From the gwt-maven-plugin's documentation you need to adjust the servicePattern configuration property, or you can ask it to always generate methods returning Request.
Or, even better, don't use this goal!
(or only call it manually once in a while and copy the generated classes to your sources)
The GWT Generators will never create a class if one already exists with that name. This means you can ask GWT to compile and generate the code, then copy the classes into your sources and customize them, and later compiler runs will not attempt to generate sources.
This may have other side effects: if the proxy, type serializer, or field serializer is prevented from being generated, the RPC generators may assume that the other dependencies have also been generated correctly, so you may find yourself missing classes if you don't copy those other classes as well. Likewise, any change that requires your serializers to be modified or rebuilt will have to be made manually, such as changing a serializable type or modifying an RPC method.
Your async interface can always declare a return type of Request or RequestBuilder instead of void. If you declare RequestBuilder, the request will not be sent automatically and you must call send() yourself, whereas a returned Request means the request has already been sent.
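A sketch of a hand-written async interface using both return types (MyServiceAsync, Item and the method names are illustrative; Request and RequestBuilder come from com.google.gwt.http.client):
public interface MyServiceAsync {
    // Returning Request: the call is dispatched immediately; the handle lets you cancel it.
    Request getItems(AsyncCallback<List<Item>> callback);

    // Returning RequestBuilder: nothing goes over the wire until you call send() yourself.
    RequestBuilder findItems(String query, AsyncCallback<List<Item>> callback);
}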

How do I detect whether a mongodb serializer is already registered?

I have created a custom serializer for mongoDB.
I can register it and it works as expected.
However, my application sometimes throws an error because it tries to register the serializer twice.
How do I detect whether a serializer has already been registered, and thus stop my application from registering it a second time?
If you are using
BsonSerializer.RegisterSerializer(typeof(Type), typeSerializer);
you might get the error "there is already a serializer registered for type", because you cannot register a serializer for the same type twice. But you can write your own serializer and have it take effect before the default serializers.
For instance, if you want to use local DateTime instead of UTC, which is the default,
all you need to do is write a class implementing IBsonSerializationProvider and register this provider with BsonSerializer as soon as possible.
Here is the sample code:
public class LocalDateTimeSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        return type == typeof(DateTime) ? DateTimeSerializer.LocalInstance : null;
    }
}
And to register it:
BsonSerializer.RegisterSerializationProvider(new LocalDateTimeSerializationProvider());
I hope this helps; you can also read the original documentation here.
This applies to version 2.4 of the MongoDB .NET driver.
TL;DR: If you are lazy, use BsonSerializer.LookupSerializer or BsonMemberMap.GetSerializer. To do it right, make sure the registration code is called once and only once.
The best approach to avoid this is to make sure the serializer is registered only once. It's a good idea to have some global startup code that registers anything global to the application once, and only once. That includes things like dependency-injector configuration, tools like AutoMapper, and the MongoDB driver. If you call this code only once and from a single point in code, you don't need to worry about thread safety, deadlocks or similar troubles.
The MongoDB driver configuration settings are thread-safe, but don't assume that this is true for all software packages that you might need to configure. Also, locking can be very expensive performance-wise if your code is multi-threaded, for instance in a web application. Last but not least, the lookup you're doing might not be trivial in the first place, because some methods need to walk an entire inheritance tree.

RequestFactory's Entity Relationships

The details of Request's with() implementation in GWT's RequestFactory are a bit unclear to me. See here for the official documentation.
Question 1:
When querying the server, RequestFactory does not automatically
populate relations in the object graph. To do this, use the with()
method on a request and specify the related property name as a String.
Does this mean that if the entity on the server uses lazy fetching, the returned EntityProxy will have all the requested objects specified in with()? It seems a bit odd to instantiate the whole object graph on the server side only to send a small piece of it to the client.
Question 2:
Does req.with("foo").with("foo"); do the same as req.with("foo"); ?
Question 3:
Does req.with("foo").with("bar"); do the same as req.with("foo","bar"); ?
NOTE: I'm having a really hard time finding the implementation details of with() in the source code and the API doesn't help me either.
Question 1:
It probably depends on your server-side implementation.
The with() invocation will only make sure that the corresponding getter (getFoo()) is called shortly before the RF call returns to the client.
That's the reason why you also have to make sure to use an Open Session in View pattern, otherwise you might run into NullPointerExceptions.
Question 2:
I guess Request<T> implements a builder pattern.
The end result will be the same.
However, I am not sure whether the getter will be called twice or whether with() checks if the getter has already been requested.
Question 3:
Yes, it's the same.
As a side note, you can use req.with("foo.bar").
On the backend this will lead to a getFoo().getBar() call.
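For illustration, a typical call chain might look like this (the request-context and proxy names are made up; Receiver is the standard RequestFactory callback):
Request<MyEntityProxy> req = requestFactory.myEntityRequest().findEntity(id);

// Eagerly fetch the "foo" relation and the nested "foo.bar" relation:
req.with("foo", "foo.bar").fire(new Receiver<MyEntityProxy>() {
    @Override
    public void onSuccess(MyEntityProxy entity) {
        // entity.getFoo() and entity.getFoo().getBar() are populated here
    }
});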

xtext check annotation issue

I'm using the @Check annotation in order to validate my DSL. My DSL is for JSON.
At first the method was invoked for a specific object and once per change,
but suddenly it doesn't work the same way anymore (and I'm not sure what I've done that affected it).
The method signature is:
@Check
public void validateJson(ObjectValue object) {...}
Now it enters this method for each node in the GUI, although I'm editing only one node.
The validator works normally in this case. When Xtext re-parses your model, it cannot always avoid re-creating the EMF model that is validated by the @Check method - in other words, the model is practically re-created every time, thus warranting a full validation.
However, in some cases only a partial re-creation of the model is necessary - in these cases it is possible that not all elements are re-validated (however, I am not sure whether this optimization was included).