How to append request contexts when they have different Receiver implementations - GWT

At Google I/O 2011, David Chandler mentioned that you can chain different request contexts using the append() method, but in practice I don't know how to chain them when they have different receivers, using to() and then fire().
Please help.

There are two kinds of receivers: the ones bound to each method invocation (that you pass to the Request's to() method), and the context-level one (that you pass to the RequestContext's fire() method). The Request's fire(Receiver) method is a shorthand for to(receiver).fire(), i.e. it binds the Receiver to the method.
The method-level receivers depend only on the method: their generic parameterization matches the method's return type (it is the generic parameterization of the Request or InstanceRequest), so whether you append() several RequestContexts together changes absolutely nothing for them.
The context-level receiver is always parameterized with Void. When you append() contexts together, they actually form a single context with several interfaces, so you only call fire() once, on any one of the appended contexts.
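For illustration, here is how the two kinds of receivers look at the call site (PersonContext, findPerson, and the rf factory are hypothetical names for this sketch):

PersonContext ctx = rf.personContext(); // hypothetical context interface

// Method-level receiver: its type parameter matches the method's return type.
ctx.findPerson(id).to(new Receiver<PersonProxy>() {
    @Override
    public void onSuccess(PersonProxy person) {
        // Use the result of this one invocation.
    }
});

// Context-level receiver: always a Receiver<Void>, bound when you fire().
ctx.fire(new Receiver<Void>() {
    @Override
    public void onSuccess(Void response) {
        // The whole batch completed.
    }
});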
Now let's go back to the basics: without using append(), you can only batch together calls to methods that are declared on the same context interface. If you have two distinct context interfaces you want to use, you have to call fire() twice, i.e. make two HTTP requests. The introduction of append() allows you to batch together calls to methods declared on any context interface: simply append one context to another and the calls on both contexts will be batched together in the same HTTP request, triggered by a single fire() on any one of the appended contexts.
Now into the technical details: internally, a context is nothing more than a thin wrapper around a state object. When you edit() or create() a proxy, you add it to the internal state, and when you call a service method, the method name (actually, its obfuscated token) and the arguments are captured and pushed to the state as well. When you append() a context, you're only making it share its internal state with that of the context you append it to. That way, when you call a service method on the appended context, it's pushed onto the exact same state as that of the other context, and when you fire() any one of them, the state is serialized into a single HTTP request.
Note that, to be appended, a context's own internal state has to be empty, otherwise an exception is raised, as that state would be thrown away and replaced by the state of the other context.
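For example, using the same first/second contexts as the example just below, this ordering would fail:

FirstContext first = rf.first();
SecondContext second = rf.second();
second.create(OtherProxy.class); // second's internal state is no longer empty
first.append(second);            // raises an exception: second's state would be discarded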
In brief, and in practice:
FirstContext first = rf.first();
SomeProxy proxy = first.create(SomeProxy.class);
...
SecondContext second = first.append(rf.second());
OtherProxy other = second.create(OtherProxy.class);
other.setSome(proxy);
...
second.saveAndReturnSelf(other).to(new Receiver<OtherProxy>() {
...
});
...
first.fire();
Note that the line that creates and appends the second context could equally be written:
SecondContext second = rf.second();
first.append(second);
The append method returns its argument as a convenience, but it's really the same value you passed as the argument. This is only to allow writing the one-liner above, instead of being forced to use the two-liner.


Can you add an object that is available in the request context only?

In Spring (Java), and in .NET as well, you can add an object in request scope:
i.e. a user makes a request, you perform some lookup in a filter or base controller, and then you can add the resulting object to the request object, for the current request only.
Then, in your action you can check whether the key exists and use that object in your action method.
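In servlet terms (which is what Spring builds on), the pattern being described looks roughly like this; UserLookupFilter and lookupUser are illustrative names:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Performs the lookup once, then stashes the result in request scope
// for everything downstream in the chain.
public class UserLookupFilter implements Filter {
    @Override
    public void init(FilterConfig config) {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        request.setAttribute("currentUser", lookupUser(request)); // hypothetical lookup
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {}

    private Object lookupUser(ServletRequest request) {
        return "someone"; // stand-in for the real lookup
    }
}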
In Play 2, Requests are immutable, so you would generally wrap them and pass them around rather than modifying them.
You'd usually use something called action composition to do what you want. Action composition lets you write common code for actions so that you can pre-process a request and maybe pass the action some data from the request.
Check out the Authenticated example in the docs which provides the action with an AuthenticatedRequest object. The AuthenticatedRequest object wraps the existing Request (rather than modifying it) and adds an additional username value.
As Rich said, you can use Action Composition and, if you want to put more information into the request context, you can use Http.current().args.put("key", "value").
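For completeness, here is roughly what action composition looks like against the early Play 2 Java API (WithUserAction and lookupUser are made-up names for this sketch):

import play.mvc.Action;
import play.mvc.Http;
import play.mvc.Result;

// An action wrapper that performs a lookup once per request and passes
// the result along in the request args.
public class WithUserAction extends Action.Simple {
    @Override
    public Result call(Http.Context ctx) throws Throwable {
        ctx.args.put("user", lookupUser(ctx.request())); // hypothetical lookup
        return delegate.call(ctx); // continue with the wrapped action
    }

    private Object lookupUser(Http.Request request) {
        return "someone"; // stand-in for the real lookup
    }
}

// Applied to a controller method with: @With(WithUserAction.class)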

GWT - Where should I use code splitting while using places/activities/mappers?

"core" refers to the initial piece of the application that is loaded.
In order to bind URLs to places, GWT uses PlaceTokenizer<P extends Place>. When loading the application from a URL, it calls the method P getPlace(String token) to retrieve a new instance of the place to navigate to.
Due to the asynchronous nature of code splitting, I can't create the place inside a runAsync() in this method. So I have to put all the places of my app in the core.
To link places to activities, GWT calls Activity getActivity(Place place) (from com.google.gwt.activity.shared.ActivityMapper) to retrieve a new instance of the activity.
Once again, I have to put all my activities in the core.
Here's what I want to try: write a custom com.google.gwt.place.shared.Delegate that:
binds itself to PlaceChangeRequestEvent. If the AppPiece corresponding to the requested place isn't loaded, it calls event.setWarning(NEED_TO_LOAD_MODULE);
in the confirm(String message) method, always returns false when the message equals NEED_TO_LOAD_MODULE (so it doesn't bother the user), and loads the module via runAsync();
once the module is loaded, calls goTo(requestedPlace).
Each AppPiece of my application contains a bunch of activities and the corresponding views. Since the mappers are only called when a PlaceChangeEvent is fired, I could generate a new instance of my activity via AppPiece.getSomeActivityInstance().
I'm pretty sure this will work, but what bothers me is that:
finding which AppPiece to load depending on the requested place will force me to write code that will be very similar to my mappers;
I would like to have my places inside the corresponding AppPiece;
overriding Delegate for this purpose is tricky, and I'm looking for a better solution.
You don't have to put all your activities in the core (as you call it): while an Activity instance is retrieved synchronously, it's allowed to start asynchronously. This is where you'd put your GWT.runAsync call.
See http://code.google.com/p/google-web-toolkit/issues/detail?id=5129 and https://groups.google.com/d/topic/google-web-toolkit/8_P_d4aT-0E/discussion
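To make that concrete, here is a minimal sketch (GWT 2.1-era imports; HeavyActivity and HeavyView are hypothetical names, with HeavyView living in the split-off fragment):

import com.google.gwt.activity.shared.AbstractActivity;
import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.event.shared.EventBus;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.AcceptsOneWidget;

// The mapper returns this thin activity synchronously; the split point
// is only crossed when the activity actually starts.
public class HeavyActivity extends AbstractActivity {
    @Override
    public void start(final AcceptsOneWidget panel, EventBus eventBus) {
        GWT.runAsync(new RunAsyncCallback() {
            @Override
            public void onSuccess() {
                // The code fragment is now loaded; build the real view.
                panel.setWidget(new HeavyView());
            }

            @Override
            public void onFailure(Throwable reason) {
                Window.alert("Failed to load code fragment");
            }
        });
    }
}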

Using different delegates for NSXmlParser

I am trying to figure out the best way to design something. I am writing an iPhone app and for the most part I am using async calls to a web service. This means that I am setting up a URLConnection, calling start, and letting it call me back when the data is available or an exception occurs. This works well and I think is the correct way to handle things.
For example:
I request a list of people from a web service. The resulting list is XML Person elements which will be translated into an Objective-C "Person" object by my XmlDelegate.
When I call the function to get the person, I pass in a "PersonResultDelegate", which is a protocol with a single function called "PersonReceived:(Person *)p". So, each time I get a complete Person object, I call that method and all is well. So, my detail view (or search result view) just receives the elements as they are available.
The problem comes when I need to obtain more than one specific object. In my specific case, I need to get the first and last appointment for a person. So, I need to make two API calls to obtain these two single Appointment objects. Each Appointment object will result in a call to the registered AppointmentResultDelegate, but how will I know which is the first and which is the last? I also need to somehow handle the case when there is no "first" or "last" Appointment and the Delegate will never get called.
What would be the correct way design wise to handle this? Should I add some additional context information to the initial request which is passed back to the handle in the delegate? An opaque piece of data which only makes sense to the person who made the initial call? What are my other options?
Solution
What I actually ended up doing is just passing an opaque piece of data along with the Appointment to the delegate. So, when I request an appointment object I have a method like:
getNextAppointment withDelegate:self withContext:@"next"
getPrevAppointment withDelegate:self withContext:@"prev"
This way when the delegate gets called I know what appointment is being delivered.
"Each Appointment object will result in a call to the registered AppointmentResultDelegate, but how will I know which is the first and which is the last?"
By looking at the order in which you receive these callbacks. Or by looking at some value in that XML data, like a sequence number or a date. I don't know your data, of course.

Class Design: Demeter vs. Connection Lifetimes

Okay, so here's a problem I'm running into.
I have some classes in my application that have methods that require a database connection. I am torn between two different ways to design the classes, both of which are centered around dependency injection:
Provide a property for the connection that is set by the caller prior to method invocation. This has a few drawbacks.
Every method relying on the connection property has to validate that property to ensure that it isn't null, that it's open, and that it isn't involved in a transaction if that would muck up the operation.
If the connection property is unexpectedly closed, all the methods have to either (1.) throw an exception or (2.) coerce it open. Depending on the level of robustness you want, either case is appropriate. (Note that this is different from a connection that is passed to a method in that the reference to the connection exists for the lifetime of the object, not simply for the lifetime of the method invocation. Consequently, the volatility of the connection just seems higher to me.)
Providing a Connection property seems (to me, anyway) to scream out for a corresponding Transaction property. This creates additional overhead in the documentation, since you'd have to make it fairly obvious when the transaction was being used, and when it wasn't.
On the other hand, Microsoft seems to favor the whole set-and-invoke paradigm.
Require the connection to be passed as an argument to the method. This has a few advantages and disadvantages:
The parameter list is naturally larger. This is irksome to me, primarily at the point of call.
While a connection (and a transaction) must still be validated prior to use, the reference to it exists only for the duration of the method call.
The point of call is, however, quite clear. It's very obvious that you must provide the connection, and that the method won't be creating one behind your back automagically.
If a method doesn't require a transaction (say a method that only retrieves data from the database), no transaction is required. There's no lack of clarity due to the method signature.
If a method requires a transaction, it's very clear due to the method signature. Again, there's no lack of clarity.
Because the class does not expose a Connection or a Transaction property, there's no chance of callers trying to drill down through them to their properties and methods, thus enforcing the Law of Demeter.
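To make the contrast concrete, here is how the two call sites might look (a JDBC-flavoured sketch; OrderRepository, Order, and their methods are hypothetical):

// Style 1: connection as a property, set prior to invocation.
OrderRepository repo = new OrderRepository();
repo.setConnection(connection); // every repo method must re-validate this shared state
repo.save(order);

// Style 2: connection as an explicit argument, scoped to the call.
repo.save(order, connection);   // ownership and lifetime are obvious at the call site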
I know, it's a lot. But on the one hand, there's the Microsoft Way: Provide properties, let the caller set the properties, and then invoke methods. That way, you don't have to create complex constructors or factory methods and the like. Also, avoid methods with lots of arguments.
Then, there's the simple fact that if I expose these two properties on my objects, they'll tend to encourage consumers to use them in nefarious ways. (Not that I'm responsible for that, but still.) But I just don't really want to write crappy code.
If you were in my shoes, what would you do?
Here is a third pattern to consider:
Create a class called ConnectionScope, which provides access to a connection
Any class, at any time, can create a ConnectionScope
ConnectionScope has a property called Connection, which always returns a valid connection
Any (and every) ConnectionScope gives access to the same underlying connection object (within some scope, maybe within the same thread, or process)
You then are free to implement that Connection property however you want, and your classes don't have a property that needs to be set, nor is the connection a parameter, nor do they need to worry about opening or closing connections.
More details:
In C#, I'd recommend that ConnectionScope implement IDisposable; that way your classes can write code like "using ( var scope = new ConnectionScope() )" and ConnectionScope can free the connection (if appropriate) when the scope is disposed.
If you can limit yourself to one connection per thread (or process) then you can easily set the connection string in a [thread] static variable in ConnectionScope
You can then use reference counting to ensure that your single connection is re-used when it's already open and connections are released when no one is using them
Updated: Here is some simplified sample code:
public class ConnectionScope : IDisposable
{
    // Simplified sample: one connection shared by all scopes
    // (e.g. per thread/process); not thread-safe as written.
    private static Connection m_Connection;
    private static int m_ReferenceCount;

    public Connection Connection
    {
        get { return m_Connection; }
    }

    public ConnectionScope()
    {
        // First scope in: open the shared connection.
        if ( m_Connection == null )
        {
            m_Connection = OpenConnection();
        }
        m_ReferenceCount++;
    }

    public void Dispose()
    {
        // Last scope out: release the shared connection.
        m_ReferenceCount--;
        if ( m_ReferenceCount == 0 )
        {
            m_Connection.Dispose();
            m_Connection = null;
        }
    }
}
Example code of how one (any) of your classes would use it:
using ( var scope = new ConnectionScope() )
{
    scope.Connection.ExecuteCommand( ... )
}
I would prefer the latter method. It sounds like your classes use the database connection as a conduit to the persistence layer. Making the caller pass in the database connection makes it clear that this is the case. If the connection/transaction were represented as a property of the object, then things are not so clear and all of the ownership and lifetime issues come out. Better to avoid them from the start.

ServiceContainer, IoC, and disposable objects

I have a question, and I'm going to tag this subjective since that's what I think it evolves into, more of a discussion. I'm hoping for some good ideas or some thought-provokers. I apologize for the long-winded question but you need to know the context.
The question is basically:
How do you deal with concrete types in relation to IoC containers? Specifically, who is responsible for disposing them, if they require disposal, and how does that knowledge get propagated out to the calling code?
Do you require them to be IDisposable? If not, is that code future-proof, or is the rule that you cannot use disposable objects? If you enforce IDisposable requirements on interfaces and concrete types to be future-proof, whose responsibility are objects injected as part of constructor calls?
Edit: I accepted the answer by @Chris Ballard, since it's the closest one to the approach we ended up with.
Basically, we always return a type that looks like this:
public interface IService<T> : IDisposable
    where T : class
{
    T Instance { get; }
    Boolean Success { get; }
    String FailureMessage { get; } // in case Success=false
}
We then return an object implementing this interface back from both .Resolve and .TryResolve, so that what we get in the calling code is always the same type.
Now, the object implementing this interface, IService<T>, is IDisposable and should always be disposed of. It's not up to the programmer who resolves a service to decide whether the IService<T> object should be disposed or not.
However, and this is the crucial part, whether the service instance should be disposed or not is knowledge baked into the object implementing IService<T>, so if it's a factory-scoped service (i.e. each call to Resolve ends up with a new service instance), then the service instance will be disposed when the IService<T> object is disposed.
This also made it possible to support other special scopes, like pooling. We can now say that we want minimum 2 service instances, maximum 15, and typically 5, which means that each call to .Resolve will either retrieve a service instance from a pool of available objects, or construct a new one. And then, when the IService<T> object that holds the pooled service is disposed of, the service instance is released back into its pool.
Sure, this made all code look like this:
using (var service = ServiceContainer.Global.Resolve<ISomeService>())
{
    service.Instance.DoSomething();
}
but it's a clean approach, and it has the same syntax regardless of the type of service or concrete object in use, so we chose that as an acceptable solution.
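To illustrate the pooled scope described above, here is a rough Java analogue of such a wrapper (PooledService is an invented name; the real implementation is C# and container-specific):

import java.util.concurrent.BlockingQueue;

// Disposal policy is baked into the wrapper, in the spirit of IService<T>:
// callers always close the wrapper and never decide the instance's fate.
final class PooledService<T> implements AutoCloseable {
    private final T instance;
    private final BlockingQueue<T> pool;

    PooledService(T instance, BlockingQueue<T> pool) {
        this.instance = instance;
        this.pool = pool;
    }

    public T instance() {
        return instance;
    }

    @Override
    public void close() {
        pool.offer(instance); // pooled scope: return to the pool rather than destroy
    }
}

With Java's try-with-resources, usage then mirrors the using block above.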
Original question follows, for posterity
Long-winded question comes here:
We have an IoC container that we use, and recently we discovered what amounts to a problem.
In non-IoC code, when we wanted to use, say, a file, we used a class like this:
using (Stream stream = new FileStream(...))
{
    ...
}
There was no question as to whether this class was something that held a limited resource or not, since we knew that files had to be closed, and the class itself implemented IDisposable. The rule is simply that every class we construct an object of, that implements IDisposable, has to be disposed of. No questions asked. It's not up to the user of this class to decide if calling Dispose is optional or not.
Ok, so on to the first step towards the IoC container. Let's assume we don't want the code to talk directly to the file, but instead go through one layer of indirection. Let's call this class a BinaryDataProvider for this example. Internally, the class is using a stream, which is still a disposable object, so the above code would be changed to:
using (BinaryDataProvider provider = new BinaryDataProvider(...))
{
    ...
}
This doesn't change much. The knowledge that the class implements IDisposable is still here, no questions asked, we need to call Dispose.
But, let's assume that we have classes that provide data that right now doesn't use any such limited resources.
The above code could then be written as:
BinaryDataProvider provider = new BinaryDataProvider();
...
OK, so far so good, but here comes the meat of the question. Let's assume we want to use an IoC container to inject this provider instead of depending on a specific concrete type.
The code would then be:
IBinaryDataProvider provider =
    ServiceContainer.Global.Resolve<IBinaryDataProvider>();
...
Note that I assume there is an independent interface available that we can access the object through.
With the above change, what if we later on want to use an object that really should be disposed of? None of the existing code that resolves that interface is written to dispose of the object, so what now?
The way we see it, we have to pick one solution:
Implement runtime checking such that if a concrete type being registered implements IDisposable, the interface it is exposed through must also implement IDisposable. This is not a good solution.
Enforce a constraint on the interfaces being used: they must always inherit from IDisposable, in order to be future-proof.
Enforce at runtime that no concrete types can be IDisposable, since this is specifically not handled by the code using the IoC container.
Just leave it up to the programmer to check whether the object implements IDisposable and "do the right thing"?
Are there others?
Also, what about injecting objects in constructors? Our container, and some of the other containers we've looked into, is capable of injecting a fresh object into a parameter of a constructor of a concrete type. For instance, if our BinaryDataProvider needs an object that implements the ILogging interface, and we enforce IDisposable-"ability" on these objects, whose responsibility is it to dispose of the logging object?
What do you think? I want opinions, good and bad.
One option might be to go with a factory pattern, so that the objects created directly by the IoC container never need to be disposed themselves, e.g.:
IBinaryDataProviderFactory factory =
    ServiceContainer.Global.Resolve<IBinaryDataProviderFactory>();
using (IBinaryDataProvider provider = factory.CreateProvider())
{
    ...
}
Downside is added complexity, but it does mean that the container never creates anything which the developer is supposed to dispose of - it is always explicit code which does this.
If you really want to make it obvious, the factory method could be named something like CreateDisposableProvider().
(Disclaimer: I'm answering this based on java stuff. Although I program C# I haven't proxied anything in C# but I know it's possible. Sorry about the java terminology)
You could let the IoC framework inspect the object being constructed to see if it supports IDisposable. If not, you could use a dynamic proxy to wrap the actual object that the IoC framework provides to the client code. This dynamic proxy could implement IDisposable, so that you'd always deliver an IDisposable to the client. As long as you're working with interfaces, that should be fairly simple.
Then you'd just have the problem of communicating to the developer when the object is an IDisposable. I'm not really sure how this'd be done in a nice manner.
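In Java terms, that proxy could be built on java.lang.reflect.Proxy, along these lines (Disposable is an invented stand-in for .NET's IDisposable):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Invented stand-in for .NET's IDisposable.
interface Disposable {
    void dispose();
}

// Wraps a resolved service so the client always receives a Disposable,
// whether or not the underlying object supports disposal.
final class DisposableProxy implements InvocationHandler {
    private final Object target;

    private DisposableProxy(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> serviceInterface, Object target) {
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface, Disposable.class },
                new DisposableProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.getDeclaringClass() == Disposable.class) {
            // Forward dispose() only if the wrapped object actually supports it.
            if (target instanceof Disposable) {
                ((Disposable) target).dispose();
            }
            return null;
        }
        return method.invoke(target, args); // everything else goes to the real object
    }
}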
You actually came up with a very dirty solution: your IService contract violates the SRP, which is a big no-no.
What I recommend is to distinguish so-called "singleton" services from so-called "prototype" services. Lifetime of "singleton" ones is managed by the container, which may query at runtime whether a particular instance implements IDisposable and invoke Dispose() on shutdown if so.
Managing prototypes, on the other hand, is totally the responsibility of the calling code.
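As a rough Java sketch of that shutdown pass, using AutoCloseable as the analogue of IDisposable (singletons stands for whatever collection of container-owned instances the container keeps):

// On container shutdown, dispose the container-managed singletons that support it.
for (Object singleton : singletons) {
    if (singleton instanceof AutoCloseable) {
        try {
            ((AutoCloseable) singleton).close();
        } catch (Exception e) {
            // Log and keep shutting the rest down.
        }
    }
}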