Implementing Chain of Responsibility with Services

I'm thinking about a platform-neutral (i.e. not .NET MEF) technique of implementing the chain-of-responsibility pattern using web services as the handlers. I want to be able to add more CoR handlers by deploying new services rather than compiling new CoR code, changing only configuration info. It seems the challenge will be managing the metadata about available handlers and ensuring that the handlers conform to the interface.
My question: any ideas on how I can safely ensure:
1. The web services are implementing the interface
2. The web services are implementing the base class behavior, like calling the successor
Because, in compiled code, I have type safety and can therefore know that any handler derives from the abstract base class that ensures the interface and behavior I want. That guarantee seems to be missing in the world of services.

This seems like a valid question, but a rather simple one.
You are still afforded the protection of the typing system, even if you are loading code later, at runtime, that the original code never saw before.
I would think the preferred approach here would be something like a properties file with a list of implementers (your chain). Then, in the code, you need a way to instantiate an instance of each handler at runtime to construct the chain. When you construct the instance, you have to check its type. In Java, for instance, that takes the form of instanceof (an abomination ordinarily, but you get a pass for loading scenarios) or isAssignableFrom. In Objective-C, it's conformsToProtocol:.
If it doesn't, it can't be used and you can spit an error out to the console.
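To make that concrete, here is a minimal Scala sketch of the loading-time check, under the assumption that handler class names come from a config file and that each handler exposes a constructor taking its successor; Handler and ChainLoader are illustrative names, not from the original post:

```scala
// Hypothetical sketch: build the chain from class names listed in config,
// refusing any class that does not conform to the Handler contract.
trait Handler {
  def handle(request: String): Unit   // implementations should call the successor
}

object ChainLoader {
  def load(classNames: Seq[String]): Option[Handler] =
    classNames.foldRight(Option.empty[Handler]) { (name, successor) =>
      val cls = Class.forName(name)
      // Runtime stand-in for the compile-time guarantee: isAssignableFrom.
      if (!classOf[Handler].isAssignableFrom(cls)) {
        Console.err.println(s"$name does not implement Handler; skipping")
        successor
      } else {
        // Assumes each handler has a (successor: Option[Handler]) constructor.
        val ctor = cls.getConstructor(classOf[Option[Handler]])
        Some(ctor.newInstance(successor).asInstanceOf[Handler])
      }
    }
}
```

Note that this verifies the interface (point 1) but not the base-class behavior (point 2); nothing at load time can prove a handler actually calls its successor.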

Related

How to share the wayland display connection between unrelated code parts

I am currently writing a library that can be used by Wayland client software. The library is intended to be mostly independent of the client toolkit (currently only Qt, but other wayland-enabled toolkits should be able to use it as well). It requires only a wl_display pointer passed to the initialization routine, which is retrieved from the GUI toolkit. After initialization is done, the library should be able to work without contacting the toolkit, to keep it cross-toolkit compatible.
The problem arises when my library requires a couple of global object proxies (e.g. wl_output). The library uses a custom wl_registry to bind custom proxies to the required global objects. However, from the server's perspective all proxies to these objects are equivalent. When events that contain object proxies are sent by the server, they may contain either the toolkit's or the library's proxy reference. This leads to problems because there is no easy way to differentiate the two. When the toolkit receives such an event, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy it received belongs to my library.
Is there any way to reconcile such unrelated code, or is such use beyond the scope of the wayland library/protocol and I should re-design my solution?
Qt Wayland developer here.
When the toolkit receives such events, it blindly assumes that the user data of the proxy belongs to the toolkit and uses it, even if the proxy it received belongs to my library.
Are you sure about this part? When you bind to the global, you create a new proxy object, and I don't see how the toolkit could know about it... Or are you talking about the arguments in events sent by globals, i.e. wl_pointer.set_cursor and the like? It would be nice if you could be more specific about what goes wrong...

Arguments against Inversion of Control containers

Seems like everyone is moving towards IoC containers. I've tried to "grok" it for a while, and as much as I don't want to be the one driver to go the wrong way on the highway, it still doesn't pass the test of common sense to me. Let me explain, and please correct/enlighten me if my arguments are flawed:
My understanding: IoC containers are supposed to make your life easier when combining different components. This is done through either a) constructor injection, b) setter injection and c) interface injection. These are then "wired up" programmatically or in a file that's read by the container. Components then get summoned by name and then cast manually whenever needed.
What I don't get:
EDIT: (Better phrasing)
Why use an opaque container that's not idiomatic to the language, when you can "wire up" the application in (imho) a much clearer way if the components were properly designed (using IoC patterns, loose coupling)? How does this "managed code" gain non-trivial functionality? (I've heard some mentions of life-cycle management, but I don't necessarily understand how this is any better/faster than doing it yourself.)
ORIGINAL:
Why go to all the lengths of storing the components in a container, "wiring them up" in ways that aren't idiomatic to the language, using things equivalent to "goto labels" when you call up components by name, and then losing many of the safety benefits of a statically-typed language by manual casting, when you'd get the equivalent functionality by not doing it, and instead using all the cool features of abstraction given by modern OO languages, e.g. programming to an interface? I mean, the parts that actually need to use the component at hand have to know they are using it in any case, and here you'd be doing the "wiring" using the most natural, idiomatic way - programming!
There are certainly people who think that DI Containers add no benefit, and the question is valid. If you look at it purely from an object composition angle, the benefit of a container may seem negligible. Any third party can connect loosely coupled components.
However, once you move beyond toy scenarios you should realize that the third party that connects collaborators must take on more than the simple responsibility of composition. There may also be decommissioning concerns to prevent resource leaks. As the composer is the only party that knows whether a given instance was shared or private, it must also take on the role of lifetime management.
When you start combining various instance scopes, using a combination of shared and private services, and perhaps even scoping some services to a particular context (such as a web request), things become complex. It's certainly possible to write all that code with poor man's DI, but it doesn't add any business value - it's pure infrastructure.
Such infrastructure code constitutes a Generic Subdomain, so it's very natural to create a reusable library to address such concerns. That's exactly what a DI Container is.
BTW, most containers I know don't use names to wire themselves - they use Auto-wiring, which combines the static information from Constructor Injection with the container's configuration of mappings from interfaces to concrete classes. In short, containers natively understand those patterns.
A DI Container is not required for DI - it's just damned helpful.
A more detailed treatment can be found in the article When to use a DI Container.
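To ground the lifetime-management point, here is an illustrative "poor man's DI" composition root in Scala; every name in it is hypothetical:

```scala
// The third party that wires collaborators also ends up owning their
// lifetimes: composition plus decommissioning, all by hand.
trait MessageStore extends AutoCloseable {
  def save(msg: String): Unit
}

final class DbMessageStore extends MessageStore {
  def save(msg: String): Unit = println(s"saved: $msg")
  def close(): Unit = println("connection released")   // decommissioning concern
}

final class InboxService(store: MessageStore) {         // constructor injection
  def receive(msg: String): Unit = store.save(msg)
}

object CompositionRoot extends App {
  val store = new DbMessageStore                        // composer knows it's shared
  try new InboxService(store).receive("hello")
  finally store.close()                                 // composer must release it
}
```

Multiply this by per-request scopes and shared-versus-private instances, and the infrastructure burden the answer describes becomes clear.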
I'm sure there's a lot to be said on the subject, and hopefully I'll edit this answer to add more later (and hopefully more people will add more answers and insights), but just a couple quick points to your post...
Using an IoC container is a subset of inversion of control, not the whole thing. You can use inversion of control as a design construct without relying on an IoC container framework. At its simplest, inversion of control can be stated in this context as "supply, don't instantiate." As long as your objects aren't internally depending on implementations of other objects, and are instead requiring that instantiated implementations be supplied to them, then you're using inversion of control. Even if you're not using an IoC container framework.
To your point on programming to an interface... I'm not sure what your experience with IoC containers has been (my personal favorite is StructureMap), but you definitely program to an interface with IoC. The whole idea, at least in how I've used it, is that you separate your interfaces (your types) from your implementations (your injected classes). The code which relies on the interfaces is programmed only to those, and the implementations of those interfaces are injected when needed.
For example, you can have an IFooRepository which returns from a data store instances of type Foo. All of your code which needs those instances gets them from a supplied object of type IFooRepository. Elsewhere, you create an implementation of FooRepository and configure your IoC to supply that anywhere an IFooRepository is needed. This implementation can get them from a database, from an XML file, from an external service, etc. Doesn't matter where. That control has been inverted. Your code which uses objects of type Foo doesn't care where they come from.
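A rough Scala transliteration of that IFooRepository example (trivial bodies, just to show the shape):

```scala
case class Foo(id: Int, name: String)

trait FooRepository {                      // the abstraction callers depend on
  def findAll(): List[Foo]
}

final class DbFooRepository extends FooRepository {
  def findAll(): List[Foo] = List(Foo(1, "from the database"))
}

final class XmlFooRepository extends FooRepository {
  def findAll(): List[Foo] = List(Foo(1, "from an XML file"))
}

// Consumers are programmed only to the interface; the container (or a
// hand-written composition root) decides which implementation is supplied.
final class FooReport(repo: FooRepository) {
  def titles: List[String] = repo.findAll().map(_.name)
}
```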
The obvious benefit is that you can swap out that implementation any time you want. You can replace it with a test version, change versions based on environment, etc. But keep in mind that you also don't need to have such a 1-to-1 ratio of interfaces to implementations at any given time.
For example, I once used a code-generating tool at a previous job which spit out tons and tons of DAL code into a single class. Breaking it apart would have been a pain, but what wasn't much of a pain was configuring it to spit it all out with specific method/property names. So I wrote a bunch of interfaces for my repositories and generated this one class which implemented all of them. The generated class was ugly, but the rest of my application didn't care, because it saw each interface as its own type. The IoC container just supplied that same class for each one.
We were able to get up and running quickly with this and nobody was waiting on the DAL development. While we continued to work in the domain code which used the interfaces, a junior dev was tasked with creating better implementations. Those implementations were later swapped in, all was well.
As I mentioned earlier, this can all be accomplished without an IoC container framework. It's the pattern itself that's important, really.
First of all, what is IoC? It means that the responsibility of creating dependent objects is taken away from the main object and delegated to a third-party framework. I always use Spring as my IoC framework, and it brings tons of benefits to the table.
Promotes coding to interfaces and decoupling - The key benefit is that IoC promotes and makes decoupling very easy. You can always inject an interface into your main object and then use the interface methods to perform tasks. The main object does not need to know which dependent object is assigned to the interface. When you want to use a different class as the dependency, all you need to do is swap the old class for a new one in the config file, without a single line of code change. Now you can argue that this can be done in code using various interface design patterns. But an IoC framework makes it a walk in the park. So even as a newbie you become expert at leveraging various interface design patterns like bridge, factory, etc.
Clean code - As most object creation and object life-cycle operations are delegated to the IoC container, you are saved from writing boilerplate, repetitive code. So you have cleaner, smaller, and more understandable code.
Unit testing - IoC makes unit testing easy. Since you are left with decoupled code, you can easily test it in isolation. You can also easily inject dependencies into your test cases and see how the different components interact.
Property configurators - Almost all applications have some properties file where they store application-specific static properties. Now, to access those properties, developers need to write wrappers which read and parse the properties file and store the properties in a format the application can access. All the IoC frameworks provide a way of injecting static properties/values into a specific class. So this again becomes a walk in the park.
These are some of the points I can think of right away; I am sure there are more.
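The unit-testing point in particular needs no framework at all; a sketch with hypothetical names:

```scala
import scala.collection.mutable.Buffer

trait MailSender { def send(to: String, body: String): Unit }

final class WelcomeService(mail: MailSender) {          // dependency is injected
  def greet(user: String): Unit = mail.send(user, s"Welcome, $user!")
}

object WelcomeServiceTest extends App {
  val sent = Buffer.empty[(String, String)]
  val stub = new MailSender {                           // hand-rolled test double
    def send(to: String, body: String): Unit = sent += (to -> body)
  }
  new WelcomeService(stub).greet("ada")
  assert(sent == Buffer("ada" -> "Welcome, ada!"))      // interaction is observable
}
```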

Understanding FP in an enterprise application context (in Scala)

Most examples (if not all) that I see are of a function that does some sort of computation and finishes. In that respect, FP shines. However, I have trouble seeing how to apply it in the context of an enterprise application environment where there isn't much algorithmic work going on, just a lot of data transfer and services.
So I'd like to ask how to implement the following problem in FP style.
I want to implement an events bus service. The service has a register method for registering listeners and publish for publishing events.
In an OO setting this is done by creating an EventBus interface with both methods. Then an implementation can use a list to hold listeners that is updated by register and used in publish. Of course this means register has a side effect. Spring can be used to create the class and pass its instance to publishers or subscribers of events.
How do I model this in FP, given that clients of the event bus service are independent (e.g., not all are created in a "test" method)? As far as I can see, this rules out making register return a new instance of EventBus, since other clients already hold a reference to the old instance (and, e.g., publishing to it will only publish to the listeners it knows of).
I prefer a solution to be in Scala.
I think you should have a closer look at functional reactive programming techniques. Since you want something in Scala, I suggest reading the paper "Deprecating the Observer Pattern" by Ingo Maier, Tiark Rompf and Martin Odersky.
The sketch of the solution is that publish should return IO[Unit]. Listeners should be iteratees. Registration also returns IO[Unit].
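A minimal sketch of that shape, using cats-effect's IO and Ref as a stand-in (an assumption; the paper builds on richer FRP machinery, and the iteratees mentioned above are simplified here to plain effectful callbacks). The key point for the question: all clients hold the same EventBus value, and the mutable listener list lives inside a Ref, so register never has to return a new bus:

```scala
import cats.effect.{IO, IOApp, Ref}
import cats.syntax.all._

// Listeners are effectful callbacks; the shared list lives in a Ref.
final class EventBus[E](listeners: Ref[IO, List[E => IO[Unit]]]) {
  def register(l: E => IO[Unit]): IO[Unit] = listeners.update(l :: _)
  def publish(e: E): IO[Unit] = listeners.get.flatMap(_.traverse_(l => l(e)))
}

object EventBus {
  def create[E]: IO[EventBus[E]] =
    Ref.of[IO, List[E => IO[Unit]]](Nil).map(new EventBus(_))
}

object Demo extends IOApp.Simple {
  def run: IO[Unit] =
    for {
      bus <- EventBus.create[String]
      _   <- bus.register(e => IO.println(s"got: $e"))  // register: IO[Unit]
      _   <- bus.publish("hello")                       // publish: IO[Unit]
    } yield ()
}
```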

Design pattern for iPhone -> web service functionality?

I'm developing an app that will talk with a web service exposing multiple methods. I'm trying to figure out what the best pattern would be to centralize the access to the web service, give options for synchronous and asynchronous access, and return data to clients. Has anybody tackled this problem yet?
One class for all methods seems like it would centralize everything well, but I'm thinking it would get confusing to return data to the correct places, especially when dealing with multiple asynchronous calls. Another thought I had was a separate subclass for each method, with some sort of factory brokering access, but I'm thinking that might be overengineering the situation.
(note: not asking for what method calls to use/how to parse response/etc, looking for a high level design pattern solution to the general problem)
I recently came across the same problem. While I don't believe my solution to be optimal, it may help you out.
I created a web service manager and an endpoint protocol. Each object that implements the endpoint protocol is responsible for connecting to a web service endpoint (method), parsing the returned data, and notifying its delegate (usually the web service manager) of completion or any errors. I ended up creating an EndpointBase class that I use 99% of the time.
The web service manager is responsible for instantiating the endpoints as needed and invoking them. All of the calls happen asynchronously.
All in all it seems to work pretty well for me. I did end up with a situation in which one endpoint relied on the response of another (I used the command pattern there).
SDK Components that you'll want to look at are:
NSURLConnection
NSXMLParser
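Since the pattern itself is language-neutral, here is the rough shape of that manager/endpoint design sketched in Scala rather than Objective-C (all names are illustrative):

```scala
import scala.concurrent.{ExecutionContext, Future}

// Each endpoint knows its own URL and how to parse its own response,
// mirroring the endpoint-protocol role described above.
trait Endpoint[A] {
  def url: String
  def parse(raw: String): A
}

// The manager instantiates/invokes endpoints; calls are asynchronous, and
// completion or failure is reported through the returned Future.
final class WebServiceManager(fetch: String => Future[String])
                             (implicit ec: ExecutionContext) {
  def invoke[A](endpoint: Endpoint[A]): Future[A] =
    fetch(endpoint.url).map(endpoint.parse)
}
```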
Factories? We don't need no stinkin' factories.
I've done this a few times, and I basically do what you're saying: one object that provides methods for all the web service calls, encapsulating the details of communicating with the service, handling connection issues, etc. In one app it was a singleton, because it needed to keep session state; in another app it was just a collection of static methods.
Along with some formatting of the response data, that's the entirety of its responsibility.
Whether the call is synchronous or asynchronous is left up to the callers; the class itself is written synchronously, and a caller just uses it on a separate thread if necessary. Cocoa's performSelector... methods make that easy.
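That "written synchronously, made asynchronous by the caller" split might look like this in Scala (hypothetical names):

```scala
import scala.concurrent.{ExecutionContext, Future}

final class ServiceClient {
  // Written synchronously; in reality this would block on the network call.
  def fetchProfile(id: Int): String = s"profile-$id"
}

object Caller {
  // The caller decides to go asynchronous by running the call off-thread.
  def fetchAsync(client: ServiceClient, id: Int)
                (implicit ec: ExecutionContext): Future[String] =
    Future(client.fetchProfile(id))
}
```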
If REST is a good fit for your data interactions, then I would suggest the ObjectiveResource library. It's designed to work seamlessly with a Ruby on Rails app, but it basically speaks JSON or POX (plain old XML) over HTTP using Rails ActiveResource conventions.
It's basically a set of categories on NSObject and some of the primitive object types that will let you make calls like [Dog findAllRemote] to return a list of Dog objects, or [myDog saveRemote] to send changes made to the myDog object back to the server.

IoC and events

I am having a really hard time reconciling IoC, interfaces, and events. Let's see if I can explain this without writing a book.
I'm just getting started with IoC and I'm playing with Spring. We have a simple data layer that was built long before EF or the others. One of the classes is a DBProcedure which has some methods and events.
I created an IDBProcedure interface that the 'real' DBProcedure class implements. In TDD fashion I'd like to be able to swap out the 'real' DBProcedure class for another that implements the same interface for testing. To me, this means that the IDBProcedure interface should be defined in a different namespace/project than my data layer, right?
But a DBProcedure can raise some events, and those events deliver custom EventArgs-derived classes. Does that mean the EventArgs classes need to be defined outside the data layer too? It seems like they do, to make the interface work, but that seems bad because it spreads data-layer-ness around?
On the other hand maybe I have the wrong idea - is it ok to include the data layer namespace when I'm testing to get interface and event definitions even though I'm not using any of the 'real' classes?
Yes, you need to move the interfaces and all the types they depend on somewhere else, because you do not want the interfaces module to depend on the implementations.
The typical choice is one of two alternatives:
Impl ----> Api <---- client
(Implementation depends on api, client depends on api, everything in api module)
Impl ----> Api <----- client
   \        |        /
    \       v       /
     ---> Model <---
Here everyone depends on a common "model" module, which contains the enums and such. The advantage of this version is that you can have multiple API modules share the same common enums and other artifacts. (Because you really don't want APIs to depend on other API modules, usually.)
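A sketch of that second layout in Scala packages (names invented for illustration): the interface and the event-payload types it mentions live in shared modules, and only the implementation stays in the data layer:

```scala
// Shared "model" module: the payload type the interface's events carry
// (the analogue of the custom EventArgs-derived classes).
package model {
  case class ProcedureCompleted(rowsAffected: Int)
}

// Shared "api" module: depends only on model.
package api {
  import model.ProcedureCompleted
  trait DbProcedure {
    def execute(): Unit
    def onCompleted(handler: ProcedureCompleted => Unit): Unit
  }
}

// Data-layer "impl" module: depends on api and model; nothing depends on it.
package impl {
  import api.DbProcedure
  import model.ProcedureCompleted

  final class RealDbProcedure extends DbProcedure {
    private var handlers = List.empty[ProcedureCompleted => Unit]
    def onCompleted(h: ProcedureCompleted => Unit): Unit = handlers ::= h
    def execute(): Unit = handlers.foreach(_(ProcedureCompleted(rowsAffected = 1)))
  }
}
```

Test code can then depend on api and model alone, which addresses the original concern about spreading data-layer-ness around.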