How can I use PriorityBlockingQueue with ListeningExecutorService? - guava

Since Guava's ListeningExecutorService is implemented by wrapping an existing ExecutorService, it 'decorates' the task by intercepting the execute() method. That means that if I want to use a custom PriorityQueue on the underlying ExecutorService, my comparator "sees" the decorated task as a ListenableFutureTask object instead of the original.
Is there a way to get hold of the task that it wraps, so that the queue's comparator can use the task's weight to determine ordering?

I assume that you're concerned with submit() rather than execute()? (See the bottom of my response.)
With a ListeningExecutorService from MoreExecutors.listeningDecorator (the kind of wrapper you refer to), you're out of luck. listeningDecorator, like most ExecutorService implementations, wraps any input to submit in a FutureTask. The normal solution to this problem is to implement AbstractExecutorService and override newTaskFor to return a custom object. That should work here, too. You'll basically be reimplementing listeningDecorator, which is a fairly trivial wrapper around AbstractListeningExecutorService, which is itself a fairly trivial wrapper around AbstractExecutorService.
There are a couple of complications. (OK, there might be more. I admit that I haven't tested the approach I'm suggesting.)
AbstractListeningExecutorService doesn't allow you to override newTaskFor. (Why? I can explain if you'd like to file a feature request.) As a result, you'll have to extend AbstractExecutorService directly, largely duplicating the (short) AbstractListeningExecutorService implementation.
newTaskFor has to return a ListenableFuture that's also Comparable. The obvious choice for a ListenableFuture is ListenableFutureTask, but that class is final, so you can't make instances Comparable. The solution is to create a ListenableFutureTask and wrap it in a SimpleForwardingListenableFuture that implements Comparable.
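Putting those two pieces together, the wrapper might look roughly like this. (A sketch only, untested; PrioritizedTask, weight and weightOf are illustrative names, not Guava API.)

import java.util.concurrent.RunnableFuture;
import com.google.common.util.concurrent.ForwardingListenableFuture.SimpleForwardingListenableFuture;
import com.google.common.util.concurrent.ListenableFutureTask;

class PrioritizedTask<T> extends SimpleForwardingListenableFuture<T>
        implements RunnableFuture<T>, Comparable<PrioritizedTask<?>> {

    private final ListenableFutureTask<T> delegate;
    private final int weight; // whatever your comparator needs to order tasks

    PrioritizedTask(ListenableFutureTask<T> delegate, int weight) {
        super(delegate);
        this.delegate = delegate;
        this.weight = weight;
    }

    @Override
    public void run() {
        delegate.run();
    }

    @Override
    public int compareTo(PrioritizedTask<?> other) {
        return Integer.compare(weight, other.weight);
    }
}

Your AbstractExecutorService subclass would then override newTaskFor to return such a wrapper, e.g. new PrioritizedTask<T>(ListenableFutureTask.create(callable), weightOf(callable)), where weightOf is however you extract the weight from your tasks.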
Why do I assume you're dealing with submit() rather than execute()?
listeningDecorator(...).execute() doesn't wrap the input task, as shown by this test I just wrote:
public void testListeningDecorator_noWrapExecuteTask() {
    ExecutorService delegate = mock(ExecutorService.class);
    ListeningExecutorService service = listeningDecorator(delegate);
    Runnable task = new Runnable() {
        @Override
        public void run() {}
    };
    service.execute(task);
    verify(delegate).execute(task);
}

Excluding member functions and inheritance, what are some of the most common programming patterns for adding functionality to a class?

There're likely no more than 2-4 widely used approaches to this problem.
I have a situation in which there's a common class I use all over the place, and (on occasion) I'd like to give it special abilities. For argument's sake, let's say that type checking is not a requirement.
What are some means of giving functionality to a class without it being simply inheritance or member functions?
One way I've seen is the "decorator" pattern in which a sort of mutator wraps around the class, modifies it a bit, and spits out a version of it with more functions.
Another one I've read about but never used is for gaming. It has something to do with entities and power-ups/augments. I'm not sure about the specifics, but I think they have a list of them.
???
I don't need specific code of a specific language so much as a general gist and some keywords. I can implement from there.
So as far as I understand, you're looking to extend an interface to allow client-specific implementations that may require additional functionality, and you want to do so in a way that doesn't clutter up the base class.
As you mentioned, for simple systems, the standard way is to use the Adaptor pattern: subclass the "special abilities", then call that particular subclass when you need it. This is definitely the best choice if the extent of the special abilities you'll need to add is known and reasonably small, i.e. you generally only use the base class, but for three-to-five places where additional functionality is needed.
But I can see why you'd want some other possible options, because rarely do we know upfront the full extent of the additional functionality that will be required of the subclasses (i.e. when implementing a Connection API or a Component Class, each of which could be extended almost without bound). Depending on how complex the client-specific implementations are, how much additional functionality is needed and how much it varies between the implementations, this could be solved in a variety of ways:
Decorator Pattern as you mentioned (useful in the case where the special entities are only ever expanding the pre-existing methods of the base class, without adding brand new ones)
class MyClass{};
DecoratedClass = decorate(MyClass);
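In Java, a minimal sketch of that decorator idea (the Greeter names here are purely illustrative) could look like:

interface Greeter {
    String greet();
}

class PlainGreeter implements Greeter {
    public String greet() { return "hello"; }
}

// Wraps any existing Greeter and expands its behavior without adding new methods.
class ShoutingGreeter implements Greeter {
    private final Greeter inner;

    ShoutingGreeter(Greeter inner) { this.inner = inner; }

    public String greet() { return inner.greet().toUpperCase() + "!"; }
}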
A combined AbstractFactory/Adaptor builder for the subclasses (useful for cases where there are groupings of functionality in the subclasses that may differ in their implementations)
interface Button {
void paint();
}
interface GUIFactory {
Button createButton();
}
class WinFactory implements GUIFactory {
public Button createButton() {
return new WinButton();
}
}
class OSXFactory implements GUIFactory {
public Button createButton() {
return new OSXButton();
}
}
class WinButton implements Button {
public void paint() {
System.out.println("I'm a WinButton");
}
}
class OSXButton implements Button {
public void paint() {
System.out.println("I'm an OSXButton");
}
}
class Application {
public Application(GUIFactory factory) {
Button button = factory.createButton();
button.paint();
}
}
public class ApplicationRunner {
public static void main(String[] args) {
new Application(createOsSpecificFactory());
}
public static GUIFactory createOsSpecificFactory() {
int sys = readFromConfigFile("OS_TYPE");
if (sys == 0) return new WinFactory();
else return new OSXFactory();
}
}
The Strategy pattern could also work, depending on the use case, but that would be a heavier lift with a preexisting base class that you don't want to change, and it depends on whether it really is a strategy that varies between those subclasses. The Visitor pattern could also fit, but it has the same problem and would involve a major change to the architecture around the base class.
class MyClass{
public sort() { Globals.getSortStrategy()() }
};
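A small Java sketch of the Strategy idea (SortableThing and the Comparator-based strategy are illustrative assumptions):

import java.util.Comparator;
import java.util.List;

class SortableThing {
    private final List<String> items;
    private Comparator<String> sortStrategy = Comparator.naturalOrder();

    SortableThing(List<String> items) { this.items = items; }

    // The "special ability" is injected rather than hard-coded into the class.
    void setSortStrategy(Comparator<String> strategy) { this.sortStrategy = strategy; }

    void sort() { items.sort(sortStrategy); }
}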
Finally, if the "special abilities" needed are enough (or could eventually be enough) to justify a whole new interface, this may be a good time to use the Extension Objects pattern. It does make your clients or subclasses more complex, though, as they have to manage a lot more: checking that the specific extension object and its required methods exist, and so on.
class MyClass{
public addExtension(addMe) {
addMe.initialize(this);
}
public getExtension(getMe);
};
(new MyClass()).getExtension("wooper").doWoop();
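A rough Java sketch of that Extension Objects idea (the string-keyed lookup and the names are assumptions; the client still has to check for and cast the extension it asks for):

import java.util.HashMap;
import java.util.Map;

interface Extension {
    void initialize(ExtensibleClass owner);
}

class ExtensibleClass {
    private final Map<String, Extension> extensions = new HashMap<>();

    void addExtension(String name, Extension extension) {
        extension.initialize(this);
        extensions.put(name, extension);
    }

    Extension getExtension(String name) {
        return extensions.get(name); // may be null: callers must check before use
    }
}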
With all that being said, keep it as simple as possible: sometimes you just have to write the specific subclasses or a few adaptors and be done, especially with a preexisting class already in use in many other places. You also have to ask how much you want to leave the class open for further extension. It might be worthwhile to keep the tech debt low with an abstract factory, so fewer changes are needed when you add more functionality down the road. Or maybe what you really want is to lock the class down to prevent further extension, for the sake of understandability and simplicity. You have to examine your use case, future plans, and existing architecture to decide on the path forward. More than likely there are lots of right answers and only a couple of very wrong ones, so weigh the options, pick one that feels right, then implement and push code.
In my experience, adding functions to a class after the fact is a bit of a dead end. There are ways, but it always seems to get ugly, because the class is meant to be itself and nothing else.
What has been more approachable is to add references to functions to an object or map.

What is a pointer language?

I am in the process of trying to gain a clear understanding of what a callback is. I came across this post: what-is-a-callback-function. The user 8bitjunkie, who answered the question, mentioned that callbacks are named as such because of how they are used in pointer languages. My initial assumption based on the name led me to think that a pointer language is a language where pointers can be directly manipulated. So I would like to know whether C++ is a pointer language and, if my initial assumption was incorrect, what a pointer language is. As far as I can tell it does not seem to be a typical language-agnostic term; if it is, it is buried under results relating to the usage of pointers.
Callbacks are not unique to languages that allow direct manipulation of pointers - but that is what a "Pointer Language" is. I will focus my answer on what callbacks are because that seems to be your main confusion.
Callbacks are available in Java, Python, JavaScript, and many other languages that hide pointers from you.
A callback is just a function that is passed along so it can be executed when another function finishes. Generally this is useful for asynchronous tasks, because it allows you to respond to the task in a specific way without blocking.
For an example I will use Java - a language with managed memory and no direct access to pointers. The more native way to implement callbacks is with function pointers, and I think that is what your article meant by "pointer languages." But I'd rather show you what a callback is and how to use one without pointers in one fell swoop, so Java it is.
In this example we will have an interface defined like this.
public interface CallBack {
    public void onFinished(boolean success);
}
This callback interface allows us to declare an object with a predefined method that will respond to either success or failure. We can then define a Runnable class like this.
public class CBObject implements Runnable {
    private CallBack myCallback;

    public CBObject(CallBack myCallback) {
        this.myCallback = myCallback;
    }

    public void run() {
        boolean success = false;
        // do some stuff, set success = true if it works
        myCallback.onFinished(success); // this calls the callback
    }
}
Then if we want to use this callback we will do something like this.
public void doSomethingAsynchronous(CallBack callback) {
    CBObject cb = new CBObject(callback);
    Thread task = new Thread(cb);
    task.start();
}
This will run this task asynchronously but allow the user to react to its success or failure.
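A caller (in the same class that defines doSomethingAsynchronous) might then use it with an anonymous callback, something like:

doSomethingAsynchronous(new CallBack() {
    @Override
    public void onFinished(boolean success) {
        if (success) {
            System.out.println("Task finished successfully");
        } else {
            System.out.println("Task failed");
        }
    }
});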
I hope this helps!

Is it ok to put methods/fields to base class that will only be used by some of the derived classes

This is a bit of a generic software design question. Suppose you have a base class and lots of classes that derive from it (around 10).
There is some common functionality that is shared between some of the classes (3-4 of the derived classes need it): a field for a UI control, an abstract method to create the UI control, and common code (8-9 lines) that uses the abstract method to recycle the UI piece. Something like this:
abstract class BaseClass {
    ...
    protected UIControl control;

    protected abstract UIControl CreateUI();

    protected void RecycleUI() {
        if (/* some condition is met */) {
            if (this.control != null) {
                control.Dispose();
            }
            this.control = this.CreateUI();
            this.AddToUITree(control);
        }
    }
    ...
}
Do you think it is OK to put this in the base class instead of replicating the code in the derived classes?
The drawback is that this piece of code is only used by some of the derived classes and is completely irrelevant to the others.
One alternative is to create an intermediate class that derives from BaseClass and use it as the base for the ones that need the functionality. Creating a derived class for a couple of lines of code with a very specific purpose feels heavy, and it doesn't seem worth interrupting the inheritance tree for this. We try to keep the hierarchy as simple as possible so that it is easy to follow and understand. Maybe if this were C++, where multiple inheritance is an option, it wouldn't be a big issue, but multiple inheritance is not available.
Another option is to create a utility method and an interface to create/update the UI control:
interface UIContainer {
    UIControl CreateUIControl();
    UIControl GetUIControl();
    void SetUIControl(UIControl control);
    void AddToUITree(UIControl control);
}

class UIControlUtil {
    public void RecycleUI(UIContainer container) {
        if (/* some condition is met */) {
            if (container.GetUIControl() != null) {
                container.GetUIControl().Dispose();
            }
            UIControl control = container.CreateUIControl();
            container.SetUIControl(control);
            container.AddToUITree(control);
        }
    }
}
I don't like this option because it bleeds UI logic outside the class, which is less safe since the UI state can now be manipulated externally. The derived classes also have to implement the getter/setter. One advantage is that there is another class outside the aforementioned inheritance tree that needs this functionality, and it could use the utility function as well.
Do you have any other suggestions? Should I just suppress the urge to avoid repeating common code?
One alternative is to create an intermediate class that derives from BaseClass and use it as the base to the ones that need the functionality.
Well, this is what I thought would be most appropriate. But it depends. The main question here is the following: are the objects that require UI recycling really different from those that do not? If they are really different, you should create a new base class for them. If the difference is negligible, I think it's OK to leave things in the base class.
Do not forget about LSP (the Liskov Substitution Principle).
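If you do split them, a rough sketch of the intermediate class (shown here in Java for illustration; the question's code looks like C#, but the shape is the same, and UIControl is just a stand-in for the question's type):

interface UIControl { void dispose(); } // stand-in for the question's UI control type

abstract class BaseClass {
    // ... members shared by all derived classes ...
}

abstract class UIRecyclingBase extends BaseClass {
    protected UIControl control;

    protected abstract UIControl createUI();
    protected abstract boolean shouldRecycle(); // stands in for "some condition is met"
    protected abstract void addToUITree(UIControl control);

    protected void recycleUI() {
        if (shouldRecycle()) {
            if (control != null) {
                control.dispose();
            }
            control = createUI();
            addToUITree(control);
        }
    }
}

Only the 3-4 classes that actually recycle UI would derive from UIRecyclingBase; the rest keep deriving from BaseClass directly.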
We try to keep the hierarchy as simple as possible so that it is easy to follow and understand the inheritance tree
I think what matters more here is to keep things not only simple, but also close to the real-world concepts you are modeling, so that adding new entities stays easy. What seems easy now may cause real trouble in the future.

Testing GWTP presenter with asynchronous calls

I'm using GWTP, adding a Contract layer to abstract the knowledge between Presenter and View, and I'm pretty satisfied of the result with GWTP.
I'm testing my presenters with Mockito.
But as time passed, I found it was hard to maintain a clean presenter with its tests.
There are some refactoring stuff I did to improve that, but I was still not satisfied.
I found the following to be the heart of the matter:
My presenters often need asynchronous calls, or more generally calls to another object's method with a callback to continue the presenter flow (and these are usually nested).
For example :
this.populationManager.populate(new PopulationCallback()
{
    public void onPopulate()
    {
        doSomeStufWithTheView(populationManager.get());
    }
});
In my tests, I ended up verifying the populate() call on the mocked PopulationManager object, and then writing another test for the doSomeStufWithTheView() method.
But I discovered rather quickly that this was bad design: any change or refactoring ended up breaking a lot of my tests and forced me to recreate others from scratch, even though the presenter's functionality did not change!
Plus, I didn't test whether the callback was effectively what I wanted.
So I tried to use Mockito's doAnswer method so as not to break my presenter testing flow:
doAnswer(new Answer(){
    public Object answer(InvocationOnMock invocation) throws Throwable
    {
        Object[] args = invocation.getArguments();
        ((PopulationCallback)args[0]).onPopulate();
        return null;
    }
}).when(this.populationManager).populate(any(PopulationCallback.class));
I factored the code so it is less verbose (and internally less dependent on the argument position):
doAnswer(new PopulationCallbackAnswer())
.when(this.populationManager).populate(any(PopulationCallback.class));
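(For reference, such a PopulationCallbackAnswer can be as small as the sketch below; the real class may differ, but the idea is that it finds the callback among the invocation arguments and fires it.)

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

class PopulationCallbackAnswer implements Answer<Object> {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        for (Object arg : invocation.getArguments()) {
            if (arg instanceof PopulationCallback) {
                ((PopulationCallback) arg).onPopulate();
            }
        }
        return null;
    }
}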
So while mocking the populationManager, I could still test the flow of my presenter, basically like this:
@Test
public void testSomeStuffAppends()
{
    // Given
    doAnswer(new PopulationCallbackAnswer())
        .when(this.populationManager).populate(any(PopulationCallback.class));

    // When
    this.myPresenter.onReset();

    // Then
    verify(populationManager).populate(any(PopulationCallback.class)); // That was before
    verify(this.myView).displaySomething(); // Now I can do that.
}
I am wondering whether this is a good use of the doAnswer method, or whether it is a code smell and a better design could be used.
Usually, my presenters tend to just use other objects (somewhat like the Mediator pattern) and interact with the view. I have some presenters with several hundred (~400) lines of code.
Again, is that a sign of bad design, or is it normal for a presenter to be verbose (because it is coordinating other objects)?
Has anyone heard of a project that uses GWTP and tests its presenters cleanly?
I hope I have explained this comprehensibly.
Thank you in advance.
PS: I'm pretty new to Stack Overflow and my English is still lacking; if my question needs improvement, please tell me.
You could use ArgumentCaptor; check out this blog post for more details.
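For example, something along these lines (a sketch assuming the same presenter, view and populationManager mocks as in the question): the captor grabs the callback the presenter handed to the mock, and the test then invokes it explicitly.

@Test
public void testSomeStuffAppends_withCaptor()
{
    ArgumentCaptor<PopulationCallback> captor =
        ArgumentCaptor.forClass(PopulationCallback.class);

    // When
    this.myPresenter.onReset();

    // Then
    verify(this.populationManager).populate(captor.capture());
    captor.getValue().onPopulate();         // drive the captured callback by hand
    verify(this.myView).displaySomething(); // and assert on the resulting view update
}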
If I understood correctly, you are asking about design/architecture.
This shouldn't be counted as an answer; it's just my thoughts.
Say I have the following code:
public void loadEmoticonPacks() {
    executor.execute(new Runnable() {
        public void run() {
            pack = loadFromServer();
            savePackForUsageAfter();
        }
    });
}
I usually don't worry about the executor and just check that the method does its concrete job of loading and saving. The executor here is just an instrument to keep long operations off the UI thread.
If I have something like:
accountManager.setListener(this);
....
public void onAccountEvent(AccountEvent event) {
    ....
}
I would first check that we subscribed for events (and unsubscribed on destruction), and also check that onAccountEvent handles the expected scenarios.
UPD1. In example 1, it would probably be better to extract a loadFromServerAndSave method, check that it is not executed on the UI thread, and check that it does everything as expected.
UPD2. It's better to use a framework like Guava's EventBus for event processing.
We are using this doAnswer pattern in our presenter tests as well, and usually it works just fine. One caveat though: if you test it like this, you are effectively removing the asynchronous nature of the call; the callback is executed immediately after the server call is initiated.
This can lead to undiscovered race conditions. To check for those, you could make this a two-step process: when the server call is made, the answer method only saves the callback. Then, at the appropriate point in your test, you call something like flush() or onSuccess() on your answer (I would suggest making a utility class for this that can be reused in other circumstances), so that you control when the callback for the result is really called.
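A rough sketch of that two-step Answer (names like DeferredPopulationAnswer and flush() are illustrative, not from any library):

import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;

class DeferredPopulationAnswer implements Answer<Object> {
    private PopulationCallback pending;

    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        // Only record the callback; do not fire it yet.
        pending = (PopulationCallback) invocation.getArguments()[0];
        return null;
    }

    void flush() {
        if (pending != null) {
            pending.onPopulate(); // now the "server" answers, at a point the test controls
        }
    }
}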

Class design: file conversion logic and class design

This is pretty basic, but it is a fairly generic issue, so I want to hear other people's thoughts. I have a situation where I need to take an existing MSI file, update it with a few standard modifications, and spit out a new MSI file (a duplicate of the old file with changes).
I started writing this with a few public methods and a basic input path for the original MSI. The thing is, for this to work properly, a strict sequence of calls has to be followed by the caller:
var custom = new CustomPackage(sourcemsipath);
custom.Duplicate(targetmsipath);
custom.Upgrade();
custom.Save();
custom.WriteSmsXmlFile(targetxmlpath);
Would it be better to put all the conversion logic as part of the constructor instead of making them available as public methods? (in order to avoid having the caller have to know what the "proper order" is):
var custom = new CustomPackage(sourcemsipath, targetmsipath); // saves converted msi
custom.WriteSmsXmlFile(targetxmlpath); // saves optional xml for sms
The constructor would then directly duplicate the MSI file, upgrade it and save it to the target location. WriteSmsXmlFile is still a public method since it is not always required.
Personally I don't like to have the constructor actually "do stuff" - I prefer to be able to call public methods, but it seems wrong to assume that the caller should know the proper order of calls?
An alternative would be to duplicate the file first, and then pass the duplicated file to the constructor - but it seems better to have the class do this on its own.
Maybe I got it all backwards and need two classes instead: SourcePackage, TargetPackage and pass the SourcePackage into the constructor of the TargetPackage?
I'd go with your first thought: put all of the conversion logic into one place. No reason to expose that sequence to users.
Incidentally, I agree with you about not putting actions into a constructor. I'd probably not do this in the constructor, and instead do it in a separate converter method, but that's personal taste.
It may be just me, but the thought of a constructor doing all these things makes me shiver. But why not provide a static method, which does all this:
public class CustomPackage
{
    private CustomPackage(String sourcePath)
    {
        ...
    }

    public static CustomPackage Create(String sourcePath, String targetPath)
    {
        var custom = new CustomPackage(sourcePath);
        custom.Duplicate(targetPath);
        custom.Upgrade();
        custom.Save();
        return custom;
    }
}
The actual advantage of this method is that you won't have to hand out an instance of CustomPackage unless the conversion process actually succeeded (save for the optional parts).
Edit: In C#, this factory method can even be used (via delegates) as a "true" factory according to the Factory pattern:
public interface ICustomizedPackage
{
    ...
}

public class CustomPackage : ICustomizedPackage
{
    ...
}

public class Consumer
{
    public delegate ICustomizedPackage Factory(String source, String target);

    private Factory factory;

    public Consumer(Factory factory)
    {
        this.factory = factory;
    }

    private ICustomizedPackage CreatePackage()
    {
        return factory.Invoke(..., ...);
    }
    ...
}
and later:
new Consumer(CustomPackage.Create);
You're right to think that the constructor shouldn't do any more work than to simply initialize the object.
Sounds to me like what you need is a Convert(targetmsipath) function that wraps the calls to Duplicate, Upgrade and Save, thereby removing the need for the caller to know the correct order of operations, while at the same time keeping the logic out of the constructor.
You can also overload it to include a targetxmlpath parameter that, when supplied, also calls the WriteSmsXmlFile function. That way all the related operations are called from the same function on the caller's side and the order of operations is always correct.
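A minimal sketch of that Convert wrapper, written in Java for illustration (the question's code looks like C#, but the shape is identical; the method bodies are left as stubs):

public class CustomPackage {
    public CustomPackage(String sourceMsiPath) { /* ... */ }

    public void convert(String targetMsiPath) {
        duplicate(targetMsiPath);
        upgrade();
        save();
    }

    // Overload that also writes the optional SMS XML file.
    public void convert(String targetMsiPath, String targetXmlPath) {
        convert(targetMsiPath);
        writeSmsXmlFile(targetXmlPath);
    }

    public void writeSmsXmlFile(String targetXmlPath) { /* ... */ }

    private void duplicate(String targetMsiPath) { /* ... */ }
    private void upgrade() { /* ... */ }
    private void save() { /* ... */ }
}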
In such situations I typically use the following design:
var task = new Task(src, dst); // required params goes to constructor
task.Progress = ProgressHandler; // optional params setup
task.Run();
I think there are service-oriented ways and object-oriented ways.
The service-oriented way would be to create a series of filters that pass along an immutable data transfer object (entity):
var service1 = new Msi1Service();
var msi1 = service1.ReadFromFile(sourceMsiPath);
var service2 = new MsiCustomService();
var msi2 = service2.Convert(msi1);
service2.WriteToFile(msi2, targetMsiPath);
service2.WriteSmsXmlFile(msi2, targetXmlPath);
The object-oriented way can use the decorator pattern:
var decoratedMsi = new CustomMsiDecorator(new MsiFile(sourceMsiPath));
decoratedMsi.WriteToFile(targetMsiPath);
decoratedMsi.WriteSmsXmlFile(targetXmlPath);