Let's assume we have a function that returns a list of apples in our warehouse:
List<Apple> getApples();
After the application has been in use for some time, we've found a bug: in rare cases clients of this function get food poisoning because some of the apples returned are not ripe yet.
However, another set of clients does not care about ripeness at all; they use this function simply to learn about all available apples.
The naive way of solving this problem would be to add a 'ripeness' member to the apple, find all places where unripe apples can cause problems, and add checks there:
const auto apples = getApples();
for (const auto& apple : apples) {
    if (apple.isRipe()) {
        consume(apple);
    }
}
However, if we correlate this new requirement of having ripe apples with the way class interfaces are usually designed, we might find that we need a new interface which is a subset of a more generic one:
List<Apple> getRipeApples();
which basically extends the getApples() interface by filtering out the apples that are not ripe.
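For illustration, in Java the new interface could simply delegate to the generic one and filter (AppleSource and the default method are only a sketch):

interface AppleSource {
    List<Apple> getApples();

    default List<Apple> getRipeApples() {
        return getApples().stream()
                          .filter(Apple::isRipe)
                          .collect(Collectors.toList());
    }
}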
So the questions are:
Is this the correct way of thinking?
Should the old interface (getApples) remain unchanged?
How will this approach scale if we later figure out that some customers are allergic to red/green/yellow apples (getRipeNonRedApples)?
Are there any other alternative ways of modifying the API?
One constraint, though: how do we minimize the probability of an inexperienced/inattentive developer calling getApples instead of getRipeApples? Subclass Apple with RipeApple? Downcast in getRipeApples?
A pattern often found among Java people is the idea of versioned capabilities.
You have something like:
interface Capability ...
interface AppleDealer {
    List<Apple> getApples();
}
and in order to retrieve an AppleDealer, there is some central service like
public <T> T getCapability(Class<T> type);
So your client code would be doing:
AppleDealer dealer = service.getCapability(AppleDealer.class);
When the need for another method comes up, you go:
interface AppleDealerV2 extends AppleDealer { ...
And clients that want V2 just do a `getCapability(AppleDealerV2.class)` call. Those that don't care don't have to modify their code!
Please note: of course, this only works for extending interfaces. You can't use this approach to change signatures or to remove methods from existing interfaces.
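A minimal sketch of the V2 extension and a client using it (getRipeApples is illustrative):

interface AppleDealerV2 extends AppleDealer {
    List<Apple> getRipeApples();
}

// Clients that need ripeness ask for the newer capability explicitly
AppleDealerV2 dealerV2 = service.getCapability(AppleDealerV2.class);
List<Apple> ripe = dealerV2.getRipeApples();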
Regarding your questions 3/4: I go with MaxZoom there, but to be precise: I would very much recommend that "flags" be something like List<String>, or List<Integer> (for 'real' int-like flags), or even Map<String, Object>. In other words: if you really don't know what kinds of conditions might come up over time, go for interfaces that work for everything, like one where you can pass a map of "keys" and "expected values" for the different keys. If you go for pure enums there, you quickly run into similar "versioning" issues.
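For example, a criteria-map method might look like this (the method and the keys are hypothetical):

List<Apple> getApples(Map<String, Object> criteria);

// The caller picks only the keys it cares about; unknown keys can be ignored or rejected
Map<String, Object> criteria = new HashMap<>();
criteria.put("ripe", true);
criteria.put("color", "red");
List<Apple> ripeRed = dealer.getApples(criteria);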
Alternatively: consider allowing your clients to do the filtering themselves; with Java 8 you can think of Predicates, lambdas and all that stuff.
Example:
Predicate<Apple> applePredicate = new Predicate<Apple>() {
    @Override
    public boolean test(Apple a) {
        return a.getColour() == AppleColor.GoldenPoisonFrogGolden;
    }
};
List<Apple> myApples = dealer.getApples(applePredicate);
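With a Java 8 lambda the same filter collapses to a single line (still assuming a getApples(Predicate) overload on the dealer):

List<Apple> myApples = dealer.getApples(a -> a.getColour() == AppleColor.GoldenPoisonFrogGolden);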
IMHO creating a new class/method for every possible Apple combination will result in code pollution. The situation described in your post could be handled gracefully by introducing a flags parameter:
List<Apple> getApples(); // keep for backward compatibility
List<Apple> getApples(FLAGS); // use flag as a filter
Possible flags:
RED_FLAG
GREEN_FLAG
RIPE_FLAG
SWEET_FLAG
So a call like below could be possible:
List<Apple> getApples(RIPE_FLAG | RED_FLAG | SWEET_FLAG);
that will produce a list of apples that are ripe, red, and sweet.
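In Java, an enum plus EnumSet reads more cleanly than raw bit flags; a sketch (AppleTrait and the overload are illustrative):

enum AppleTrait { RED, GREEN, RIPE, SWEET }

List<Apple> getApples(Set<AppleTrait> traits); // return only apples that have all the given traits

// Caller side:
List<Apple> result = dealer.getApples(EnumSet.of(AppleTrait.RIPE, AppleTrait.RED, AppleTrait.SWEET));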
In Cucumber (the Ruby version) you can easily call steps from other steps and thus build hierarchical libraries of steps, making it easy to write the Gherkin feature specifications in the most generic terms.
However, it is not readily apparent how to do this in Cucumber-JVM, and I have been unable to find documentation for it.
Let me be clear: I am not interested in calling the step implementation method directly, because I don't want to have to know its signature, nor to change the call every time the implementation changes.
Rather, I want to pass an arbitrary string that will go through the regex matcher and automatically find the matching step and execute it. Just as the engine runs all steps.
A simple example of what I would expect the syntax to look like, defining the synonym "logout":
When("user logs out") { () =>
d.executeScript("logout();")
}
When("logout") { () =>
Step("user logs out")
}
This functionality is not supported in Cucumber-JVM. (Note that the Cucumber Backgrounder document you link in your question describes using steps within steps as "an anti-pattern".)
Essentially, we believe that Cucumber is a collaboration tool and that Gherkin is not a programming language.
You can see a longer discussion of how we arrived at this decision here
To call steps within step definitions, extend cuke4duke.Steps in Java:
import cuke4duke.StepMother;
import cuke4duke.Steps;
import cuke4duke.annotation.I18n.EN.When;

public class CallingSteps extends Steps {
    public CallingSteps(StepMother stepMother) {
        super(stepMother);
    }

    @When("^I call another step$")
    public void iCallAnotherStep() {
        Given("it is magic"); // This will call a step defined somewhere else.
    }
}
Example:
https://github.com/cucumber-attic/cuke4duke/blob/master/examples/java/src/test/java/simple/CallingSteps.java
Note: cuke4duke supports Scala as well.
Calling steps within steps is a terrible anti-pattern that can easily be replaced by something much simpler.
Instead of one step calling another step, have both steps call the same helper method.
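For example (a sketch in Java; the step texts and the helper name are illustrative):

@When("^user logs out$")
public void userLogsOut() {
    logout();
}

@When("^logout$")
public void logoutSynonym() {
    logout();
}

// Both step definitions delegate to the same helper, which is plain code you can refactor freely
private void logout() {
    // ... drive the application here (UI call, service call, etc.)
}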
If you apply this pattern with rigour, you end up with:
step definitions that are all just single calls to helper methods
a suite of helper methods that collectively provide a test-api
The art of elegantly implementing your Cucumber scenarios now becomes a known programming problem, as all your functionality lives directly in code in your programming language rather than in some restrictive construct specific to Cucumber.
You can now
refactor your helper methods to provide cleaner interfaces
use parameters to give methods greater power
use naming to give all your calls greater clarity
use a helper method as an entry point to a suite of extra functionality
use delegation to move functionality out of helper methods and into test service objects
...
Providing this separation can be initially challenging if you are not a programmer or not experienced in the particular programming language in use. However, once you get past this initial hurdle, the code you can and should produce will be much easier to work with than the tangled mess that inevitably results from step nesting.
In Cucumber each step definition is a method, so you can call other methods from any step that you want:
#When("^click on \"([^\"]*)\"$")
public void clickOn(String arg1) throws Throwable {
driver.findElement(By.linkText(arg1)).click();
}
#Then("^should see the static elements changing$")
public void shouldSeeTheStaticElementsChanging() throws Throwable {
clickOn();
}
I'm searching for an elegant way to prevent specific EntityTypes from being expanded in BreezeJS. We have a (somewhat) public Web Service that we are exposing, and there are some tables that we don't want to be visible to some consumers of that service. Although we can expose Web API methods only for the tables we want visible, consumers of the service could still reach the hidden tables by expanding from related tables.
Note: I've posted an answer to this question, giving a work-around. However, I'm interested if anyone out there knows a more elegant way of skinning this particular cat.
On the UserVoice page for requesting this feature to be formally added to Breeze, Ward Bell suggests a decent work-around:
Meanwhile, in your controller you can examine the query string from the request for presence of $select and $expand and throw an exception if you see it.
I'm guessing this would look something like this:
[HttpGet]
public IQueryable<Widget> Widgets() {
    if (!string.IsNullOrEmpty(HttpContext.Current.Request.QueryString["$expand"]))
    {
        throw new Exception("Ah ah ah, you didn't say the magic word!");
    }
    return _contextProvider.Context.Widgets;
}
...to block all expands, or something more specific to block the expand of Features itself. This isn't too shabby, but not quite "elegant".
(Yes, that is a Jurassic Park reference.)
We are trying to use RequestFactory with an existing Java entity model. Our Java entities all implement a DomainObject interface and expose a getObjectId() method (this name was chosen because getId() can be ambiguous and conflict with the domain object's actual ID from the domain being modeled).
The ServiceLayerDecorator class allows for customization of ID and version property lookup strategies.
public class MyServiceLayerDecorator extends ServiceLayerDecorator {
    @Override
    public Object getId(Object object) {
        DomainObject domainObject = (DomainObject) object;
        return domainObject.getObjectId();
    }
}
So far, so good. However, trying to deploy this solution yields runtime errors. In particular, RequestFactoryInterfaceValidator complains:
[ERROR] There is no getId() method in type com.mycompany.server.MyEntity
Then later on:
[ERROR] Type type com.mycompany.client.MyEntityProxy was previously marked as bad
[ERROR] The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
[ERROR] Unexpected error
com.google.web.bindery.requestfactory.server.UnexpectedException: The type com.mycompany.client.MyEntityProxy did not pass RequestFactory validation
at com.google.web.bindery.requestfactory.server.ServiceLayerDecorator.die(ServiceLayerDecorator.java:212) ~[gwt-servlet.jar:na]
My question is: why does ServiceLayerDecorator allow for customized ID and version lookup strategies if RequestFactoryInterfaceValidator hardcodes the convention of getId() and getVersion()?
I guess I could override ServiceLayerDecorator.resolveClass() to ignore "poisoned" proxy classes but at this point it seems like I'm fighting the framework too much...
A couple of options, some of which have already been mentioned:
Locator. I like to make a single Locator for the entire project, or at least for groups of related objects that have similar key types (a sketch follows this list). The getId() call will be able to invoke your DomainObject.getObjectId() method and return that value. Note that the getDomainType() method is currently unused, and can return null or throw an exception.
ValueProxy. Instead of having your objects map to something RF can understand as an entity, map them to plain value objects - no id or version required. RF misses out on a lot of clever things it can do, especially with regard to avoiding sending redundant data to the server.
ServiceLayerDecorator. This worked pre 2.4, but with the annotation processing that goes on now, it works less well, since it tries to do some of the work for you. It seems ServiceLayerDecorator has lost a lot of its teeth in the last few months - in theory, you could use it to rebuild getters to talk directly to your persistence mechanism, but now that the annotation processing verifies your code, that is no longer an option.
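A minimal sketch of the Locator option (com.google.web.bindery.requestfactory.shared.Locator), assuming getObjectId() returns a Long and that a persistence facade and a getVersion() accessor exist; all names here are illustrative:

public class DomainObjectLocator extends Locator<DomainObject, Long> {
    @Override
    public DomainObject create(Class<? extends DomainObject> clazz) {
        try {
            return clazz.newInstance();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public DomainObject find(Class<? extends DomainObject> clazz, Long id) {
        return persistence.find(clazz, id); // hypothetical persistence facade
    }

    @Override
    public Class<DomainObject> getDomainType() {
        return DomainObject.class; // currently unused by RequestFactory
    }

    @Override
    public Long getId(DomainObject domainObject) {
        return domainObject.getObjectId(); // reuse the existing accessor
    }

    @Override
    public Class<Long> getIdType() {
        return Long.class;
    }

    @Override
    public Object getVersion(DomainObject domainObject) {
        return domainObject.getVersion(); // assumes such an accessor exists
    }
}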
The big issue in all of this is that RequestFactory is designed to solve a single problem, and solve it well: allow developers to use POJOs mapped to some persistence mechanism, and refer to those objects from the client, following certain conventions to avoid writing extra code or configuration.
As a result, it solves its own problem pretty well, and ends up being a bad fit for many other problems/use-cases. You might be finding that it isn't worth it; if so, here are a few thoughts you might consider:
RPC. It isn't perfect for much, but it does an okay job for a lot.
AutoBeans (which RF is based on) is still a pretty fast, lightweight way to send data over the wire and get it into the app. You could build your own wrapper around it, like RF has done, and slim down the problem it is trying to solve to just your use-case.
I am implementing a plugin system with Lua scripts for an application. Basically it will allow the users to extend the functionality by defining one or more functions in Lua. The plugin function will be called in response to an application event.
Are there some good open source plugin frameworks in Lua that can serve as a model?
In particular I wonder what is the best way to pass parameters to the plugin and receive the returned values, in a way that is both flexible and easy to use for the plugin writers.
Just to clarify, I am interested in the design of the API from the point of view of the script programming in Lua, not from the point of view of the hosting application.
Any other advice or best practices related to the design of a plugin system in Lua will be appreciated.
Lua's first-class functions make this kind of thing so simple that I think you won't find much in the way of frameworks. Remember that Lua's mantra is to provide minimal mechanism and let individual programmers work out policy for themselves.
Your question is very general, but here's what I recommend for your API:
A single plugin should be represented by a single Lua table (just as a Lua module is represented by a single table).
The fields of the table should contain the plugin's operations or callbacks.
Shared state should not be stored in the table; it should be stored in local variables of the code that creates the table, e.g.,
local initialized = false
return {
  init = function(self, t) ... ; initialized = true end,
  something_else = function(self, t)
    if not initialized then error(...) end
    ...
  end,
  ...
}
You'll also see that I recommend all plugin operations use the same interface:
The first argument to the plugin is the table itself
The only other argument is a table containing all other information needed by the operation.
Finally, each operation should return a result table.
The reason for passing and returning a single table instead of positional results is that it will help you keep code compatible as interfaces evolve.
In summary, use tables and first-class functions aggressively, and protect your plugin's private state.
The plugin function will be called in response to an application event.
That suggests the observer pattern. For example, if your app has two events, 'foo' and 'bar', you could write something like:
HostApp.listeners = {
  foo = {},
  bar = {},
}

function HostApp:addListener(event, listener)
  table.insert(self.listeners[event], listener)
end

function HostApp:notifyListeners(event, ...)
  for _, listener in pairs(self.listeners[event]) do
    listener(...)
  end
end
Then when the foo event happens:
self:notifyListeners('foo', 'apple', 'donut')
A client (e.g. a plugin) interested in the foo event would just register a listener for it:
HostApp:addListener('foo', function(...)
  print('foo happened!', ...)
end)
Extend to suit your needs.
In particular I wonder what is the best way to pass parameters to the plugin and receive the returned values
The plugin just supplies you a function to call. You can pass any parameters you want to it, and process its return values however you wish.
IMHO, there are two techniques to handle a query for a resource:
For HTTP GET you can override represent(Variant variant) or handleGet().
For HTTP POST the same applies with acceptRepresentation(Representation entity) and handlePost().
The doc for handleGet says:
Handles a GET call by automatically returning the best representation available. The content negotiation is automatically supported based on the client's preferences available in the request. This feature can be turned off using the "negotiateContent" property.
and for represent:
Returns a full representation for a given variant previously returned via the getVariants() method. The default implementation directly returns the variant in case the variants are already full representations. In all other cases, you will need to override this method in order to provide your own implementation.
What are the main differences between these two types of implementations? In which case should I prefer one over the other? Is it right that I can achieve with e.g. handleGet() everything that would work with represent()?
I first started using handleGet, setting the entity on the response. When I implemented another project I used represent. Looking back, I can't really say one way is better or clearer than the other. What are your experiences with that?
I recommend using represent(Variant) because then you’ll be leveraging the content negotiation functionality provided by the default implementation of handleGet(Request, Response).
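A minimal sketch of that approach, assuming a Restlet 1.x-style Resource subclass whose constructor has already registered its variants via getVariants() (the payloads are illustrative):

@Override
public Representation represent(Variant variant) throws ResourceException {
    // handleGet() has already negotiated the variant against the client's preferences;
    // all that is left is to build a representation for it
    if (MediaType.TEXT_HTML.equals(variant.getMediaType())) {
        return new StringRepresentation("<p>hello</p>", MediaType.TEXT_HTML);
    }
    return new StringRepresentation("hello", MediaType.TEXT_PLAIN);
}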
BTW, lately I've started using the annotation-based syntax instead of overriding superclass methods, and I like it. I find it clearer, simpler, and more flexible.
For example:
@Post("html")
Representation doSearch(Form form) throws ResourceException {
    // get a field from the form
    String query = form.getFirstValue("query");

    // validate the form - primitive example of course
    if (query == null || query.trim().length() == 0)
        throw new ResourceException(Status.CLIENT_ERROR_BAD_REQUEST, "Query is required.");

    // do something
    SearchResults searchResults = SearchEngine.doSearch(query);

    // return an HTML representation
    return new StringRepresentation(searchResults.asHtmlString(), MediaType.TEXT_HTML);
}
The advantages of this approach include the incoming representation being automatically converted to a useful form, the freedom to name the method whatever makes sense for your application, and the ability to see, just by scanning the class, which methods handle which HTTP methods and for what kinds of representations.