Using Enumeration fields with Lift and CRUDify

I have thrown together a small Lift application, using CRUDify, to perform basic CRUD operations on some database tables.
Several columns are of type "CHAR (1 byte)", and are intended to store values of "Y" or "N". My model class defines those fields like this example:
...
object isActive extends MappedEnum(this, YesNo) {
  override def dbColumnName = "IS_ACTIVE"
  override def displayName = "Active"
}
...
That type, "YesNo" is a Scala object defined as follows:
object YesNo extends Enumeration {
  val Y, N = Value
}
In the web browser forms that CRUDify auto-generates, columns such as this do show up with "Y" and "N" as the available options. However, when you create or edit a row, what actually gets stored in the database is "1" or "0" (the enumeration value's integer id)!
Clearly, I'm just missing the boat on something here. How can I structure this such that CRUDify will allow users to select from "Y" or "N" in the browser, and store either "Y" or "N" in the database?

Hmm... not a huge Scala / Lift community here on StackOverflow! Actually, it may be that there isn't much of a community for this "CRUDify" sub-component of Lift anywhere.
At any rate, I eventually found an answer (sorta) by subscribing to the "liftweb" Google Groups mailing list. Apparently, this is a known limitation in the CRUDify framework; it has been that way for years, and it is not a limitation that anyone particularly cares about, but it is known.
One developer back in 2009 tried to find a way around this by creating his own custom subclass of MappedField, and using that as the mapped type in his Lift model classes. The 140-line class, along with an email briefly describing it, may be found at:
http://groups.google.com/group/liftweb/browse_frm/thread/34560f30fab299a7/cdca54c8e1486237?pli=1
I am not sure whether it worked 100% back in 2009, but it had a ton of problems when I tried to use it here in 2012 (Scala and Lift have both changed a lot over the past three years).
I invested a small amount of time into trying to make this MappedField subclass work... and then received approval to choose an approach other than CRUDify. Part of the mission for this little app was to learn some things about what to do and what not to do with Lift, and I think we've accomplished that part of the mission now. :)
However, if this research and sample code helps out someone else down the line later, then that would be great.
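For completeness, here is a minimal, untested sketch of one alternative direction (not from the mailing-list thread): skip MappedEnum entirely and persist the flag as a one-character string with Lift's MappedString. This only covers the storage half of the problem; CRUDify would then render a plain text input rather than a Y/N select unless the field's form generation is also overridden.

// Untested sketch: persist "Y"/"N" directly into the CHAR(1) column.
// MappedString(this, 1) maps to a string column of length 1.
object isActive extends MappedString(this, 1) {
  override def dbColumnName = "IS_ACTIVE"
  override def displayName = "Active"
  override def defaultValue = "N"
  // Normalize anything that isn't "Y" to "N" before it is stored.
  override def setFilter = ((s: String) => if (s == "Y") "Y" else "N") :: super.setFilter
}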


How to get an OrderedSet of OccurrenceSpecifications from a Lifeline in QVTo?

From the diagram on page 570 of the UML spec I concluded that a Lifeline should have the events property, holding an OrderedSet(OccurrenceSpecification). Unfortunately it is not there, at least in the QVTo implementation that I use.
All I have is the coveredBy property, serving me with an (unordered) Set(InteractionFragment). Since my transformation relies on the correct order of MessageOccurrenceSpecification elements, I somehow need to implement myself what I expected to be implemented by the missing events property.
This is what I have so far:
helper Lifeline::getEvents (): OrderedSet(OccurrenceSpecification) {
  return self.coveredBy->selectByKind(OccurrenceSpecification)->sortedBy(true);
}
Obviously sortedBy(true) does not get me far, but I don't know how to go any further. Who can help?
All I could find so far were other people struggling with the same issue years ago, but no solution:
https://www.eclipse.org/forums/index.php/m/1049555/
https://www.eclipse.org/forums/index.php/m/1050025/
https://www.eclipse.org/forums/index.php/m/1095764/
Based on the answer by Vincent, combined with input from a colleague of mine, I came up with the following solution for QVTo:
-- -----------------------------------------------------------------------------
-- Polyfill for the missing Lifeline::events property
query Lifeline::getEvents (): OrderedSet(OccurrenceSpecification) {
  return self.interaction.fragment
    ->selectByKind(OccurrenceSpecification)
    ->select(os: OccurrenceSpecification | os.covered->includes(self))
    ->asOrderedSet();
}
I don't know if it is possible to get an ordered collection directly from coveredBy. As coveredBy is unordered, accessing it directly through the feature gives you an unpredictable order, and accessing it via the eGet(...) machinery gives the same result.
However, if I understood correctly, there is a "trick" that could work.
It relies on the assumption that every OccurrenceSpecification instance you need is held by the same Interaction that contains the Lifeline, and it exploits the way EMF stores contained elements. Each contained element is kept in a stable order relative to its parent (per containment collection, so that EMF can find elements again when XMI references are expressed using an element's position in a collection). So the idea is to access all the elements contained by the Interaction that owns the lifeline and keep only the ones that appear in coveredBy.
Expression with Acceleo
This is easy to write in MTL/Acceleo. I know you don't use it, but it illustrates what the expression does:
-- In Acceleo:
-- 'self' is the lifeline instance
self.interaction.eAllContents(OccurrenceSpecification)->select(e | self.coveredBy->includes(e))->asOrderedSet()
With self.interaction we retrieve the Interaction, then we get all the contained elements with eAllContents(...), and we keep the ones that are in the self.coveredBy collection.
But it is less intuitive in QVT, as eAllContents(...) does not exist there. Instead you have to get access to eContents(), which is defined on EObject and returns an EList that gets converted to a Sequence (in QVT, eAllContents() returns a TreeIterator, which the QVT engine does not convert).
So, how do you gain access to eContents() in the helper? There are two solutions:
Solution 1: Using the emf.tools library
The emf.tools library gives you the ability to use asEObject(), which casts your object to a plain EObject and gives you access to more of its methods (eClass(), for example).
import emf.tools; -- we import the EMF tools library
modeltype UML ...; -- all your metamodel imports and such
...
helper Lifeline::getEvents (): OrderedSet(OccurrenceSpecification) {
  return self.interaction.asEObject().eContents()[OccurrenceSpecification]
    ->select(e | self.coveredBy->includes(e))->asOrderedSet();
}
Solution 2: Using oclAsType(...)
If for some reason emf.tools cannot be accessed, you can still cast to an EObject using oclAsType(...).
modeltype UML ...; -- all your metamodel imports and such
modeltype ECORE "strict" uses ecore('http://www.eclipse.org/emf/2002/Ecore'); -- you also register the Ecore metamodel
...
helper Lifeline::getEvents (): OrderedSet(OccurrenceSpecification) {
  return self.interaction.oclAsType(EObject).eContents()[OccurrenceSpecification]
    ->select(e | self.coveredBy->includes(e))->asOrderedSet();
}
Limitation
OK, let's be honest here: this solution seems to work in the quick tests I performed, but I'm not 100% sure that you will get all the elements you want, as this code relies on the strong assumption that every OccurrenceSpecification you need is in the same Interaction as the Lifeline instance. If you are sure that all the coveredBy elements you need are in the Interaction (I think they should be), then it's not the sexiest solution, but it should do the job.
EDIT:
The solution proposed by hielsnoppe is more elegant than the one I presented here and should be preferred.
You are correct. The Lifeline::events property is clearly shown on the diagram and appears in derived models such as UML.merged.uml.
Unfortunately, Eclipse QVTo uses the Ecore projection of the UML metamodel to UML.ecore, where non-navigable opposites are pruned. (A recent UML2Ecore enhancement allows the name to be persisted as an EAnnotation.) However, once the true property name "events" has been pruned, the implicit property name "OccurrenceSpecification" should work.
All associations are navigable in both directions in OCL, so this loss is a bug. (The new Pivot-based Eclipse OCL goes back to the primary UML model in order to avoid UML2Ecore losses. Once Eclipse QVTo migrates to the Pivot OCL you should see the behavior you expected.)

Proper way to modify public interface

Let's assume we have a function that returns a list of apples in our warehouse:
List<Apple> getApples();
After the application had been live for some time, we found a bug: in rare cases clients of this function get food poisoning because some of the apples returned are not ripe yet.
However, another set of clients does not care about ripeness at all; they use this function simply to know about all available apples.
The naive way of solving this problem would be to add a 'ripeness' member to Apple, then find all the places where ripeness can cause problems and add checks:
const auto apples = getApples();
for (const auto& apple : apples)
    if (apple.isRipe())
        consume(apple);
However, if we correlate this new requirement of having ripe apples with the way class interfaces are usually designed, we might find that we need a new interface that is a narrower version of the more generic one:
List<Apple> getRipeApples();
which basically refines getApples() by filtering out the apples that are not ripe.
So the questions are:
Is this the correct way of thinking?
Should the old interface (getApples) remain unchanged?
How will it handle scaling if later on we figure out that some customers are allergic to red/green/yellow apples (getRipeNonRedApples)?
Are there any other alternative ways of modifying the API?
One constraint, though: how do we minimize the probability of an inexperienced or inattentive developer calling getApples instead of getRipeApples? Subclass Apple with a RipeApple? Downcast in getRipeApples?
A pattern often found among Java people is the idea of versioned capabilities.
You have something like:
interface Capability ...
interface AppleDealer {
    List<Apple> getApples();
}
and in order to retrieve an AppleDealer, there is some central service like
public <T> T getCapability (Class<T> type);
So your client code would be doing:
AppleDealer dealer = service.getCapability(AppleDealer.class);
When the need for another method comes up, you go:
interface AppleDealerV2 extends AppleDealer { ...
And clients that want V2 just do a getCapability(AppleDealerV2.class) call. Those that don't care don't have to modify their code!
Please note: of course, this only works for extending interfaces. You can't use this approach either to change signatures or to remove methods from existing interfaces.
Regarding your questions 3 and 4: I go with MaxZoom there, but to be precise, I would very much recommend that "flags" be something like List<String>, List<Integer> (for 'real' int-like flags), or even Map<String, Object>. In other words: if you really don't know what kinds of conditions might come up over time, go for interfaces that work for everything, such as one where you can pass a map with "keys" and "expected values" for the different keys. If you go for pure enums there, you quickly run into similar "versioning" issues.
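To make the map-of-criteria idea concrete, here is a minimal, purely illustrative sketch in Scala; the Apple fields and key names are assumptions, not an API from the answer above:

// Illustrative only: filter by a map of criteria so new conditions can be
// added later without changing the method signature.
final case class Apple(colour: String, ripe: Boolean, sweet: Boolean)

def getApples(all: List[Apple], criteria: Map[String, Any]): List[Apple] =
  all.filter { apple =>
    criteria.forall {
      case ("colour", wanted) => apple.colour == wanted
      case ("ripe",   wanted) => apple.ripe == wanted
      case ("sweet",  wanted) => apple.sweet == wanted
      case _                  => true // unknown keys are ignored, keeping call sites forward-compatible
    }
  }

// Callers state only the conditions they care about:
// getApples(warehouse, Map("ripe" -> true, "colour" -> "red"))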
Alternatively, consider allowing your clients to do the filtering themselves; with Java 8 you can think of Predicates, lambdas and all that stuff.
Example:
Predicate<Apple> applePredicate = new Predicate<Apple>() {
    @Override
    public boolean test(Apple a) {
        return a.getColour() == AppleColor.GoldenPoisonFrogGolden;
    }
};
List<Apple> myApples = dealer.getApples(applePredicate);
IMHO creating a new class/method for every possible Apple combination will result in code pollution. The situation described in your post could be gracefully handled by introducing a flags parameter:
List<Apple> getApples(); // keep for backward compatibility
List<Apple> getApples(FLAGS); // use flag as a filter
Possible flags:
RED_FLAG
GREEN_FLAG
RIPE_FLAG
SWEET_FLAG
So a call like below could be possible:
List<Apple> getApples(RIPE_FLAG | RED_FLAG | SWEET_FLAG);
that will produce a list of apples that are ripe, red, and sweet.

Is it OK to Move Common ViewController Presentation Logic into Presentation Helper Classes

I'm trying to use layers to make sure I separate everything into its correct area in my Swift / iOS / Xcode 6 project. The question, ultimately, is: is it OK / a common practice to move commonly used presentation-level logic into a separate presentation helper class instead of writing it over and over in multiple view controllers with little or no difference?
Here is an example to tie this in and give context:
One of the things I am aiming to do is use a UITableView to display report data. This UITableView will contain 7-10 rows, depending upon the user's preferences (NSUserDefaults). Each row contains a localized string for a label, and some decimal value.
As an example, one row could be "Sales made this week: $500.00".
I have a reporting service class that's responsible for talking to the database and getting back / instantiating a report object. This report object contains the raw data for the report, i.e. how much you made this week, this month, this year, etc. Whether the user wants to show all these values or not is irrelevant to the service - it simply gets everything.
So, since I have 3 view models that use this same report, I thought it would be wrong to rewrite the same code each time: code that checks the user's preferences, then creates and binds an array for the UITableView and matches the labels with the report values from the object returned by the service.
A better way, I thought, was to create a presentation-level helper class whose job would be to take a report object (the thing I mentioned before that contains the report values) and the user's preferences, and then combine them more generically into a list of localized strings matched with their respective report values, agnostic of what each view controller wants. If that requirement changes later (where different view controllers need more customization), I could use flags or different function names within that class.
This way all I have to do is something like
var report = ReportHelper.GenerateReport(reportData, userSettings)
and now report would be an object that could look like this (mock JSON data):
{"Amount made this week":"$100", "Amount made this month":"$500", "Amount made this year": "$10,000"}
And I can use this in any view controller.
The alternative is to just hard-code those values in each view controller (obviously still pulling localized strings), but I don't like the idea of duplicating all those strings + checks based on user preferences + formatting. It seems more elegant to move that away.
Thanks!
I know too little about your problem to provide a definitive answer, but basically you have two options:
- inheritance
- composition
I personally like inheritance, although it is often stated that you should choose composition over inheritance.
Your helper sounds somewhat like composition, so that would be the preferred setup. With your specific problem as I understand it, inheritance would lead to a duplication of data, so that is one more argument for choosing composition.
So, all in all, you seem to be about right.
Edit:
Inheritance is an "is" relationship, whereas composition is a "has" relationship.
See http://en.wikipedia.org/wiki/Object_composition for more details about composition.
And see http://en.wikipedia.org/wiki/Composition_over_inheritance for more info why and when to choose composition

How to create a scala class based on user input?

I have a use case where I need to create a class based on user input.
For example, the user input could be: "(Int,fieldname1) : (String,fieldname2) : .. etc"
Then a class has to be created at runtime, along these lines:
class Some {
  var fieldname1: Int = _
  var fieldname2: String = _
  // ...and so on
}
Is this something that Scala supports? Any help is really appreciated.
Your scenario doesn't seem to make sense. It's not so much an issue of runtime instantiation (the JVM can certainly do this with reflection). Really, what you're asking is to dynamically generate a class, which is only useful if your code makes use of it later on. But how can your code make use of it later on if you don't know what it looks like? For example, how would your later code know which fields it could reference?
No, not really.
The idea of a class is to define a type that can be checked at compile time. You see, creating it at runtime would somewhat contradict that.
You might want to store the user input in a different way, e.g. a map.
What are you trying to achieve by creating a class at runtime?
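To illustrate the map suggestion above, here is a minimal sketch; the input format handled is just the one from the question, and all names are illustrative:

// Parse "(Type,fieldname)" pairs into a fieldName -> typeName map
// instead of generating a class at runtime.
val input = "(Int,fieldname1) : (String,fieldname2)"

val fields: Map[String, String] = input
  .split(":")
  .map(_.trim.stripPrefix("(").stripSuffix(")"))
  .map { entry =>
    val Array(tpe, name) = entry.split(",").map(_.trim)
    name -> tpe
  }
  .toMap
// Map(fieldname1 -> Int, fieldname2 -> String)

// Values can then travel as Map[String, Any] rather than as members of a generated class:
val record: Map[String, Any] = Map("fieldname1" -> 42, "fieldname2" -> "hello")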
I think this makes sense, as long as you are using your "data model" in a generic manner.
Will this approach work here? Depends.
If your data comes from a file that is read at runtime but is already available at compile time, then you're in luck and type safety will be maintained. In fact, you will have two options.
Split your project into two:
- In the first run, read the file and write the new source programmatically (as Strings, or better, with Treehugger).
- In the second run, compile your generated class with the rest of your project and use it normally.
If #1 is too "manual", then use Macro Annotations. The idea here is that the main sub-project's compile time follows the macro sub-project's runtime. Therefore, if we provide the main sub-project with an "empty" class, members can be added to it dynamically at compile time using data that the macro sees at runtime. - To get started, Modify the macro to read from a file in this example
Otherwise, if your data is truly only knowable at runtime, then Rob Starling's suggestion may work for you, as it did for me. I'll share my attempt if you want to be a guinea pig. For debugging, I've got an App.scala in there that shows how to pass strings to a runtime class generator, access the result at runtime with Java reflection, and even define a Scala type alias with it. So the question is: will your new dynamic class serve as a type parameter in Slick, or will it fail to, as it sometimes does with other libraries?
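For readers who want to try the runtime route, here is a minimal, hedged sketch (not the poster's code) of compiling a class definition from a string with the Scala toolbox; it requires scala-compiler on the classpath, and the field list is hard-coded purely for illustration:

// Compile a class definition from a string at runtime and use it reflectively.
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object RuntimeClassDemo extends App {
  val tb = currentMirror.mkToolBox()

  // In a real application this source would be assembled from user input
  // such as "(Int,fieldname1) : (String,fieldname2)".
  val src =
    """
      |case class Generated(fieldname1: Int, fieldname2: String)
      |Generated(42, "hello")
    """.stripMargin

  // eval compiles and runs the snippet; the result is only usable via
  // reflection, because its static type is unknown at compile time.
  val instance = tb.eval(tb.parse(src))
  println(instance)                                                          // Generated(42,hello)
  println(instance.getClass.getDeclaredFields.map(_.getName).mkString(", ")) // fieldname1, fieldname2
}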

What is scala.mobile supposed to accomplish?

...and why does the package have this misleading name (I assumed it had something to do with Java ME or mobile/smart phones)?
I found no references on the internet about scala.mobile.Code or scala.mobile.Location at all, nor did I manage to do anything with those classes except getting ClassCastExceptions or NoSuchMethodErrors.
Actually, there is not even a single test against scala.mobile in Scala's test tree which could help in understanding that code.
The classes really smell like they were forgotten in the source tree a long time ago and have been accidentally shipped ever since.
Maybe I just missed something about them?
Update:
scala.mobile was removed in Scala 2.9.
I just checked the source code. Scala changed the name mangling of class files a few years ago, and it seems people forgot to update these classes accordingly.
So my answer would be:
At least Location has no purpose, because it is not possible to get anything sensible out of it (except exceptions), and Code without Location is severely limited. It does work, though, if you pass the class literal to Code directly:
import scala.mobile._
val c = new Code(classOf[scala.collection.mutable.StringBuilder])
c.apply[StringBuilder, String]("append")("Foo")
c.apply[String]("toString")() // returns "Foo"
c.apply[Int]("length")() // returns 3
Looks like yet another attempt at slightly nicer reflection in the standard library.
The description of Location pretty much explains what that is about:
The class Location provides a create method to instantiate objects from a network location by specifying the URL address of the jar/class file.
It might be used by remote actors. Maybe.
As for why it has this misleading name? Well, back in 2004 smart phones had really low penetration, so maybe the association wasn't all that strong.