I'm new to Scala and Lift, coming from a slightly odd background in PLT Scheme. I've done a quick search on this topic and found lots of questions but no answers. I'm probably looking in the wrong place.
I've been working my way through tutorials on using Mapper to create database-backed objects, and I've hit a stumbling block: what types should be used to store optional attribute values.
For example, a simple ToDo object might comprise a title and an optional deadline (e.g. http://rememberthemilk.com). The former would be a MappedString, but the latter could not be a MappedDateTime since the type constraints on the field require, say, defaultValue to return a Date (rather than a Date or null/false/???).
Is an underlying NULL handled by the MappedField subclasses? Or are there optional equivalents to things like MappedInt, MappedString, MappedDateTime that allow the value to be NULL in the database? Or am I approaching this in the wrong way?
The best place to have Lift questions answered is the Lift group. They aren't into Stack Overflow, but if you do go to their mailing list, they are very receptive and helpful.
David Pollak replied with:
Mapper handles nulls for non-JVM primitives (e.g., String, Date, but not Int, Long, Boolean). You'll get a "null" from the MappedDateTime.is method.
... which is spot on.
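For completeness, here is a rough sketch (field names and lengths are illustrative, and it assumes Lift's standard Mapper traits) of a ToDo with an optional deadline:

import net.liftweb.mapper._

// Sketch only: a ToDo whose deadline column may be NULL in the database.
class ToDo extends LongKeyedMapper[ToDo] with IdPK {
  def getSingleton = ToDo

  object title extends MappedString(this, 140)
  object deadline extends MappedDateTime(this)
}

object ToDo extends ToDo with LongKeyedMetaMapper[ToDo]

// deadline.is returns a java.util.Date or null, so wrap it before using it:
// val maybeDeadline: Option[java.util.Date] = Option(someToDo.deadline.is)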
From what I've seen online, people seem to suggest using the toString() method; however, the documentation states:
Creates a String representation of this object. The default representation is platform dependent. On the java platform it is the concatenation of the class name, "#", and the object's hashcode in hexadecimal.
So it seems like using this method might cause some problems down the line?
There are also mkString and result(), the latter of which seems to make the most sense. But I'm not sure what the differences between these three methods are, or whether that's how result() is supposed to be used.
The toString implementation currently just redirects to the result method anyway, so those two methods will behave in the same way. However, they express slightly different intent:
toString requests a textual representation of StringBuilder's current state that is "concise but informative (and) that is easy for a person to read". So, theoretically, the (vague) specification of this method does not forbid abbreviating the result, or enhancing conciseness and readability in any other way.
result requests the actual constructed string. No different readings seem possible here.
Therefore, if you want to obtain the resulting string, use result to express your intent as clearly as possible.
In this way, the reader of your code won't have to wonder whether StringBuilder.toString might shorten something for the sake of "conciseness" when the string gets over 9000 kB long, or something like that.
mkString is for something else entirely; it's mostly used for interspersing separators, as in "hello".mkString(",") == "h,e,l,l,o".
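A quick sketch to see the three side by side:

val sb = new StringBuilder
sb.append("hello")

sb.result()      // "hello" -- the constructed string
sb.toString      // "hello" -- currently delegates to result
sb.mkString(",") // "h,e,l,l,o" -- StringBuilder is a sequence of Chars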
Some further points:
The paragraph with "hashcode in hexadecimal" describes the default. It is just documentation inherited from AnyRef, because the creator of StringBuilder didn't bother to provide more detailed documentation.
If you look into the code, you'll see that toString is actually just delegating to result.
The documentation of StringBuilder also mentions result() in the introductory overview paragraph.
Just use result().
TL;DR: use result, as stated in the docs.
toString MUST never be called for any purpose other than a quick debug.
mkString is inherited from the collections hierarchy; it will basically create another StringBuilder, so it is very inefficient.
This is probably a very silly question, but I have a case class which takes as a parameter Option[Timestamp]. The reason this is necessary is because sometimes the timestamp isn't included. However, for testing purposes I'm making an object where I pass in
Timestamp.valueOf("2016-01-27 22:27:32.596150")
But, it seems I can't do this as this is an actual Timestamp, and it's expecting an Option.
How do I convert a Timestamp to Option[Timestamp]? Further, why does this cause a problem to begin with? Isn't the whole benefit of Option that it could be there or not?
Thanks in advance,
Option indicates the possibility of a missing value, but you still need to construct an Option[Timestamp] value. There are two subtypes for Option - None when there is no value, and Some[T] which contains a value of type T.
You can create one directly using Some:
Some(Timestamp.valueOf("2016-01-27 22:27:32.596150"))
or Option.apply:
Option(Timestamp.valueOf("2016-01-27 22:27:32.596150"))
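Note that Option(...) also turns a null into None, which is handy when the value comes from a Java API. A minimal sketch, using a hypothetical Event case class:

import java.sql.Timestamp

case class Event(name: String, createdAt: Option[Timestamp]) // hypothetical example class

val withTime    = Event("demo", Some(Timestamp.valueOf("2016-01-27 22:27:32.596150")))
val withoutTime = Event("demo", None)

// Option.apply guards against null:
val maybeNull: Timestamp = null
val safe: Option[Timestamp] = Option(maybeNull) // None rather than Some(null)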
I've been reading a lot of other people's Scala code recently, and one of the things that I have difficulty with (coming from Java) is the lack of explicit type annotations.
It's certainly convenient when writing code to be able to leave out type annotations -- however when reading code I often find that explicit type annotations help me to understand at a glance what code is doing more easily.
The Scala style guide (http://docs.scala-lang.org/style/types.html) doesn't seem to provide any definitive guidance on this, stating:
Use type inference where possible, but put clarity first, and favour explicitness in public APIs.
To my mind, this is a bit contradictory. While it's clearly obvious what type this variable is:
val tokens = new HashMap[String, Int]
It's not so obvious what type this one is:
val tokens = readTokens()
So, if I was putting clarity first I would probably annotate all variables where the type is not already declared on the same line.
Do any Scala practitioners have guidance on this? Am I crazy to be considering adding type annotations to my local variables? I'm particularly interested in hearing from folks who spend a lot of time reading Scala code (for example, in code reviews), as well as writing it.
It's not so obvious what type this one is:
val tokens = readTokens()
Good names are important: the name is plural, ergo it returns some collection of some kind. The most general collection types in Scala are Traversable and Iterator, and they mostly share a common interface, so it's not really important which one of the two it is. The name also talks about "reading tokens", ergo it obviously should return Tokens in some fashion. And last but not least, the method call has parentheses, which according to the style guide means it has side-effects, so I wouldn't count on being able to traverse the collection more than once.
Ergo, the return type is something like
Traversable[Token]
or
Iterator[Token]
and which of the two it is doesn't really matter because their client interfaces are mostly identical.
Note also that the latter constraint (only traversing the collection once) isn't even captured in the type; even if you provided an explicit type, you would still have to look at the name and the style!
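For illustration, a small sketch (Token and the readTokens() signature are hypothetical) of what the explicit annotation would look like at the call site:

case class Token(text: String)

def readTokens(): Iterator[Token] =
  Iterator(Token("val"), Token("x"), Token("="))

// The annotation repeats what the name and signature already suggest:
val tokens: Iterator[Token] = readTokens()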
I have been looking around, but most of the answers point to a Java TreeMap. The only issue with that is I do not want to convert any Scala into Java and back. If there really is no way, then I am OK with that, but I would like to hear it from some professionals just to be 100% sure and to have this question on here for others in the future to stumble upon when they have a similar issue. Thanks in advance.
EDIT:
Type: scala.collection.mutable.HashMap[String, String]
In general, a Scala HashMap does not guarantee the original order.
However, there is the LinkedHashMap, which states: "The iterator and all traversal methods of this class visit elements in the order they were inserted."
What is the exact type you are dealing with? If you can decide which implementation to use, then you can choose one that maintains order. If you are just given something of type HashMap, then you're out of luck.
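For example, a small sketch showing LinkedHashMap keeping insertion order:

import scala.collection.mutable

val m = mutable.LinkedHashMap.empty[String, String]
m += ("first" -> "1")
m += ("second" -> "2")
m += ("third" -> "3")

m.keys.toList // List("first", "second", "third") -- insertion order preserved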
At my work we use a typical heavy enterprise stack of Hibernate, Spring, and JSF to handle our application, but after learning Scala I've wanted to try to replicate much of our functionality within a more minimal Scala stack (Squeryl, Scalatra, Scalate) to see if I can decrease code and improve performance (an Achilles heel for us right now).
Often my way of doing things is influenced by our previous stack, so I'm open to advice on a way of doing things that is closer to Scala paradigms. However, I've chosen some of what I do based on previous paradigms we have in the Java code base so that other team members will hopefully be more receptive to the work I'm doing. But here is my question:
We have a domain class like so:
class Person(var firstName: String, var lastName: String)
Within a jade template I make a call like:
.section
- view(fields)
The backing class has a list of fields like so:
class PersonBean(val person: Person) {
val fields: Fields = Fields(person,
List(
Text(person.firstName),
Text(person.lastName)
))
}
Fields has a base object (person) and a list of Field objects. Its template prints all of its fields' templates. Text extends Field and its Jade template is supposed to print:
<label for="person:firstName">#{label}</label>: <input type="text" id="person:firstName" value="#{value}" />
Now the #{value} is simply a call to person.firstName. However, to find out the label I reference a ResourceBundle and need to produce a string key. I was thinking of using a naming convention like:
person.firstName.field=First Name
So the problem then becomes: how can I, within the Text class (or parent Field class), discover what the parameter being passed in is? Is there a way I can pass in person.firstName and find that it is calling firstName on class Person? And finally, am I going about this completely wrong?
If you want to take a walk on the wild side, there's a (hidden) API in Scala that allows you to grab the syntax tree for a thunk of code - at runtime.
This incantation goes something like:
scala.reflect.Code.lift(f).tree
This should contain all the information you need, and then some, but you'll have your work cut out interpreting the output.
You can also read a bit more on the subject here: Can I get AST from live scala code?
Be warned though... It's rightly classified as experimental, do this at your own risk!
You can never do this from within Java either, so I'm not wholly clear how you are just following the idiom you are used to. The obvious reason that this is not possible is that Java is pass-by-value. So in:
public void foo(String s) { ... }
There is no sense that the parameter s is anything other than what it is. It is not person.firstName just because you called foo like:
foo(person.firstName);
Because person.firstName and s are completely separate references!
What you could do is replace the fields (e.g. firstName) with actual objects which have a name attribute.
I did something similar in a recent blog post: http://blog.schauderhaft.de/2011/05/01/binding-scala-objects-to-swing-components/
The property doesn't have a name attribute (yet), but it is a full object and is still just as easy to use as a field.
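To make the idea concrete, here is a rough sketch (not the blog post's actual code) of a property object that carries its own name:

// Hypothetical Property wrapper; the real blog-post implementation differs.
class Property[T](val name: String, private var current: T) {
  def apply(): T = current                      // read:  person.firstName()
  def update(value: T): Unit = current = value  // write: person.firstName() = newValue
}

class Person(first: String, last: String) {
  val firstName = new Property("firstName", first)
  val lastName  = new Property("lastName", last)
}

// A Text field can then build its resource-bundle key from the name:
// s"person.${person.firstName.name}.field"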
I would not be very surprised if the following is complete nonsense:
Make the type of the parameter that gets passed in not A but Context[A]
create an implicit that turns any A into a Context[A] and while doing so captures the value of the parameter in a call-by-name parameter
then use reflection to inspect the call-by-name parameter that gets passed in
For this to work, you'd need very specific knowledge of how stuff gets turned into call-by-name functions; and how to extract the information you want (if it's present at all).