Shapeless. Modify tagged type - scala

For instance I have two tagged types:
trait Created
type CreatedDttm = LocalDateTime @@ Created
type CreatedTs = Timestamp @@ Created
These types are used with the models: the first is for the common model, and the second is for the DB entity.
final case class Entity(created: CreatedDttm) // Common model
final case class EntityRow(created: CreatedTs) // DB model
I have a converter somewhere in my sources:
def toModel(e: EntityRow) = Entity(e.created.toLocalDateTime) // Does not compile
This conversion does not compile because e.created.toLocalDateTime returns a plain LocalDateTime, while Entity needs a LocalDateTime tagged with Created.
So I have to change the conversion to tag[Created](e.created.toLocalDateTime) to make the code compile. It works, but, imho, it looks kind of ugly:
the Timestamp was already tagged with Created, and now the new LocalDateTime must be tagged with the same Created all over again.
Is there any way to modify tagged type without need to retag a new modified value?

Afraid not. The best you can do, I think, is minimize your pain. I would use an implicit class with a method like toCreatedDttm. Something like this:
implicit class LocalDateTimeOps(value: LocalDateTime) {
  def toCreatedDttm: CreatedDttm = tag[Created][LocalDateTime](value)
}
Then you can change your line to:
def toModel(e: EntityRow) = Entity(e.created.toLocalDateTime.toCreatedDttm)
Or perhaps you can have the implicit class operate directly on the Timestamp like this:
implicit class TimestampOps(value: Timestamp) {
  def toCreatedDttm: CreatedDttm = tag[Created][LocalDateTime](value.toLocalDateTime)
}
Then the line could be:
def toModel(e: EntityRow) = Entity(e.created.toCreatedDttm)
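For reference, here is a minimal, self-contained sketch of the whole setup, assuming shapeless 2.x (the wrapper object Example exists only so the snippet compiles on its own):
import java.sql.Timestamp
import java.time.LocalDateTime
import shapeless.tag
import shapeless.tag.@@

object Example {
  trait Created
  type CreatedDttm = LocalDateTime @@ Created
  type CreatedTs = Timestamp @@ Created

  final case class Entity(created: CreatedDttm)  // Common model
  final case class EntityRow(created: CreatedTs) // DB model

  // Value class, so the extension method adds no allocation overhead
  implicit class TimestampOps(val value: Timestamp) extends AnyVal {
    def toCreatedDttm: CreatedDttm = tag[Created][LocalDateTime](value.toLocalDateTime)
  }

  def toModel(e: EntityRow): Entity = Entity(e.created.toCreatedDttm)
}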

Related

Using value classes in scala to implement trait methods?

I have a trait that defines a function whose implementation I don't want to specify until later. This trait is mixed in with several case classes, like so:
trait AnItem
trait DataFormatable {
  def render(): String = "" // dummy implementation
}
case class Person(name: String, age: Int) extends DataFormatable with AnItem
case class Building(numFloors: Int) extends DataFormatable with AnItem
Ok, so now I want includable modules that pimp specific implementations of this render behavior. Trying to use value classes here:
object JSON {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String = {
      ??? // render json
    }
  }
  // others
}
object XML {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String = {
      ??? // render xml
    }
  }
  // others
}
The ideal use would look like this (presuming JSON output desired):
import JSON._
val p: AnItem = Person("John", 24)
println(p.render())
All cool, but it doesn't work. Is there a way I can make this loadable-implementation thing work? Am I close?
The DataFormatable trait is doing nothing here but holding you back. You should just get rid of it. Since you want to swap out render implementations based on the existence of implicits in scope, Person can't have its own render method. The compiler will only look for an implicit conversion to PersonRender if Person doesn't already have a method named render. But because Person inherits (or is forced to implement) render from DataFormatable, there is no need to look for the implicit conversion.
Based on your edit, if you have a List[AnItem], it is also not possible to implicitly convert the elements to have render. While each of the sub-classes may have an implicit conversion that gives it render, the compiler doesn't know that when they are all piled into a list of a more abstract type, particularly an empty trait such as AnItem.
How can you make this work? You have two simple options.
One, if you want to stick with the implicit conversions, you need to remove DataFormatable as the super-type of your case classes, so that they do not have their own render method. Then you can swap out XML._ and JSON._, and the conversions should work. However, you won't be able to use mixed collections.
Two, drop the implicits altogether and have your trait look like this:
trait DataFormatable {
  def toXML: String
  def toJSON: String
}
This way, you force every class that mixes in DataFormatable to contain serialization information (which is the way it should be, rather than hiding it in implicits). Now, when you have a List[DataFormatable], you can prove that all of the elements can be converted to both JSON and XML, so you can convert a mixed list. I think this would be much better overall, as the code should be more straightforward. What imports you have shouldn't really define the behavior of what follows. Imagine the confusion that could arise because XML._ was imported at the top of the file instead of JSON._.
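To make option two concrete, here is a hedged sketch of how the case classes might implement the trait (the hand-rolled JSON/XML strings are purely illustrative):
trait DataFormatable {
  def toXML: String
  def toJSON: String
}
case class Person(name: String, age: Int) extends DataFormatable {
  def toJSON: String = s"""{"name":"$name","age":$age}"""
  def toXML: String = s"<person><name>$name</name><age>$age</age></person>"
}
case class Building(numFloors: Int) extends DataFormatable {
  def toJSON: String = s"""{"numFloors":$numFloors}"""
  def toXML: String = s"<building><numFloors>$numFloors</numFloors></building>"
}
// A mixed collection now works, because every element is statically known to render:
val items: List[DataFormatable] = List(Person("John", 24), Building(3))
items.foreach(item => println(item.toJSON))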

How to map postgresql custom enum column with Slick2.0.1?

I just can't figure it out. What I am using right now is:
abstract class DBEnumString extends Enumeration {
  implicit val enumMapper = MappedJdbcType.base[Value, String](
    _.toString(),
    s => this.withName(s)
  )
}
And then:
object SomeEnum extends DBEnumString {
  type T = Value
  val A1 = Value("A1")
  val A2 = Value("A2")
}
The problem is that during insert/update the JDBC driver for PostgreSQL complains about the parameter type being "character varying" when the column type is "some_enum", which is reasonable, as I am converting SomeEnum to String.
How do I tell Slick to treat String as the DB-defined "enum_type"? Or how do I define some other Scala type that will map to "enum_type"?
I had similar confusion when trying to get my PostgreSQL enums to work with Slick. slick-pg allows you to use Scala enums with your database's enums, and its test suite shows how.
Below is an example.
Say we have this enumerated type in our database.
CREATE TYPE Dog AS ENUM ('Poodle', 'Labrador');
We want to be able to map these to Scala enums, so we can use them happily with Slick. We can do this with slick-pg, an extension for Slick.
First off, we make a Scala version of the above enum.
object Dogs extends Enumeration {
  type Dog = Value
  val Poodle, Labrador = Value
}
To get the extra functionality from slick-pg we extend the normal PostgresDriver and say we want to map our Scala enum to the PostgreSQL one (remember to change the Slick driver in application.conf to the one you've created).
object MyPostgresDriver extends PostgresDriver with PgEnumSupport {
  override val api = new API with MyEnumImplicits {}

  trait MyEnumImplicits {
    implicit val dogTypeMapper = createEnumJdbcType("Dog", Dogs)
    implicit val dogListTypeMapper = createEnumListJdbcType("Dog", Dogs)
    implicit val dogColumnExtensionMethodsBuilder = createEnumColumnExtensionMethodsBuilder(Dogs)
    implicit val dogOptionColumnExtensionMethodsBuilder = createEnumOptionColumnExtensionMethodsBuilder(Dogs)
  }
}
Now when you want to make a new model case class, simply use the corresponding Scala enum.
case class User(favouriteDog: Dog)
And when you do the whole DAO table shenanigans, again you can just use it.
class Users(tag: Tag) extends Table[User](tag, "User") {
  def favouriteDog = column[Dog]("favouriteDog")
  def * = favouriteDog <> (User.apply, User.unapply) // single-column mapping to the User case class
}
Obviously you need the Scala Dog enum in scope wherever you use it.
Due to a bug in Slick, you currently can't dynamically link to a custom Slick driver in application.conf (it should work). This means you either need to run the Play framework with start (and give up dynamic recompiling), or create a standalone sbt project containing just the custom Slick driver and depend on it locally.
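As a hedged usage sketch, assuming the driver and table definitions above (the === comparison compiles because dogTypeMapper gives Slick a column type for Dog):
import MyPostgresDriver.api._
import Dogs._

val users = TableQuery[Users]

// insert a row whose enum column is Poodle
val insertAction = users += User(Poodle)
// query rows by enum value
val poodleFans = users.filter(_.favouriteDog === (Poodle: Dog)).result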

How to design immutable model classes when using inheritance

I'm having trouble finding an elegant way of designing some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
  def addHeader(header: String) = ???
}
class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers)
new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
Thanks,
Rob.
It is annoying, but the solution for implementing the pattern you need is not trivial.
The first point to notice is that if you want to preserve your subclass type, you need to add a type member. Without this, you are not able to specify the otherwise unknown return type in HttpMessage:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String): X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers) {
  type X = HttpRequest
  def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
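With this in place, chaining preserves the concrete type:
val req: HttpRequest = new HttpRequest("/", List("foo")).addHeader("bar") // still an HttpRequest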
A better, more scalable solution is to use an implicit type class for this purpose.
trait HeaderAdder[T <: HttpMessage] {
  def addHeader(httpMessage: T, header: String): T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X =
    headerAdder.addHeader(this.asInstanceOf[X], header) // the cast is needed unless X is tied to this via a self type
}
This latter approach is based on the type-class concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you get a compile-time error.
This is great because it saves you from having to implement addHeader = sys.error("This is not supported")
for certain classes in the hierarchy when it becomes "dirty", or from refactoring to avoid it becoming "dirty".
The best way to manage the implicits is to put them in a trait like the following:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
  implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
and then you also provide an object, in case the user can't mix the trait in (for example, if frameworks investigate the properties of your objects through reflection, you don't want extra properties added to your instances); see the selfless trait pattern: http://www.artima.com/scalazine/articles/selfless_trait_pattern.html
object HeaderAdders extends HeaderAdders
So for example you can write things such as
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object
// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope, but not mixed into MyDomainClass, so reflection from Hibernate will still work correctly
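For completeness, a hedged sketch of what one such instance might look like; it assumes HttpRequest exposes path and headers as vals, which the constructor shown earlier would need to declare:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] =
    new HeaderAdder[HttpRequest] {
      def addHeader(httpMessage: HttpRequest, header: String): HttpRequest =
        new HttpRequest(httpMessage.path, httpMessage.headers :+ header)
    }
}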
By the way, this design problem is the same one the Scala collections face, with the only difference that your HttpMessage plays the role of TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type

How to update a mongo record using Rogue with MongoCaseClassField when case class contains a scala Enumeration

I am upgrading existing code from Rogue 1.1.8 to 2.0.0 and lift-mongodb-record from 2.4-M5 to 2.5.
I'm having difficulty writing a MongoCaseClassField that contains a Scala enum, and I could really use some help with it.
For example,
object MyEnum extends Enumeration {
  type MyEnum = Value
  val A = Value(0)
  val B = Value(1)
}
case class MyCaseClass(name: String, value: MyEnum.MyEnum)
class MyMongo extends MongoRecord[MyMongo] with StringPk[MyMongo] {
  def meta = MyMongo

  class MongoCaseClassFieldWithMyEnum[OwnerType <: net.liftweb.record.Record[OwnerType], CaseType](rec: OwnerType)(implicit mf: Manifest[CaseType])
      extends MongoCaseClassField[OwnerType, CaseType](rec)(mf) {
    override def formats = super.formats + new EnumSerializer(MyEnum)
  }

  object myCaseClass extends MongoCaseClassFieldWithMyEnum[MyMongo, MyCaseClass](this)
  // ...
}
When we try to write to this field:
.and(_.myCaseClass setTo myCaseClass)
we get the following error:
could not find implicit value for evidence parameter of type com.foursquare.rogue.BSONType[MyCaseClass]
We used to have this working in Rogue 1.1.8 by using our own version of MongoCaseClassField, which made the #formats method overridable. But that feature was included in lift-mongodb-record in 2.5-RC6, so we thought this should just work now?
The answer comes from http://grokbase.com/t/gg/rogue-users/1367nscf80/how-to-update-a-record-with-mongocaseclassfield-when-case-class-contains-a-scala-enumeration#20130612woc3x7utvaoacu7tv7lzn4sr2q, reproduced here on Stack Overflow for convenience:
Sorry, I should have chimed in here sooner.
One of the long-standing problems with Rogue was that it was too easy to accidentally make a field that was not serializable as BSON, and have it fail at runtime (when you try to add that value to a DBObject) rather than at compile time.
I introduced the BSONType type class to try to address this. The upside is it catches BSON errors at compile time. The downside is you need to make a choice when it comes to case classes.
If you want to do this the "correct" way, define your case class plus a BSONType "witness" for that case class. To define a BSONType witness, you need to provide serialization from that type to a BSON type. Example:
case class TestCC(v: Int)
implicit object TestCCIsBSONType extends BSONType[TestCC] {
  override def asBSONObject(v: TestCC): AnyRef = {
    // Create a BSON object
    val ret = new BasicBSONObject
    // Serialize all the fields of the case class
    ret.put("v", v.v)
    ret
  }
}
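Applied to the question, an analogous witness for MyCaseClass (here serializing the name field and a string form of the enum; the exact field names are illustrative) should make the failing setTo line compile:
implicit object MyCaseClassIsBSONType extends BSONType[MyCaseClass] {
  override def asBSONObject(v: MyCaseClass): AnyRef = {
    val ret = new BasicBSONObject
    ret.put("name", v.name)
    ret.put("value", v.value.toString) // or v.value.id, depending on how the enum is stored
    ret
  }
}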
That said, this can be quite burdensome if you're doing it for each case class. Your second option is to define a generic witness that works for any case class, if you have a generic serialization scheme:
implicit def CaseClassesAreBSONTypes[CC <: Product]: BSONType[CC] = // Product stands in for "any case class"
  new BSONType[CC] {
    override def asBSONObject(v: CC): AnyRef = {
      ??? // your generic serialization code here, maybe involving formats
    }
  }
Hope this helps,

How does the object in class pattern work, as used in the Lift Framework?

I'm new to Scala and can't get my head around how the Lift guys implemented the Record API. However, the question is less about this API and more about Scala in general. I'm interested in how the object-in-class pattern used in Lift works.
class MainDoc private () extends MongoRecord[MainDoc] with ObjectIdPk[MainDoc] {
  def meta = MainDoc
  object name extends StringField(this, 12)
  object cnt extends IntField(this)
}
object MainDoc extends MainDoc with MongoMetaRecord[MainDoc]
In the snippet above you can see how a record is defined in Lift. The interesting part is that the fields are defined as objects. The API allows you to create instances like this:
val md1 = MainDoc.createRecord
  .name("md1")
  .cnt(5)
  .save
This is probably done by using the apply method? But at the same time you are able to get the values by doing something like this:
val name = md1.name
How does this all work? Are objects not static when defined in the scope of a class? Or are they just constructors for some internal representation? And how is it possible to iterate over all fields; do you use reflection?
Thanks,
Otto
Otto,
You are more or less on the right track. You actually don't need to define your fields as objects; you could have written your example as
class MainDoc private () extends MongoRecord[MainDoc] with ObjectIdPk[MainDoc] {
  def meta = MainDoc
  val name = new StringField(this, 12)
  val cnt = new IntField(this)
}
object MainDoc extends MainDoc with MongoMetaRecord[MainDoc]
The net.liftweb.record.Field trait does contain an apply method that is equivalent to set. That's why you can assign the fields by name after instantiating the object.
The field reference you mentioned:
val name = md1.name
Would type name as a StringField. If what you were thinking was
val name: String = md1.name
that would fail to compile (unless there was an implicit in scope to convert Field[T] => T). The proper way to retrieve the String value of the field would be
val name = md1.name.get
Record does use reflection to gather the fields. When you define an object within a class, the compiler creates a field to hold the object instance. From the standpoint of reflection, the object appears very similar to the alternate way of defining a field that I mentioned before. Each of the object definitions probably creates a subclass of the field type, but that's no different from
val name = new StringField(this, 12) {
  override def label: NodeSeq = <span>My String Field</span>
}
You're right about it being the apply method. Record's Field base class defines a few apply methods.
def apply(in: Box[MyType]): OwnerType
def apply(in: MyType): OwnerType
By returning the OwnerType, you can chain invocations together.
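A stripped-down illustration of the mechanism (not Lift's actual implementation, just the shape of it): the field object's apply mutates the field and returns the enclosing record, which is what makes the builder-style chaining in the question possible.
class Rec {
  object name { // simplified stand-in for a Lift field
    private var v: String = ""
    def apply(in: String): Rec = { v = in; Rec.this } // setter returns the owner record
    def get: String = v
  }
}

val r = new Rec
r.name("md1").name.get // calls chain because apply returns the record; yields "md1"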
Regarding the use of object to define fields, that confused me at first, too. The object identifier defines an object within a particular scope. Even though it's convenient to think of object as a shortcut for the singleton pattern, it's more flexible than that. According to the Scala Language Spec (section 5.4):
It is roughly equivalent to the following definition of a lazy value:
lazy val m = new sc with mt1 with ... with mtn { this: m.type => stats }
<snip/>
The expansion given above is not accurate for top-level objects. It cannot be because variable and method definition cannot appear on the top-level outside of a
package object (§9.3). Instead, top-level objects are translated to static fields.
Regarding iterating over all the fields, Record objects define an allFields method, which returns a List[net.liftweb.record.Field[_, MyType]].
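For example, a hedged sketch of iterating over the fields of the record defined above (name and get are the same Field accessors used earlier in this answer):
val md = MainDoc.createRecord
md.allFields.foreach { field =>
  println(field.name + " = " + field.get)
}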