How to write a generic function with the Scala Quill.io library

I am trying to implement a generic method in Scala that operates on a database using the Quill.io library. The type T will only ever be a case class that works with Quill.io.
def insertOrUpdate[T](inserting: T, equality: (T, T) => Boolean)(implicit ctx: Db.Context): Unit = {
  import ctx._
  val existingQuery = quote {
    query[T].filter { dbElement: T =>
      equality(dbElement, inserting)
    }
  }
  val updateQuery = quote {
    query[T].filter { dbElement =>
      equality(dbElement, lift(inserting))
    }.update(lift(inserting))
  }
  val insertQuery = quote { query[T].insert(lift(inserting)) }
  val existing = ctx.run(existingQuery)
  existing.size match {
    case 1 => ctx.run(updateQuery)
    case _ => ctx.run(insertQuery)
  }
}
But I am getting two kinds of compile errors:
Error:(119, 12) Can't find an implicit `SchemaMeta` for type `T`
query[T].filter { dbElement: T =>
Error:(125, 33) Can't find Encoder for type 'T'
equality(dbElement, lift(inserting))
How can I modify my code to make it work?

As I said in the issue that @VojtechLetal mentioned in his answer, you have to use macros.
I added code implementing a generic insert or update to my example Quill project.
It defines a trait Queries that is mixed into the context:
trait Queries {
  this: JdbcContext[_, _] =>

  def insertOrUpdate[T](entity: T, filter: (T) => Boolean): Unit = macro InsertOrUpdateMacro.insertOrUpdate[T]
}
This trait uses a macro that implements your code with minor changes:
import scala.reflect.macros.whitebox.{Context => MacroContext}

class InsertOrUpdateMacro(val c: MacroContext) {
  import c.universe._

  def insertOrUpdate[T](entity: Tree, filter: Tree)(implicit t: WeakTypeTag[T]): Tree =
    q"""
      import ${c.prefix}._
      val updateQuery = ${c.prefix}.quote {
        ${c.prefix}.query[$t].filter($filter).update(lift($entity))
      }
      val insertQuery = quote {
        query[$t].insert(lift($entity))
      }
      run(${c.prefix}.query[$t].filter($filter)).size match {
        case 1 => run(updateQuery)
        case _ => run(insertQuery)
      }
      ()
    """
}
Usage examples:
import io.getquill.{PostgresJdbcContext, SnakeCase}

package object genericInsertOrUpdate {
  val ctx = new PostgresJdbcContext[SnakeCase]("jdbc.postgres") with Queries

  def example1(): Unit = {
    val inserting = Person(1, "")
    ctx.insertOrUpdate(inserting, (p: Person) => p.name == "")
  }

  def example2(): Unit = {
    import ctx._
    val inserting = Person(1, "")
    ctx.insertOrUpdate(inserting, (p: Person) => p.name == lift(inserting.name))
  }
}
P.S. Because update() returns the number of updated records, your code can be simplified to:
class InsertOrUpdateMacro(val c: MacroContext) {
  import c.universe._

  def insertOrUpdate[T](entity: Tree, filter: Tree)(implicit t: WeakTypeTag[T]): Tree =
    q"""
      import ${c.prefix}._
      if (run(${c.prefix}.quote {
        ${c.prefix}.query[$t].filter($filter).update(lift($entity))
      }) == 0) {
        run(quote {
          query[$t].insert(lift($entity))
        })
      }
      ()
    """
}

As one of the Quill contributors said in this issue:
If you want to make your solution generic then you have to use macros, because Quill generates queries at compile time and the type T has to be resolved at that time.
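For contrast, a concrete, non-generic version compiles fine, because the entity type is fixed at the call site. Here is a minimal sketch, assuming a Person case class and the Db.Context from the question:

case class Person(id: Long, name: String)

def insertOrUpdatePerson(inserting: Person)(implicit ctx: Db.Context): Unit = {
  import ctx._
  // T is Person here, so Quill can derive SchemaMeta and Encoder at compile time
  val existing = run(quote {
    query[Person].filter(p => p.id == lift(inserting.id))
  })
  if (existing.size == 1)
    run(quote {
      query[Person].filter(p => p.id == lift(inserting.id)).update(lift(inserting))
    })
  else
    run(quote {
      query[Person].insert(lift(inserting))
    })
}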
TL;DR: The following did not work either; I was just experimenting.
Anyway, just out of curiosity, I tried to fix the issue by following the errors you mentioned. I changed the definition of the function to:
def insertOrUpdate[T: ctx.Encoder : ctx.SchemaMeta](...)
which yielded the following log:
[info] PopulateAnomalyResultsTable.scala:71: Dynamic query
[info] case _ => ctx.run(insertQuery)
[info]
[error] PopulateAnomalyResultsTable.scala:68: exception during macro expansion:
[error] scala.reflect.macros.TypecheckException: Found the embedded 'T', but it is not a case class
[error] at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:34)
[error] at scala.reflect.macros.contexts.Typers$$anonfun$typecheck$2$$anonfun$apply$1.apply(Typers.scala:28)
It starts promisingly, since Quill apparently gave up on static compilation and made the query dynamic. I checked the source code of the failing macro, and it seems that Quill is trying to get a constructor for T, which is not known in the current context.

For more details see my answer to Generic macro with quill, or the CrudMacro implementation below.
You will find the complete project at quill-generic.
package pl.jozwik.quillgeneric.quillmacro

import scala.reflect.macros.whitebox.{ Context => MacroContext }

class CrudMacro(val c: MacroContext) extends AbstractCrudMacro {

  import c.universe._

  def callFilterOnIdTree[K: c.WeakTypeTag](id: Tree)(dSchema: c.Expr[_]): Tree =
    callFilterOnId[K](c.Expr[K](q"$id"))(dSchema)

  protected def callFilterOnId[K: c.WeakTypeTag](id: c.Expr[K])(dSchema: c.Expr[_]): Tree = {
    val t = weakTypeOf[K]
    t.baseClasses.find(c => compositeSet.contains(c.asClass.fullName)) match {
      case None =>
        q"$dSchema.filter(_.id == lift($id))"
      case Some(base) =>
        val query = q"$dSchema.filter(_.id.fk1 == lift($id.fk1)).filter(_.id.fk2 == lift($id.fk2))"
        base.fullName match {
          case `compositeKey4Name` =>
            q"$query.filter(_.id.fk3 == lift($id.fk3)).filter(_.id.fk4 == lift($id.fk4))"
          case `compositeKey3Name` =>
            q"$query.filter(_.id.fk3 == lift($id.fk3))"
          case `compositeKey2Name` =>
            query
          case x =>
            c.abort(NoPosition, s"$x not supported")
        }
    }
  }

  def createAndGenerateIdOrUpdate[K: c.WeakTypeTag, T: c.WeakTypeTag](entity: Tree)(dSchema: c.Expr[_]): Tree = {
    val filter = callFilter[K, T](entity)(dSchema)
    q"""
      import ${c.prefix}._
      val id = $entity.id
      val q = $filter
      val result = run(
        q.updateValue($entity)
      )
      if (result == 0) {
        run($dSchema.insertValue($entity).returningGenerated(_.id))
      } else {
        id
      }
    """
  }

  def createWithGenerateIdOrUpdateAndRead[K: c.WeakTypeTag, T: c.WeakTypeTag](entity: Tree)(dSchema: c.Expr[_]): Tree = {
    val filter = callFilter[K, T](entity)(dSchema)
    q"""
      import ${c.prefix}._
      val id = $entity.id
      val q = $filter
      val result = run(
        q.updateValue($entity)
      )
      val newId =
        if (result == 0) {
          run($dSchema.insertValue($entity).returningGenerated(_.id))
        } else {
          id
        }
      run($dSchema.filter(_.id == lift(newId)))
        .headOption
        .getOrElse(throw new NoSuchElementException(s"$$newId"))
    """
  }
}

Related

Cake pattern for a deep function call chain (threading a dependency along as an extra parameter)

Question on a simple example:
Assume we have:
1) three functions: f(d: Dependency), g(d: Dependency), h(d: Dependency), and
2) the following call graph: f calls g, g calls h.
QUESTION: In h I would like to use the d passed to f. Is there a way to use the cake pattern to get access to d from h? If yes, how?
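To make the plumbing concrete, here is a minimal sketch of what I mean (Dependency is a stand-in type):

case class Dependency(name: String)

def h(d: Dependency): Unit = println(s"h finally uses ${d.name}")
def g(d: Dependency): Unit = h(d) // d is only passed through
def f(d: Dependency): Unit = g(d) // d is only passed through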
Question on a real-world example:
In the code below I need to manually thread the Handler parameter from
// TOP LEVEL, INJECTION POINT
to
// USAGE OF Handler
Is it possible to use the cake pattern instead of the manual threading? If yes, how?
package Demo

import interop.wrappers.EditableText.EditableText
import interop.wrappers.react_sortable_hoc.SortableContainer.Props
import japgolly.scalajs.react.ReactComponentC.ReqProps
import japgolly.scalajs.react._
import japgolly.scalajs.react.vdom.prefix_<^._
import interop.wrappers.react_sortable_hoc.{SortableContainer, SortableElement}

object EditableSortableListDemo {
  trait Action
  type Handler = Action => Unit

  object CompWithState {
    case class UpdateElement(s: String) extends Action

    class Backend($: BackendScope[Unit, String], r: Handler) {
      def handler(s: String): Unit = {
        println("handler:" + s)
        $.setState(s)
        // USAGE OF Handler <<<<<=======================================
      }

      def render(s: String) = {
        println("state:" + s)
        <.div(<.span(s), EditableText(s, handler _)())
      }
    }

    val Component = (f: Handler) => (s: String) =>
      ReactComponentB[Unit]("EditableText with state").initialState(s).backend(new Backend(_, f))
        .renderBackend.build
  }

  // Equivalent of ({value}) => <li>{value}</li> in the original demo
  val itemView: Handler => ReqProps[String, Unit, Unit, TopNode] = (f: Handler) =>
    ReactComponentB[String]("liView")
      .render(d => {
        <.div(
          <.span(s"uhh ${d.props}"),
          CompWithState.Component(f)("vazzeg:" + d.props)()
        )
      })
      .build

  // As in the original demo
  val sortableItem = (f: Handler) => SortableElement.wrap(itemView(f))

  val listView = (f: Handler) => ReactComponentB[List[String]]("listView")
    .render(d => {
      <.div(
        d.props.zipWithIndex.map {
          case (value, index) =>
            sortableItem(f)(SortableElement.Props(index = index))(value)
        }
      )
    })
    .build

  // As in the original demo
  val sortableList: (Handler) => (Props) => (List[String]) => ReactComponentU_ =
    (f: Handler) => SortableContainer.wrap(listView(f))

  // As in the original SortableComponent
  class Backend(scope: BackendScope[Unit, List[String]]) {
    def render(props: Unit, items: List[String]) = {
      def handler = ???
      // TOP LEVEL, INJECTION POINT <<<<<<========================================
      sortableList(handler)(
        SortableContainer.Props(
          onSortEnd = p => scope.modState(l => p.updatedList(l)),
          useDragHandle = false,
          helperClass = "react-sortable-handler"
        )
      )(items)
    }
  }

  val defaultItems = Range(0, 10).map("Item " + _).toList

  val c = ReactComponentB[Unit]("SortableContainerDemo")
    .initialState(defaultItems)
    .backend(new Backend(_))
    .render(s => s.backend.render(s.props, s.state))
    .build
}

Slick codegen & tables with > 22 columns

I'm new to Slick. I'm creating a test suite for a Java application with Scala, ScalaTest and Slick. I'm using Slick to prepare data before the test and to do assertions on the data after the test. The database used has some tables with more than 22 columns. I use slick-codegen to generate my schema code.
For tables with more than 22 columns, slick-codegen does not generate a case class, but an HList-based custom type and a companion 'constructor' method. As I understand it, this is because of the limitation that tuples and case classes can only have 22 fields. The way the code is generated, the fields of a Row object can only be accessed by index.
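To make that concrete, here is roughly what client code against such a Row type ends up looking like (a sketch; note that Slick really does spell the package 'heterogenous'):

import slick.collection.heterogenous._
import slick.collection.heterogenous.syntax._

// A 3-element stand-in for a generated >22-column Row type:
val row = 1L :: "Alice" :: Option(30) :: HNil

// No field names; elements come back as Any and must be cast:
val name = row(1).asInstanceOf[String]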
I have a couple of questions about this:
From what I understand, the 22-field restriction for case classes was already lifted in Scala 2.11, right?
If that's the case, would it be possible to customize slick-codegen to generate case classes for all tables? I looked into this: I managed to set override def hlistEnabled = false in an overridden SourceCodeGenerator. But this results in Cannot generate tuple for > 22 columns, please set hlistEnable=true or override compound. So I don't get the point of being able to disable HList. Maybe the catch is in the 'or override compound' part, but I don't understand what that means.
Searching the internet for Slick and 22 columns, I came across some solutions based on nested tuples. Would it be possible to customize the codegen to use this approach?
If generating code with case classes with more than 22 fields is not a viable option, I think it would be possible to generate an ordinary class which has an 'accessor' function for each column, thus providing a 'mapping' from index-based access to name-based access. I'd be happy to implement the generation for this myself, but I think I need some pointers on where to start. It should be possible to override the standard codegen for this; I already use an overridden SourceCodeGenerator for some custom data types. But apart from that use case, the documentation of the code generator does not help me much.
I would really appreciate some help here. Thanks in advance!
I ended up further customizing slick-codegen. First, I'll answer my own questions, then I'll post my solution.
Answers to questions
The 22-arity limit might be lifted for case classes, but it is not lifted for tuples. And slick-codegen also generates some tuples, which I was not fully aware of when I asked the question.
Not relevant, see answer 1. (This might become relevant if the 22-arity limit gets lifted for tuples as well.)
I chose not to investigate this further, so this question remains unanswered for now.
This is the approach I took, eventually.
Solution: the generated code
So, I ended up generating "ordinary" classes for tables with more than 22 columns. Let me give an example of what I generate now. (Generator code follows below.) (This example has fewer than 22 columns, for brevity and readability.)
case class BigAssTableRow(id: Long, name: String, age: Option[Int] = None)

type BigAssTableRowList = HCons[Long, HCons[String, HCons[Option[Int], HNil]]]

object BigAssTableRow {
  def apply(hList: BigAssTableRowList) = new BigAssTableRow(hList.head, hList.tail.head, hList.tail.tail.head)
  def unapply(row: BigAssTableRow) = Some(row.id :: row.name :: row.age :: HNil)
}

implicit def GetResultBigAssTableRow(implicit e0: GR[Long], e1: GR[String], e2: GR[Option[Int]]) = GR {
  prs => import prs._
    BigAssTableRow.apply(<<[Long] :: <<[String] :: <<?[Int] :: HNil)
}

class BigAssTable(_tableTag: Tag) extends Table[BigAssTableRow](_tableTag, "big_ass") {
  def * = id :: name :: age :: HNil <> (BigAssTableRow.apply, BigAssTableRow.unapply)

  val id: Rep[Long] = column[Long]("id", O.PrimaryKey)
  val name: Rep[String] = column[String]("name", O.Length(255, varying = true))
  val age: Rep[Option[Int]] = column[Option[Int]]("age", O.Default(None))
}

lazy val BigAssTable = new TableQuery(tag => new BigAssTable(tag))
The hardest part was finding out how the * mapping works in Slick. There's not much documentation, but I found this Stack Overflow answer rather enlightening.
I created the BigAssTableRow object to make the use of HList transparent to the client code. Note that the apply function in the object overloads the apply from the case class, so I can still create entities by calling BigAssTableRow(id = 1L, name = "Foo"), while the * projection uses the apply function that takes an HList.
So, I can now do things like this:
// I left out the driver import as well as the scala.concurrent imports
// for the execution context.
val collection = TableQuery[BigAssTable]
val row = BigAssTableRow(id = 1L, name = "Qwerty") // Note that I leave out the optional age

Await.result(db.run(collection += row), Duration.Inf)
Await.result(db.run(collection.filter(_.id === 1L).result), Duration.Inf)
For this code it's totally transparent whether tuples or HLists are used under the hood.
Solution: how this is generated
I'll just post my entire generator code here. It's not perfect; please let me know if you have suggestions for improvement! Huge parts are just copied from slick.codegen.AbstractSourceCodeGenerator and related classes and then slightly changed. There are also some things that are not directly related to this question, such as the addition of the java.time.* data types and the filtering of specific tables. I left them in because they might be of use. Also note that this example is for a Postgres database.
import slick.codegen.SourceCodeGenerator
import slick.driver.{JdbcProfile, PostgresDriver}
import slick.jdbc.meta.MTable
import slick.model.Column

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object MySlickCodeGenerator {
  val slickDriver = "slick.driver.PostgresDriver"
  val jdbcDriver = "org.postgresql.Driver"
  val url = "jdbc:postgresql://localhost:5432/dbname"
  val outputFolder = "/path/to/project/src/test/scala"
  val pkg = "my.package"
  val user = "user"
  val password = "password"

  val driver: JdbcProfile = Class.forName(slickDriver + "$").getField("MODULE$").get(null).asInstanceOf[JdbcProfile]
  val dbFactory = driver.api.Database
  val db = dbFactory.forURL(url, driver = jdbcDriver, user = user, password = password, keepAliveConnection = true)

  // The schema is generated using Liquibase, which creates these tables that I don't want to use
  def excludedTables = Array("databasechangelog", "databasechangeloglock")

  def tableFilter(table: MTable): Boolean = {
    !excludedTables.contains(table.name.name) && schemaFilter(table.name.schema)
  }

  // There's also an 'audit' schema in the database, I don't want to use that one
  def schemaFilter(schema: Option[String]): Boolean = {
    schema match {
      case Some("public") => true
      case None => true
      case _ => false
    }
  }

  // Fetch data model
  val modelAction = PostgresDriver.defaultTables
    .map(_.filter(tableFilter))
    .flatMap(PostgresDriver.createModelBuilder(_, ignoreInvalidDefaults = false).buildModel)
  val modelFuture = db.run(modelAction)

  // Customize the code generator
  val codegenFuture = modelFuture.map(model => new SourceCodeGenerator(model) {
    // add a custom import for the added data types
    override def code = "import my.package.Java8DateTypes._" + "\n" + super.code

    override def Table = new Table(_) {
      table =>

      // Use different factory and extractor functions for tables with > 22 columns
      override def factory =
        if (columns.size == 1) TableClass.elementType
        else if (columns.size <= 22) s"${TableClass.elementType}.tupled"
        else s"${EntityType.name}.apply"

      override def extractor =
        if (columns.size <= 22) s"${TableClass.elementType}.unapply"
        else s"${EntityType.name}.unapply"

      override def EntityType = new EntityTypeDef {
        override def code = {
          val args = columns.map(c =>
            c.default.map(v =>
              s"${c.name}: ${c.exposedType} = $v"
            ).getOrElse(
              s"${c.name}: ${c.exposedType}"
            )
          )
          val callArgs = columns.map(c => s"${c.name}")
          val types = columns.map(c => c.exposedType)
          if (classEnabled) {
            val prns = (parents.take(1).map(" extends " + _) ++ parents.drop(1).map(" with " + _)).mkString("")
            s"""case class $name(${args.mkString(", ")})$prns"""
          } else {
            s"""
/** Constructor for $name providing default values if available in the database schema. */
case class $name(${args.map(arg => s"val $arg").mkString(", ")})
type ${name}List = ${compoundType(types)}
object $name {
def apply(hList: ${name}List): $name = new $name(${callArgs.zipWithIndex.map(pair => s"hList${tails(pair._2)}.head").mkString(", ")})
def unapply(row: $name) = Some(${compoundValue(callArgs.map(a => s"row.$a"))})
}
""".trim
          }
        }
      }

      override def PlainSqlMapper = new PlainSqlMapperDef {
        override def code = {
          val positional = compoundValue(columnsPositional.map(c => if (c.fakeNullable || c.model.nullable) s"<<?[${c.rawType}]" else s"<<[${c.rawType}]"))
          val dependencies = columns.map(_.exposedType).distinct.zipWithIndex.map { case (t, i) => s"""e$i: GR[$t]""" }.mkString(", ")
          val rearranged = compoundValue(desiredColumnOrder.map(i => if (columns.size > 22) s"r($i)" else tuple(i)))
          def result(args: String) = s"$factory($args)"
          val body =
            if (autoIncLastAsOption && columns.size > 1) {
              s"""
val r = $positional
import r._
${result(rearranged)} // putting AutoInc last
""".trim
            } else {
              result(positional)
            }
          s"""
implicit def $name(implicit $dependencies): GR[${TableClass.elementType}] = GR{
prs => import prs._
${indent(body)}
}
""".trim
        }
      }

      override def TableClass = new TableClassDef {
        override def star = {
          val struct = compoundValue(columns.map(c => if (c.fakeNullable) s"Rep.Some(${c.name})" else s"${c.name}"))
          val rhs = s"$struct <> ($factory, $extractor)"
          s"def * = $rhs"
        }
      }

      def tails(n: Int) = {
        List.fill(n)(".tail").mkString("")
      }

      // Override the column generator to add additional types
      override def Column = new Column(_) {
        override def rawType = {
          typeMapper(model).getOrElse(super.rawType)
        }
      }
    }
  })

  def typeMapper(column: Column): Option[String] = {
    column.tpe match {
      case "java.sql.Date" => Some("java.time.LocalDate")
      case "java.sql.Timestamp" => Some("java.time.LocalDateTime")
      case _ => None
    }
  }

  def doCodeGen() = {
    val generator = Await.result(codegenFuture, Duration.Inf)
    generator.writeToFile(slickDriver, outputFolder, pkg, "Tables", "Tables.scala")
  }

  def main(args: Array[String]): Unit = {
    doCodeGen()
    db.close()
  }
}
Update 2019-02-15: with the release of Slick 3.3.0, as answered by @Marcus, there's built-in support for code generation for tables with > 22 columns.
As of Slick 3.2.0, the simplest solution for a > 22 param case class is to define the default projection in the * method using mapTo instead of the <> operator (per the documented unit test):
case class BigCase(id: Int,
                   p1i1: Int, p1i2: Int, p1i3: Int, p1i4: Int, p1i5: Int, p1i6: Int,
                   p2i1: Int, p2i2: Int, p2i3: Int, p2i4: Int, p2i5: Int, p2i6: Int,
                   p3i1: Int, p3i2: Int, p3i3: Int, p3i4: Int, p3i5: Int, p3i6: Int,
                   p4i1: Int, p4i2: Int, p4i3: Int, p4i4: Int, p4i5: Int, p4i6: Int)

class bigCaseTable(tag: Tag) extends Table[BigCase](tag, "t_wide") {
  def id = column[Int]("id", O.PrimaryKey)

  def p1i1 = column[Int]("p1i1")
  def p1i2 = column[Int]("p1i2")
  def p1i3 = column[Int]("p1i3")
  def p1i4 = column[Int]("p1i4")
  def p1i5 = column[Int]("p1i5")
  def p1i6 = column[Int]("p1i6")

  def p2i1 = column[Int]("p2i1")
  def p2i2 = column[Int]("p2i2")
  def p2i3 = column[Int]("p2i3")
  def p2i4 = column[Int]("p2i4")
  def p2i5 = column[Int]("p2i5")
  def p2i6 = column[Int]("p2i6")

  def p3i1 = column[Int]("p3i1")
  def p3i2 = column[Int]("p3i2")
  def p3i3 = column[Int]("p3i3")
  def p3i4 = column[Int]("p3i4")
  def p3i5 = column[Int]("p3i5")
  def p3i6 = column[Int]("p3i6")

  def p4i1 = column[Int]("p4i1")
  def p4i2 = column[Int]("p4i2")
  def p4i3 = column[Int]("p4i3")
  def p4i4 = column[Int]("p4i4")
  def p4i5 = column[Int]("p4i5")
  def p4i6 = column[Int]("p4i6")

  // HList-based wide case class mapping
  def m3 = (
    id ::
    p1i1 :: p1i2 :: p1i3 :: p1i4 :: p1i5 :: p1i6 ::
    p2i1 :: p2i2 :: p2i3 :: p2i4 :: p2i5 :: p2i6 ::
    p3i1 :: p3i2 :: p3i3 :: p3i4 :: p3i5 :: p3i6 ::
    p4i1 :: p4i2 :: p4i3 :: p4i4 :: p4i5 :: p4i6 :: HNil
  ).mapTo[BigCase]

  def * = m3
}
EDIT
So, if you then want slick-codegen to produce huge tables using the mapTo method described above, you override the relevant parts of the code generator and add in a mapTo statement:
package your.package

import slick.codegen.SourceCodeGenerator
import slick.{model => m}

class HugeTableCodegen(model: m.Model) extends SourceCodeGenerator(model) with GeneratorHelpers[String, String, String] {

  override def Table = new Table(_) {
    table =>

    // always define types using case classes
    override def EntityType = new EntityTypeDef {
      override def classEnabled = true
    }

    // allow compound statements using HNil, except when "def *" is being defined; use a mapTo statement there instead
    override def compoundValue(values: Seq[String]): String = {
      // values.size > 22 assumes that this must be for the "*" operator and NOT a primary/foreign key
      if (hlistEnabled && values.size > 22) values.mkString("(", " :: ", s" :: HNil).mapTo[${StringExtensions(model.name.table).toCamelCase}Row]")
      else if (hlistEnabled) values.mkString(" :: ") + " :: HNil"
      else if (values.size == 1) values.head
      else s"""(${values.mkString(", ")})"""
    }

    // should always be case classes, so no need to handle hlistEnabled here any longer
    override def compoundType(types: Seq[String]): String = {
      if (types.size == 1) types.head
      else s"""(${types.mkString(", ")})"""
    }
  }
}
You then structure the codegen code in a separate project, as documented, so that it generates the source at compile time. You can pass your class name as an argument to the SourceCodeGenerator you're extending:
lazy val generateSlickSchema = taskKey[Seq[File]]("Generates Schema definitions for SQL tables")

generateSlickSchema := {
  val managedSourceFolder = sourceManaged.value / "main" / "scala"
  val packagePath = "your.sql.table.package"

  (runner in Compile).value.run(
    "slick.codegen.SourceCodeGenerator",
    (dependencyClasspath in Compile).value.files,
    Array(
      "env.db.connectorProfile",
      "slick.db.driver",
      "slick.db.url",
      managedSourceFolder.getPath,
      packagePath,
      "slick.db.user",
      "slick.db.password",
      "true",
      "your.package.HugeTableCodegen"
    ),
    streams.value.log
  )
  Seq(managedSourceFolder / s"${packagePath.replace(".", "/")}/Tables.scala")
}
There are a few options available, as you have already found out: nested tuples, conversion from a Slick HList to a Shapeless HList and then to case classes, and so on.
I found all those options too complicated for the task and went with a customised Slick codegen that generates a simple wrapper class with accessors.
Have a look at this gist.
class MyCodegenCustomisations(model: Model) extends slick.codegen.SourceCodeGenerator(model) {
  import ColumnDetection._

  override def Table = new Table(_) {
    table =>

    val columnIndexByName = columns.map(_.name).zipWithIndex.toMap

    def getColumnIndex(columnName: String): Option[Int] = {
      columnIndexByName.get(columnName)
    }

    private def getWrapperCode: Seq[String] = {
      if (columns.length <= 22) {
        // do not generate a wrapper for tables which get a case class generated by Slick
        Seq.empty[String]
      } else {
        val lines =
          columns.map { c =>
            getColumnIndex(c.name) match {
              case Some(colIndex) =>
                // lazy val firstname: Option[String] = row.productElement(1).asInstanceOf[Option[String]]
                val colType = c.exposedType
                val line = s"lazy val ${c.name}: $colType = values($colIndex).asInstanceOf[$colType]"
                line
              case None => ""
            }
          }
        Seq("",
          "/*",
          "case class Wrapper(private val row: Row) {",
          "// addressing an HList by index is very slow, let's convert it to a vector",
          "private lazy val values = row.toList.toVector",
          ""
        ) ++ lines ++ Seq("}", "*/", "")
      }
    }

    override def code: Seq[String] = {
      val originalCode = super.code
      originalCode ++ this.getWrapperCode
    }
  }
}
This issue is solved in Slick 3.3:
https://github.com/slick/slick/pull/1889/
This solution provides def * and def ? and also supports plain SQL.

Scala Best Practices: Trait Inheritance vs Enumeration

I'm currently experimenting with Scala and looking for best practices. I found myself having two opposite approaches to solving a single problem, and I'd like to know which is better and why, which is more conventional, and whether you know of some other, better approaches. The second one looks prettier to me.
1. Enumeration-based solution
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

object DBType extends Enumeration {
  val MySql, PostgreSql, H2 = Value

  def fromUrl(url: String) = {
    url match {
      case u if u.startsWith("jdbc:mysql:") => Some(MySql)
      case u if u.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case u if u.startsWith("jdbc:h2:") => Some(H2)
      case _ => None
    }
  }
}

case class DBType(typ: DBType.Value) {
  lazy val driver: Driver = {
    val name = typ match {
      case DBType.MySql => "com.mysql.jdbc.Driver"
      case DBType.PostgreSql => "org.postgresql.Driver"
      case DBType.H2 => "org.h2.Driver"
    }
    Class.forName(name).newInstance().asInstanceOf[Driver]
  }

  lazy val adapter: DatabaseAdapter = {
    typ match {
      case DBType.MySql => new MySQLAdapter
      case DBType.PostgreSql => new PostgreSqlAdapter
      case DBType.H2 => new H2Adapter
    }
  }
}
2. Singleton-based solution
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

trait DBType {
  def driver: Driver
  def adapter: DatabaseAdapter
}

object DBType {
  object MySql extends DBType {
    lazy val driver = Class.forName("com.mysql.jdbc.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new MySQLAdapter
  }

  object PostgreSql extends DBType {
    lazy val driver = Class.forName("org.postgresql.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new PostgreSqlAdapter
  }

  object H2 extends DBType {
    lazy val driver = Class.forName("org.h2.Driver").newInstance().asInstanceOf[Driver]
    lazy val adapter = new H2Adapter
  }

  def fromUrl(url: String) = {
    url match {
      case u if u.startsWith("jdbc:mysql:") => Some(MySql)
      case u if u.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case u if u.startsWith("jdbc:h2:") => Some(H2)
      case _ => None
    }
  }
}
If you declare a sealed trait DBType, you can pattern match on it with exhaustiveness checking (i.e., Scala will tell you if you forget one case).
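A minimal sketch of that variant; leaving one of the cases out of the match draws a non-exhaustive-match warning at compile time:

sealed trait DBType
case object MySql extends DBType
case object PostgreSql extends DBType
case object H2 extends DBType

def driverName(t: DBType): String = t match {
  case MySql      => "com.mysql.jdbc.Driver"
  case PostgreSql => "org.postgresql.Driver"
  case H2         => "org.h2.Driver"
}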
Anyway, I dislike Scala's Enumeration, and I'm hardly alone in that. I never use it; if there's something for which an enumeration is really the cleanest solution, it is better to just write it in Java, using Java's enumerations.
For this particular case you don't really need classes for each database type; it's just data. Unless the real case is dramatically more complex, I would use a map and string-parsing based solution to minimize the amount of code duplication:
case class DBRecord(url: String, driver: String, adapter: () => DatabaseAdapter)

class DBType(record: DBRecord) {
  lazy val driver = Class.forName(record.driver).newInstance().asInstanceOf[Driver]
  lazy val adapter = record.adapter()
}

object DBType {
  val knownDB = List(
    DBRecord("mysql", "com.mysql.jdbc.Driver", () => new MySQLAdapter),
    DBRecord("postgresql", "org.postgresql.Driver", () => new PostgreSqlAdapter),
    DBRecord("h2", "org.h2.Driver", () => new H2Adapter)
  )

  val urlLookup = knownDB.map(rec => rec.url -> rec).toMap

  def fromURL(url: String): Option[DBType] = {
    val parts = url.split(':')
    if (parts.length < 3 || parts(0) != "jdbc") None
    else urlLookup.get(parts(1)).map(rec => new DBType(rec))
  }
}
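Hypothetical usage of the above (the Squeryl adapter and JDBC driver classes still have to be on the classpath at runtime):

DBType.fromURL("jdbc:mysql://localhost/mydb") match {
  case Some(db) => println(db.adapter) // a freshly constructed MySQLAdapter
  case None     => println("unsupported database")
}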
I'd go for the singleton variant, since it allows clearer subclassing.
Also, you might need to do DB-specific things/overrides, since some queries/subqueries/operators might differ.
But I'd try something like this:
import org.squeryl.internals.DatabaseAdapter
import org.squeryl.adapters.{H2Adapter, MySQLAdapter, PostgreSqlAdapter}
import java.sql.Driver

abstract class DBType(jdbcDriver: String) {
  lazy val driver = Class.forName(jdbcDriver).newInstance().asInstanceOf[Driver]
  def adapter: DatabaseAdapter
}

object DBType {
  object MySql extends DBType("com.mysql.jdbc.Driver") {
    lazy val adapter = new MySQLAdapter
  }

  object PostgreSql extends DBType("org.postgresql.Driver") {
    lazy val adapter = new PostgreSqlAdapter
  }

  object H2 extends DBType("org.h2.Driver") {
    lazy val adapter = new H2Adapter
  }

  def fromUrl(url: String) = {
    url match {
      case _ if url.startsWith("jdbc:mysql:") => Some(MySql)
      case _ if url.startsWith("jdbc:postgresql:") => Some(PostgreSql)
      case _ if url.startsWith("jdbc:h2:") => Some(H2)
      case _ => None
    }
  }
}

"scala is not an enclosing class"

When compiling this specification:
import org.specs.Specification
import org.specs.matcher.extension.ParserMatchers

class ParserSpec extends Specification with ParserMatchers {
  type Elem = Char

  "Vaadin DSL parser" should {
    "parse attributes in parentheses" in {
      DslParser.attributes must(
        succeedOn(stringReader("""(attr1="val1")""")).
          withResult(Map[String, AttrVal]("attr1" -> AttrVal("val1", "String"))))
    }
  }
}
I get the following error:
ParserSpec.scala:21
error: scala is not an enclosing class
withResult(Map[String, AttrVal]("attr1" -> AttrVal("val1", "String"))))
^
I don't understand this error message at all. Why does it appear?
Scala version is 2.8.1, specs version is 1.6.7.2.
DslParser.attributes has the type Parser[Map[String, AttrVal]], and the combinators succeedOn and withResult are defined as follows:
trait ParserMatchers extends Parsers with Matchers {
  case class SucceedOn[T](str: Input,
                          resultMatcherOpt: Option[Matcher[T]]) extends Matcher[Parser[T]] {
    def apply(parserBN: => Parser[T]) = {
      val parser = parserBN
      val parseResult = parser(str)
      parseResult match {
        case Success(result, remainingInput) =>
          val succParseMsg = "Parser " + parser + " succeeded on input " + str + " with result " + result
          val okMsgBuffer = new StringBuilder(succParseMsg)
          val koMsgBuffer = new StringBuilder(succParseMsg)
          val cond = resultMatcherOpt match {
            case None =>
              true
            case Some(resultMatcher) =>
              resultMatcher(result) match {
                case (success, okMessage, koMessage) =>
                  okMsgBuffer.append(" and ").append(okMessage)
                  koMsgBuffer.append(" but ").append(koMessage)
                  success
              }
          }
          (cond, okMsgBuffer.toString, koMsgBuffer.toString)
        case _ =>
          (false, "Parser succeeded", "Parser " + parser + ": " + parseResult)
      }
    }

    def resultMust(resultMatcher: Matcher[T]) = this.copy(resultMatcherOpt = Some(resultMatcher))
    def withResult(expectedResult: T) = resultMust(beEqualTo(expectedResult))
    def ignoringResult = this.copy(resultMatcherOpt = None)
  }

  def succeedOn[T](str: Input, expectedResultOpt: Option[Matcher[T]] = None) =
    SucceedOn(str, expectedResultOpt)

  implicit def stringReader(str: String): Reader[Char] = new CharSequenceReader(str)
}
This message can occur when the compiler is really trying to signal a type error or a type inference failure; it's a bug (or family of bugs) in scalac.
To locate the problem, progressively add explicit types and type arguments, and break complex expressions into smaller subexpressions.
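For the spec above, a hypothetical first step would be to pull the expectation apart and annotate each piece, so scalac reports the real type error instead of the cryptic one:

// Hypothetical restructuring with explicit types; the names are from the question.
val expected: Map[String, AttrVal] = Map("attr1" -> AttrVal("val1", "String"))
val matcher: Matcher[Parser[Map[String, AttrVal]]] =
  succeedOn(stringReader("""(attr1="val1")""")).withResult(expected)
DslParser.attributes must matcher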
For bonus points, produce a standalone example and file a bug.

What Automatic Resource Management alternatives exist for Scala?

I have seen many examples of ARM (automatic resource management) on the web for Scala. It seems to be a rite of passage to write one, though most look pretty much like one another. I did see a pretty cool example using continuations, though.
At any rate, a lot of that code has flaws of one type or another, so I figured it would be a good idea to have a reference here on Stack Overflow, where we can vote up the most correct and appropriate versions.
Chris Hansen's blog entry 'ARM Blocks in Scala: Revisited' from 3/26/09 talks about slide 21 of Martin Odersky's FOSDEM presentation. This next block is taken straight from slide 21 (with permission):
def using[T <: { def close() }]
    (resource: T)
    (block: T => Unit) {
  try {
    block(resource)
  } finally {
    if (resource != null) resource.close()
  }
}
--end quote--
Then we can call it like this:
using(new BufferedReader(new FileReader("file"))) { r =>
  var count = 0
  while (r.readLine != null) count += 1
  println(count)
}
What are the drawbacks of this approach? That pattern would seem to address 95% of where I would need automatic resource management...
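One drawback I can already see, sketched below: because close() runs in a finally block, an exception thrown by close() masks the exception thrown by the block itself (Flaky is a made-up resource):

class Flaky {
  def close(): Unit = throw new RuntimeException("close failed")
}

try {
  using(new Flaky) { _ => throw new RuntimeException("block failed") }
} catch {
  case e: RuntimeException => println(e.getMessage) // prints "close failed"
}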
Edit: added code snippet
Edit2: extending the design pattern, taking inspiration from Python's with statement and addressing:
statements to run before the block
re-throwing exceptions depending on the managed resource
handling two resources with a single using statement
resource-specific handling by providing an implicit conversion and a Managed class
This is with Scala 2.8.
trait Managed[T] {
  def onEnter(): T
  def onExit(t: Throwable = null): Unit
  def attempt(block: => Unit): Unit = {
    try { block } finally {}
  }
}

def using[T <: Any](managed: Managed[T])(block: T => Unit) {
  val resource = managed.onEnter()
  var exception = false
  try { block(resource) } catch {
    case t: Throwable => exception = true; managed.onExit(t)
  } finally {
    if (!exception) managed.onExit()
  }
}

def using[T <: Any, U <: Any]
    (managed1: Managed[T], managed2: Managed[U])
    (block: T => U => Unit) {
  using[T](managed1) { r =>
    using[U](managed2) { s => block(r)(s) }
  }
}

class ManagedOS(out: OutputStream) extends Managed[OutputStream] {
  def onEnter(): OutputStream = out
  def onExit(t: Throwable = null): Unit = {
    attempt(out.close())
    if (t != null) throw t
  }
}

class ManagedIS(in: InputStream) extends Managed[InputStream] {
  def onEnter(): InputStream = in
  def onExit(t: Throwable = null): Unit = {
    attempt(in.close())
    if (t != null) throw t
  }
}

implicit def os2managed(out: OutputStream): Managed[OutputStream] = {
  new ManagedOS(out)
}

implicit def is2managed(in: InputStream): Managed[InputStream] = {
  new ManagedIS(in)
}

def main(args: Array[String]): Unit = {
  using(new FileInputStream("foo.txt"), new FileOutputStream("bar.txt")) {
    in => out =>
      Iterator continually { in.read() } takeWhile (_ != -1) foreach {
        out.write(_)
      }
  }
}
Daniel,
I've just recently deployed the scala-arm library for automatic resource management. You can find the documentation here: https://github.com/jsuereth/scala-arm/wiki
This library currently supports three styles of usage:
1) Imperative/for-expression:
import resource._

for (input <- managed(new FileInputStream("test.txt"))) {
  // Code that uses the input as a FileInputStream
}
2) Monadic-style:
import resource._
import java.io._

val lines = for {
  input <- managed(new FileInputStream("test.txt"))
  bufferedReader = new BufferedReader(new InputStreamReader(input))
  line <- makeBufferedReaderLineIterator(bufferedReader)
} yield line.trim()

lines foreach println
3) Delimited-continuations style
Here's an "echo" TCP server:
import java.io._
import util.continuations._
import resource._

def each_line_from(r: BufferedReader): String @suspendable =
  shift { k =>
    var line = r.readLine
    while (line != null) {
      k(line)
      line = r.readLine
    }
  }

reset {
  val server = managed(new ServerSocket(8007)) !

  while (true) {
    // This reset is not needed, however the below denotes a "flow" of execution that can be deferred.
    // One can envision an asynchronous execution model that would support the exact same semantics as below.
    reset {
      val connection = managed(server.accept) !
      val output = managed(connection.getOutputStream) !
      val input = managed(connection.getInputStream) !
      val writer = new PrintWriter(new BufferedWriter(new OutputStreamWriter(output)))
      val reader = new BufferedReader(new InputStreamReader(input))
      writer.println(each_line_from(reader))
      writer.flush()
    }
  }
}
The code makes use of a Resource type-trait, so it's able to adapt to most resource types. It has a fallback that uses structural typing against classes with either a close or a dispose method. Please check out the documentation and let me know if you think of any handy features to add.
Here's James Iry's solution using continuations:
// standard using block definition
def using[X <: {def close()}, A](resource: X)(f: X => A) = {
  try {
    f(resource)
  } finally {
    resource.close()
  }
}

// A DC version of 'using'
def resource[X <: {def close()}, B](res: X) = shift(using[X, B](res))

// some sugar for reset
def withResources[A, C](x: => A @cps[A, C]) = reset { x }
Here are the solutions with and without continuations for comparison:
def copyFileCPS = using(new BufferedReader(new FileReader("test.txt"))) {
  reader => {
    using(new BufferedWriter(new FileWriter("test_copy.txt"))) {
      writer => {
        var line = reader.readLine
        var count = 0
        while (line != null) {
          count += 1
          writer.write(line)
          writer.newLine
          line = reader.readLine
        }
        count
      }
    }
  }
}

def copyFileDC = withResources {
  val reader = resource[BufferedReader, Int](new BufferedReader(new FileReader("test.txt")))
  val writer = resource[BufferedWriter, Int](new BufferedWriter(new FileWriter("test_copy.txt")))
  var line = reader.readLine
  var count = 0
  while (line != null) {
    count += 1
    writer write line
    writer.newLine
    line = reader.readLine
  }
  count
}
And here's Tiark Rompf's suggested improvement:
trait ContextType[B]
def forceContextType[B]: ContextType[B] = null

// A DC version of 'using'
def resource[X <: {def close()}, B: ContextType](res: X): X @cps[B, B] = shift(using[X, B](res))

// some sugar for reset
def withResources[A](x: => A @cps[A, A]) = reset { x }

// and now use our new lib
def copyFileDC = withResources {
  implicit val _ = forceContextType[Int]
  val reader = resource(new BufferedReader(new FileReader("test.txt")))
  val writer = resource(new BufferedWriter(new FileWriter("test_copy.txt")))
  var line = reader.readLine
  var count = 0
  while (line != null) {
    count += 1
    writer write line
    writer.newLine
    line = reader.readLine
  }
  count
}
As of Scala 2.13, try-with-resources is finally supported via Using :). Example:
val lines: Try[Seq[String]] =
  Using(new BufferedReader(new FileReader("file.txt"))) { reader =>
    Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
  }
or use Using.resource to avoid the Try:
val lines: Seq[String] =
  Using.resource(new BufferedReader(new FileReader("file.txt"))) { reader =>
    Iterator.unfold(())(_ => Option(reader.readLine()).map(_ -> ())).toList
  }
You can find more examples in the Using docs:
A utility for performing automatic resource management. It can be used to perform an operation using resources, after which it releases the resources in reverse order of their creation.
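Multiple resources can be tied to a single scope with Using.Manager; a sketch of copying a file line by line, with both resources released in reverse order of creation at the end:

import java.io._
import scala.util.{Try, Using}

val copied: Try[Unit] = Using.Manager { use =>
  val in  = use(new BufferedReader(new FileReader("in.txt")))
  val out = use(new BufferedWriter(new FileWriter("out.txt")))
  Iterator.continually(in.readLine()).takeWhile(_ != null).foreach { line =>
    out.write(line); out.newLine()
  }
}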
I see a gradual four-step evolution for doing ARM in Scala:
No ARM: dirt.
Only closures: better, but multiple nested blocks.
Continuation monad: use for-comprehensions to flatten the nesting, but there's an unnatural separation into two blocks.
Direct-style continuations: nirvana, aha! This is also the most type-safe alternative: using a resource outside the withResource block will be a type error.
There is a lightweight (10 lines of code) ARM included with better-files. See: https://github.com/pathikrit/better-files#lightweight-arm
import better.files._

for {
  in <- inputStream.autoClosed
  out <- outputStream.autoClosed
} in.pipeTo(out)
// The input and output streams are auto-closed once out of scope
Here is how it is implemented if you don't want the whole library:
type Closeable = {
  def close(): Unit
}

type ManagedResource[A <: Closeable] = Traversable[A]

implicit class CloseableOps[A <: Closeable](resource: A) {
  def autoClosed: ManagedResource[A] = new Traversable[A] {
    override def foreach[U](f: A => U) = try {
      f(resource)
    } finally {
      resource.close()
    }
  }
}
How about using type classes?
trait GenericDisposable[-T] {
  def dispose(v: T): Unit
}
...

def using[T, U](r: T)(block: T => U)(implicit disp: GenericDisposable[T]): U = try {
  block(r)
} finally {
  Option(r).foreach { r => disp.dispose(r) }
}
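A sketch of how an instance might be supplied: thanks to the contravariant type parameter (-T), a single instance for java.io.Closeable covers every subtype, e.g. FileInputStream (the file name is made up):

import java.io.{Closeable, FileInputStream}

implicit val closeableDisposable: GenericDisposable[Closeable] =
  new GenericDisposable[Closeable] {
    def dispose(v: Closeable): Unit = v.close()
  }

// The Closeable instance is found for FileInputStream via contravariance:
using(new FileInputStream("in.txt")) { in => in.read() }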
Another alternative is Choppy's Lazy TryClose monad. It's pretty good with database connections:
val ds = new JdbcDataSource()

val output = for {
  conn <- TryClose(ds.getConnection())
  ps   <- TryClose(conn.prepareStatement("select * from MyTable"))
  rs   <- TryClose.wrap(ps.executeQuery())
} yield wrap(extractResult(rs))

// Note that nothing will actually be done until 'resolve' is called
output.resolve match {
  case Success(result) => // Do something
  case Failure(e)      => // Handle Stuff
}
And with streams:
val output = for {
  outputStream     <- TryClose(new ByteArrayOutputStream())
  gzipOutputStream <- TryClose(new GZIPOutputStream(outputStream))
  _                <- TryClose.wrap(gzipOutputStream.write(content))
} yield wrap({ gzipOutputStream.flush(); outputStream.toByteArray })

output.resolve.unwrap match {
  case Success(bytes) => // process result
  case Failure(e)     => // handle exception
}
More info here: https://github.com/choppythelumberjack/tryclose
Here is @chengpohi's answer, modified so it works with Scala 2.8+ instead of just Scala 2.13 (yes, it works with Scala 2.13 as well):
def unfold[A, S](start: S)(op: S => Option[(A, S)]): List[A] =
  Iterator
    .iterate(op(start))(_.flatMap { case (_, s) => op(s) })
    .map(_.map(_._1))
    .takeWhile(_.isDefined)
    .flatten
    .toList

def using[A <: AutoCloseable, B](resource: A)
         (block: A => B): B =
  try block(resource) finally resource.close()

val lines: Seq[String] =
  using(new BufferedReader(new FileReader("file.txt"))) { reader =>
    unfold(())(_ => Option(reader.readLine()).map(_ -> ()))
  }
While Using is OK, I prefer the monadic style of resource composition. Twitter Util's Managed is pretty nice, except for its dependencies and its not-very-polished API.
To that end, I've published https://github.com/dvgica/managerial for Scala 2.12, 2.13, and 3.0.0. It's largely based on the Twitter Util Managed code, has no dependencies, and includes some API improvements inspired by cats-effect's Resource.
The simple example:
import ca.dvgi.managerial._
val fileContents = Managed.from(scala.io.Source.fromFile("file.txt")).use(_.mkString)
But the real strength of the library is composing resources via for comprehensions.
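A sketch of what that composition might look like, pieced together from the README's description (Managed.from and use appear above; map/flatMap are what "composing via for comprehensions" implies, so the exact API may differ):

import ca.dvgi.managerial._

val combined = for {
  a <- Managed.from(scala.io.Source.fromFile("a.txt"))
  b <- Managed.from(scala.io.Source.fromFile("b.txt"))
} yield (a, b)

// Both sources are built when use runs and are released in reverse order afterwards.
val totalLines = combined.use { case (a, b) => a.getLines().size + b.getLines().size }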
Let me know what you think!