Scala / Slick 3.0.1 - Update Multiple Columns

Whenever I get an update request for a given id, I am trying to update the masterId and the updatedDtTm columns in a DB table (I don't want to update createdDtTm). The following is my code:
case class Master(id: Option[Long] = None, masterId: String,
                  createdDtTm: Option[java.util.Date],
                  updatedDtTm: Option[java.util.Date])

/**
 * This is my Slick mapping table
 * with the default projection
 */
class MappingMaster(tag: Tag) extends Table[Master](tag, "master") {

  implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
    ud => new Timestamp(ud.getTime),
    sd => new java.util.Date(sd.getTime)
  )

  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def masterId = column[String]("master_id")
  def createdDtTm = column[java.util.Date]("created_dttm")
  def updatedDtTm = column[java.util.Date]("updated_dttm")

  def * = (id.?, masterId, createdDtTm.?, updatedDtTm.?) <>
    ((Master.apply _).tupled, Master.unapply _)
}
/**
 * Somewhere in the DAO update call
 */
db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm))
  .update(("new_master_id", new Date())))

// I also tried the following
db.run(masterRecords.filter(_.id === id)
  .map(rw => (rw.masterId, rw.updatedDtTm).shaped[(String, java.util.Date)])
  .update(("new_master_id", new Date())))
The Slick documentation states that in order to update multiple columns one needs to map to the corresponding columns and then update them.
The problem here is the following: the update method seems to be accepting a value of Nothing.
I also tried the following, which does the same thing as above:
val t = for {
  ms <- masterRecords if ms.id === 1234L
} yield (ms.masterId, ms.updatedDtTm)

db.run(t.update(("new_master_id", new Date())))
When I compile the code, it gives me the following compilation error:
Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: (slick.lifted.Rep[String], slick.lifted.Rep[java.util.Date])
[error] Unpacked type: (String, java.util.Date)
[error] Packed type: Any
[error] db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId,rw.updatedDtTm).shaped[(String,java.util.Date)]).update(("new_master_id",new Date()))
I am using Scala 2.11 with Slick 3.0.1 and IntelliJ as the IDE. I'd really appreciate it if you could shed some light on this.
Cheers,
Sathish

(Replaces original answer) It seems the implicit has to be in scope for the queries; the following compiles:
case class Master(id: Option[Long] = None, masterId: String,
                  createdDtTm: Option[java.util.Date],
                  updatedDtTm: Option[java.util.Date])

implicit val DateTimeColumnType = MappedColumnType.base[java.util.Date, java.sql.Timestamp](
  ud => new Timestamp(ud.getTime),
  sd => new java.util.Date(sd.getTime)
)

class MappingMaster(tag: Tag) extends Table[Master](tag, "master") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def masterId = column[String]("master_id")
  def createdDtTm = column[java.util.Date]("created_dttm")
  def updatedDtTm = column[java.util.Date]("updated_dttm")

  def * = (id.?, masterId, createdDtTm.?, updatedDtTm.?) <> ((Master.apply _).tupled, Master.unapply _)
}

private val masterRecords = TableQuery[MappingMaster]

val id: Long = 123

db.run(masterRecords.filter(_.id === id).map(rw => (rw.masterId, rw.updatedDtTm))
  .update(("new_master_id", new Date())))

val t = for {
  ms <- masterRecords if ms.id === id
} yield (ms.masterId, ms.updatedDtTm)

db.run(t.update(("new_master_id", new Date())))
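Why this works: the MappedColumnType for java.util.Date has to be visible both where the table columns are defined and where the update query is built; otherwise Slick cannot find a Shape for Rep[java.util.Date] and infers Nothing for the update parameter. Below is a minimal sketch of one way to keep it in scope everywhere, assuming a shared trait (the trait name and the H2 profile are illustrative, not from the original code):

import java.sql.Timestamp
import slick.driver.H2Driver.api._ // any JdbcProfile's api works; H2 is just an assumption

// Shared column mappings: mix this trait in (or import its members) wherever
// tables are defined *and* wherever queries against them are built.
trait DateColumnMappings {
  implicit val dateTimeColumnType = MappedColumnType.base[java.util.Date, Timestamp](
    ud => new Timestamp(ud.getTime),
    sd => new java.util.Date(sd.getTime)
  )
}

object DateColumnMappings extends DateColumnMappings

With import DateColumnMappings._ at the call site, the map/update pair from the question compiles as-is.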

Related

ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast

I am using an Aggregator to apply some custom merge on a DataFrame after grouping its records by their primary key:
case class Player(
  pk: String,
  ts: String,
  first_name: String,
  date_of_birth: String
)

case class PlayerProcessed(
  var ts: String,
  var first_name: String,
  var date_of_birth: String
)

// Custom Aggregator - this is just for the example; the actual one is more complex
object BatchDedupe extends Aggregator[Player, PlayerProcessed, PlayerProcessed] {

  def zero: PlayerProcessed = PlayerProcessed("0", null, null)

  def reduce(bf: PlayerProcessed, in: Player): PlayerProcessed = {
    bf.ts = in.ts
    bf.first_name = in.first_name
    bf.date_of_birth = in.date_of_birth
    bf
  }

  def merge(bf1: PlayerProcessed, bf2: PlayerProcessed): PlayerProcessed = {
    bf1.ts = bf2.ts
    bf1.first_name = bf2.first_name
    bf1.date_of_birth = bf2.date_of_birth
    bf1
  }

  def finish(reduction: PlayerProcessed): PlayerProcessed = reduction
  def bufferEncoder: Encoder[PlayerProcessed] = Encoders.product
  def outputEncoder: Encoder[PlayerProcessed] = Encoders.product
}
val ply1 = Player("12121212121212", "10000001", "Rogger", "1980-01-02")
val ply2 = Player("12121212121212", "10000002", "Rogg", null)
val ply3 = Player("12121212121212", "10000004", null, "1985-01-02")
val ply4 = Player("12121212121212", "10000003", "Roggelio", "1982-01-02")
val seq_users = sc.parallelize(Seq(ply1, ply2, ply3, ply4)).toDF.as[Player]
val grouped = seq_users.groupByKey(_.pk)
val non_sorted = grouped.agg(BatchDedupe.toColumn.name("deduped"))
non_sorted.show(false)
This returns:
+--------------+--------------------------------+
|key |deduped |
+--------------+--------------------------------+
|12121212121212|{10000003, Roggelio, 1982-01-02}|
+--------------+--------------------------------+
Now, I would like to order the records based on ts before aggregating them. From here I understand that .sortBy("ts") does not guarantee the order after .groupByKey(_.pk). So I was trying to apply the sorting between .groupByKey and .agg.
The output of .groupByKey(_.pk) is a KeyValueGroupedDataset[String, Player], where the values of each group come as an Iterator. So to apply some sorting logic there I convert it into a Seq:
val sorted = grouped.mapGroups{case(k, iter) => (k, iter.toSeq.sortBy(_.ts))}.agg(BatchDedupe.toColumn.name("deduped"))
sorted.show(false)
However, after adding the sorting logic, the output of .mapGroups is a Dataset[(String, Seq[Player])]. So when I try to invoke .agg on it I get the following exception:
Caused by: ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast to $line050e0d37885948cd91f7f7dd9e3b4da9311.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$Player
How could I convert back the output of my .mapGroups(...) into a KeyValueGroupedDataset[String,Player]?
I tried to convert it back to an Iterator as follows:
val sorted = grouped.mapGroups{case(k, iter) => (k, iter.toSeq.sortBy(_.ts).toIterator)}.agg(BatchDedupe.toColumn.name("deduped"))
But this approach produced the following exception:
UnsupportedOperationException: No Encoder found for Iterator[Player]
- field (class: "scala.collection.Iterator", name: "_2")
- root class: "scala.Tuple2"
How else can I add the sort logic between the .groupByKey and .agg methods?
Based on the discussion above, the purpose of the Aggregator is to get the latest field values per player by ts, ignoring null values.
This can be achieved fairly easily by aggregating all fields individually using max_by. With that there is no need for a custom Aggregator or a mutable aggregation buffer.
import org.apache.spark.sql.functions._
val players: Dataset[Player] = ...
// aggregate all columns except the key individually by ts
// NULLs will be ignored (SQL standard)
val aggColumns = players.columns
.filterNot(_ == "pk")
.map(colName => expr(s"max_by($colName, if(isNotNull($colName), ts, null))").as(colName))
val aggregatedPlayers = players
.groupBy(col("pk"))
.agg(aggColumns.head, aggColumns.tail: _*)
.as[Player]
On the most recent versions of Spark you can also use the built-in max_by function from the DataFrame API directly:
import org.apache.spark.sql.functions._
val players: Dataset[Player] = ...
// aggregate all columns except the key individually by ts
// NULLs will be ignored (SQL standard)
val aggColumns = players.columns
.filterNot(_ == "pk")
.map(colName => max_by(col(colName), when(col(colName).isNotNull, col("ts"))).as(colName))
val aggregatedPlayers = players
.groupBy(col("pk"))
.agg(aggColumns.head, aggColumns.tail: _*)
.as[Player]
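For completeness, if you do want to keep the sort-then-reduce shape of the original question, one way that avoids feeding .agg again is to fold the sorted Seq inside mapGroups itself. This is a sketch under the same Player/PlayerProcessed case classes as above, and is not part of the original answer; it assumes spark.implicits._ is in scope, as in the shell session earlier:

import org.apache.spark.sql.Dataset

val deduped: Dataset[(String, PlayerProcessed)] =
  seq_users
    .groupByKey(_.pk)
    .mapGroups { case (pk, iter) =>
      // Sort the group by ts, then fold it the way reduce() did,
      // skipping nulls so the latest non-null value per field wins.
      val acc = iter.toSeq.sortBy(_.ts)
        .foldLeft(PlayerProcessed("0", null, null)) { (buf, in) =>
          buf.ts = in.ts
          if (in.first_name != null) buf.first_name = in.first_name
          if (in.date_of_birth != null) buf.date_of_birth = in.date_of_birth
          buf
        }
      (pk, acc)
    }

deduped.show(false)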

Optional pagination

I am trying to implement a simple db query with optional pagination. My attempt:
def getEntities(limit: Option[Int], offset: Option[Int]) = {
  // MyTable is a slick definition of the db table
  val withLimit = limit.fold(MyTable)(l => MyTable.take(l)) // Error here:
                                                            // MyTable and MyTable.take(l)
                                                            // have different types
  val withOffset = offset.fold(withLimit)(o => withLimit.drop(o))
  val query = withOffset.result
  db.run(query)
}
The problem is an error:
type mismatch:
found: slick.lifted.Query
required: slick.lifted.TableQuery
How can I make this code work? And maybe make it a little bit prettier?
My current fix to get Query from TableQuery is to add .filter(_ => true), but IMHO this is not nice:
val withLimit = limit.fold(MyTable.filter(_ => true))(l => MyTable.take(l))
Try to replace
val MyTable = TableQuery[SomeTable]
with
val MyTable: Query[SomeTable, SomeTable#TableElementType, Seq] = TableQuery[SomeTable]
i.e. to specify the type (statically upcast TableQuery to Query).
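With that type ascription, both branches of each fold return a plain Query, so the original method compiles unchanged. A quick sketch, reusing db and the names from the answer above (SomeTable is whatever table class backs MyTable):

// Upcast once at the definition site: both fold branches now agree on
// Query[SomeTable, SomeTable#TableElementType, Seq].
val MyTable: Query[SomeTable, SomeTable#TableElementType, Seq] = TableQuery[SomeTable]

def getEntities(limit: Option[Int], offset: Option[Int]) = {
  val withLimit  = limit.fold(MyTable)(l => MyTable.take(l))
  val withOffset = offset.fold(withLimit)(o => withLimit.drop(o))
  db.run(withOffset.result)
}

An equivalent shortcut, if always emitting LIMIT/OFFSET in the generated SQL is acceptable, is MyTable.drop(offset.getOrElse(0)).take(limit.getOrElse(Int.MaxValue)).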

Slick insert into PostgreSQL

I am just trying to add a row from Slick to my PostgreSQL database.
Here is what I am trying to do:
val dbConfig = DatabaseConfigProvider.get[JdbcProfile](Play.current)
import dbConfig.driver.api._
val query = Task += new TaskRow(5, "taskName", status = "other")
println(Task.insertStatement)
val resultingQuery = dbConfig.db.run(query).map(res => "Task successfully added").recover {
case ex: Exception => ex.getCause.getMessage
}
Here is the result of println:
insert into "task" ("taskname","description","status","datetask","pediocitynumber","periodicitytype") values (?,?,?,?,?,?)
I don't get any result from the resulting query, neither an exception nor a success.
Code generated by slick-codegen 3.1.1:
case class TaskRow(taskid: Int, taskname: String, description: Option[String] = None, status: String, datetask: Option[java.sql.Timestamp] = None, pediocitynumber: Option[Int] = None, periodicitytype: Option[String] = None)
/** GetResult implicit for fetching TaskRow objects using plain SQL queries */
implicit def GetResultTaskRow(implicit e0: GR[Int], e1: GR[String], e2: GR[Option[String]], e3: GR[Option[java.sql.Timestamp]], e4: GR[Option[Int]]): GR[TaskRow] = GR {
  prs => import prs._
  TaskRow.tupled((<<[Int], <<[String], <<?[String], <<[String], <<?[java.sql.Timestamp], <<?[Int], <<?[String]))
}

/** Table description of table task. Objects of this class serve as prototypes for rows in queries. */
class Task(_tableTag: Tag) extends Table[TaskRow](_tableTag, "task") {
  def * = (taskid, taskname, description, status, datetask, pediocitynumber, periodicitytype) <> (TaskRow.tupled, TaskRow.unapply)

  /** Maps whole row to an option. Useful for outer joins. */
  def ? = (Rep.Some(taskid), Rep.Some(taskname), description, Rep.Some(status), datetask, pediocitynumber, periodicitytype).shaped.<>({ r => import r._; _1.map(_ => TaskRow.tupled((_1.get, _2.get, _3, _4.get, _5, _6, _7))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))

  /** Database column taskid SqlType(serial), AutoInc, PrimaryKey */
  val taskid: Rep[Int] = column[Int]("taskid", O.AutoInc, O.PrimaryKey)
  /** Database column taskname SqlType(text) */
  val taskname: Rep[String] = column[String]("taskname")
  /** Database column description SqlType(text), Default(None) */
  val description: Rep[Option[String]] = column[Option[String]]("description", O.Default(None))
  /** Database column status SqlType(statustask) */
  val status: Rep[String] = column[String]("status")
  /** Database column datetask SqlType(timestamp), Default(None) */
  val datetask: Rep[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("datetask", O.Default(None))
  /** Database column pediocitynumber SqlType(int4), Default(None) */
  val pediocitynumber: Rep[Option[Int]] = column[Option[Int]]("pediocitynumber", O.Default(None))
  /** Database column periodicitytype SqlType(periodicitytype), Default(None) */
  val periodicitytype: Rep[Option[String]] = column[Option[String]]("periodicitytype", O.Default(None))
}
/** Collection-like TableQuery object for table Task */
lazy val Task = new TableQuery(tag => new Task(tag))
Could someone explain what I am doing wrong?
EDIT:
To clarify my question:
The row is not added to the table; it seems that nothing happens. I can't see any exception or error thrown.
I think it could come from the SQL statement and the question marks (values (?,?,?,?,?,?)). Shouldn't those be the actual values of the fields?
More code:
class Application @Inject()(dbConfigProvider: DatabaseConfigProvider) extends Controller {

  def index = Action {
    Ok(views.html.main())
  }

  def taskSave = Action.async { implicit request =>
    println(request.body.asJson)
    val dbConfig = DatabaseConfigProvider.get[JdbcProfile](Play.current)
    import dbConfig.driver.api._
    val query = Task += new TaskRow(5, "taskName", status = "other")
    println(Task.insertStatement)
    println(query)
    val resultingQuery = dbConfig.db.run(query).map(res => "Task successfully added").recover {
      case ex: Exception => ex.getCause.getMessage
    }
    resultingQuery.map(r => println("result : " + r))
    Future(Ok(""))
  }
}
The route:
PUT /task-save controllers.Application.taskSave
The statement is not wrong; the (?,?,?,...) placeholders are expected.
The lazy val Task is of type TableQuery[...]. When you call insertStatement on this object, you get back the template SQL string that is run in the background, with placeholders for the bind parameters. That is expected.
So that is not the indicator of the problem.
Try something like this:
val db = Database.forConfig("h2mem1")
try {
  Await.result(db.run(DBIO.seq(
    // create the schema
    Task.schema.create,
    // insert a TaskRow instance (status is required; the other fields have defaults)
    Task += TaskRow(2, "abcd", status = "other"),
    // print the tasks (select * from "task")
    Task.result.map(println))), Duration.Inf)
} finally db.close
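One more thing worth checking in the controller above (my observation, not part of the original answer): the action answers with Future(Ok("")) without chaining the insert's Future, so a failure only ever reaches the fire-and-forget recover block and is easy to miss. Tying the database Future to the response makes success or failure visible in the HTTP result; a sketch reusing the question's Task, TaskRow and dbConfig:

def taskSave = Action.async { implicit request =>
  val insert = Task += new TaskRow(5, "taskName", status = "other")
  dbConfig.db.run(insert)
    .map(_ => Ok("Task successfully added"))
    .recover { case ex: Exception =>
      // getCause can be null, so fall back to the exception's own message
      InternalServerError(Option(ex.getCause).fold(ex.getMessage)(_.getMessage))
    }
}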

Slick - trying to do a filter based on Joda-Time DateTime

I get this error in Play Framework:
could not find implicit value for parameter tt:
slick.ast.TypedType[org.joda.time.DateTime]
That happens if I don't add the following implicit. I placed an implicit to do the conversion and declared the column after it using def datain = column[DateTime]("datain")(jdateColumnType):
implicit def jdateColumnType =
MappedColumnType.base[DateTime, Timestamp](
dt => new Timestamp(dt.getMillis),
ts => new DateTime(ts.getTime)
)
But this didn't solve the issue in the search, where an error occurs saying that DateTime does not have <=:
def getLast24HByAddress(address: String): Future[List[Email]] = {
  val now = new java.sql.Timestamp(new java.util.Date().getTime())
  db.run(
    Emails.filter(_.datain <= DateTime.now.minusDays(1))
  )
}
When I do this with the implicit I get:
value <= is not a member of slick.lifted.Rep[org.joda.time.DateTime]
I usually do this in the class where I define the tables:
/**
 * Mapping for using Joda Time.
 */
implicit def dateTimeMapping = MappedColumnType.base[DateTime, java.sql.Timestamp](
  dt => new Timestamp(dt.getMillis),
  ts => new DateTime(ts.getTime, DateTimeZone.UTC))
Then when you instantiate and import the class:
val schema = DBSchemaV2(driver)
import schema._
import schema.driver.api._
this mapping will come along for the ride. That way you'll be able to use Slick's extension methods, such as <= on the mapped column.
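With the mapping in scope, the lifted comparison from the question compiles. Here is a sketch using the table and field names from the question, and assuming Emails has an address column as the SQL below suggests. Note that the question's datain <= now minus one day actually selects older mail; >= matches the last-24-hours intent of the plain SQL version:

// Rep[DateTime] gains <=, >= etc. because the implicit mapping
// provides a BaseColumnType[DateTime].
def getLast24HByAddress(address: String): Future[Seq[Email]] =
  db.run(
    Emails
      .filter(e => e.address === address && e.datain >= DateTime.now.minusDays(1))
      .result
  )

Alternatively, the plain SQL route below works without the lifted mapping, using a GetResult: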
implicit val getEmailResult = GetResult(r => Email(Some(r.nextLong), r.nextString, r.nextString,
  r.nextString, new DateTime(r.nextTimestamp.getTime)))

def last24Hours(address: String): Future[Vector[Email]] = {
  val r = sql"""SELECT id, email, uuid, address, datain FROM email
                WHERE DATE_ADD(datain, INTERVAL 1 DAY) >= NOW() AND address = $address"""
    .as[Email]
  db.run(r)
}

playframework 2.4 - Unspecified value parameter headers error

I am upgrading from Play Framework 2.3 to 2.4. After changing the versions, compiling the same code gives the following error. Since I am a novice at Scala, I am trying to learn Scala to solve this issue, but I still don't know what the problem is. What I want to do is add a request header value to the original request headers. Any help will be appreciated.
[error] /mnt/garner/project/app-service/app/com/company/playframework/filters/LoggingFilter.scala:26: not enough arguments for constructor Headers: (headers: Seq[(String, String)])play.api.mvc.Headers.
[error] Unspecified value parameter headers.
[error] val newHeaders = new Headers { val data = (requestHeader.headers.toMap
The LoggingFilter class
class LoggingFilter extends Filter {
  val logger = AccessLogger.getInstance();

  def apply(next: (RequestHeader) => Future[Result])(requestHeader: RequestHeader): Future[Result] = {
    val startTime = System.currentTimeMillis
    val requestId = logger.createLog();
    val newHeaders = new Headers { val data = (requestHeader.headers.toMap
      + (AccessLogger.X_HEADER__REQUEST_ID -> Seq(requestId))).toList }
    val newRequestHeader = requestHeader.copy(headers = newHeaders)
    next(newRequestHeader).map { result =>
      val endTime = System.currentTimeMillis
      val requestTime = endTime - startTime
      val bytesToString: Enumeratee[Array[Byte], String] = Enumeratee.map[Array[Byte]] { bytes => new String(bytes) }
      val consume: Iteratee[String, String] = Iteratee.consume[String]()
      val resultBody: Future[String] = result.body |>>> bytesToString &>> consume
      resultBody.map { body =>
        logger.finish(requestId, result.header.status, requestTime, body)
      }
      result;
    }
  }
}
Edit:
I updated the code as follows and it compiled fine. The following code changed from
val newHeaders = new Headers { val data = (requestHeader.headers.toMap
+ (AccessLogger.X_HEADER__REQUEST_ID -> Seq(requestId))).toList }
to
val newHeaders = new Headers((requestHeader.headers.toSimpleMap
+ (AccessLogger.X_HEADER__REQUEST_ID -> requestId)).toList)
It simply states that if you want to construct Headers with new, you need to supply a field named headers of type Seq[(String, String)]. If you omit the initial new, you will be using the apply function of the companion object for Headers, which just takes a vararg of (String, String), and your code should work. If you look at the documentation https://www.playframework.com/documentation/2.4.x/api/scala/index.html#play.api.mvc.Headers and flip between the docs for the object and the class, it should become clear.
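In other words, the edited version can also be written with the companion object's vararg apply. A sketch based on the edited code above, assuming the same AccessLogger constant:

// No `new`: Headers.apply takes (String, String) pairs as varargs.
val newHeaders = Headers(
  (requestHeader.headers.toSimpleMap
    + (AccessLogger.X_HEADER__REQUEST_ID -> requestId)).toSeq: _*
)
val newRequestHeader = requestHeader.copy(headers = newHeaders)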