Dynamic orderBy with Squeryl - scala

I cannot figure out how to change orderBy dynamically at runtime. I need something like:
def samplesSorted(fields: List[String]) = {
  from(Schema.samples)(s => select(s) orderBy (fields.map(getterByName)))
}
Or something like
def samplesSorted(fields: List[String]) = {
  val q = from(Schema.samples)(s => select(s))
  fields.foreach(field => q.addOrderBy(getterByName(field)))
  q
}
I am trying to write a helper function to manipulate the AST now, but that does not seem like the right solution.

I did not notice there is a version of orderBy that accepts a list of ExpressionNodes. I was able to solve it like this:
def samplesSorted(fields: List[String]) = {
  from(Schema.samples)(s => select(s) orderBy (fields.map(buildOrderBy(s))))
}

def buildOrderBy(row: Row)(field: String): ExpressionNode = {
  getterByName(row, field)
}

def getterByName(row: Row, field: String): String = field match {
  case "Name" => row.name
  case "Address" => row.address
}
I have not tried it with fields of different types yet; the implicits may not work in that case, but I could always call them explicitly.
Update:
To do the same with descending order one could use a helper like this one:
def desc(node: ExpressionNode): ExpressionNode = new OrderByArg(node) { desc }
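For example, a minimal sketch (reusing the buildOrderBy helper above, and assuming Squeryl's usual implicit conversions are in scope) that sorts every requested field in descending order:
def samplesSortedDesc(fields: List[String]) = {
  // Sketch only: wraps each dynamically selected column in the desc helper above.
  from(Schema.samples)(s =>
    select(s) orderBy (fields.map(f => desc(buildOrderBy(s)(f)))))
}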

This works for me
def ord(dr: DataRow, name: String): ExpressionNode = if (orderAscending) {
  dr.getterByName(name) asc
} else {
  dr.getterByName(name) desc
}

case class DataRow(id: Long,
                   @Column("resource_id") resourceId: String /* other columns elided */) {
  def getterByName(name: String) = {
    name match {
      case "resource_id" => resourceId.~
      case _ => id.~
    }
  }
}

from(DataSchema.dataRows) { dr =>
  where(dr.id === id).select(dr).orderBy(ord(dr, fieldName))
}.page(offset, limit)

Related

How to make only a few unrelated data types acceptable by generics

There is a trait which works perfectly. However, I would like to refactor the part related to the generic [T] in order to limit the data types which can be accepted by [T] (I need only Option[JsValue], JsValue, StringEnumEntry, String). Is it possible to solve this problem with a shapeless coproduct? Maybe there are other solutions?
trait ParameterBinders extends Log {
  def jsonBinder[T](json: T, jsonType: java.lang.String = "json"): ParameterBinderWithValue = {
    val jsonObject = new PGobject()
    jsonObject.setType(jsonType)
    json match {
      case json: Option[JsValue] =>
        jsonObject.setValue(json.map(Json.stringify).orNull)
      case json: JsValue =>
        jsonObject.setValue(Json.stringify(json))
      case json: StringEnumEntry =>
        jsonObject.setValue(json.value)
      case json: String =>
        jsonObject.setValue(json)
      case _ =>
        logger.error("unexpected data type")
    }
    if (jsonType == "JSONSCHEMATYPE" || jsonType == "SYSPROPERTYTYPE") {
      ParameterBinder(this, (ps, i) => {
        ps.setObject(i, jsonObject)
      })
    } else {
      ParameterBinder(json, (ps, i) => {
        ps.setObject(i, jsonObject)
      })
    }
  }
}
The easiest way is to use an ADT as described in the link of the first comment.
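A minimal sketch of that ADT approach (the JsonBindable trait and its case classes below are hypothetical, just to illustrate the idea):
// Hypothetical ADT: one constructor per accepted payload type,
// so the match becomes exhaustive and compiler-checked.
sealed trait JsonBindable
final case class OptJs(value: Option[JsValue]) extends JsonBindable
final case class Js(value: JsValue) extends JsonBindable
final case class EnumStr(value: StringEnumEntry) extends JsonBindable
final case class Str(value: String) extends JsonBindable

def stringValue(json: JsonBindable): String = json match {
  case OptJs(v)   => v.map(Json.stringify).orNull
  case Js(v)      => Json.stringify(v)
  case EnumStr(v) => v.value
  case Str(v)     => v
}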
If you don't want to change the types that are accepted by jsonBinder, then you can solve the problem by using a typeclass.
e.g.
trait JsonBindValue[T] {
  def value(t: T): String
}
You would then have to provide instances for your accepted data types:
object JsonBindValue {
  implicit val OptJsBinder = new JsonBindValue[Option[JsValue]] {
    def value(t: Option[JsValue]): String = {
      t.map(Json.stringify).orNull
    }
  }
  ... more instances here
}
Finally, your function would look like this:
def jsonBinder[T: JsonBindValue](json: T, jsonType: java.lang.String = "json"): ParameterBinderWithValue = {
  val binder = implicitly[JsonBindValue[T]]
  jsonObject.setType(jsonType)
  jsonObject.setValue(binder.value(json))
  ...
}
If you call the function without an implicit instance in scope, you will get a compile-time error.
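As a sketch, the remaining instances could follow the same pattern as OptJsBinder above (these are illustrative assumptions, not code from the answer):
// Hypothetical additional instances, one per accepted type:
implicit val JsBinder = new JsonBindValue[JsValue] {
  def value(t: JsValue): String = Json.stringify(t)
}
implicit val StringBinder = new JsonBindValue[String] {
  def value(t: String): String = t
}
implicit val EnumBinder = new JsonBindValue[StringEnumEntry] {
  def value(t: StringEnumEntry): String = t.value
}
// With these in scope, jsonBinder(Json.obj("a" -> 1)) or jsonBinder("raw") compiles,
// while jsonBinder(42) fails at compile time because there is no JsonBindValue[Int].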

Working with options in Scala (best practices)

I have a method that I wrote to enrich person data by performing an API call and adding the enriched data.
I have this case class:
case class Person(personData: PersonData, dataEnrichment: Option[DataEnrichment])
My method is supposed to return this case class, but I have a few filters before that: if the person's height is not "1.8 m", or if the personId was not found in the bio using the regex, I want to return a Person with dataEnrichment = None. My issue is that the person's height and the personId are Options themselves, so it looks like this:
def enrichPersonObjWithApiCall(person: Person) = {
  person.personData.height.map(_.equals("1.8 m")) match {
    case Some(true) =>
      val personId = person.personData.bio flatMap { comment =>
        extractPersonIdIfExists(comment)
      }
      personId match {
        case Some(perId) =>
          apiCall(perId) map { apiRes =>
            Person(
              person.personData,
              dataEnrichment = apiRes)
          }
        case _ =>
          Future successful Person(
            person.personData,
            dataEnrichment = None)
      }
    case _ =>
      Future successful Person(
        person.personData,
        dataEnrichment = None)
  }
}
def extractPersonIdIfExists(personBio: String): Option[String] = {
  val personIdRegex: Regex = """(?<=PersonId:)[^;]+""".r
  personIdRegex.findFirstIn(personBio)
}

def apiCall(personId: String): Future[Option[DataEnrichment]] = {
  ???
}

case class DataEnrichment(res: Option[String])
case class PersonData(name: String, height: Option[String], bio: Option[String])
This doesn't seem like Scala best practice. Do you have a more elegant way to get the same result?
Using for is a good way to process a chain of Option values:
def enrichPersonObjWithApiCall(person: Person): Future[Person] =
  (
    for {
      height <- person.personData.height if height == "1.8 m"
      comment <- person.personData.bio
      perId <- extractPersonIdIfExists(comment)
    } yield {
      apiCall(perId).map(Person(person.personData, _))
    }
  ).getOrElse(Future.successful(Person(person.personData, None)))
This is equivalent to a chain of map, flatMap and filter calls, but much easier to read.
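As a rough sketch, the hand-written chain this corresponds to (same helpers as above; withFilter on Option behaves like filter here) would be:
// Roughly what the for-comprehension above expands to.
def enrichPersonObjWithApiCall(person: Person): Future[Person] =
  person.personData.height
    .withFilter(_ == "1.8 m")
    .flatMap(_ => person.personData.bio)
    .flatMap(extractPersonIdIfExists)
    .map(perId => apiCall(perId).map(Person(person.personData, _)))
    .getOrElse(Future.successful(Person(person.personData, None)))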
Here, I tried to make it more idiomatic and shorter:
def enrichPersonObjWithApiCall(person: Person) = {
  person.personData.height.collect {
    case h if h == "1.8 m" =>
      val personId = person.personData.bio.flatMap(extractPersonIdIfExists)
      personId.map(
        apiCall(_)
          .map(apiRes => person.copy(dataEnrichment = apiRes))
      )
  }.flatten.getOrElse(
    Future.successful(person.copy(dataEnrichment = None))
  )
}
Basically, the idea is to use monadic chains of map, flatMap and collect instead of pattern matching where appropriate.
Same idea as Aivean's answer, but I would use map, flatMap and filter.
def enrichPersonObjWithApiCall(person: Person) = {
  person.personData.height
    .filter(_ == "1.8 m")
    .flatMap { _ =>
      val personId = person.personData.bio
        .flatMap(extractPersonIdIfExists)
      personId.map(
        apiCall(_)
          .map(apiRes => person.copy(dataEnrichment = apiRes))
      )
    }.getOrElse(Future.successful(person))
}
It's more readable for me.

How to add derived variables to a ResultSet

Almost all guides/tutorials that I've seen only show how to parse values from columns that are directly available in the database. For example, the following is a very common pattern, and I understand how it can be useful in many ways:
case class Campaign(id: Int, campaign_mode_id: Int, name: String)

class Application @Inject()(db: Database) extends Controller {

  val campaign = {
    get[Int]("campaign.id") ~
    get[Int]("campaign.campaign_mode_id") ~
    get[String]("campaign.name") map {
      case id ~ campaign_mode_id ~ name => Campaign(id, campaign_mode_id, name)
    }
  }

  def index = Action {
    val data: List[Campaign] = db.withConnection { implicit connection =>
      SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
    }
    Ok(Json.toJson(data))
  }
}
And it'd produce a result that might look like the following:
[
  {
    id: 2324,
    campaign_mode_id: 13,
    name: "ABC"
  },
  {
    id: 1324,
    campaign_mode_id: 23,
    name: "ABCD"
  }
]
Now what if there were an additional date field in the campaign table like started_on that referred to when the campaign was started? Or another field called num_followers that was an integer referring to the number of followers?
And suppose that I wanted to do some calculations after running the DB query and before returning my JSON. For example, I want to include a latest_campaign_date that references the started_on date of the newest campaign. Or say that I wanted to include an average_num_followers that referred to the average number of followers for all campaigns.
My final result would look like:
{
  latest_campaign_date: 12 Dec 2018,
  average_num_followers: 123,
  campaigns: [
    {
      id: 2324,
      campaign_mode_id: 13,
      name: "ABC"
    },
    {
      id: 1324,
      campaign_mode_id: 23,
      name: "ABCD"
    }
  ]
}
I know that for the examples I've given it's better to do those calculations in my DB query and not in my application code. But what if I had to do something really complicated and wanted to do it in my application code for some reason? How should I modify my ResultSetParser to facilitate this?
Here are a couple of approaches that I've tried:
Do not use ResultSetParser and instead do everything manually
case class CampaignData(newestCampaignDate: Long, newestCampaignId: Long, averageNumFollowers: Float, campaigns: Seq[Campaign])

def aggregater(rows: Seq[Row]): CampaignData = {
  val newestCampaignDate: Long = getNewestDate(rows)
  val newestCampaignId: Long = getNewestCampaign(rows)
  val averageNumFollowers: Float = getAverageNumFollowers(rows)
  val campaigns: Seq[Campaign] = rows.map(row => {
    val rowMap: Map[String, Any] = row.asMap
    Campaign(
      rowMap("campaign.id").asInstanceOf[Int],
      rowMap("campaign.campaign_mode_id") match { case None => 0 case Some(value) => value.asInstanceOf[Int] },
      rowMap("campaign.name") match { case None => "" case Some(value) => value.asInstanceOf[String] }
    )
  })
  CampaignData(newestCampaignDate, newestCampaignId, averageNumFollowers, campaigns)
}

def index = Action {
  val data: Seq[Row] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name, started_on, num_followers FROM campaign")
  }
  Ok(Json.toJson(aggregater(data)))
}
This approach smells bad because having to deal with every field using asInstanceOf and match is very tedious and honestly feels unsafe. And also intuitively, I feel that Anorm should have something better for this since I'm probably not the first person who has run into this problem.
Use ResultSetParser in combination with another function
case class Campaign(id: Int, campaign_mode_id: Int, name: String)
case class CampaignData(newestCampaignDate: Long, newestCampaignId: Long, averageNumFollowers: Float, campaigns: Seq[Campaign])

val campaign = {
  get[Int]("campaign.id") ~
  get[Int]("campaign.campaign_mode_id") ~
  get[Int]("campaign.num_followers") ~
  get[Long]("campaign.started_on") ~
  get[String]("campaign.name") map {
    case id ~ campaign_mode_id ~ num_followers ~ started_on ~ name => Map(
      "id" -> id,
      "campaign_mode_id" -> campaign_mode_id,
      "num_followers" -> num_followers,
      "started_on" -> started_on,
      "name" -> name
    )
  }
}
def index = Action {
  val data: List[Map[String, Any]] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name, started_on, num_followers FROM campaign").as(campaign.*)
  }
  Ok(Json.toJson(aggregator(data)))
}

def aggregator(data: List[Map[String, Any]]): CampaignData = {
  val newestCampaignDate: Long = getNewestDate(data)
  val newestCampaignId: Long = getNewestCampaign(data)
  val averageNumFollowers: Float = getAverageNumFollowers(data)
  val campaigns: Seq[Campaign] = getCampaigns(data)
  CampaignData(newestCampaignDate, newestCampaignId, averageNumFollowers, campaigns)
}
This approach is better in the sense that I don't have to deal with asInstanceOf, but then there is a bigger problem of having to deal with the intermediate Map. And it makes all the getter functions (e.g. getCampaigns) so much more complicated. I feel that Anorm has to offer something better out of the box that I'm not aware of.
As you posted in your first code snippet, the following code
def index = Action {
  val data: List[Campaign] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
  }
  Ok(Json.toJson(data))
}
returns a typesafe List of Campaign thanks to Anorm extractors.
Typically, you will pre-process the result with a typesafe function like so
case class CampaignAggregateData(campaigns: List[Campaign], averageNumFollowers: Int, newestCampaignId: Option[Long])

def aggregate(f: List[Campaign]): CampaignAggregateData

def index = Action {
  val manyCampaign: List[Campaign] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
  }
  val aggregatedData: CampaignAggregateData = aggregate(manyCampaign)
  Ok(Json.toJson(aggregatedData))
}
In cases where you need the aggregation to be executed by the database engine, you would typically have multiple db.withConnection statements inside a single action.

scala returns doesn't conform to required S_

I got the error
found : scala.concurrent.Future[Option[models.ProcessTemplatesModel]]
required: Option[models.ProcessTemplatesModel]
My function is below
def createCopyOfProcessTemplate(processTemplateId: Int): Future[Option[ProcessTemplatesModel]] = {
  val action = processTemplates.filter(_.id === processTemplateId).result.map(_.headOption)
  val result: Future[Option[ProcessTemplatesModel]] = db.run(action)
  result.map { case (result) =>
    result match {
      case Some(r) => {
        var copy = (processTemplates returning processTemplates.map(_.id)) += ProcessTemplatesModel(None, "[Copy of] " + r.title, r.version, r.createdat, r.updatedat, r.deadline, r.status, r.comment, Some(false), r.checkedat, Some(false), r.approvedat, false, r.approveprocess, r.trainingsprocess)
        val composedAction = copy.flatMap { id =>
          processTemplates.filter(_.id === id).result.headOption
        }
        db.run(composedAction)
      }
    }
  }
}
What is my problem in this case?
Edit:
My controller function looks like this:
def createCopyOfProcessTemplate(processTemplateId: Int) = Action.async {
  processTemplateDTO.createCopyOfProcessTemplate(processTemplateId).map { process =>
    Ok(Json.toJson(process))
  }
}
Is the problem there?
Looking at your code, there are the following issues:
1. You use two db.run calls, each of which returns a Future, but the inner Future is never flattened into the outer one. To resolve this you should compose the futures with flatMap or a for-comprehension.
2. You use only one partial function, case Some(_) =>, in the pattern match and don't handle the None value.
3. You can use a single db.run with action composition instead.
Your code could look like this:
def createCopyOfProcessTemplate(processTemplateId: Int): Future[Option[ProcessTemplatesModel]] = {
  val action = processTemplates.filter(...).result.map(_.headOption)
  val composedAction = action.flatMap {
    case Some(r) =>
      val copyAction = (processTemplates returning processTemplates...)
      copyAction.flatMap { id =>
        processTemplates.filter(_.id === id).result.headOption
      }
    case _ =>
      DBIO.successful(None) // issue #2 has been resolved here
  }
  db.run(composedAction) // issue #3 has been resolved here
}
We get rid of issue #1 (because we use actions composition).

better slick dynamic query coding style

private def buildQuery(query: TweetQuery) = {
  var q = Tweets.map { t =>
    t
  }
  query.isLocked.foreach { isLocked =>
    q = q.filter(_.isLocked === isLocked)
  }
  query.isProcessed.foreach { isProcessed =>
    q = q.filter(_.processFinished === isProcessed)
  }
  query.maxScheduleAt.foreach { maxScheduleAt =>
    q = q.filter(_.expectScheduleAt < maxScheduleAt)
  }
  query.minScheduleAt.foreach { minScheduleAt =>
    q = q.filter(_.expectScheduleAt > minScheduleAt)
  }
  query.status.foreach { status =>
    q = q.filter(_.status === status)
  }
  query.scheduleType.foreach { scheduleType =>
    q = q.filter(_.scheduleType === scheduleType)
  }
  q
}
I am writing code like the above to build dynamic queries. It is really tedious; is there a better way to do this?
Maybe the MaybeFilter can help you: https://gist.github.com/cvogt/9193220
I think this is the correct migrated code for Slick 2.1.0:
case class MaybeFilter[X, Y](val query: Query[X, Y, Seq]) {
  def filter[T, R: CanBeQueryCondition](data: Option[T])(f: T => X => R) = {
    data.map(v => MaybeFilter(query.withFilter(f(v)))).getOrElse(this)
  }
}
I modified cvogt's answer to work with Slick 2.1.0. Explanations of what has changed are here.
Hope it helps someone :)
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
  def filter(op: Option[_])(f: (X) => Column[Option[Boolean]]) = {
    op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
  }
}
Regards.
Corrected example:
// Class definition
import scala.slick.driver.H2Driver.simple._
import scala.slick.lifted.{ProvenShape, ForeignKeyQuery}

// A Suppliers table with 6 columns: id, name, street, city, state, zip
class Suppliers(tag: Tag)
  extends Table[(Int, String, String, String, String, String)](tag, "SUPPLIERS") {

  // This is the primary key column:
  def id: Column[Int] = column[Int]("SUP_ID", O.PrimaryKey)
  def name: Column[String] = column[String]("SUP_NAME")
  def street: Column[String] = column[String]("STREET")
  def city: Column[String] = column[String]("CITY")
  def state: Column[String] = column[String]("STATE")
  def zip: Column[String] = column[String]("ZIP")

  // Every table needs a * projection with the same type as the table's type parameter
  def * : ProvenShape[(Int, String, String, String, String, String)] =
    (id, name, street, city, state, zip)
}

// I changed the name of the def from filter to filteredBy to ease the
// implicit conversion
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
  def filteredBy(op: Option[_])(f: (X) => Column[Option[Boolean]]) = {
    op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
  }
}

// Implicit conversion to the MaybeFilter in order to minimize ceremony
implicit def maybeFilterConversor[X, Y](q: Query[X, Y, Seq]) = new MaybeFilter(q)

val suppliers: TableQuery[Suppliers] = TableQuery[Suppliers]
suppliers += (101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199")

// Dynamic query here
// Try this assignment: val nameFilter: Option[String] = Some("cme") and see the results
val nameFilter: Option[String] = Some("Acme")
// Also try assigning None here, like this: val supIDFilter: Option[Int] = None, and see the results
val supIDFilter: Option[Int] = Some(101)

suppliers
  .filteredBy(supIDFilter){_.id === supIDFilter}
  .filteredBy(nameFilter){_.name like nameFilter.map("%" + _ + "%").getOrElse("")}
  .query.list
Complete example:
https://github.com/neowinx/hello-slick-2.1-dynamic-filter
Are isLocked, isProcessed, etc Options?
Then you can also write things like
for (locked <- query.isLocked) { q = q.filter(_.isLocked is locked) }
if that's of any consolation :-}
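Applied to the original buildQuery, that trick would look roughly like this (just a sketch, keeping the question's column names and the === operator from the question):
private def buildQuery(query: TweetQuery) = {
  var q = Tweets.map(t => t)
  // Each for-loop only applies its filter when the corresponding Option is defined.
  for (isLocked <- query.isLocked) q = q.filter(_.isLocked === isLocked)
  for (isProcessed <- query.isProcessed) q = q.filter(_.processFinished === isProcessed)
  for (maxScheduleAt <- query.maxScheduleAt) q = q.filter(_.expectScheduleAt < maxScheduleAt)
  for (minScheduleAt <- query.minScheduleAt) q = q.filter(_.expectScheduleAt > minScheduleAt)
  for (status <- query.status) q = q.filter(_.status === status)
  for (scheduleType <- query.scheduleType) q = q.filter(_.scheduleType === scheduleType)
  q
}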
Well, it seems like this code violates the open/closed principle (OCP). Take a look at this article; even though it's not about Scala, it explains how to properly design such methods.