Better Slick dynamic query coding style - Scala

private def buildQuery(query: TweetQuery) = {
  var q = Tweets.map { t => t }

  query.isLocked.foreach { isLocked =>
    q = q.filter(_.isLocked === isLocked)
  }
  query.isProcessed.foreach { isProcessed =>
    q = q.filter(_.processFinished === isProcessed)
  }
  query.maxScheduleAt.foreach { maxScheduleAt =>
    q = q.filter(_.expectScheduleAt < maxScheduleAt)
  }
  query.minScheduleAt.foreach { minScheduleAt =>
    q = q.filter(_.expectScheduleAt > minScheduleAt)
  }
  query.status.foreach { status =>
    q = q.filter(_.status === status)
  }
  query.scheduleType.foreach { scheduleType =>
    q = q.filter(_.scheduleType === scheduleType)
  }
  q
}
I am writing code like the above to build dynamic queries. It is really tedious; is there a better way to do this?

Maybe the MaybeFilter can help you https://gist.github.com/cvogt/9193220

I think this is the correctly migrated code for Slick 2.1.0:
case class MaybeFilter[X, Y](val query: Query[X, Y, Seq]) {
  def filter[T, R: CanBeQueryCondition](data: Option[T])(f: T => X => R) = {
    data.map(v => MaybeFilter(query.withFilter(f(v)))).getOrElse(this)
  }
}
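With that in scope, the buildQuery from the question could collapse to something like the following sketch (assuming the Tweets table and TweetQuery fields shown in the question, and CanBeQueryCondition imported from Slick):

private def buildQuery(query: TweetQuery) =
  MaybeFilter(Tweets)
    .filter(query.isLocked)(v => _.isLocked === v)
    .filter(query.isProcessed)(v => _.processFinished === v)
    .filter(query.maxScheduleAt)(v => _.expectScheduleAt < v)
    .filter(query.minScheduleAt)(v => _.expectScheduleAt > v)
    .filter(query.status)(v => _.status === v)
    .query  // unwrap the underlying Query again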

I modified cvogt's answer to work with Slick 2.1.0. Explanations of what has changed are in here.
Hope it helps someone :)
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
  def filter(op: Option[_])(f: (X) => Column[Option[Boolean]]) = {
    op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
  }
}
Regards.
Corrected example:
//Class definition
import scala.slick.driver.H2Driver.simple._
import scala.slick.lifted.{ProvenShape, ForeignKeyQuery}

// A Suppliers table with 6 columns: id, name, street, city, state, zip
class Suppliers(tag: Tag)
  extends Table[(Int, String, String, String, String, String)](tag, "SUPPLIERS") {
  // This is the primary key column:
  def id: Column[Int] = column[Int]("SUP_ID", O.PrimaryKey)
  def name: Column[String] = column[String]("SUP_NAME")
  def street: Column[String] = column[String]("STREET")
  def city: Column[String] = column[String]("CITY")
  def state: Column[String] = column[String]("STATE")
  def zip: Column[String] = column[String]("ZIP")
  // Every table needs a * projection with the same type as the table's type parameter
  def * : ProvenShape[(Int, String, String, String, String, String)] =
    (id, name, street, city, state, zip)
}

//I changed the name of the def from filter to filteredBy to ease the
//implicit conversion
case class MaybeFilter[X, Y](val query: scala.slick.lifted.Query[X, Y, Seq]) {
  def filteredBy(op: Option[_])(f: (X) => Column[Option[Boolean]]) = {
    op map { o => MaybeFilter(query.filter(f)) } getOrElse { this }
  }
}

//Implicit conversion to the MaybeFilter in order to minimize ceremony
implicit def maybeFilterConversor[X, Y](q: Query[X, Y, Seq]) = new MaybeFilter(q)

val suppliers: TableQuery[Suppliers] = TableQuery[Suppliers]
suppliers += (101, "Acme, Inc.", "99 Market Street", "Groundsville", "CA", "95199")

//Dynamic query here
//try this assignment: val nameFilter: Option[String] = Some("cme") and see the results
val nameFilter: Option[String] = Some("Acme")
//also try assigning None here, like this: val supIDFilter: Option[Int] = None, and see the results
val supIDFilter: Option[Int] = Some(101)

suppliers
  .filteredBy(supIDFilter){_.id === supIDFilter}
  .filteredBy(nameFilter){_.name like nameFilter.map("%" + _ + "%").getOrElse("")}
  .query.list
Complete example:
https://github.com/neowinx/hello-slick-2.1-dynamic-filter

Are isLocked, isProcessed, etc. Options?
Then you can also write things like
for (locked <- query.isLocked) { q = q.filter(_.isLocked is locked) }
if that's of any consolation :-}
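Applied to the question's buildQuery, that turns each Option check into a one-liner (a sketch, using === as in the question):

private def buildQuery(query: TweetQuery) = {
  var q = Tweets.map(t => t)  // same widening trick as in the question
  for (locked    <- query.isLocked)      q = q.filter(_.isLocked === locked)
  for (processed <- query.isProcessed)   q = q.filter(_.processFinished === processed)
  for (max       <- query.maxScheduleAt) q = q.filter(_.expectScheduleAt < max)
  for (min       <- query.minScheduleAt) q = q.filter(_.expectScheduleAt > min)
  q
}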

Well, it seems like this code violates the open/closed principle (OCP). Take a look at this article: even though it's not in Scala, it explains how to design such methods properly.

Related

How do I compose nested case classes populated with async MongoDB queries in Play Framework

I have been trying to convert MongoDB queries that I have working with Await into fully async code. The problem is, I cannot find any examples, or get code working, that populates a list of objects where each object requires a nested find returning a future.
I have seen examples for a single object, such as
val user = mongoDao.getUser(id)
val address = mongoDao.getAddress(user.id)
For that, I see a for comprehension works just fine. However, I have a list of objects (similar to users) and I can't seem to get the code right.
What I need to do is get all the users in an async manner, then, when they complete, get all the addresses and populate a field (or create a new case class).
val usersFuture: Future[List[User]] = mongoDao.getUsers()
val fullFutures: Future[List[FullUser]] = usersFuture.map { users: List[User] =>
  users.map { user: User =>
    val futureAddress: Future[Address] = mongoDao.getAddress()
    // Now create an object
    futureAddress.map { address: Address =>
      FullUserInfo(user, address)
    }
  }
}
So, I'd like to end up with a Future[List[FullUser]] that I can return to the Play framework. I've included the cut-down code I've tried.
Thanks.
// OBJECTS HERE
case class Outer(id: Int, name: String)
case class Inner(id: Int, name: String)
case class Combined(id: Int, name: String, inner: Inner)

// FAKE DAO to reproduce
@Singleton
class StatInner @Inject()(implicit val ec: ExecutionContext) {

  def outer() = {
    Future {
      val lb = new ListBuffer[Outer]()
      Thread.sleep(1000)
      println("Done")
      for (id <- 1 to 5) {
        lb += Outer(id, s"Hello $id")
      }
      lb.toList
    }
  }

  def inner(id: Int): Future[Inner] = {
    Future {
      Thread.sleep(1000)
      Inner(id, s"inner $id")
    }
  }
}

// CODE to query that is not working
def nestedTree = Action.async {
  val statInner: StatInner = new StatInner()
  val listouter: Future[List[Outer]] = statInner.outer()
  val combined = listouter.map((listOuter: List[Outer]) => {
    listOuter.flatMap((outer: Outer) => {
      val futInner: Future[Inner] = statInner.inner(outer.id)
      futInner.map((inner: Inner) => {
        Combined(outer, inner)
      })
    })
  })
  combined.map(Json.toJson(_))
}
Use flatMap together with Future.sequence:
val usersFuture: Future[List[User]] = mongoDao.getUsers()

val fullFutures: Future[List[FullUser]] = usersFuture.flatMap { users =>
  Future.sequence(users.map { user =>
    mongoDao.getAddress().map { address =>
      FullUserInfo(user, address)
    }
  })
}
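The same pattern applied to the Outer/Inner cut-down from the question might look like the sketch below (it assumes an implicit ExecutionContext and a JSON Writes for Combined are in scope):

def nestedTree = Action.async {
  val statInner: StatInner = new StatInner()
  statInner.outer().flatMap { outers =>
    // turn List[Future[Combined]] into Future[List[Combined]]
    Future.sequence(outers.map { outer =>
      statInner.inner(outer.id).map(inner => Combined(outer.id, outer.name, inner))
    })
  }.map(combined => Ok(Json.toJson(combined)))
}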

How to add derived variables to a ResultSet

Almost all guides/tutorials that I've seen only show how to parse values from columns that are directly available in the database. For example, the following is a very common pattern, and I understand how it can be useful in many ways:
case class Campaign(id: Int, campaign_mode_id: Int, name: String)

class Application @Inject()(db: Database) extends Controller {

  val campaign = {
    get[Int]("campaign.id") ~
    get[Int]("campaign.campaign_mode_id") ~
    get[String]("campaign.name") map {
      case id ~ campaign_mode_id ~ name => Campaign(id, campaign_mode_id, name)
    }
  }

  def index = Action {
    val data: List[Campaign] = db.withConnection { implicit connection =>
      SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
    }
    Ok(Json.toJson(data))
  }
}
And it'd produce a result that might look like the following:
[
  {
    id: 2324,
    campaign_mode_id: 13,
    name: "ABC"
  },
  {
    id: 1324,
    campaign_mode_id: 23,
    name: "ABCD"
  }
]
Now what if there were an additional date field in the campaign table like started_on that referred to when the campaign was started? Or another field called num_followers that was an integer referring to the number of followers?
And suppose that I wanted to do some calculations after running the DB query and before returning my JSON. For example, I want to include a latest_campaign_date that references the started_on date of the newest campaign. Or say that I wanted to include an average_num_followers that referred to the average number of followers for all campaigns.
My final result would look like:
{
  latest_campaign_date: 12 Dec 2018,
  average_num_followers: 123,
  campaigns: [
    {
      id: 2324,
      campaign_mode_id: 13,
      name: "ABC"
    },
    {
      id: 1324,
      campaign_mode_id: 23,
      name: "ABCD"
    }
  ]
}
I know that for the examples I've given it's better to do those calculations in my DB query and not in my application code. But what if I had to do something really complicated and wanted to do it in my application code for some reason? How should I modify my ResultSetParser to facilitate this?
Here are a couple of approaches that I've tried:
Do not use ResultSetParser and instead do everything manually
case class CampaignData(newestCampaignDate: Long, newestCampaignId: Long, averageNumFollowers: Float, campaigns: Seq[Campaign])

def aggregater(rows: Seq[Row]): CampaignData = {
  val newestCampaignDate: Long = getNewestDate(rows)
  val newestCampaignId: Long = getNewestCampaign(rows)
  val averageNumFollowers: Float = getAverageNumFollowers(rows)
  val campaigns: Seq[Campaign] = rows.map(row => {
    val rowMap: Map[String, Any] = row.asMap
    Campaign(
      rowMap("campaign.id").asInstanceOf[Int],
      rowMap("campaign.campaign_mode_id") match { case None => 0 case Some(value) => value.asInstanceOf[Int] },
      rowMap("campaign.name") match { case None => "" case Some(value) => value.asInstanceOf[String] }
    )
  })
  CampaignData(newestCampaignDate, newestCampaignId, averageNumFollowers, campaigns)
}

def index = Action {
  val data: Seq[Row] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name, started_on, num_followers FROM campaign")
  }
  Ok(Json.toJson(aggregater(data)))
}
This approach smells bad because having to deal with every field using asInstanceOf and match is very tedious and honestly feels unsafe. Intuitively, I also feel that Anorm should have something better for this, since I'm probably not the first person who has run into this problem.
Use ResultSetParser in combination with another function
case class Campaign(id: Int, campaign_mode_id: Int, name: String)
case class CampaignData(newestCampaignDate: Long, newestCampaignId: Long, averageNumFollowers: Float, campaigns: Seq[Campaign])

val campaign = {
  get[Int]("campaign.id") ~
  get[Int]("campaign.campaign_mode_id") ~
  get[Int]("campaign.num_followers") ~
  get[Long]("campaign.started_on") ~
  get[String]("campaign.name") map {
    case id ~ campaign_mode_id ~ num_followers ~ started_on ~ name => Map(
      "id" -> id,
      "campaign_mode_id" -> campaign_mode_id,
      "num_followers" -> num_followers,
      "started_on" -> started_on,
      "name" -> name
    )
  }
}

def index = Action {
  val data: Map[String, Any] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name, started_on, num_followers FROM campaign").as(campaign.*)
  }
  Ok(Json.toJson(aggregator(data)))
}

def aggregator(data: Map[String, Any]): CampaignData = {
  val newestCampaignDate: Long = getNewestDate(data)
  val newestCampaignId: Long = getNewestCampaign(data)
  val averageNumFollowers: Float = getAverageNumFollowers(data)
  val campaigns: Seq[Campaign] = getCampaigns(data)
  CampaignData(newestCampaignDate, newestCampaignId, averageNumFollowers, campaigns)
}
This approach is better in the sense that I don't have to deal with asInstanceOf, but then there is a bigger problem of having to deal with the intermediate Map. And it makes all the getter functions (e.g. getCampaigns) much more complicated. I feel that Anorm must offer something better out of the box that I'm not aware of.
As you showed in your first code snippet, the following code
def index = Action {
  val data: List[Campaign] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
  }
  Ok(Json.toJson(data))
}
returns a typesafe List of Campaign thanks to Anorm extractors.
Typically, you will pre-process the result with a typesafe function like so
case class CampaignAggregateData(campaigns: List[Campaign], averageNumFollowers: Int, newestCampaignId: Option[Long])

def aggregate(campaigns: List[Campaign]): CampaignAggregateData

def index = Action {
  val manyCampaign: List[Campaign] = db.withConnection { implicit connection =>
    SQL("SELECT id, campaign_mode_id, name FROM campaign").as(campaign.*)
  }
  val aggregatedData: CampaignAggregateData = aggregate(manyCampaign)
  Ok(Json.toJson(aggregatedData))
}
In cases where you would need aggregation to be executed by the database engine, you would typically have multiple db.withConnection statements inside a single action
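A minimal sketch of what aggregate could look like, assuming Campaign is extended with the started_on and num_followers fields mentioned in the question:

def aggregate(campaigns: List[Campaign]): CampaignAggregateData = {
  // assumes Campaign(id, campaign_mode_id, name, started_on, num_followers)
  val averageNumFollowers =
    if (campaigns.isEmpty) 0 else campaigns.map(_.num_followers).sum / campaigns.size
  val newestCampaignId =
    campaigns.sortBy(_.started_on).lastOption.map(_.id.toLong)
  CampaignAggregateData(campaigns, averageNumFollowers, newestCampaignId)
}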

DSL in Scala using case classes

My use case has case classes something like
case class Address(name: String, pincode: String) {
  override def toString = name + "=" + pincode
}

case class Department(name: String) {
  override def toString = name
}

case class emp(address: Address, department: Department)
I want to create a DSL like the one below. Can anyone share links about how to create a DSL, and any suggestions for achieving this?
emp.withAddress("abc","12222").withDepartment("HR")
Update:
In the actual use case, the case class may have close to 20 fields. I want to avoid code redundancy.
I created a DSL using reflection so that we don't need to add every field to it.
Disclaimer: This DSL is extremely weakly typed and I did it just for fun. I don't really think this is a good approach in Scala.
scala> create an Employee where "homeAddress" is Address("a", "b") and "department" is Department("c") and that_s it
res0: Employee = Employee(a=b,null,c)
scala> create an Employee where "workAddress" is Address("w", "x") and "homeAddress" is Address("y", "z") and that_s it
res1: Employee = Employee(y=z,w=x,null)
scala> create a Customer where "address" is Address("a", "b") and "age" is 900 and that_s it
res0: Customer = Customer(a=b,900)
The last example is the equivalent of writing:
create.a(Customer).where("address").is(Address("a", "b")).and("age").is(900).and(that_s).it
A way of writing DSLs in Scala while avoiding parentheses and dots is to follow this pattern:
object.method(parameter).method(parameter)...
Here is the source:
// DSL
object create {
  def an(t: Employee.type) = new ModelDSL(Employee(null, null, null))
  def a(t: Customer.type) = new ModelDSL(Customer(null, 0))
}

object that_s

class ModelDSL[T](model: T) {
  def where(field: String): ValueDSL[ModelDSL2[T], Any] = new ValueDSL(value => {
    val f = model.getClass.getDeclaredField(field)
    f.setAccessible(true)
    f.set(model, value)
    new ModelDSL2[T](model)
  })
  def and(t: that_s.type) = new { def it = model }
}

class ModelDSL2[T](model: T) {
  def and(field: String) = new ModelDSL(model).where(field)
  def and(t: that_s.type) = new { def it = model }
}

class ValueDSL[T, V](callback: V => T) {
  def is(value: V): T = callback(value)
}

// Models
case class Employee(homeAddress: Address, workAddress: Address, department: Department)
case class Customer(address: Address, age: Int)

case class Address(name: String, pincode: String) {
  override def toString = name + "=" + pincode
}

case class Department(name: String) {
  override def toString = name
}
I really don't think you need the builder pattern in Scala. Just give your case class reasonable defaults and use the copy method.
e.g.:
employee.copy(
  address = Address("abc", "12222"),
  department = Department("HR"))
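For example, a sketch assuming the question's emp is given defaults:

case class emp(address: Address = Address("", ""), department: Department = Department(""))

val employee   = emp()  // starts from the defaults
val hrEmployee = employee.copy(
  address = Address("abc", "12222"),
  department = Department("HR"))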
You could also use an immutable builder:
case class EmployeeBuilder(address: Address = Address("", ""), department: Department = Department("")) {
  def build = emp(address, department)
  def withAddress(address: Address) = copy(address = address)
  def withDepartment(department: Department) = copy(department = department)
}

object EmployeeBuilder {
  def withAddress(address: Address) = EmployeeBuilder().copy(address = address)
  def withDepartment(department: Department) = EmployeeBuilder().copy(department = department)
}
You could do
object emp {
  def builder = new Builder(None, None)

  case class Builder(address: Option[Address], department: Option[Department]) {
    def withDepartment(name: String) = {
      val dept = Department(name)
      this.copy(department = Some(dept))
    }

    def withAddress(name: String, pincode: String) = {
      val addr = Address(name, pincode)
      this.copy(address = Some(addr))
    }

    def build = (address, department) match {
      case (Some(a), Some(d)) => new emp(a, d)
      case (None, _) => throw new IllegalStateException("Address not provided")
      case _ => throw new IllegalStateException("Department not provided")
    }
  }
}

and use it as emp.builder.withAddress("abc","12222").withDepartment("HR").build.
You don't need optional fields, copy, or the builder pattern (exactly), if you are willing to have the build always take the arguments in a particular order:
case class emp(address: Address, department: Department, id: Long)

object emp {
  def withAddress(name: String, pincode: String): WithDepartment =
    new WithDepartment(Address(name, pincode))

  final class WithDepartment(private val address: Address) extends AnyVal {
    def withDepartment(name: String): WithId =
      new WithId(address, Department(name))
  }

  final class WithId(address: Address, department: Department) {
    def withId(id: Long): emp = emp(address, department, id)
  }
}
emp.withAddress("abc","12222").withDepartment("HR").withId(1)
The idea here is that each emp parameter gets its own class which provides a method to get you to the next class, until the final one gives you an emp object. It's like currying but at the type level. As you can see I've added an extra parameter just as an example of how to extend the pattern past the first two parameters.
The nice thing about this approach is that, even if you're part-way through the build, the type you have so far will guide you to the next step. So if you have a WithDepartment so far, you know that the next argument you need to supply is a department name.
If you want to avoid modifying the original classes, you can use an implicit class, e.g.
implicit class EmpExtensions(emp: emp) {
  def withAddress(name: String, pincode: String) = {
    // code omitted
  }
  // code omitted
}
then import EmpExtensions wherever you need these methods
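Hypothetical usage, assuming the implicit class is in scope and that withAddress and withDepartment are both implemented to return updated copies of emp rather than Unit:

val base    = emp(Address("", ""), Department(""))
val updated = base.withAddress("abc", "12222").withDepartment("HR")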

How to join two tables and map the result to a case class in Slick

I am working on a project with the Scala Play 2 framework, where I am using Slick as the FRM and a Postgres database.
In my project, customer is an entity, so I created a customer table along with a customer case class and companion object. Another entity is account, so I created an account table along with an account case class and object as well. The code is given below:
case class Customer(id: Option[Int],
                    status: String,
                    balance: Double,
                    payable: Double,
                    created: Option[Instant],
                    updated: Option[Instant]) extends GenericEntity {

  def this(status: String,
           balance: Double,
           payable: Double) = this(None, status, balance, payable, None, None)
}

class CustomerTable(tag: Tag) extends GenericTable[Customer](tag, "customer") {
  override def id = column[Option[Int]]("id")
  def status = column[String]("status")
  def balance = column[Double]("balance")
  def payable = column[Double]("payable")
  def account = foreignKey("fk_customer_account", id, Accounts.table)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)

  def * = (id, status, balance, payable, created, updated) <> ((Customer.apply _).tupled, Customer.unapply)
}

object Customers extends GenericService[Customer, CustomerTable] {
  override val table = TableQuery[CustomerTable]
  val accountTable = TableQuery[AccountTable]

  override def copyEntityFields(entity: Customer, id: Option[Int],
                                created: Option[Instant], updated: Option[Instant]): Customer = {
    entity.copy(id = id, created = created, updated = updated)
  }
}
Now I want to join the customer table and the account table and map the result to a case class named CustomerWithAccount.
I have tried the following code:
case class CustomerDetail(id: Option[Int],
                          status: String,
                          name: String)

object Customers extends GenericService[Customer, CustomerTable] {
  override val table = TableQuery[CustomerTable]
  val accountTable = TableQuery[AccountTable]

  def getAllCustomersWithAccount = db.run(table.join(accountTable).on(_.id === _.id).map { row =>
    //for (row1 <- row) {
    for {
      id <- row._1.id
      status <- row._1.status.toString()
      name <- row._2.name.toString()
    } yield CustomerDetail(id = id, status = status, name = name)
    //}
  }.result)

  override def copyEntityFields(entity: Customer, id: Option[Int], created: Option[Instant], updated: Option[Instant]): Customer = {
    entity.copy(id = id, created = created, updated = updated)
  }
}
But this did not work.
Please help me.
You can try this query
db.run((table.join(accountTable).on(_.id === _.id)
  .map {
    case (t, a) => (t.id, t.status, a.name) <> (CustomerDetail.tupled, CustomerDetail.unapply _)
  }).result)
You can do this with 3 case classes, 1 per table and then 1 for the joined result.
db.run(customerTable.join(accountTable).on(_.id === _.id)
  .map {
    case (c, a) => CustomerWithAccount(status = c.status, created = c.created, account = a, ...)
  }.result)
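An equivalent sketch that keeps the query projection as a tuple and maps to the case class on the Scala side (assuming an implicit ExecutionContext is in scope):

def getAllCustomerDetails: Future[Seq[CustomerDetail]] = {
  // project only the columns CustomerDetail needs
  val q = table.join(accountTable).on(_.id === _.id).map {
    case (c, a) => (c.id, c.status, a.name)
  }
  db.run(q.result).map(_.map((CustomerDetail.apply _).tupled))
}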

Dynamic orderBy with Squeryl

I cannot figure out how to change orderBy dynamically at runtime. I need something like:
def samplesSorted(fields: List[String]) = {
  from(Schema.samples)(s => select(s) orderBy (fields.map(getterByName)))
}
Or something like:
def samplesSorted(fields: List[String]) = {
  val q = from(Schema.samples)(s => select(s))
  fields.foreach(field => q.addOrderBy(getterByName(field)))
  q
}
I am now trying to write a helper function to manipulate the AST, but that does not seem like the right solution.
I did not notice there is a version of orderBy that accepts a list of ExpressionNodes. I was able to solve it like this:
def samplesSorted(fields: List[String]) = {
  from(Schema.samples)(s => select(s) orderBy (fields.map(buildOrderBy(s))))
}

def buildOrderBy(row: Row)(field: String): ExpressionNode = {
  getterByName(row, field)
}

def getterByName(row: Row, field: String): String = field match {
  case "Name" => row.name
  case "Address" => row.address
}
I have not tried this with fields of different types yet; implicits may not work in that case, but I could always call them explicitly.
Update:
To do the same with descending order, one could use a helper like this one:
def desc(node: ExpressionNode): ExpressionNode = new OrderByArg(node) { desc }
This works for me:
def ord(dr: DataRow, name: String): ExpressionNode = if (orderAscending) {
  dr.getterByName(name) asc
} else {
  dr.getterByName(name) desc
}

case class DataRow(id: Long,
                   @Column("resource_id") resourceId: String) {

  def getterByName(name: String) = {
    name match {
      case "resource_id" => resourceId.~
      case _ => id.~
    }
  }
}

from(DataSchema.dataRows) { dr =>
  where(dr.id === id).select(dr).orderBy(ord(dr, fieldName))
}.page(offset, limit)