Scala slick sort by column string name - scala

I'm trying to implement a method like this in Slick:

    def paged(page: Long, sorting: String, order: String) = {
      infos.sortBy(_.???).drop((page - 1) * INFOS_PER_PAGE).take(INFOS_PER_PAGE).result
    }

But I don't know what to put in sortBy, or how to turn a String into a column.
I tried a simple function like this:
    class PaymentInfoTable(tag: Tag) extends Table[PaymentInfo](tag, "payment_info") {
      def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
      def date = column[DateTime]("date")
      def paymentType = column[String]("payment_type")
      def category = column[String]("category")
      def subCategory = column[String]("sub_category")
      def value = column[BigDecimal]("value")
      def comment = column[Option[String]]("comment")
      def outsidePaypal = column[Boolean]("outside_paypal")
      def outsidePayer = column[Option[String]]("outside_payer")

      def sorting(col: String) = {
        if (col == "id") {
          id
        } else if (col == "date") {
          date
        } else if (col == "paymentType") {
          paymentType
        } else if (col == "category") {
          category
        } else if (col == "subCategory") {
          subCategory
        } else if (col == "value") {
          value
        } else if (col == "comment") {
          comment
        } else if (col == "outsidePaypal") {
          outsidePaypal
        } else if (col == "outsidePayer") {
          outsidePayer
        } else {
          id
        }
      }
    }
but then I can't do

    sortBy(_.sorting(sorting).asc)

It fails with:

    value asc is not a member of slick.lifted.Rep[_1]

And I have imported:

    import slick.jdbc.MySQLProfile.api._

How can I sort by a string column name?

Basically, what you can do is add a map from the string name to the corresponding column of the table. (The reason your if/else version fails is that the branches return columns of different element types, so the compiler infers the common type Rep[_], which has no asc/desc; you need to wrap the column in a ColumnOrdered instead.)
    class ColorsTable(tag: Tag) extends Table[Color](tag, "color") with DynamicSortBySupport.ColumnSelector {
      def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
      def name = column[String]("name")
      def * = (id, name) <> ((Color.apply _).tupled, Color.unapply)

      val select = Map(
        "id" -> (this.id),
        "name" -> (this.name)
      )
    }
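The DynamicSortBySupport.ColumnSelector trait referenced above isn't shown in the answer; at a minimum it has to expose the select map used below. A rough sketch of what it would need to look like (an assumption, not the original code):

    import slick.lifted.Rep

    object DynamicSortBySupport {
      trait ColumnSelector {
        // maps an incoming string column name to the actual table column
        val select: Map[String, Rep[_]]
      }
    }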
Then you only need to access that map:

    case ((sortColumn, sortOrder), queryToSort) =>
      val sortOrderRep: Rep[_] => Ordered = ColumnOrdered(_, Ordering(sortOrder))
      val sortColumnRep: A => Rep[_] = _.select(sortColumn)
      queryToSort.sortBy(sortColumnRep)(sortOrderRep)
You can find more info in this similar, but not identical, question: Dynamic order by in scala slick with several columns.
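For context, the case clause above is meant to be the body of a fold over a sequence of (columnName, direction) pairs. A minimal sketch of how it could be wired up, assuming Slick's ColumnOrdered/Ordering and a colors = TableQuery[ColorsTable] (the wrapper name and parameter types are assumptions, not part of the original answer):

    import slick.ast.Ordering
    import slick.lifted.{ColumnOrdered, Ordered}

    val colors = TableQuery[ColorsTable]

    // Applies each (columnName, direction) pair as a sort key, first pair being the most significant.
    def dynamicSortBy(sortParameters: Seq[(String, Ordering.Direction)]): Query[ColorsTable, Color, Seq] =
      sortParameters.foldRight(colors: Query[ColorsTable, Color, Seq]) {
        case ((sortColumn, sortOrder), queryToSort) =>
          val sortOrderRep: Rep[_] => Ordered = ColumnOrdered(_, Ordering(sortOrder))
          val sortColumnRep: ColorsTable => Rep[_] = _.select(sortColumn)
          queryToSort.sortBy(sortColumnRep)(sortOrderRep)
      }

    // e.g. dynamicSortBy(Seq("name" -> Ordering.Asc, "id" -> Ordering.Desc))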
If you want something simpler, just wrap your column in a ColumnOrdered:

    if (col == "id") {
      ColumnOrdered(id, Ordering(sortOrder))
    }
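Applied to the PaymentInfoTable from the question, a minimal sketch of paged could then look like this (it assumes the slick.jdbc.MySQLProfile.api._ import from the question; INFOS_PER_PAGE, the column subset, and the order-string handling are assumptions):

    import slick.ast.Ordering
    import slick.lifted.ColumnOrdered

    val infos = TableQuery[PaymentInfoTable]
    val INFOS_PER_PAGE = 20L

    def paged(page: Long, sorting: String, order: String) = {
      val direction = if (order.equalsIgnoreCase("desc")) Ordering.Desc else Ordering.Asc
      infos
        .sortBy { t =>
          val col: Rep[_] = sorting match {
            case "date"        => t.date
            case "paymentType" => t.paymentType
            case "category"    => t.category
            case "value"       => t.value
            case _             => t.id // fall back to the primary key
          }
          ColumnOrdered(col, Ordering(direction)) // asc/desc decided at runtime
        }
        .drop((page - 1) * INFOS_PER_PAGE)
        .take(INFOS_PER_PAGE)
        .result
    }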

Related

how to insert user defined type in cassandra by using lagom scala framework

I am using the Lagom (Scala) framework and I couldn't find any way to save a Scala case class object in Cassandra when it has a complex type. How do I insert a Cassandra UDT in Lagom Scala, and can anyone explain how to use the BoundStatement.setUDTValue() method?
I have tried using com.datastax.driver.mapping.annotations.UDT, but it does not work for me. I have also tried the com.datastax.driver.core Session interface, but again it does not work.
    case class LeadProperties(
      name: String,
      label: String,
      description: String,
      groupName: String,
      fieldDataType: String,
      options: Seq[OptionalData]
    )

    object LeadProperties {
      implicit val format: Format[LeadProperties] = Json.format[LeadProperties]
    }

    @UDT(keyspace = "leadpropertieskeyspace", name = "optiontabletype")
    case class OptionalData(label: String)

    object OptionalData {
      implicit val format: Format[OptionalData] = Json.format[OptionalData]
    }
My query:
    val optiontabletype = """
      |CREATE TYPE IF NOT EXISTS optiontabletype(
      |value text
      |);
    """.stripMargin

    val createLeadPropertiesTable: String = """
      |CREATE TABLE IF NOT EXISTS leadpropertiestable(
      |name text Primary Key,
      |label text,
      |description text,
      |groupname text,
      |fielddatatype text,
      |options List<frozen<optiontabletype>>
      |);
    """.stripMargin
    def createLeadProperties(obj: LeadProperties): Future[List[BoundStatement]] = {
      val bindCreateLeadProperties: BoundStatement = createLeadProperties.bind()
      bindCreateLeadProperties.setString("name", obj.name)
      bindCreateLeadProperties.setString("label", obj.label)
      bindCreateLeadProperties.setString("description", obj.description)
      bindCreateLeadProperties.setString("groupname", obj.groupName)
      bindCreateLeadProperties.setString("fielddatatype", obj.fieldDataType)
      // Here is the problem: I am not finding any method for setting the Cassandra UDT column.
      Future.successful(List(bindCreateLeadProperties))
    }
    override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] = {
      readSide.builder[PropertiesEvent]("PropertiesOffset")
        .setGlobalPrepare(() => PropertiesRepository.createTable)
        .setPrepare(_ => PropertiesRepository.prepareStatements)
        .setEventHandler[PropertiesCreated](ese ⇒
          PropertiesRepository.createLeadProperties(ese.event.obj))
        .build()
    }
I faced the same issue and solved it the following way.
Define the type and table:
    def createTable(): Future[Done] = {
      session.executeCreateTable("CREATE TYPE IF NOT EXISTS optiontabletype(field1 text, field2 text)")
        .flatMap(_ => session.executeCreateTable(
          "CREATE TABLE IF NOT EXISTS leadpropertiestable ( " +
            "id TEXT, options list<frozen <optiontabletype>>, PRIMARY KEY (id))"
        ))
    }
Call this method in buildHandler() like this:
    override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] =
      readSide.builder[PropertiesEvent]("PropertiesOffset")
        .setPrepare(_ => prepare())
        .setGlobalPrepare(() => createTable())
        .setEventHandler[PropertiesCreated](processPropertiesCreated)
        .build()
Then in processPropertiesCreated() I used it like this:

    private val writePromise = Promise[PreparedStatement] // initialized in prepare
    private def writeF: Future[PreparedStatement] = writePromise.future

    private def processPropertiesCreated(eventElement: EventStreamElement[PropertiesCreated]): Future[List[BoundStatement]] = {
      writeF.map { ps =>
        val userType = ps.getVariables.getType("options").getTypeArguments.get(0).asInstanceOf[UserType]
        val newValue = userType.newValue().setString("field1", "1").setString("field2", "2")
        val bindWriteTitle = ps.bind()
        bindWriteTitle.setString("id", eventElement.event.id)
        bindWriteTitle.setList("options", eventElement.event.keys.map(_ => newValue).toList.asJava) // todo: convert the real values, this is only a stub
        List(bindWriteTitle)
      }
    }
And read it like this:

    def toFacility(r: Row): LeadPropertiesTable = {
      LeadPropertiesTable(
        id = r.getString(fId),
        options = r.getList("options", classOf[UDTValue]).asScala
          .map(udt => OptiontableType(field1 = udt.getString("field1"), field2 = udt.getString("field2")))
      )
    }
My prepare() function:

    private def prepare(): Future[Done] = {
      val f = session.prepare("INSERT INTO leadpropertiestable (id, options) VALUES (?, ?)")
      writePromise.completeWith(f)
      f.map(_ => Done)
    }
This is not very well written code, but I think it will help you get the work going.
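For reference, the snippets above also need the DataStax driver types and the Scala/Java collection converters in scope, roughly like this (the package names are from the 3.x Java driver that Lagom used at the time, so treat the exact imports as an assumption for your driver version):

    import akka.Done
    import com.datastax.driver.core.{BoundStatement, PreparedStatement, Row, UDTValue, UserType}
    import scala.collection.JavaConverters._ // for .asJava / .asScala
    import scala.concurrent.{Future, Promise}

And if the UDT is a single column rather than a list, BoundStatement also has a setUDTValue method, e.g. bindWriteTitle.setUDTValue("option", newValue) (the column name here is just an example).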

scala slick or and query

My question may sound very banal, but I still haven't resolved it.
I have the Products table implemented like this:
    class ProductsTable(tag: Tag) extends Table[Product](tag, "PRODUCTS") {
      def id = column[Int]("PRODUCT_ID", O.PrimaryKey, O.AutoInc)
      def title = column[String]("NAME")
      def description = column[String]("DESCRIPTION")
      def style = column[String]("STYLE")
      def price = column[Int]("PRICE")
      def category_id = column[Int]("CATEGORY_ID")
      def size_id = column[Int]("SIZE_ID")
      def brand_id = column[Int]("BRAND_ID")

      def * = (id.?, title, description, style, price, category_id, size_id, brand_id) <> (Product.tupled, Product.unapply _)
    }
and its table query:

    val Products = TableQuery[ProductsTable]

How can I implement a query equivalent to this SQL:

    select * from products
    where (category_id = 1 or category_id = 2 or category_id = 3)
      and (price between min and max)
Try something like this:

    val query = Products filter { p => (p.category_id inSet List(1, 2, 3)) && p.price > min && p.price < max }
    val result = db.run(query.result)

You can use println(query.result.statements) to see what the generated query looks like.
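If you want something closer to the SQL BETWEEN, Slick columns also have a between operator; note that BETWEEN is inclusive, unlike the strict < / > above. A small sketch with the same min and max values:

    val betweenQuery = Products filter { p =>
      (p.category_id inSet List(1, 2, 3)) && (p.price between (min, max))
    }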
EDIT:
Answering the additional question: you can make a function for your query that accepts optional min and max values:
    def getProductsQuery(maybeMin: Option[Int] = None, maybeMax: Option[Int] = None) = {
      val initialQuery = Products filter { p => p.category_id inSet List(1, 2, 3) }

      val queryWithMin = maybeMin match {
        case Some(min) => initialQuery filter { _.price > min }
        case None      => initialQuery
      }

      val queryWithMax = maybeMax match {
        case Some(max) => queryWithMin filter { _.price < max }
        case None      => queryWithMin
      }

      queryWithMax
    }
And then you could do any of these:
    val q1 = getProductsQuery()                                            // without min or max
    val q2 = getProductsQuery(maybeMin = Option(3))                        // only min
    val q3 = getProductsQuery(maybeMax = Option(10))                       // only max
    val q4 = getProductsQuery(maybeMin = Option(3), maybeMax = Option(10)) // both
and run any of these as needed...
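On Slick 3.3 or later the same thing can be written more compactly with filterOpt, which only applies the predicate when the Option is defined. A sketch equivalent to getProductsQuery above:

    def getProductsQuery(maybeMin: Option[Int] = None, maybeMax: Option[Int] = None) =
      Products
        .filter(_.category_id inSet List(1, 2, 3))
        .filterOpt(maybeMin)((p, min) => p.price > min)
        .filterOpt(maybeMax)((p, max) => p.price < max)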

Scala: JavaFx Tableview does not display data

I have tried to load some data into my TableView, but it doesn't work: every time I start my application the TableView is empty, yet it doesn't even show the placeholder message saying that it is empty.
My code:
    @FXML var anchorpane: AnchorPane = _
    // Example for the Product entity
    @FXML var tableview: TableView[Product] = _
    @FXML var c1: TableColumn[Product, Integer] = _
    @FXML var c2: TableColumn[Product, String] = _
    @FXML var c3: TableColumn[Product, Double] = _

    // create data for inserting into an ObservableList
    val data: ObservableList[Product] = FXCollections.observableArrayList(
      new Product(55, "hallo", 33.0),
      new Product(10, "hello", 20.3)
    )

    override def initialize(location: URL, resources: ResourceBundle): Unit = {
      // set the cell value factories for the columns
      c1.setCellValueFactory(new PropertyValueFactory[Product, Integer]("id"))
      c2.setCellValueFactory(new PropertyValueFactory[Product, String]("name"))
      c3.setCellValueFactory(new PropertyValueFactory[Product, Double]("price"))
      // insert the data from the ObservableList
      tableview.setItems(data)
    }
But my TableView remains empty.
My Product class:
    package fhj.swengb.homework.dbtool

    import java.sql.{ResultSet, Connection, Statement}
    import scala.collection.mutable.ListBuffer

    /**
     * Created by loete on 27.11.2015.
     */
    object Product extends Db.DbEntity[Product] {
      val dropTableSql = "drop table if exists product"
      val createTableSql = "create table product (id integer, name string, price double)"
      val insertSql = "insert into product (id, name, price) VALUES (?, ?, ?)"

      def reTable(stmt: Statement): Int = {
        stmt.executeUpdate(Product.dropTableSql)
        stmt.executeUpdate(Product.createTableSql)
      }

      def toDb(c: Connection)(p: Product): Int = {
        val pstmt = c.prepareStatement(insertSql)
        pstmt.setInt(1, p.id)
        pstmt.setString(2, p.name)
        pstmt.setDouble(3, p.price)
        pstmt.executeUpdate()
      }

      def fromDb(rs: ResultSet): List[Product] = {
        val lb: ListBuffer[Product] = new ListBuffer[Product]()
        while (rs.next()) lb.append(Product(rs.getInt("id"), rs.getString("name"), rs.getDouble("price")))
        lb.toList
      }

      def queryAll(con: Connection): ResultSet = query(con)("select * from product")
    }
    case class Product(id: Int, name: String, price: Double) extends Db.DbEntity[Product] {
      def reTable(stmt: Statement): Int = 0
      def toDb(c: Connection)(t: Product): Int = 0
      def fromDb(rs: ResultSet): List[Product] = List()
      def dropTableSql: String = "drop table if exists product"
      def createTableSql: String = "create table product (id integer, name string, price double)"
      def insertSql: String = "insert into product (id, name, price) VALUES (?, ?, ?)"
    }
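One thing worth checking, as a guess (it is not confirmed anywhere in this question): PropertyValueFactory("id") resolves the cell value by reflection, expecting a JavaFX idProperty() method or a JavaBean-style getId() getter. A plain Scala class only generates an id() accessor, so the factory finds nothing and every cell stays blank. Annotating the constructor parameters with @BeanProperty generates the getters it expects; a hypothetical, simplified variant of Product:

    import scala.beans.BeanProperty

    // exposes getId/getName/getPrice for PropertyValueFactory (sketch, not the original class)
    case class Product(@BeanProperty id: Int,
                       @BeanProperty name: String,
                       @BeanProperty price: Double)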

Insert if not exists in Slick 3.0.0

I'm trying to do an insert-if-not-exists; I found this post for 1.0.1 and 2.0.
I found a snippet using transactionally in the docs for 3.0.0:
    val a = (for {
      ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
      _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
    } yield ()).transactionally

    val f: Future[Unit] = db.run(a)
I'm struggling to write the insert-if-not-exists logic with this structure. I'm new to Slick and have little experience with Scala. This is my attempt to do an insert-if-not-exists outside a transaction...
    val result: Future[Boolean] = db.run(products.filter(_.name === "foo").exists.result)
    result.map { exists =>
      if (!exists) {
        products += Product(
          None,
          productName,
          productPrice
        )
      }
    }
But how do I put this in the transactionally block? This is the furthest I can go:
    val a = (for {
      exists <- products.filter(_.name === "foo").exists.result
      // ???
      // _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
    } yield ()).transactionally
Thanks in advance
It is possible to use a single insert ... if not exists query. This avoids multiple database round trips and race conditions (a transaction may not be enough, depending on the isolation level).
    def insertIfNotExists(name: String) = users.forceInsertQuery {
      val exists = (for (u <- users if u.name === name.bind) yield u).exists
      val insert = (name.bind, None) <> (User.apply _ tupled, User.unapply)
      for (u <- Query(insert) if !exists) yield u
    }
    Await.result(db.run(DBIO.seq(
      // create the schema
      users.schema.create,

      users += User("Bob"),
      users += User("Bob"),

      insertIfNotExists("Bob"),
      insertIfNotExists("Fred"),
      insertIfNotExists("Fred"),

      // print the users (select * from USERS)
      users.result.map(println)
    )), Duration.Inf)
Output:

    Vector(User(Bob,Some(1)), User(Bob,Some(2)), User(Fred,Some(3)))

Generated SQL:

    insert into "USERS" ("NAME","ID") select ?, null where not exists(select x2."NAME", x2."ID" from "USERS" x2 where x2."NAME" = ?)
Here's the full example on github
This is the version I came up with:
    val a = (
      products.filter(_.name === "foo").exists.result.flatMap { exists =>
        if (!exists) {
          products += Product(
            None,
            productName,
            productPrice
          )
        } else {
          DBIO.successful(None) // no-op
        }
      }
    ).transactionally
It's a bit lacking though; for example, it would be useful to return the inserted or existing object.
For completeness, here is the table definition:
    case class DBProduct(id: Int, uuid: String, name: String, price: BigDecimal)

    class Products(tag: Tag) extends Table[DBProduct](tag, "product") {
      def id = column[Int]("id", O.PrimaryKey, O.AutoInc) // This is the primary key column
      def uuid = column[String]("uuid")
      def name = column[String]("name")
      def price = column[BigDecimal]("price", O.SqlType("decimal(10, 4)"))

      def * = (id, uuid, name, price) <> (DBProduct.tupled, DBProduct.unapply)
    }

    val products = TableQuery[Products]
I'm using a mapped table; the solution also works for tuples, with minor changes.
Note also that it's not necessary to define the id as optional; according to the documentation it's ignored in insert operations:
When you include an AutoInc column in an insert operation, it is silently ignored, so that the database can generate the proper value
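So the insert can simply pass a dummy id, for example (a tiny illustration using the products query above, with made-up values):

    // The 0 for the AutoInc id is silently ignored; the database generates the real value.
    val insertAction = products += DBProduct(0, "some-uuid", "some name", BigDecimal("9.99"))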
And here is the method:
    def insertIfNotExists(productInput: ProductInput): Future[DBProduct] = {
      val productAction = (
        products.filter(_.uuid === productInput.uuid).result.headOption.flatMap {
          case Some(product) =>
            mylog("product was there: " + product)
            DBIO.successful(product)
          case None =>
            mylog("inserting product")

            val productId =
              (products returning products.map(_.id)) += DBProduct(
                0,
                productInput.uuid,
                productInput.name,
                productInput.price
              )

            val product = productId.map { id =>
              DBProduct(
                id,
                productInput.uuid,
                productInput.name,
                productInput.price
              )
            }
            product
        }
      ).transactionally

      db.run(productAction)
    }
(Thanks to Matthew Pocock from the Google group thread for pointing me to this solution.)
I've run into a solution that looks more complete. Section 3.1.7, "More Control over Inserts", of the Essential Slick book has an example.
At the end you get something like:
    val entity = UserEntity(UUID.random, "jay", "jay@localhost")

    val exists =
      users
        .filter(u =>
          u.name === entity.name.bind
            && u.email === entity.email.bind
        )
        .exists

    val selectExpression = Query(
      (
        entity.id.bind,
        entity.name.bind,
        entity.email.bind
      )
    ).filterNot(_ => exists)

    val action = users
      .map(u => (u.id, u.name, u.email))
      .forceInsertQuery(selectExpression)

    exec(action)
    // res17: Int = 1

    exec(action)
    // res18: Int = 0
According to the Slick 3.0 manual's insert query section (http://slick.typesafe.com/doc/3.0.0/queries.html), the inserted value can be returned with its id as below:
    def insertIfNotExists(productInput: ProductInput): Future[DBProduct] = {
      val productAction = (
        products.filter(_.uuid === productInput.uuid).result.headOption.flatMap {
          case Some(product) =>
            mylog("product was there: " + product)
            DBIO.successful(product)
          case None =>
            mylog("inserting product")
            (products returning products.map(_.id)
              into ((prod, id) => prod.copy(id = id))) += DBProduct(
                0,
                productInput.uuid,
                productInput.name,
                productInput.price
              )
        }
      ).transactionally

      db.run(productAction)
    }
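Typical usage of either insertIfNotExists variant above would then look like this (ProductInput's field names are taken from how it is used above; the example values are made up):

    import scala.concurrent.ExecutionContext.Implicits.global

    val input = ProductInput(uuid = "3f2b0bd0-0000-0000-0000-000000000000", name = "Espresso", price = BigDecimal("2.50"))

    insertIfNotExists(input).foreach { product =>
      println(s"persisted product with id ${product.id}")
    }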

How to populate User defined objects from list of lists in scala

I am new to Scala.
I am currently trying to write a program which fetches table metadata from a database as a list of lists in the format below and converts it to objects of the user defined type TableFieldMetadata.
The getAllTableMetaDataAsVO function does this conversion.
Can you tell me if I can write this function in a better, more functional way?
    | Table Name    | FieldName    | Data type |
    | table xxxxxx  | field yyyyyy | type zzzz |
    | table xxxxxx  | field wwwww  | type mmm  |
    | table qqqqqq  | field nnnnnn | type zzzz |
Note: here a table name can repeat, since a table usually has multiple columns.
User defined classes:
1. TableFieldMetadata:
    /**
     * Class to hold the meta data for a table
     */
    class TableFieldMetadata(name: String) {
      var tableName: String = name
      var fieldMetaDataList: List[FieldMetaData] = List()

      def add(fieldMetadata: FieldMetaData) {
        fieldMetaDataList = fieldMetadata :: fieldMetaDataList
      }
    }
2. FieldMetaData:

    /**
     * Class to hold the meta data for a field
     */
    class FieldMetaData(fieldName: String, fieldsDataType: String) {
      var name: String = fieldName
      var dataType: String = fieldsDataType
    }
Function:

    /**
     * Function to convert a list of lists to user defined objects
     */
    def getAllTableMetaDataAsVO(allTableMetaData: List[List[String]]): List[TableFieldMetadata] = {
      var currentTableName: String = null
      var currentTable: TableFieldMetadata = null
      var tableFieldMetadataList: List[TableFieldMetadata] = List()

      allTableMetaData.foreach { tableFieldMetadataItem =>
        val tableName = tableFieldMetadataItem.head
        if (currentTableName == null || !currentTableName.equals(tableName)) {
          currentTableName = tableName
          currentTable = new TableFieldMetadata(tableName)
          tableFieldMetadataList = currentTable :: tableFieldMetadataList
        }
        if (currentTableName.equals(tableName)) {
          val tableField = tableFieldMetadataItem.tail
          currentTable.add(new FieldMetaData(tableField(0), tableField(1)))
        }
      }
      tableFieldMetadataList
    }
Here is one solution. NOTE: I just used Table and Field for ease of typing into the REPL. You should use case classes (rather than the mutable classes above).
    scala> case class Field(name: String, dType: String)
    defined class Field

    scala> case class Table(name: String, fields: List[Field])
    defined class Table

    scala> val rawData = List(List("table1", "field1", "string"), List("table1", "field2", "int"), List("table2", "field1", "string"))
    rawData: List[List[String]] = List(List(table1, field1, string), List(table1, field2, int), List(table2, field1, string))

    scala> val grouped = rawData.groupBy(_.head)
    grouped: scala.collection.immutable.Map[String,List[List[String]]] = Map(table1 -> List(List(table1, field1, string), List(table1, field2, int)), table2 -> List(List(table2, field1, string)))

    scala> val result = grouped.map { case (k, v) => { val fields = for { r <- v } yield { Field(r(1), r(2)) }; Table(k, fields) } }
    result: scala.collection.immutable.Iterable[Table] = List(Table(table1,List(Field(field1,string), Field(field2,int))), Table(table2,List(Field(field1,string))))
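The same groupBy idea can also be mapped straight onto the TableFieldMetadata / FieldMetaData classes from the question, which gives a drop-in replacement for getAllTableMetaDataAsVO (a sketch; like the REPL version, it does not preserve the input ordering of tables):

    def getAllTableMetaDataAsVO(allTableMetaData: List[List[String]]): List[TableFieldMetadata] =
      allTableMetaData
        .groupBy(_.head)
        .map { case (tableName, rows) =>
          val table = new TableFieldMetadata(tableName)
          rows.foreach(row => table.add(new FieldMetaData(row(1), row(2))))
          table
        }
        .toList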