I have the following case class:
case class SubBlock(
id: Option[Int] = None,
blockId: Int,
name: String,
location: Option[Point] = None,
geometry: Option[Geometry] = None,
)
In Postgres I have a table SubBlock containing:
id: int,
block_id: int,
name: text,
geom_location: geography,
sub_block_geom: geography
And I defined a function to return the SubBlock nearest to a specified point:
override def getNearestSubBlock(point: Point): Future[SubBlock] = {
  val query =
    sql"""SELECT sub_block_id, block_id, name, ST_AsText(geom_location), sub_block_geom
          FROM now.sub_block
          ORDER BY ST_Distance(geom_location, ST_MakePoint(${point.getX()}, ${point.getY()})::geography)
          LIMIT 1""".as[SubBlock].head
  db.run(query)
}
implicit val getSubBlock = GetResult(r => SubBlock(r.nextIntOption(), r.nextInt(), r.nextString(),
  Option(Location.location2Point(Location.fromWKT(r.nextString()))),
  Option(new WKTReader().read(r.nextString()))))
And my request returns the right result, but after that I get `Exception in thread "main" java.lang.NullPointerException` because sub_block_geom is null in my database. So I think the solution is to change the implicit val getSubBlock, or to write the query with filter, sortBy, …, and I don't know how to do that.
Well... I am not too sure about your problem, as a lot of required details are missing. But from what I can see, you just need to properly handle the possibility of null in your getSubBlock.
implicit val getSubBlock = GetResult(r => {
  val id = r.nextIntOption()
  val blockId = r.nextInt()
  val name = r.nextString()
  val location: Option[Point] = r.nextStringOption().map(s => Location.location2Point(Location.fromWKT(s)))
  val geometry: Option[Geometry] = r.nextStringOption().map(s => new WKTReader().read(s))
  SubBlock(id, blockId, name, location, geometry)
})
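A side note, based on an assumption on my part about the rest of the setup: WKTReader parses WKT text, so sub_block_geom likely needs to be returned via ST_AsText as well, like geom_location; otherwise the reader would receive the raw geography encoding rather than WKT. A sketch of the adjusted query:
// Sketch: return both geography columns as WKT text so the string-based
// parsing in getSubBlock can handle them (and yield None when they are NULL).
val query =
  sql"""SELECT sub_block_id, block_id, name, ST_AsText(geom_location), ST_AsText(sub_block_geom)
        FROM now.sub_block
        ORDER BY ST_Distance(geom_location, ST_MakePoint(${point.getX()}, ${point.getY()})::geography)
        LIMIT 1""".as[SubBlock].head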
Related
How can I convert Query[MappedProjection[Example, (Option[String], Int, UUID, UUID)], Example, Seq] to Query[Examples, Example, Seq]?
Details
I am trying to drop a column from an existing table (Examples in this case) and move the data to another table (Examples2 in this case). I don't want to change all the existing code base, so I plan to join these two tables and map the results to Example.
import slick.lifted.Tag
import slick.driver.PostgresDriver.api._
import java.util.UUID
case class Example(
field1: Option[String] = None,
field2: Int,
someForeignId: UUID,
id: UUID,
)
object Example
class Examples(tag: Tag) extends Table[Example](tag, "entityNotes") {
def field1 = column[Option[String]]("field1")
def field2 = column[Int]("field2")
def someForeignId = column[UUID]("someForeignId")
def id = column[UUID]("id", O.PrimaryKey)
def someForeignKey = foreignKey(
"someForeignIdToExamples2",
someForeignId,
Examples2.query,
)(
_.id.?
)
def * =
(
field1,
field2,
someForeignId,
id,
) <> ((Example.apply _).tupled, Example.unapply)
}
object Examples {
val query = TableQuery[Examples]
}
Basically, all the functions in the codebase call Examples.query. If I update that query by joining two tables, the problem will be solved (of course with a performance shortcoming because of one extra join for each call).
To use the query with the existing code base, we need to keep the type the same. For example, we can use filter as follows:
val query_ = TableQuery[Examples]
val query: Query[Examples, Example, Seq] = query_.filter(_.field2 > 5)
Everything will work without a problem since we keep the type of the query as it is supposed to be.
However, I cannot do that with a join if I want to use data from the second table.
val query_ = TableQuery[Examples]
val query = query_
  .join(Examples2.query)
  .on(_.someForeignId === _.id)
  .map({
    case (e, e2) =>
      ((
        e2.value.?,
        e.field2,
        e2.id,
        e.id,
      ) <> ((Example.apply _).tupled, Example.unapply))
  })
This is where I got stuck. Its type is Query[MappedProjection[Example, (Option[String], Int, UUID, UUID)], Example, Seq].
Can anyone help? Btw, we don't have to use map. This is just what I got so far.
I'm new to Scala, and I'm trying to pass a map, i.e. Map[String, Any]("from_type" -> "Admin", "from_id" -> 1), to my service for dynamic filtering. I'm trying to avoid writing my code like this: filter(_.fromType === val && _.fromId === val2)
When trying this example: Slick dynamically filter by a list of columns and values
I get a type mismatch: Required: Function1[K, NotInfered T], Found: Rep[Boolean]
Service code:
val query = TableQuery[UserTable]
def all(perPage: Int, page: Int, listFilters: Map[String, Any]): Future[ResultPagination[User]] = {
val baseQuery = for {
items <- query.filter( listFilters ).take(perPage).drop(page).result // <----I want to filter here
total <- query.length.result
} yield ResultPagination[User](items, total)
db.run(baseQuery)
}
Table code:
def fromId: Rep[Int] = column[Int]("from_id")
def fromType: Rep[String] = column[String]("from_type")
def columnToRep(column: String): Rep[_] = {
  column match {
    case "from_type" => this.fromType
    case "from_id" => this.fromId
  }
}
Well, I would not recommend using the Map[String, Any] construction: by using Any you are losing type safety. For instance, you could pass Map("fromId" -> "1") to the function by mistake, and the compiler won't help you identify the issue.
I guess what you want is to pass some kind of structure representing the possible filter variations, and Query.filterOpt can help you in this case. You can take a look at usage examples at: https://scala-slick.org/doc/3.3.2/queries.html#sorting-and-filtering
Please see the code example below:
// Your domain filter structure. None values will be ignored
// So `UserFilter()` - will match all.
case class UserFilter(fromId: Option[Int] = None, fromString: Option[String] = None)
def all(perPage: Int, page: Int, filter: UserFilter): Future[ResultPagination[User]] = {
  val baseQuery = for {
    items <- query
      .filterOpt(filter.fromId)(_.fromId === _)
      .filterOpt(filter.fromString)(_.fromType === _)
      .take(perPage)
      .drop(page)
      .result
    total <- query.length.result
  } yield ResultPagination[User](items, total)
  db.run(baseQuery)
}
And this will be type safe.
Hope this helps!
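For completeness, a hypothetical call site for the service above:
// UserFilter() would match all rows; here only rows with from_id = 1 are kept.
all(perPage = 10, page = 0, UserFilter(fromId = Some(1)))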
The scenario is similar to the question at How to better parse the same table twice with Anorm? however the described solutions on that question can no longer be used.
In the scenario where a Message has 2 users, I need to parse the from_user and to_user with SQL joins.
case class User(id: Long, name: String)
case class Message(id: Long, body: String, to: User, from: User)
def userParser(alias: String): RowParser[User] = {
get[Long](alias + "_id") ~ get[String](alias + "_name") map {
case id~name => User(id, name)
}
}
val parser: RowParser[Message] = {
userParser("from_user") ~
userParser("to_user") ~
get[Long]("messages.id") ~
get[String]("messages.name") map {
case from~to~id~body => Message(id, body, to, from)
}
}
// More aliases possible here?
val aliaser: ColumnAliaser = ColumnAliaser.withPattern((0 to 2).toSet, "from_user.")
SQL"""
SELECT from_user.* , to_user.*, message.* FROM MESSAGE
JOIN USER from_user on from_user.id = message_from_user_id
JOIN USER to_user on to_user.id = message.to_user
"""
.asTry(parser, aliaser)
If I'm right in thinking that you want to apply multiple ColumnAliaser instances with different aliasing policies to the same query, it's important to understand that ColumnAliaser is "just" a specific implementation of Function[(Int, ColumnName), Option[String]], so it can be defined and composed like any Function; the factory functions in its companion object merely simplify this.
import anorm.{ ColumnAliaser, ColumnName }
val aliaser = new ColumnAliaser {
def as1 = ColumnAliaser.withPattern((0 to 2).toSet, "from_user.")
def as2 = ColumnAliaser.withPattern((2 to 4).toSet, "to_user.")
def apply(column: (Int, ColumnName)): Option[String] =
as1(column).orElse(as2(column))
}
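The combined aliaser is then passed to .asTry exactly as in the question. A sketch of the call site (it assumes an implicit java.sql.Connection is in scope, as Anorm requires):
import java.sql.Connection

// Hypothetical wrapper: runs the query from the question with the composed aliaser,
// parsing each row with the Message parser defined above.
def findMessages(implicit c: Connection) = SQL"""
  SELECT from_user.*, to_user.*, message.* FROM MESSAGE
  JOIN USER from_user ON from_user.id = message.from_user_id
  JOIN USER to_user ON to_user.id = message.to_user
""".asTry(parser.*, aliaser)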
Every time I try to create a new table in Cassandra with a new TableDef, I end up with a clustering order of ascending, while I'm trying to get descending.
I'm using Cassandra 2.1.10, Spark 1.5.1, and the DataStax Spark Cassandra Connector 1.5.0-M2.
I'm creating a new TableDef
val table = TableDef("so", "example",
Seq(ColumnDef("parkey", PartitionKeyColumn, TextType)),
Seq(ColumnDef("ts", ClusteringColumn(0), TimestampType)),
Seq(ColumnDef("name", RegularColumn, TextType)))
rdd.saveAsCassandraTableEx(table, SomeColumns("key", "time", "name"))
What I'm expecting to see in Cassandra is
CREATE TABLE so.example (
parkey text,
ts timestamp,
name text,
PRIMARY KEY ((parkey), ts)
) WITH CLUSTERING ORDER BY (ts DESC);
What I end up with is
CREATE TABLE so.example (
parkey text,
ts timestamp,
name text,
PRIMARY KEY ((parkey), ts)
) WITH CLUSTERING ORDER BY (ts ASC);
How can I force it to set the clustering order to descending?
I was not able to find a direct way of doing this. Additionally, there are a lot of other options you may want to specify. I ended up extending ColumnDef and TableDef and overriding the cql method in TableDef. An example of the solution I came up with is below. If someone has a better way, or if this becomes natively supported, I'd be happy to change the answer.
// Scala Enum
object ClusteringOrder {
abstract sealed class Order(val ordinal: Int) extends Ordered[Order]
with Serializable {
def compare(that: Order) = that.ordinal compare this.ordinal
def toInt: Int = this.ordinal
}
case object Ascending extends Order(0)
case object Descending extends Order(1)
def fromInt(i: Int): Order = values.find(_.ordinal == i).get
val values = Set(Ascending, Descending)
}
// extend the ColumnDef case class to add enum support
class ColumnDefEx(columnName: String, columnRole: ColumnRole, columnType: ColumnType[_],
indexed: Boolean = false, val clusteringOrder: ClusteringOrder.Order = ClusteringOrder.Ascending)
extends ColumnDef(columnName, columnRole, columnType, indexed)
// Mimic the ColumnDef object
object ColumnDefEx {
def apply(columnName: String, columnRole: ColumnRole, columnType: ColumnType[_],
indexed: Boolean, clusteringOrder: ClusteringOrder.Order): ColumnDef = {
new ColumnDefEx(columnName, columnRole, columnType, indexed, clusteringOrder)
}
def apply(columnName: String, columnRole: ColumnRole, columnType: ColumnType[_],
clusteringOrder: ClusteringOrder.Order = ClusteringOrder.Ascending): ColumnDef = {
new ColumnDefEx(columnName, columnRole, columnType, false, clusteringOrder)
}
// copied from ColumnDef object
def apply(column: ColumnMetadata, columnRole: ColumnRole): ColumnDef = {
val columnType = ColumnType.fromDriverType(column.getType)
new ColumnDefEx(column.getName, columnRole, columnType, column.getIndex != null)
}
}
// extend the TableDef case class to override the cql method
class TableDefEx(keyspaceName: String, tableName: String, partitionKey: Seq[ColumnDef],
clusteringColumns: Seq[ColumnDef], regularColumns: Seq[ColumnDef], options: String)
extends TableDef(keyspaceName, tableName, partitionKey, clusteringColumns, regularColumns) {
override def cql = {
val stmt = super.cql
val ordered = if (clusteringColumns.size > 0)
s"$stmt\r\nWITH CLUSTERING ORDER BY (${clusteringColumnOrder(clusteringColumns)})"
else stmt
appendOptions(ordered, options)
}
private[this] def clusteringColumnOrder(clusteringColumns: Seq[ColumnDef]): String =
clusteringColumns.map { col =>
col match {
case c: ColumnDefEx => if (c.clusteringOrder == ClusteringOrder.Descending)
s"${c.columnName} DESC" else s"${c.columnName} ASC"
case c: ColumnDef => s"${c.columnName} ASC"
}
}.toList.mkString(", ")
private[this] def appendOptions(stmt: String, opts: String) =
  if (opts.isEmpty) stmt
  else if (stmt.contains("WITH") && opts.startsWith("WITH")) s"$stmt\r\nAND ${opts.substring(4)}"
  else if (!stmt.contains("WITH") && opts.startsWith("AND")) s"$stmt\r\nWITH ${opts.substring(3)}"
  else s"$stmt\r\n$opts"
}
// Mimic the TableDef object but return new TableDefEx
object TableDefEx {
def apply(keyspaceName: String, tableName: String, partitionKey: Seq[ColumnDef],
clusteringColumns: Seq[ColumnDef], regularColumns: Seq[ColumnDef], options: String = "") =
new TableDefEx(keyspaceName, tableName, partitionKey, clusteringColumns, regularColumns,
options)
def fromType[T: ColumnMapper](keyspaceName: String, tableName: String): TableDef =
implicitly[ColumnMapper[T]].newTable(keyspaceName, tableName)
}
This allowed me to create new tables in this manner:
val table = TableDefEx("so", "example",
Seq(ColumnDef("parkey", PartitionKeyColumn, TextType)),
Seq(ColumnDefEx("ts", ClusteringColumn(0), TimestampType, ClusteringOrder.Descending)),
Seq(ColumnDef("name", RegularColumn, TextType)))
rdd.saveAsCassandraTableEx(table, SomeColumns("key", "time", "name"))
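With ClusteringOrder.Descending on the ts column, the generated statement should now produce the desired schema from the question:
CREATE TABLE so.example (
  parkey text,
  ts timestamp,
  name text,
  PRIMARY KEY ((parkey), ts)
) WITH CLUSTERING ORDER BY (ts DESC);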
From what I've read, there is a way to work with nested classes to solve the problem of tables with more than 22 fields. It looks like this (with a simple table):
case class UserRow(id:Int, address1:Address, address2:Address)
case class Address(street:String,city:String)
class User(tag:Tag) extends Table[UserRow](tag, "User"){
def id = column[Int]("id", O.PrimaryKey)
def street1 = column[String]("STREET1")
def city1 = column[String]("CITY1")
def street2 = column[String]("STREET2")
def city2 = column[String]("CITY2")
def * = (id, address1, address2) <> (UserRow.tupled, UserRow.unapply)
def address1 = (street1, city1) <> (Address.tupled, Address.unapply)
def address2 = (street2, city2) <> (Address.tupled, Address.unapply)
}
What I've realized is that plain SQL - which requires implicit values - doesn't work with this solution, or at least I haven't been able to make it work.
I thought I could define the implicit values in the same way as the nested classes, like this:
implicit val getAddressResult = GetResult(r => Address(r.<<, r.<<))
implicit val getUserResult = GetResult(r => UserRow(r.<<, r.<<, r.<<))
But it doesn't work. It compiles, but at runtime it says that the user table is not found.
I'm very new in Scala and Slick so I could have misunderstood some information or have some wrong concepts. What am I doing wrong?
UPDATE
This is what I'm doing in the test:
user.ddl.create
user += UserRow(0, Address("s11", "c11"), Address("s12", "c12"))
user += UserRow(1, Address("s21", "c21"), Address("s22", "c22"))
user += UserRow(2, Address("s31", "c31"), Address("s32", "c32"))
println(user.list)
val sqlPlain = sql"SELECT * FROM user".as[UserRow]
println(sqlPlain)
println(sqlPlain.list)
All of it works until the last statement, where I get the error "Table "USER" not found". Also, the exact same test works perfectly for a non-nested case class.
UPDATE 2
As cvogt correctly indicated to me, I was misunderstanding the reported error, and it wasn't related to the implicit GetResult values. His answer is correct, as is my first approach.
Pass the PositionedResult r to the corresponding GetResult objects:
implicit val getAddressResult = GetResult(r => Address(r.<<, r.<<))
implicit val getUserResult =
GetResult(r => UserRow(r.<<, getAddressResult(r), getAddressResult(r)))
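As the asker's UPDATE 2 notes, the original implicit-based variant is equivalent, because r.<< resolves the implicit GetResult for the expected field type:
// Equivalent: with getAddressResult implicit in scope, r.<< materializes the nested Address values.
implicit val getUserResult = GetResult(r => UserRow(r.<<, r.<<, r.<<))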