Creating table view using slick - scala

How can I create queries against a PostgreSQL view using Slick 3?
I couldn't find an answer in the Slick documentation.
This question relates to another question of mine: I got the right answer there, but I don't know how to implement it using Slick.

Slick 3 has only rudimentary support for views, which doesn't guarantee full compile-time safety or composability; the latter matters especially because most views depend heavily on data in other tables.
You can describe a view as a Table plus separate schema-manipulation statements, which you must use instead of the standard schema extension methods like create and drop. Here is an example for your registries-and-rows case, assuming the REGISTRY and ROWS tables already exist in the database:
case class RegRn(id: Int, name: String, count: Long)

trait View {
  val viewName = "REG_RN"
  val registryTableName = "REGISTRY"
  val rowsTableName = "ROWS"

  val profile: JdbcProfile
  import profile.api._

  class RegRns(tag: Tag) extends Table[RegRn](tag, viewName) {
    def id = column[Int]("REGISTRY_ID")
    def name = column[String]("NAME", O.SqlType("VARCHAR"))
    def count = column[Long]("CT")
    override def * = (id, name, count) <> (RegRn.tupled, RegRn.unapply)
    ...
  }
  val regRns = TableQuery[RegRns]

  val createViewSchema = sqlu"""CREATE VIEW #$viewName AS
    SELECT R.*, COALESCE(N.ct, 0) AS CT
    FROM #$registryTableName R
    LEFT JOIN (
      SELECT REGISTRY_ID, count(*) AS CT
      FROM #$rowsTableName
      GROUP BY REGISTRY_ID
    ) N ON R.REGISTRY_ID=N.REGISTRY_ID"""
  val dropViewSchema = sqlu"DROP VIEW #$viewName"
  ...
}
You can now create the view with db.run(createViewSchema), drop it with db.run(dropViewSchema), and call MTable.getTables("REG_RN") to confirm that its tableType is "VIEW". Queries work the same as for any other table, e.g. db.run(regRns.result.head). You can even insert values into a view as you would into a normal Slick table, if the database's updatable-view rules allow it (not your case, because of the COALESCE and the subquery).
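For completeness, a minimal sketch of wiring the trait to a concrete profile and running the statements; the object name and the "mydb" config key are placeholders, not from the original post:
// Hypothetical wiring: mix the View trait into an object with a concrete profile.
object PostgresView extends View {
  override val profile: JdbcProfile = slick.jdbc.PostgresProfile
}

import PostgresView._, PostgresView.profile.api._

val db = Database.forConfig("mydb") // placeholder config key
// Create the view, then query it like any other table.
db.run(createViewSchema andThen regRns.result.head)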
As mentioned, everything becomes a mess once you want to compose existing Tables to create a view. You will always have to keep their names and definitions in sync by hand, because there is currently no way to write anything that would guarantee, for example, that the shape of the view conforms to the combined shape of the underlying tables. Well, no way apart from ugly ones like this:
trait View {
  val profile: JdbcProfile
  import profile.api._

  val registryTableName = "REGISTRY"
  val registryId = "REGISTRY_ID"
  val registryName = "NAME"
  class Registries(tag: Tag) extends Table[Registry](tag, registryTableName) {
    def id = column[Int](registryId)
    def name = column[String](registryName, O.SqlType("VARCHAR"))
    override def * = (id, name) <> (Registry.tupled, Registry.unapply)
    ...
  }

  val rowsTableName = "ROWS"
  val rowsId = "ROW_ID"
  val rowsRow = "ROW"
  class Rows(tag: Tag) extends Table[Row](tag, rowsTableName) {
    def id = column[String](rowsId, O.SqlType("VARCHAR"))
    def rid = column[Int](registryId)
    def r = column[String](rowsRow, O.SqlType("VARCHAR"))
    override def * = (id, rid, r) <> (Row.tupled, Row.unapply)
    ...
  }

  val viewName = "REG_RN"
  class RegRns(tag: Tag) extends Table[RegRn](tag, viewName) {
    def id = column[Int]("REGISTRY_ID")
    def name = column[String]("NAME", O.SqlType("VARCHAR"))
    def count = column[Long]("CT")
    override def * = (id, name, count) <> (RegRn.tupled, RegRn.unapply)
    ...
  }

  val registries = TableQuery[Registries]
  val rows = TableQuery[Rows]
  val regRns = TableQuery[RegRns]

  val createViewSchema = sqlu"""CREATE VIEW #$viewName AS
    SELECT R.*, COALESCE(N.ct, 0) AS CT
    FROM #$registryTableName R
    LEFT JOIN (
      SELECT #$registryId, count(*) AS CT
      FROM #$rowsTableName
      GROUP BY #$registryId
    ) N ON R.#$registryId=N.#$registryId"""
  val dropViewSchema = sqlu"DROP VIEW #$viewName"
  ...
}

What about appending the query text after the view preamble? Compose the query in Slick, render its SQL, and splice it into the CREATE VIEW statement:
val yourAwesomeQryComposition: TableQuery[...] = ...
val qryText = yourAwesomeQryComposition.map(reg => (reg.id, ...)).result.statements.head
val createViewSchema = sqlu"""CREATE VIEW #$viewName AS #${qryText}"""
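For instance, a sketch of building the view body from the registries and rows queries defined above; the composed query itself is my assumption, not from the original answer, and the types may need adjusting:
// Sketch: derive the view's SQL from existing Slick queries, so the view
// definition cannot silently drift from the table definitions.
val composed = registries
  .joinLeft(rows.groupBy(_.rid).map { case (rid, rs) => (rid, rs.length) })
  .on(_.id === _._1)
  .map { case (reg, n) => (reg.id, reg.name, n.map(_._2).getOrElse(0)) }
val qryText = composed.result.statements.head
val createViewSchema = sqlu"""CREATE VIEW #$viewName AS #$qryText"""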

Related

Cassandra (CQL) select IN query with Cassandra4IO

I am using Scala with the Cassandra4io library. I am trying to perform a SELECT IN query. The parameter of IN is like a tuple (comma-separated string values), and it has not worked for me. I tried different approaches.
// keys (List[String])
val clientIdCommaSepValues = keys.mkString(",")
val selectValue = selectQuery(clientIdCommaSepValues)

private def selectQuery(clientids: String) =
  cql"select * from clientinformation WHERE (clientid IN ( ${clientids} ))".as[CassandraClientInfoRow]
This worked only when keys contained a single value.
Or:
private val selectQuery =
  cqlt"select * from clientinformation WHERE (clientid IN ${Put[String]})".as[CassandraClientInfoRow]
I also tried putting quotes ('') around the strings.
Sorry for the delay on this. It turns out that the extra set of parentheses around your value (IN (${clientids}) in the example above) throws off the string interpolator, leading it to select the wrong Binder datatype, which is used to serialize the value in your query before it is sent off to Cassandra (ouch!). It selected TEXT instead of List[TEXT].
What you want to do instead is reformulate the query like so:
val keys: List[String] = ???
val selectValue = selectQuery(keys)

private def selectQuery(clientids: List[String]) =
  cql"select * from clientinformation WHERE clientid IN ${clientids}".as[CassandraClientInfoRow]
I was able to reproduce this on my end and fix it by dropping the parens. Here's what I did:
CREATE KEYSPACE example WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

CREATE TABLE IF NOT EXISTS test_data (
  id TEXT,
  data INT,
  PRIMARY KEY ((id))
);
package com.ringcentral.cassandra4io

import cats.effect._
import com.datastax.oss.driver.api.core.CqlSession
import com.ringcentral.cassandra4io.cql._
import fs2._

import java.net.InetSocketAddress
import scala.jdk.CollectionConverters._

object Investigation extends IOApp {
  final case class TestDataRow(id: String, data: Int)

  def insert(in: TestDataRow, session: CassandraSession[IO]): IO[Boolean] =
    cql"INSERT INTO test_data (id, data) VALUES (${in.id}, ${in.data})"
      .execute(session)

  override def run(args: List[String]): IO[ExitCode] = {
    val rSession = {
      val builder =
        CqlSession
          .builder()
          .addContactPoints(List(InetSocketAddress.createUnresolved("localhost", 9042)).asJava)
          .withLocalDatacenter("dc1")
          .withKeyspace("example")
      CassandraSession.connect[IO](builder)
    }
    rSession.use { session =>
      val insertData: Stream[IO, INothing] =
        Stream.eval(insert(TestDataRow("test", 1), session) *> insert(TestDataRow("test2", 2), session)).drain

      def query(ids: List[String]): Stream[IO, TestDataRow] =
        cql"SELECT id, data FROM test_data WHERE id IN $ids"
          .as[TestDataRow]
          .select(session)

      (insertData ++ query(List("test", "test2")))
        .evalTap(i => IO(println(i)))
        .compile
        .drain
        .as(ExitCode.Success)
    }
  }
}
This works great, since it now selects the right Binder, which is List(TEXT), as you can see above. Sorry for the trouble and the cryptic error messages, but thank you for using this library :D

Updating table with enum

Trying to insert information into the DB that looks like this:
(UUID, EnumType)
with the following logic:
var t = TestTable.query.map(t=> (t.id, t.enumType)) ++= toAdd.map(idTest, enumTest)))
but the compiler throws an error for TestTable.query.map(t=> (t.id, t.enumType)), interpreting it as type Iterable[Nothing]. Am I missing something?
The test table looks like this:
object TestTable {
  val query = TableQuery[TestTable]
}

class TestTable(tag: slick.lifted.Tag) extends Table[TestTable](tag, "test_table") {
  val id = column[UUID]("id")
  val enumType = column[EnumType]("enumType")
  override val * = (id, testType) <> (
    (TestTable.apply _).tupled,
    TestTable.unapply
  )
}
Suppose you have the following data structure:
object Color extends Enumeration {
  val Blue = Value("Blue")
  val Red = Value("Red")
  val Green = Value("Green")
}

case class MyType(id: UUID, color: Color.Value)
Define the Slick schema as follows:
class TestTable(tag: slick.lifted.Tag) extends Table[MyType](tag, "test_table") {
  val id = column[UUID]("id")
  val color = column[Color.Value]("color")
  override val * = (id, color) <> ((MyType.apply _).tupled, MyType.unapply)
}

object TestTable {
  lazy val query = TableQuery[TestTable]
}
To map the enum to an SQL data type, Slick requires an implicit MappedColumnType:
implicit val colorTypeColumnMapper: JdbcType[Color.Value] =
  MappedColumnType.base[Color.Value, String](
    e => e.toString,
    s => Color.withName(s)
  )
Now you can insert values into the DB like this:
val singleInsertAction = TestTable.query += MyType(UUID.randomUUID(), Color.Blue)
val batchInsertAction = TestTable.query ++= Seq(
  MyType(UUID.randomUUID(), Color.Blue),
  MyType(UUID.randomUUID(), Color.Red),
  MyType(UUID.randomUUID(), Color.Green)
)
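These are just DBIO actions; nothing touches the database until you run them. A minimal sketch of executing them, assuming db is a configured Database instance (the "mydb" config key is a placeholder):
val db = Database.forConfig("mydb") // placeholder config key
// += yields the inserted row count, ++= an Option of the total count
val single: Future[Int] = db.run(singleInsertAction)
val batch: Future[Option[Int]] = db.run(batchInsertAction)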

Slick - Inserting a row into two tables linked with an auto-incrementing key?

I'm new to Slick and struggling to find a good canonical example for the following.
I'd like to insert a row into two tables. The first table has a primary key which auto-increments. The second table is related to the first via its primary key.
So I'd like to:
Start a transaction
Insert a row into table 1, which generates a key
Insert a row into table 2, with a foreign key generated in the previous step
End transaction (rollback steps 2 & 3 if either fail)
Would appreciate a canonical example for the above logic, and any related suggestions on my definitions below (I'm very new to Slick!). Thanks!
Insert logic for table 1
private def insertAndReturn(entry: Entry) =
  entries returning entries.map(_.id)
    into ((_, newId) => entry.copy(id = newId))

def insert(entry: Entry): Future[Entry] =
  db.run(insertAndReturn(entry) += entry)
(similar for table 2)
Table 1
class EntryTable(tag: Tag) extends Table[Entry](tag, "tblEntry") {
  def id = column[EntryId]("entryID", O.PrimaryKey, O.AutoInc)
  ...
  def * = (id, ...).shaped <> (Entry.tupled, Entry.unapply)
}
Table 2
class UsernameChangeTable(tag: Tag) extends Table[UserNameChange](tag, "tblUserNameChange") {
  def entryId = column[EntryId]("entryID")
  ...
  def entry = foreignKey("ENTRY_FK", entryId, entryDao.entries)(
    _.id, onUpdate = Restrict, onDelete = Cascade
  )
}
I'm using a MySQL database and Slick 3.1.0.
All you have to do is compose the two inserts and wrap them in a transaction:
val tx =
  (insertAndReturn(entry) += entry).flatMap { savedEntry =>
    insertUserNameChange(UserNameChange(savedEntry.id, ...))
  }.transactionally

db.run(tx)
Note that insertUserNameChange is the function that inserts the UserNameChange instance into the database; it needs the EntryId you get back from the previous insert action (here via savedEntry.id, since insertAndReturn is defined to return the saved Entry). Compose actions with flatMap and use transactionally to run the whole sequence in a single transaction.
Your Slick tables look fine.
Here is a canonical example implementing this functionality:
package models

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.driver.H2Driver.api._

case class Supplier1(id: Int, name: String)

class Suppliers1(tag: Tag) extends Table[Supplier1](tag, "SUPPLIERS") {
  def id: Rep[Int] = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
  def name: Rep[String] = column[String]("NAME")
  def * = (id, name) <> (Supplier1.tupled, Supplier1.unapply)
}

case class Coffee1(id: Int, name: String, suppId: Int)

class Coffees1(tag: Tag) extends Table[Coffee1](tag, "COFFEES") {
  def id: Rep[Int] = column[Int]("C_ID", O.PrimaryKey, O.AutoInc)
  def name: Rep[String] = column[String]("COFFEE_NAME")
  def suppId: Rep[Int] = column[Int]("SUP_ID")
  def * = (id, name, suppId) <> (Coffee1.tupled, Coffee1.unapply)
  def supplier = foreignKey("supp_fk", suppId, TableQuery[Suppliers1])(_.id)
}

object HelloSlick1 extends App {
  val db = Database.forConfig("h2mem1")
  val suppliers = TableQuery[Suppliers1]
  val coffees = TableQuery[Coffees1]

  val setUpF = (suppliers.schema ++ coffees.schema).create

  val insertSupplier = suppliers returning suppliers.map(_.id)
  //val tx = (insertSupplier += Supplier1(0, "SUPP 1")).flatMap(id => coffees += Coffee1(0, "COF", id)).transactionally
  val tx = (for {
    supId <- insertSupplier += Supplier1(0, "SUPP 1")
    cId   <- coffees += Coffee1(0, "COF", supId)
  } yield ()).transactionally // wrap the composed action in a transaction

  def exec[T](action: DBIO[T]): T =
    Await.result(db.run(action), Duration.Inf)

  exec(setUpF)
  exec(tx)
  exec(suppliers.result.map(println))
  exec(coffees.result.map(println))
}

Setting the table name dynamically in Slick 3.x

I have two tables that are identical, each in a different database. Also, the tables have different names.
I have hardcoded the name of the table in the class BankDB:
class BankDB(tag: Tag) extends Table[Bank](tag, "banks1") {
  def sk = column[Int]("sk", O.PrimaryKey)
  def name = column[String]("name")
  // other columns
}
What I need is, depending on the database name, to set the name of the table, like so:
val tableName = if (dbName == "DB1") "banks1" else "banks2"
And then use tableName in TableQuery to have Slick point to the correct table:
val db = Database.forConfig(dbName)
try {
  val banks = TableQuery[BankDB](tableName) // <== this doesn't work
  val future = db.run(banks.filter(_.sk === sk).result)
  val result = Await.result(future, Duration.Inf)
  result
} finally db.close
Is this possible, or do I need to define two classes, one for each database?
Parameterize the table class with the table name and build the TableQuery with an explicit constructor function:
class BankDB(tag: Tag, tableName: String) extends Table[Bank](tag, tableName) {
  def sk = column[Int]("sk", O.PrimaryKey)
  def name = column[String]("name")
  // other columns
}

def getTableQuery(tableName: String) =
  TableQuery[BankDB]((t: slick.lifted.Tag) => new BankDB(t, tableName))
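Usage then mirrors the question's original code, with getTableQuery supplying the table name:
// Pick the table name per database, then build the query against it.
val tableName = if (dbName == "DB1") "banks1" else "banks2"
val db = Database.forConfig(dbName)
try {
  val banks = getTableQuery(tableName) // now points at banks1 or banks2
  val future = db.run(banks.filter(_.sk === sk).result)
  Await.result(future, Duration.Inf)
} finally db.close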

Assert unique key constraint on h2 database with slick and scalatest

Imagine the following scenario: You have a book that consists of ordered chapters.
First the test:
"Chapters" should "have a unique order" in
{
// val exception = intercept
db.run(
DBIO.seq
(
Chapters.add(0, 0, "Chapter #0"),
Chapters.add(0, 0, "Chapter #1")
)
)
}
Now the implementation:
case class Chapter(id: Option[Long] = None, bookId: Long, order: Long, title: String) extends Model

class Chapters(tag: Tag) extends Table[Chapter](tag, "chapters") {
  def id = column[Option[Long]]("id", O.PrimaryKey, O.AutoInc)
  def bookId = column[Long]("book_id")
  def order = column[Long]("order")
  def title = column[String]("title")

  def * = (id, bookId, order, title) <> (Chapter.tupled, Chapter.unapply)

  def uniqueOrder = index("order_chapters", (bookId, order), unique = true)
  def bookFK = foreignKey("book_fk", bookId, Books.all)(_.id.get,
    onUpdate = ForeignKeyAction.Cascade, onDelete = ForeignKeyAction.Restrict)
}
Maybe such a unique-constraint on 2 columns isn't even possible in h2?
Anyway:
Expectation:
An exception to be thrown that I can then intercept/expect in my test, hence a failing test for now, for violating a unique-constraint.
Actual result:
A successful test :(
edit: Also, I use this:
implicit val defaultPatience =
PatienceConfig(timeout = Span(30, Seconds), interval = Span(100, Millis))
db.run returns a Future. You have to Await on it to get the result of the execution.
Try this:
import scala.concurrent.Await
import scala.concurrent.duration._

val future = db.run(...)
Await.result(future, 5.seconds)
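With the Await in place, the duplicate (bookId, order) insert surfaces the constraint violation as an exception the test can intercept. A minimal sketch, catching java.sql.SQLException (the supertype of H2's integrity-violation exception) rather than assuming the exact subclass:
"Chapters" should "have a unique order" in {
  val action = DBIO.seq(
    Chapters.add(0, 0, "Chapter #0"),
    Chapters.add(0, 0, "Chapter #1")
  )
  // the second add violates the unique index on (book_id, order)
  intercept[java.sql.SQLException] {
    Await.result(db.run(action), 5.seconds)
  }
}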