I am using Scala with the Cassandra4io library, and I am trying to perform a SELECT ... IN query where the parameter of IN is like a tuple (comma-separated string values). It has not worked for me, and I have tried different approaches.
// keys: List[String]
val clientIdCommaSepValues = keys.mkString(",")
val selectValue = selectQuery(clientIdCommaSepValues)

private def selectQuery(clientids: String) =
  cql"select * from clientinformation WHERE (clientid IN ( ${clientids} ))".as[CassandraClientInfoRow]
This worked only when keys has length 1.
or
private val selectQuery =
  cqlt"select * from clientinformation WHERE (clientid IN ${Put[String]})".as[CassandraClientInfoRow]
I also tried putting ' ' quotes around the strings.
Sorry for the delay on this. It turns out that adding that extra set of parentheses around your value (IN (${clientIds}) in the example above) throws off the string interpolator, leading it to select the wrong Binder datatype, which is used to serialize the value in your query before it is sent off to Cassandra (ouch!). This selected TEXT instead of List[TEXT].
What you want to do instead is reformulate the query like so:
val keys: List[String] = ???
val selectValue = selectQuery(keys)

private def selectQuery(clientids: List[String]) =
  cql"select * from clientinformation WHERE clientid IN ${clientids}".as[CassandraClientInfoRow]
I was able to reproduce this on my end and confirm that dropping the parens fixes it. Here's what I did:
CREATE KEYSPACE example WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
CREATE TABLE IF NOT EXISTS test_data (
  id TEXT,
  data INT,
  PRIMARY KEY ((id))
);
package com.ringcentral.cassandra4io

import cats.effect._
import com.datastax.oss.driver.api.core.CqlSession
import com.ringcentral.cassandra4io.cql._
import fs2._

import java.net.InetSocketAddress
import scala.jdk.CollectionConverters._

object Investigation extends IOApp {
  final case class TestDataRow(id: String, data: Int)

  def insert(in: TestDataRow, session: CassandraSession[IO]): IO[Boolean] =
    cql"INSERT INTO test_data (id, data) VALUES (${in.id}, ${in.data})"
      .execute(session)

  override def run(args: List[String]): IO[ExitCode] = {
    val rSession = {
      val builder =
        CqlSession
          .builder()
          .addContactPoints(List(InetSocketAddress.createUnresolved("localhost", 9042)).asJava)
          .withLocalDatacenter("dc1")
          .withKeyspace("example")
      CassandraSession.connect[IO](builder)
    }

    rSession.use { session =>
      val insertData: Stream[IO, INothing] =
        Stream.eval(insert(TestDataRow("test", 1), session) *> insert(TestDataRow("test2", 2), session)).drain

      // Binding the List[String] directly (no parens) selects Binder[List[String]], i.e. List(TEXT)
      def query(ids: List[String]): Stream[IO, TestDataRow] =
        cql"SELECT id, data FROM test_data WHERE id IN $ids"
          .as[TestDataRow]
          .select(session)

      (insertData ++ query(List("test", "test2")))
        .evalTap(i => IO(println(i)))
        .compile
        .drain
        .as(ExitCode.Success)
    }
  }
}
This works great since it now selects the right Binder, which is List(TEXT), as you can see above! Sorry for the trouble you had and the cryptic error messages, but thank you for using this library :D
I am trying to follow the documentation and create a Table Function to "flatten" some data. The Table Function works fine when using joinLateral to do the flattening, but when using leftOuterJoinLateral I get the following error. I'm using Scala and have tried both the Table API and SQL with the same result:
Caused by: java.lang.NullPointerException: Null result cannot be stored in a Case Class.
Here is my job:
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.scala.StreamTableEnvironment
import org.apache.flink.table.api.scala._
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.functions.TableFunction

object example_job {

  // Split the List[Int] into multiple rows
  class Split() extends TableFunction[Int] {
    def eval(nums: List[Int]): Unit =
      nums.foreach { x =>
        if (x != 3) collect(x)
      }
  }

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.createLocalEnvironment()
    val tableEnv = StreamTableEnvironment.create(env)
    val splitMe = new Split()

    // Create some dummy data
    val events: DataStream[(String, List[Int])] =
      env.fromElements(("simon", List(1, 2, 3)), ("jessica", List(3)))

    val table = tableEnv.fromDataStream(events, 'name, 'numbers)
      .leftOuterJoinLateral(splitMe('numbers) as 'number)
      .select('name, 'number)

    table.toAppendStream[(String, Int)].print()
    env.execute("Flink jira ticket example")
  }
}
When I change .leftOuterJoinLateral to .joinLateral I get the expected result:
(simon,1)
(simon,2)
When using the .leftOuterJoinLateral I would expect something like:
(simon,1)
(simon,2)
(simon,null) // or (simon, None)
(jessica,null) // or (jessica, None)
Seems like this might be a bug with the Scala API? I wanted to check here first before raising a ticket just in case I'm doing something stupid!
The problem is that, by default, Flink expects all fields of a row to be non-null. That's why the program fails when it sees the null result from the outer join operation. In order to accept null values, you either need to disable the null check via
val tableConfig = tableEnv.getConfig
tableConfig.setNullCheck(false)
Or you can specify a result type that tolerates null values, e.g. a custom POJO output type:
table.toAppendStream[MyOutput].print()
with
class MyOutput(var name: String, var number: Integer) {
  def this() {
    this(null, null)
  }

  override def toString: String = s"($name, $number)"
}
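For completeness, a minimal sketch of wiring that POJO into the job above (only the conversion target changes):

// The outer join's null results now land in the nullable Integer field of MyOutput.
val table = tableEnv.fromDataStream(events, 'name, 'numbers)
  .leftOuterJoinLateral(splitMe('numbers) as 'number)
  .select('name, 'number)
table.toAppendStream[MyOutput].print()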
I am using the Lagom (Scala) framework and I could not find any way to save a Scala case class object that has a complex type in Cassandra. How do I insert a Cassandra UDT in Lagom Scala? And can anyone explain how to use the BoundStatement.setUDTValue() method?
I have tried using com.datastax.driver.mapping.annotations.UDT, but it does not work for me. I have also tried the com.datastax.driver.core Session interface, but again it does not.
case class LeadProperties(
  name: String,
  label: String,
  description: String,
  groupName: String,
  fieldDataType: String,
  options: Seq[OptionalData]
)

object LeadProperties {
  implicit val format: Format[LeadProperties] = Json.format[LeadProperties]
}

@UDT(keyspace = "leadpropertieskeyspace", name = "optiontabletype")
case class OptionalData(label: String)

object OptionalData {
  implicit val format: Format[OptionalData] = Json.format[OptionalData]
}
My queries:
val optiontabletype = """
  |CREATE TYPE IF NOT EXISTS optiontabletype(
  |value text
  |);
""".stripMargin

val createLeadPropertiesTable: String = """
  |CREATE TABLE IF NOT EXISTS leadpropertiestable(
  |name text PRIMARY KEY,
  |label text,
  |description text,
  |groupname text,
  |fielddatatype text,
  |options list<frozen<optiontabletype>>
  |);
""".stripMargin
def createLeadProperties(obj: LeadProperties): Future[List[BoundStatement]] = {
  val bindCreateLeadProperties: BoundStatement = createLeadProperties.bind()
  bindCreateLeadProperties.setString("name", obj.name)
  bindCreateLeadProperties.setString("label", obj.label)
  bindCreateLeadProperties.setString("description", obj.description)
  bindCreateLeadProperties.setString("groupname", obj.groupName)
  bindCreateLeadProperties.setString("fielddatatype", obj.fieldDataType)
  // Here is the problem: I am not finding any method for setting the Cassandra UDT.
  Future.successful(List(bindCreateLeadProperties))
}
override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] = {
  readSide.builder[PropertiesEvent]("PropertiesOffset")
    .setGlobalPrepare(() => PropertiesRepository.createTable)
    .setPrepare(_ => PropertiesRepository.prepareStatements)
    .setEventHandler[PropertiesCreated](ese ⇒
      PropertiesRepository.createLeadProperties(ese.event.obj))
    .build()
}
I faced the same issue and solved it in the following way:
Define type and table:
def createTable(): Future[Done] = {
  session.executeCreateTable("CREATE TYPE IF NOT EXISTS optiontabletype(field1 text, field2 text)")
    .flatMap(_ => session.executeCreateTable(
      "CREATE TABLE IF NOT EXISTS leadpropertiestable ( " +
        "id TEXT, options list<frozen <optiontabletype>>, PRIMARY KEY (id))"
    ))
}
Call this method in buildHandler() like this:
override def buildHandler(): ReadSideProcessor.ReadSideHandler[PropertiesEvent] =
  readSide.builder[PropertiesEvent]("PropertiesOffset")
    .setPrepare(_ => prepare())
    .setGlobalPrepare(() => createTable())
    .setEventHandler[PropertiesCreated](processPropertiesCreated)
    .build()
Then in processPropertiesCreated() I used it like:
private val writePromise = Promise[PreparedStatement] // completed in prepare()
private def writeF: Future[PreparedStatement] = writePromise.future

private def processPropertiesCreated(eventElement: EventStreamElement[PropertiesCreated]): Future[List[BoundStatement]] =
  writeF.map { ps =>
    // Pull the UDT definition of the "options" column out of the prepared statement
    val userType = ps.getVariables.getType("options").getTypeArguments.get(0).asInstanceOf[UserType]
    val newValue = userType.newValue().setString("field1", "1").setString("field2", "2")
    val bindWriteTitle = ps.bind()
    bindWriteTitle.setString("id", eventElement.event.id)
    bindWriteTitle.setList("options", eventElement.event.keys.map(_ => newValue).toList.asJava) // TODO: needs a real conversion, this is only a stub
    List(bindWriteTitle)
  }
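To replace the stub marked TODO above, the conversion could look roughly like this (a sketch; toUdt is a hypothetical helper, and it assumes the event carries its options as instances of the OptiontableType case class used in the read path below):

// Hypothetical helper: builds one UDTValue per option from the UserType
// extracted above; the field names must match the CREATE TYPE statement.
private def toUdt(userType: UserType)(o: OptiontableType): UDTValue =
  userType.newValue()
    .setString("field1", o.field1)
    .setString("field2", o.field2)

// Then, instead of the stub:
// bindWriteTitle.setList("options", eventElement.event.options.map(toUdt(userType)).asJava)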
And read it like this:
def toFacility(r: Row): LeadPropertiesTable = {
  LeadPropertiesTable(
    id = r.getString(fId),
    options = r.getList("options", classOf[UDTValue]).asScala
      .map(udt => OptiontableType(field1 = udt.getString("field1"), field2 = udt.getString("field2")))
  )
}
My prepare() function:
private def prepare(): Future[Done] = {
  val f = session.prepare("INSERT INTO leadpropertiestable (id, options) VALUES (?, ?)")
  writePromise.completeWith(f)
  f.map(_ => Done)
}
This is not very well written code, but I think it will help you move forward.
I am looking for a way to generate an UPDATE query over multiple columns that are only known at runtime.
For instance, given a List[(String, Int)], how would I go about generating a query of the form UPDATE <table> SET k1=v1, k2=v2, ..., kn=vn for all key/value pairs in the list?
I have found that, given a single key/value pair, a plain SQL query can be built as sqlu"UPDATE <table> SET #$key=$value" (where the key comes from a trusted source, to avoid injection), but I've been unsuccessful in generalizing this to a list of updates without running a separate query for each pair.
Is this possible?
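For reference, here is a minimal sketch of one possible approach (not from the answer below), assuming Slick 3.0-3.4's plain-SQL SQLActionBuilder(queryParts, unitPConv) constructor; updateColumns is a hypothetical helper, and the table and column names must come from a trusted source:

import slick.dbio.DBIO
import slick.jdbc.{PositionedParameters, SQLActionBuilder, SetParameter}

// The SET clause is spliced as text (trusted keys only); the values travel
// as bound parameters, set positionally in the same order as the clause.
def updateColumns(table: String, pairs: List[(String, Int)]): DBIO[Int] = {
  val setClause = pairs.map { case (k, _) => s"$k = ?" }.mkString(", ")
  val setValues = new SetParameter[Unit] {
    def apply(u: Unit, pp: PositionedParameters): Unit =
      pairs.foreach { case (_, v) => pp.setInt(v) }
  }
  SQLActionBuilder(Seq(s"UPDATE $table SET $setClause"), setValues).asUpdate
}

// e.g. db.run(updateColumns("TABLE_B", List("ib" -> 3)))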
This is one way to do it. I create a table definition T here, with the table and column names (TableDesc) as implicit arguments. I would have thought that it should be possible to set them explicitly, but I couldn't find a way. For the example I create two table query instances, aTable and bTable, then insert and select some values, and at the end I update a value in bTable.
import slick.driver.H2Driver.api._

import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.util.{Failure, Success}

val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", "sa", "", null, "org.h2.Driver")

case class TableDesc(tableName: String, stringColumnName: String, intColumnName: String)

class T(tag: Tag)(implicit tableDesc: TableDesc) extends Table[(String, Int)](tag, tableDesc.tableName) {
  def stringColumn = column[String](tableDesc.stringColumnName)
  def intColumn = column[Int](tableDesc.intColumnName)
  def * = (stringColumn, intColumn)
}
val aTable = {
  implicit val tableDesc = TableDesc("TABLE_A", "sa", "ia")
  TableQuery[T]
}

val bTable = {
  implicit val tableDesc = TableDesc("TABLE_B", "sb", "ib")
  TableQuery[T]
}
val future = for {
  _ <- db.run(aTable.schema.create)
  _ <- db.run(aTable += ("Hi", 1))
  resultA <- db.run(aTable.result)
  _ <- db.run(bTable.schema.create)
  _ <- db.run(bTable ++= Seq(("Test1", 1), ("Test2", 2)))
  _ <- db.run(bTable.filter(_.stringColumn === "Test1").map(_.intColumn).update(3))
  resultB <- db.run(bTable.result)
} yield (resultA, resultB)

Await.result(future, Duration.Inf)

future.onComplete {
  case Success(a) => println(s"OK $a")
  case Failure(f) => println(s"DOH $f")
}

Thread.sleep(500)
I've got the sleep statement at the end to make sure that the Future.onComplete callback gets time to finish before the application ends. Is there any other way?
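One way to avoid the sleep (a sketch): make the logging part of the future chain itself and await that combined future once:

// The println callbacks now run before the await completes, so no sleep is needed.
val logged = future
  .map(a => println(s"OK $a"))
  .recover { case f => println(s"DOH $f") }
Await.result(logged, Duration.Inf)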
How can I create queries for a PostgreSQL view using Slick 3? I didn't find an answer in the Slick documentation. The question relates to another question of mine; I got the right answer there, but I don't know how to implement it using Slick.
There is only rudimentary support for views in Slick 3; it doesn't guarantee full compile-time safety or compositionality. The latter especially matters, considering that most views strongly depend on data in other tables.
You can describe a view as a Table with separate schema manipulation statements, which you must use instead of the standard table schema extension methods like create and drop. Here is an example for your registries-and-rows case, provided the REGISTRY and ROWS tables are already present in the database:
case class RegRn(id: Int, name: String, count: Long)

trait View {
  val viewName = "REG_RN"
  val registryTableName = "REGISTRY"
  val rowsTableName = "ROWS"

  val profile: JdbcProfile
  import profile.api._

  class RegRns(tag: Tag) extends Table[RegRn](tag, viewName) {
    def id = column[Int]("REGISTRY_ID")
    def name = column[String]("NAME", O.SqlType("VARCHAR"))
    def count = column[Long]("CT")
    override def * = (id, name, count) <> (RegRn.tupled, RegRn.unapply)
    ...
  }

  val regRns = TableQuery[RegRns]

  val createViewSchema = sqlu"""CREATE VIEW #$viewName AS
    SELECT R.*, COALESCE(N.ct, 0) AS CT
    FROM #$registryTableName R
    LEFT JOIN (
      SELECT REGISTRY_ID, count(*) AS CT
      FROM #$rowsTableName
      GROUP BY REGISTRY_ID
    ) N ON R.REGISTRY_ID = N.REGISTRY_ID"""

  val dropViewSchema = sqlu"DROP VIEW #$viewName"

  ...
}
You can now create the view with db.run(createViewSchema), drop it with db.run(dropViewSchema), and of course call MTable.getTables("REG_RN") and find, as expected, that its tableType is "VIEW". Queries are the same as for any other table, e.g. db.run(regRns.result.head). You can even insert values into a view as you would for a normal Slick table, if the rules allow it (not in your case, due to COALESCE and the subquery).
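For instance, a couple of ordinary queries against the view (a sketch, assuming a db: Database is in scope):

// The view behaves like any other table for filtering, sorting and projecting:
db.run(regRns.filter(_.count > 0L).map(r => (r.name, r.count)).result)
db.run(regRns.sortBy(_.name.asc).result.headOption)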
As I mentioned, everything becomes a mess when you want to compose existing Tables to create a view. You will always have to keep their names and definitions in sync, as it is currently not possible to write anything that would at least guarantee that the shape of the view conforms to the combined shape of the underlying tables. Well, there is no way apart from ugly ones like this:
trait View {
  val profile: JdbcProfile
  import profile.api._

  val registryTableName = "REGISTRY"
  val registryId = "REGISTRY_ID"
  val registryName = "NAME"

  class Registries(tag: Tag) extends Table[Registry](tag, registryTableName) {
    def id = column[Int](registryId)
    def name = column[String](registryName, O.SqlType("VARCHAR"))
    override def * = (id, name) <> (Registry.tupled, Registry.unapply)
    ...
  }

  val rowsTableName = "ROWS"
  val rowsId = "ROW_ID"
  val rowsRow = "ROW"

  class Rows(tag: Tag) extends Table[Row](tag, rowsTableName) {
    def id = column[String](rowsId, O.SqlType("VARCHAR"))
    def rid = column[Int](registryId)
    def r = column[String](rowsRow, O.SqlType("VARCHAR"))
    override def * = (id, rid, r) <> (Row.tupled, Row.unapply)
    ...
  }

  val viewName = "REG_RN"

  class RegRns(tag: Tag) extends Table[RegRn](tag, viewName) {
    def id = column[Int]("REGISTRY_ID")
    def name = column[String]("NAME", O.SqlType("VARCHAR"))
    def count = column[Long]("CT")
    override def * = (id, name, count) <> (RegRn.tupled, RegRn.unapply)
    ...
  }

  val registries = TableQuery[Registries]
  val rows = TableQuery[Rows]
  val regRns = TableQuery[RegRns]

  val createViewSchema = sqlu"""CREATE VIEW #$viewName AS
    SELECT R.*, COALESCE(N.ct, 0) AS CT
    FROM #$registryTableName R
    LEFT JOIN (
      SELECT #$registryId, count(*) AS CT
      FROM #$rowsTableName
      GROUP BY #$registryId
    ) N ON R.#$registryId = N.#$registryId"""

  val dropViewSchema = sqlu"DROP VIEW #$viewName"

  ...
}
What about appending the query text after the view preamble?

val yourAwesomeQryComposition: TableQuery[RegRns] = ???
val qryText = yourAwesomeQryComposition.map(reg => (reg.id, ...)).result.statements.head
val createViewSchema = sqlu"""CREATE VIEW #$viewName AS #$qryText"""
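Concretely, reusing the registries table from the previous answer, that could look like this (a sketch; the projection is purely illustrative):

// Any composed query can donate its generated SQL to the view body:
val composed = registries.map(reg => (reg.id, reg.name))
val qryText = composed.result.statements.head
val createViewSchema = sqlu"""CREATE VIEW #$viewName AS #$qryText"""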
I have methods in my Play app that query database tables with over a hundred columns. I can't define a case class for each such query, because it would be ridiculously big and would have to be changed with every alter of the table on the database side.
I'm using this approach, where the result of the query looks like this:
Map(columnName1 -> columnVal1, columnName2 -> columnVal2, ...)
Example of the code:
implicit val getListStringResult = GetResult[List[Any]](
  r => (1 to r.numColumns).map(_ => r.nextObject).toList
)

def getSomething(): Map[String, Any] = DB.withSession {
  val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "myTable").head.getColumns.list.map(_.column)
  sql"""SELECT * FROM myTable LIMIT 1""".as[List[Any]].firstOption.map(values => columns.zip(values).toMap).get
}
This is not a problem as long as the query only runs on a single database and a single table. But I need to be able to use multiple tables and databases in my query, like this:
def getSomething(): Map[String, Any] = DB.withSession {
  // The line below is no longer valid because of the multiple tables/databases
  val columns = MTable.getTables(None, None, None, None).list.filter(_.name.name == "table1").head.getColumns.list.map(_.column)
  sql"""
    SELECT *
    FROM db1.table1
    LEFT JOIN db2.table2 ON db2.table2.col1 = db1.table1.col1
    LIMIT 1
  """.as[List[Any]].firstOption.map(values => columns.zip(values).toMap).get
}
The same approach can no longer be used to retrieve the column names. This problem doesn't exist with something like PHP's PDO or Java's JdbcTemplate, which retrieve column names without any extra effort.
My question is: how do I achieve this with Slick?
import scala.slick.jdbc.{GetResult, PositionedResult}

object ResultMap extends GetResult[Map[String, Any]] {
  def apply(pr: PositionedResult) = {
    val rs = pr.rs // <- JDBC result set
    val md = rs.getMetaData
    val res = (1 to pr.numColumns).map { i => md.getColumnName(i) -> rs.getObject(i) }.toMap
    pr.nextRow // <- use Slick's advance method to avoid an endless loop
    res
  }
}

val result = sql"select * from ...".as(ResultMap).firstOption
Another variant, which produces a map containing only the non-null columns (keys in lowercase):
private implicit val getMap = GetResult[Map[String, Any]](r => {
  val metadata = r.rs.getMetaData
  (1 to r.numColumns).flatMap { i =>
    val columnName = metadata.getColumnName(i).toLowerCase
    val columnValue = r.nextObjectOption
    columnValue.map(columnName -> _)
  }.toMap
})
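With that implicit in scope, usage mirrors the first variant (a sketch, assuming the same session-based style as the question):

// Null columns are simply absent keys in the resulting map.
val row: Option[Map[String, Any]] = sql"SELECT * FROM myTable LIMIT 1".as[Map[String, Any]].firstOption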