I am storing Scala case class data in a Cassandra table; for that, I need to define a user-defined type. I can write the CQL query but do not know how to parse it.
I have tried the com.datastax.driver.mapping.annotations.UDT annotation, but it does not work for me; I think I'm completely off track.
I have also tried the Session class from com.datastax.driver.core.
My conclusion is that I have no idea how to do this and I am just using trial and error.
case class Properties(name: String,
                      label: String,
                      description: String,
                      groupName: String,
                      fieldDataType: String,
                      options: Seq[OptionalData])

object Properties {
  implicit val format: Format[Properties] = Json.format[Properties]
}

case class OptionalData(label: String, name: String)

object OptionalData {
  implicit val format: Format[OptionalData] = Json.format[OptionalData]
}
And my query is:
val optionalData: String =
  """
    |CREATE TYPE IF NOT EXISTS optionaldata(
    |label text,
    |name text
    |);
  """.stripMargin

val createPropertiesTable: String =
  """
    |CREATE TABLE IF NOT EXISTS prop(
    |name text PRIMARY KEY,
    |label text,
    |description text,
    |groupname text,
    |fielddatatype text,
    |options LIST<frozen<optionaldata>>
    |);
  """.stripMargin
com.datastax.driver.core.exceptions.InvalidQueryException: Unknown type leadpropdb3.optionaldata
java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: Unknown type leadpropdb3.optionaldata
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:552)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:513)
at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.$anonfun$run$2(package.scala:25)
at scala.util.Try$.apply(Try.scala:213)
at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.run(package.scala:25)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Unknown type leadpropdb3.optionaldata
From the error message it's clear that the type wasn't created - you need to create it before creating the table. Be very careful when executing CQL DDL statements from your code: you need to wait until the schema is in agreement before you execute the next statement. Here is an example of Java code that does this - it's easy to convert it into Scala.
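For instance, a minimal Scala sketch of that idea could look like the following (executeAndWaitForAgreement is a hypothetical helper name; checkSchemaAgreement comes from the driver's cluster metadata):

import com.datastax.driver.core.Session

// Sketch only: run a DDL statement, then block until all nodes agree on the
// schema version before the caller issues the next statement.
def executeAndWaitForAgreement(session: Session, statement: String): Unit = {
  session.execute(statement)
  while (!session.getCluster.getMetadata.checkSchemaAgreement())
    Thread.sleep(100)
}

executeAndWaitForAgreement(session, optionalData)           // CREATE TYPE ...
executeAndWaitForAgreement(session, createPropertiesTable)  // CREATE TABLE ...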
When you're using the Object Mapper with Scala, you need to obey some rules (I hope that my blog post on that topic will be published soon):
You need to use Java types - List instead of Seq, etc. - or use extra codecs for Scala;
Case classes should have an empty (no-argument) constructor.
But otherwise it's possible to use the object mapper with Scala, like this:
#UDT(name = "scala_udt")
case class UdtCaseClass(id: Integer, #(Field #field)(name = "t") text: String) {
def this() {
this(0, "")
}
}
#Table(name = "scala_test_udt")
case class TableObjectCaseClassWithUDT(#(PartitionKey #field) id: Integer,
udt: UdtCaseClass) {
def this() {
this(0, UdtCaseClass(0, ""))
}
}
// ...
val mapperForUdtCaseClass = manager.mapper(classOf[TableObjectCaseClassWithUDT])
val objectCaseClassWithUDT = mapperForUdtCaseClass.get(new Integer(1))
println("ObjWithUdt(1)='" + objectCaseClassWithUDT + "'")
More examples are available in my repo.
I am trying to do this (on Play Framework):
db.run(users.filter(_.id === id).map(_.deleted).update(Option(DateTime.now)))
But it's throwing a compilation error:
No matching Shape found. Slick does not know how to map the given
types. Possible causes: T in Table[T] does not match your *
projection, you use an unsupported type in a Query (e.g. scala List),
or you forgot to import a driver api into scope. Required level:
slick.lifted.FlatShapeLevel
Source type: slick.lifted.Rep[Option[org.joda.time.DateTime]] Unpacked type: T
Packed type: G
Slick version: 3.0.3.
How can I fix this error?
class UserTable(tag: Tag) extends Table[User](tag, "user") {
  def id = column[Int]("id")
  def name = column[String]("name")
  def age = column[Int]("age")
  def deleted = column[Option[DateTime]]("deleted")

  override def * =
    (id, name, age, deleted) <> ((User.apply _).tupled, User.unapply)
}

case class User(
  id: Int = 0,
  name: String,
  age: Int,
  deleted: Option[DateTime] = None
)
When you define your table, there is a column type mapper in scope for DateTime, something like:
implicit val dtMapper = MappedColumnType.base[org.joda.time.DateTime, Long](
  dt => dt.getMillis,
  l => new DateTime(l, DateTimeZone.UTC)
)
for example. This mapper must also be in scope at the location where you construct your query, otherwise Slick will not know how to convert what you've written into a SQL query.
You can either import the mapper, if you would like to keep the query separate from the table, or define the query in the same file where you define UserTable. A common pattern is to wrap the table in a trait, say UserRepository, and define both the table and the queries within that trait.
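A minimal sketch of that pattern might look like this (UserRepository and markDeleted are illustrative names; it reuses the User case class and mapper from above, and assumes whichever Slick driver/profile api you already import):

import org.joda.time.{DateTime, DateTimeZone}
import slick.driver.MySQLDriver.api._  // or whichever driver/profile you use

trait UserRepository {
  // The mapper lives in the trait, so it is in scope for both the table
  // definition and every query defined below.
  implicit val dtMapper = MappedColumnType.base[DateTime, Long](
    dt => dt.getMillis,
    l => new DateTime(l, DateTimeZone.UTC)
  )

  class UserTable(tag: Tag) extends Table[User](tag, "user") {
    def id = column[Int]("id")
    def name = column[String]("name")
    def age = column[Int]("age")
    def deleted = column[Option[DateTime]]("deleted")
    override def * = (id, name, age, deleted) <> ((User.apply _).tupled, User.unapply)
  }

  val users = TableQuery[UserTable]

  // The mapper above is in scope here, so this update compiles.
  def markDeleted(id: Int) =
    users.filter(_.id === id).map(_.deleted).update(Option(DateTime.now))
}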
See this page for more information on how implicit scope works.
I am attempting to use scala-cass in order to read from Cassandra and convert the result set to a case class using resultSet.as[CaseClass]. This works great when running the following.
import com.weather.scalacass.syntax._
case class TestTable(id: String, data1: Int, data2: Long)
val resultSet = session.execute(s"select * from test.testTable limit 10")
resultSet.one.as[TestTable]
Now I am attempting to make this more generic and I am unable to find the proper type constraint for the generic class.
import com.weather.scalacass.syntax._
case class TestTable(id: String, data1: Int, data2: Long)
abstract class GenericReader[T] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
I implement this class with the desired case class and attempt to call getRows on the created Object.
object TestTable extends GenericReader[TestTable] {
  val keyspace = "test"
  val table = "TestTable"
}
TestTable.getRows(session)
This throws the exception: could not find implicit value for parameter ccd: com.weather.scalacass.CCCassFormatDecoder[T].
I am trying to add a type constraint to GenericReader in order to ensure the implicit conversion will work. However, I am unable to find the proper type. I am attempting to read through scala-cass in order to find the proper constraint but I have had no luck so far.
I would also be happy to use any other library that can achieve this.
Looks like as[T] requires an implicit value that you don't have in scope, so you'll need to require that implicit parameter in the getRows method as well.
def getRows(session: Session)(implicit cfd: CCCassFormatDecoder[T]): T
You could express this as a type constraint (what you were looking for in the original question) using context bounds:
abstract class GenericReader[T:CCCassFormatDecoder]
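Putting that together, a minimal sketch with the context bound might look like the following (the bound is just shorthand for the implicit parameter; CCCassFormatDecoder is the scala-cass type named in the error message):

import com.datastax.driver.core.Session
import com.weather.scalacass.CCCassFormatDecoder
import com.weather.scalacass.syntax._

// The context bound [T: CCCassFormatDecoder] puts the decoder in implicit
// scope, so resultSet.one.as[T] can find it.
abstract class GenericReader[T: CCCassFormatDecoder] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}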
Rather than try to bound your generic T type, it might be easier to just pass through the missing implicit parameter:
abstract class GenericReader[T](implicit ccd: CCCassFormatDecoder[T]) {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
Finding a concrete value for that implicit can then be deferred until you narrow T to a specific class (like object TestTable extends GenericReader[TestTable]).
How can we overcome the 22-column limit when calling stored procedures with Slick?
We currently have:
val q3 = sql"""call getStatements(${accountNumber})""".as[Transaction]
The problem is that we have to return more than 22 columns, and the Transaction case class cannot have more than 22 fields, because when we define the JSON format with Json.format we get an error:
[error] E:\IdeaProjects\admin\Transaction.scala:59: No unapply or unapplySeq function found
[error] implicit val jsonFormat = Json.format[Transaction]
Any suggestions?
Alright - so if you can actually modify your Transaction case class, then there is a better solution than HList (which, to be honest, may be a little cumbersome to operate with later on).
So here is the thing: let's imagine you have a User table with the following attributes:
id
name
surname
faculty
finalGrade
street
number
city
postCode
The above columns may not make much sense, but let's use them as an example. The most straightforward way to deal with them is to create a case class:
case class User(
id: Long,
name: String,
... // rest of the attributes here
postCode: String)
which would be mapped from the table on the application side.
Now, what you can also do instead is this:
case class Address(street: String, number: String, city: String, postCode: String)
case class UniversityInfo(faculty: String, finalGrade: Double)
case class User(id: Long, name: String, surname: String, uniInfo: UniversityInfo, address: Address)
This composition will help you avoid the problem with too many columns (which is basically a problem with too many attributes in your case class/tuple). Apart from that, I would argue that it is almost always beneficial to do this if you have many columns - if for nothing else, then simply for readability purposes.
How to do the mapping
class UserTable(tag: Tag) extends Table[User](tag, "User") {
  def id = column[Long]("id")
  def name = column[String]("name")
  // ... all the other fields
  def postCode = column[String]("postCode")

  def * = (id, name, surname, uniInfoProjection, addressProjection) <> ((User.apply _).tupled, User.unapply)

  def uniInfoProjection = (faculty, finalGrade) <> ((UniversityInfo.apply _).tupled, UniversityInfo.unapply)

  def addressProjection = (street, number, city, postCode) <> ((Address.apply _).tupled, Address.unapply)
}
The same can be done with custom SQL mapping.
implicit val getUserResult = GetResult(r =>
  User(r.nextLong, r.nextString, r.nextString,
    UniversityInfo(r.nextString, r.nextDouble),
    Address(r.nextString, r.nextString, r.nextString, r.nextString))
)
So to put things simply: try to segregate your fields into multiple nested case classes and your problem should go away (with the added benefit of improved readability). If you do that, hitting the tuple/case class limit should virtually never be a problem (and you shouldn't even need to use HList).
I'm trying to work out how to save nested case classes with the Spark Cassandra Connector. As a simple example:
Case classes:
case class Foo(id: String, bar: Bar)
case class Bar(field1: String)
Cassandra table:
CREATE TABLE foo (id text, field1 text, PRIMARY KEY (id));
Spark code:
val foo = Foo("a", Bar("b"))
val fooRdd = sparkContext.parallelize(Seq(foo))
fooRdd.saveToCassandra(cassandraKeyspace, "foo")
Results in:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Columns not found in Foo: [field1]
I realise I could make a new case class that flattens out Foo, but I'd rather not do this if possible. I've played around with column mappers but to no avail. Is there a better way?
I'm trying to use Couchbase as a cache layer for a relational database that is accessed using Slick. The skeleton of my code that's relevant to the question is as follows:
class RdbTable[T <: Table[_]](implicit val bucket: CouchbaseBucket) {

  type ElementType = T#TableElementType

  private val table = TableQuery[T].baseTableRow

  private def cacheAll(implicit session: Session) =
    TableQuery[T].list foreach (elem => cache(elem))

  private def cache(elem: ElementType) =
    table.primaryKeys foreach (pk => bucket.set[ElementType](key(pk, elem), elem))

  private def key(pk: PrimaryKey, elem: ElementType) = ???

  .......
}
As you can see, I want to cache each element by all of its primary keys. For this purpose, I need to obtain the value of that key for the given element. But I don't see an obvious way to compute the value of a primary key (the column value, if single-column key; the tuple value, if multi-column).
Any suggestions on what to do? Note that the code MUST NOT know what the actual tables and their columns are. It must be completely general.
We're doing something similar, using Redis as the cache. Each of our records only has one primary key, but in some cases we need to include additional data with the cache key to avoid ambiguity (for example, we have a ManyToMany record that represents an association between two records; when we return a ManyToMany record we'll embed one (but not both) of the associated records, and so in the cache key we need to include the type of the associated record that we're returning).
trait Record {
  val cacheKey: CacheKey
}

trait ManyToManyRecord extends Record {
  override val cacheKey: ManyToManyCacheKey
}

class CacheKey(recordType: String, key: Int) {
  def getKey: String = recordType + ":" + key.toString
}

class ManyToManyCacheKey(recordType: String, key: Int, assocType: String)
    extends CacheKey(recordType, key) {
  override def getKey: String = recordType + ":" + key.toString + ":" + assocType
}
All of our tables use an integer primary key called "id", so it's easy for us to figure out what the value of "key" is. If you're working with a more complicated schema and don't want to manually write out the "def key: String" (or whatever) definitions for all of your record / table types, then you could try using Slick code generation to automatically generate record / table classes / objects with "def key" created directly from the schema. However, the learning curve for Slick code generation (or any other code generation tool) is steep, so if this is your only use for it then you'd probably be better off generating "def key" by hand. (We generate somewhere between 20-30% of our code using the code generation tool, so the initial investment in learning how to use the tool has paid off)
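For illustration, a concrete record wired into this scheme might look like the following (the User class and its "user" type tag are hypothetical, assuming the integer "id" primary key described above):

case class User(id: Int, name: String) extends Record {
  // Produces cache keys such as "user:42" for the user with id 42.
  override val cacheKey: CacheKey = new CacheKey("user", id)
}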
Slick doesn't come with a built-in primary key extractor for entities. What you can do is use either interfaces, type classes or reflection. E.g. variants of the following:
Either make your entities implement a trait
trait HasPrimaryKey {
  def primaryKey: Any
}

class RdbTable[T <: Table[_ <: HasPrimaryKey]](implicit val bucket: CouchbaseBucket) {
  ...
  private def key(elem: ElementType) = elem.primaryKey
}

// and for each entity:
case class Person( ... ) extends HasPrimaryKey {
  def primaryKey = ...
}
or a type class
trait KeyTypeClass[E, T <: Table[E]] {
  def key(e: E): Any
}

class RdbTable[E, T <: Table[E]](implicit val bucket: CouchbaseBucket, keyTC: KeyTypeClass[E, T]) {
  ...
  private def key(elem: E) = keyTC.key(elem)
}

// and for each entity:
implicit val personKey = new KeyTypeClass[Person, PersonTable] {
  def key(p: Person) = ...
}
or using reflection to iterate over the primary keys and pull the values out of the corresponding fields of the entity.
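For the reflection route, a rough sketch could call the case-class accessors that correspond to the primary-key columns (primaryKeyValues is a hypothetical helper, and it assumes the column names match the entity's field names):

// Hypothetical helper: look up the accessor method for each primary-key
// field name and invoke it on the entity.
def primaryKeyValues(entity: AnyRef, pkFieldNames: Seq[String]): Seq[Any] =
  pkFieldNames.map(name => entity.getClass.getMethod(name).invoke(entity))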
Code generation as mentioned by Zim-Zam can help with the repetitive elements.