I'm trying to create a new table where each row can have extra data called context, which is a key/value map. The key is a string and the value is a few bytes (a byte array).
The following definition doesn't compile, with the following error:
Error:(55, 30) Cannot find primitive implementation for class Array
object context extends MapColumn[String, Array[Byte]]
Here is my code:
case class UserJourny(
  id: Long,
  time: Int,
  activity_type: Int,
  context: Map[String, Array[Byte]]
)
abstract class UserJournyModel extends Table[UserJournyModel, UserJourny] {
  override def tableName: String = "user_journy"

  object dyid extends BigIntColumn with PartitionKey {
    override lazy val name = "id"
  }

  object time extends IntColumn with ClusteringOrder with Descending {
    override lazy val name = "time"
  }

  object activity_type extends IntColumn

  object context extends MapColumn[String, Array[Byte]]
}
How should I do this correctly?
Use MapColumn[String, ByteBuffer] instead, which Cassandra natively understands. Then you need some basic plumbing to encode a byte array as a buffer:
import java.nio.ByteBuffer

val buf = ByteBuffer.wrap(Array[Byte](1, 2, 3, 4, 5))
You can also derive a primitive to make your life easier overall.
implicit val byteArrayPrimitive = Primitive.derive[
  Array[Byte],
  ByteBuffer
](ByteBuffer.wrap)(_.array)
Now you can successfully use MapColumn[String, Array[Byte]]; Cassandra will still store it as a buffer, so the column will have type map<text, blob> inside.
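For completeness, here is a minimal sketch of how the derived primitive could sit next to the column definition; treat the exact import and placement as an assumption rather than a prescription:
import java.nio.ByteBuffer
import com.outworkers.phantom.dsl._

abstract class UserJournyModel extends Table[UserJournyModel, UserJourny] {
  // Assumption: the derived primitive only needs to be in implicit scope
  // wherever the MapColumn is declared.
  implicit val byteArrayPrimitive: Primitive[Array[Byte]] =
    Primitive.derive[Array[Byte], ByteBuffer](ByteBuffer.wrap)(_.array)

  object context extends MapColumn[String, Array[Byte]]
}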
case class Person(
  name: String,
  override val age: Int,
  override val address: String
) extends Details(age, address)

class Details(val age: Int, val address: String)

val person = Person("Alex", 33, "Europe")
val details = person.asInstanceOf[Details] // ???
println(details) // I want only Details class fields
I have these two classes. In reality, both have a lot of fields. In some places, I need only the superclass fields, taken from a Person instance.
Is there a nice way to get only the superclass values without mapping them field by field?
*I'm pretty sure I'll have some problems with JSON Writes for class Details (which is not a case class and has no companion object), but that's another subject.
If I understand your question correctly, you might be asking about runtime polymorphism, or dynamic method dispatch, as in Java.
If so, you may have to make both of them plain classes rather than case classes:
class Details(val age: Int, val address: String)

class Person(
  name: String,
  override val age: Int,
  override val address: String
) extends Details(age, address)
Now create a Person object and refer to it through the superclass (Details):
val detail: Details = new Person("Alex", 33, "Europe")
println(detail.address)
println(detail.age)
This way you will be able to get only the address and age.
Another way: why can't we make Details a separate entity and use composition, like this:
case class Details(age: Int, address: String)

case class Person(
  name: String,
  details: Details
)

val detail = Person("Alex", Details(10, "Europe"))
Output:
println(detail.details)
Details(10,Europe)
I will post a solution that leverages the Scala macro system (the old kind, not the new one introduced with Scala 3). It could be overkill for you...
BTW, if you want to access only the parent values (for example, as key/value pairs), you can:
given a type tag, get all the parents;
from them, extract all the accessors (vals);
for each val, get its value;
and finally return a list with all the accessor values.
So, let's solve each point step by step.
First of all, we have to write the macro definition as:
import scala.language.experimental.macros
import scala.reflect.macros.whitebox

object Macros {
  def accessors[T](element: T): String = macro MacrosImpl.accessors[T]
}

object MacrosImpl {
  def accessors[T: c.WeakTypeTag](c: whitebox.Context)(element: c.Expr[T]): c.Expr[String] = ...
}
For the first point, we can leverage the macro reflection API through c.universe:
import c.universe._
val weakType = weakTypeTag[T] //thanks to the WeakTypeTag typeclass
val parents = weakType.tpe.baseClasses
For the second point, we can iterate over the parent classes and then keep only the public accessors:
val accessors = parents
  .map(weakType.tpe.baseType(_))
  .flatMap(_.members)
  .filter(_.isPublic)
  .filter(_.isMethod)
  .map(_.asMethod)
  .filter(_.isAccessor)
  .toSet
So, for example, if we write Macros.accessors[Details](person), accessors will yield age and address.
To take the values, we can leverage quasiquoting. First, we take only the value names:
val names = accessors
  .map(_.fullName)
  .map(_.split("\\."))
  .map(_.reverse.head)
Then we convert them into a TermName:
val terms = names.map(TermName(_))
And finally, we convert each term to a key value tuple containing the val name and its value:
val accessorValues = terms
  .map(name => c.Expr[(String, Any)](q"(${name.toString}, ${element}.${name})"))
  .toSeq
The last step consists in converting a Seq[Expr[(String, Any)]] into an Expr[Seq[(String, Any)]]. One way to do that is to leverage recursion, reify, and expression splicing:
def seqToExprs(seq: Seq[Expr[(String, Any)]]): c.Expr[Seq[(String, Any)]] =
  seq.headOption match {
    case Some(head) =>
      c.universe.reify(
        Seq((head.splice._1, head.splice._2)) ++
          seqToExprs(seq.tail).splice
      )
    case _ => c.Expr[Seq[(String, Any)]](q"Seq.empty")
  }
Here I decided to return a String representation (but you can manipulate it as you want):
val elements = seqToExprs(accessorValues)
c.Expr[String](q"${elements}.mkString")
You can use it as:
import Macros._
class A(val a : Int)
class B(val b : Int) extends A(b)
class C(val c: Int) extends B(c)
val c = new C(10)
println(accessors[C](c)) // prints (a, 10)(b, 10)(c, 10)
println(accessors[B](c)) // prints (a, 10)(b, 10)
println(accessors[A](c)) // prints (a, 10)
And, using your example:
case class Person(
  name: String,
  override val age: Int,
  override val address: String
) extends Details(age, address)

class Details(val age: Int, val address: String)

val person = Person("Alex", 33, "Europe")

println(accessors[Details](person)) // prints (address,Europe)(age,33)
println(accessors[Person](person))  // prints (address,Europe)(age,33)(name,Alex)
Here is a repository with the macro implemented.
Scala 3 introduces a safer and cleaner macro system; if you use it and want to go further, you can read these articles:
macros tips and tricks
short tutorial
another tutorial
I have a class like this:
class Cache(
  tableName: String,
  TTL: Int
) {
  // Creates a cache
}
I have a companion object that returns different types of caches. It has functions that require a base table name and can construct the cache.
object Cache {
  def getOpsCache(baseTableName: String): Cache = {
    new Cache(s"${baseTableName}_ops", OpsTTL);
  }
  def getSnapshotCache(baseTableName: String): Cache = {
    new Cache(s"${baseTableName}_snaps", SnapshotTTL);
  }
  def getMetadataCache(baseTableName: String): Cache = {
    new Cache(s"${baseTableName}_metadata", MetadataTTL);
  }
}
The object does a few more things and the Cache class has more parameters, which makes it useful to have a companion object to create the different types of caches. The baseTableName parameter is the same for all of the caches. Is there a way in which I can pass this parameter only once and then just call the functions to get the different types of caches?
An alternative is to create a factory class, pass the baseTableName parameter to its constructor, and then call the functions. But I am wondering whether it could be done in some way with the companion object.
The simplest way is to put your factory in a case class:
case class CacheFactory(baseTableName: String) {
  lazy val getOpsCache: Cache =
    Cache(s"${baseTableName}_ops", OpsTTL)

  lazy val getSnapshotCache: Cache =
    Cache(s"${baseTableName}_snaps", SnapshotTTL)

  lazy val getMetadataCache: Cache =
    Cache(s"${baseTableName}_metadata", MetadataTTL)
}
As I like case classes, I changed your Cache to a case class as well:
case class Cache(tableName: String, TTL: Int)
As you can see, I also adjusted your Java-style code (semicolons, new) to idiomatic Scala.
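A quick usage sketch for the factory above (assuming OpsTTL and the other TTL constants are defined somewhere in scope):
val factory = CacheFactory("baseName")

val opsCache = factory.getOpsCache           // Cache("baseName_ops", OpsTTL)
val snapshotCache = factory.getSnapshotCache // Cache("baseName_snaps", SnapshotTTL)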
If you want to put it in the companion object, you could use implicits, like:
object Cache {
  def getOpsCache(implicit baseTableName: String): Cache =
    Cache(s"${baseTableName}_ops", OpsTTL)

  def getSnapshotCache(implicit baseTableName: String): Cache =
    Cache(s"${baseTableName}_snaps", SnapshotTTL)

  def getMetadataCache(implicit baseTableName: String): Cache =
    Cache(s"${baseTableName}_metadata", MetadataTTL)
}
Then your client looks like:
implicit val baseTableName: String = "baseName"

Cache.getSnapshotCache
Cache.getMetadataCache
Consider creating an algebraic data type, like so:
sealed abstract class Cache(tablePostfix: String, ttl: Int) {
  val tableName = s"baseTableName_$tablePostfix"
}

case object OpsCache extends Cache("ops", 60)
case object SnapshotCache extends Cache("snaps", 120)
case object MetadataCache extends Cache("metadata", 180)
OpsCache.tableName // res0: String = baseTableName_ops
I have the following classes:
case class AucLog(
  timestamp: UUID,
  modelname: String,
  good: Int,
  list: List[Double]
)

class AucDatabase(override val connector: CassandraConnection)
  extends Database[AucDatabase](connector) {
  object users extends CMetrics with Connector
}

object AucDatabase extends AucDatabase(AucConnector.connector)

abstract class AucMetrics extends Table[AucMetrics, AucLog] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn
  object ud extends IntColumn
  object zob extends ListColumn[Double]
}
abstract class CMetrics extends AucMetrics with RootConnector {
  def store(metric: AucLog): Future[ResultSet] = {
    insert.value(_.id, metric.timestamp)
      .value(_.name, metric.modelname)
      .value(_.ud, metric.good)
      .value(_.zob, metric.list)
      .consistencyLevel_=(ConsistencyLevel.ONE)
      .future()
  }
}
DmpDatabase.create()
AucDatabase.create()

val pd = DmpDatabase.users.myselect()
val timeout = new Timeout(500000)
val result = Await.result(pd, timeout.duration)

// <--- this attempt to read from my database is working - no problemo --->
val todf = result.records.map { elem => elem.idcat }
val rdd = spark.sparkContext.parallelize(todf)
import spark.implicits._
rdd.toDF().show(100)

// ---> I'm storing one line in my database to be sure that it is not empty
// when I'm reading it.
AucDatabase.users.store(new AucLog(UUIDs.timeBased(), "tyron", 0, List(0.1)))

val second = AucDatabase.users.myselect()
val resultmetric = Await.result(second, timeout.duration)

// -----> this line causes the exception
val r = spark.sparkContext.parallelize(resultmetric.records).toDF().show(
What I do not understand is that I'm doing basically the same thing with both databases. Yet, one is throwing the following error: UnsupportedOperationException: No encoder found for com.outworkers.phantom.dsl.UUID.
Thank you.
First of all, the store method is macro-generated, so you don't need to create one. The problem you are having is likely not related to phantom at all, but to some kind of Spark construct.
The phantom UUID is nothing more than a type alias for java.util.UUID, so I'm quite surprised there is no straight-up encoder for a default type. If you help me out with the full name of the Encoder class, including the package, I can figure out explicitly what is broken.
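In the meantime, one pragmatic workaround (my assumption, not something the answer above prescribes) is to convert the UUID to a String before handing the records to Spark, so the default product/tuple encoders can handle every field:
// Map each record to a tuple of encoder-friendly types (UUID rendered as String).
val rows = resultmetric.records.map { r =>
  (r.timestamp.toString, r.modelname, r.good, r.list)
}

import spark.implicits._
spark.sparkContext.parallelize(rows)
  .toDF("timestamp", "modelname", "good", "list")
  .show()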
I am attempting to use scala-cass in order to read from Cassandra and convert the result set to a case class using resultSet.as[CaseClass]. This works great when running the following.
import com.weather.scalacass.syntax._
case class TestTable(id: String, data1: Int, data2: Long)
val resultSet = session.execute(s"select * from test.testTable limit 10")
resultSet.one.as[TestTable]
Now I am attempting to make this more generic and I am unable to find the proper type constraint for the generic class.
import com.weather.scalacass.syntax._

case class TestTable(id: String, data1: Int, data2: Long)

abstract class GenericReader[T] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
I implement this class with the desired case class and attempt to call getRows on the created object.
object TestTable extends GenericReader[TestTable] {
  val keyspace = "test"
  val table = "TestTable"
}
TestTable.getRows(session)
This throws an exception: could not find implicit value for parameter ccd: com.weather.scalacass.CCCassFormatDecoder[T].
I am trying to add a type constraint to GenericReader in order to ensure the implicit conversion will work. However, I am unable to find the proper type. I am attempting to read through scala-cass in order to find the proper constraint but I have had no luck so far.
I would also be happy to use any other library that can achieve this.
Looks like as[T] requires an implicit value that you don't have in scope, so you'll need to require that implicit parameter in the getRows method as well.
def getRows(session: Session)(implicit cfd: CCCassFormatDecoder[T]): T
You could express this as a type constraint (what you were looking for in the original question) using context bounds:
abstract class GenericReader[T: CCCassFormatDecoder]
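For reference, here is a minimal sketch of the full class with that context bound (assuming the decoder lives in com.weather.scalacass, as the error message indicates, and that Session is the DataStax driver's Session):
import com.datastax.driver.core.Session
import com.weather.scalacass.CCCassFormatDecoder
import com.weather.scalacass.syntax._

abstract class GenericReader[T: CCCassFormatDecoder] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    // The context bound guarantees a CCCassFormatDecoder[T] is in implicit scope,
    // which is what resultSet.one.as[T] needs.
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}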
Rather than try to bound your generic T type, it might be easier to just pass through the missing implicit parameter:
abstract class GenericReader[T](implicit ccd: CCCassFormatDecoder[T]) {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
Finding a concrete value for that implicit can then be deferred to when you narrow that T to a specific class (like object TestTable extends GenericReader[TestTable])
I wrote a trait to mix into a class the ability to serialize itself to query string parameters, leveraging an existing JSON Writes instance. In order to use that Writes instance as a parameter, I need to know the type into which this trait is being mixed. I'm getting that using a type parameter (which should be the class itself) and a self-type annotation. I'm wondering if there's a DRYer way of doing this, which doesn't require the type parameter?
Here's my code:
trait ConvertibleToQueryString[T] {
  this: T =>

  /** Transformation of field names in obj to query string keys */
  def objToQueryStringMapping: Map[JsPath, JsPath] = Map.empty

  /**
   * Convert a model to a Map, for serialization to a query string, to be used
   * in a REST API call.
   * @param writes writer for `obj`
   * @return
   */
  def toQueryStringMap(implicit writes: Writes[T]): Map[String, String] = {
    // Get a map of key -> JsValue from obj
    val mapObj = Json.toJson(this).transform(moveKeys(objToQueryStringMapping)).get.value
    // Convert the JsValue values of the map to query strings
    mapObj.mapValues(jsValueToQueryStringValue).filter(_._2.nonEmpty).toMap
  }
}
, to be used as such:
case class MyClass(param1: String, param2: Int) extends ConvertibleToQueryString[MyClass]
, that final type parameter being the thing that's annoying me. It's fully unconstrained, but it should really just be "the type of the class I get mixed into". Is there a way to express this?
Why not use the pimp-my-library (enrich-my-library) pattern:
implicit class ConvertibleToQueryString[T: Writes](x: T) {
  def objToQueryStringMapping: Map[JsPath, JsPath] = Map.empty

  def toQueryStringMap: Map[String, String] = {
    // Get a map of key -> JsValue from obj
    val mapObj = Json.toJson(x).transform(moveKeys(objToQueryStringMapping)).get.value
    // Convert the JsValue values of the map to query strings
    mapObj.mapValues(jsValueToQueryStringValue).filter(_._2.nonEmpty).toMap
  }
}
Now you don't need the extends ... at all on the classes you want to serialize.
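A usage sketch, assuming a play-json Writes instance for the class is in scope (and that moveKeys and jsValueToQueryStringValue are the helpers from your original snippet):
import play.api.libs.json.{Json, Writes}

case class MyClass(param1: String, param2: Int)

object MyClass {
  // Assumption: a standard play-json Writes derived for the case class.
  implicit val writes: Writes[MyClass] = Json.writes[MyClass]
}

// The implicit class picks up the Writes instance through its context bound.
val queryString: Map[String, String] = MyClass("foo", 42).toQueryStringMap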