Using Phantom 2 with an existing Cassandra session - scala

I am trying to migrate our current implementation from Phantom 1.28.16 to 2.16.4, but I'm running into problems with the setup.
Our framework provides us with the Cassandra session object during startup, which doesn't seem to fit Phantom's model. I am trying to get Phantom to accept that already-instantiated session instead of going through the regular CassandraConnection object.
I'm assuming that we can't use Phantom's Database class because of this, but I am hoping there is still some way to set up and use the tables without that class.
Is this doable?

I ended up doing the following to be able to use Phantom with an existing connection:
Defined a new trait PhantomTable to be used instead of Phantom's Table trait. They are identical except that PhantomTable does not mix in RootConnector:
trait PhantomTable[T <: PhantomTable[T, R], R] extends CassandraTable[T, R] with TableAliases[T, R]
Defined my tables by extending the PhantomTable trait and also turned each table into an object. Here I had to import the TableHelper macro to get it to compile:
...
import com.outworkers.phantom.macros.TableHelper._
final case class Foo(id: String, name: Option[String])
sealed class FooTable extends PhantomTable[FooTable, Foo] {
  override val tableName = "foo"

  object id extends StringColumn with PartitionKey
  object name extends OptionalStringColumn
}
object FooTable extends FooTable
After that it is possible to use all the desired methods on the FooTable object, as long as an implicit KeySpace and Session exist in scope.
This is a simple main program that shows how the tables can be used:
import com.datastax.driver.core.Cluster
import com.outworkers.phantom.dsl._
import scala.concurrent.Await
import scala.concurrent.duration._

object Main extends App {
  val ks = "foo_keyspace"

  val cluster = Cluster.builder().addContactPoints("127.0.0.1").build()

  implicit val keyspace: KeySpace = KeySpace(ks)
  implicit val session: Session = cluster.connect(ks)

  val res = for {
    _   <- FooTable.create.ifNotExists.future
    _   <- FooTable.insert.value(_.id, "1").value(_.name, Some("data")).future
    row <- FooTable.select.where(_.id eqs "1").one
  } yield row

  val r = Await.result(res, 10.seconds)
  println(s"Row: $r")
}

Related

Common Repository/DAO method to insert value with autoincremented ID using Slick is missing implicits

I'm rather new to Scala and trying to learn Slick. I started with the play-slick-example, where everything was understandable.
I went off and created my own entities, tables and queries.
The first trick was working around getting the autoincremented id upon insertion, but the example code covers it, though the idiom may be improved.
The next thing to work around was moving all common code to one place. By common code I mean the basic CRUD operations that are essentially copy-paste for all entities.
So I went and created a base Entity trait:
trait Entity[T <: Entity[T, ID], ID] {
  val id: Option[ID]
  def withId(id: ID): T
}
With that I went and created BaseRepo that should contain all common code:
abstract class BaseRepo[T <: Entity[T, ID], ID] {
  protected val dbConfigProvider: DatabaseConfigProvider
  val dbConfig = dbConfigProvider.get[JdbcProfile]

  import dbConfig._
  import profile.api._

  type TableType <: Keyed[ID] with RelationalProfile#Table[T]
  protected val tableQuery: TableQuery[TableType]
}
Here dbConfigProvider is injected into the implementations and allows importing the proper config (not sure it's needed, but the example has it like that). Keyed is another trait that represents tables with an id column:
trait Keyed[ID] {
  def id: Rep[ID]
}
Everything looks good for now. To extend BaseRepo one would need to properly assign TableType and tableQuery and everything should work.
I start with the following implementation:
case class Vehicle(override val id: Option[Long], name: String, plate: String, modelId: Long)
  extends Entity[Vehicle, Long] {
  override def withId(id: Long): Vehicle = this.copy(id = Some(id))
}
And the following repo:
@Singleton
class VehicleRepository @Inject()(override val dbConfigProvider: DatabaseConfigProvider)
                                 (implicit ec: ExecutionContext)
  extends BaseRepo[Vehicle, Long] {

  import dbConfig._
  import profile.api._

  type TableType = Vehicles
  val tableQuery = TableQuery[Vehicles]

  class Vehicles(tag: Tag) extends Table[Vehicle](tag, "vehicles") with Keyed[Long] {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def name = column[String]("name")
    def plate = column[String]("plate")
    def modelId = column[Long]("modelId")
    def * = (id.?, name, plate, modelId) <> ((Vehicle.apply _).tupled, Vehicle.unapply)
  }
}
Everything still looks great!
Now I add all() to BaseRepo:
def all() = db.run {
  tableQuery.result
}
And it works! I can list all my Vehicle entities through an injected repo: VehicleRepository with a simple repo.all() (well, I get a Future to be precise, but who cares).
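For example, consuming that Future might look like this (a minimal sketch, assuming the injected repo and an implicit ExecutionContext in scope):
repo.all().map { vehicles =>
  // Print each vehicle returned by the query.
  vehicles.foreach(v => println(s"${v.id} ${v.name} ${v.plate}"))
}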
Next, I go and try to generalize insert with autoincremented id and put it into BaseRepo:
def create(item: T, ec: ExecutionContext) = db.run {
  ((tableQuery returning tableQuery.map(_.id)) += item)
    .map(id => item.withId(id))(ec)
}
Don't mind the explicit ExecutionContext here; anyway, this does not work, and the error I get is frustrating:
Slick does not know how to map the given types.
Possible causes: T in Table[T] does not match your * projection,
you use an unsupported type in a Query (e.g. scala List),
or you forgot to import a driver api into scope.
Required level: slick.lifted.FlatShapeLevel
Source type: slick.lifted.Rep[ID]
Unpacked type: T
Packed type: G
If I move this method back into VehicleRepository (replacing T with Vehicle), everything works like a charm.
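For reference, a sketch of that working, non-generic version inside VehicleRepository (names taken from the snippets above; the only change from the generic version is Vehicle in place of T):
def create(item: Vehicle, ec: ExecutionContext) = db.run {
  // Insert and get back the generated id, then copy it into the entity.
  ((tableQuery returning tableQuery.map(_.id)) += item)
    .map(id => item.withId(id))(ec)
}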
Some hours of digging later, I understood that tableQuery.map takes an (implicit shape: Shape[_ <: FlatShapeLevel, F, T, G]) parameter, and I literally have no idea where that comes from in the scope of VehicleRepository, or why it is not available in my BaseRepo.
Any comment or advice on how to work around this, or maybe some other approaches to generalize CRUDs with Slick, would be appreciated!
I'm using Play 2.8, Slick 3.3.2, play-slick 5.0.0, Scala 2.13.1.

UnsupportedOperationException when writing to a Cassandra table

I have the following classes:
case class AucLog(timestamp: UUID, modelname: String, good: Int, list: List[Double])

class AucDatabase(override val connector: CassandraConnection)
  extends Database[AucDatabase](connector) {
  object users extends CMetrics with Connector
}

object AucDatabase extends AucDatabase(AucConnector.connector)

abstract class AucMetrics extends Table[AucMetrics, AucLog] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn
  object ud extends IntColumn
  object zob extends ListColumn[Double]
}
abstract class CMetrics extends AucMetrics with RootConnector {
  def store(metric: AucLog): Future[ResultSet] = {
    insert.value(_.id, metric.timestamp)
      .value(_.name, metric.modelname)
      .value(_.ud, metric.good)
      .value(_.zob, metric.list)
      .consistencyLevel_=(ConsistencyLevel.ONE)
      .future()
  }
}
DmpDatabase.create()
AucDatabase.create()

val pd = DmpDatabase.users.myselect()
val timeout = new Timeout(500000)
val result = Await.result(pd, timeout.duration)
// <--- this attempt to read from my database is working - no problemo --->
val todf = result.records.map { elem => elem.idcat }
val rdd = spark.sparkContext.parallelize(todf)
import spark.implicits._
rdd.toDF().show(100)

// I'm storing one line in my database to be sure that it is not empty when I read it.
AucDatabase.users.store(AucLog(UUIDs.timeBased(), "tyron", 0, List(0.1)))

val second = AucDatabase.users.myselect()
val resultmetric = Await.result(second, timeout.duration)
// -----> this line causes the exception
val r = spark.sparkContext.parallelize(resultmetric.records).toDF().show(
What I do not understand is that I'm doing basically the same thing with both databases. Yet one of them throws the following error: UnsupportedOperationException: No encoder found for com.outworkers.phantom.dsl.UUID.
Thank you.
First of all, the store method is macro-generated, so you don't need to write one yourself. The problem you are having is likely not related to Phantom at all, but to some kind of Spark construct.
The Phantom UUID is nothing more than a type alias for java.util.UUID, so I'm quite surprised there is no straight-up encoder for a default type. If you help me out with the full name of the Encoder class, including the package, I can figure out explicitly what is broken.
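A possible workaround, not from the original answer but a hedged sketch: since com.outworkers.phantom.dsl.UUID is just java.util.UUID and Spark ships no built-in Encoder for it, you can convert the UUID field to a String before building the DataFrame (this assumes myselect() returns the AucLog records shown above):
// Hypothetical workaround: map the UUID to a String so Spark's default product encoders apply.
val rows = resultmetric.records.map { r =>
  (r.timestamp.toString, r.modelname, r.good, r.list)
}
import spark.implicits._
spark.sparkContext.parallelize(rows).toDF("timestamp", "modelname", "good", "list").show()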

scala-cass generic read from cassandra table as case class

I am attempting to use scala-cass in order to read from Cassandra and convert the result set to a case class using resultSet.as[CaseClass]. This works great when running the following:
import com.weather.scalacass.syntax._
case class TestTable(id: String, data1: Int, data2: Long)
val resultSet = session.execute(s"select * from test.testTable limit 10")
resultSet.one.as[TestTable]
Now I am attempting to make this more generic and I am unable to find the proper type constraint for the generic class.
import com.weather.scalacass.syntax._

case class TestTable(id: String, data1: Int, data2: Long)

abstract class GenericReader[T] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
I implement this class with the desired case class and attempt to call getRows on the created object.
object TestTable extends GenericReader[TestTable] {
  val keyspace = "test"
  val table = "TestTable"
}
TestTable.getRows(session)
This throws an exception: could not find implicit value for parameter ccd: com.weather.scalacass.CCCassFormatDecoder[T].
I am trying to add a type constraint to GenericReader in order to ensure the implicit conversion will work. However, I am unable to find the proper type. I am attempting to read through scala-cass in order to find the proper constraint but I have had no luck so far.
I would also be happy to use any other library that can achieve this.
Looks like as[T] requires an implicit value that you don't have in scope, so you'll need to require that implicit parameter in the getRows method as well.
def getRows(session: Session)(implicit cfd: CCCassFormatDecoder[T]): T
You could express this as a type constraint (what you were looking for in the original question) using context bounds:
abstract class GenericReader[T:CCCassFormatDecoder]
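Put together, a minimal sketch of the reader with that context bound (the body is unchanged from the question):
abstract class GenericReader[T: CCCassFormatDecoder] {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}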
Rather than try to bound your generic T type, it might be easier to just pass through the missing implicit parameter:
abstract class GenericReader[T](implicit ccd: CCCassFormatDecoder[T]) {
  val table: String
  val keyspace: String

  def getRows(session: Session): T = {
    val resultSet = session.execute(s"select * from $keyspace.$table limit 10")
    resultSet.one.as[T]
  }
}
Finding a concrete value for that implicit can then be deferred to the point where you narrow T to a specific class (like object TestTable extends GenericReader[TestTable]).

How to write class and tableclass mapping for slick2 instead of using case class?

I used a case class to map my class to data for Slick 2 before, but now I use another Play plugin whose objects are case classes, and my class inherits from one of them. So I cannot use a case class, since Scala forbids a case class inheriting from another case class.
before:
case class User()
class UserTable(tag: Tag) extends Table[User](tag, "User") {
  ...
  def * = (...) <> (User.tupled, User.unapply)
}
It works.
But now I need to change the above to the below:
case class BasicProfile()
class User(...) extends BasicProfile(...) {
  ...
  def unapply(i: User): Tuple12[...] = Tuple12(...)
}

class UserTable(tag: Tag) extends Table[User](tag, "User") {
  ...
  def * = (...) <> (User.tupled, User.unapply)
}
I do not know how to write the tupled and unapply methods (I am not sure whether my version is correct) the way the case class template auto-generates them. Or you could show me another way to map the class to the table with Slick 2.
Can anyone give me an example of this?
First of all, this case class is a bad idea:
case class BasicProfile()
Case classes compare by their member values, and this one doesn't have any, so every instance is equal to every other. Also, the name is not great, because we have a class with the same name in Slick, which may cause confusion.
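To illustrate the comparison point (a quick sketch, not from the original answer):
case class BasicProfile()
// With no member values to distinguish them, all instances compare equal:
println(BasicProfile() == BasicProfile())  // prints true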
Regarding your class
class User(...) extends BasicProfile(...) {
  ...
  def unapply(i: User): Tuple12[...] = Tuple12(...)
}
It is possible to emulate case classes yourself. Are you doing that because of the 22 field limit? FYI: Scala 2.11 supports larger case classes. We are doing what you are trying at Sport195, but there are several aspects to take care of.
apply and unapply need to be members of object User (the companion object of class User). .tupled is not a real method, but is generated automatically by the Scala compiler: it turns a method like .apply, which takes a list of arguments, into a function that takes a single tuple of those arguments. As tuples are limited to 22 columns, so is .tupled. But you could of course generate one yourself; you may have to give it another name.
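A hedged sketch of what such a hand-written companion object could look like (the fields here are placeholders for illustration, not the real User fields):
// Placeholder class standing in for the real User with its inherited BasicProfile fields.
class User(val id: Long, val name: String)

object User {
  def apply(id: Long, name: String): User = new User(id, name)

  def unapply(u: User): Option[(Long, String)] = Some((u.id, u.name))

  // Hand-written equivalent of the .tupled that case class companions get for free:
  val tupled: ((Long, String)) => User = (apply _).tupled
}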
We are using the Slick code generator in combination with the Twirl template engine (it uses # to insert expressions; the $ parts are inserted as-is into the generated Scala code and evaluated when the generated code is compiled/run). Here are a few snippets that may help you:
Generate apply method
/** Factory for #{name} objects
#{indentN(2,entityColumns.map(c => "* #param "+c.name+" "+c.doc).mkString("\n"))}
*/
final def apply(
#{indentN(2,
entityColumns.map(c =>
colWithTypeAndDefault(c)
).mkString(",\n")
)}
) = new #{name}(#{columnsCSV})
Generate unapply method:
#{if(entityColumns.size <= 22)
s"""
/** Extractor for ${name} objects */
final def unapply(o: ${name}) = Some((${entityColumns.map(c => "o."+c.name).mkString(", ")}))
""".trim
else
""}
Trait that can be mixed into User to make it a Scala Product:
trait UserBase extends Product {
// Product interface
def canEqual(that: Any): Boolean = that.isInstanceOf[#name]
def productArity: Int = #{entityColumns.size}
def productElement(n: Int): Any = Seq(#{columnsCSV})(n)
override def toString = #{name}+s"(${productIterator.toSeq.mkString(",")})"
...
case-class like .copy method
final def copy(
#{indentN(2,columnsCopy)}
): #{name} = #{name}(#{columnsCSV})
To use those classes with Slick you have several options, all of which are somewhat newer and not (well) documented. Slick's normal <> operator goes via tuples, but that is not an option for more than 22 columns. One option is the new fastpath converters. Another option is mapping via a Slick HList. No examples exist for either. Another option is going via a custom Shape, which is what we do. This requires you to define a custom shape for your User class, plus another class defined using Column types to mirror User within queries, like this: http://slick.typesafe.com/doc/2.1.0/api/#scala.slick.lifted.ProductClassShape Too verbose to write by hand, so we use the following template code for it:
/** class for holding the columns corresponding to #{name}
* used to identify this entity in a Slick query and map
*/
class #{name}Columns(
#{indent(
entityColumns
.map(c => s"val ${c.name}: Column[${c.exposedType}]")
.mkString(", ")
)}
) extends Product{
def canEqual(that: Any): Boolean = that.isInstanceOf[#name]
def productArity: Int = #{entityColumns.size}
def productElement(n: Int): Any = Seq(#{columnsCSV})(n)
}
/** shape for mapping #{name}Columns to #{name} */
object #{name}Implicits{
implicit object #{name}Shape extends ClassShape(
Seq(#{
entityColumns
.map(_.exposedType)
.map(t => s"implicitly[Shape[ShapeLevel.Flat, Column[$t], $t, Column[$t]]]")
.mkString(", ")
}),
vs => #{name}(#{
entityColumns
.map(_.exposedType)
.zipWithIndex
.map{ case (t,i) => s"vs($i).asInstanceOf[$t]" }
.mkString(", ")
}),
vs => new #{name}Columns(#{
entityColumns
.map(_.exposedType)
.zipWithIndex
.map{ case (t,i) => s"vs($i).asInstanceOf[Column[$t]]" }
.mkString(", ")
})
)
}
import #{name}Implicits.#{name}Shape
A few helpers we put into the Slick code generator:
val columnsCSV = entityColumns.map(_.name).mkString(", ")
val columnsCopy = entityColumns.map(c => colWithType(c)+" = "+c.name).mkString(", ")
val columnNames = entityColumns.map(_.name.toString)
def colWithType(c: Column) = s"${c.name}: ${c.exposedType}"
def colWithTypeAndDefault(c: Column) =
colWithType(c) + colDefault(c).map(" = "+_).getOrElse("")
def indentN(n:Int,code: String): String = code.split("\n").mkString("\n"+List.fill(n)(" ").mkString(""))
I know this may be a bit troublesome to replicate, especially if you are new to Scala. I hope to find the time to get it into the official Slick code generator at some point.

How to map postgresql custom enum column with Slick2.0.1?

I just can't figure it out. What I am using right now is:
abstract class DBEnumString extends Enumeration {
  implicit val enumMapper = MappedJdbcType.base[Value, String](
    _.toString(),
    s => this.withName(s)
  )
}
And then:
object SomeEnum extends DBEnumString {
  type T = Value
  val A1 = Value("A1")
  val A2 = Value("A2")
}
The problem is that during insert/update the JDBC driver for PostgreSQL complains about the parameter type being "character varying" when the column type is "some_enum", which is reasonable, as I am converting SomeEnum to String.
How do I tell Slick to treat the String as the DB-defined "enum_type"? Or how do I define some other Scala type that will map to "enum_type"?
I had similar confusion when trying to get my PostgreSQL enums to work with Slick. slick-pg allows you to use Scala enums with your database's enums, and its test suite shows how.
Below is an example.
Say we have this enumerated type in our database.
CREATE TYPE Dog AS ENUM ('Poodle', 'Labrador');
We want to be able to map these to Scala enums, so we can use them happily with Slick. We can do this with slick-pg, an extension for slick.
First off, we make a Scala version of the above enum.
object Dogs extends Enumeration {
  type Dog = Value
  val Poodle, Labrador = Value
}
To get the extra functionality from slick-pg, we extend the normal PostgresDriver and say we want to map our Scala enum to the PostgreSQL one (remember to change the Slick driver in application.conf to the one you've created).
object MyPostgresDriver extends PostgresDriver with PgEnumSupport {
  override val api = new API with MyEnumImplicits {}

  trait MyEnumImplicits {
    implicit val dogTypeMapper = createEnumJdbcType("Dog", Dogs)
    implicit val dogListTypeMapper = createEnumListJdbcType("Dog", Dogs)
    implicit val dogColumnExtensionMethodsBuilder = createEnumColumnExtensionMethodsBuilder(Dogs)
    implicit val dogOptionColumnExtensionMethodsBuilder = createEnumOptionColumnExtensionMethodsBuilder(Dogs)
  }
}
Now when you want to make a new model case class, simply use the corresponding Scala enum.
case class User(favouriteDog: Dog)
And when you do the whole DAO table shenanigans, again you can just use it.
class Users(tag: Tag) extends Table[User](tag, "User") {
  def favouriteDog = column[Dog]("favouriteDog")
  def * = favouriteDog <> (User.apply, User.unapply)
}
Obviously you need the Scala Dog enum in scope wherever you use it.
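For completeness, a small, hedged sketch of querying on the enum column (assuming the table and driver above, with MyPostgresDriver.api._ in scope):
import MyPostgresDriver.api._
import Dogs._

val users = TableQuery[Users]

// Filtering with === works because createEnumJdbcType provides a column type for Dog.
val poodleFans = users.filter(_.favouriteDog === (Poodle: Dog))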
Due to a bug in Slick, you currently can't dynamically link to a custom Slick driver in application.conf (it should work). This means you either need to run Play Framework with start and forgo dynamic recompiling, or create a standalone sbt project containing just the custom Slick driver and depend on it locally.