I have been toying around with the simple code provided in the phantom wiki; the following is what I have tried:
import com.websudos.phantom.dsl._

case class Student(id: UUID, name: String)

class Students extends CassandraTable[Students, Student] {
  object id extends UUIDColumn(this) with PartitionKey[UUID]
  object name extends StringColumn(this)

  def fromRow(row: Row): Student = {
    Student(id(row), name(row))
  }
}

object Students extends Students with Connector {
  def getByName(name: String): Future[Option[Student]] = {
    select.where(_.name eqs name).one()
  }
}
But my IDE keeps saying "Cannot resolve symbol where", and the compiler says "value where is not a member of com.websudos.phantom.builder.query.RootSelectBlock[Students,Student]".
I'm using Scala 2.11.6 and Phantom 1.10.1, all help is greatly appreciated!
I ran into this issue and resolved it using @flavian's suggestion above.
Ensure that your Connector has an implicit keyspace defined.
This is directly lifted from the example project.
trait KeyspaceDefinition {
  implicit val keySpace = KeySpace("sample_keyspace")
}

trait Connector extends SimpleConnector with KeyspaceDefinition
You are missing a fundamental Cassandra point: you cannot query by name, because it is not an indexed column. Given the table you've just defined, the query you are trying to perform is invalid, and Cassandra will tell you so at runtime.
Phantom will prevent most bad things at compile time. It's worth reading through this blog series to understand how things work in Cassandra.
To put things in perspective, the only where query that's valid for your Students table is:
def getById(id: UUID): Future[Option[Student]] = {
  select.where(_.id eqs id).one()
}
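If you genuinely need lookups by name, one option is a Cassandra secondary index. A minimal sketch for phantom 1.x, assuming the Index key trait from com.websudos.phantom.keys (check your version's API before relying on it):

class Students extends CassandraTable[Students, Student] {
  object id extends UUIDColumn(this) with PartitionKey[UUID]
  // Marking the column as a secondary index makes where(_.name eqs name)
  // compile; secondary-index reads carry the usual Cassandra caveats.
  object name extends StringColumn(this) with Index[String]

  def fromRow(row: Row): Student = Student(id(row), name(row))
}

For read-heavy access by name, a separate lookup table partitioned by name is usually the more idiomatic Cassandra design.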
I'm using this library for the first time and I ran into a problem. I did everything according to the documentation, but nothing is working and I don't know why. Here is my table model:
trait CassandraModel

object CassandraModel {

  case class TaskData(notifyId: String, notifyType: String) extends CassandraModel

  abstract class TaskDataCassandra extends Table[TaskDataCassandra, TaskData] {
    object notifyId extends StringColumn with PartitionKey
    object notifyType extends StringColumn

    def store(record: TaskData): InsertQuery.Default[TaskDataCassandra, TaskData] =
      insert
        .value(_.notifyId, record.notifyId)
        .value(_.notifyType, record.notifyType)
  }
}
And the Database with a DatabaseProvider:
class AppDatabase(override val connector: CassandraConnection) extends Database[AppDatabase](connector) {
  object taskDataCassandra extends TaskDataCassandra with Connector
}
trait AppDatabaseProvider extends DatabaseProvider[AppDatabase]
So when I start my app and try to create a keyspace, nothing happens:
object Boot extends App with AmqpConnector with ServiceRestRoute with JsonSerializer with AppDatabaseProvider {
  override def database: AppDatabase = new AppDatabase(CassandraConnector.createCassandraConnection)

  database.taskDataCassandra.create.ifNotExists()
}
The store method doesn't work either.
Read through the documentation properly and the differences will be obvious. The thing to read is the Database Docs.
You have 2 options. You can call database.create(), which is a blocking creation operation that will create all the tables inside the database.
Option 2 is to call database.taskDataCassandra.create.ifNotExists().future().
If you do not call future(), all you have is a generated query; you are not actually executing anything. If you check the return type of database.taskDataCassandra.create.ifNotExists(), it will be a CreateQuery, whereas if you add future() you get a Future[ResultSet].
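As a minimal sketch of both options (reusing the database value from your Boot object):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import com.outworkers.phantom.dsl._

// Option 1: blocking; creates every table registered on the database.
database.create()

// Option 2: per-table; only the .future() call actually hits Cassandra.
val created: Future[ResultSet] =
  database.taskDataCassandra.create.ifNotExists().future()
Await.result(created, 10.seconds)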
Hope this makes sense.
I'm recently trying to migrate to the latest phantom version, 2.24.8. I created a dummy project, but I'm running into a few issues that I can't figure out. Here's my code:
import com.outworkers.phantom.connectors.{CassandraConnection, ContactPoints}
import com.outworkers.phantom.database.Database
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

case class Test(id: String, timestamp: String)

abstract class Tests extends Table[Tests, Test] {
  object id extends StringColumn with PartitionKey
  object timestamp extends StringColumn with ClusteringOrder
}

abstract class ConcreteTests extends Tests with RootConnector {
  def addTest(l: Test): Future[ResultSet] = {
    // store(l).consistencyLevel_=(ConsistencyLevel.LOCAL_ONE).future
    insert.value(_.id, l.id)
      .value(_.timestamp, l.timestamp)
      .consistencyLevel_=(ConsistencyLevel.QUORUM).future
  }
}

class MyDB(override val connector: CassandraConnection) extends Database[MyDB](connector) {
  object tests extends ConcreteTests with connector.Connector
  def init(): Unit = {
    tests.create
  }
}

object Test {
  def main(args: Array[String]): Unit = {
    val db = new MyDB(ContactPoints(Seq("127.0.0.1")).keySpace("tests"))
    db.init
    db.tests.addTest(Test("1", "1323234234"))
    println("Done")
  }
}
It compiled and ran in IntelliJ and printed out 'Done'. However, no table is ever created, and there are no exceptions or warnings; it did nothing. When I stop the local Cassandra database, the code throws a NoHostAvailableException, so it does try to connect to the local database. What is the problem?
Another weird thing is that "com.typesafe.play" %% "play-json" % "2.6.9" is in my build.sbt. If I remove the library, the same code throws the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: scala/reflect/runtime/package$
at com.outworkers.phantom.column.AbstractColumn.com$outworkers$phantom$column$AbstractColumn$$_name(AbstractColumn.scala:55)
at com.outworkers.phantom.column.AbstractColumn.com$outworkers$phantom$column$AbstractColumn$$_name$(AbstractColumn.scala:54)
at com.outworkers.phantom.column.Column.com$outworkers$phantom$column$AbstractColumn$$_name$lzycompute(Column.scala:22)
at com.outworkers.phantom.column.Column.com$outworkers$phantom$column$AbstractColumn$$_name(Column.scala:22)
at com.outworkers.phantom.column.AbstractColumn.name(AbstractColumn.scala:58)
at com.outworkers.phantom.column.AbstractColumn.name$(AbstractColumn.scala:58)
at com.outworkers.phantom.column.Column.name(Column.scala:22)
at com.outworkers.phantom.builder.query.InsertQuery.value(InsertQuery.scala:107)
I really cannot figure out what's going on. Any help?
BTW, I'm using Scala 2.12.6 and JVM 1.8.181.
You're not using the correct DSL method for table creation; have a look at the official guide. All that table.create does is build an empty CreateQuery; you're then coercing the return type to Unit manually.
The automated blocking create method is on Database, not on table, so what you want is:
class MyDB(override val connector: CassandraConnection) extends Database[MyDB](connector) {
  object tests extends ConcreteTests with connector.Connector

  def init(): Unit = {
    this.create
  }
}
If you want to achieve the same thing using table, you need:
class MyDB(override val connector: CassandraConnection) extends Database[MyDB](connector) {
  object tests extends ConcreteTests with connector.Connector

  def init(): Unit = {
    import scala.concurrent.duration._
    import scala.concurrent.Await
    Await.result(tests.create.future(), 10.seconds)
  }
}
It's only the call to the future() method that triggers any kind of action against the database; otherwise you're just building a query without executing it. The method name can be confusing, and we will improve the docs in future releases to make this more obvious.
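One more observation about the original main method (my own note, separate from the create issue): addTest also returns a Future that is never awaited, so the JVM can exit before the insert completes. A sketch, with TestApp as a renamed stand-in for your Test object:

import scala.concurrent.Await
import scala.concurrent.duration._

object TestApp {
  def main(args: Array[String]): Unit = {
    val db = new MyDB(ContactPoints(Seq("127.0.0.1")).keySpace("tests"))
    db.init()
    // Block until Cassandra acknowledges the insert before printing;
    // otherwise main can return while the write is still in flight.
    Await.result(db.tests.addTest(Test("1", "1323234234")), 10.seconds)
    println("Done")
  }
}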
The conflict with play 2.6.9 looks very weird, it's entirely possible there's an incompatible dependency behind the scenes to do with macro compilation. Raise that as a separate issue and we can definitely have a look at it.
I am having some issues getting the following example to compile.
import scala.slick.driver.MySQLDriver.simple._

case class Age(value: Int)
case class User(id: Long, age: Option[Age])

object Dao {
  implicit val ageColumnType: TypedType[Age] = MappedColumnType.base[Age, Int](_.value, Age(_))

  class UserTable(tag: Tag) extends Table[User](tag, "users") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def age = column[Option[Age]]("age")
    def * = (id, age) <> (User.tupled, User.unapply)
  }

  val users = TableQuery[UserTable]

  def byId(id: Long)(implicit session: Session): Option[User] = {
    users.filter(_.age === Some(Age(21))).firstOption
  }
}
But the compiler is failing with the following error:
Example.scala:16:28: value === is not a member of slick.lifted.Rep[Option[Age]]
Does the right way to do this involve OptionColumnExtensionMethods or something similar? It is strange, though, that the type classes for TypedType[Option[T]] do not kick in here.
Here is a list of some other resources I dug up, but none of them seem to deal with a container type around a custom column type using mappedColumnType.
Filtering when using custom column type in Slick
Slick: Option column filtering
Slick and filtering by Option columns
Figured it out and figured it was worth posting here.
The following line had too broad a type signature:
implicit val ageColumnType: TypedType[Age]
With that signature, the implicit scope no longer contained the right evidence to infer the various column operators needed in the filter query, including the === method. Instead, I just needed a more specific type:
implicit val ageColumnType: BaseColumnType[Age]
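Concretely, the fixed Dao looks like this (a sketch; byAge is a hypothetical rename of the original byId, since the filter is on age rather than id):

object Dao {
  // BaseColumnType, unlike the broader TypedType, carries the evidence
  // that enables ===, ==? and friends on the mapped column.
  implicit val ageColumnType: BaseColumnType[Age] =
    MappedColumnType.base[Age, Int](_.value, Age(_))

  class UserTable(tag: Tag) extends Table[User](tag, "users") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def age = column[Option[Age]]("age")
    def * = (id, age) <> (User.tupled, User.unapply)
  }

  val users = TableQuery[UserTable]

  def byAge(age: Age)(implicit session: Session): Option[User] =
    users.filter(_.age === Option(age)).firstOption
}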
As per the comment on the original question, the ==? solution also works once this change is made. Thanks!
I'm using phantom to connect to Cassandra in the Play framework. I created the first class following the tutorial, and everything works fine.
case class User(id: String, page: Map[String, String])

sealed class Users extends CassandraTable[Users, User] {
  object id extends StringColumn(this) with PartitionKey[String]
  object page extends MapColumn[String, String](this)

  def fromRow(row: Row): User = {
    User(
      id(row),
      page(row)
    )
  }
}

abstract class ConcreteUsers extends Users with RootConnector {
  def getById(page: String): Future[Option[User]] = {
    select.where(_.id eqs id).one()
  }

  def create(id: String, kv: (String, String)): Future[ResultSet] = {
    insert.value(_.id, id).value(_.page, Map(kv)).consistencyLevel_=(ConsistencyLevel.QUORUM).future()
  }
}

class UserDB(val keyspace: KeySpaceDef) extends Database(keyspace) {
  object users extends ConcreteUsers with keyspace.Connector
}

object UserDB extends ResourceAuthDB(conn) {
  def createTable() {
    Await.ready(users.create.ifNotExists().future(), 3.seconds)
  }
}
However, when I try to create another table following the exact same pattern, Play throws this exception at compile time:
overriding method session in trait RootConnector of type => com.datastax.driver.core.Session;
How can I create another table? Also, can someone explain what causes the exception? Thanks.
EDIT
I moved the connection parts together into one class:
class UserDB(val keyspace: KeySpaceDef) extends Database(keyspace) {
  object users extends ConcreteUsers with keyspace.Connector
  object auth extends ConcreteAuthInfo with keyspace.Connector
}
This time the error message is:
overriding object session in class AuthInfo; lazy value session in trait Connector of
type com.datastax.driver.core.Session cannot override final member
Hope the message helps identify the problem.
The only problem I see here has nothing to do with connectors; it's here:
def getById(page: String): Future[Option[User]] = {
  select.where(_.id eqs id).one()
}
This should be:
def getById(page: String): Future[Option[User]] = {
  select.where(_.id eqs page).one()
}
Try this; I was able to compile. Is RootConnector the default one, or do you define another one yourself?
It took me 6 hours to figure out the problem: there is a column named "session" in the other table. It turns out that you need to be careful when choosing column names. "session" obviously causes the exception above, and Cassandra also has a long list of reserved keywords. If you accidentally use one of them as a column name, phantom will not throw any exception (maybe it should?). I don't know if any other keywords are reserved in phantom; a list of them would be really helpful.
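For illustration, a hedged sketch of the fix (AuthRecord and the column names are hypothetical stand-ins for the real table): rename the offending column object so it no longer collides with the session member that phantom's connector provides.

case class AuthRecord(id: String, token: String)

sealed class AuthInfo extends CassandraTable[AuthInfo, AuthRecord] {
  object id extends StringColumn(this) with PartitionKey[String]
  // Renamed from "session": a column object with that name overrides the
  // final session member inherited via RootConnector, causing the error above.
  object token extends StringColumn(this)

  def fromRow(row: Row): AuthRecord = AuthRecord(id(row), token(row))
}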
I just can't figure it out. What I am using right now is:
abstract class DBEnumString extends Enumeration {
  implicit val enumMapper = MappedJdbcType.base[Value, String](
    _.toString(),
    s => this.withName(s)
  )
}
And then:
object SomeEnum extends DBEnumString {
  type T = Value
  val A1 = Value("A1")
  val A2 = Value("A2")
}
The problem is that during an insert/update the JDBC driver for PostgreSQL complains about the parameter type being "character varying" when the column type is "some_enum", which is reasonable, since I am converting SomeEnum to String.
How do I tell Slick to treat String as DB-defined "enum_type"? Or how to define some other Scala-type that will map to "enum_type"?
I had similar confusion when trying to get my PostgreSQL enums to work with Slick. slick-pg allows you to use Scala enums with your database's enums, and its test suite shows how.
Below is an example.
Say we have this enumerated type in our database.
CREATE TYPE Dog AS ENUM ('Poodle', 'Labrador');
We want to be able to map these to Scala enums, so we can use them happily with Slick. We can do this with slick-pg, an extension for slick.
First off, we make a Scala version of the above enum.
object Dogs extends Enumeration {
  type Dog = Value
  val Poodle, Labrador = Value
}
To get the extra functionality from slick-pg we extend the normal PostgresDriver and say we want to map our Scala enum to the PostgreSQL one (remember to change the slick driver in application.conf to the one you've created).
object MyPostgresDriver extends PostgresDriver with PgEnumSupport {
  override val api = new API with MyEnumImplicits {}

  trait MyEnumImplicits {
    implicit val dogTypeMapper = createEnumJdbcType("Dog", Dogs)
    implicit val dogListTypeMapper = createEnumListJdbcType("Dog", Dogs)
    implicit val dogColumnExtensionMethodsBuilder = createEnumColumnExtensionMethodsBuilder(Dogs)
    implicit val dogOptionColumnExtensionMethodsBuilder = createEnumOptionColumnExtensionMethodsBuilder(Dogs)
  }
}
Now when you want to make a new model case class, simply use the corresponding Scala enum.
case class User(favouriteDog: Dog)
And when you do the whole DAO table shenanigans, again you can just use it.
class Users(tag: Tag) extends Table[User](tag, "User") {
  def favouriteDog = column[Dog]("favouriteDog")
  def * = favouriteDog <> (User.apply, User.unapply)
}
Obviously you need the Scala Dog enum in scope wherever you use it.
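For completeness, a small usage sketch (assuming the MyPostgresDriver defined above; poodleLovers is just an illustrative name):

import MyPostgresDriver.api._
import Dogs._

val users = TableQuery[Users]

// The enum JdbcType from MyEnumImplicits makes Dog a first-class column
// type, so standard operators like === work on Rep[Dog].
val poodleLovers = users.filter(_.favouriteDog === Poodle).result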
Due to a bug in Slick, you currently can't dynamically link to a custom Slick driver in application.conf (it should work, but it doesn't). This means you either need to run the Play framework with start and forgo dynamic recompiling, or create a standalone sbt project containing just the custom Slick driver and depend on it locally.