I created the following table for Cassandra:
abstract class MessageTable extends Table[ConcreteMessageModel, Message] {
  override def tableName: String = "messages"

  // String because TimedUUIDs are bad bad bad
  object id extends Col[String] with PartitionKey {
    override lazy val name = "message_id"
  }
  object phone extends Col[String]
  object message extends Col[String]
  object provider_message_id extends Col[Option[String]]
  object status extends Col[Option[String]]
  object datetime extends DateColumn {
    override lazy val name = "message_datetime"
  }

  override def fromRow(r: Row): Message = Message(
    phone(r),
    message(r),
    Some(UUID.fromString(id(r))),
    None,
    status(r),
    Some(ZonedDateTime.ofInstant(datetime(r).toInstant, ZoneOffset.UTC))
  )
}
In the above table, I want to be able to update rows based on either id or provider_message_id.
I can easily update a row using id:
update().where(_.id eqs message.id)...
But I can't update the table using provider_message_id:
update().where(_.provider_message_id eqs callback_id)...
How can I use multiple fields to update the table in Cassandra?
Cassandra restricts updates to work only through the primary key. The primary key can be a single column (the partition key) or multiple columns (a partition key plus one or more clustering keys).
In your case, you need to ensure that both id and provider_message_id are part of the primary key; the CQL description of the table should look something like this:
cqlsh:> DESCRIBE keyspace1.messages;
...
CREATE TABLE keyspace1.messages (
    id text,
    phone text,
    message text,
    provider_message_id text,
    status text,
    datetime date,
    PRIMARY KEY (id, provider_message_id)
) WITH CLUSTERING ORDER BY (provider_message_id ASC)
...
Also, please note that you will need to use both id and provider_message_id in every update query (there is no updating by id alone or by provider_message_id alone). Your code will look like:
update().where(_.id eqs message.id).and(_.provider_message_id eqs callback_id)...
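For example, a complete update of the status column might look like the sketch below; the "delivered" value and the trailing .future() call are illustrative assumptions, the point being that both primary-key columns are constrained:
update()
  .where(_.id eqs message.id)
  .and(_.provider_message_id eqs callback_id)
  .modify(_.status setTo Some("delivered")) // hypothetical new status value
  .future()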
It seems that I can't find anywhere how to properly use custom column types in Slick, and I've been struggling for a while. The Slick documentation suggests MappedColumnType, but I found it usable only for simple use cases like primitive type wrappers (or it's probably just me not knowing how to use it properly).
Let's say that I have a Jobs table in my DB described by the JobsTableDef class. In that table, I have columns companyId and responsibleUserId which are foreign keys to Company and User objects in their respective tables (CompaniesTableDef, UsersTableDef).
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def companyId = column[Long]("companyId")
  def responsibleUserId = column[Long]("responsibleUserId")

  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id)

  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]

  override def * = (id, title, companyId, responsibleUserId) <> (Job.tupled, Job.unapply)
}
class CompaniesTableDef(tag: Tag) extends Table[Company](tag, "companies") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def about = column[String]("about")

  override def * = (id, name, about) <> (Company.tupled, Company.unapply)
}

class UsersTableDef(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def username = column[String]("username", O.Unique)

  override def * = (id, username) <> (User.tupled, User.unapply)
}
What I would like to achieve is to automatically 'deserialize' the Company and User represented by their IDs in the Jobs table. For example:
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def company = column[Company]("companyId")
  def responsibleUser = column[User]("responsibleUserId")

  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id.?)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id.?)

  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]

  override def * = (id, title, company, responsibleUser) <> (Job.tupled, Job.unapply)
}
Given that my Job class is defined like this:
case class Job(
  id: Long,
  title: String,
  company: Company,
  responsibleUser: User
)
Currently, I'm doing it the old-fashioned way: getting the Job from the DB, reading companyId and responsibleUserId, then querying the DB again and manually constructing another Job object (of course, I could also join the tables, get the data as a tuple, and then construct the Job object). I seriously doubt that this is the way to go. Is there a smarter, more elegant way to instruct Slick to automagically fetch linked objects from other tables?
EDIT: I'm using Play 2.6.12 with Slick 3.2.2
After a couple of days of deeper investigation, I've concluded that it's currently impossible in Slick. What I was looking for could be described as auto-joining tables through custom column types. Slick indeed supports custom column types (embodied by MappedColumnType, as described in the docs), but it works only for relatively simple types that aren't composed of other objects deserialized from the DB (at least not automatically; you could always try to fetch the other object from the DB and then Await.result() the resulting Future, but I guess that's not good practice).
So, to answer my own question: 'auto joining' in Slick isn't possible, so I'm falling back to manual joins with manual object construction.
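For reference, the manual join fallback can look roughly like this sketch. It assumes the original flat table definition (a flat row class, here called JobRow, holding companyId and responsibleUserId), that Job is the rich class from the edit above, and that db, Future, and an implicit ExecutionContext are in scope:
// Sketch only: JobRow is an assumed flat row class mapped by JobsTableDef.
val query = TableQuery[JobsTableDef]
  .join(TableQuery[CompaniesTableDef]).on(_.companyId === _.id)
  .join(TableQuery[UsersTableDef]).on(_._1.responsibleUserId === _.id)

val richJobs: Future[Seq[Job]] =
  db.run(query.result).map(_.map { case ((row, company), user) =>
    Job(row.id, row.title, company, user) // manual object construction
  })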
I've defined a custom Cassandra type and a table, e.g.:
CREATE TYPE my.usertype (
    id text,
    firstname text,
    lastname text
);

CREATE TABLE mytable (
    user frozen<usertype>,
    ...,
    PRIMARY KEY (user)
);
How can I define this user type in the Cassandra table definition in Scala?
class MyTable extends CassandraTable[X, Y] {
  object user extends UserColumn(this) with PartitionKey[User]
                      ^^^^^^^^^^???         ^^^^^^^^^^^^???
}
How can I implement a custom UserColumn for the UserType? I checked the Phantom code for the column implementations, but any example and/or explanation would be great.
This is supported in phantom-pro only:
@Udt case class User(
  id: String,
  firstname: String,
  lastname: String
)
And then you use UDTColumn:
class MyTable extends Table[MyTable, Y] {
  object user extends Col[User] with PartitionKey
}
This will give you automated schema generation and whatever else, including automated initialisation of your UDT.
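Assuming the regular phantom query DSL extends to UDT columns (I haven't verified this against phantom-pro), a lookup by the UDT partition key might then look like:
def findByUser(u: User): Future[Option[Y]] =
  select.where(_.user eqs u).one() // sketch; phantom-pro API details may differ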
So, according to the authors of the library, user-defined types are unsupported in the open source edition of phantom: https://github.com/outworkers/phantom/issues/496
However, you may be able to partially overcome that by extending MapColumn, as described here: Phantom-DSL cassandra with frozen type. Of course, that's not perfect; e.g. you will not be able to generate CQL for schema creation, and you will have to do some manual piping.
So, more or less, that could look like this:
class MyTable extends CassandraTable[MyTable, Y] {
  object user extends MapColumn[MyTable, Y, String, String](this) with PartitionKey[MapColumn...]
}
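The 'manual piping' then amounts to converting the raw Map[String, String] to and from your case class yourself; a minimal sketch (userFromMap/userToMap are names invented here):
def userFromMap(m: Map[String, String]): User =
  User(m("id"), m("firstname"), m("lastname"))

def userToMap(u: User): Map[String, String] =
  Map("id" -> u.id, "firstname" -> u.firstname, "lastname" -> u.lastname)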
I got the play-slick module up and running and am also using Evolutions in order to create the required tables in the database during application start.
For Evolutions to work, I have to write a 1.sql script which contains the table definitions that I want to create. At the moment it looks like this:
# --- !Ups
CREATE TABLE Users (
    id UUID NOT NULL,
    email varchar(255) NOT NULL,
    password varchar(255) NOT NULL,
    firstname varchar(255),
    lastname varchar(255),
    username varchar(255),
    age int,
    PRIMARY KEY (id)
);

# --- !Downs
DROP TABLE Users;
So far so good, but for Slick to work correctly it also needs to know the definition of my table. So I have a UserDAO class which looks like this:
class UserDAO @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._

  private val Users = TableQuery[UsersTable]

  def all(): Future[Seq[User]] = db.run(Users.result)

  def insert(user: User): Future[User] = db.run(Users += user).map { _ => user }

  // Table definition
  private class UsersTable(tag: Tag) extends Table[User](tag, "users") {
    def id = column[UUID]("id", O.PrimaryKey)
    def email = column[String]("email")
    def password = column[String]("password")
    def firstname = column[Option[String]]("firstname")
    def lastname = column[Option[String]]("lastname")
    def username = column[Option[String]]("username")
    def age = column[Int]("age")

    def * = (id, email, password, firstname, lastname, username, age) <> ((User.apply _).tupled, User.unapply)
  }
}
I basically have the same table definition in two different places now: once in the 1.sql script and once in the UserDAO class.
I really don't like this design at all! Having the same table definition in two different places doesn't seem right.
Is there some way to generate the evolution scripts from the table definitions inside the UserDAO classes? Or is there a completely different way to generate the table definitions during startup (perhaps using only Slick)? I would really like to use only the Slick table definitions and get rid of the annoying SQL scripts.
I am using play-2.4 and play-slick-1.0
Thanks a lot.
Great question - I was in the same boat as you!
I'd have just the DAO and this code:
db.run(TableQuery[UsersTable].schema.create)
which'll create the database table for you. No need for the .sql script.
Correspondingly, to drop the table, use .drop instead of .create.
You can also combine the schema creation of several tables using reduceLeft. Here's how I do it:
lazy val allTables = Array(
  TableQuery[AcceptanceTable].schema,
  // ... many more ...
  TableQuery[UserTable].schema
).reduceLeft(_ ++ _)

/** Create all tables in the database */
def create = db.run(allTables.create)

/** Drop all tables in the database */
def drop = db.run(allTables.drop)
All of that needs the driver API in scope, e.g.:
val profile = slick.driver.H2Driver
import profile.api._
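If you want the creation to run safely on every application start, one approach is to check the existing tables first via Slick's MTable; a sketch, assuming db and allTables from above and an implicit ExecutionContext in scope:
import slick.jdbc.meta.MTable
import scala.concurrent.Future

def createIfNotExists: Future[Unit] =
  db.run(MTable.getTables).flatMap { existing =>
    val names = existing.map(_.name.name).toSet
    if (!names.contains("users")) db.run(allTables.create) // "users" is an assumed table name
    else Future.successful(())
  }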
I have these case classes:
case class PolicyHolder(id: String, firstName: String, lastName: String)
case class Policy(address: Future[Address], policyHolder: Future[PolicyHolder], created: RichDateTime, duration: RichDuration)
I then have a Slick schema defined for Policy:
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolder = foreignKey("POLICY_HOLDER_FK", address, TableQuery[PolicyHolderDAO])(_.id)
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")

  def * = (address, policyHolder, created, duration) <> (Policy.apply, Policy.unapply)
}
What is the best way for me to define this projection correctly, so that the policyHolder field inside my Policy case class is mapped from the foreign key value to an actual instance of the PolicyHolder case class?
Our solution to this problem is to place the foreign key id in the case class, and then to use a lazy val or a def (the latter possibly backed by a cache) to retrieve the record using the key. This assumes that your PolicyHolders are stored in a separate table. If they're denormalized but you want to treat them as separate case classes, you can instead have the lazy val / def in Policy construct a new case class rather than retrieving the record via the foreign key.
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolderId = column[String]("POLICY_HOLDER_ID")
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")

  def * = (address, policyHolderId, created, duration) <> (Policy.tupled, Policy.unapply)
}

case class Policy(address: String, policyHolderId: String, created: RichDateTime, duration: String) {
  // Lazily resolve the foreign key to the actual record when first needed.
  lazy val policyHolder: Future[PolicyHolder] = PolicyHolderDAO.get(policyHolderId)
}
We also used a common set of create/update/delete methods to account for the nesting, so that when a Policy is committed its inner PolicyHolder is also committed. We used a CommonDAO class that extended Table and declared the create/update/delete methods; all DAOs then extended CommonDAO instead of Table and overrode create/update/delete as necessary.
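A minimal sketch of that CommonDAO idea; every name and signature here is an assumption rather than the original code:
// Hypothetical base class declaring the CRUD hooks each DAO overrides.
abstract class CommonDAO[E](tag: Tag, tableName: String) extends Table[E](tag, tableName) {
  def create(e: E): Future[E]
  def update(e: E): Future[E]
  def delete(e: E): Future[Int]
}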
Edit: To cut down on errors and reduce the amount of boilerplate we had to write, we used Slick's code generation tool - this way the CRUD operations could be automatically generated from the schema.
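For reference, the generator (the slick-codegen artifact) can be invoked standalone along these lines; the H2 URL, output folder, and package below are placeholder assumptions:
slick.codegen.SourceCodeGenerator.main(Array(
  "slick.driver.H2Driver", // Slick driver/profile
  "org.h2.Driver",         // JDBC driver class
  "jdbc:h2:mem:test",      // database URL (placeholder)
  "src/main/scala",        // output folder (placeholder)
  "demo"                   // package for generated code (placeholder)
))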
In Slick, how do you do cascade deletion for a foreign key? Is there any way to specify this at the schema level, or does it have to be done when delete is called on the query?
I updated the docs: https://github.com/slick/slick/pull/721/
A foreign key constraint can be defined with a Table’s foreignKey method. It first takes a name for the constraint, the referencing column(s) and the referenced table. The second argument list takes a function from the referenced table to its referenced column(s) as well as ForeignKeyAction for onUpdate and onDelete, which are optional and default to NoAction. When creating the DDL statements for the table, the foreign key definition is added to it.
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES") {
  def supID = column[Int]("SUP_ID")
  //...
  def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
  // compiles to SQL:
  //   alter table "COFFEES" add constraint "SUP_FK" foreign key("SUP_ID")
  //   references "SUPPLIERS"("SUP_ID")
  //   on update RESTRICT on delete CASCADE
}
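For the snippet above to compile, a suppliers query needs to be in scope; a minimal sketch along the lines of the Slick docs (column names assumed):
class Suppliers(tag: Tag) extends Table[(Int, String)](tag, "SUPPLIERS") {
  def id = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def * = (id, name)
}
val suppliers = TableQuery[Suppliers]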