Table creation in play 2.4 with play-slick 1.0 - scala

I have the play-slick module up and running and am also using Play evolutions to create the required tables in the database during application start.
For evolutions to work, you have to write a 1.sql script containing the table definitions you want to create. At the moment mine looks like this:
# --- !Ups
CREATE TABLE Users (
  id UUID NOT NULL,
  email varchar(255) NOT NULL,
  password varchar(255) NOT NULL,
  firstname varchar(255),
  lastname varchar(255),
  username varchar(255),
  age int NOT NULL,
  PRIMARY KEY (id)
);

# --- !Downs
DROP TABLE Users;
So far so good, but for Slick to work correctly it also needs to know the definition of my table. So I have a UserDAO class which looks like this:
import java.util.UUID
import javax.inject.Inject
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

class UserDAO @Inject()(protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._

  private val Users = TableQuery[UsersTable]

  def all(): Future[Seq[User]] = db.run(Users.result)

  def insert(user: User): Future[User] = db.run(Users += user).map { _ => user }

  // Table definition
  private class UsersTable(tag: Tag) extends Table[User](tag, "users") {
    def id = column[UUID]("id", O.PrimaryKey)
    def email = column[String]("email")
    def password = column[String]("password")
    def firstname = column[Option[String]]("firstname")
    def lastname = column[Option[String]]("lastname")
    def username = column[Option[String]]("username")
    def age = column[Int]("age")
    def * = (id, email, password, firstname, lastname, username, age) <> ((User.apply _).tupled, User.unapply)
  }
}
I basically have the same table definition in two different places now: once in the 1.sql script and once in the UserDAO class.
I really don't like this design at all! Having the same table definition in two different places just doesn't seem right.
Is there some way to generate the evolution scripts from the table definitions inside the UserDAO classes? Or is there a completely different way to generate the table definitions during startup (perhaps using only Slick)? I would really like to use only the Slick table definitions and get rid of the annoying SQL scripts.
I am using play-2.4 and play-slick-1.0.
Thanks a lot.

Great question - I was in the same boat as you!
I'd have just the DAO and this code:
TableQuery[UsersTable].schema.create
which will create the database table for you. Note that in Slick 3 this is a DBIO action, so it has to be executed with db.run. No need for the .sql script.
Correspondingly, to drop the table, use .drop instead of .create.
You can also combine the schemas of several tables using reduceLeft. Here's how I do it:
lazy val allTables = Array(
  TableQuery[AcceptanceTable].schema,
  [... many more ...]
  TableQuery[UserTable].schema
).reduceLeft(_ ++ _)

/** Create all tables in database */
def create = {
  allTables.create
}

/** Delete all tables in database */
def drop = {
  allTables.drop
}
All that will need the driver API in scope, such as:
val profile = slick.driver.H2Driver
import profile.api._
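
For completeness, a minimal sketch of actually executing the combined DDL at application start; the db value (from HasDatabaseConfigProvider, as in the question) and the blocking Await are assumptions for illustration:

import scala.concurrent.Await
import scala.concurrent.duration._

// schema.create only builds a DBIO action; nothing runs until db.run
Await.result(db.run(allTables.create), 10.seconds)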

Related

How to update a table in Phantom for Cassandra - Scala

I created the following table for Cassandra:
abstract class MessageTable extends Table[ConcreteMessageModel, Message] {
  override def tableName: String = "messages"

  // String because TimedUUIDs are bad bad bad
  object id extends Col[String] with PartitionKey {
    override lazy val name = "message_id"
  }

  object phone extends Col[String]
  object message extends Col[String]
  object provider_message_id extends Col[Option[String]]
  object status extends Col[Option[String]]

  object datetime extends DateColumn {
    override lazy val name = "message_datetime"
  }

  override def fromRow(r: Row): Message = Message(
    phone(r),
    message(r),
    Some(UUID.fromString(id(r))),
    None,
    status(r),
    Some(ZonedDateTime.ofInstant(datetime(r).toInstant, ZoneOffset.UTC))
  )
}
In the above table, I want to be able to update rows based on id or provider_message_id.
I can easily update a row using id:
update().where(_.id eqs message.id)...
But I can't update the table using provider_message_id:
update().where(_.provider_message_id eqs callback_id)...
How can I use multiple fields to update the table in Cassandra?
The restriction with Cassandra updates is that they only work against the primary key. The primary key can be a single column (the partition key), or multiple columns (a partition key and one or more clustering keys).
In the case that you are describing, you need to ensure that both id and provider_message_id are part of the primary key; the description of the table in cql should be something similar to:
cqlsh:> DESCRIBE keyspace1.messages;
...
CREATE TABLE keyspace1.messages (
  id text,
  phone text,
  message text,
  provider_message_id text,
  status text,
  datetime date,
  PRIMARY KEY (id, provider_message_id)
) WITH CLUSTERING ORDER BY (provider_message_id ASC)
...
Also, please note that you will then need to use both id and provider_message_id in all the update queries (there is no updating by id alone or by provider_message_id alone). Your code will look like:
update().where(_.id eqs message.id).and(_.provider_message_id eqs callback_id)...
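
On the phantom side, a hedged sketch of the corresponding column change, assuming phantom's ClusteringOrder/Ascending mixins; note the column can no longer be Col[Option[String]], since a clustering column cannot be null:

// provider_message_id promoted into the primary key as a clustering column
object provider_message_id extends Col[String] with ClusteringOrder with Ascending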

Custom column (Object) type in Slick

It seems that I can't find anywhere how to properly use custom column types in Slick, and I've been struggling for a while. The Slick documentation suggests MappedColumnType, but I found it usable only for simple use-cases like primitive type wrappers (or it's probably just me not knowing how to use it properly).
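For reference, the simple wrapper case that MappedColumnType does handle looks roughly like this, assuming the driver's api._ is imported (the Email type is a hypothetical example):

// a minimal sketch: mapping a single-value wrapper to a primitive column
case class Email(value: String)
implicit val emailColumnType: BaseColumnType[Email] =
  MappedColumnType.base[Email, String](_.value, Email.apply)
// after this, column[Email]("email") works in a table definition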
Let's say that I have a Jobs table in my DB described by the JobsTableDef class. In that table, I have columns companyId and responsibleUserId which are foreign keys to Company and User objects in their respective tables (CompaniesTableDef, UsersTableDef).
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def companyId = column[Long]("companyId")
  def responsibleUserId = column[Long]("responsibleUserId")

  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id)

  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]

  override def * = (id, title, companyId, responsibleUserId) <> (Job.tupled, Job.unapply)
}

class CompaniesTableDef(tag: Tag) extends Table[Company](tag, "companies") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def about = column[String]("about")

  override def * = (id, name, about) <> (Company.tupled, Company.unapply)
}

class UsersTableDef(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def username = column[String]("username", O.Unique)

  override def * = (id, username) <> (User.tupled, User.unapply)
}
What I would like to achieve is to automatically 'deserialize' Company and User represented by their IDs in Jobs table. For example:
class JobsTableDef(tag: Tag) extends Table[Job](tag, "jobs") {
  def id = column[Long]("id", O.AutoInc, O.PrimaryKey)
  def title = column[String]("title")
  def company = column[Company]("companyId")
  def responsibleUser = column[User]("responsibleUserId")

  def companyFK = foreignKey("COMPANY_ID_FK", companyId, companies)(i => i.id.?)
  def responsibleUserFK = foreignKey("RESPONSIBLE_USER_FK", responsibleUserId, users)(i => i.id.?)

  val companies = TableQuery[CompaniesTableDef]
  val users = TableQuery[UsersTableDef]

  override def * = (id, title, company, responsibleUser) <> (Job.tupled, Job.unapply)
}
given that my Job class is defined like this:
case class Job(
  id: Long,
  title: String,
  company: Company,
  responsibleUser: User
)
Currently, I'm doing it the old-fashioned way: getting the Job from the DB, reading companyId and responsibleUserId, then querying the DB again and manually constructing another Job object (of course, I could also join the tables, get the data as a tuple and then construct the Job object). I seriously doubt that this is the way to go. Is there a smarter, more elegant way to instruct Slick to automagically fetch linked objects from other tables?
EDIT: I'm using Play 2.6.12 with Slick 3.2.2
After a couple of days of deeper investigation, I've concluded that this is currently impossible in Slick. What I was looking for could be described as auto-joining tables through custom column types. Slick indeed supports custom column types (embodied by MappedColumnType, as described in the docs), but this works only for relatively simple types that aren't composed of other objects deserialized from the DB (at least not automatically; you could always try to fetch the other object from the DB and then Await.result() the resulting Future, but I guess that's not good practice).
So, to answer myself: 'auto joining' in Slick isn't possible, and I'm falling back to manual joins with manual object construction.
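For reference, a minimal sketch of that manual-join fallback, using the first JobsTableDef above; it assumes the table is mapped to a flat JobRow(id, title, companyId, responsibleUserId) case class instead of the rich Job, and that db and an ExecutionContext are in scope:

import scala.concurrent.ExecutionContext.Implicits.global

val jobs = TableQuery[JobsTableDef]

// foreign keys declared on the table can be used to navigate in a join
val query = for {
  j <- jobs
  c <- j.companyFK
  u <- j.responsibleUserFK
} yield (j, c, u)

// construct the rich Job objects from the joined tuples
val fullJobs = db.run(query.result).map(_.map {
  case (j, c, u) => Job(j.id, j.title, c, u)
})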

Slick schema/design guidelines

Suppose I have the following schema (simplified), using Slick 2.1:
// Banks
case class Bank(id: Int, name: String)

class Banks(tag: Tag) extends Table[Bank](tag, "banks") {
  def id = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id, name) <> (Bank.tupled, Bank.unapply)
}

lazy val banks = TableQuery[Banks]
Each bank has, say, a 1:1 BankInfo, which I keep in a separate table:

// Bank Info
case class BankInfo(bank_id: Int, ...)

class BankInfos(tag: Tag) extends Table[BankInfo](tag, "bank_infos") {
  def bankId = column[Int]("bank_id", O.PrimaryKey)
  ...
}

lazy val bankInfos = TableQuery[BankInfos]

And each bank has associated 1:M BankItems:

// Bank Item
case class BankItem(id: Int, bank_id: Int, ...)

class BankItems(tag: Tag) extends Table[BankItem](tag, "bank_items") {
  def id = column[Int]("id", O.PrimaryKey)
  def bankId = column[Int]("bank_id")
  ...
}

lazy val bankItems = TableQuery[BankItems]
So, if I were using an ORM, I would have convenient accessors for associated data, something like bank.info or bank.items. I've read "Migrating from ORMs", and I understand that Slick doesn't support full relationship mapping, though I've seen examples where foreignKey was used.
Basically, where should I place the code that accesses related data (I want to access all BankItems for some Bank, and its BankInfo)? Should it be implemented in the case classes, in the Table classes, or elsewhere? Can someone give me practical advice on what the "standard practice" is in this case?
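For what it's worth, one common pattern (a sketch, not an official Slick convention) is to keep such queries out of the case classes and group them in a repository object next to the TableQuery vals; in Slick 2.1 you would then run them with an implicit session in scope:

object BankRepository {
  def infoFor(bankId: Int) = bankInfos.filter(_.bankId === bankId)
  def itemsFor(bankId: Int) = bankItems.filter(_.bankId === bankId)
}

// usage, assuming an implicit Session (Slick 2.x style):
// val items = BankRepository.itemsFor(bank.id).list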

slick 3 auto-generated - default value (timestamp) column, how to define a Rep[Date] function

I have the following postgres column definition:
record_time TIMESTAMP WITHOUT TIME ZONE DEFAULT now()
How would I map this in Slick? Please take into account that I want to map the default value generated by the now() function, i.e.:
def recordTimestamp: Rep[Date] = column[Date]("record_time", ...???...)
Should any extra definition go where the ...???... is currently located?
EDIT (1)
I do not want to use
column[Date]("record_time", O.Default(new Date(System.currentTimeMillis()))) // or some such applicative generation of the date column value
I found a blog explaining that you can use the following:
// slick 3
import slick.profile.SqlProfile.ColumnOption.SqlType
def created = column[Timestamp]("created", SqlType("timestamp not null default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP"))

// slick 2
def createdAt = column[Timestamp]("createdAt", O.NotNull, O.DBType("timestamp default now()"))
see: http://queirozf.com/entries/scala-slick-dealing-with-datetime-timestamp-attributes
I guess this is not supported yet.
Here is the issue: https://github.com/slick/slick/issues/214
Slick 3 Example
import slick.driver.PostgresDriver.api._
import slick.lifted._
import java.sql.{Date, Timestamp}
/** A representation of the message decorated for Slick persistence.
  * created_date should always be null on insert operations;
  * it is set at the database level to ensure time synchronicity.
  * Id is the Twitter snowflake id. All columns are NotNull unless declared as Option.
  */
class RawMessages(tag: Tag) extends Table[(String, Option[String], Timestamp)](tag, Some("rti"), "RawMessages") {
  def id = column[String]("id", O.PrimaryKey)
  def MessageString = column[Option[String]]("MessageString")
  def CreatedDate = column[Timestamp]("CreatedDate", O.SqlType("timestamp default now()"))
  def * = (id, MessageString, CreatedDate)
}
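
A hedged usage sketch of the pattern above: to let the database fill in CreatedDate via its default, insert only the other columns (the rawMessages and db values here are assumptions for the example):

val rawMessages = TableQuery[RawMessages]

// omit CreatedDate so "timestamp default now()" applies at the DB level
val insertAction = rawMessages.map(r => (r.id, r.MessageString)) += (("id-1", Some("payload")))
db.run(insertAction)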

Defining projection to map to nested case classes

I have these case classes:
case class PolicyHolder(id: String, firstName: String, lastName: String)
case class Policy(address: Future[Address], policyHolder: Future[PolicyHolder], created: RichDateTime, duration: RichDuration)
I then have a Slick schema defined for Policy:
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolder = foreignKey("POLICY_HOLDER_FK", address, TableQuery[PolicyHolderDAO])(_.id)
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")

  def * = (address, policyHolder, created, duration) <> (Policy.apply, Policy.unapply)
}
What is the best way for me to define this projection correctly so that the policyHolder field inside my Policy case class is mapped from the foreign key value to an actual instance of the PolicyHolder case class?
Our solution to this problem is to place the foreign key id in the case class, and then to use a lazy val or a def (the latter possibly backed by a cache) to retrieve the record using the key. This assumes that your PolicyHolders are stored in a separate table; if they're denormalized but you want to treat them as separate case classes, you can have the lazy val / def in Policy construct a new case class instead of retrieving the record via the foreign key.
class PolicyDAO(tag: Tag) extends Table[Policy](tag, "POLICIES") with DbConfig {
  def address = column[String]("ADDRESS", O.PrimaryKey)
  def policyHolderId = column[String]("POLICY_HOLDER_ID")
  def created = column[RichDateTime]("CREATED")
  def duration = column[String]("DURATION")

  def * = (address, policyHolderId, created, duration) <> ((Policy.apply _).tupled, Policy.unapply)
}

case class Policy(address: Future[Address], policyHolderId: Future[String], created: RichDateTime, duration: RichDuration) {
  lazy val policyHolder = policyHolderId.map(id => PolicyHolderDAO.get(id))
}
We also used a common set of create/update/delete methods to account for the nesting, so that when a Policy is committed its inner PolicyHolder will also be committed; we used a CommonDAO class that extended Table and had the prototypes for the create/update/delete methods, and then all DAOs extended CommonDAO instead of Table and overrode create/update/delete as necessary.
Edit: To cut down on errors and reduce the amount of boilerplate we had to write, we used Slick's code generation tool; this way the CRUD operations could be automatically generated from the schema.
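
For anyone looking for it, that code generator lives in the slick-codegen artifact; a minimal sketch of invoking it (the JDBC URL, output folder, and package are placeholder assumptions):

// generates a Tables.scala with table classes and TableQuery vals
slick.codegen.SourceCodeGenerator.main(Array(
  "slick.driver.PostgresDriver",        // Slick profile
  "org.postgresql.Driver",              // JDBC driver
  "jdbc:postgresql://localhost/mydb",   // database URL (placeholder)
  "src/main/scala",                     // output folder (placeholder)
  "demo.generated"                      // package for generated code
))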