How to do cascade delete for a foreign key in Slick (Scala)

In Slick, how do I do cascade deletion for a foreign key? Is there a way to specify this at the schema level, or does it have to be done when delete is called on the query?

I updated the docs: https://github.com/slick/slick/pull/721/
A foreign key constraint can be defined with a Table’s foreignKey method. It first takes a name for the constraint, the referencing column(s) and the referenced table. The second argument list takes a function from the referenced table to its referenced column(s) as well as ForeignKeyAction for onUpdate and onDelete, which are optional and default to NoAction. When creating the DDL statements for the table, the foreign key definition is added to it.
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES") {
  def supID = column[Int]("SUP_ID")
  //...
  def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id,
    onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
  // compiles to SQL:
  //   alter table "COFFEES" add constraint "SUP_FK" foreign key("SUP_ID")
  //   references "SUPPLIERS"("SUP_ID")
  //   on update RESTRICT on delete CASCADE
}
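With the constraint in the DDL, the cascading happens inside the database: deleting a supplier row also deletes its coffees, with no extra Slick code. A minimal sketch of the referenced table and a cascading delete, assuming Slick 3's lifted API (e.g. import slick.jdbc.PostgresProfile.api._, or your profile's api) and a configured db:

class Suppliers(tag: Tag) extends Table[(Int, String)](tag, "SUPPLIERS") {
  def id   = column[Int]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def *    = (id, name)
}
val suppliers = TableQuery[Suppliers]

// The constraint only exists if the tables were created from this schema:
// db.run((suppliers.schema ++ TableQuery[Coffees].schema).create)

// Deleting the parent row; the database removes the matching COFFEES rows
// itself via ON DELETE CASCADE.
db.run(suppliers.filter(_.id === 42).delete)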

Related

How to set unique constraint in Quill.io with case class

I am using the Quill library and mapping entities via case classes:
case class Country(id: Long, name: String)
How can I set a unique constraint on the name field?

How to update a table in Phantom for Cassandra (Scala)

I created the following table for Cassandra:
abstract class MessageTable extends Table[ConcreteMessageModel, Message] {
  override def tableName: String = "messages"

  // String because TimedUUIDs are bad bad bad
  object id extends Col[String] with PartitionKey {
    override lazy val name = "message_id"
  }
  object phone extends Col[String]
  object message extends Col[String]
  object provider_message_id extends Col[Option[String]]
  object status extends Col[Option[String]]
  object datetime extends DateColumn {
    override lazy val name = "message_datetime"
  }

  override def fromRow(r: Row): Message = Message(
    phone(r),
    message(r),
    Some(UUID.fromString(id(r))),
    None,
    status(r),
    Some(ZonedDateTime.ofInstant(datetime(r).toInstant, ZoneOffset.UTC))
  )
}
In the above table, I want to be able to update rows based on id or provider_message_id.
I can easily update a row using id:
update().where(_.id eqs message.id)...
But I can't update the table using provider_message_id:
update().where(_.provider_message_id eqs callback_id)...
How can I use multiple fields to update the table in Cassandra?
There is a restriction on Cassandra updates: they only work against the primary key. The primary key can be a single column (the partition key) or multiple columns (a partition key plus one or more clustering keys).
In your case, you need to ensure that both id and provider_message_id are part of the primary key; the description of the table in CQL should look something like this:
cqlsh:> DESCRIBE keyspace1.messages;
...
CREATE TABLE keyspace1.messages (
    id text,
    phone text,
    message text,
    provider_message_id text,
    status text,
    datetime date,
    PRIMARY KEY (id, provider_message_id)
) WITH CLUSTERING ORDER BY (provider_message_id ASC)
...
Also, please note that you will need to use both id and provider_message_id in every update query (there is no updating by id or by provider_message_id alone). Your code will look like:
update().where(_.id eqs message.id).and(_.provider_message_id eqs callback_id)...
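For completeness, a sketch of a full update that filters on both key columns and modifies a non-key column. markDelivered, messageId, callbackId and the "delivered" value are hypothetical names; Phantom's setTo and future() are used as in its DSL, and provider_message_id is assumed to be declared as a clustering key in the Phantom table to match the CQL above:

// Both primary-key columns must appear in the where clause; only
// non-key columns (here: status) may be modified in a Cassandra update.
def markDelivered(messageId: String, callbackId: String): Future[ResultSet] =
  update()
    .where(_.id eqs messageId)
    .and(_.provider_message_id eqs callbackId)
    .modify(_.status setTo Some("delivered"))
    .future()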

Slick upsert doesn't work for a model whose ID is a character PK (in Postgres only)

I am trying to use the insertOrUpdate method from the Slick framework. This method works only if my model has a Long auto-increment ID as its PK. If I use a model whose ID is a unique String PK, then insertOrUpdate doesn't work in Postgres. But it works in H2.
Environment: scala-2.11.8, slick-3.0.0 or slick-3.2.1.
JDBC drivers: "org.postgresql" % "postgresql" % "9.4.1211.jre7" or "42.1.4"
Test case using a model whose ID is a String PK:
// model and schema
case class Supplier(id: String, name: String)

class Suppliers(tag: Tag) extends Table[Supplier](tag, "suppliers") {
  def id   = column[String]("SUP_ID", O.PrimaryKey)
  def name = column[String]("SUP_NAME")
  def *    = (id, name) <> (Supplier.tupled, Supplier.unapply)
}
val suppliers = TableQuery[Suppliers]
...

// usage
val action = suppliers.insertOrUpdate(Supplier("2", "name1"))
db.run(action) // raises the following exception (see below)
Exception:
org.postgresql.util.PSQLException: ERROR: syntax error at or near "merge"
Position: 1
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2477)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2190)
...
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction$$anonfun$nativeUpsert$1.apply(JdbcActionComponent.scala:562)
at slick.jdbc.JdbcActionComponent$InsertActionComposerImpl$InsertOrUpdateAction$$anonfun$nativeUpsert$1.apply(JdbcActionComponent.scala:559)
at slick.jdbc.JdbcBackend$SessionDef$class.withPreparedStatement(JdbcBackend.scala:371)
The above code works correctly with the H2 driver but not with the PostgreSQL driver.
Is this a bug, or should I use a workaround instead of insertOrUpdate?
UPD: this issue is probably the cause of another problem
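One possible workaround is to emulate the upsert client-side instead of relying on the profile's native upsert. A minimal sketch, assuming import slick.jdbc.PostgresProfile.api._, an implicit ExecutionContext in scope, and the Supplier model and suppliers query above:

// Try UPDATE first; fall back to INSERT when no row matched.
// transactionally makes the check-then-act atomic (concurrent
// inserts can still race on the PK and should be handled/retried).
def upsert(s: Supplier)(implicit ec: ExecutionContext): DBIO[Int] =
  suppliers.filter(_.id === s.id).update(s).flatMap {
    case 0 => suppliers += s
    case n => DBIO.successful(n)
  }.transactionally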

How to save a model, update the PK if it saves correctly

Say I have a model like:
case class User(id: Int, name: String)
I am using Slick 3, so I have all the Table definitions etc. in place.
My question is: I want to save a user and then update the id to the newly inserted PK value from PostgreSQL.
I want to re-use this pattern in my entire data layer, so it would be better if it could be extracted into a function.
So I want to do this:
save the model
update the id with the newly inserted primary key value
throw an exception if it didn't save
How can I do this with Slick 3.x?
def save(user: User): User = {
  // (users returning users.map(_.id)) ??
}
case class User(id: Option[Int], name: String)

// Slick 3: build a DBIO action and run it with db.run(...)
def save(user: User): DBIO[User] =
  (users returning users.map(_.id)
         into ((u, id) => u.copy(id = Some(id)))) += user
There is a relevant documentation page:
http://slick.typesafe.com/doc/3.1.0/queries.html#inserting
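A usage sketch, assuming a configured db (e.g. via Database.forConfig, config path hypothetical), the users TableQuery, and an implicit ExecutionContext:

// db.run materializes the DBIO; the returned Future fails with the
// underlying exception if the insert did not succeed.
val saved: Future[User] = db.run(save(User(None, "mgosk")))
saved.foreach(u => println(s"assigned id: ${u.id}"))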

Using Couchbase as cache over Slick

I'm trying to use Couchbase as a cache layer for a relational database that is accessed using Slick. The skeleton of my code that's relevant to the question is as follows:
class RdbTable[T <: Table[_]](implicit val bucket: CouchbaseBucket) {
  type ElementType = T#TableElementType

  private val table = TableQuery[T].baseTableRow

  private def cacheAll(implicit session: Session) =
    TableQuery[T].list foreach (elem => cache(elem))

  private def cache(elem: ElementType) =
    table.primaryKeys foreach (pk => bucket.set[ElementType](key(pk, elem), elem))

  private def key(pk: PrimaryKey, elem: ElementType) = ???

  .......
}
As you can see, I want to cache each element by all of its primary keys. For this purpose, I need to obtain the value of that key for the given element. But I don't see an obvious way to compute the value of a primary key (the column value, if single-column key; the tuple value, if multi-column).
Any suggestions on what to do? Note that the code MUST NOT know what the actual tables and their columns are. It must be completely general.
We're doing something similar, using Redis as the cache. Each of our records only has one primary key, but in some cases we need to include additional data with the cache key to avoid ambiguity (for example, we have a ManyToMany record that represents an association between two records; when we return a ManyToMany record we'll embed one (but not both) of the associated records, and so in the cache key we need to include the type of the associated record that we're returning).
trait Record {
  val cacheKey: CacheKey
}

trait ManyToManyRecord extends Record {
  override val cacheKey: ManyToManyCacheKey
}

class CacheKey(recordType: String, key: Int) {
  def getKey: String = recordType + ":" + key.toString
}

class ManyToManyCacheKey(recordType: String, key: Int, assocType: String)
    extends CacheKey(recordType, key) {
  override def getKey: String = recordType + ":" + key.toString + ":" + assocType
}
All of our tables use an integer primary key called "id", so it's easy for us to figure out the value of "key". If you're working with a more complicated schema and don't want to manually write out the "def key: String" (or whatever) definitions for all of your record / table types, then you could try using Slick code generation to automatically generate record / table classes / objects with "def key" created directly from the schema. However, the learning curve for Slick code generation (or any other code generation tool) is steep, so if this is your only use for it then you'd probably be better off writing "def key" by hand. (We generate somewhere between 20% and 30% of our code using the code generation tool, so for us the initial investment in learning the tool has paid off.)
Slick doesn't come with a built-in primary key extractor for entities. What you can do is use interfaces, type classes, or reflection, e.g. variants of the following:
Either make your entities implement a trait:
trait HasPrimaryKey {
  def primaryKey: Any
}

class RdbTable[T <: Table[_ <: HasPrimaryKey]](implicit val bucket: CouchbaseBucket) {
  ...
  private def key(elem: ElementType) = elem.primaryKey
}

// and for each entity:
case class Person( ... ) extends HasPrimaryKey {
  def primaryKey = ...
}
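To make the sketch concrete, a hypothetical entity could look like:

case class Person(id: Int, name: String) extends HasPrimaryKey {
  def primaryKey: Any = id
}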
Or use a type class:
trait KeyTypeClass[E, T <: Table[E]] {
  def key(e: E): Any
}

class RdbTable[E, T <: Table[E]](implicit val bucket: CouchbaseBucket, keyTC: KeyTypeClass[E, T]) {
  ...
  private def key(elem: ElementType) = keyTC.key(elem)
}

// and for each entity:
implicit val personKey = new KeyTypeClass[Person, PersonTable] {
  def key(p: Person) = ...
}
Or use reflection to iterate over the primary keys and pull the values out of the corresponding fields of the entity.
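A rough sketch of the reflection variant, assuming entities are case classes and the primary-key field names are known up front (keyOf and pkFieldNames are hypothetical):

// Case-class accessors are ordinary methods, so plain Java reflection can
// read the key fields by name and join them into a cache key.
def keyOf(entity: Product, pkFieldNames: Seq[String]): String =
  pkFieldNames
    .map(field => entity.getClass.getMethod(field).invoke(entity))
    .mkString(":")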
Code generation as mentioned by Zim-Zam can help with the repetitive elements.