Anorm Scala insert list of objects with nested list - scala

I find myself in need of inserting a sequence of elements with a sequence of nested elements into a PostgreSQL database, preferably with a single statement, because I am returning a Future. I am using Scala Play with Anorm.
My data looks something like below.
case class Question(id: Long, titel: String)
case class Answer(questionId: Long, text: String)
In db it looks like this:
CREATE TABLE questions (
    question_id SERIAL PRIMARY KEY NOT NULL,
    titel TEXT NOT NULL
);
CREATE TABLE answers (
    answer_id SERIAL PRIMARY KEY NOT NULL,
    question_id INT NOT NULL,
    text TEXT NOT NULL,
    FOREIGN KEY (question_id) REFERENCES questions(question_id) ON DELETE CASCADE
);
My function would look something like this:
def saveFormQuestions(questions: Seq[Question], answers: Seq[Answer]): Future[Long] = {
  Future {
    db.withConnection { implicit c =>
      SQL(
        // sql
      ).executeInsert()
    }
  }
}
Somehow, in Anorm, SQL, or both, I have to do the following, preferably in a single transaction:
foreach question in questions
  insert question into questions
  foreach answer in answers, where answer.questionId == old question.id
    insert answer into answers with new question id gained from question insert
I am new with Scala Play, so I might have made some assumptions I shouldn't have. Any ideas to get me started would be appreciated.

I solved it with logic inside the db.withConnection block. Somehow I assumed that you had to have a single SQL statement inside db.withConnection, which turned out not to be true. So like this:
val idMap = scala.collection.mutable.Map[Long, Long]() // structure to hold map of old ids to new
db.withConnection { implicit conn =>
  // save all questions and gather map of the new ids to the old
  for (q <- questions) {
    val id: Long = SQL("INSERT INTO questions (titel) VALUES ({titel})")
      .on('titel -> q.titel)
      .executeInsert(scalar[Long].single)
    idMap(q.id) = id
  }
  // save answers with new question ids
  if (answers.nonEmpty) {
    for (a <- answers) {
      SQL("INSERT INTO answers (question_id, text) VALUES ({qid}, {text});")
        .on('qid -> idMap(a.questionId), 'text -> a.text)
        .execute()
    }
  }
}
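The id-remapping step can be kept separate from the database calls. A minimal sketch of that pure logic (the case classes are the ones from the question; `remapAnswers` is a hypothetical helper, not part of Anorm):

```scala
case class Question(id: Long, titel: String)
case class Answer(questionId: Long, text: String)

// Rewrite each answer's foreign key using the old-id -> new-id map
// built while inserting the questions.
def remapAnswers(answers: Seq[Answer], idMap: Map[Long, Long]): Seq[Answer] =
  answers.map(a => a.copy(questionId = idMap(a.questionId)))
```

Testing this part in isolation is easy, since it touches no connection at all.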

As indicated by its name, Anorm is not an ORM, and won't generate the statements for you.
You will have to write the statements appropriate to represent the data and relationships yourself (e.g. my Acolyte tutorial).
As for transactions, Anorm is a thin/smart wrapper around JDBC, so the JDBC transaction semantics are kept. BTW, Play provides .withTransaction on its DB resolution utility.


Alternatives for withFilterExpression for supporting composite key

I'm trying to query DynamoDB through withFilterExpression. I get an error because the argument is a composite key:
Filter Expression can only contain non-primary key attributes: Primary key attribute: question_id
and also because the query uses the OR operator, it cannot be passed to withKeyConditionExpression.
The query passed to withFilterExpression is similar to question_id = 1 OR question_id = 2. The entire code is as follows:
import scala.collection.mutable.ArrayBuffer

def getQuestionItems(conceptCode: String) = {
  val qIds = List("1", "2", "3")
  val hash_map: java.util.Map[String, Object] = new java.util.HashMap[String, Object]()
  var queries = ArrayBuffer[String]()
  hash_map.put(":c_id", conceptCode)
  for ((qId, index) <- qIds.zipWithIndex) {
    val placeholder = ":qId" + index
    hash_map.put(placeholder, qId)
    queries += "question_id = " + placeholder
  }
  val query = queries.mkString(" or ")
  val querySpec = new QuerySpec()
    .withKeyConditionExpression("concept_id = :c_id")
    .withFilterExpression(query)
    .withValueMap(hash_map)
  questionsTable.query(querySpec)
}
Apart from withFilterExpression and withConditionExpression, are there any other methods of QuerySpec that I can use?
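As an aside, the placeholder-building loop above can be written without mutable state. A pure sketch of the same string construction, independent of the AWS SDK (`buildFilter` is a hypothetical helper):

```scala
// Build one "question_id = :qIdN" clause plus its value binding per id,
// then join the clauses with "or" to form the filter expression.
def buildFilter(qIds: Seq[String]): (String, Map[String, String]) = {
  val parts = qIds.zipWithIndex.map { case (qId, i) =>
    val placeholder = s":qId$i"
    (s"question_id = $placeholder", placeholder -> qId)
  }
  (parts.map(_._1).mkString(" or "), parts.map(_._2).toMap)
}
```

The resulting expression and value map can then be fed to withFilterExpression and withValueMap as before.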
Let's raise things up a level. With a Query (as opposed to a GetItem or Scan) you provide a single PK value and optionally an SK condition. That's what a Query requires. You can't provide multiple PK values. If you want multiple PK values, you can make multiple Query calls, or possibly consider a Scan across all PK values.
You can also consider a GSI that presents the data in a format more suitable for efficient lookup.
Side note: with PartiQL you can actually specify multiple PK values, up to a limit. So if you really, truly want this, that's a possibility. The downside is that it moves you up to a new level of abstraction and can make inefficiencies hard to spot.

Slick Postgres: How to search in a list of strings using the like operator

There is a simple database entity:
case class Foo(id: Option[UUID], keywords: Seq[String])
I want to implement a search function which returns all entities of type Foo which have at least one keyword that contains the search string.
I'm using Slick and tried this:
def searchKeywords(txt: String): Future[Seq[Foo]] = {
  val action = Foos.filter(p => p.keywords.any like s"%$txt%").result
  db.run(action)
}
This piece of code compiles, but when executing, I get this SQL error:
PSQLException: ERROR: syntax error at or near "any"
The generated SQL statement looks like:
select "id", "title", "tagline", "logo", "short_desc", "keywords", "initial_condition", "work_process", "end_result", "ts", "lm", "v" from "projects" where any("keywords") like '%foo%'
And it does not work with PostgreSQL. (I'm using v12.)
Schema for the table looks like this:
CREATE TABLE foos
(
    id UUID NOT NULL PRIMARY KEY,
    keywords varchar[] NOT NULL
);
How can I achieve to search in a list of strings using the like operator?
From a pure SQL point of view, you need a derived table to achieve that. I hope some expert corrects me if I'm wrong, but you can't use the SQL like operator on an array.
Supposing your table construction is :
CREATE TABLE foos
(
    id UUID NOT NULL PRIMARY KEY,
    keywords varchar[] NOT NULL
);
Then an SQL way of retrieving the results would be:
select * from (
    select id, unnest(keywords) as keyw from foos
) myTable where keyw like '%foo%'
Otherwise, the syntax you're using for the like operator seems correct.
myProperty like s"%$myVariable"
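For reference, the in-memory equivalent of what the unnest + like query computes can be sketched in plain Scala, with no Slick involved (`Foo` is the case class from the question; `searchKeywords` here is a hypothetical pure helper, not the Slick version):

```scala
import java.util.UUID

case class Foo(id: Option[UUID], keywords: Seq[String])

// Keep every Foo with at least one keyword containing the search string,
// mirroring the derived-table query above.
def searchKeywords(foos: Seq[Foo], txt: String): Seq[Foo] =
  foos.filter(_.keywords.exists(_.contains(txt)))
```

This is only the semantics; the database-side query still needs the derived table (or an extension such as slick-pg's array operators) to express it in SQL.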

Updating a many-to-many join table in slick 3.0

I have a database structure with a many-to-many relationship between Dreams and Tags.
Dreams and Tags are kept in separate tables, and there is a join table between them as usual in this kind of situation, with the class DreamTag representing the connection:
protected class DreamTagTable(tag: Tag) extends Table[DreamTag](tag, "dreamtags") {
  def dreamId = column[Long]("dream_id")
  def dream = foreignKey("dreams", dreamId, dreams)(_.id)
  def tagId = column[Long]("tag_id")
  def tag = foreignKey("tags", tagId, tags)(_.id)

  // default projection
  def * = (dreamId, tagId) <> ((DreamTag.apply _).tupled, DreamTag.unapply)
}
I have managed to perform the appropriate double JOIN to retrieve a Dream with its Tags, but I struggled to do it in a fully non-blocking manner.
Here is my code for performing the retrieval, as this may shed some light on things:
def createWithTags(form: DreamForm): Future[Seq[Int]] = db.run {
  logger.info(s"Creating dream [$form]")

  // action to put the dream
  val dreamAction: DBIO[Dream] =
    dreams.map(d => (d.title, d.content, d.date, d.userId, d.emotion))
      .returning(dreams.map(_.id))
      .into((fields, id) => Dream(id, fields._1, fields._2, fields._3, fields._4, fields._5))
      .+=((form.title, form.content, form.date, form.userId, form.emotion))

  // action to put any tags that don't already exist (create a single action)
  val tagActions: DBIO[Seq[MyTag]] =
    DBIO.sequence(form.tags.map(text => createTagIfNotExistsAction(text)))

  // zip allows us to get the results of both actions in a tuple
  val zipAction: DBIO[(Dream, Seq[MyTag])] = dreamAction.zip(tagActions)

  // put the entries into the join table, if the zipAction succeeds
  val dreamTagsAction = exec(zipAction.asTry) match {
    case Success(value) => value match {
      case (dream, tags) =>
        DBIO.sequence(tags.map(tag => createDreamTagAction(dream, tag)))
    }
    case Failure(exception) => throw exception
  }

  dreamTagsAction
}

private def createTagIfNotExistsAction(text: String): DBIO[MyTag] = {
  tags.filter(_.text === text).result.headOption.flatMap {
    case Some(t: MyTag) => DBIO.successful(t)
    case None =>
      tags.map(t => t.text)
        .returning(tags.map(_.id))
        .into((text, id) => MyTag(id, text)) += text
  }
}

private def createDreamTagAction(dream: Dream, tag: MyTag): DBIO[Int] = {
  dreamTags += DreamTag(dream.id, tag.id)
}

/**
 * Helper method for executing an async action in a blocking way
 */
private def exec[T](action: DBIO[T]): T = Await.result(db.run(action), 2.seconds)
Scenario
Now I'm at the stage where I want to be able to update a Dream and the list of Tags, and I'm struggling.
Given a situation where the existing list of tags is ["one", "two", "three"] and is being updated to ["two", "three", "four"] I want to:
Delete the Tag for "one", if no other Dreams reference it.
Not touch the entries for "two" and "three", as the Tag and DreamTag entries already exist.
Create Tag "four" if it doesn't exist, and add an entry to the join table for it.
I think I need to do something like list1.diff(list2) and list2.diff(list1), but that would require performing a get first, which seems wrong.
Perhaps my thinking is wrong - Is it best to just clear all entries in the join table for this Dream and then create every item in the new list, or is there a nice way to diff the two lists (previous and existing) and perform the deletes/adds as appropriate?
Thanks for the help.
N.B. Yes, Tag is a super-annoying class name to have, as it clashes with slick.lifted.Tag!
Update - My Solution:
I went for option 2 as mentioned by Richard in his answer...
// action to put any tags that don't already exist (create a single action)
val tagActions: DBIO[Seq[MyTag]] =
  DBIO.sequence(form.tags.map(text => createTagIfNotExistsAction(text)))

// zip allows us to get the results of both actions in a tuple
val zipAction: DBIO[(Int, Seq[MyTag])] = dreamAction.zip(tagActions)

// first clear away the existing dreamtags
val deleteExistingDreamTags = dreamTags
  .filter(_.dreamId === dreamId)
  .delete

// put the entries into the join table, if the zipAction succeeds
val dreamTagsAction = zipAction.flatMap {
  case (_, tags) =>
    DBIO.sequence(tags.map(tag => createDreamTagAction(dreamId, tag)))
}

deleteExistingDreamTags.andThen(dreamTagsAction)
I struggled to do it in a fully non-blocking manner.
I see you have an exec call which is blocking. It looks like this can be replaced with a flatMap:
case class Dream()
case class MyTag()

val zipAction: DBIO[(Dream, Seq[MyTag])] =
  DBIO.successful( (Dream(), MyTag() :: MyTag() :: Nil) )

def createDreamTagAction(dream: Dream)(tag: MyTag): DBIO[Int] =
  DBIO.successful(1)

val action: DBIO[Seq[Int]] =
  zipAction.flatMap {
    case (dream, tags) => DBIO.sequence(tags.map(createDreamTagAction(dream)))
  }
Is it best to just clear all entries in the join table for this Dream and then create every item in the new list, or is there a nice way to diff the two lists (previous and existing) and perform the deletes/adds as appropriate?
Broadly, you have three options:
Look in the database to see what tags exist, compare them to what you want the state to be, and compute a set of insert and delete actions.
Delete all the tags and insert the state you want to reach.
Move the problem to SQL so you insert tags where they don't already exist in the table, and delete tags that don't exist in your desired state. You'd need to look at the capabilities of your database and likely need to use Plain SQL in Slick to get the effect. I'm not sure what the insert would be for adding tags (perhaps a MERGE or upsert of some kind), but deleting would be of the form: delete from tags where tag not in (1,2) if you wanted a final state of just tags 1 and 2.
The trade-offs:
For 1, you need to run 1 query to fetch existing tags, and then 1 query for the deletes, and at least 1 for the inserts. This will change the smallest number of rows, but will be the largest number of queries.
For 2, you'll be executing at least 2 queries: a delete and 1 (potentially) for a bulk insert. This will change the largest number of rows.
For 3, you'll be executing a constant 2 queries (if your database can carry out the logic for you). If this is even possible, the queries will be more complicated.
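The set computation behind option 1 is straightforward once the existing tags have been fetched. A pure sketch, independent of Slick (`diffTags` is a hypothetical helper):

```scala
// Given the tags currently linked to the Dream and the desired final state,
// compute (tags to delete, tags to insert); everything in both sets is untouched.
def diffTags(existing: Set[String], desired: Set[String]): (Set[String], Set[String]) =
  (existing.diff(desired), desired.diff(existing))
```

With the question's example, existing ["one", "two", "three"] and desired ["two", "three", "four"] yield "one" to delete and "four" to insert.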

Inner actions are not getting executed inside transaction in slick

I have a requirement where I have to insert into the book table and, based on the auto-generated id, insert into the bookModules table. For every auto-incremented bookModule id, I have to insert into the bookAssoc table. The bookModule rows are populated from a Seq[bookMods] and the bookAssociation data from a Seq[userModRoles].
I have written the code below to achieve this, but only action1 is executed; my inner actions are not getting executed. Please help me.
val action1 = bookDao.insert(book)
val action2 = action1.map { id =>
  DBIO.sequence(
    bookMods.map { bookMod =>
      bookModDao.insert(new bookModule(None, id, bookMod.moduleId, bookMod.isActive))
        .map { bookModId =>
          userModRoles.map { userModRole =>
            bookAssocDao.insert(new bookAssociation(None, bookModId, userModRole.moduleId, userModRole.roleId))
          }
        }
    })
}
db.run(action2.transactionally)
EDIT 1: Adding the code as a for comprehension
val action1 = for {
  bookId <- bookDao.insert(book) // db action
  bookMod <- bookModules // this is a scala collection; iterate each element and insert
  bookModId <- bookModDao.insert(new bookModule(None, bookId, bookMod.moduleId, bookMod.isActive))
  userModRole <- userModRoles // this is a scala collection; iterate each element and insert
  _ <- bookAssocDao.insert(new bookAssociation(None, bookModId, userModRole.moduleId, userModRole.roleId))
} yield ()
db.run(action1.transactionally)
You need to split your logic into two parts:
1) the DBIO actions
2) the iteration over the collections.
In that case a solution should be easy, but without using a for comprehension.
bookModules.map { bookMod =>
  userModRoles.map { userModRole =>
    db.run(
      bookDao.insert(book).flatMap { bookId =>
        bookModDao.insert(new bookModule(None, bookId, bookMod.moduleId, bookMod.isActive)).map { bookModId =>
          bookAssocDao.insert(new bookAssociation(None, bookModId, userModRole.moduleId, userModRole.roleId))
        }
      }.transactionally
    )
  }
}
Try something like this; it should work. You could think about moving db.run into the Dao classes, and here, in the service layer, you would probably work with Futures.
And sorry if I made some mistakes with the parentheses, but it's difficult to keep everything clear here :)
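The shape of the composition is the important part: each dependent insert is flatMapped into the previous one, and each collection of inserts is folded into a single action with a sequence combinator. A sketch of that shape, with Future standing in for DBIO and hypothetical stand-ins for the DAO inserts (each returning a fabricated generated id):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for the DAO inserts, each returning a generated id.
def insertBook(): Future[Long] = Future.successful(1L)
def insertBookModule(bookId: Long, moduleId: Long): Future[Long] =
  Future.successful(bookId * 100 + moduleId)
def insertBookAssoc(bookModId: Long, roleId: Long): Future[Long] =
  Future.successful(bookModId * 100 + roleId)

val moduleIds = Seq(1L, 2L)
val roleIds = Seq(7L)

// flatMap chains the dependent inserts; Future.sequence (DBIO.sequence in Slick)
// folds the per-element inserts into one action that actually gets run.
val assocIds: Future[Seq[Seq[Long]]] =
  insertBook().flatMap { bookId =>
    Future.sequence(moduleIds.map { mId =>
      insertBookModule(bookId, mId).flatMap { bmId =>
        Future.sequence(roleIds.map(rId => insertBookAssoc(bmId, rId)))
      }
    })
  }
```

With real DBIO actions, this same shape produces a single composed action that can be wrapped in .transactionally and passed to one db.run call, instead of running one transaction per collection element.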

Concatenating databases with Squeryl

I'm trying to use Squeryl to take the contents of a table from one database and append it to the equivalent table in another database. The primary key will have to be reassigned in the process, but I'm getting the error NULL not allowed for column "SIMID". Why is this?
object Concatenator {
  def main(args: Array[String]) {
    Class.forName("org.h2.Driver")

    val seshA = Session.create(
      java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsA", "sa", "password"),
      new H2Adapter
    )

    val seshB = Session.create(
      java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsB", "sa", "password"),
      new H2Adapter
    )

    using(seshA) {
      import Library._
      from(sims){s => select(s)}.foreach { item =>
        using(seshB) {
          sims.insert(item)
        }
      }
    }
  }

  case class Simulation(
    @Column("SIMID")
    var id: Long,
    val date: Date
  ) extends KeyedEntity[Long]

  object Library extends Schema {
    val sims = table[Simulation]

    on(sims)(s => declare(
      s.id is(unique, indexed, autoIncremented)
    ))
  }
}
Update:
I think it might be something to do with the DBs. They were created in a Java project using JPA/EclipseLink, and in addition to generating tables for my entities it also created a table called SEQUENCE, presumably for primary key generation.
I've found that I can create a brand new table in Squeryl and manually put the contents of both databases in that, thus achieving the same effect. Interestingly, this new table did not have any SEQUENCE table auto-generated. So I'm guessing it comes down to how JPA/EclipseLink was generating my primary keys?
Update 2:
As requested, I appended trace_level_file=3 to the url and the files are here: resultsA.trace.db and resultsB.trace.db. B is the more interesting one I think. Also, I've put a simplified version of the database here which has had unnecessary tables removed (the same database is used for resultsA and resultsB).
Just got a moment to look at this more closely. It turns out you were on the right track. While I guess that EclipseLink uses sequences to generate the PK value, Squeryl defines the column as something like:
simid bigint not null primary key auto_increment
Without the auto_increment flag a value is never placed in the column and you end up with the constraint violation you mentioned. It sounds like you've already worked around the issue, but hopefully this will help you or someone else in the future.
Not really a solution, but my workaround is to create a new database
val seshNew = Session.create(
  java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsNew", "sa", "password"),
  new H2Adapter
)
and then just write all the data from the other databases into it
using(seshNew) {
  sims.insert(new Simulation(0, item.date))
}
The primary key of 0 gets overwritten as appropriate.