Slick issue when going with PostgreSQL

I'm using Slick in a Scala project to query some tables.
// define table
object Addresses extends Table[Address]("assetxs.address") {
  def id = column[Int]("id", O.PrimaryKey)
  def street = column[String]("street")
  def number = column[String]("number")
  def zipcode = column[String]("zipcode")
  def country = column[String]("country")
  def * = id ~ street ~ number ~ zipcode ~ country <> (Address, Address.unapply _)
}
If I use any query on this table it does not work (it says it cannot find my table), so I went further and printed out the query:
implicit val session = Database.forURL("jdbc:postgresql://localhost:5432/postgres",
  driver = "org.postgresql.Driver", user = "postgres", password = "postgres").createSession()

session.withTransaction {
  val query = Query(Addresses)
  println("Addresses: " + query.selectStatement)
}
I noticed that the schema.table name is wrapped in double quotes, so the statement is:
select x2."id", x2."street", x2."number", x2."zipcode", x2."country"
from "assetxs.address" x2
which of course does not work (I tried to run it in a PostgreSQL tool and had to remove the quotes around the table name to get it working).
Is there any Slick option to omit the quotes around table names in generated queries?

You've put the schema into the table name. A (quoted) table name containing a dot character is valid in SQL but it's not what you want here. You have to specify the schema separately:
object Addresses extends Table[Address](Some("assetxs"), "address")
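With the schema given separately, the generated statement should quote the schema and table as two separate identifiers (expected output, shown for illustration; not taken from the original post):
select x2."id", x2."street", x2."number", x2."zipcode", x2."country"
from "assetxs"."address" x2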

In the end I was able to solve this issue.
I specified the table name only:
object Addresses extends Table[Address]("address")
and changed my PostgreSQL config so that my schema is included in the search path (it seems that Slick otherwise looks in the public schema only):
search_path = '"$user",assetxs,public'
and now it works.
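If editing postgresql.conf is inconvenient, the same search path can also be set per database with plain SQL (an alternative sketch, not from the original answer):
ALTER DATABASE postgres SET search_path TO "$user", assetxs, public;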

The solution I found when wanting to work with both H2 (testing) and Postgres (production) using Liquibase and Slick:
Stick with lowercase in your Slick Table objects:
class MyTable(tag: Tag) extends Table[MyRecord](tag, Some("my_schema"), "my_table")
In your H2 URL config you'll need to specify DATABASE_TO_UPPER=false (this prevents the table and column names from being upper-cased) and put quotation marks around the INIT schema (this prevents the schema from being upper-cased):
url = "jdbc:h2:mem:test;MODE=PostgreSQL;DATABASE_TO_UPPER=false;INIT=create schema if not exists \"my_schema\"\;SET SCHEMA \"my_schema\""
When specifying schema names in Liquibase scripts, they must also be quoted so that H2 won't try to capitalize them.
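For a quick sanity check, the same URL can also be used directly from Scala (a sketch assuming Slick 3.x's Database.forURL; note the extra escaping a Scala string literal needs):
import slick.jdbc.JdbcBackend.Database

// "\\;" yields the "\;" separator H2 expects between INIT statements
val testDb = Database.forURL(
  "jdbc:h2:mem:test;MODE=PostgreSQL;DATABASE_TO_UPPER=false;" +
    "INIT=create schema if not exists \"my_schema\"\\;SET SCHEMA \"my_schema\"",
  driver = "org.h2.Driver"
)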

Since this problem is still bothering Scala newcomers (like me), I did some research and found that the following application.conf works with Slick 3.1.1 and PostgreSQL 9.5:
postgres.devenv = {
  url = "jdbc:postgresql://localhost:5432/dbname?currentSchema=customSchema"
  user = "user"
  password = "password"
  driver = org.postgresql.Driver
}
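The database can then be obtained from that config path (a minimal sketch using Slick 3.x's Database.forConfig):
import slick.jdbc.JdbcBackend.Database

// Reads url/user/password/driver from "postgres.devenv" in application.conf
val db = Database.forConfig("postgres.devenv")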

You're just using the wrong driver; check your imports:
import scala.slick.driver.PostgresDriver.simple._
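(For Slick 3.x, where the blocking "simple" API was removed, the equivalent import would be slick.driver.PostgresDriver.api._.)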

Related

Kotlin Exposed/PostgreSQL is lower-casing my table name in queries; how do I use capitalized table names?

I have the following query, built with Kotlin Exposed, against a Postgres server with a capitalized table name:
object Table : IntIdTable("Table") {
    val tC = text("Text")
    val vC = text("Value")
}

Database.connect("jdbc:postgresql://...", driver = "org.postgresql.Driver")

transaction {
    logger.addLogger(StdOutSqlLogger)
    val query = Table.select {
        Table.id eq 5
    }
    query.forEach {
        println(it[Table.tC])
    }
}
But I am getting back:
Exception in thread "main" org.postgresql.util.PSQLException: ERROR: relation "table" does not exist
Usually I would simply quote the table name ("Table") to use the capitalized name, but I can't seem to do that with Kotlin Exposed. Is there a way to use the capitalized table name by preventing it from being lower-cased?
I was able to resolve this by using escaped quotes within the table name string; for the question above that would be:
object Table : IntIdTable("\"Table\"") {
Could you provide the whole sample and point to the place where the exception is thrown? From the current code it's unclear what is trying to create a relation to the table, and how.

Parse multiple ResultSets from select query with Anorm 2.4

I am continuing my journey with Play, Scala & Anorm, and I have run into the following problem:
One of my repository classes holds the logic to fetch the list of emails for a user from the DB, as follows:
val sql =
  """
    |select * from user_email where user_id = {user_id}
    |;--
  """.stripMargin

SQL(sql).on(
  'user_id -> user.id
).as(UserEmail.simpleParser *)
The parser looks like this:
val simpleParser: RowParser[UserEmail] = (
  SqlParser.get[Muid](qualifiedColumnNameOf("", Identifiable.Column.Id)) ~
  AuditMetadata.generateParser("") ~
  SqlParser.get[Muid]("user_id") ~
  SqlParser.get[String]("email") ~
  SqlParser.get[Boolean]("verified") ~
  SqlParser.get[Muid]("verification_code_id") ~
  SqlParser.get[Option[DateTime]]("verification_sent_date")
) map {
  case id ~ audit ~ userId ~ email ~ emailVerified ~ emailVerificationCode ~ emailVerificationSentDate =>
    UserEmail(
      id,
      audit,
      userId,
      email,
      emailVerified,
      emailVerificationCode,
      emailVerificationSentDate
    )
}
When I execute this in test, I am getting the following error:
[error] Multiple ResultSets were returned by the query. (AbstractJdbc2Statement.java:354)
...
It is expected that there is more than a single result; however, I am puzzled about how to parse this case correctly.
I was under the assumption that:
UserEmail.simpleParser single is for a single row
and
UserEmail.simpleParser * is for handling multiple rows
I could not figure this out based on the documentation, and, at least for now, didn't find anything useful anywhere else.
How do I parse multiple rows from the result set?
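For reference, the usual usage of the two combinators looks like this (a sketch reusing the sql and parser from above; it illustrates the API rather than resolving the error):
// Exactly one row expected:
val one: UserEmail = SQL(sql).on('user_id -> user.id).as(UserEmail.simpleParser.single)
// Zero or more rows:
val many: List[UserEmail] = SQL(sql).on('user_id -> user.id).as(UserEmail.simpleParser.*)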
Update: I just found this gist (https://gist.github.com/davegurnell/4b432066b39949850b04) with a pretty good explanation and created a ResultSetParser like so:
val multipleParser: ResultSetParser[List[UserEmail]] = UserEmail.simpleParser.*
And... that didn't help!
Thanks,

Scala Postgres IN Operator

I'm developing a web application based on Activator and Postgres.
I am trying to perform the following SQL query:
SELECT *
FROM table t_0
WHERE t_0.category IN (?)
To populate the query I am using the following Scala code:
val readQuery = """ SELECT * FROM table t_0 WHERE t_0.category IN (?) """
val categories = Array("free time", "living")
val insertValues = Array(categories)
val queryResult = await { connection.sendPreparedStatement(readQuery, insertValues) }
Even though there are some records in the database, I always get an empty set. I have already tried some variants with Array[Byte], but I have never managed to get any results.
Does anybody have a tip or trick that I can use?
Thanks!
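A common workaround (a sketch, assuming postgresql-async's sendPreparedStatement accepts a Seq of values, as in the question) is to expand one placeholder per element, since a single ? bound to a whole array does not match IN (...):
val categories = Seq("free time", "living")
// Build "?, ?": one placeholder per category
val placeholders = categories.map(_ => "?").mkString(", ")
val readQuery = s"SELECT * FROM table t_0 WHERE t_0.category IN ($placeholders)"
val queryResult = await { connection.sendPreparedStatement(readQuery, categories) }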

Lift Squeryl-Record select with google user id

I want to identify a user using the Google OAuth service. This returns a googleId on the order of 2^67, which does not fit into the Long datatype I am currently using as the primary key in my table. Because of this I wanted to store the googleId in a StringField.
However, this does not work because I cannot get the where clause working:
where(u.googleId.is === id)
produces the error value === is not a member of u.googleId.MyType.
I defined it like this:
val googleId = new StringField(this, "")
How can I select the data using a StringField in the where clause?
Thanks in advance
Flo
Doesn't BigDecimal do the trick? http://squeryl.org/schema-definition.html
Also, I'm not sure about the syntax; isn't it supposed to be:
where(u.googleId == "hello there")
I'm using pure Squeryl and pure Lift, so I'm not sure.
I solved it by importing org.squeryl.PrimitiveTypeMode._ as described here: Squeryl Schema Definition.
I then set the googleId to be of type StringField:
val googleId = new StringField(this, "")
Now I have to add a .~ to the field, because otherwise it is interpreted as String and not as TypedExpressionNode[Int].
Now the following statement works:
where(u.googleId.is.~ === id) select(u)
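Put together, the full query would look roughly like this (a sketch; the users table name is assumed, it does not appear in the original post):
from(users)(u =>
  where(u.googleId.is.~ === id)
  select(u)
)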

Concatenating databases with Squeryl

I'm trying to use Squeryl to take the contents of a table from one database, and append it to the equivalent table in another database. The primary key will have to be reassigned in the process, but I'm getting the error NULL not allowed for column "SIMID". Why is this?
import java.util.Date
import org.squeryl.{KeyedEntity, Schema, Session}
import org.squeryl.adapters.H2Adapter
import org.squeryl.annotations.Column
import org.squeryl.PrimitiveTypeMode._

object Concatenator {
  def main(args: Array[String]) {
    Class.forName("org.h2.Driver")
    val seshA = Session.create(
      java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsA", "sa", "password"),
      new H2Adapter
    )
    val seshB = Session.create(
      java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsB", "sa", "password"),
      new H2Adapter
    )
    // Copy every row from resultsA into resultsB
    using(seshA) {
      import Library._
      from(sims){ s => select(s) }.foreach { item =>
        using(seshB) {
          sims.insert(item)
        }
      }
    }
  }

  case class Simulation(
    @Column("SIMID")
    var id: Long,
    val date: Date
  ) extends KeyedEntity[Long]

  object Library extends Schema {
    val sims = table[Simulation]
    on(sims)(s => declare(
      s.id is (unique, indexed, autoIncremented)
    ))
  }
}
Update:
I think it might be something to do with the DBs. They were created in a Java project using JPA/EclipseLink, and in addition to generating tables for my entities it also created a table called SEQUENCE, presumably for primary key generation.
I've found that I can create a brand new table in Squeryl and manually put the contents of both databases in that, thus achieving the same effect. Interestingly, this new table did not have any SEQUENCE table auto-generated. So I'm guessing it comes down to how JPA/EclipseLink was generating my primary keys?
Update 2:
As requested, I appended trace_level_file=3 to the URL; the files are here: resultsA.trace.db and resultsB.trace.db. B is the more interesting one, I think. Also, I've put a simplified version of the database here, which has had unnecessary tables removed (the same database is used for resultsA and resultsB).
Just got a moment to look at this more closely. It turns out you were on the right track. While I guess that EclipseLink uses sequences to generate the PK value, Squeryl defines the column as something like:
simid bigint not null primary key auto_increment
Without the auto_increment flag a value is never placed in the column and you end up with the constraint violation you mentioned. It sounds like you've already worked around the issue, but hopefully this will help you or someone else in the future.
Not really a solution, but my workaround is to create a new database:
val seshNew = Session.create(
  java.sql.DriverManager.getConnection("jdbc:h2:file:data/resultsNew", "sa", "password"),
  new H2Adapter
)
and then just write all the data from the other databases into it:
using(seshNew) {
  sims.insert(new Simulation(0, item.date))
}
The primary key 0 gets overwritten as appropriate.