Lift Squeryl-Record select with google user id - scala

I want to identify a user using the Google OAuth service. This returns a googleId on the order of 2^67, which does not fit into the Long datatype that I am currently using as the primary key in my table. Because of this I wanted to store the googleId in a StringField.
However, this does not work, because I cannot get the where clause to compile:
where(u.googleId.is === id)
produces the error value === is not a member of u.googleId.MyType.
I defined it like this:
val googleId = new StringField(this, "")
How can I select the data using a StringField in the where clause?
Thanks in advance
Flo

Doesn't BigDecimal do the trick? http://squeryl.org/schema-definition.html
Also, I'm not sure about the syntax; isn't it supposed to be:
where(u.googleId == "hello there")
I'm using pure Squeryl and pure Liftweb, so I'm not sure...

I solved it by importing org.squeryl.PrimitiveTypeMode._, as described here: Squeryl Schema Definition.
I then set googleId to be of type StringField:
val googleId = new StringField(this, "")
I also have to append .~ to the field, because otherwise it is interpreted as a String rather than as a TypedExpressionNode.
Now the following statement works:
where(u.googleId.is.~ === id) select(u)
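For completeness, a minimal sketch of the full query (the users table object is a placeholder for whatever your schema defines; the rest follows the snippets above):

import org.squeryl.PrimitiveTypeMode._

// `users` is a hypothetical Squeryl table of user records
def findByGoogleId(id: String) =
  from(users)(u =>
    where(u.googleId.is.~ === id)
    select(u)
  )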

Related

How to get correct type and nullability information for enum fields using jOOQ's metadata API?

I'm trying to use jOOQ's metadata API, and most columns behave the way I'd expect, but enum columns seem to be missing type and nullability information somehow.
For example, if I have a schema defined as:
CREATE TYPE public.my_enum AS ENUM (
    'foo',
    'bar',
    'baz'
);
CREATE TABLE public.my_table (
    id bigint NOT NULL,
    created_at timestamp with time zone DEFAULT now() NOT NULL,
    name text,
    my_enum_column public.my_enum NOT NULL
);
The following test passes:
// this is Kotlin, but hopefully pretty easy to decipher
test("something fishy going on here") {
    val jooq = DSL.using(myDataSource, SQLDialect.POSTGRES)
    val myTable = jooq.meta().tables.find { it.name == "my_table" }!!

    // This looks right...
    val createdAt = myTable.field("created_at")!!
    createdAt.dataType.nullability() shouldBe Nullability.NOT_NULL
    createdAt.dataType.typeName shouldBe "timestamp with time zone"

    // ...but none of this seems right
    val myEnumField = myTable.field("my_enum_column")!!
    myEnumField.dataType.typeName shouldBe "other"
    myEnumField.dataType.nullability() shouldBe Nullability.DEFAULT
    myEnumField.dataType.castTypeName shouldBe "other"
    myEnumField.type shouldBe Any::class.java
}
It's telling me that enum columns have Nullability.DEFAULT regardless of whether they are null or not null. For other types, Field.dataType.nullability will vary depending on whether the column is null or not null, as expected.
For any enum column, the type is Object (Any in Kotlin), and the dataType.typeName is "other". For non-enum columns, dataType.typeName gives me the correct SQL for the type.
I'm also using the jOOQ code generator, and it generates the correct types for enum columns. That is, it creates an enum class and uses that as the type for the corresponding fields, which are marked as not-nullable. The generated code for this field looks something like (reformatted to avoid long lines):
public final TableField<MyTableRecord, MyEnum> MY_ENUM_COLUMN =
    createField(
        DSL.name("my_enum_column"),
        SQLDataType.VARCHAR
            .nullable(false)
            .asEnumDataType(com.example.schema.enums.MyEnum.class),
        this,
        ""
    );
So it appears that jOOQ's code generator has the type information, but how can I access the type information via the metadata API?
I'm using postgres:11-alpine and org.jooq:jooq:3.14.11.
Update 1
I tried testing this with org.jooq:jooq:3.16.10 and org.jooq:jooq:3.17.4. They seem to fix the nullability issue, but the datatype is still "other", and the type is still Object. So it appears the nullability issue was a bug in jOOQ. I'll file an issue about the type+datatype.
Update 2
This is looking like it may be a bug, so I've filed an issue.
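In the meantime, since the generated classes carry the full type information, one workaround is to read it from the generated table constants instead of from jooq.meta(). A sketch (written from Scala here; the com.example.schema.tables package and the MY_TABLE constant follow jOOQ's codegen conventions and are assumptions, not names from the question):

// Read type info from the generated table rather than the runtime metadata.
// MyTable / MY_TABLE are hypothetical names produced by the code generator.
import com.example.schema.tables.MyTable.MY_TABLE

val dataType = MY_TABLE.MY_ENUM_COLUMN.getDataType
println(dataType.nullability()) // NOT_NULL, from .nullable(false) in the generated code
println(dataType.getType)       // class com.example.schema.enums.MyEnum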

Copy Product Price into a Custom Field Object using APEX Trigger in Salesforce

I'm trying to copy the Cost_Price__c field of a product into a custom field when it is updated (and, if possible, when inserted too) using an Apex trigger.
I'm so close, but the error I am getting at the moment is: Illegal assignment from PricebookEntry to String
trigger updateAccount on Account (after update) {
    for (Account oAccount : trigger.new) {
        // create a variable to store the product ID
        String productId = oAccount.Product__c;
        // SOQL query to look up the price of the product using productId
        PricebookEntry sqlResult = [SELECT Cost_Price__c
                                    FROM PricebookEntry
                                    WHERE Product2Id = :productId];
        // save the returned result inside the Industry field - Illegal assignment from PricebookEntry to String
        oAccount.Industry = sqlResult;
    }
}
Am I right in thinking it's because the SOQL call is returning a collection of results? I've tried using sqlResult[0], which still doesn't seem to work.
The illegal assignment happens because you are assigning a whole sObject (a PricebookEntry) to a String field (Industry on Account).
Use the following assignment instead, converting the Decimal price to a String:
oAccount.Industry = String.valueOf(sqlResult.Cost_Price__c);
Please mark the answer if this works for you.
Thanks,
Tushar

Compare MappedTo with raw type in Slick Query

How do you compare MappedTo[T] with raw T columns?
I have a problem (Cannot perform option-mapped operation) with this code:
for {
  toEventLink <- Link.linksFromQuery(fromEntity).filter(_.toTable === Event.tableName)
  event <- Event.table.filter(e => e.id === toEventLink.toId)
} yield event
In: e.id === toEventLink.toId where e.id is an ID (extends MappedTo[Long]) and toEventLink.toId is a raw Long.
This compiler check is doing what it's supposed to do (e.g., not letting you accidentally compare an ID against something that isn't an ID). But I can totally see why this would be useful (e.g., when migrating a schema to start using typed keys).
You can use asColumnOf to cast a column to the type you want. For example:
e => e.id.asColumnOf[Long] === toEventLink.toId
There is an issue open to provide a more general solution for this: https://github.com/slick/slick/issues/1664
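For context, here is a minimal self-contained sketch of the cast (Slick 3.x; the tables, columns, and EventId class are invented for illustration, not taken from the question):

import slick.jdbc.PostgresProfile.api._
import slick.lifted.MappedTo

// A typed key: Slick derives a column type for MappedTo subtypes automatically
case class EventId(value: Long) extends AnyVal with MappedTo[Long]

class Events(tag: Tag) extends Table[(EventId, String)](tag, "event") {
  def id   = column[EventId]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name)
}

class Links(tag: Tag) extends Table[(Long, String)](tag, "link") {
  def toId    = column[Long]("to_id") // raw Long, not yet migrated to a typed key
  def toTable = column[String]("to_table")
  def *       = (toId, toTable)
}

val events = TableQuery[Events]
val links  = TableQuery[Links]

// e.id is Rep[EventId] and link.toId is Rep[Long]; cast to compare them:
val query = for {
  link  <- links if link.toTable === "event"
  event <- events if event.id.asColumnOf[Long] === link.toId
} yield event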

Parse multiple ResultSets from select query with Anorm 2.4

I am continuing my journey with Play, Scala & Anorm, and I've run into the following problem:
One of my repository classes holds the logic to fetch the list of emails for a user from the DB, as follows:
val sql =
  """
    |select * from user_email where user_id = {user_id}
    |;--
  """.stripMargin

SQL(sql).on(
  'user_id -> user.id
).as(UserEmail.simpleParser *)
The parser is something like this:
val simpleParser: RowParser[UserEmail] = (
  SqlParser.get[Muid](qualifiedColumnNameOf("", Identifiable.Column.Id)) ~
  AuditMetadata.generateParser("") ~
  SqlParser.get[Muid]("user_id") ~
  SqlParser.get[String]("email") ~
  SqlParser.get[Boolean]("verified") ~
  SqlParser.get[Muid]("verification_code_id") ~
  SqlParser.get[Option[DateTime]]("verification_sent_date")
) map {
  case id ~ audit ~ userId ~ email ~ emailVerified ~ emailVerificationCode ~ emailVerificationSentDate =>
    UserEmail(
      id,
      audit,
      userId,
      email,
      emailVerified,
      emailVerificationCode,
      emailVerificationSentDate
    )
}
When I execute this in a test, I get the following error:
[error] Multiple ResultSets were returned by the query. (AbstractJdbc2Statement.java:354)
...
It is expected that there is more than one result; however, I am puzzled about how to parse this case correctly.
I was under the assumption that:
UserEmail.simpleParser single is for a single row, and
UserEmail.simpleParser * is for handling multiple rows.
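Concretely, this is what I assumed the two combinators do (a sketch using the same parser as above; emailId is a placeholder, and an implicit java.sql.Connection is assumed to be in scope):

// Assumption: `.single` parses exactly one row...
val one: UserEmail =
  SQL("select * from user_email where id = {id}")
    .on('id -> emailId)
    .as(UserEmail.simpleParser.single)

// ...and `*` parses zero or more rows
val emails: List[UserEmail] =
  SQL("select * from user_email where user_id = {user_id}")
    .on('user_id -> user.id)
    .as(UserEmail.simpleParser.*)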
I could not figure this out based on the documentation and, at least for now, haven't found anything useful anywhere else.
How do I parse multiple rows from the result set?
Update: I just found this gist (https://gist.github.com/davegurnell/4b432066b39949850b04) with a pretty good explanation, and created a ResultSetParser like so:
val multipleParser: ResultSetParser[List[UserEmail]] = UserEmail.simpleParser.*
And... that didn't help!
Thanks,

Slick issue when going with PostgreSQL

I'm using Slick in a Scala project to query some tables.
// define table
object Addresses extends Table[Address]("assetxs.address") {
  def id = column[Int]("id", O.PrimaryKey)
  def street = column[String]("street")
  def number = column[String]("number")
  def zipcode = column[String]("zipcode")
  def country = column[String]("country")
  def * = id ~ street ~ number ~ zipcode ~ country <> (Address, Address.unapply _)
}
If I use any query on this table it does not work (it says it cannot find my table), so I went further and printed out the query:
implicit val session = Database.forURL(
  "jdbc:postgresql://localhost:5432/postgres",
  driver = "org.postgresql.Driver", user = "postgres", password = "postgres"
).createSession()

session.withTransaction {
  val query = Query(Addresses)
  println("Addresses: " + query.selectStatement)
}
I noticed that the schema.table name appears in double quotes, so the statement is:
select x2."id", x2."street", x2."number", x2."zipcode", x2."country"
from "assetxs.address" x2
which of course does not work (I tried running it in a PostgreSQL tool, and I had to remove the quotes from the table name to get it working).
Can you please tell me if there is any Slick option to not quote table names in queries?
You've put the schema into the table name. A (quoted) table name containing a dot character is valid in SQL but it's not what you want here. You have to specify the schema separately:
object Addresses extends Table[Address](Some("assetxs"), "address")
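Applied to the definition from the question, a sketch of the corrected table object (same Slick 1.x syntax as above):

// Schema passed separately from the table name
object Addresses extends Table[Address](Some("assetxs"), "address") {
  def id = column[Int]("id", O.PrimaryKey)
  def street = column[String]("street")
  def number = column[String]("number")
  def zipcode = column[String]("zipcode")
  def country = column[String]("country")
  def * = id ~ street ~ number ~ zipcode ~ country <> (Address, Address.unapply _)
}

// selectStatement should now quote the schema and table separately:
// select x2."id", ... from "assetxs"."address" x2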
In the end I was able to solve this issue.
I specified the table name only:
object Addresses extends Table[Address]("address")
and changed my PostgreSQL config to include my schema in the search path (it seems that Slick looks in the public schema only):
search_path = '"$user",assetxs,public'
and now it works.
Here is the solution I found when wanting to work with both H2 (testing) and Postgres (production) using Liquibase and Slick.
Stick with lowercase in your Slick Table objects:
class MyTable(tag: Tag) extends Table[MyRecord](tag, Some("my_schema"), "my_table")
In your H2 url config you'll need to specify DATABASE_TO_UPPER=false (this prevents the table and column names from being upper-cased) and put quotation marks around the INIT schema (this prevents the schema from being upper-cased):
url = "jdbc:h2:mem:test;MODE=PostgreSQL;DATABASE_TO_UPPER=false;INIT=create schema if not exists \"my_schema\"\;SET SCHEMA \"my_schema\""
When specifying schema names in Liquibase scripts, they must also be quoted so that H2 won't try to capitalize them.
Since this problem is still bothering Scala newcomers (like me), I did some research and found that the following application.conf works with Slick 3.1.1 and PostgreSQL 9.5:
postgres.devenv = {
  url = "jdbc:postgresql://localhost:5432/dbname?currentSchema=customSchema"
  user = "user"
  password = "password"
  driver = org.postgresql.Driver
}
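For reference, a config block like this is typically loaded with Database.forConfig (a sketch, assuming Slick 3.1's standard API and driver package):

import scala.slick.driver.PostgresDriver.api._

// Loads the "postgres.devenv" block from application.conf
val db = Database.forConfig("postgres.devenv")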
You're just using the wrong driver; check your imports:
import scala.slick.driver.PostgresDriver.simple._