I have a query that takes a Seq[Int] as its argument (and performs filtering like WHERE x IN (...)), and I need to compile it since this query is fairly complex. However, when I try the naive approach:
Compiled((xs: Set[Int]) => someQuery.filter(_.x inSet xs))
It fails with the following message:
Computation of type Set[Int] => Query[SomeTable, SomeValue, Seq] cannot be compiled (as type C)
Can Slick compile queries that take a set of integers as a parameter?
UPDATE: I use PostgreSQL as the database, so it might be possible to use arrays instead of an IN clause, but how?
As for the PostgreSQL database, the solution is much simpler than I expected.
First of all, you need a special Slick driver for PostgreSQL that supports arrays. It is usually already included in projects that rely on PostgreSQL features, so there is no trouble at all. I use this driver.
The main idea is to replace the plain SQL IN (...) clause, which takes as many bind parameters as there are items in the list (and thus cannot be statically compiled by Slick), with the PostgreSQL-specific array operation x = ANY(arr), which takes only one parameter for the whole array. It's easy to do with code like this:
val compiledQuery = Compiled((x: Rep[List[Int]]) => query.filter(_.id === x.any))
This code will generate a query like WHERE x = ANY(?), which uses only one bind parameter, so Slick will accept it for compilation.
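For completeness, here is a rough sketch of the full setup and a call site, assuming the slick-pg extensions (com.github.tminglei.slickpg); the profile definition follows slick-pg's documented pattern, and query and db are assumed from context:
import com.github.tminglei.slickpg._

// A profile with slick-pg's array support mixed in:
trait MyPostgresProfile extends ExPostgresProfile with PgArraySupport {
  override val api = MyAPI
  object MyAPI extends API with ArrayImplicits
}
object MyPostgresProfile extends MyPostgresProfile

import MyPostgresProfile.api._

// `x.any` renders as ANY(?), so the whole list is a single bind parameter:
val compiledQuery = Compiled { x: Rep[List[Int]] =>
  query.filter(_.id === x.any)
}

// The compiled query can then be run as usual:
db.run(compiledQuery(List(1, 2, 3)).result)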
Related
I'm currently using jOOQ to build my SQL (with code generation via the mvn plugin).
Executing the created query is not done by jOOQ, though (I'm using Vert.x SqlClient for that).
Let's say I want to select all columns of two tables which share some identical column names, e.g. UserAccount(id,name,...) and Product(id,name,...). When executing the following code
val userTable = USER_ACCOUNT.`as`("u")
val productTable = PRODUCT.`as`("p")
create().select().from(userTable).join(productTable).on(userTable.ID.eq(productTable.AUTHOR_ID))
the method query.getSQL(ParamType.NAMED) returns a query like
SELECT "u"."id", "u"."name", ..., "p"."id", "p"."name", ... FROM ...
The problem here is that the result set will contain the columns id and name twice, without the prefix "u." or "p.", so I can't map/parse them correctly.
Is there a way to tell jOOQ to alias these columns like the following, without any further manual effort?
SELECT "u"."id" AS "u.id", "u"."name" AS "u.name", ..., "p"."id" AS "p.id", "p"."name" AS "p.name" ...
I'm using the holy Postgres database :)
EDIT: My current approach would be something like
val productFields = productTable.fields().map { it.`as`(name("p.${it.name}")) }
val userFields = userTable.fields().map { it.`as`(name("u.${it.name}")) }
create().select(productFields,userFields,...)...
This feels really hacky, though.
How to correctly dereference tables from records
You should always use the column references that you passed to the query to dereference values from records in your result. If you didn't pass column references explicitly, then the ones from your generated table via Table.fields() are used.
In your code, that would correspond to:
userTable.NAME
productTable.NAME
So, in a resulting record, do this:
val rec = ...
rec[userTable.NAME]
rec[productTable.NAME]
Using Record.into(Table)
Since you seem to be projecting all the columns (do you really need all of them?) to the generated POJO classes, you can still do this intermediate step if you want:
val rec = ...
val userAccount: UserAccount = rec.into(userTable).into(UserAccount::class.java)
val product: Product = rec.into(productTable).into(Product::class.java)
Because the generated table has all the necessary metadata, it can decide which columns belong to it and which ones don't. The POJO doesn't have this metadata, which is why it can't disambiguate the duplicate column names.
Using nested records
You can always use nested records directly in SQL as well in order to produce one of these 2 types:
Record2<Record[N], Record[N]> (e.g. using DSL.row(table.fields()))
Record2<UserAccountRecord, ProductRecord> (e.g. using DSL.row(table.fields()).mapping(...), or starting from jOOQ 3.17 directly using a Table<R> as a SelectField<R>)
The second jOOQ 3.17 solution would look like this:
// Using an implicit join here, for convenience
create().select(productTable.userAccount(), productTable)
.from(productTable)
.fetch();
The above uses an implicit join for additional convenience; each record in the result then contains a nested UserAccountRecord and ProductRecord.
Auto aliasing all columns
There are a ton of flavours that users might want when "auto-aliasing" columns in SQL. Any solution offered by jOOQ would be no better than the one you've already found, so if you still want to auto-alias all columns, then just do what you did.
But usually, the desire to auto-alias is a feature request derived from a misunderstanding of the best approach to doing something in jOOQ (see the options above), so ideally you don't go down the auto-aliasing road.
I'm new to Cassandra and Scala, and I'm working on a Kafka consumer (written in Scala) that has to update a field of a row in Cassandra with data it receives.
So far, no problem.
One field in this row is a String list, and it must not change when I do the update, so I have to assign the same String list to itself:
UPDATE keyspaceName.tableName
SET fieldToChange = newValue
WHERE id = idValue
AND fieldA = '${currentRow.getString("fieldA")}'
AND fieldB = ${currentRow.getInt("fieldB")}
...
AND fieldX = ${currentRow.getList("fieldX", classOf[String]).toString}
...
But I receive the following exception:
com.datastax.driver.core.exceptions.SyntaxError: line 19:49 no viable alternative at input ']' (... 482 AND fieldX = [[listStringItem1]]...)
So far I haven't found anything on the web that could help me.
The problem is that Scala's string representation of the list doesn't match Cassandra's representation of the list, so it generates errors.
Instead of constructing the CQL statement directly in your code, it's better to use a PreparedStatement and bind variables to it:
first, it will speed up execution, as Cassandra won't need to parse every statement separately;
second, it will be easier to bind variables, as you won't need to care about their string representation.
But be very careful with Scala - the Java driver expects Java lists, sets, maps, and base types like ints, etc. You may look at the java-driver-scala-extras package, but you'll need to compile it yourself, as it's not available on Maven Central.
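For illustration, a minimal sketch of this approach with the 3.x Java driver (the statement and field names are taken from the question; newValue, idValue and currentRow are assumed from context):
import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect()

// Prepare once (Cassandra parses the statement a single time),
// then bind fresh values for every incoming Kafka record:
val prepared = session.prepare(
  "UPDATE keyspaceName.tableName SET fieldToChange = ? " +
    "WHERE id = ? AND fieldA = ? AND fieldB = ? AND fieldX = ?")

val bound = prepared.bind(
  newValue,
  idValue,
  currentRow.getString("fieldA"),
  Int.box(currentRow.getInt("fieldB")),           // box primitives for the varargs bind()
  currentRow.getList("fieldX", classOf[String]))  // already a java.util.List; a Scala list would need .asJava

session.execute(bound)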
The title says it all.
I need to see the underlying query of a DBIOAction
EDIT 1
When trying the solution suggested here, it fails with the following:
Error:(978, 19) value result is not a member of
slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Write]
logger.info(myAction.result.statements.headOption)
which is normal, since the docs do not mention result as being part of the trait.
I am using slick 3.2.0-M1
EDIT 2
result doesn't seem to be an implicit.
In the same scope, I can use it on a Query object but not on a DBIOAction object,
unless DBIOAction has another implicit with the same name that should be imported separately; but there is nothing in the docs about that.
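For what it's worth, a minimal sketch of the distinction (assuming Slick 3.2 and the H2 profile; the table definition is made up): result is defined on Query, and the SqlAction it returns exposes statements, but a composed DBIOAction does not:
import slick.jdbc.H2Profile.api._

class Users(tag: Tag) extends Table[(Int, String)](tag, "users") {
  def id = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id, name)
}
val users = TableQuery[Users]

// `result` turns the Query into a SqlAction, which carries the SQL:
val action = users.filter(_.id === 1).result
println(action.statements.headOption)

// But once actions are composed, the SQL is no longer accessible:
val composed: DBIOAction[Unit, NoStream, Effect.All] = DBIO.seq(action)
// composed.statements  -- does not compile: DBIOAction has no `statements`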
I have some data in a Postgres table with one column called version (of type varchar). I would like to use my own comparison function to order/sort on that column, but I am not sure what the most appropriate approach is:
I have a JS implementation of the form comp(left, right) -> -1/0/1, but I don't know how I can use it in a SQL ORDER BY clause (through PLV8)
I could write a C extension, but I am not particularly excited about this (mostly for maintenance reasons, as writing the comparison in C would not be too difficult in itself)
others?
The type of comparison I am interested in is similar to the version string ordering used in package managers.
You want:
ORDER BY mycolumn USING operator
See the docs for SELECT. It looks like you may need to define an operator for the function, and a b-tree operator class containing the operator to use it; you can't just write USING myfunc().
(No time to test this and write a demo right now).
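For the record, an untested sketch of what that could look like, assuming versions are dot-separated integers (all object names here are made up):
-- Comparison support function: delegate to PostgreSQL's int[] ordering.
CREATE FUNCTION version_cmp(a text, b text) RETURNS integer AS $$
  SELECT CASE
    WHEN string_to_array(a, '.')::int[] < string_to_array(b, '.')::int[] THEN -1
    WHEN string_to_array(a, '.')::int[] > string_to_array(b, '.')::int[] THEN 1
    ELSE 0
  END
$$ LANGUAGE sql IMMUTABLE STRICT;

-- Boolean operator functions derived from the comparison.
CREATE FUNCTION version_lt(text, text) RETURNS boolean
  AS $$ SELECT version_cmp($1, $2) < 0 $$ LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION version_le(text, text) RETURNS boolean
  AS $$ SELECT version_cmp($1, $2) <= 0 $$ LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION version_eq(text, text) RETURNS boolean
  AS $$ SELECT version_cmp($1, $2) = 0 $$ LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION version_ge(text, text) RETURNS boolean
  AS $$ SELECT version_cmp($1, $2) >= 0 $$ LANGUAGE sql IMMUTABLE STRICT;
CREATE FUNCTION version_gt(text, text) RETURNS boolean
  AS $$ SELECT version_cmp($1, $2) > 0 $$ LANGUAGE sql IMMUTABLE STRICT;

CREATE OPERATOR <# (LEFTARG = text, RIGHTARG = text, PROCEDURE = version_lt);
CREATE OPERATOR <=# (LEFTARG = text, RIGHTARG = text, PROCEDURE = version_le);
CREATE OPERATOR =# (LEFTARG = text, RIGHTARG = text, PROCEDURE = version_eq);
CREATE OPERATOR >=# (LEFTARG = text, RIGHTARG = text, PROCEDURE = version_ge);
CREATE OPERATOR ># (LEFTARG = text, RIGHTARG = text, PROCEDURE = version_gt);

-- ORDER BY ... USING requires the operator to be the "<" member of a
-- b-tree operator family, which this operator class provides:
CREATE OPERATOR CLASS version_ops FOR TYPE text USING btree AS
  OPERATOR 1 <#,
  OPERATOR 2 <=#,
  OPERATOR 3 =#,
  OPERATOR 4 >=#,
  OPERATOR 5 >#,
  FUNCTION 1 version_cmp(text, text);

SELECT version FROM mytable ORDER BY version USING <#;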
One way to achieve it would be like this:
val now = DateTime.now
val today = now.toLocalDate
val tomorrow = today.plusDays(1)
val startOfToday = today.toDateTimeAtStartOfDay(now.getZone)
val startOfTomorrow = tomorrow.toDateTimeAtStartOfDay(now.getZone)
val todayLogItems = logItems.filter(logItem =>
  logItem.MyDateTime >= startOfToday && logItem.MyDateTime < startOfTomorrow
).list
Is there any way to write the query in a more concise way? Something on the lines of:
logItems.filter(_.MyDateTime.toDate == DateTime.now.toDate).list
I'm asking this because in LINQ to NHibernate that is achievable (Fetching records by date with only day part comparison using nhibernate).
Unless the Slick joda mapper adds support for such comparisons, you are out of luck unless you add it yourself. If you want to give it a shot, these may be helpful pointers:
* http://slick.typesafe.com/doc/2.0.0/userdefined.html
* http://slick.typesafe.com/doc/2.0.0/api/#scala.slick.lifted.ExtensionMethods
* https://github.com/slick/slick/blob/2.0.0/src/main/scala/scala/slick/lifted/ExtensionMethods.scala
I created a ticket to look into it in Slick at some point: https://github.com/slick/slick/issues/627
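For instance, a rough sketch of such an extension in Slick 2.0 style, assuming slick-joda-mapper's Postgres mappings are in scope and that the database has a date() function (untested):
import scala.slick.driver.PostgresDriver.simple._
import scala.slick.lifted.SimpleFunction
import org.joda.time.{DateTime, LocalDate}
import com.github.tototoshi.slick.PostgresJodaSupport._

// Maps to the SQL date() function, turning a timestamp column into a date column:
val sqlDate = SimpleFunction.unary[DateTime, LocalDate]("date")

val todayLogItems = logItems.filter(item => sqlDate(item.MyDateTime) === LocalDate.now).list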
You're confusing matters by working with LocalDateTimes instead of using LocalDates directly:
val today = LocalDate.now
val todayLogItems = logItems.filter(_.MyDateTime.toLocalDate isEqual today)
UPDATE
A major clarification is needed on the question here: Slick was only mentioned in passing, by way of a tag.
However... Slick is central to this question, which hinges on the fact that the filter operation is actually translated into an SQL query by way of PlainColumnExtensionMethods.
I'm not overly familiar with the library, but this must surely mean that you're restricted to just the operations which can be executed in SQL. As this is a Column[DateTime], you must therefore compare it to another DateTime.
As for the LINQ example, it seems to recommend first fetching everything and then proceeding as per my example above (performing the comparison in Scala and not in SQL). This is an option, but I suspect you won't want the performance cost that it entails.
UPDATE 2 (just to clarify)
There is no answer.
There's no guarantee that your underlying database has the ability to do an equality check between dates and timestamps; Slick therefore can't rely on such an ability existing.
You're stuck between a rock and a hard place. Either do the range check between timestamps as you already are, or pull everything from the query and filter it in Scala - with the heavy performance cost that this would likely involve.
FINAL UPDATE
To refer to the Linq/NHibernate question you referenced, here are a few quotes:
You can also use the date function from Criteria, via SqlFunction
It depends on the LINQ provider
I'm not sure if NHibernate LINQ provider supports...
So the answers there seem to be either:
Relying on NHibernate to push the date coercion logic into the DB, perhaps silently crippling performance (by fetching all records and filtering locally) if this is not possible
Relying on you to write custom SQL logic
The best-case scenario is that NHibernate could translate date/timestamp comparisons into timestamp range checks. Doing something like that is quite a deep question about how Slick (and slick-joda-mapper) can handle comparisons; the fact that you'd use it in a filter is incidental.
You'd need an extremely compelling use case to write a feature like this yourself, given the risk of creating complicated bugs. You'd be better off:
splitting the column into separate date/time columns
adding the date as a calculated column (maybe in a view)
using custom SQL (or a stored proc) for the query
sticking with the range check
using a helper function
In the case of a helper:
def equalsDate(dt: LocalDate) = {
  val start = dt.toDateTimeAtStartOfDay()
  val end = dt.plusDays(1).toDateTimeAtStartOfDay()
  (col: Column[DateTime]) => {
    col >= start && col < end
  }
}
val isToday = equalsDate(LocalDate.now)
val todayLogItems = logItems.filter(x => isToday(x.MyDateTime))