Fetch object by plain SQL query with SORM - scala

Is it possible to fetch items with a plain SQL query instead of building the query with SORM's DSL?
For example, is there an API for writing something like
val metallica = Db.query[Artist].fromString("SELECT * FROM artist WHERE name = ?", "Metallica").fetchOne() // Option[Artist]
instead of
val metallica = Db.query[Artist].whereEqual("name", "Metallica").fetchOne() // Option[Artist]

Since populating an entity with collections and other structured values involves fetching data from multiple tables in an unjoinable way, an API for fetching entities directly from SQL will most probably never get exposed. However, another approach to this problem is currently being considered.
Here's how it could be implemented:
val artists: Seq[Artist] =
  Db.fetchWithSql[Artist]("SELECT id FROM artist WHERE name = ?", "Metallica")
If this issue gets notable support either here or, even better, here, it will probably get implemented in the next minor release.
Update
implemented in 0.3.1
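A minimal usage sketch, assuming the released method kept the signature proposed above; to get back to the Option[Artist] from the original question, you can just take the first result:

// Sketch only: fetchWithSql as proposed above, narrowed to a single optional result
val metallica: Option[Artist] =
  Db.fetchWithSql[Artist]("SELECT id FROM artist WHERE name = ?", "Metallica")
    .headOption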

If you want to fetch only one object (matching two or more arguments), you can also do the following, using the SORM Querier:
Db.query[Artist].where(Querier.And(Querier.Equal("name", "Name"), Querier.Equal("surname", "surname"))).fetchOne()
or just
Db.query[Artist].whereEqual("name", "Name").whereEqual("surname", "surname").fetchOne()

Related

Selecting identically named columns in jOOQ

I'm currently using jOOQ to build my SQL (with code generation via the mvn plugin).
Executing the created query is not done by jOOQ though (I'm using the Vert.x SqlClient for that).
Let's say I want to select all columns of two tables which share some identical column names, e.g. UserAccount(id, name, ...) and Product(id, name, ...). When executing the following code
val userTable = USER_ACCOUNT.`as`("u")
val productTable = PRODUCT.`as`("p")
create().select().from(userTable).join(productTable).on(userTable.ID.eq(productTable.AUTHOR_ID))
calling query.getSQL(ParamType.NAMED) on the built query returns a query like
SELECT "u"."id", "u"."name", ..., "p"."id", "p"."name", ... FROM ...
The problem here is that the result set will contain the columns id and name twice, without the "u." or "p." prefix, so I can't map/parse it correctly.
Is there a way to tell jOOQ to alias these columns like the following, without any further manual effort?
SELECT "u"."id" AS "u.id", "u"."name" AS "u.name", ..., "p"."id" AS "p.id", "p"."name" AS "p.name" ...
I'm using the holy Postgres database :)
EDIT: My current approach would be something like
val productFields = productTable.fields().map { it.`as`(name("p.${it.name}")) }
val userFields = userTable.fields().map { it.`as`(name("u.${it.name}")) }
create().select(productFields, userFields, ...)...
This feels really hacky though
How to correctly dereference tables from records
You should always use the column references that you passed to the query to dereference values from records in your result. If you didn't pass column references explicitly, then the ones from your generated table via Table.fields() are used.
In your code, that would correspond to:
userTable.NAME
productTable.NAME
So, in a resulting record, do this:
val rec = ...
rec[userTable.NAME]
rec[productTable.NAME]
Using Record.into(Table)
Since you seem to be projecting all the columns (do you really need all of them?) to the generated POJO classes, you can still do this intermediary step if you want:
val rec = ...
val userAccount: UserAccount = rec.into(userTable).into(UserAccount::class.java)
val product: Product = rec.into(productTable).into(Product::class.java)
Because the generated table has all the necessary meta data, it can decide which columns belong to it, and which ones don't. The POJO doesn't have this meta information, which is why it can't disambiguate the duplicate column names.
Using nested records
You can always use nested records directly in SQL as well in order to produce one of these 2 types:
Record2<Record[N], Record[N]> (e.g. using DSL.row(table.fields()))
Record2<UserAccountRecord, ProductRecord> (e.g. using DSL.row(table.fields()).mapping(...), or starting from jOOQ 3.17, directly using a Table<R> as a SelectField<R>)
The second jOOQ 3.17 solution would look like this:
// Using an implicit join here, for convenience
create().select(productTable.userAccount(), productTable)
    .from(productTable)
    .fetch();
The above is using implicit joins, for additional convenience
Auto aliasing all columns
There are a ton of flavours that users might like to have when "auto-aliasing" columns in SQL. Any solution offered by jOOQ would be no better than the one you've already found, so if you still want to auto-alias all columns, then just do what you did.
But usually, the desire to auto-alias is a derived feature request stemming from a misunderstanding of what the best approach is to do something in jOOQ (see the options above), so ideally, you don't go down the auto-aliasing road.

Persist lists using the Play framework and Anorm

I'm currently developing a small application in Scala using the Play framework, and I would like to persist a list of operations made by a user. Is it possible to store a simple list of ids (List[Long]) using just Anorm, the way I'm doing it?
Otherwise, what else could I use to make it work? Do I need to use an ORM, as explained in Scala Play! Using anorm or ORM?
If you're talking about persisting to a SQL database then Anorm can certainly handle that for you.
At the most basic level, you could create a table of long integers in your SQL database and then use Anorm to persist your list. Assume you store your integers in a single-column table called UserActions with its sole column called action:
def saveList(list: List[Long]) = {
  DB.withConnection { implicit connection =>
    val insertQuery = SQL("insert into UserActions(action) values ({action})")
    val batchInsert = (insertQuery.asBatch /: list)(
      (sql, elem) => sql.addBatchParams(elem)
    )
    batchInsert.execute()
  }
}
I threw together a little demo for you and I'm pushing it to Heroku, I'll update with the link soon (edit: Heroku and I aren't getting along tonight, sorry).
The code is in my Github at: https://github.com/ryantanner/anorm-batch-demo
Look in models/UserActions.scala to find that snippet specifically. The rest is just fluff to make the demo more interesting.
Now, take a step back for a moment and ask yourself what information you need about these user operations. Semantically, what does that List[Long] mean? Do you need to store more information about those user actions? Should it actually be something like rows of (UserID, PageVisited, Timestamp)?
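If you do end up needing that richer shape, here is a rough sketch of what persisting such rows with Anorm could look like (the table, the column names, and the saveAction helper are made up for illustration):

import java.util.Date
import anorm._
import play.api.db.DB
import play.api.Play.current

// Hypothetical richer table: UserActions(user_id, page_visited, occurred_at)
def saveAction(userId: Long, pageVisited: String, occurredAt: Date) =
  DB.withConnection { implicit connection =>
    SQL("""insert into UserActions(user_id, page_visited, occurred_at)
           values ({userId}, {page}, {ts})""")
      .on("userId" -> userId, "page" -> pageVisited, "ts" -> occurredAt)
      .executeUpdate()
  }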
Untested
I think you need to create a batch insert statement like this:
val insertStatement =
  SQL("""INSERT INTO UserOperations (id) VALUES ({id})""")
    .asBatch
    .addBatchParamsList(List(Seq(1), Seq(2)))
    .execute()
Anorm's BatchSql has been updated recently. You may want to check out the latest version.
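For reference, here is a sketch of what batch inserts look like with the updated BatchSql API in newer Anorm releases (this is my understanding of the current signature; double-check it against the version you're on):

import anorm._

// Newer BatchSql takes the statement plus one Seq[NamedParameter] per row up front
val batch = BatchSql(
  "INSERT INTO UserOperations(id) VALUES ({id})",
  Seq[NamedParameter]("id" -> 1),
  Seq[NamedParameter]("id" -> 2),
  Seq[NamedParameter]("id" -> 3)
)

// execute() needs an implicit java.sql.Connection in scope,
// e.g. inside DB.withConnection { implicit connection => ... }
def insertAll()(implicit connection: java.sql.Connection): Array[Int] =
  batch.execute()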

ORMLite ForeignCollection - object searching

I'm using ORMLite & SQLite as my database, and I'm working on an Android app.
1) I'm searching for a particular object in the ForeignCollection as follows.
Collection<PantryCheckLine> pantryCheckLinesCollection = pantryCheck.getPantryCheckLines();
Iterator<PantryCheckLine> iterator = pantryCheckLinesCollection.iterator();
while (iterator.hasNext()) {
    PantryCheckLine pantryCheckLine = iterator.next();
    // I'm searching for a particular object match
}
2) Or I can directly query the relevant table and identify the item that way.
What I'm asking is: of these two methods, which one will be faster?
Depends a bit on the particulars of your situation.
If your ForeignCollection is eagerly fetched, then your loop will not have to do any database transactions and will, for small collections, probably be faster than doing the query.
However, if your collection is lazily loaded, then iterating through the collection will go back to the database anyway, so you might as well do the precise query.
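For the "precise query" route, here is a rough sketch against ORMLite's QueryBuilder (written in Scala for consistency with the rest of this post; the DAO type and the column names are assumptions, and the second condition stands in for whatever field you were scanning the collection for):

import com.j256.ormlite.dao.Dao

// Sketch only: assumes a Dao[PantryCheckLine, Integer] and these column names
def findLine(dao: Dao[PantryCheckLine, Integer],
             pantryCheckId: Int, productId: Int): PantryCheckLine =
  dao.queryBuilder()
    .where()
    .eq("pantryCheck_id", pantryCheckId) // foreign column pointing at the parent PantryCheck
    .and()
    .eq("product_id", productId)         // hypothetical field identifying the object you want
    .queryForFirst()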

QueryDSL: querying relations and properties

I'm using QueryDSL with JPA.
I want to query some properties of an entity, like this:
QPost post = QPost.post;
JPAQuery q = new JPAQuery(em);
List<Object[]> rows = q.from(post).where(...).list(post.id, post.name);
It works fine.
If I want to query a relation property, e.g. the comments of a post:
List<Set<Comment>> rows = q.from(post).where(...).list(post.comments);
It's also fine.
But when I want to query a relation and simple properties together, e.g.
List<Object[]> rows = q.from(post).where(...).list(post.id, post.name, post.comments);
Then something goes wrong, generating bad SQL syntax.
Then I realized that it's not possible to query them together in one SQL statement.
Is it possible that QueryDSL would somehow deal with relations and generate additional queries (just like what hibernate does with lazy relations), and load the results in?
Or should I just query twice, and then merge both result lists?
P.S. What I actually want is each post with its comments' ids. So a function to concatenate each post's comment ids would be better. Is this kind of expression possible?
q.list(post.id, post.name, post.comments.all().id.join())
and generate a subquery sql like (select group_concat(c.id) from comments as c inner join post where c.id = post.id)
Querydsl JPA is restricted to the expressivity of JPQL, so what you are asking for is not possible with Querydsl JPA. You can, though, try to express it with Querydsl SQL. It should be possible. Also, as you don't project entities but literals and collections, it might work just fine.
Alternatively you can load the Posts with only the Comment ids loaded and then project the id, name and comment ids to something else. This should work when accessors are annotated.
The simplest thing would be to query for Posts and use fetchJoin for comments, but I'm assuming that's too slow for your use case.
I think you ought to simply project required properties of posts and comments and group the results by hand (if required). E.g.
QPost post = ...;
QComment comment = ...;
List<Tuple> rows = q.from(post)
    // Or leftJoin if you also want posts without comments
    .innerJoin(comment).on(comment.postId.eq(post.id))
    .orderBy(post.id.asc()) // Could be used to optimize grouping
    .list(new QTuple(post.id, post.name, comment.id));
Map<Long, PostWithComments> results = ...;
for (Tuple row : rows) {
    PostWithComments res = results.get(row.get(post.id));
    if (res == null) {
        res = new PostWithComments(row.get(post.id), row.get(post.name));
        results.put(res.getPostId(), res);
    }
    res.addCommentId(row.get(comment.id));
}
NOTE: You cannot use limit or offset with this kind of query.
As an alternative, it might be possible to tune your mappings so that 1) Comments are always lazy proxies, so that (with property access) Comment.getId() is possible without initializing the actual object, and 2) batch fetching* is used on Post.comments to optimize collection fetching. This way you could just query for Posts and then access the ids of their comments with little performance hit. In most cases you shouldn't even need those lazy proxies unless your Comment is very fat. That kind of code would certainly look nicer without low-level row handling, and you could also use limit and offset in your queries. Just keep an eye on your query log to make sure everything works as intended.
*) Batch fetching isn't directly supported by JPA, but Hibernate supports it through mapping and EclipseLink through query hints.
Maybe some day Querydsl will support this kind of results grouping post processing out-of-box...

Hand made queries vs findDependentRowset

I have built a pretty big application with Zend and I was wondering which would be better: building the query by hand (using the Zend object model)
$db->select()
   ->from('table')
   ->join('table2',
          'table.id = table2.table_id')
or going with the findDependentRowset method (Zend doc for findDependentRowset).
I did a test fetching data over multiple tables and displaying all the information from a table, and findDependentRowset seemed to run slower. I might be wrong, but I guess it makes a new query every time findDependentRowset is called, as in:
$table1 = new Model_Table1;
$rowset = $table1->fetchAll();
foreach ($rowset as $row) {
    $table2data = $row->findDependentRowset('Model_Table2', 'Map');
    echo $row['field'] . ' ' . $table2data['field'];
}
So, which one is better, and is there a way, using findDependentRowset, to build complex queries that span over 5 tables and run as fast as a hand-made query?
Thanks
Generally, building your own query is the best way to go, because Zend will create just one object (or set of objects) and run just one query.
If you use findDependentRowset, Zend will perform another query and build another object (or set) with the result for each call.
You should use this only in very specific cases.
See this question: PHP - Query single value per iteration or fetch all at start and retrieve from array?