How do I execute queries that look like this using the Scala Quill library?
REFRESH MATERIALIZED VIEW CONCURRENTLY transaction_view
Basically this can be achieved by writing it in a quote:
val q = quote { query[MyTable] }
val myQuery = quote { infix"REFRESH MATERIALIZED VIEW CONCURRENTLY $q".as[Query[MyTable]] }
Thanks to @deusaquilus
The answer from binkabir is almost correct. The one last touch needed is to replace Query with Action; otherwise Quill will again generate a SELECT instead of just using the raw string.
val q = quote { query[MyTable] }
val myQuery = quote { infix"REFRESH MATERIALIZED VIEW CONCURRENTLY $q".as[Action[MyTable]] }
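For completeness, a minimal sketch of running the quoted action; the PostgresJdbcContext setup and the "ctx" config prefix are assumptions for illustration:

import io.getquill._

lazy val ctx = new PostgresJdbcContext(SnakeCase, "ctx") // hypothetical config prefix
import ctx._

// executing the quoted action emits the raw statement instead of a generated SELECT
ctx.run(myQuery)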
I have a POJO with fields that can be updated, but sometimes only a few fields need to be updated and the rest are null. How do I write an update statement that ignores the null fields? Would it be better to loop through the non-missing ones and dynamically add to a SET clause, or to use coalesce?
I have the following query:
jooqService.using(txn)
    .update(USER_DETAILS)
    .set(USER_DETAILS.NAME, input.name)
    .set(USER_DETAILS.LAST_NAME, input.lastName)
    .set(USER_DETAILS.COURSES, input.courses)
    .set(USER_DETAILS.SCHOOL, input.school)
    .where(USER_DETAILS.ID.eq(input.id))
    .execute()
Is there a better practice?
I don't know jOOQ, but it looks like you could simply do this:
val jooq = jooqService.using(txn).update(USER_DETAILS)
input.name?.let { jooq.set(USER_DETAILS.NAME, it) }
input.lastName?.let { jooq.set(USER_DETAILS.LAST_NAME, it) }
etc...
EDIT: Mapping these fields explicitly as above is clearest in my opinion, but you could do something like this:
val fields = arrayOf(USER_DETAILS.NAME, USER_DETAILS.LAST_NAME)
val values = arrayOf(input.name, input.lastName)
val jooq = jooqService.using(txn).update(USER_DETAILS)
values.forEachIndexed { i, value ->
    value?.let { jooq.set(fields[i], it) }
}
You'd still need to enumerate all the fields and values explicitly and consistently in the arrays for this to work. It seems less readable and more error-prone to me.
In Java, it would be something like this:
var jooqQuery = jooqService.using(txn)
    .update(USER_DETAILS);

if (input.name != null) {
    jooqQuery.set(USER_DETAILS.NAME, input.name);
}
if (input.lastName != null) {
    jooqQuery.set(USER_DETAILS.LAST_NAME, input.lastName);
}
// ...

jooqQuery.where(USER_DETAILS.ID.eq(input.id))
    .execute();
Another option rather than writing this UPDATE statement is to use UpdatableRecord:
// Load a POJO into a record using a RecordUnmapper
UserDetailsRecord r =
    jooqService.using(txn)
               .newRecord(USER_DETAILS, input);
(0 .. r.size() - 1).forEach { if (r[it] == null) r.changed(it, false) }
r.update();
You can probably write an extension function to make this available for all jOOQ records, globally, e.g. as r.updateNonNulls().
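For illustration, a rough sketch of such a helper, written here in Scala as an implicit extension (a Kotlin extension function on UpdatableRecord would look much the same); the name updateNonNulls is hypothetical:

import org.jooq.UpdatableRecord

implicit class RecordOps[R <: UpdatableRecord[R]](private val r: R) extends AnyVal {
  def updateNonNulls(): Int = {
    // mark each null field as unchanged so jOOQ omits it from the UPDATE
    (0 until r.size()).foreach(i => if (r.get(i) == null) r.changed(i, false))
    r.update()
  }
}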
I am facing challenges with the code below, which loops Hive SQL queries through spark.sql.
def missing_pks(query: String) = {
  // println(f"spark.sql( $query )")
  spark.sql(query)
}

var hql_query_list_df = spark.sql("select distinct hql_qry from table where msr_nm='orders' and rgn_src='europe'")
var hql = hql_query_list_df.select('hql_qry).as[String].collect()
var hql_f = hql_query_list_df.map( "\"" + _ + "\"" )
hql_f.foreach(missing_pks)
Here I am reading the Hive SQL statements from a table, loading them as a list, and then trying to execute each one, but unfortunately it's not working, and I'm not sure what's missing in my code. The interesting part is that if the list is created manually within the spark shell, the code works perfectly. It would be great if someone could help me here.
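For comparison, a minimal sketch of the intended flow, assuming a SparkSession named spark and that hql_qry holds complete, runnable statements. Two things to watch: collect the statements to the driver before executing them (spark.sql cannot be called from executor-side code such as Dataset.foreach or a map over the DataFrame), and do not wrap the statements in extra quote characters:

import spark.implicits._

val statements: Array[String] = spark
  .sql("select distinct hql_qry from table where msr_nm='orders' and rgn_src='europe'")
  .select($"hql_qry").as[String]
  .collect() // a plain Array[String] on the driver

// run each statement as-is; adding surrounding quotes would make it invalid SQL
statements.foreach(stmt => spark.sql(stmt))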
I have about 10 tables that I load into a DataSet using a single DataAdapter, one after another. During the load I use only one DataAdapter: I replace the table name and the SQL SELECT statement as required and successively fill the tables in the DataSet. Everything is done inside two nested "using" statements to dispose of the connection and DataAdapter objects, as shown below.
using (OleDbConnection conn = new OleDbConnection (Db.DbConnGet ())) {
    using (var da = new OleDbDataAdapter (sql, conn)) {
        tablename = "Table1";
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            da.Fill (hsdset, tablename);
        } catch (Exception ex) {
            ...
        }
        tablename = "Table2";
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            da.Fill (hsdset, tablename);
        } catch (Exception ex) {
            ...
        }
    }
}
As you can see, the DataAdapter is disposed of once the loading is done, and I pass the DataSet around my application as necessary for reading data.
But now I have a need to update or extend the data in the dataset and get it back into the database. Updating the DataTables inside the dataset is not a problem - there are many examples on the net. I created a new Connection and DataAdapter to do the update with a table in the existing, modified, strongly-typed DataSet, as follows.
using (OleDbConnection conn = new OleDbConnection (Db.DbConnGet ())) {
    using (var da = new OleDbDataAdapter ("", conn)) {
        // this is required; I don't know if it is used by Update
        da.SelectCommand.CommandText = $"Select * from {tablename}";
        try {
            // build special update commands from the table->db differences
            var cbuilder = new OleDbCommandBuilder (da);
            da.Update (dset, "Layers");
        } catch (Exception ex) {
            ...
        }
    }
}
My first question is: does the Update operation actually use the original SELECT statement to retrieve info from the database? If not, why is it required? I thought the DataSet kept track of modified rows, new rows, deleted rows, and so on, so updating could be done without reading the whole data table again. Or maybe it reads only the records that are marked as modified in the DataTable?
My second question is: what is the best (or normal) way of working with DataSets and DataAdapters here? Is it best practice to keep the original DataAdapters around for later use, or is it fine to create new ones as I did above? (Does the original DataAdapter keep any state from the load that a newly created one would not have?) Thank you.
I copied the code from the official documentation:
&sql(SELECT *,%ID INTO :tflds()
     FROM Sample.Person)
IF SQLCODE=0 {
    SET firstflds=14
    FOR i=0:1:firstflds {
        IF $DATA(tflds(i)) {
            WRITE "field ",i," = ",tflds(i),!
        }
    }
}
ELSE { WRITE "SQLCODE error=",SQLCODE,! }
But for some reason it only returns all fields of the first row and nothing else. Is it a bug, or am I doing something wrong?
You need to use a cursor to loop through the rows of an SQL query result.
&sql(declare c1 cursor for SELECT *,%ID INTO :tflds()
     FROM Sample.Person)
&sql(open c1)
for {
    &sql(fetch c1)
    quit:SQLCODE'=0
    set firstflds=14
    for i=0:1:firstflds {
        if $Data(tflds(i)) {
            write "field ",i," = ",tflds(i),!
        }
    }
    write "===NEXT ROW===",!
}
&sql(close c1)
See http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_esql#GSQL_esql_cursor for more info
Embedded SQL is a good tool for performance-sensitive operations, but it is indeed hard to work with if you need to retrieve more than one row; all this cursor business is a pain.
Consider using Dynamic SQL instead. It has a nice resultset-like interface.
We are using Scala Play, and I am trying to ensure that all SQL queries use Anorm's string interpolation. It works with some queries, but many are not actually replacing the variables before the query executes.
import anorm.SQL
import anorm.SqlStringInterpolation

object SecureFile {

  val table = "secure_file"
  val pk = "secure_file_idx"

  ...

  // This method works exactly as I would hope
  def insert(secureFile: SecureFile): Option[Long] = {
    DBExec { implicit connection =>
      SQL"""
        INSERT INTO secure_file (
          subscriber_idx,
          mime_type,
          file_size_bytes,
          portal_msg_idx
        ) VALUES (
          ${secureFile.subscriberIdx},
          ${secureFile.mimeType},
          ${secureFile.fileSizeBytes},
          ${secureFile.portalMsgIdx}
        )
      """ executeInsert()
    }
  }

  def delete(secureFileIdx: Long): Int = {
    DBExec { implicit connection =>
      // Prints correct values
      println(s"table: ${table} pk: ${pk} secureFileIdx: ${secureFileIdx}")

      // Does not work
      SQL"""
        DELETE FROM $table WHERE ${pk} = ${secureFileIdx}
      """.executeUpdate()

      // Works, but unsafe
      val query = s"DELETE FROM ${table} WHERE ${pk} = ${secureFileIdx}"
      SQL(query).executeUpdate()
    }
  }

  ....
}
Over in the PostgreSQL logs, it's clear that the delete statement has not acquired the correct values:
2015-01-09 17:23:03 MST ERROR: syntax error at or near "$1" at character 23
2015-01-09 17:23:03 MST STATEMENT: DELETE FROM $1 WHERE $2 = $3
2015-01-09 17:23:03 MST LOG: execute S_1: ROLLBACK
I've tried many variations of execute, executeUpdate, and executeQuery with similar results. For the moment, we are using basic string replacement but of course this is bad because it's not using PreparedStatements.
For anyone else sitting on this page scratching their head and wondering what they might be missing...
SQL("select * from mytable where id = $id")
is NOT the same as
SQL"select * from mytable where id = $id"
The former does not do string interpolation, whereas the latter does.
This is easily overlooked in the aforementioned docs, as all the samples provided just happen to have a (non-related) closing parenthesis on them (like this sentence does).
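Concretely (a sketch; the id value is hypothetical, and neither query is executed here: the difference is in what reaches the database):

import anorm._

val id = 42L // hypothetical parameter

// Plain SQL(...) takes an ordinary String: nothing is interpolated, so the
// literal text "$id" reaches the database and fails to parse
val broken = SQL("select * from mytable where id = $id")

// The SQL"..." interpolator binds $id as a parameter on the underlying
// PreparedStatement, i.e. "select * from mytable where id = ?"
val works = SQL"select * from mytable where id = $id"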
Anorm string interpolation was introduced to pass parameters (e.g. SQL"Select * From Test Where id = $x"), with interpolation arguments (e.g. $x) set on the underlying PreparedStatement with the proper type conversion (see the use cases at https://www.playframework.com/documentation/2.3.x/ScalaAnorm ).
The next Anorm release will also have the #$foo syntax to mix interpolation for parameters with standard string interpolation. This will allow you to write DELETE FROM #$table WHERE #${pk} = ${secureFileIdx} and have it executed as DELETE FROM foo WHERE bar = ? (if the literal table is "foo" and pk is "bar"), with the secureFileIdx value passed as a parameter. See the related pull request.
Until the next revision is released, you can build Anorm from its master sources to benefit from this change.
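Applied to the original delete, once on a build with #$ support (and with an implicit connection in scope), that would look something like:

SQL"DELETE FROM #$table WHERE #$pk = $secureFileIdx".executeUpdate()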