Customise generated Slick SQL for debugging - scala

I want to customise the SQL that Slick generates for a standard insert before it is sent to the DBMS, so that I can add extra DBMS-specific debugging options that Slick doesn't natively support. How can I do that?

At the action level (i.e., with a DBIO), you can replace the SQL Slick will use via overrideStatements. Combined with statements to access the SQL Slick generates, that would give you a place to jump in and customize the SQL.
Bear in mind that you'll be working with plain Strings with these two API calls.
A simple example would be:
val regularInsert = table += row
// Switching the generated SQL to all-caps is a terrible idea,
// and may not run in your database, but it will do as an example:
val modifiedSQL = regularInsert.statements.map(_.toUpperCase())
val modifiedInsert = regularInsert.overrideStatements(modifiedSQL)
// run modifiedInsert action as normal
The next step up from this would be to implement a custom database profile to override the way inserts are created to include debugging.
This is more involved: you'd want to extend the profile you're currently using, and dive into the Slick APIs to override various methods to change the insert behaviour. For example, you might start by exploring the existing Postgres profile if that's the database you're using.
However, the above approach can be applied per insert as needed, which may well be enough for your purposes.
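If you end up doing this for several inserts, you can pull the transformation out into a small helper. This is a minimal sketch that relies only on the statements and overrideStatements calls shown above (table and row are the same as in the earlier example); the /*+ my_debug_hint */ text is a made-up placeholder, so substitute whatever debugging option your DBMS actually understands:
// Rewrite the generated SQL with an arbitrary String => String transform.
// Here we prepend a (hypothetical) hint comment to the INSERT keyword.
def addDebugHint(sql: String): String =
  sql.replaceFirst("(?i)^insert", "INSERT /*+ my_debug_hint */")

val regularInsert  = table += row
val debuggedInsert = regularInsert.overrideStatements(regularInsert.statements.map(addDebugHint))
// db.run(debuggedInsert) exactly as you would run the original action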

If you are using a connection pool such as HikariCP, you can put a Java breakpoint on the ProxyConnection.prepareStatement(String sql) method, or the equivalent method in whatever connection pool library you are using. Then when the SQL of interest is about to be prepared by that method, use your debugger's "evaluate expression" functionality to modify/replace the value of sql.
This won't work if the library you are setting the breakpoint on was compiled without debugging information, as can be the case for closed-source libraries.

Related

How to get column name and data type returned by a custom query in postgres?

We have inbuilt functions for tables/views but not for custom queries. For more clarification, I would say that I need a Postgres function which will take a SQL string as a parameter and return the column names and their data types.
I don't think there's any built-in SQL function which does this for you.
If you want to do this purely at the SQL level, the simplest and cheapest way is probably to CREATE TEMP VIEW <name> AS <your_query>, dig the column definitions out of the catalog tables, and drop the view when you're done. However, this can have a non-trivial overhead depending on how often you do it (as it needs to write view definitions to the catalogs), can't be run in a read-only transaction, and can't be done on a standby server.
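For illustration, here is a rough sketch of that approach over plain JDBC from Scala; it assumes conn is an open java.sql.Connection to Postgres, uses information_schema.columns as a readable stand-in for the raw catalog tables, and picks an arbitrary view name:
import java.sql.Connection

def describeQuery(conn: Connection, query: String): Unit = {
  val stmt = conn.createStatement()
  try {
    // create a session-local view over the query, then read its column definitions
    stmt.execute(s"CREATE TEMP VIEW _describe_tmp AS $query")
    val rs = stmt.executeQuery(
      """SELECT column_name, data_type
        |FROM information_schema.columns
        |WHERE table_name = '_describe_tmp'
        |ORDER BY ordinal_position""".stripMargin)
    while (rs.next()) println(s"${rs.getString("column_name")}: ${rs.getString("data_type")}")
  } finally {
    stmt.execute("DROP VIEW IF EXISTS _describe_tmp")
    stmt.close()
  }
}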
The ideal solution, if it fits your use case, is to build a prepared query on the client side, and make use of the metadata returned by the server (in the form of a RowDescription message passed as part of the query protocol). Unfortunately, this depends very much on which client library you're using, and how much of this information it chooses to expose. For example, libpq will give you access to everything, whereas the JDBC driver limits you to the public methods on its ResultSetMetaData object (though you could probably pull more information from its private fields via reflection, if you're determined enough).
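With JDBC that boils down to something like the following minimal sketch; it assumes conn is an open java.sql.Connection using the Postgres driver, which returns metadata for a prepared statement without executing it:
import java.sql.Connection

def columnsOf(conn: Connection, query: String): Seq[(String, String)] = {
  val ps = conn.prepareStatement(query)
  try {
    val md = ps.getMetaData   // the server's RowDescription, surfaced as ResultSetMetaData
    (1 to md.getColumnCount).map(i => md.getColumnLabel(i) -> md.getColumnTypeName(i))
  } finally ps.close()
}

// columnsOf(conn, "SELECT id, created_at FROM some_table")
// might yield something like Seq(("id", "int4"), ("created_at", "timestamptz"))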
If you want a read-only, low-overhead, client-independent solution, then you could also write a server-side C function to prepare and describe the query via SPI. Writing and building C functions comes with a bit of a learning curve, but you can find numerous examples on PGXN, or within Postgres' own contrib modules.

How can I use ormlite to escape my insert?

I have ormlite integrated into an application I'm working on. Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use. The data isn't user input but still requires proper escaping to handle basic gotchas like apostrophes.
Ideas I've burned through:
Dao.create() writes to the database directly, so that's a no-go.
QueryBuilder can't handle inserts.
JdbcDatabaseConnection.compileStatement() might work but the amount of setup required is inappropriate.
Using a java.sql.PreparedStatement has a reasonable enough interface (if toString() returns the SQL like I would hope) but it's not compatible with ormlite's connection types.
This seems like it should be very easy, but if so, I can't find the right combination of method calls to make it happen.
Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use.
Interesting. So one hack would be to use the MappedCreate class. The MappedCreate.build(...) method takes a DatabaseType and a TableInfo, which is available from dao.getTableInfo().
The mappedCreate.toString() exposes the generated INSERT statement (with a prefix), which might help, but you would still need to convert the ? arguments into the actual values with escaped quotes. That you would have to do in your own code.
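That last step might look something like this naive sketch (my own code, not an ormlite API): it only handles strings and numbers, escapes single quotes by doubling them, and assumes no ? appears inside a literal, which is fine for dumping debug files but not a general-purpose SQL escaper:
// Substitute each ? placeholder with an escaped literal, left to right.
def inlineArgs(sqlWithPlaceholders: String, args: Seq[Any]): String =
  args.foldLeft(sqlWithPlaceholders) { (sql, arg) =>
    val literal = arg match {
      case s: String => "'" + s.replace("'", "''") + "'"
      case n: Number => n.toString
      case null      => "NULL"
      case other     => "'" + other.toString.replace("'", "''") + "'"
    }
    sql.replaceFirst("\\?", java.util.regex.Matcher.quoteReplacement(literal))
  }

// inlineArgs("INSERT INTO person (name) VALUES (?)", Seq("O'Brien"))
//   => INSERT INTO person (name) VALUES ('O''Brien')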
Hope this helps somewhat.

JavaM API for GT.M - SELECT support

I'm wondering whether there's any way to use or implement a SELECT query with the JavaM API for the GT.M database system. I'm using version 0.1, since I haven't found any other version ( https://github.com/Gadreel/javam/blob/master/README.md ).
If there's no option yet, could you recommend another Java API for this DBMS? I know there's gtm4j ( http://code.vistaehr.com/gtm4j ), but it relies on the Spring framework, which is not convenient for me.
I'm new to GT.M and I just want to test how to connect to it from Java and run some basic queries. Thanks a lot for your advice.
The database side of GT.M is a hierarchical key-value store, so a feature like SELECT (I'm guessing you want a full SQL SELECT) needs to be implemented by some framework (either an existing one or one you create).
From a quick look at the JavaM API, it seems to only offer/showcase a Java interface to the features GT.M itself provides. So I think you would have to implement the SQL SELECT feature yourself, in Java.
That said, it is possible that whatever you wanted a SQL SELECT for can be done easily using the standard GT.M / JavaM API, in which case there would be no need to implement a full SQL SELECT.
Actually, you could use M to write a parser for your SELECT command syntax. That would certainly be easier to do with the GTMJI plug-in for full-duplex GT.M/Java communication that FIS has just released.
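For a feel of what "implementing SELECT yourself" ends up looking like on top of a key-value traversal, here is a purely illustrative Scala sketch; the Record shape and the readAll name are made up, so substitute whatever JavaM actually gives you for walking a global and reading its nodes:
// A hypothetical row read from a global: its subscript path plus its value.
case class Record(keys: Seq[String], value: String)

// "SELECT ... WHERE ..." reduced to filtering a traversal of the store.
def select(rows: Iterator[Record])(where: Record => Boolean): Iterator[Record] =
  rows.filter(where)

// e.g. all ^PERSON nodes whose value contains "Smith":
// select(readAll("^PERSON"))(_.value.contains("Smith"))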

Is there a way to create query specific functions in SQL Server 2008?

In an effort to adhere to the DRY principle, I have some code I feel could easily live in a function. I may need to reuse this code at some point in the future, or I may not. Ideally I would have a function that lives just in this piece of code, as it provides no benefit to the database as a whole, and living inside any of the existing schemas will create noise when trying to find meaningful and globally useful functions.
I have tried to write a script which uses typical syntax to create a function before my other code and drop the function at the end of the code. This is less than ideal because of potential collisions in the future, but an acceptable risk. Unfortunately I get an error:
'CREATE FUNCTION' must be the first statement in a query batch.
Adding semi-colons before and after the statement unfortunately is not a quick fix. Is there no way to quickly use functions without building them into the framework of the database?
Or am I asking the wrong question: is there a way to force separate batches within one script?
If you're truly running a "batch" (e.g. a set of T-SQL commands run in Query Analyzer or osql), then simply use "GO". Your "CREATE FUNCTION" should work if it's the first line after a "GO" - again, depending on your T-SQL interpreter. osql: should work. An ADO connection in a VB6 program: definitely won't work.
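The point being that GO is not T-SQL at all; it is a client-side batch separator that tools like SSMS, sqlcmd and osql recognise. If you are driving SQL Server from your own code, you get the same effect by splitting the script on GO lines yourself and sending each piece as its own statement. A rough JDBC sketch (assumes conn is an open java.sql.Connection to SQL Server):
import java.sql.Connection

def runScriptWithGo(conn: Connection, script: String): Unit =
  script.split("(?mi)^\\s*GO\\s*$")   // split on lines containing only GO
    .map(_.trim)
    .filter(_.nonEmpty)
    .foreach { batch =>
      val stmt = conn.createStatement()
      try stmt.execute(batch) finally stmt.close()
    }

// Each CREATE FUNCTION then starts its own batch, so the
// "must be the first statement in a query batch" error goes away.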

variable table or column names in a function

I'm trying to search all tables and columns in a database, a la here. The suggested technique is to construct SQL query strings and then EXEC them. This works well, as a stored procedure. (Another example of variable table/column names is here. Again, EXEC is used to execute "dynamic SQL".)
However, my app requires that I do this in a function, not an SP. (Our development framework has trouble obtaining results from an SP.) But in a function, at least on SQL Server 2008 R2, you can't use EXEC; I get this error:
Invalid use of a side-effecting operator 'INSERT EXEC' within a function.
According to the answer to this post, apparently by a Microsoft developer, this is by design; it has nothing to do with the INSERT, only the fact that when you execute dynamically-constructed SQL code, the parser cannot guarantee a lack of side effects. Therefore it won't allow you to create such a function.
So... is there any way to iterate over many tables/columns within a function?
I see from BOL that
The following statements are valid in a function: ...
EXECUTE statements calling extended stored procedures.
Huh - How could extended SP's be guaranteed side-effect free?
But that doesn't help me anyway:
The extended stored procedure, when it is called from inside a function, cannot return result sets to the client. Any ODS APIs that return result sets to the client will return FAIL. The extended stored procedure could connect back to an instance of SQL Server; however, it should not try to join the same transaction as the function that invoked the extended stored procedure.
Since we need the function to return the results of the search, an ESP won't help.
I don't really want to get into extended SPs anyway: adding another programming language to the mix would complicate our development environment more than it's worth.
I can think of a few solutions right now, none of which is very satisfactory:
First call an SP that produces the needed data and puts it in a table, then select from the function which merely reads the result from the table; this could be trouble if the search takes a while and two users' searches overlap. Or,
Have the application (not the function) generate a long query naming every table and column name from the db. I wonder if the JDBC driver can handle a query that long. Or,
Have the application (not the function) generate a long series of short queries naming every table and column name from the db. This will make the overall search a lot slower.
Thanks for any suggestions.
P.S. Upon further searching, I stumbled across this question which is closely related. It has no answers.
Update: No longer needed
I think this question is still valid, and we may again have a situation where we need it. However, I don't need an answer anymore for the present problem. After much trial and error I managed to get our application framework to retrieve row results from the stored procedure via the JDBC driver, so getting the thing to work as a function is unnecessary.
But if anyone posts an answer here that helps with the stated problem, I will be happy to upvote and/or accept it as appropriate.
An SP is basically a predefined SQL statement with some add-ons.
So if you had (pseudocode):
CREATE PROCEDURE SP_DoSomething AS
SELECT * FROM MyTable
END
and you can't use the SP, then you just execute the SQL directly, as in "SELECT * FROM MyTable".
As for that naff SQL code: for a start, you could join the table catalog to the column catalog with a WHERE clause, which would get rid of that line-by-line IF stuff.
Ask another question, something like "how could this be improved?"; there's lots of scope for better attempts than mine.
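To make that concrete, here is a hedged sketch of option 2 from the question: let the application, rather than a T-SQL function, build one long search query from the catalog views and run it over JDBC. The join of sys.tables to sys.columns replaces the line-by-line logic; it assumes conn is an open connection to SQL Server and that only character columns need searching:
import java.sql.Connection
import scala.collection.mutable.ListBuffer

def buildSearchSql(conn: Connection, searchTerm: String): String = {
  // one pass over the catalogs instead of a cursor per table
  val meta = conn.createStatement().executeQuery(
    """SELECT t.name AS table_name, c.name AS column_name
      |FROM sys.tables t
      |JOIN sys.columns c ON c.object_id = t.object_id
      |JOIN sys.types ty ON ty.user_type_id = c.user_type_id
      |WHERE ty.name IN ('char', 'varchar', 'nchar', 'nvarchar')""".stripMargin)
  val selects = ListBuffer.empty[String]
  val escaped = searchTerm.replace("'", "''")
  while (meta.next()) {
    val (t, c) = (meta.getString("table_name"), meta.getString("column_name"))
    selects += s"SELECT '$t' AS tbl, '$c' AS col, COUNT(*) AS hits FROM [$t] WHERE [$c] LIKE '%$escaped%'"
  }
  selects.mkString("\nUNION ALL\n")   // run the resulting query with a plain Statement
}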