Slick 3 - how to get correct (database) schema when inserting with plain SQL - postgresql

I'm trying to get a basic plain SQL example working in Slick 3 on Postgres, but with a custom DB schema, say local instead of the default public one. I have a hard time inserting a row, as executing the following
sqlu"INSERT INTO schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
says
org.postgresql.util.PSQLException: ERROR: relation "schedule" does not exist
The table is in place, because when I prefix schedule with local. in the insert statement it works as expected. How can I get the correct schema provided to this query?
I'm using it as part of akka-projection handler and all the projection internals like maintaining offsets work as expected on local schema.
I cannot simply pass the schema in as a variable, since that fails while resolving parameters:
sqlu"INSERT INTO ${schema}.schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"

You can splice the schema name in literally using #${value}:
sqlu"INSERT INTO #${schema}.schedule(user_id, product_code, run_at) VALUES ($userId, $code, $nextRun)"
Note that #$ performs plain string interpolation with no escaping, so only use it with trusted values such as a configured schema name, never with user input.

Related

View query used in Data Factory Copy Data operation

In Azure Data Factory, I'm doing a pretty vanilla 'Copy Data' operation. One dataset to another.
Can I view the query being used to perform the copy operation? Apparently, it's a syntax error, but I've only used drag-and-drop menus. Here's the error:
ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A
database operation failed. Please search error to get more
details.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Incorrect
syntax near the keyword 'from'.,Source=.Net SqlClient Data
Provider,SqlErrorNumber=156,Class=15,ErrorCode=-2146232060,State=1,Errors=[{Class=15,Number=156,State=1,Message=Incorrect
syntax near the keyword 'from'.,},],'
Extra context
1. Clear the schema and import the schema again.
2. This mostly happens when the table schema changed after the pipeline and datasets were created; verify that once.
3. The schema and datasets should be refreshed whenever the SQL table schema changes.
4. For table or view names used in the query, use brackets, e.g. [dbo].[persons] (see the sketch after this list).
5. In the datasets, select the table name.
6. Try to publish before testing.
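For instance, a bracket-quoted query against the sample table used below (the column names other than PersonID are illustrative):
SELECT [PersonID], [LastName], [FirstName] FROM [dbo].[persons];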
I followed the same scenario and reproduced it. I did not get any error; the above error mainly happens because of the schema.
Source dataset:
In the source dataset, I manually added and connected to a SQL database with a sample SQL table.
Sink dataset:
In the sink dataset, I added another SQL database with auto-create table, write behavior: Upsert, key column: PersonID.
Before execution there was no table in the sink SQL database; after the execution succeeded, the copied rows appeared in the Azure SQL database.

On Google Data Studio, using PostgreSQL data, how do I "SELECT * ..." but for camelCase columns?

On Google Data Studio, I cannot create a chart from Postgres data if the table columns are in camelCase. I have data in PostgreSQL that I want to build charts from. Integrating it as a data source works fine, but I have a problem when creating a chart.
After creating a chart and selecting a data source, I try to add a column, which results in this error:
Error with SQL statement: ERROR: column "columnname" does not exist Hint: Perhaps you meant to reference the column "table.columnName". Position: 8
It just so happens that all my columns are in camelCase. Is there no way around this? Surely this is a basic question that has been resolved.
When connecting to your data source, try using 'Custom query' instead of selecting a table from your database. Then manually write your SQL query, casting your camelCase column names to lowercase using SQL aliases. It worked for me.
example:
SELECT
"camelCaseColA" AS cola,
"camelCaseColB" AS colb,
"camelCaseColC" AS colc
FROM
tableName

Default schema for native SQL queries (spring-boot + hibernate + postgresql + postgis)

I am introducing Spring to an existing application (Hibernate was already there) and encountered a problem with native SQL queries.
A sample query:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM OUR_TABLE;
OUR_TABLE is in OUR_SCHEMA.
When we connect to the db with OUR_SCHEMA as the current schema:
spring.datasource.url: jdbc:postgresql://host:port/db_name?currentSchema=OUR_SCHEMA
the query fails because function ST_MAKEPOINT is not found - the function is located in schema: PUBLIC.
When we connect to the db without specifying the schema, ST_MAKEPOINT is found and runs correctly, though the schema name then has to be added to the table name in the query.
As we are talking about thousands of such queries, and all the tables are located in OUR_SCHEMA, is there a way to specify the default schema while keeping the functions from the PUBLIC schema visible?
So far, I have tried the following Spring Boot properties, with no success:
spring.jpa.properties.hibernate.default_schema: OUR_SCHEMA
spring.datasource.tomcat.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
spring.datasource.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
Also, it worked before switching to springboot config - specifying hibernate.default-schema = OUR_SCHEMA in persistence.xml was enough.
Stack:
spring-boot: 2.0.6
hibernate: 5.3.1.Final
postgresql: 42.2.5
postgis: 2.2.1
You're probably looking for the PostgreSQL search_path variable, which controls which schemas are checked when trying to resolve database object names. The path accepts several schema names, which are checked in order, so you can use the following:
SET search_path=our_schema,public;
This will make PostgreSQL look for your tables (and functions!) first in our_schema, and then in public. Your JDBC driver may or may not support multiple schemas in its current_schema parameter.
Another option is to install the PostGIS extension (which provides the ST_MakePoint() function) in the our_schema schema:
CREATE EXTENSION postgis SCHEMA our_schema;
This way you only have to have one schema in your search path.
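To check which schema the PostGIS extension currently lives in, a sketch querying the system catalogs:
SELECT e.extname, n.nspname
FROM pg_extension e
JOIN pg_namespace n ON n.oid = e.extnamespace
WHERE e.extname = 'postgis';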
The JDBC parameter currentSchema explicitly allows specifying several schemas, separated by commas:
jdbc:postgresql://postgres-cert:5432/db?currentSchema=my,public&connectTimeout=4&ApplicationName=my-app
From https://jdbc.postgresql.org/documentation/head/connect.html
currentSchema = String
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.
Note you probably need Postgres 9.6 or better for currentSchema support.
P.S. Probably a better solution is to set search_path per user:
ALTER USER myuser SET search_path TO mydb,pg_catalog;
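If you would rather pin it for every connection to the database instead of per user, the same setting can also be applied at the database level (a sketch; the database and schema names are illustrative):
ALTER DATABASE db_name SET search_path TO our_schema, public;
New sessions connecting to that database then pick up the setting automatically.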
If you use hibernate.default_schema, then for native queries you need to provide the {h-schema} placeholder, like this:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM {h-schema}OUR_TABLE;
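With hibernate.default_schema set to OUR_SCHEMA, Hibernate expands the placeholder (including the trailing dot), so the statement that reaches PostgreSQL should look like:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM OUR_SCHEMA.OUR_TABLE;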

Accessing table without using schema name

I am new to DB2.
I am not able to get data from a table without using the schema name. If I use a schema name with table name, I am able to fetch data.
Example:
SELECT * FROM TABLE_NAME;
It gives me an error, while
SELECT * FROM SCHEMA_NAME.TABLE_NAME;
fetches a result.
What do I have to set up so that I don't always have to use the schema name?
By default, your username is used as the schema name for unqualified object names. You can see the current schema with, e.g., VALUES CURRENT SCHEMA. You can change the current schema for your current session with SET SCHEMA new_schema_name, or via e.g. a JDBC connection parameter. Most query tools also have a place to specify/change the current schema.
See the manual page for SET SCHEMA https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0001016.html
The full rules for the qualification of unqualified objects are here: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000720.html#r0000720__unq-alias
E.g.
Unqualified alias, index, package, sequence, table, trigger, and view names are implicitly qualified by the default schema.
However, you can create a public alias for a table, module or sequence if you wish to be able to reference it regardless of your CURRENT SCHEMA value.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000910.html
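A sketch of such a public alias (the names are illustrative; check the CREATE ALIAS syntax for your Db2 version):
CREATE PUBLIC ALIAS TABLE_NAME FOR TABLE SCHEMA_NAME.TABLE_NAME;
Afterwards, SELECT * FROM TABLE_NAME can resolve via the alias even when CURRENT SCHEMA points elsewhere.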
(P.S. all the above assumes you are using Db2 LUW)
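As an example of the JDBC route mentioned above, with the IBM Data Server JDBC driver the default schema can be set directly in the connection URL (a sketch; host, port, and names are illustrative):
jdbc:db2://host:50000/DBNAME:currentSchema=SCHEMA_NAME;
Note that the property list after the database name starts with a colon and each property must end with a semicolon.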
Try using SET SCHEMA to set the default schema to be used in the session:
SET SCHEMA SCHEMA_NAME;
SELECT * FROM TABLE_NAME;
When using DBeaver: right-click the connection > Connection Settings > Initialization, and select your default DB and default schema.
After that, open your SQL script and select the active DB.

Postgres pg_dump now stored procedure fails because of boolean

I have a stored procedure that has started to fail for no reason. Well there must be one but I can't find it!
This is the process I have followed a number of times before with no problem.
The source server works fine!
I did a pg_dump of the database on the source server and imported it onto another server. This is fine: I can see all the data and do updates.
Then, on the imported database, which has 2 identical schemas, I run a stored procedure that does the following:
For each table in schema1
Truncate table in schema2
INSERT INTO schema2."table" SELECT * FROM schema1."table" WHERE "Status" in ('A','N');
Next
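For reference, a minimal PL/pgSQL sketch of that loop (untested; it assumes the schemas are named schema1 and schema2 as above, and it deliberately keeps the SELECT * that turns out to be the culprit):
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'schema1' LOOP
        EXECUTE format('TRUNCATE TABLE schema2.%I', t);
        EXECUTE format('INSERT INTO schema2.%I SELECT * FROM schema1.%I WHERE "Status" IN (''A'',''N'')', t, t);
    END LOOP;
END $$;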
However, this now gives me an error when it did not before. The error is:
*** Error ***
ERROR: column "HBA" is of type boolean but expression is of type integer
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Why am I getting this? The only difference between the last time I followed this procedure and this time is that the table in question now has an extra column, so the "HBA" boolean column is no longer the last field. But then why would it work in the original database?
I have tried removing all data and dropping and rebuilding the table; these all fail.
However, if I drop the column and add it back in, it works. Is there something about boolean fields that means they need to be the last field?
Any help greatly appreciated.
Using Postgres 9.1
The problem here: the tables in the different schemas had a different column order.
If you do not explicitly specify the column list and order in INSERT INTO table(...), or you use SELECT *, you are relying on the physical column order of the table (and now you see why that is a bad thing).
You were trying to do something like
INSERT INTO schema2.table1(id, bool_column, int_column) -- based on the order of columns in schema2.table1
select id, int_column, bool_column -- based on the order of columns in schema1.table1
from schema1.table1;
And such a query causes a cast error because the column types do not match.
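The fix is to list the columns explicitly on both sides, so the statement no longer depends on physical column order. A sketch using the question's copy pattern (the column names are illustrative):
INSERT INTO schema2.table1 (id, bool_column, int_column)
SELECT id, bool_column, int_column
FROM schema1.table1
WHERE "Status" IN ('A', 'N');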