npgsql search_path not working - postgresql

I have a script like the following:
SET search_path = MySchema;
INSERT INTO MyTable DEFAULT VALUES;
MyTable is actually created in MySchema, so if I change the script to
INSERT INTO MySchema.MyTable DEFAULT VALUES;
it works.
Now I'm generating some SQL files to recreate a database structure, so the scripts are generated with "SET search_path = MySchema;".
Is there a way to make this work?
Using Npgsql version 3.0.4.0.

My mistake: it's not related to the schema itself. The issue appears when a single command mixes DDL and DML instructions together.
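As far as I can tell, Npgsql 3.x prepares (parses) every statement in a multi-statement command before executing any of them, so a SET search_path (or a CREATE TABLE) earlier in the batch has not taken effect yet when the statements after it are parsed. A minimal sketch of the workaround, assuming the generated script can be split and each part executed as its own command:

-- First round trip: change the search path on its own.
SET search_path = MySchema;

-- Second round trip: the DML now resolves MyTable against MySchema.
INSERT INTO MyTable DEFAULT VALUES;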

PostgreSQL rename table named with a keyword

I have a table named import.
I want to rename the table with the following statement in a sql script below.
Unfortunately I can't, because SQL treats the term import as a keyword.
How can I change the name in a sql script?
I have a database change management process in place, also called database migration or database upgrading: the process of managing changes to a database over the course of an application's lifecycle. What could change in a database? The database structure (i.e. the tables) and master data, but indices, triggers and stored procedures could also be added, changed or deleted over time.
ALTER TABLE import
RENAME TO api_exchange;
I am aware I can change the table name with a PostgreSQL client, but I need to do it in a SQL script for PostgreSQL 10 in order to keep my database change management intact.
You can quote reserved words using double quotes:
-- \i tmp.sql
CREATE TABLE "select"(id integer);
ALTER TABLE "select"
RENAME TO api_exchange;
\d api_exchange
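Applied to the table from the question, the same quoting technique gives:

ALTER TABLE "import"
RENAME TO api_exchange;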

Default schema for native SQL queries (spring-boot + hibernate + postgresql + postgis)

I am introducing Spring to an existing application (Hibernate was already there) and have encountered a problem with native SQL queries.
A sample query:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float))
FROM OUR_TABLE;
OUR_TABLE is in OUR_SCHEMA.
When we connect to the db with OUR_SCHEMA as the current schema:
spring.datasource.url: jdbc:postgresql://host:port/db_name?currentSchema=OUR_SCHEMA
the query fails because the function ST_MAKEPOINT is not found - the function is located in the PUBLIC schema.
When we connect to the db without specifying the schema, ST_MAKEPOINT is found and runs correctly, though the schema name then needs to be added to the table name in the query.
As we are talking about thousands of such queries, and all the tables are located in OUR_SCHEMA, is there a way to specify the default schema so that functions from the PUBLIC schema remain visible?
So far, I have tried the following Spring Boot properties, with no success:
spring.jpa.properties.hibernate.default_schema: OUR_SCHEMA
spring.datasource.tomcat.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
spring.datasource.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
Also, it worked before switching to the Spring Boot config - specifying hibernate.default-schema = OUR_SCHEMA in persistence.xml was enough.
Stack:
spring-boot: 2.0.6
hibernate: 5.3.1.Final
postgresql: 42.2.5
postgis: 2.2.1
You're probably looking for the PostgreSQL search_path variable, which controls which schemas are checked when resolving unqualified database object names. The path accepts several schema names, which are checked in order, so you can use the following:
SET search_path=our_schema,public;
This will make PostgreSQL look for your tables (and functions!) first in our_schema, and then in public. Your JDBC driver may or may not support multiple schemas in its currentSchema parameter.
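For example, with the table and column names taken from the question, both the table in our_schema and the PostGIS function in public then resolve without qualification:

SET search_path = our_schema, public;

-- The table resolves against our_schema, the function against public:
SELECT ST_MakePoint(cast(longitude as float), cast(latitude as float))
FROM our_table;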
Another option is to install the PostGIS extension (which provides the ST_MakePoint() function) in the our_schema schema:
CREATE EXTENSION postgis SCHEMA our_schema;
This way you only have to have one schema in your search path.
The JDBC parameter currentSchema explicitly allows specifying several schemas, separated by commas:
jdbc:postgresql://postgres-cert:5432/db?currentSchema=my,public&connectTimeout=4&ApplicationName=my-app
From https://jdbc.postgresql.org/documentation/head/connect.html
currentSchema = String
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.
Note that you probably need Postgres 9.6 or later for currentSchema support.
P.S. A probably better solution is to set search_path per user:
ALTER USER myuser SET search_path TO mydb,pg_catalog;
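To verify that the per-user setting took effect, open a new session as that user and run:

SHOW search_path;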
If you use hibernate.default_schema, then for native queries you need to provide the {h-schema} placeholder, something like this:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM {h-schema}OUR_TABLE;
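With hibernate.default_schema set (as in the properties the question already tried), Hibernate replaces {h-schema} with the configured schema name plus a trailing dot, so the query above should run as SELECT ... FROM OUR_SCHEMA.OUR_TABLE.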

SET/RESET command in ALTER DATABASE is not supported

Encountered this issue when trying to modify the search_path to my new Redshift db.
Presently, I've migrated the contents of my MySQL db into a Redshift cluster via AWS' Data Migration Service. The data was imported into a schema, let's call it my_schema. When I try to execute queries against the cluster, it requires me to prefix table names with the schema name,
i.e.
select * from my_schema.my_table
I wanted to change the setup so that I can reference the table directly without needing the prefix. After a bit of looking around I found out that this was possible by modifying the search_path attribute.
First I tried doing this by running
set search_path = "$user", my_schema;
This appeared to work, but then I realized it was simply setting my_schema as the default schema for the current session; I wanted it set at the database level. I found several sources saying that the way to do this was to use the ALTER command, like so:
alter database my_db set search_path = "$user", public, my_schema
However, running this command results in the following error which somehow shows up in 0 google results:
SET/RESET command in ALTER DATABASE is not supported
I'm pretty baffled that the above error has never had a post made about it, but I'm also pretty interested in figuring out how to resolve my initial issue of setting a global default schema for my Redshift cluster.
ALTER DATABASE ... SET is not supported in Redshift. However, you can SET/RESET configuration parameters at the USER level using ALTER USER ... SET SEARCH_PATH TO <SCHEMA1>,<SCHEMA2>;
Please check: http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html
http://docs.aws.amazon.com/redshift/latest/dg/r_search_path.html
When you set the search_path to <SCHEMA1>,<SCHEMA2> in db1 for a user, it is not just for the current session; it will be set for all future sessions.
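A minimal sketch, with hypothetical user and schema names in place of the placeholders:

-- Takes effect for all future sessions of this user:
ALTER USER my_user SET search_path TO my_schema, public;

-- Confirm from a new session:
SHOW search_path;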

PSQL was inferring schema, but how?

We're in the process of upgrading our software from PostgreSQL 9.2 to 9.6 and we've run into an odd issue.
Our installation runs a SQL script to create the database. This is done using psql -f. This worked fine under 9.2, but under 9.6 some objects are not being created. I've been looking into this and found something odd in the SQL script. Most of the tables are created using statements that look like this:
--
-- Name: crawler_run; Type: TABLE; Schema: analytics; Owner: postgres; Tablespace:
--
CREATE TABLE IF NOT EXISTS crawler_run (
... columns, etc.
);
--
ALTER TABLE analytics.crawler_run OWNER TO postgres;
Note that there is no schema in the CREATE TABLE statement, yet the tables were being created in the correct schema and the subsequent ALTER TABLE statement was not failing.
My best guess is that the preceding comment has something to do with it, but I've not been able to find any documentation to support that.
So how was this working?
Tables are created in the first schema of the user/role search_path, which is either set permanently or just for the current session.
Look for a statement like:
SET search_path = analytics
In your case it was presumably analytics, and now it is probably back to the default, public.
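A quick demonstration of that rule, using the table name from the question and a placeholder column:

SET search_path = analytics, public;

-- The unqualified name lands in the first schema on the path,
-- so this creates analytics.crawler_run:
CREATE TABLE IF NOT EXISTS crawler_run (id integer);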

restoring database from pg_dump file creates strange tables

I have backup created like this:
pg_dump dbname > file
I am trying to restore the database (after drop database and create database) like this:
psql dbname < file
What I get is a database full of tables that are created with dbname.tablename instead of just tablename.
How do I restore a postgres database making sure the tables it creates has just tablename and not dbname.tablename?
Thanks to @Craig Ringer for pointing me in the right direction.
Yes, there was a SET search_path in the dump for the original DB. This is what created the tables with schema names prefixed.
Removing or commenting those lines out of the backup script created tables without a schema prefix, which was desirable. But that restore wasn't complete, and many tables got left out.
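For reference, the lines in question look roughly like this near the top of a dump file (the exact form varies by pg_dump version, and the schema name here follows the question's naming):

SET search_path = dbname, pg_catalog;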
So I did the restore by the usual means, with tables created under the schema prefix. The SQL query scripts then broke because they were not specifying the schema names when querying the tables. To fix this, I followed https://stackoverflow.com/a/2875705/1945517:
ALTER ROLE <your_login_role> SET search_path TO dbname;
This fixed the broken queries.