I am working on a multi-tenant Spring Boot application using PostgreSQL with a separate schema per tenant. Everything worked fine until we needed to use PostgreSQL extensions; then we started getting errors about missing types.
ERROR: function text2ltree(character varying) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 266
The extension is installed using Liquibase and I have updated the changesets to perform the following:
CREATE SCHEMA myschema;
CREATE EXTENSION IF NOT EXISTS ltree WITH SCHEMA myschema;
ALTER DATABASE mydb SET search_path TO "$user", myschema;
Then I can control where the extension is installed, and updating the search path should avoid having to specify the schema all the time. I have run the Liquibase migration and checked the PostgreSQL configuration for search_path, and it works: I can run queries (using SQuirreL) without needing to prefix the ltree extension's types and functions with the schema.
From the application, however, it does not work as expected: I still get the same error message. If I explicitly prefix the ltree types and functions with myschema, I get an error indicating that the operator is not found.
ERROR: operator does not exist: myschema.ltree <# myschema.ltree
Hint: No operator matches the given name and argument types. You might need to add explicit type casts.
Position: 263
For multi-tenancy, we implement a custom org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider that sets the schema in the MultiTenantConnectionProvider.getAnyConnection() method.
I want to simplify the setup and avoid having to include the extension's schema everywhere, but apart from the search_path configuration I cannot find anything related. Is there a way to implement the described setup so that the extension schema does not have to be prefixed throughout the code?
If I install the extensions in the pg_catalog schema, then everything works as expected. See this answer.
I want to avoid installing the extensions in the pg_catalog schema, so I took a closer look at the PGConnection class. It reveals that the method that sets the default schema updates the search_path value to the single schema given:
StringBuilder sb = new StringBuilder();
sb.append("SET SESSION search_path TO '");
Utils.escapeLiteral(sb, schema, getStandardConformingStrings());
sb.append("'");
stmt.executeUpdate(sb.toString());
It escapes the schema argument as one literal, so if I try to pass multiple schemas separated by commas, the whole expression is escaped as a single schema name.
The solution was to create my own setSchema method in the MultiTenantConnectionProvider class. It does the same as the code above, but appends my extensions' schema to the search_path value.
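For illustration only (the tenant schema name here is hypothetical), the statement my setSchema variant issues per connection is effectively the following, with the extensions' schema appended instead of being swallowed by the literal escaping:
SET SESSION search_path TO "tenant1", myschema;  -- tenant schema first, extensions schema second
The tenant schema stays first so table lookups still resolve per tenant, while the ltree types, functions and operators become visible without a prefix.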
Related
I am trying to install pgcrypto in the pg_catalog schema. But this does not work with Postgres 13 or higher, since the function gen_random_uuid() is already globally available. How can I still create my extension?
I am trying:
CREATE EXTENSION IF NOT EXISTS "pgcrypto" WITH SCHEMA pg_catalog CASCADE
I get the error:
ERROR: function "gen_random_uuid" already exists with same argument types.
All functions in pg_catalog are automatically visible from every schema.
Any function in the pg_catalog schema can be called with the schema prefix, for example pg_catalog.gen_random_uuid(), but also without it, for example gen_random_uuid().
The function gen_random_uuid() is part of the pgcrypto extension, but since PostgreSQL 13 it is also built in, so when you try to enable the extension in pg_catalog it tells you that this function is already installed.
CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA pg_catalog CASCADE;
In most cases it is recommended to install common Postgres extensions into pg_catalog, so that they are available within any schema. All objects in pg_catalog are effectively visible from every other schema. For example, if you put pgcrypto into the public schema, you always have to write public.gen_random_uuid() from any other schema, which is annoying. But if you put it into pg_catalog, you can call gen_random_uuid() from anywhere, which is more convenient.
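A possible workaround on PostgreSQL 13 or higher (the extensions schema name below is only an example) is to install pgcrypto into a dedicated schema rather than pg_catalog; the conflict only arises because gen_random_uuid() already ships as a built-in in pg_catalog:
CREATE SCHEMA IF NOT EXISTS extensions;
CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA extensions CASCADE;
-- the built-in pg_catalog.gen_random_uuid() remains callable without any prefix
If gen_random_uuid() is the only thing you need from pgcrypto, you may not need the extension at all on 13+.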
I'm running PostgreSQL 10, and I have several schemas on my DB with multiple functions. I've created a schemaless script with all the functions in it (I've removed the schema prefix); with this, every time I create a new schema, I run the migration and create all the functions as well.
This was necessary/requested for a better data separation between customers. All the schemas are twins in terms of structure.
All was fine until I noticed that SchemaA was calling a function from public, even if I call:
SchemaA.myFunction(p_param1:= 'A', p_param2:= 'B').
If this "myFunction" calls another function internally, it will target the public schema by default.
The only way I made it work was to add an input parameter called p_user_schema, i.e. myFunction(p_param1, p_param2, p_user_schema), and to add the following statement as the first line of the myFunction body.
EXECUTE FORMAT('SET search_path TO %L', p_user_schema);
I have 147 functions and would need to adapt each of them. Does anyone know a better way to target the caller's schema? By caller I mean the schema prefix used on the main call.
You can set the search path at the function level, with the current user's schema ("$user") as the first entry:
CREATE OR REPLACE FUNCTION schemaA.myfunction()
RETURNS ..
AS $$
...
$$ LANGUAGE SQL
SET SEARCH_PATH = "$user", public;
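If rewriting all 147 definitions is impractical, the same setting can presumably be attached to the existing functions with ALTER FUNCTION (the function name and argument types below are placeholders):
ALTER FUNCTION schemaA.myfunction(text, text) SET search_path = "$user", public;  -- signature is a placeholder
As in the function above, "$user" only helps if each customer connects as a role whose name matches its schema.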
I am introducing Spring into an existing application (Hibernate was already there) and have encountered a problem with native SQL queries.
A sample query:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM OUR_TABLE;
OUR_TABLE is in OUR_SCHEMA.
When we connect to the DB with OUR_SCHEMA as the current schema:
spring.datasource.url: jdbc:postgresql://host:port/db_name?currentSchema=OUR_SCHEMA
the query fails because function ST_MAKEPOINT is not found - the function is located in schema: PUBLIC.
When we connect to the db without specifying the schema, ST_MAKEPOINT is found and runs correctly, though schema name needs to be added to the table name in the query.
As we are talking about thousands of such queries, and all the tables are located in OUR_SCHEMA, is there a way to specify the default schema so that functions from the PUBLIC schema remain visible?
So far, I have tried the following Spring Boot properties, with no success:
spring.jpa.properties.hibernate.default_schema: OUR_SCHEMA
spring.datasource.tomcat.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
spring.datasource.initSQL: ALTER SESSION SET CURRENT_SCHEMA=OUR_SCHEMA
Also, it worked before switching to the Spring Boot config - specifying hibernate.default-schema = OUR_SCHEMA in persistence.xml was enough.
Stack:
spring-boot: 2.0.6
hibernate: 5.3.1.Final
postgresql: 42.2.5
postgis: 2.2.1
You're probably looking for the PostgreSQL search_path variable, which controls which schemas are checked when resolving database object names. The path accepts several schema names, which are checked in order, so you can use the following:
SET search_path=our_schema,public;
This will make PostgreSQL look for your tables (and functions!) first in our_schema, and then in public. Your JDBC driver may or may not support multiple schemas in its currentSchema parameter.
Another option is to install the PostGIS extension (which provides the ST_MakePoint() function) in the our_schema schema:
CREATE EXTENSION postgis SCHEMA our_schema;
This way you only have to have one schema in your search path.
The JDBC param currentSchema explicitly allows specifying several schemas, separated by commas:
jdbc:postgresql://postgres-cert:5432/db?currentSchema=my,public&connectTimeout=4&ApplicationName=my-app
From https://jdbc.postgresql.org/documentation/head/connect.html
currentSchema = String
Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve unqualified object names used in statements over this connection.
Note you probably need Postgres 9.6 or better for currentSchema support.
PS Probably better solution is to set search_path per user:
ALTER USER myuser SET search_path TO mydb,pg_catalog;
If you use hibernate.default_schema, then for native queries you need to provide the {h-schema} placeholder, something like this:
SELECT ST_MAKEPOINT(cast(longitude as float), cast(latitude as float)) FROM {h-schema}OUR_TABLE;
I'm trying to run an inline query on my database - which has the citext extension installed (using CREATE EXTENSION) - and yet the executed query keeps throwing this error when calling a function:
type "citext" does not exist
DO
LANGUAGE plpgsql
$$
DECLARE
_id INT;
BEGIN
SELECT * FROM "dbo"."MyFunction"(_id, 'some value'::citext);
END;
$$;
If I omit the ::citext cast, it says:
function dbo.MyFunction(integer, unknown) does not exist.
You might need to add explicit type casts.
The citext extension is added, is part of the schema and works with other queries. This keeps coming up randomly - what causes it?
EDIT:
The installed extensions:
  extname  |  nspname
-----------+------------
 plpgsql   | pg_catalog
 citext    | public
 uuid-ossp | public
Search path:
show search_path;
search_path
-----------
dbo
As suspected, the extension schema is missing from the search_path. Read up on how to set the schema search path in this related answer:
How does the search_path influence identifier resolution and the "current schema"
It seems like your client sets search_path = dbo on connection, which looks misconfigured. dbo is something we see a lot with SQL Server (it used to be, or still is, the default schema there), but it is very untypical for Postgres. Not sure how you got there.
Why do table names in SQL Server start with "dbo"?
One alternative would be to install extensions into the dbo schema as well:
Best way to install hstore on multiple schemas in a Postgres database?
You can even move (most) extensions to a different schema:
ALTER EXTENSION citext SET SCHEMA dbo;
But I would advise installing extensions into a dedicated schema and including it in the search_path.
Leave plpgsql alone in any case. It's installed by default and should stay in pg_catalog.
One way or another, clean up the mess with varying search_path settings.
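Until then, as a stopgap you can also schema-qualify the type directly in the cast (assuming citext is installed in public, as shown in the edit above):
SELECT * FROM "dbo"."MyFunction"(_id, 'some value'::public.citext);  -- rest of the DO block unchanged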
As for the second question: that's guided by the rules of Function Type Resolution. The call cannot be resolved, because citext does not have an implicit cast to text.
Related
Is there a way to disable function overloading in Postgres
I'm on PostgreSQL 9.1.1, trying to have the unaccent extension available in all schemas.
So I ran the command CREATE EXTENSION unaccent;, which works, but only for the current schema set in search_path. This means that if I change the search_path, I can no longer call unaccent. How do I make this extension available to all schemas in a particular database?
Thanks in advance!
CREATE EXTENSION unaccent; installs the extension into the public schema. To make it usable from elsewhere, simply include public when you change the search_path:
set search_path = my_schema, public;
Or better create a schema to contain all extensions, then always append that schema to the search_path.
create schema extensions;
-- make sure everybody can use everything in the extensions schema
grant usage on schema extensions to public;
grant execute on all functions in schema extensions to public;
-- include future extensions
alter default privileges in schema extensions
grant execute on functions to public;
alter default privileges in schema extensions
grant usage on types to public;
Now install the extension:
create extension unaccent schema extensions;
Then include that schema in the search_path:
set search_path = my_schema, extensions;
If you don't want to repeat the above for every new database you create, run the above steps while being connected to the template1 database. You can even include the extensions schema in the default search_path by either editing postgresql.conf or using alter system
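For example (this sketch assumes superuser access; the cluster-wide default only applies to new sessions after a reload):
alter system set search_path = "$user", public, extensions;  -- cluster-wide default
select pg_reload_conf();                                      -- reload; new sessions pick it up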
I had the same question, but Richard Huxton's answer led to the correct solution:
create extension unaccent schema pg_catalog;
This works!!
As Richard said, pg_catalog is automatically added (silently) to each search_path. Extensions added there will be found.
imho this is much better than schema.func() if the extension is global.
For example, I use a lot of schemas. I use the PUBLIC schema for debugging - everything should be in its own schema. If something is in PUBLIC, it's wrong.
Creating the extension in pg_catalog keeps all the schemas clean, and lets the extension work as if it were part of core Postgres.
You don't. You can always call it fully qualified if you want to.
SELECT <schema>.<function>(...)
In fact, I believe the only reason the built-in functions are always available is that PG implicitly includes pg_catalog in your search_path no matter what you do (it is searched before the listed schemas unless you name it explicitly).
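You can check this with current_schemas(true), which shows the effective search path including the implicitly added schemas; with the default search_path the output is something like {pg_catalog,public}:
SELECT current_schemas(true);  -- e.g. {pg_catalog,public}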