I want to create a test environment where the basic underlying postgres database is overlain with an instance-localized private view, such that all queries from a specific set of processes go through the private view while other (potentially concurrent or merely subsequent) processes would remain unaffected.
I think I can do something like this using the search_path mechanism, but it's not clear if I can do that transparently (e.g., without having each application execute some set of SQL setup for each connection). For example, is there something I could set as an environment variable saying "use this search_path" and have every process that I start thereafter see that and use the same private table instances?
If it matters, the processes are all going through the C++ adapter, libpqxx, to access the database.
Thanks,
Jeff
If every instance has a separate database user role, you can simply create a schema with the same name as the user and it'll use it -- without any change to configuration:
myuser=> show search_path;
search_path
--------------
"$user",public
(1 row)
myuser=> create schema myuser;
CREATE SCHEMA
myuser=> create table foo(i int);
CREATE TABLE
myuser=> \d foo
Table "myuser.foo"
Column Type Modifiers
------ ------- ---------
i integer
If you want to have different names for users and schemas, you can configure it for each user manually:
ALTER USER foo SET search_path=foo_schema;
You can configure the default search path for all connections in the PostgreSQL configuration file (postgresql.conf).
See http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-OTHER
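For example, a line like this in postgresql.conf would apply to every connection (the schema name here is just an illustration):
search_path = '"$user", my_schema, public'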
If each connection needs a custom search path based on who the user is, you will have to do that in your code and issue a SET search_path TO x, y, z; for each connection.
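Regarding the environment-variable idea from the original question: libpqxx sits on top of libpq, and libpq honors the PGOPTIONS environment variable, so a search_path can be passed to every connection opened by processes started from that shell, with no SQL setup in the application code (the schema name below is hypothetical):
export PGOPTIONS='-c search_path=my_private_schema'
./my_app    # any libpq/libpqxx client started from this shell inherits it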
Another option that comes to mind is to use stored functions and have them build dynamic SQL that queries different schemas based on who the caller is. You would have to maintain a mapping table or, the more evil of the two, hard-code the user/schema mappings into the stored functions.
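A minimal sketch of the mapping-table variant (the table, function, and column names are made up for illustration):
-- one row per user, naming the schema that user's queries should target
CREATE TABLE user_schema_map (
    username    text PRIMARY KEY,
    schema_name text NOT NULL
);

-- looks up the caller's schema, then runs the query there via dynamic SQL
CREATE OR REPLACE FUNCTION count_foo() RETURNS bigint
LANGUAGE plpgsql AS $$
DECLARE
    target_schema text;
    result        bigint;
BEGIN
    SELECT schema_name INTO target_schema
    FROM user_schema_map
    WHERE username = current_user;

    EXECUTE format('SELECT count(*) FROM %I.foo', target_schema)
    INTO result;
    RETURN result;
END;
$$;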
Related
When collaborating with colleagues I need to change the schema name every time I receive a SQL script (Postgres).
I am only an ordinary user of a corporate database (no permissions to change anything). Also, we are not allowed to create tables in PUBLIC schema. However, we can use (read-only) all the tables from BASE schema.
It is cumbersome for the team of users, where everybody is creating SQL scripts (mostly just for creating tables) that need to be shared with the others. Every user has their own schema.
Is it possible to change the script below so that I can share it with another user without them having to find and replace the schema name (user1 in this case)?
DROP TABLE IF EXISTS user1.table1;
CREATE TABLE user1.table1 AS
SELECT * FROM base.table1;
You can set the default schema at the start of the script (similar to what pg_dump generates):
set search_path = user1;
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 AS
SELECT * FROM base.table1;
Because the search path was changed to contain user1 as the first schema, tables will be searched for in that schema when dropping and creating. And because the search path does not include any other schema, no other schema will be consulted.
Note that the default search_path is "$user", public, which means that any unqualified table will be searched for, or created, in a schema with the same name as the current user.
Caution
Note that a DROP TABLE will drop the table in the first schema in which it is found. So if table1 doesn't exist in the user's schema but does exist in the public schema, it would be dropped from public. For your use case, setting the path to exactly one schema is therefore the safer option.
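To illustrate the pitfall (assuming table1 exists only in public):
set search_path = user1, public;
DROP TABLE IF EXISTS table1;  -- user1.table1 not found, so public.table1 is dropped!

set search_path = user1;
DROP TABLE IF EXISTS table1;  -- only user1 is searched; public.table1 is safe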
I am migrating this Oracle command to PostgreSQL:
CREATE SYNONYM &user..emp FOR &schema..emp;
Please suggest how I can migrate the above command.
PostgreSQL does not support SYNONYM or ALIAS. SYNONYM is not part of the SQL:2003 standard; it was implemented by Microsoft SQL Server 2005 (I think). While it provides an interesting abstraction, the lack of referential integrity makes it a risk.
That is, you can create a synonym, advertise it to your programmers, and code gets written around it, including stored procedures; then one day the target of the synonym (or link, or pointer) is changed or deleted, leading to a run-time error. I don't even think a PREPARE would catch that.
It is the same trap as symbolic links in Unix or null pointers in C/C++.
Instead, create a view; it is even updatable, as long as it selects from a single table only.
CREATE VIEW user_schema.emp AS SELECT * FROM other_schema.emp;
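Since the view selects from a single table, PostgreSQL (9.3 and later) treats it as automatically updatable, so writes go through it as well; the column names here are made up:
INSERT INTO user_schema.emp (empno, ename) VALUES (7999, 'SMITH');
UPDATE user_schema.emp SET ename = 'JONES' WHERE empno = 7999;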
You don't need synonyms.
There are two approaches:
using the schema search path:
ALTER DATABASE xyz SET search_path = schema1, schema2, ...;
Put the schema that holds the table on the search_path of the database (or user), then it can be used without schema qualification.
using a view:
CREATE VIEW dest_schema.tab AS SELECT * FROM source_schema.tab;
The first approach is good if you have a lot of synonyms for objects in the same schema.
I am about to model a PostgreSQL database, based on an Oracle database. The latter is old and its tables are named according to a three-letter scheme.
E.g. a table that holds parameters for tasks would be named TSK_PAR.
As I model the new database, I'd like to rename those tables to a more descriptive name using actual words. My problem is, that some parts of the software might rely on these old names until they're rewritten and adapted to the new scheme.
Is it possible to create something like an alias that's being used for the whole database?
E.g. I create a new task_parameters table, but add a TSK_PAR alias to it, so that if a SELECT * FROM TSK_PAR is used, it automatically refers to the new name?
Postgres has no synonyms like Oracle.
But for your intended use case, views should do just fine. A view that simply does select * from task_parameters is automatically updatable.
If you don't want to clutter your default schema (usually public) with all those views, you can create them in a different schema, and then adjust the user's search path to include that "synonym schema".
For example:
create schema synonyms;
create table public.task_parameters (
id integer primary key,
....
);
create view synonyms.task_par
as
select *
from public.task_parameters;
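The synonym schema can then be put on the search path of the users that still rely on the old names, e.g. (the role name is hypothetical):
alter role legacy_user set search_path = public, synonyms;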
However, that approach has one annoying drawback: while a table is used by a view, the DDL statements allowed on it are limited, e.g. you can't drop a column the view references, or change its type, without dropping the view first.
As we manage our schema migrations using Liquibase, we always drop all views before applying "normal" migrations, then once everything is done, we simply re-create all views (by running the SQL scripts stored in Git). With that approach, ALTER TABLE statements never fail because there are no views using the tables at that point. As creating a view is really quick, it doesn't add overhead when deploying a migration.
Encountered this issue when trying to modify the search_path of my new Redshift db.
Presently, I've migrated the contents of my MySQL db into a Redshift cluster via AWS' Data Migration Service. The data was imported into a schema, let's call it my_schema. When I try to execute queries against the cluster, it requires me to prefix table names with the schema name,
i.e.
select * from my_schema.my_table
I wanted to change the setup so that I can reference the table directly without needing the prefix. After a bit of looking around I found out that this was possible by modifying the search_path attribute.
First I tried doing this by running
set search_path = "$user", my_schema;
This appeared to work, but then I realized that it was simply setting my_schema as the default schema in the context of the current session, whereas I wanted it set at the database level. I found several sources saying that the way to do this was to use the alter command like so...
alter database my_db set search_path = "$user", public, my_schema
However, running this command results in the following error which somehow shows up in 0 google results:
SET/RESET commmand in ALTER DATABASE is not supported
I'm pretty baffled that the above error has apparently never had a post made about it, but I'm also pretty interested in figuring out how to resolve my initial issue of setting a global default schema for my Redshift cluster.
ALTER DATABASE ... SET is not supported in Redshift. However, you can SET/RESET configuration parameters at the USER level using ALTER USER SET search_path TO <SCHEMA1>,<SCHEMA2>;
Please check: http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html
http://docs.aws.amazon.com/redshift/latest/dg/r_search_path.html
When you set the search_path to <SCHEMA1>,<SCHEMA2> in db1 for a user, it is not just for the current session; it will be set for all future sessions.
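So for the setup above, something like this should make my_schema resolvable without a prefix for that user (the user name is hypothetical):
alter user dms_user set search_path to my_schema, public;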
I have a database shared by many users, all the users are in a group "example" and the vast majority of objects in the database are owned by "example". Very occasionally a user will create a new table - that table gets assigned to the user who created it and so the other users are unable to alter the new table.
Is there a way to have the ownership of a table automatically set to the group "example" rather than to the user who created it? Or a way to set up a trigger that fires after a CREATE TABLE? Or a way to set up group permissions such that all users are considered owners of objects regardless of who actually created them?
You could change the default privileges this way:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO PUBLIC;
or to give write access:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT,INSERT,UPDATE,DELETE ON TABLES TO PUBLIC;
https://www.postgresql.org/docs/9.0/static/sql-alterdefaultprivileges.html
You probably want to use an EVENT TRIGGER
This is doable in all versions of Pg from 9.3 forward, but depending on your version it might require different approaches, since the structures for event triggers have improved significantly.
In earlier versions you could look through the table catalogs for items owned by the current user. In newer versions (9.5+) you can use pg_event_trigger_ddl_commands to get the information you need. You want to run the trigger on ddl_command_end; a sketch follows below.
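A minimal sketch, assuming PostgreSQL 9.5 or later for pg_event_trigger_ddl_commands and the group role example from the question (on versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION; creating event triggers requires superuser):
-- runs after each DDL statement and hands newly created tables to the group role
CREATE OR REPLACE FUNCTION reassign_to_example() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN
        SELECT * FROM pg_event_trigger_ddl_commands()
        WHERE object_type = 'table'
    LOOP
        -- object_identity is already schema-qualified and quoted;
        -- the ALTER works because every user is a member of role "example"
        EXECUTE format('ALTER TABLE %s OWNER TO example', obj.object_identity);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER set_table_owner
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE', 'CREATE TABLE AS')
    EXECUTE FUNCTION reassign_to_example();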