PostgreSQL query over several schemas - postgresql

I have a database where every user has their own schema.
Is there a way to query a table in every schema?
Something like: select id, name from *.simulation doesn't work...
Thank you for your help!

No, you will need to write a function - either a server-side function or a client-side function in whatever language you're using - that executes the query once for each schema.
You could also create a VIEW that does a UNION ALL across all the schemas, but that's going to be a lot of work to maintain if your schemas are added and removed dynamically.
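For the server-side route, a minimal PL/pgSQL sketch (the function name all_simulations is a placeholder, and it assumes every user schema contains a simulation(id integer, name text) table; adjust the schema filter and column types to your setup):

CREATE OR REPLACE FUNCTION all_simulations()
RETURNS TABLE (schema_name text, id integer, name text)
LANGUAGE plpgsql AS $$
DECLARE
    s text;
BEGIN
    FOR s IN
        SELECT nspname FROM pg_namespace
        WHERE nspname NOT LIKE 'pg\_%' AND nspname <> 'information_schema'
    LOOP
        -- Run the same query once per schema and stream the rows out.
        RETURN QUERY EXECUTE
            'SELECT ' || quote_literal(s) || '::text, id, name FROM '
            || quote_ident(s) || '.simulation';
    END LOOP;
END;
$$;

-- Usage:
SELECT * FROM all_simulations();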

Yes you can: use SET search_path TO ... to point to all the schemas. If you don't know all the names of the schemas, wrap it in a function that first selects all schemas and then sets the entire search_path.
http://www.postgresql.org/docs/current/interactive/sql-set.html
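A sketch of those mechanics (user1 and user2 below are placeholder schema names, and the DO block needs PostgreSQL 9.0+). Note that an unqualified table name still resolves to the first schema on the path that contains it, not to all of them:

SET search_path TO user1, user2, public;

-- Build the path dynamically from all non-system schemas:
DO $$
BEGIN
    EXECUTE 'SET search_path TO ' || (
        SELECT string_agg(quote_ident(nspname), ', ')
        FROM pg_namespace
        WHERE nspname NOT LIKE 'pg\_%' AND nspname <> 'information_schema'
    );
END;
$$;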

Related

How to execute graphql query for a specific schema in hasura?

As can be seen in the following screenshot, the current project database (PostgreSQL),
named default, has these 4 schemas: public, appcompany1, appcompany2 and appcompany3.
They share some common tables. Right now, when I want to fetch data for customers, I write a query like this:
query getCustomerList {
  customer {
    customer_id
    ...
    ...
  }
}
And it fetches the required data from public schema.
But according to the requirements, depending on user interactions in the front-end, that query will be executed against appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
NOTE: Whenever the user creates a new company, a new schema is created for that company. So the total number of schemas is not limited to 4.
I suspect that you see a problem where one does not actually exist.
Everything is much simpler than it maybe seems.
A. Where are all those tables?
There are a lot of schemas with identical (or almost identical) objects inside them.
All tables are registered in hasura.
Hasura can't register different tables with the same name, so by default the names will be [schema_name]_[table_name] (except for public).
So table customer will be registered as:
customer (from public)
appcompany1_customer
appcompany2_customer
appcompany3_customer
It's possible to customize the entity name in the GraphQL schema with "Custom GraphQL Root Fields".
B. The problem
But according to the requirements, depending on user interactions in the front-end, that query will be executed against appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
There are identical objects that differ only by their schema-name prefix.
So the solutions are trivial:
1. Dynamic GraphQL query
Application stores templates of GraphQL-queries and replaces prefix with real schema name before request.
E.g.
query getCustomerList {
  [schema]_customer {
    customer_id
    ...
  }
}
Substitute [schema] with appcompany1, appcompany2, appcompanyZ and execute.
2. SQL view for all data
If the tables are 100% identical, then it's possible to create an SQL view like:
CREATE VIEW ALL_CUSTOMERS
AS
SELECT 'public' as schema,* FROM public.customer
UNION ALL
SELECT 'appcompany1' as schema,* FROM appcompany1.customer
UNION ALL
SELECT 'appcompany2' as schema,* FROM appcompany2.customer
UNION ALL
....
SELECT 'appcompanyZ' as schema,* FROM appcompanyZ.customer
This way there is no need for a dynamic query and no need to register all objects in all schemas.
You only need to register the view with the combined data and use one query:
query getCustomerList($schema: String) {
  all_customers(where: {schema: {_eq: $schema}}) {
    customer_id
  }
}
It's hard to call either of these solutions elegant.
I dislike them both myself ;)
So decide for yourself which is more suitable in your case.

Are there any risks involved with changing a table's schema?

I want to know if there are any risks involved when changing a table's schema, for example from dbo. to xyz. or vice versa.
Would like to hear your views on this.
The first thing that crossed my mind is that when you have views, stored procedures, etc. which contain a query like
SELECT *
FROM [sch].[tbl]
Once you move the table to the new schema, you will get 'Invalid object name 'oldSchema.tableName''.
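As a minimal T-SQL illustration (dbo.tbl and xyz are just placeholder names from the question):

-- Move the table to a different schema.
ALTER SCHEMA xyz TRANSFER dbo.tbl;

-- Any view or stored procedure that still references the old two-part name
-- now fails at run time:
SELECT * FROM dbo.tbl;   -- Msg 208: Invalid object name 'dbo.tbl'.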

In SQL Server 2008R2 can I force a View to use objects within the user's default schema instead of the schema in which the View exists?

A bit of background. I have a base application and most clients use it as standard. However some clients have small code and database customisations.
Each of these clients has their own branch and maintenance can be tricky.
I want to consolidate all these into a single database structure (not a single database - we aren't doing multi-tenancy) to enable upgrades to be applied in a much more uniform fashion.
I'm still at the proof of concept stage, but the route I was going down would be to have the standard objects stay in the schema they currently exist in (mostly dbo) and have the custom objects reside in a schema for each client.
For example, I could have dbo.users and client1.users which has some additional columns. If I set the default schema for the client to be "client1" then the following query
SELECT * FROM users
will return data from the client1 schema or the dbo schema depending on which login is connected.
This is absolutely perfect for what I'm trying to achieve.
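For reference, this is roughly how the per-client default schema gets set up (client1_user here is a placeholder user name):

-- Custom objects live in a per-client schema.
CREATE SCHEMA client1 AUTHORIZATION dbo;
ALTER USER client1_user WITH DEFAULT_SCHEMA = client1;

-- Connected as client1_user, an unqualified name resolves to client1.users
-- if it exists, otherwise SQL Server falls back to dbo.users.
SELECT * FROM users;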
The problem I'm running into is with Views.
I have many views which are in the dbo schema and refer to the Users table. No matter which user I connect to the database as, these views always select from dbo.users.
So I'm guessing the question I have is:
Can I prefix the tables in the view with some variable like "DEFAULT"? e.g.
SELECT u.username, u.email, a.level
FROM DEFAULT.users u INNER JOIN accessLevels a ON u.accessID = a.accessID
If this isn't possible and I'm totally barking up the wrong tree, do you have any suggestions as to how I can achieve what I'm setting out to do?
Many thanks.
Just reference the name of the schema in which the views reside...
SELECT a.*, b.*
FROM schema1.TABLEA a
JOIN schema2.TABLEB b ON a.ID = b.ID

Dynamic auditing of data with PostgreSQL trigger

I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but would like (if possible) to make one modification. I would also like to log the primary key's value where it could be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always [table_name]_id. So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value out of the id field to put into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
edit: I also understand that the key value does get logged in either the old/new data fields. But what I would like is to make querying the history easier and more efficient. In other words,
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
another edit: I'm using PostgreSQL 9.1
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like NEW or OLD. This has historically been annoyingly hard, and it is still awkward, but it is at least possible in 9.0.
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.
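For illustration, here is a minimal PL/pgSQL sketch of the dynamic-field idea using the hstore extension (CREATE EXTENSION hstore; on 9.1). The INSERT column list is abbreviated and assumes the record_id column from the question has been added; the real wiki trigger logs considerably more:

CREATE OR REPLACE FUNCTION audit.log_with_record_id() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    pk_column text := TG_TABLE_NAME || '_id';  -- naming convention: [table_name]_id
    pk_value  text;
BEGIN
    -- hstore(record) turns the row into key/value pairs, so the column
    -- can be looked up by its dynamically built name.
    IF TG_OP = 'DELETE' THEN
        pk_value := hstore(OLD) -> pk_column;
    ELSE
        pk_value := hstore(NEW) -> pk_column;
    END IF;

    -- Abbreviated column list; adapt to the wiki's logged_actions definition.
    INSERT INTO audit.logged_actions (schema_name, table_name, action, record_id)
    VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, substring(TG_OP, 1, 1), pk_value);

    RETURN COALESCE(NEW, OLD);
END;
$$;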

View from dynamically generated list of tables

So I basically have a table which holds a list of table names. All the listed tables have exactly the same structure.
Then I have a query template with a placeholder for the table name.
I need to create a view which should return the results of that query UNIONed across all the tables listed in that one setup table.
So far what I've done is create a user-defined function which prepares the complete UNIONed SQL statement.
But this is where I'm stuck: I can't figure out how to execute it in a view and return whatever it returns.
My function returns the SQL statement as text.
I figured out that a UDF can't execute dynamic SQL, so my method won't work. So far I've solved the issue at hand by generating the views, but I would still prefer a more dynamic approach.
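If this is PostgreSQL, the view generation itself can be scripted; a rough sketch, assuming a setup table named table_list(table_name text) and that every listed table exposes the columns (id, name):

DO $$
DECLARE
    union_sql text;
BEGIN
    -- Build "SELECT ... UNION ALL SELECT ..." from the setup table.
    SELECT string_agg(format('SELECT id, name FROM %I', table_name),
                      ' UNION ALL ')
    INTO union_sql
    FROM table_list;

    EXECUTE 'CREATE OR REPLACE VIEW combined_view AS ' || union_sql;
END;
$$;

Re-running the block regenerates the view whenever rows are added to table_list.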