In SQL Server 2008 R2, can I force a View to use objects within the user's default schema instead of the schema in which the View exists? - sql-server-2008-r2

A bit of background. I have a base application and most clients use it as standard. However some clients have small code and database customisations.
Each of these clients has their own branch and maintenance can be tricky.
I want to consolidate all these into a single database structure (not a single database - we aren't doing multi-tenancy) to enable upgrades to be applied in a much more uniform fashion.
I'm still at the proof of concept stage, but the route I was going down would be to have the standard objects stay in the schema they currently exist in (mostly dbo) and have the custom objects reside in a schema for each client.
For example, I could have dbo.users and client1.users which has some additional columns. If I set the default schema for the client to be "client1" then the following query
SELECT * FROM users
will return data from the client1 schema or the dbo schema depending on which login is connected.
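For reference, a minimal sketch of that setup (the schema, table, and user names here are illustrative):

CREATE SCHEMA client1;
GO
CREATE TABLE client1.users (userID int, username nvarchar(50), customColumn nvarchar(50));
GO
-- Point the client's database user at its own schema:
ALTER USER client1_user WITH DEFAULT_SCHEMA = client1;
GO
-- Connected as client1_user this resolves to client1.users;
-- for a user whose default schema is dbo it falls back to dbo.users.
SELECT * FROM users;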
This is absolutely perfect for what I'm trying to achieve.
The problem I'm running into is with Views.
I have many views which are in the dbo schema and refer to the Users table. No matter which user I connect to the database as, these views always select from dbo.users.
So I'm guessing the question I have is:
Can I prefix the tables in the view with some variable like "DEFAULT"? e.g.
SELECT u.username, u.email, a.level
FROM DEFAULT.users u INNER JOIN accessLevels a ON u.accessID = a.accessID
If this isn't possible and I'm totally barking up the wrong tree, do you have any suggestions as to how I can achieve what I'm setting out to do?
Many thanks.

Just reference the name of the schema in which the tables reside...
SELECT a.*, b.*
FROM schema1.TABLEA a
JOIN schema2.TABLEB b ON a.ID = b.ID

Related

How to design security policies for a following system including counters in postgres/supabase if postgres functions are used?

I am unsure how to design security policies for a following system including counters in postgres/supabase. My database includes two tables:
Users:
uuid | name | follower_counter
------------------------------
xyz  | tobi | 1
Following-Relationship:
follower | following
---------------------------
uuid_1   | uuid_2
Once a user follows a different user, I would like to use a postgres function/transaction to
Insert a new following-follower relationship
Update the followed user's counter
begin;
insert into following_relationship (follower, following)
  values (:follower_id, :following_id);
update users set follower_counter = follower_counter + 1
  where uuid = :following_id;
commit;
The constraint should be that the users table (e.g. the name column) can only be altered by the user owning the row. However, the follower_counter should be open to changes from users who start following that user.
What is the best security policy design here? Should I add column-level security, or should I move the counters to a separate table?
Do I have to pass parameters to the "block transaction" to ensure that the update and insert functions are called with the needed rights? With which rights should I call the block function?
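A minimal sketch of the single function the question describes, assuming Supabase's auth.uid() for the caller's id; the function name and the SECURITY DEFINER choice are illustrative, not a given:

create or replace function follow_user(target uuid)
returns void
language sql
security definer  -- runs with the owner's rights, so RLS on users does not block the counter update
as $$
  insert into following_relationship (follower, following)
    values (auth.uid(), target);
  update users
    set follower_counter = follower_counter + 1
    where uuid = target;
$$;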
It might be better to take a different approach to solve this problem. Instead of having a column dedicated to counting the followers, I would recommend actually counting the number of followers when you query the users. Since you already have the Following-Relationship table, we just need to count the rows within the table where following or follower is the querying user.
When you have a counter, it might be hard to keep the counter accurate. You have to make sure the number gets decremented when someone unfollows. What if someone blocks a user? What if a user was deleted? There could be a lot of situations that could throw off the counter.
If you count the number of followings/followers on the fly, you don't need to worry about those situations at all.
Now, an obvious concern you might have with this approach is performance, but you should not worry too much about it. Postgres is a powerful database that has been battle tested for decades, and with a proper index in place, it can easily perform these queries on the fly.
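For instance, a pair of hypothetical indexes on the relationship table would keep these lookups cheap (the index names are illustrative):

create index following_relationship_follower_idx
  on following_relationship (follower);
create index following_relationship_following_idx
  on following_relationship (following);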
The easiest way of doing this in Supabase would be to create a view like the following. Once you create a view, you can query it from your Supabase client just like a typical table!
-- column names follow the question's table definitions
create or replace view profiles as
select
  uuid,
  name,
  (select count(*) from following_relationship f where f.following = users.uuid) as follower_count,
  (select count(*) from following_relationship f where f.follower = users.uuid) as following_count
from users;
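Querying the view then looks like querying any other table, e.g.:

select name, follower_count, following_count from profiles;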

How to execute graphql query for a specific schema in hasura?

The current project database (PostgreSQL), named default, has these 4 schemas: public, appcompany1, appcompany2, and appcompany3.
They share some common tables. Right now, when I want to fetch data for customers, I write a query like this:
query getCustomerList {
  customer {
    customer_id
    ...
    ...
  }
}
And it fetches the required data from public schema.
But according to the requirements, depending on user interactions in front-end, that query will be executed for appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
NOTE: Whenever the user creates a new company, a new schema is created for that company. So the total number of schemas is not limited to 4.
I suspect that you see a problem where none actually exists.
Everything is much simpler than it seems.
A. Where are all those tables?
There are a lot of schemas with identical (or almost identical) objects inside them.
All tables are registered in Hasura.
Hasura can't register different tables with the same name, so by default the names will be [schema_name]_[table_name] (except for public).
So table customer will be registered as:
customer (from public)
appcompany1_customer
appcompany2_customer
appcompany3_customer
It's possible to customize the entity name in the GraphQL schema with "Custom GraphQL Root Fields".
B. The problem
But according to the requirements, depending on user interactions in front-end, that query will be executed for appcompanyN (N=1,2,3,..., any positive integer). How do I achieve this goal?
There are identical objects that differ only in their schema-name prefix.
So the solutions are trivial:
1. Dynamic GraphQL query
Application stores templates of GraphQL-queries and replaces prefix with real schema name before request.
E.g.
query getCustomerList {
  [schema]_customer {
  }
}
Substitute [schema] with appcompany1, appcompany2, appcompanyZ and execute.
2. SQL view for all data
If the tables are 100% identical, then it's possible to create an SQL view like:
CREATE VIEW all_customers AS
SELECT 'public' AS schema, * FROM public.customer
UNION ALL
SELECT 'appcompany1' AS schema, * FROM appcompany1.customer
UNION ALL
SELECT 'appcompany2' AS schema, * FROM appcompany2.customer
UNION ALL
....
SELECT 'appcompanyZ' AS schema, * FROM appcompanyZ.customer;
This way there is no need for a dynamic query and no need to register all objects in all schemas.
You only need to register the view with the combined data and use one query:
query getCustomerList($schema: String!) {
  all_customers(where: {schema: {_eq: $schema}}) {
    customer_id
  }
}
About both solutions: it's hard to call them elegant.
I myself dislike them both ;)
So decide yourself which is more suitable in your case.

Changing a DB View dynamically according the current user-group

We are currently digging into Amazon Redshift and testing different functionalities.
One of our basic requirements is that we will define different user groups which in turn will be granted access to different views.
One way to go about this would be to implement one view separately for each user-group. However, since we have a lot of user-groups that share almost the exact same need for information, I'm looking for a way to implement this more dynamically in Redshift.
For instance, let's say I have a user group called users_london and another one called users_berlin. Both will have access to a view called v_employee_master_data which contains the columns employee_name, employee_job_title and employee_city.
Both groups share the same scope of information with one exception - the column employee_city.
In essence, the view should be pre-filtered for a certain value in the column employee_city according to the currently logged-in user-group.
In SQL - something like this:
For the usergroup users_london:
SELECT * FROM v_employee_master_data WHERE employee_city = 'London';
For the usergroup users_berlin:
SELECT * FROM v_employee_master_data WHERE employee_city = 'Berlin';
Now to make the connection back to Amazon Redshift: does the underlying DB runtime provide out-of-the-box functionality to somehow catch the currently logged-in user's group as a form of global variable and alter the SQL statement according to the value of that variable?
It is possible to do:
Get the current user:
select current_user;
Find which group it belongs to:
select groname from pg_group where current_user_id = any(grolist);
Extract the city and capitalize it:
select initcap(substring(groname from 'users_(.*)')) from pg_group where current_user_id = any(grolist);
Now you have your city based on the "user", so just inject it into the view as a subquery:
... WHERE employee_city = (select initcap(substring(groname from 'users_(.*)')) from pg_group where current_user_id = any(grolist)) ...
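Putting it together, a minimal sketch of such a view, assuming an underlying base table named employee_master_data (that table name is an assumption):

-- Each user group only sees the rows for "its" city.
-- employee_master_data is an assumed base table name.
CREATE VIEW v_employee_master_data AS
SELECT employee_name, employee_job_title, employee_city
FROM employee_master_data
WHERE employee_city = (
    SELECT initcap(substring(groname FROM 'users_(.*)'))
    FROM pg_group
    WHERE current_user_id = ANY(grolist)
);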

FileMaker Pro 12 Auto-populating Tables

I'm new to FileMaker and need some advice on auto-populating tables.
Part 1:
I have TableA which includes many records with client information. I want a separate TableB which is identical to TableA except that it is "de-identified"; that is, it does not contain two of the fields, first name and last name.
I would like the two tables to interact such that if I add a new record to TableA, that same record (sans first and last name) appears automatically in TableB.
Part 2:
In addition to the above functionality, I would also like said functionality to depend on the value of a specific field in TableA. For example, I enter a new record, which has a "status" field set to "active," into TableA. I then want that record auto-populated into TableB; however, if I add another record with a "status" of "inactive," I want that record auto-populated into a TableC but not into TableB.
FileMaker can perform this with script triggers so long as every layout where TableA will be edited has a layout script trigger of OnRecordCommit connected to it. When the record is committed (which can happen in a number of ways), the attached script will run, which you can use to create the appropriate record in the appropriate table.
The script could create the record in a number of ways. If the primary keys for both records are the same, you could use lookups. You could export the record in TableA and then import it into the correct table. You could pass the field information as a parameter to the script. The best choice really depends on your needs.
Having said that, I would question the wisdom of this approach. It brings up a few questions that would seem to complicate matters. For example, what happens when the status changes? When a record in TableA is deleted? When fields in TableA are modified? Each of these contingencies (and others) will require thought and more complicated scripts.
So I would ask what problem you're really trying to solve. My best guess is that you are trying to keep the name information private from certain users. User accounts and privileges with dedicated layouts for each privilege can solve this without the need for duplicate tables. FileMaker privilege sets can be quite granular.
For example, you can specify that users with PrivilegeA can create records and view names, but PrivilegeB users can only view records if the status is "active" and the name fields are not available to them, while PrivilegeC users can view records if the status is "inactive" and the name fields are also not available to them.
I would definitely use filters and permissions on the "status" field to achieve this rather than two mirrored tables. Unless the inactive information is drastically different, you would be complicating your solution and creating more possible pitfalls.

postgresql request over several schema

I have a database where every user has a schema.
Is there a way to query a table in every schema?
Something like: select id, name from *.simulation doesn't work...
Thank you for your help!
No, you will need to write a function - either a server side function or a client side function in whatever language you're using - that executes the query once for each schema.
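A minimal sketch of such a server-side function, assuming every per-user schema contains a simulation(id integer, name text) table; the function name and the schema filter are illustrative:

create or replace function all_simulations()
returns table (schema_name text, id integer, name text)
language plpgsql as $$
declare
  s text;
begin
  -- loop over every user schema; the exclusion list is illustrative
  for s in
    select nspname from pg_namespace
    where nspname not like 'pg\_%'
      and nspname not in ('information_schema', 'public')
  loop
    -- run the same query once per schema, tagging rows with the schema name
    return query execute format(
      'select %L::text, id, name from %I.simulation', s, s);
  end loop;
end;
$$;

Then select * from all_simulations(); returns the combined rows.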
You could also create a VIEW that does UNION ALL between all the schemas, but that's going to be a lot of work to maintain if your schemas are dynamically added and removed.
Yes you can: use SET search_path TO ... so that an unqualified table name resolves against the schemas in the path (the first schema containing the table wins). If you don't know all the names of the schemas, wrap it in a function that first selects all schemas and then sets the search_path.
http://www.postgresql.org/docs/current/interactive/sql-set.html
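For illustration, a hypothetical session (the schema name is made up):

-- Make unqualified names resolve against one user's schema first.
SET search_path TO alice, public;
SELECT id, name FROM simulation;  -- reads alice.simulation if it exists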