Postgres inherit from schema - postgresql

I am starting a new project which will be offered as SaaS to multiple customers. So I am thinking of creating one database and then creating an individual schema for every customer.
I have defined some rules, and the first rule is that all customers must always have the same schema, no matter what. If one customer gets an update, all the other customers get the update as well.
My question is: is it possible for a schema to inherit from another schema in the same database? If not, do I have to manually create all the tables and indexes in the new schema and have them inherit from the tables in the master schema?
I am using PostgreSQL 9.6, but I can upgrade if needed.
I am open to suggestions.
Thanks in advance

There is no automated way to establish inheritance between all tables in two schemas, you'd have to do it one by one (a function can help).
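For illustration, a minimal sketch of what such a per-table helper could look like (the schema names `master` and `customer1` are placeholders, not from the question):

```sql
-- Sketch only: create one child table per master table via INHERITS.
-- Schema names are assumptions; primary keys, indexes and foreign keys
-- are NOT inherited and must be recreated on each child table.
DO $$
DECLARE
    t record;
BEGIN
    FOR t IN
        SELECT tablename
        FROM pg_tables
        WHERE schemaname = 'master'
    LOOP
        EXECUTE format(
            'CREATE TABLE customer1.%I () INHERITS (master.%I)',
            t.tablename, t.tablename
        );
    END LOOP;
END
$$;
```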
However, I invite you to stop and think about your data model for a bit. How many users do you expect? If there could be many, plan differently, because databases with thousands of schemas become unwieldy (e.g. catalog lookups will become slow).
You might be better off with one schema for all users. If you are concerned with separation of the data and security, row level security might be the solution for you.
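If you go that route, here is a minimal sketch of row level security on a single shared table (the `orders` table and the `app.tenant_id` session setting are invented for the example):

```sql
-- Sketch only: one shared table, isolated per tenant by a policy.
CREATE TABLE orders (
    id        bigserial PRIMARY KEY,
    tenant_id integer NOT NULL,
    total     numeric
);

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- The application sets the tenant once per session or transaction,
-- e.g. SET app.tenant_id = '42';
CREATE POLICY tenant_isolation ON orders
    USING (tenant_id = current_setting('app.tenant_id')::integer);
```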

Related

Asp Net Boilerplate - Setup Schema-Per-Tenant Multitenancy (EntityFrameworkCore & PostgreSQL)

We are looking into using Asp Net Boilerplate. Looks very promising. We love the framework, but we would like to be able to use a per-schema Multitenancy configuration. Instead of sharing the data in the same db & tables, each tenant would "have" a schema, in which the whole database structure would be replicated.
One of our data tables will be quite big (sometimes 1 million+ entries per tenant), and we were advised that for performance reasons it's better to keep the number of entries as low as possible. Also, this particular table will be queried and inserted into a lot. It would be unrealistic for this table to hold data for 40+ tenants. For that reason, and others, we would prefer to have a distinct schema per tenant.
Our DB is a single PostgreSQL server (might scale up to more in the future). We use EntityFramework & Npgsql. We already noticed that it is possible to set up a different ConnectionString for specific tenants that would have bigger data requirements.
http://www.summa.com/blog/2013/09/17/approaches-to-multi-tenancy (see "separate schema per tenant")
Any idea on how to achieve schema-per-tenant multitenancy? There are a lot of moving parts in this, and I'm not sure where to start.

Implementing multi tenant data structure using multiple schemas or by customerId table column

I am developing a multi-tenant store web application (software as a service) which will be used by many customers. I would like to use just one database. I would appreciate suggestions/feedback on how to go about this in the database:
Separate schemas for each customer. Whenever a new customer signs up, I create a separate schema.
A single schema with all the customers, and a CUSTOMER table with a customerId that is referenced in all other tables (e.g. orders, payments, etc). Whenever a new customer signs up, I create an entry in the CUSTOMER table.
In case you want to know what technologies are being used:
Postgres, Spring Boot MVC, REST, Maven, JPA.
Thanks.
There are major tradeoffs here. With customer ids, your foreign keys become more complex (the customer id should probably be part of every foreign key), and that means additional indexes. It also means you have to have some means of enforcing this restriction. The big issue is that bugs in your application can quite easily disclose data from other customers.
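As a made-up illustration of how the keys widen when the customer id is carried through every foreign key:

```sql
-- Sketch only: invented tables showing customer_id as part of each key.
CREATE TABLE customer (
    customer_id integer PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE orders (
    order_id    bigserial,
    customer_id integer NOT NULL REFERENCES customer,
    PRIMARY KEY (customer_id, order_id)
);

CREATE TABLE payment (
    payment_id  bigserial,
    customer_id integer NOT NULL,
    order_id    bigint  NOT NULL,
    PRIMARY KEY (customer_id, payment_id),
    -- the composite key stops a payment from pointing at another
    -- customer's order
    FOREIGN KEY (customer_id, order_id)
        REFERENCES orders (customer_id, order_id)
);
```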
With multiple schemas the issue is that you have many more tables, and this can cause performance problems for pg_dump in particular. However, with appropriate search paths it is a bit harder to compromise other clients' data, although this is harder to use with a connection pool.
In general I think the schema approach is better because you can always scale out by partitioning by customer set, and the better security is important. However it means you must have a good understanding of search_path and set it to a sensible value on every database transaction.
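A minimal sketch of what setting search_path on every transaction can look like (the schema and table names are invented):

```sql
-- Sketch only: one schema per tenant, selected per transaction.
CREATE SCHEMA tenant_acme;
CREATE TABLE tenant_acme.orders (id bigserial PRIMARY KEY, total numeric);

BEGIN;
-- SET LOCAL reverts at COMMIT/ROLLBACK, so nothing leaks into the next
-- transaction handed out by a connection pool.
SET LOCAL search_path = tenant_acme;
SELECT * FROM orders;   -- resolves to tenant_acme.orders
COMMIT;
```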

Multiple database in EF6

We are involved in quite a new development in which we are remaking our current web shop platform.
In the current platform we do not use EF6 or any other ORM, only stored procedures to access the DB, but in the new build we do use EF6.
We have a question regarding the database design of the new platform. In the current platform we use several different databases, depending on their content.
For example, we have a dedicated database to store information for product catalogs and another dedicated DB for handling orders.
Currently all data access is done through stored procedures, so we have no problem with the links between different databases.
The problem appears now that we have started to use EF6. Each DB is associated with a context, and data from one context cannot be related to data from another
unless we implement those relationships directly in the source code using the various contexts. It looks like this means we will lose much of the power of EF6.
The questions we have are:
Is it bad design to maintain different databases for the same application when using EF6?
If this is a poor design and we choose a single database instead, will performance still be acceptable with hundreds of tables (almost 1,000) and several terabytes of information?
On the other hand, if we opt for the design with several databases (which would be much better in our case), what is the best way to handle them in EF6?
Thank you very much for your help!
First of all, EF is not written to be cross-database. You can't write cross-database (cross-context) queries, lazy loading does not work, and so on.
This is a big limitation in your case.
EF can work with several schemas (I don't actually use that feature and I don't like it, but that is just my opinion).
You can use your stored procedures with EF, but as I understand it you are planning to stop using them.
In my experience I have written several applications with more than one database, but the use of the different databases was very limited. In these cases I use cross-database views (i.e. one database per company and some common tables, with views in the company databases that select data from the common tables). In your case, if the tables are spread everywhere, I don't think this is an approach you can choose.
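A rough sketch of such a cross-database view (SQL Server syntax; the database, schema and table names are invented):

```sql
-- Sketch only: CommonDb holds the shared table, CompanyADb is one
-- company's database; the view makes the shared data visible locally.
USE CompanyADb;
GO

CREATE VIEW dbo.Countries
AS
SELECT CountryId, Name
FROM CommonDb.dbo.Countries;
GO
```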
So, in my opinion you could change the approach.
If you have backup problems, you could shard the huge tables (I am thinking of fact tables and tables with pictures) and create cross-database views. By the way, cross-database referential integrity is not supported in SQL Server, so you need to write triggers to check it.
If you need to split different application functions (e.g. WMS, CRM and so on), you can use namespaces (schemas) without worrying about how the tables are stored in the DB.

Postgres Multi-tenant administration/maintenance

We have a SaaS application where each tenant has its own database in Postgres. How would I apply a patch to all the databases? For example, if I want to add a table or add a column to a table, I have to either write a program that loops through all the databases and executes the SQL against each one, or go through them one by one using pgAdmin.
Is there smarter and/or faster way?
Any help is greatly appreciated.
Yes, there's a smarter way.
Don't create a new database for each tenant. If everything is in one database then you only need to alter one database.
Pick one database, alter each table to have the column TENANT, and add it to the primary key. Then insert every record for all tenants into this database and drop the other databases (obviously this is considerably more work than it sounds, as your application will need to be changed).
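An illustrative sketch of that alteration for one invented table (every table would need the same treatment, and the other tenants' rows would then be imported with their TENANT value):

```sql
-- Sketch only: add the tenant column and widen the primary key.
ALTER TABLE orders ADD COLUMN tenant integer NOT NULL DEFAULT 1;

-- "orders_pkey" is the default constraint name; the real name may differ.
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
ALTER TABLE orders ADD PRIMARY KEY (tenant, order_id);
```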
The differences with your approach are extensively discussed elsewhere:
What problems will I get creating a database per customer?
What are the advantages of using a single database for EACH client?
Multiple schemas versus enormous tables
Practicality of multiple databases per client vs one database
Multi-tenancy - single database vs multiple database
If you don't put everything in one database then I'm afraid you have to alter them all individually, and doing it programmatically would be simplest.
At a higher level, all multi-tenant applications follow one of three approaches:
One tenant's data lives in one database,
One tenant's data lives in one schema, or
Add a tenant_id / account_id column to your tables (shared schema).
I usually find that developers use the following criteria when they evaluate these different approaches.
Isolation: Since you can put each tenant into its own database on the one hand, and have tenants share the same tables on the other, this is the most apparent dimension. If you provide your users raw SQL access or you're in a regulated industry such as healthcare, you may need strict guarantees from your database. That said, PostgreSQL 9.5 comes with row level security policies that make this less of a concern for most applications.
Extensibility: If your tenants are sharing the same schema (approach #3), and your tenants have fields that vary between them, then you need to think about how to merge these fields.
This article on multi-tenant databases has a great summary of different approaches. For example, you can add a dozen columns, call them C1, C2, and so forth, and have your application infer the actual data in these columns based on the tenant_id. PostgreSQL 9.4 comes with JSONB support and natively allows you to use semi-structured fields to express variations between different tenants' data.
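A small sketch of the JSONB variant, with invented table and field names:

```sql
-- Sketch only: tenant-specific attributes kept in a JSONB column.
CREATE TABLE products (
    id         bigserial PRIMARY KEY,
    tenant_id  integer NOT NULL,
    name       text NOT NULL,
    attributes jsonb NOT NULL DEFAULT '{}'
);

-- One tenant stores a colour, another a warranty period:
INSERT INTO products (tenant_id, name, attributes)
VALUES (1, 'Chair', '{"color": "red"}'),
       (2, 'Drill', '{"warranty_months": 24}');

-- Read a tenant-specific field only for the tenant that uses it:
SELECT name, attributes->>'color' AS color
FROM products
WHERE tenant_id = 1;
```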
Scaling: Another criterion is how easily your database will scale out. If you create a tenant per database or schema (#1 or #2 above), your application can make use of existing Ruby gems or Django packages to simplify app integration. That said, you'll need to manually manage your tenants' data and the machines they live on. Similarly, you'll need to build your own sharding logic to propagate foreign key constraints and ALTER TABLE commands.
With approach #3, you can use existing open source scaling solutions, such as Citus. For example, this blog post describes how to easily shard a multi-tenant app with Postgres.
It's time for me to give back to the community :) After 4 years our multi-tenant platform is in production, and I would like to share the following observations/experiences with all of you.
We used a database per tenant. This has given us extreme flexibility, as the database backups are not huge and hence we can easily import them into our staging environment for customer issues.
We use Liquibase for database development and upgrades. This has been a tremendous help to us, allowing us to package the entire build into a simple war file. All changes are easily versioned and managed very efficiently. There is a bit of a learning curve here and there, but nothing substantial; spending 2-5 days on it can save you significant time.
Given that we use Spring/JPA/Hibernate, we use a technique called Dynamic Data Source Routing. So when a user logs in, we look up the related data source and connect their session to the right database. That's also when the Liquibase scripts get applied for updates.
That's it for now; I will come back with more later on.
Well, in our case there would definitely be problems with one database for all tenants:
The backup file gets huge and becomes almost impractical to manage.
For troubleshooting, when we need to restore a customer's data in our dev environment, we just use that customer's backup file, and the file is nowhere near as big as it would be if we used one database for all customers.
Again, Liquibase has been key in allowing us to manage updates across all the tenants seamlessly and without any issues. Without Liquibase, I can see lots of complications with this approach. So Liquibase, Liquibase and more Liquibase.
I also suspect that we would need more powerful hardware to manage one huge database with large joins across millions of records, versus much lighter databases with much smaller queries.
In case of problems, the service doesn't go down for everyone; the impact is limited to one or a few tenants.
In general, for our purposes, this has been a great architectural decision and we are benefiting from it every day. One time we had one customer that didn't have their archiving active and their database size grew to over 3 GB. With offshore teams and slower internet as well as storage/bandwidth prices, one can see how things may become complicated very quickly.
Hope this helps someone.
--Rex

Decentralizing Database Structure

Although this question fancies PostgreSQL, it is still a general DB question.
I have always been curious about the term schema as it relates to databases. Recently, we switched over to using PostgreSQL, where that term has actual significance to the underlying database structure.
In PostgreSQL-land, the decentralized structure is as follows:
DB Server (`some-server.com:5432`)
>> Database (`fizz`)
>> Schema (`buzz`)
>> Table (`foo`)
Thus, the fully qualified name for table `foo` is `fizz.buzz.foo`.
I understand that a "database" is a logical grouping of tables. For instance, an organization might have a "domain" database where all POJOs/VOs are persisted, an "orders" database where all sales-related info is stored, and a "logging" database where all log messages get sent for future analysis, etc.
The introduction of this "schema" construct in between the database and its tables has me very confused, and the PostgreSQL documentation is a little too heavy-handed (and lacking good examples) for a newbie such as myself to understand.
I'm wondering if anyone can give me a layman's description of not only what this "schema" construct is within the realm of PostgreSQL (and how it relates databases to tables), but also what it means for database structures in general.
Thanks in advance!
Think of schemas as namespaces. We can use them to logically group tables (such as a People schema). Additionally, we can assign security at the schema level, so we can allow certain folks to look at a Customer schema but not an Employee schema. This gives us a granularity of security control just above the object level but below the database level.
Security is probably the most important reason to use schemas, but I've seen them used for logical groupings as well. It just depends on what you need them for.
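A small sketch of that schema-level security, with hypothetical schema and role names:

```sql
-- Sketch only: grant access to one schema and simply withhold the other.
CREATE SCHEMA customer;
CREATE SCHEMA employee;

CREATE ROLE sales_app LOGIN;

GRANT USAGE ON SCHEMA customer TO sales_app;
GRANT SELECT ON ALL TABLES IN SCHEMA customer TO sales_app;
-- No grants on the employee schema, so sales_app cannot see it.
```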
Late to the party, but ..
I use schemas to split tables into groups that are used by different applications that share a few tables. For example:
users
application1
application2
Here, if we log in with app1, we see users + application1; if we log in to app2, we see users and application2. So our user data can be shared between both, without exposing app1 users to app2 data. It also means that a superuser can do queries across both sets of data.
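A hedged sketch of how this layout could be wired up (the role names and grants are just examples):

```sql
-- Sketch only: shared users schema plus one schema per application,
-- selected through each role's default search_path.
CREATE SCHEMA users;
CREATE SCHEMA application1;
CREATE SCHEMA application2;

CREATE ROLE app1_user LOGIN;
CREATE ROLE app2_user LOGIN;

ALTER ROLE app1_user SET search_path = users, application1;
ALTER ROLE app2_user SET search_path = users, application2;

GRANT USAGE ON SCHEMA users, application1 TO app1_user;
GRANT USAGE ON SCHEMA users, application2 TO app2_user;
```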