Say I have two databases, db1 and db2, with the schema:
- db1.public.orders
- db1.public.accounts
- db2.public.orders
- db2.public.accounts
If I need access to both databases, would it be recommended to import the whole remote schema into db1 using postgres_fdw, so that a single client is used:
- db1.public.orders
- db1.public.accounts
- db1.shard02.orders (fdw linked to db2.public.orders)
- db1.shard02.accounts (fdw linked to db2.public.accounts)
Or should I simply maintain two clients dbClient1 and dbClient2?
I would like to know which method is better in terms of connection robustness and performance when the two databases reside in different data centres, taking into account that the clients use connection pools.
With postgres_fdw, each connection in the pool will also establish a TCP connection to the remote database, and all the queries will have to be pushed down to it, so a performance cost is incurred. Why bother at all when two separate clients can be used?
If you need to access two databases independently from each other, use two database connections. If you need to coordinate transactions across two databases, use two database connections with the two-phase commit protocol. If you need to join tables from different databases, use a solution with postgres_fdw for performance reasons.
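For illustration, the FDW route from the question would look roughly like this on db1; the host, credentials and the shard02 schema name are placeholders, not a recommendation:

```sql
-- On db1: install the wrapper and describe the remote server.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER db2_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db2.example.com', port '5432', dbname 'db2');

-- Map the local role onto a role that exists on db2.
CREATE USER MAPPING FOR CURRENT_USER
    SERVER db2_srv
    OPTIONS (user 'app_user', password 'secret');

-- Pull db2.public.orders and db2.public.accounts in as db1.shard02.*.
CREATE SCHEMA shard02;
IMPORT FOREIGN SCHEMA public
    LIMIT TO (orders, accounts)
    FROM SERVER db2_srv
    INTO shard02;

-- A single client connected to db1 can now join across both databases:
-- SELECT ... FROM public.orders o JOIN shard02.orders r ON r.account_id = o.account_id;
```

Keep in mind the cost mentioned above: every query touching shard02.* still travels over a connection that db1 opens to db2.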
Related
I am architecting a database where I expect to have thousands of tenants, with some data shared between tenants. I am currently planning on using Postgres with row-level security for tenant isolation. I am also using knex and Objection.js to model the database in Node.js.
Most of the tutorials I have seen look like this, where you create a separate knex connection per tenant. However, I've run into a problem on my development machine: after I create ~100 connections, I receive this error: "remaining connection slots are reserved for non-replication superuser connections".
I'm investigating a few possible solutions/work-arounds, but I was wondering if anyone has been able to make this setup work the way I'm intending. Thanks!
Perhaps one solution might be to cache a limited number of connections, and destroy the oldest cached connection when the limit is reached. See this code as an example.
That code should probably be improved, however, to use a Map as the knexCache instead of an object, since a Map remembers the insertion order.
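As an aside, if the "remaining connection slots are reserved" error comes back, a quick way to see how close you are to the server's limit (assuming a role that is allowed to read pg_stat_activity) is something like:

```sql
-- Total slots and how many of them are held back for superusers.
SHOW max_connections;
SHOW superuser_reserved_connections;

-- Who is currently holding connections, per database and role.
SELECT datname, usename, count(*) AS open_connections
FROM pg_stat_activity
GROUP BY datname, usename
ORDER BY open_connections DESC;
```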
Trying to figure out what would be better:
Multiple instances, one per DB
or
Single large instance which will hold multiple DBs inside
The scenario is similar to Jira Cloud where each customer has his own Jira Cloud server, with its own DB.
Now the question is, will it be better to manage all of the users' DBs in 1 large instance, or to have a DB instance for each customer?
What would be the cons and pros for the chosen alternative?
The first thing that came to our minds is backup management: will we be able to recover a specific customer's DB if it resides on the same large instance as all the other DBs?
Similar question, but in a different scenario and other requirements - 1-big-google-cloud-sql-instance-2-small-google-cloud-sql-instances-or-1-medium
This answer is based on personal opinion; it is up to you to decide how you want to build your database. However, it is better to go with multiple smaller Cloud SQL instances, as is also stated in the Cloud SQL > Best practices documentation.
PROS of multiple instances
- It is easier to manage smaller instances than big ones (see the documentation linked above).
- You can choose the region and zone for each instance, so if your customers are located in different geographical locations, you can place each Cloud SQL instance in the zone closest to them and thereby reduce latency.
- If you are planning to have a lot of databases, with a lot of tables in each database and a lot of records in each table, a single instance would be huge. Backups, creating read replicas or failover replicas, and maintaining them will all take longer as the databases grow.
However, if you have multiple databases per user, I would suggest keeping them inside one Cloud SQL instance, so that you manage one Cloud SQL instance per user. For example, if you have 100 users, and User1 has 4 databases, User2 has 6 databases, and so on, create 100 Cloud SQL instances rather than one instance per database; otherwise you will end up with a lot of instances and it will be hard to manage several per user.
I'd like to preface this by saying I'm not a DBA, so sorry for any gaps in technical knowledge.
I am working within a microservices architecture, where we have about a dozen or so applications, each supported by its own Postgres database instance (which is in RDS, if that helps). Each of the microservices' databases contains a few tables. It's safe to assume that there are no naming conflicts across any of the schemas/tables, and that there's no sharding of any data across the databases.
One of the issues we keep running into is wanting to analyze/join data across the databases. Right now, we're relying on a third-party tool that caches our data and makes it possible to query across multiple database sources (via the shared cache).
Is it possible to create read-replicas of the schemas/tables from all of our production databases and have them available to query in a single database?
Are there any other ways to configure Postgres or RDS to make joining across our databases possible?
Is it possible to create read-replicas of the schemas/tables from all of our production databases and have them available to query in a single database?
Yes, that's possible and it's actually quite easy.
Set up one Postgres server that acts as the master.
For each remote server, create a foreign server, which you then use to create foreign tables that make the data accessible from the master server.
If you have multiple tables on multiple servers that should be viewed as a single table on the master, you can set up inheritance to make all those tables appear as one. If you can define a "sharding" key that distinguishes those servers, you can even make Postgres request the data only from the relevant server.
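A rough sketch of that inheritance setup, with made-up table names, a region column standing in for the "sharding" key, and srv_eu/srv_us being foreign servers created as described above:

```sql
-- Parent table on the master; it holds no rows itself.
CREATE TABLE orders (
    id     bigint,
    region text,
    amount numeric
);

-- One foreign table per remote database, attached as an inheritance child.
-- The CHECK constraints are not enforced remotely; they describe the data
-- so the planner can skip servers that cannot match the WHERE clause.
CREATE FOREIGN TABLE orders_eu (CHECK (region = 'eu'))
    INHERITS (orders)
    SERVER srv_eu OPTIONS (schema_name 'public', table_name 'orders');

CREATE FOREIGN TABLE orders_us (CHECK (region = 'us'))
    INHERITS (orders)
    SERVER srv_us OPTIONS (schema_name 'public', table_name 'orders');

-- With the default constraint_exclusion = partition, this only queries srv_us:
SELECT sum(amount) FROM orders WHERE region = 'us';
```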
All foreign tables can be joined as if they were local tables. Depending on the kind of query, some (or a lot) of the filter and join criteria can even be pushed down to the remote server to distribute the work.
As the Postgres Foreign Data Wrapper is writeable, you can even update the remote tables from the master server.
If remote access and joins are too slow, you can create materialized views based on the remote tables to keep a local copy of the data. This, however, means that it's not a real-time copy and you have to manage the regular refresh of those views.
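For example, assuming a foreign table remote_orders with an id column (the names are placeholders), a local snapshot could look like this:

```sql
-- Local, indexable copy of the remote data.
CREATE MATERIALIZED VIEW orders_local AS
    SELECT * FROM remote_orders;

-- A unique index is required for REFRESH ... CONCURRENTLY.
CREATE UNIQUE INDEX ON orders_local (id);

-- Run this on a schedule (cron, pg_cron, ...); CONCURRENTLY lets readers keep
-- using the old snapshot while the new one is built.
REFRESH MATERIALIZED VIEW CONCURRENTLY orders_local;
```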
Other (more complicated) options are the BDR project or pglogical. It seems that logical replication will be built into the next Postgres version (to be released at the end of this year).
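A hedged sketch of that built-in logical replication (the PUBLICATION/SUBSCRIPTION commands in Postgres 10); names and the connection string are placeholders, and the source server must allow logical decoding (wal_level = logical; on RDS, the rds.logical_replication parameter):

```sql
-- On each source (publisher) database: publish the tables you want to copy.
CREATE PUBLICATION svc_pub FOR TABLE public.orders, public.accounts;

-- On the single reporting (subscriber) database: create matching empty tables
-- first, then subscribe. Rows are streamed over and kept up to date.
CREATE SUBSCRIPTION svc_sub
    CONNECTION 'host=svc-db.example.com dbname=svc user=replicator password=secret'
    PUBLICATION svc_pub;
```

Unlike foreign tables, the subscriber holds real local copies, so joins across services run entirely on the reporting database.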
Or you could use a distributed, shared-nothing system like Postgres-XL (which is probably the most complicated system to set up and maintain).
I am developing a messaging system (in Java) that should support around 10k users. The architecture is supposed to be as follows:
- 10k clients
- 2 or more replicas of the server (each on a different machine)
- 1 Postgres DB
The application is intended to run in a clustered environment (Amazon Web Services).
Now, I have read a couple of things about schemas in Postgres databases. I am not sure whether I should use them (and in what way) or whether a simple relational DB model will do.
Basically, the DB is supposed to be very simple (messages/metadata, queueID for messages, and users).
Thank you for your answers
Don't bother with schemas. They are useful for semantically separating information in a database with lots of tables that can be grouped into clusters relevant to separate topics. They don't help you with performance, clustering or replicating databases. Also, I agree with Frank Heikens - unless each of your users sends messages with high frequency, I wouldn't worry.
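For context, a schema is just a namespace inside a single database; the table names below are purely illustrative:

```sql
-- Two namespaces in the same database: same connection, same transactions.
CREATE SCHEMA messaging;
CREATE SCHEMA accounts;

CREATE TABLE messaging.messages (id bigserial PRIMARY KEY, queue_id int, body text);
CREATE TABLE accounts.users    (id bigserial PRIMARY KEY, name text);

-- search_path decides which schema unqualified names resolve to.
SET search_path = messaging, public;
SELECT count(*) FROM messages;   -- reads messaging.messages
```

That grouping is purely organisational; it has no effect on performance or replication.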
Currently, all my collections are maintained in a single database.
I'm a little confused on when I should separate my collections into multiple databases, as some of the collections aren't necessarily related.
Multiple databases:
- can refine security permissions
- separation of concerns
Single database:
- easy
There are a set of tables I access all the time, and a set of tables I access about once a month. It makes some sense to open a persistent connection to a database containing my always-used tables, and open a connection to a database containing the sparsely-used tables when needed.
But is there any performance difference to having all my data in the same database? Is there any general rule of thumb for when to use multiple databases (other than production, development, etc.)?
Check here for a similar question with some useful, more in-depth answers: Is it better to use multiple databases when you are managing independent sets of things in MongoDB?