Aurora Postgres 11.9
In SQL Server we strictly follow the good programming practice that "every single call that lands on the DB from the application must be a stored procedure instead of a plain query". In Oracle we haven't followed the same practice, perhaps because stored procedures for SELECTs require additional cursors, and so on.
Can any Postgres expert advise me what practice we should follow in Postgres in this regard, and what the pros and cons are in the case of Postgres?
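For what it's worth, Postgres functions can return result sets directly, so the Oracle-style extra ref cursor isn't needed; here is a minimal sketch with hypothetical table and function names (Postgres 11, which Aurora 11.9 is based on, also supports CREATE PROCEDURE / CALL for write paths that need transaction control):

    -- Hypothetical table/function names, for illustration only.
    -- A set-returning function is called like a table; no ref cursor needed.
    CREATE FUNCTION get_orders_by_customer(p_customer_id int)
    RETURNS TABLE (order_id int, order_date date, total numeric)
    LANGUAGE sql STABLE
    AS $$
        SELECT order_id, order_date, total
        FROM orders
        WHERE customer_id = p_customer_id;
    $$;

    SELECT * FROM get_orders_by_customer(42);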
In addition, in SQL Server we use "rowversion" for data sync with BI and other external modules. Is there any built-in alternative available in Postgres, or do we have to do it with manual triggers?
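Postgres has no direct built-in rowversion counterpart (the xmin system column changes on every update but wraps around, so it is not a safe sync watermark). A minimal sketch of the manual-trigger approach, assuming a hypothetical table t; note that sequence values are assigned in statement order, not commit order, so a poller can miss rows from still-uncommitted transactions:

    -- Sketch only: a sequence provides a monotonically increasing version number.
    CREATE SEQUENCE row_version_seq;

    ALTER TABLE t ADD COLUMN row_version bigint
        NOT NULL DEFAULT nextval('row_version_seq');

    CREATE FUNCTION bump_row_version() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        NEW.row_version := nextval('row_version_seq');
        RETURN NEW;
    END;
    $$;

    CREATE TRIGGER t_row_version
        BEFORE UPDATE ON t
        FOR EACH ROW EXECUTE PROCEDURE bump_row_version();

    -- Sync consumers can then poll:
    -- SELECT * FROM t WHERE row_version > :last_seen_version;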
I'm getting familiar with the Greenplum solution concepts and trying to understand whether, and if so when, the organisation I work for should use this solution. Our conceptual idea is to set up a kind of central 'datastore' suitable for both OLTP and OLAP access.
My research: this article suggests Greenplum is more suitable for OLAP and PostgreSQL for OLTP. But I have also read about Greenplum improvements for OLTP processing. And in favour of PostgreSQL, there are also articles like this suggesting that OLAP (e.g. a data warehouse implementation) can be done with PostgreSQL.
So my question is: how should we move forward, and what are the main criteria for the decision? For example, given that we currently have just a few TB (1-5), should we start with a PostgreSQL cluster (for OLTP+OLAP) and move to Greenplum when data volumes grow, or start straight away with Greenplum?
Maybe use Postgres if it can handle your use case. If you have too much data and need to finish reports and analytics faster, change to Greenplum.
For a project I need two types of tables.
a hypertable (a special type of table provided by the TimescaleDB extension for PostgreSQL) for some time-series records
my ordinary tables, which are not time series
Can I create a TimescaleDB-enabled PostgreSQL database and store my ordinary tables in it? Are all tables in a TimescaleDB database hypertables (time series)? If not, is there overhead to storing my ordinary tables in a TimescaleDB database?
If I can, is there any benefit to storing my ordinary tables in a separate, ordinary PostgreSQL database?
Can I create a TimescaleDB-enabled PostgreSQL database and store my ordinary tables in it?
Absolutely... TimescaleDB is delivered as an extension to PostgreSQL and one of the biggest benefits is that you can use regular PostgreSQL tables alongside the specialist time-series tables. That includes using regular tables in SQL queries with hypertables. Standard SQL works, plus there are some additional functions that Timescale created using PostgreSQL's extensibility features.
Are all tables in a TimescaleDB database hypertables (time series)?
No, you have to explicitly create a table as a hypertable for it to implement TimescaleDB features. It would be worth checking out the how-to guides in the Timescale docs for full (and up-to-date) details.
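A minimal sketch (table and column names are hypothetical) of an ordinary table and a hypertable living side by side in the same database:

    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- An ordinary PostgreSQL table:
    CREATE TABLE devices (
        device_id int PRIMARY KEY,
        location  text
    );

    -- A time-series table, explicitly promoted to a hypertable:
    CREATE TABLE readings (
        time      timestamptz NOT NULL,
        device_id int REFERENCES devices (device_id),
        value     double precision
    );
    SELECT create_hypertable('readings', 'time');

    -- Ordinary tables and hypertables mix freely in queries:
    SELECT time_bucket('1 hour', r.time) AS bucket,
           d.location,
           avg(r.value)
    FROM readings r
    JOIN devices d USING (device_id)
    GROUP BY bucket, d.location;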
If not, is there overhead to storing my ordinary tables in a TimescaleDB database?
I don't think there's a storage overhead. You might even see some performance gains, e.g. for data ingest and query performance. This article may help clarify that: https://docs.timescale.com/timescaledb/latest/overview/how-does-it-compare/timescaledb-vs-postgres/
Overall, you can think of TimescaleDB as providing additional functionality on top of 'vanilla' PostgreSQL, so unless there's an application-design reason to keep non-time-series data in a separate database, you aren't obliged to do that.
One other point, shared by a very experienced member of our Slack community [thank you Chris]:
For us, the decision to keep time-series data and “normal” (normalized) data in one database or in separate ones came down to something like: “can we asynchronously replicate the time-series information?”
In our case we use two different pg systems, one replicating asynchronously (for TimescaleDB) and one with synchronous replication (for all other data).
Transparency: I work for Timescale
pgmodeler is said to be a "PostgreSQL Database Modeler".
As far as I know it is for relational database design, and relational database design isn't RDBMS-specific.
So is pgmodeler only usable with PostgreSQL? Can it be used with other RDBMSs, such as MySQL, SQL Server, or Oracle Database?
What part of pgmodeler is PostgreSQL-specific, and what part of it is not?
Thanks.
It is specific to PostgreSQL in the sense that it supports everything that PostgreSQL does: SQL extensions ("CREATE TABLE ... LIKE ..."), its procedural language PL/pgSQL, foreign data wrappers, and table partitioning, among many others; and these are usually totally incompatible with other RDBMSs.
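For example, DDL such as the following, which pgmodeler can generate, is PostgreSQL-specific and will not run as-is on MySQL, SQL Server, or Oracle (names are illustrative):

    -- Column cloning with Postgres-specific INCLUDING options:
    CREATE TABLE archived_orders (LIKE orders INCLUDING ALL);

    -- Declarative partitioning and the jsonb type, both Postgres-specific syntax:
    CREATE TABLE events (
        event_time timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (event_time);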
But the main developer is closely studying an integration with pgloader, which would make such compatibility a reality in the near future.
If you stick to the pure design features of pgmodeler, "keep it classic" and never go for an implementation, then pgmodeler is somewhat universal.
Edit: to answer more precisely, the model is "universal"; the code produced when you export the model to a database is specific to PostgreSQL (SQL data types, extensions, ...).
I have a use case to distribute data across many databases on many servers, all in postgres tables.
From any given server/db, I may need to query another server/db.
The queries are quite basic, standard selects with where clauses on standard fields.
I have currently implemented postgres_fdw (I'm using Postgres 9.5), but I think the queries are not using indexes on the remote DB.
For this use case (a random node may query N other nodes), which option will likely give me the best performance, based on how each underlying engine actually executes the query?
The Postgres foreign data wrapper (postgres_fdw) is newer to PostgreSQL, so it tends to be the recommended method. While the functionality in the dblink extension is similar to that in the foreign data wrapper, the Postgres foreign data wrapper is more SQL-standard compliant and can provide improved performance over dblink connections.
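A rough sketch of the difference between the two, with hypothetical server, user, and table names:

    -- dblink: the remote query is an opaque string, and the result's
    -- column list must be respelled on every call:
    CREATE EXTENSION dblink;
    SELECT *
    FROM dblink('host=node2 dbname=app user=fdw_user password=secret',
                'SELECT id, status FROM orders WHERE status = ''open''')
         AS t(id int, status text);

    -- postgres_fdw: the remote table looks like a local table, and
    -- plain SQL against it can be pushed down to the remote side:
    CREATE EXTENSION postgres_fdw;
    CREATE SERVER node2 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'node2', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER node2
        OPTIONS (user 'fdw_user', password 'secret');
    CREATE FOREIGN TABLE remote_orders (id int, status text)
        SERVER node2 OPTIONS (table_name 'orders');
    SELECT id, status FROM remote_orders WHERE status = 'open';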
Read this article for more detailed info: Cross Database querying
My solution was simple: I upgraded to Postgres 10, and it appears to push WHERE clauses down to the remote server.
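If you want to check the pushdown yourself, EXPLAIN VERBOSE shows the statement actually sent to the remote side; the plan below is abbreviated and reuses the hypothetical remote_orders foreign table from the sketch above:

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT id, status FROM remote_orders WHERE status = 'open';

    -- Foreign Scan on public.remote_orders
    --   Output: id, status
    --   Remote SQL: SELECT id, status FROM public.orders
    --               WHERE ((status = 'open'))

If the WHERE clause shows up in the Remote SQL line, the filter runs on the remote server and its indexes can be used; if it doesn't, rows are being fetched and filtered locally.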
I have a large code base for an online charging application that is tightly coupled to Oracle and relies extensively on SQL queries, PL/SQL procedures, etc.
If we were to migrate to a NoSQL-based DB, would all the code need to be rewritten, or are there existing libraries/drivers that translate SQL queries to NoSQL queries automatically, by simply having us define a mapping between the current Oracle schema and the new underlying NoSQL DB schema (designed afresh)?
Thanks
You are going to rewrite a lot of things.
Relational databases and NoSQL "things" are very different. And most NoSQL stores are not transactional, except at the level of a single document.
You can save money by going to MySQL or PostgreSQL (suggested), but you will still have to reimplement a lot of things, and you will need to look into proxies and connection pooling when you need to scale.
But you can save a lot of work with Postgres Plus Advanced Server from EnterpriseDB: http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server
They say you can switch databases without changing a single line, and save money.
You also get access to things like partitioning, which costs a lot in the Enterprise Edition of Oracle.
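For reference, declarative partitioning has been in core Postgres since version 10 at no extra license cost; a minimal sketch with illustrative names:

    -- Core Postgres declarative range partitioning (version 10+):
    CREATE TABLE measurements (
        logdate date NOT NULL,
        value   numeric
    ) PARTITION BY RANGE (logdate);

    CREATE TABLE measurements_2024 PARTITION OF measurements
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

    CREATE TABLE measurements_2025 PARTITION OF measurements
        FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');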