pgModeler is described as a "PostgreSQL Database Modeler".
As far as I know it is a tool for relational database design, and relational database design isn't RDBMS specific.
So is pgModeler only usable with PostgreSQL? Can it be used with other RDBMS, such as MySQL, SQL Server, or Oracle Database?
What part of pgModeler is PostgreSQL specific, and what part of it is not?
Thanks.
It is specific to PostgreSQL in the sense that it supports everything that PostgreSQL does: SQL extensions ("create table ... like"), its procedural language PL/pgSQL, foreign data wrappers, table partitioning, among many, many others; and these are usually totally incompatible with other RDBMS.
But the main developer is closely studying an integration with pgloader, which could make such compatibility possible in the near future.
If you stick to the pure design features of pgModeler, "keep it classic" and never go for an implementation, then pgModeler is more or less universal.
Edit: To answer more precisely, the model is "universal"; the code produced when you export the model to a database is specific to PostgreSQL (SQL data types, extensions, ...).
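For illustration, here is a rough sketch (table and function names are made up) of the kind of PostgreSQL-flavoured SQL the export step can produce, which generally will not run unchanged on MySQL, SQL Server, or Oracle:

    -- Hypothetical export output: PostgreSQL-specific types and PL/pgSQL.
    CREATE TABLE customer (
        id      bigserial PRIMARY KEY,  -- bigserial is PostgreSQL shorthand for bigint + sequence
        profile jsonb,                  -- jsonb is a PostgreSQL-specific type
        tags    text[]                  -- array types are a PostgreSQL extension
    );

    CREATE OR REPLACE FUNCTION customer_count() RETURNS bigint AS $$
    BEGIN
        RETURN (SELECT count(*) FROM customer);
    END;
    $$ LANGUAGE plpgsql;                -- PL/pgSQL, not T-SQL or PL/SQL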
Our environment is Aurora Postgres 11.9.
In SQL Server we strictly follow the good programming practice that every single call landing on the DB from the application is a stored procedure instead of a plain query. In Oracle we haven't followed the same practice, maybe because SELECT stored procedures require additional cursors, and so on.
Can any expert Postgres person advise what practice we should follow in Postgres in this regard, and what the pros and cons are in the case of Postgres?
In addition, in SQL Server we use "rowversion" for data sync with BI and other external modules. Is there any built-in alternative available in Postgres, or do we have to do it with manual triggers?
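To be concrete, the kind of manual trigger I have in mind is a sketch like the following (assuming a hypothetical last_modified column on each synced table):

    -- Sketch of the "manual trigger" approach: stamp a last_modified column
    -- on every insert/update so external modules can pull changed rows.
    CREATE OR REPLACE FUNCTION stamp_last_modified() RETURNS trigger AS $$
    BEGIN
        NEW.last_modified := clock_timestamp();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_stamp_last_modified
        BEFORE INSERT OR UPDATE ON some_synced_table   -- hypothetical table name
        FOR EACH ROW EXECUTE FUNCTION stamp_last_modified();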
I have a use case to distribute data across many databases on many servers, all in postgres tables.
From any given server/db, I may need to query another server/db.
The queries are quite basic, standard selects with where clauses on standard fields.
I have currently implemented postgres_fdw (I'm using Postgres 9.5), but I think the queries are not using indexes on the remote DB.
For this use case (a random node may query N other nodes), what is likely my best choice for performance, based on how each underlying engine actually executes the query?
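For reference, my current setup roughly follows the standard postgres_fdw pattern; server, credential, and table names below are placeholders:

    -- Roughly how the postgres_fdw setup looks on each node (names are placeholders).
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER node2 FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'node2.internal', port '5432', dbname 'appdb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER node2
        OPTIONS (user 'app', password 'secret');

    CREATE FOREIGN TABLE remote_events (
        id         bigint,
        account_id bigint,
        created_at timestamptz
    ) SERVER node2 OPTIONS (schema_name 'public', table_name 'events');

    -- The queries are simple selects with WHERE clauses on standard fields:
    SELECT * FROM remote_events WHERE account_id = 42;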
The Postgres foreign data wrapper (postgres_FDW) is newer to PostgreSQL, so it tends to be the recommended method. While the functionality in the dblink extension is similar to that in the foreign data wrapper, the Postgres foreign data wrapper is more SQL standard compliant and can provide improved performance over dblink connections.
Read this article for more detailed info: Cross Database querying
My solution was simple: I upgraded to Postgres 10, and it appears to push where clauses down to the remote server.
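One way to verify the pushdown (a sketch, reusing the hypothetical foreign table names from the setup above) is to check the Remote SQL line that postgres_fdw adds to EXPLAIN VERBOSE output:

    -- If the WHERE clause is pushed down, it shows up in the "Remote SQL"
    -- line of the plan instead of being applied locally after fetching all rows.
    EXPLAIN (VERBOSE, ANALYZE)
    SELECT * FROM remote_events WHERE account_id = 42;
    -- Expect a Foreign Scan node whose Remote SQL includes the condition,
    -- e.g. something like: Remote SQL: SELECT ... FROM public.events WHERE ((account_id = 42))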
I have two databases, cvtl and cvtl_db, and I need to write a single query to retrieve data from table A in cvtl and table B in cvtl_db.
Postgres is throwing the error: cross-database references are not implemented
Basically you have two ways:
Older tools.
If you need to support older versions of PostgreSQL, use dblink or DBI-link. These two provide robust support for cross-db queries across a number of PostgreSQL versions. pl/proxy is another possibility.
Newer tools.
The newer approach is to use foreign data wrappers. This has more functionality (such as better transaction handling) and probably gets more attention in terms of support today than dblink and the others do.
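A minimal sketch of both approaches, assuming table a lives in cvtl (the database you are connected to), table b lives in cvtl_db on the same host, and made-up column names:

    -- Option 1: dblink (older) -- the remote query is passed as a string
    -- and you must spell out the result's column types yourself.
    CREATE EXTENSION IF NOT EXISTS dblink;

    SELECT *
    FROM dblink('dbname=cvtl_db host=localhost',
                'SELECT id, name FROM b')
         AS b(id integer, name text);

    -- Option 2: postgres_fdw (newer) -- the remote table is mapped locally once,
    -- then used like any other table.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER cvtl_db_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', dbname 'cvtl_db');

    CREATE USER MAPPING FOR CURRENT_USER SERVER cvtl_db_srv
        OPTIONS (user 'app_user', password 'secret');

    CREATE FOREIGN TABLE b_remote (
        id   integer,
        name text
    ) SERVER cvtl_db_srv OPTIONS (schema_name 'public', table_name 'b');

    -- A single query can now join local table a with remote table b:
    SELECT a.*, b_remote.name
    FROM a JOIN b_remote ON b_remote.id = a.b_id;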
I have a large code base for an online charging application that is tightly coupled to Oracle and relies extensively on SQL queries, PL/SQL procedures, etc.
If we were to migrate to a NoSQL-based DB, would all the code need to be rewritten, or are there already available libraries/drivers that translate SQL queries to NoSQL queries automatically, by simply having us define a mapping between the current Oracle schema and the new underlying NoSQL DB schema (designed afresh)?
Thanks
You are going to rewrite a lot of things.
Relational databases and NoSQL stores are very different things, and NoSQL stores are generally not transactional, except for document stores.
You can save money by going to MySQL or PostgreSQL (suggested), but you will still have to reimplement a lot of things, and you will need to look into proxies and connection pooling when you need to scale.
But you can save a lot of work with Postgres Plus Advanced Server from EnterpriseDB: http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server
They say you can switch databases without a single line having to be changed, and save money.
You also get access to things like partitioning, which costs a lot in the Enterprise edition of Oracle.
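As an aside, declarative partitioning in recent PostgreSQL versions (10 and later) is plain DDL; a sketch with made-up table and column names:

    -- Declarative range partitioning (PostgreSQL 10+); names are hypothetical.
    CREATE TABLE charge_record (
        id         bigserial,
        charged_at timestamptz NOT NULL,
        amount     numeric(12,2)
    ) PARTITION BY RANGE (charged_at);

    CREATE TABLE charge_record_2024 PARTITION OF charge_record
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');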
How is database normalization related to ER modelling?
Which comes first?
Or should both be done at the same time?
I feel modeling should come first in a highly normalized database design.
Creating the model allows you to think through how the tables will relate to one another and also allows you to envision what tables you'll need to use when writing your join queries.
A tool such as MySQL Workbench or Toad Data Modeler, depending on your target database vendor, can even generate SQL commands to build the tables, constraints, and indexes directly from the model. This is useful because it ensures the tables are created exactly as you designed them.
Also, when making changes to the model, some tools like those mentioned above will even allow you to "update" your schema by issuing the statements required to do so.
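To give an idea, the DDL such a tool generates from a simple two-table model looks roughly like this (a hypothetical customer/purchase example, kept to portable SQL):

    -- Hypothetical DDL of the kind a modeling tool generates from a two-table model.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        VARCHAR(100) NOT NULL
    );

    CREATE TABLE purchase (
        purchase_id  INTEGER PRIMARY KEY,
        customer_id  INTEGER NOT NULL,
        purchased_at TIMESTAMP NOT NULL,
        CONSTRAINT fk_purchase_customer
            FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
    );

    CREATE INDEX idx_purchase_customer ON purchase (customer_id);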
So in short, for a project with more than one table, I'd always model it first. It also makes it easier for developers to understand how the tables function and relate at a glance rather than having to read through DDL to understand it.
Modeling can even be fun!
A model created with MySQL Workbench:
Hope this helps!