Is PostgreSQL a NoSQL database? [closed] - postgresql

I am getting confused about PostgreSQL. In some places people say it's NoSQL, and in other places people say it's not. I have also seen something called Postgres Plus, which is supposedly an actual NoSQL database.
Can you tell me what is true?

You have been confused by marketing and buzzwords.
“NoSQL” is a buzzword describing a diverse collection of database systems that focus on “semi-structured” data (that do not fit well into a tabular representation), sharding and high concurrency at the expense of transactional integrity and consistency, the latter being among the basic tenets of relational database management systems (RDBMS).
Since SQL is the language normally used to interact with an RDBMS, the term “NoSQL” is used as a name for all these systems. Perhaps the name was also chosen because SQL, being verbose and often hard to understand, evokes negative reactions in many programmers.
Now PostgreSQL, like many other RDBMS, has added support for JSON data, which is the most popular format for the semi-structured data commonly stored in NoSQL systems. So you can say that PostgreSQL supports a certain feature commonly found in NoSQL databases.
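For illustration, a minimal sketch of that JSON support (the table and column names are made up; the jsonb type requires PostgreSQL 9.4 or later, plain json is available from 9.2):

    CREATE TABLE events (
        id      serial PRIMARY KEY,
        payload jsonb
    );

    INSERT INTO events (payload)
    VALUES ('{"type": "login", "user": "alice", "meta": {"ip": "10.0.0.1"}}');

    -- The ->> operator extracts a field from the stored document as text.
    SELECT payload ->> 'user' AS username
    FROM   events
    WHERE  payload ->> 'type' = 'login';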
Still, SQL is the only way to interact with a PostgreSQL database, so you couldn't call it a NoSQL database and keep a straight face unless you were in marketing.
Postgres Plus is a closed source fork of PostgreSQL, so the same applies to it.

PostgreSQL is not NoSQL.
PostgreSQL is a classical, relational database server (and syntax) supporting most of the SQL standards.
On a sidenote, I suggest doing some research into the differences and advantages. They both have a solid place and time.

PostgreSQL prides itself in standards compliance. Its SQL implementation strongly conforms to the ANSI-SQL:2008 standard. It has full support for subqueries (including subselects in the FROM clause), read-committed and serializable transaction isolation levels. And while PostgreSQL has a fully relational system catalog which itself supports multiple schemas per database, its catalog is also accessible through the Information Schema as defined in the SQL standard.
Source
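As a small illustration of the features named in that quote, the sketch below uses only the system catalogs, so it should run on any PostgreSQL database:

    -- A subselect in the FROM clause:
    SELECT t.relname
    FROM  (SELECT relname FROM pg_catalog.pg_class WHERE relkind = 'r') AS t;

    -- The Information Schema defined by the SQL standard:
    SELECT table_schema, table_name
    FROM   information_schema.tables
    WHERE  table_schema NOT IN ('pg_catalog', 'information_schema');

    -- An explicitly serializable transaction:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    -- ... work ...
    COMMIT;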

Related

Is pgmodeler only used for PostgreSQL?

pgmodeler is said to be a PostgreSQL Database Modeler.
As far as I know it is for relational database design, and relational database design isn't RDBMS-specific.
So is pgmodeler only used for PostgreSQL? Can it be used with other RDBMSs, such as MySQL, SQL Server, or Oracle Database?
What part of pgmodeler is PostgreSQL-specific, and what part of it is not?
Thanks.
It is specific to PostgreSQL in the sense that it supports everything that PostgreSQL does: SQL extensions ("CREATE TABLE ... LIKE ..."), its procedural language PL/pgSQL, foreign data wrappers, and table partitioning, among many others; and these are usually totally incompatible with other RDBMS.
But the main developer is closely studying an integration with pgloader, which could make such compatibility possible in the near future.
If you stick to the pure design features of pgmodeler, "keep it classic", and never go for an implementation, then pgmodeler is somewhat universal.
Edit: To answer more precisely, the model is "universal"; the code produced when you export the model to a database is specific to PostgreSQL (SQL data types, extensions, ...).
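As a hedged illustration (table and function names invented), this is the kind of PostgreSQL-specific SQL such an export can contain, which other RDBMS will generally reject:

    -- PostgreSQL-specific data types and defaults.
    CREATE TABLE customer (
        id         serial PRIMARY KEY,
        name       text NOT NULL,
        extra      jsonb,
        updated_at timestamptz NOT NULL DEFAULT now()
    );

    -- The "CREATE TABLE ... LIKE" extension mentioned above.
    CREATE TABLE customer_archive (LIKE customer INCLUDING ALL);

    -- A trigger function written in PL/pgSQL, PostgreSQL's procedural language.
    CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customer_touch
        BEFORE UPDATE ON customer
        FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();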

Migrating to a Nosql DB from Oracle

I have a large code base for an online charging application that is tightly coupled to Oracle and relies extensively on SQL queries, PL/SQL procedures, etc.
If we were to migrate to a NoSQL-based DB, would all the code need to be rewritten? Or are there already available libraries/drivers that translate SQL queries to NoSQL queries automatically, by simply having us define a mapping between the current Oracle schema and the new underlying NoSQL DB schema (designed afresh)?
Thanks
You are going to rewrite a lot of things.
Relational databases and NoSQL stores are very different, and most NoSQL systems are not transactional, except at the level of a single document.
You can save money by going to MySQL or PostgreSQL (suggested), but you will still have to implement a lot of things, and you will need to look into proxies and connection pooling when you need to scale.
However, you can save a lot of work with Postgres Plus Advanced Server from EnterpriseDB: http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server
They say you can switch databases without changing a single line of code, and save money.
You also get access to things like partitioning, which cost a lot in the Enterprise Edition of Oracle.

We use nosql for mongodb in the same way we use sql for oracle? [closed]

I will ask my question as an example.
If we use Oracle as a database and we want to get data from it, what we need to know is SQL; with the help of SQL we can get the data from Oracle.
If we use MongoDB as a database, do we have to know about NoSQL?
In simpler terms: SQL for Oracle, and NoSQL for MongoDB? Am I right?
There is no such thing as the NoSQL query language. All the databases usually grouped under the catch-all label "NoSQL" are completely different technologies which are used in completely different ways.
MongoDB has a query language which is based on JavaScript object notation. It doesn't have much to do with SQL, and not anything either with the query languages of most other NoSQL databases. An interactive tutorial can be found on the MongoDB website. It should give you a basic understanding of how the query language works. The full documentation is a good source of in-depth knowledge.
Keep in mind that when you learned everything about MongoDB and its query language, you still know absolutely nothing about other NoSQL databases like Redis, Neo4j, CouchDB etc.. These are as different from MongoDB (and as different from each other) as MongoDB is different from SQL databases.

Communicating with Informix from PostgreSQL? [closed]

Please help me to setup connectivity from PostgreSQL to Informix (latest versions for both). I would like to be able to perform a query on Informix from PostgreSQL. I am looking for a solution that will not require data exports (from Informix) and imports (to PostgreSQL) for every query.
I am very new to PostgreSQL and need detailed instructions.
As Chris Travers said, what you're seeking to do is not easy to do.
In theory, if you were working with Informix and needed to access PostgreSQL, you could (buy and) use the Enterprise Gateway Manager (EGM) and use the ODBC driver for PostgreSQL to allow Informix to connect to PostgreSQL. The EGM would do its utmost to appear to be another Informix database while actually accessing PostgreSQL. (I've not validated that PostgreSQL is supported, but EGM basically needs an ODBC driver to work, so there shouldn't be any problem — 'famous last words', probably.) This will include an emulation of 2PC (two-phase commit); not perfect, but moderately close.
For the converse connection (working with PostgreSQL and connecting to Informix), you will need to look to the PostgreSQL tool suite — or other sources.
You haven't said which version you are using. There are some limitations to be aware of but there are a wide range of choices.
Since you say this is import/export, I will assume that read-only options are not sufficient. That rules out PostgreSQL 9.1's foreign data wrapper system.
Depending on your version David Fetter's DBI-Link may suit your needs since it can execute queries on remote tables (see https://github.com/davidfetter/DBI-Link). It hasn't been updated in a while but the implementation should be pretty stable and usable across versions. If that fails you can write stored procedures in an untrusted language (PL/PythonU, PL/PerlU, etc) to connect to Informix and run the queries there. Note there are limits regarding transaction handling you will run into in this case so you may want to run any queries on the other tables using deferred constraint triggers so everything gets run at commit time.
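For the deferred-trigger idea, a minimal sketch might look like this (the orders table and the replicate_row_to_informix() helper are hypothetical; the helper stands in for the untrusted-language function that actually talks to Informix):

    CREATE FUNCTION push_to_informix() RETURNS trigger AS $$
    BEGIN
        -- Hypothetical helper, e.g. a PL/PythonU function holding the
        -- Informix connection logic.
        PERFORM replicate_row_to_informix(NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- A constraint trigger can be deferred so it fires at commit time.
    CREATE CONSTRAINT TRIGGER orders_to_informix
        AFTER INSERT OR UPDATE ON orders
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE push_to_informix();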
Edit: A cleaner way occurred to me: use foreign data wrappers for import and a separate client app for export.
In this approach, you are going to have four basic components but this will be loosely coupled and subject to proper transactional controls. You can even use two-phase commit if you want. The four components are (not providing a complete working example here but at least a roadmap to one):
Foreign data wrappers for data import, allowing you to see data from Informix.
Views of data to be exported.
External application which manages the export aspect, written in a language of your choice. It listens on a notification channel, e.g. LISTEN export_informix;
Triggers on the underlying tables that make up the view of data to be exported, which raise a NOTIFY export_informix (see the sketch after this list).
The notifications are raised on commit, so basically you have two stages to your transaction in this way:
Write data in PostgreSQL, flag data to be exported. Commit.
Read data from PostgreSQL, export to Informix. Commit on both sides (TPC?).
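A minimal sketch of the NOTIFY side of this roadmap (the orders table, the export_queue table, and the channel name are illustrative):

    CREATE TABLE export_queue (
        id         bigserial PRIMARY KEY,
        table_name text    NOT NULL,
        row_pk     bigint  NOT NULL,
        exported   boolean NOT NULL DEFAULT false
    );

    CREATE FUNCTION queue_for_export() RETURNS trigger AS $$
    BEGIN
        INSERT INTO export_queue (table_name, row_pk)
        VALUES (TG_TABLE_NAME, NEW.id);
        NOTIFY export_informix;   -- delivered only when the transaction commits
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_export
        AFTER INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE queue_for_export();

    -- The external exporter runs LISTEN export_informix; and, on each
    -- notification, reads the unexported rows from export_queue, writes
    -- them to Informix, and marks them exported.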

ADO.NET Entity Framework custom classes, the right choice? [closed]

I have an upcoming requirement, and I'm unsure if EF is the right approach.
Essentially I have a Web Service Contract to implement, that has a handful of methods that return specific classes (DataContract classes with DataMember'd properties so they serialize correctly). The data that makes up the classes will be the result of queries against a backend database.
At the lowest level, I know I can just write some stored procs in the database that will return data rows that I can manually wire up to the custom classes, and call the stored procs from within a Data Layer class (calls stored procs, returns custom classes).
I'm wondering if I can use ADO.NET Entity Framework for this; however, my understanding is that this creates Entity classes from the database tables. My custom classes don't resemble any of the database tables. The stored procs perform aggregations and table joins to produce the classes.
Am I missing something here from what's possible with the EF? Or would I be better just going with stored procs / manually wiring up the custom classes in a data layer?
The Web Service will be hosted in SharePoint 2010 therefore I'm limited to ASP.NET 3.5. I think I'd be using Patterns and Practices to access the data layer, unless there are better ideas out there.
Given that you have indicated that you can only use .NET 3.5, you would be using EF 1.x which wasn't widely accepted in the ORM community.
EF 4.x is much improved, but unfortunately requires .NET 4.
DAAB is certainly an alternative, but you will still need to map out your Service entities from the data (i.e. DAAB isn't an ORM)
IMO EF comes into its own when used with LINQ, especially when used with queries - if you find that you are writing many SPROCs of the form GetXXXByYYY (or using lots of ad hoc or dynamic sp_executesql) to populate your entities, then a LINQ enabled ORM makes a lot of sense. However, if you only have a few heavy hitting PROCs which have well defined interfaces, then an ORM may be overkill.
If the object model of your application differs markedly from the data model in your database, then I'd stick to classic ADO.NET plus stored procedures for the aggregations and table joins. My opinion is that any ORM brings more trouble than benefit in such a case.
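For illustration only (dialect-neutral SQL, invented table names), this is the kind of aggregation-and-join query such a stored procedure would typically wrap:

    -- Totals per customer, joining orders to customers: the shape of
    -- result set that gets mapped onto a hand-written DataContract class.
    SELECT c.customer_id,
           c.name,
           COUNT(o.order_id)   AS order_count,
           SUM(o.total_amount) AS total_spent
    FROM   customers c
           JOIN orders o ON o.customer_id = c.customer_id
    GROUP  BY c.customer_id, c.name;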