PostGIS vs PostgreSQL performance in normal tables - postgresql

I have a question that I couldn't find an answer to by searching on my own, and I thank you for your help in advance.
Is it better to create the non-spatial tables and relations in the same PostGIS database as the spatial tables, or to create the spatial tables in a PostGIS database and the non-spatial tables and relations in a normal PostgreSQL database? And if we go with the latter choice, how do we do it properly?

Related

pgAdmin: create ERD from explicitly selected tables?

I have the ability to generate an ERD from an entire DB. My DB is huge, and I want to focus on a subset of table relationships. Is this supported in pgAdmin? Additionally, if it isn't, is there a free/open-source tool that does support this?

Have an ordinary table in a PostgreSQL TimescaleDB (time-series) database

For a project I need two types of tables:
1. a hypertable (a special type of table provided by the TimescaleDB extension for PostgreSQL) for some time-series records
2. ordinary tables which are not time series
Can I create a PostgreSQL TimescaleDB database and store my ordinary tables in it? Are all the tables in a TimescaleDB database hypertables (time series)? If not, is there any overhead to storing my ordinary tables in TimescaleDB?
If I can, is there any benefit to storing my ordinary tables in a separate, ordinary PostgreSQL database?
Can I create a PostgreSQL TimescaleDB database and store my ordinary tables in it?
Absolutely... TimescaleDB is delivered as an extension to PostgreSQL and one of the biggest benefits is that you can use regular PostgreSQL tables alongside the specialist time-series tables. That includes using regular tables in SQL queries with hypertables. Standard SQL works, plus there are some additional functions that Timescale created using PostgreSQL's extensibility features.
Are all the tables in a TimescaleDB database hypertables (time series)?
No, you have to explicitly create a table as a hypertable for it to implement TimescaleDB features. It would be worth checking out the how-to guides in the Timescale docs for full (and up to date) details.
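For example, a minimal sketch of the difference (the table and column names here are purely illustrative, not from the question):

```sql
-- A plain PostgreSQL table: nothing TimescaleDB-specific happens here.
CREATE TABLE conditions (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT        NOT NULL,
    temperature DOUBLE PRECISION
);

-- Only this explicit call turns the table into a hypertable,
-- partitioned on the time column.
SELECT create_hypertable('conditions', 'time');

-- Tables you don't convert stay ordinary PostgreSQL tables,
-- and can be joined with the hypertable in normal SQL.
CREATE TABLE devices (
    device_id TEXT PRIMARY KEY,
    location  TEXT
);
```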
If not, is there any overhead to storing my ordinary tables in TimescaleDB?
I don't think there's a storage overhead. You might see some performance gains, e.g. for data ingest and queries. This article may help clarify that: https://docs.timescale.com/timescaledb/latest/overview/how-does-it-compare/timescaledb-vs-postgres/
Overall, you can think of TimescaleDB as providing additional functionality on top of 'vanilla' PostgreSQL, so unless there's an application-design reason to move non-time-series data into a separate database, you aren't obliged to do that.
One other point, shared by a very experienced member of our Slack community [thank you Chris]:
To have time-series data and “normal” data (normalized) in one or separate databases for us came down to something like “can we asynchronously replicate the time-series information”?
In our case we use two different pg systems, one replicating asynchronously (for TimescaleDB) and one with synchronous replication (for all other data).
Transparency: I work for Timescale

Dropped geometry_columns and geography_columns tables

(cross-posted from https://gis.stackexchange.com/q/320977/104667)
I've accidentally dropped the geometry_columns and geography_columns tables from an existing PostgreSQL PostGIS database schema - let's call it mydb.myschema.
This post outlines how to restore them using CREATE TABLE ... with the .sql files from the PostGIS install (I am on a shared server and am having to get my devops team to look for them because of permissions).
I wanted to check whether this will work. Also, do these two tables (geometry_columns and geography_columns) contain data relating to the geometries held in existing tables in mydb.myschema? Or do geometry_columns and geography_columns hold reference information for spatial operations?
Advice from anyone who has gone through this before welcome.
Running PostGIS 2.5.2 on PostgreSQL 11.x.
[NOTE: No backup available as snapshots weren't set up beforehand...]
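(For reference, a hedged diagnostic sketch, assuming the stock PostGIS 2.x layout in which these two objects are views defined by the extension rather than user data tables; if that holds, recreating them would not touch the geometries stored in mydb.myschema:)

```sql
-- After restoring, confirm what kind of objects they are:
-- relkind 'v' means a view over the system catalogs,
-- relkind 'r' means an ordinary table holding its own data.
SELECT c.relname, c.relkind, n.nspname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname IN ('geometry_columns', 'geography_columns');
```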

Combine NoSQL and relational database on a single PostgreSQL instance

I have an existing relational PostgreSQL database. A few of the tables contain very fat blobs; they would be much better off as NoSQL documents. This would significantly lighten our relational database.
So, we thought of moving those blob tables out into a NoSQL solution like Cosmos DB or MongoDB. However, there are foreign-key dependencies on purely relational tables, and this complicates moving those tables out into their own database.
I have found that PostgreSQL natively supports storing documents and can be distributed. The solutions I have looked at so far are Citus Data and Postgres-XL. For those who have used them, how do they compare?
Has anyone encountered similar situations before? Did you separate out into a NoSQL database? Or has anyone partitioned their PostgreSQL database into relational and NoSQL parts? How did that go? What would you recommend looking out for in hindsight?
(Citus Engineer Here)
Postgres has the JSONB column type, which is powerful and flexible. What you can do is keep your structured tables as they are and add a jsonb column for the blob data. Test this with single-node Postgres, and if that works for you, great!
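A minimal sketch of that approach - assuming a hypothetical orders table, so none of the names below are from the question:

```sql
-- Keep the relational columns as they are; add a JSONB column for the document payload.
ALTER TABLE orders ADD COLUMN details jsonb;

-- A GIN index makes containment queries on the document fast.
CREATE INDEX orders_details_idx ON orders USING GIN (details);

-- Relational filters and document filters can be mixed in one query.
SELECT id, details->>'customer_name'
FROM orders
WHERE created_at > now() - interval '7 days'
  AND details @> '{"status": "shipped"}';
```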
If you have a problem with the scale of your data, i.e. the memory, storage, or CPU of a single machine is not enough for your workload and you cannot go bigger, then you can try scaling out with Citus or Postgres-XL.
I have no experience with Postgres-XL, but Citus is pretty easy to try. There are Docker images that you can use, or you can create an account on Citus Cloud to try a 1-week free dev plan (it would not be suitable for benchmarking purposes).
Every RDBMS-to-NoSQL migration requires one of two things:
1. embedding some of these dependent documents into the ones that are actually queried by the user, or
2. referencing dependent documents by ID and inferring these relationships on read.
Very typical, everyone does it every day, don't be afraid (both shapes are sketched below). By the way, you don't have to choose between Cosmos DB and MongoDB - just use Cosmos DB with the MongoDB API.
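A rough sketch of the two shapes, shown here with Postgres JSONB purely for concreteness (the same structures apply in Cosmos DB or MongoDB; all names are made up):

```sql
CREATE TABLE customers (id int PRIMARY KEY, doc jsonb);
CREATE TABLE orders    (id int PRIMARY KEY, doc jsonb);

-- Option 1: embed the dependent document inside the one that is queried.
INSERT INTO orders (id, doc) VALUES
(1, '{"status": "shipped", "customer": {"name": "Ada", "email": "ada@example.com"}}');

-- Option 2: reference the dependent document by id and resolve it on read.
INSERT INTO customers (id, doc) VALUES (42, '{"name": "Ada"}');
INSERT INTO orders (id, doc) VALUES
(2, '{"status": "shipped", "customer_id": 42}');

SELECT o.doc AS order_doc, c.doc AS customer_doc
FROM orders o
JOIN customers c ON c.id = (o.doc ->> 'customer_id')::int;
```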

Migrating all data from App Engine NDB over to Django models (Postgres)

I'm new to data migrations, so I'm just wondering what the best way would be to go about migrating all of the data from Bigtable (NDB) over to Django models (Postgres).
On the one hand, I have plenty of 'tables' that have plenty of relations (KeyProperties), and on the other, I must maintain those relations as well as port some over to generic relations (GFK).
I'm not even sure how to go about doing this. I know how to create a Postgres Django DB, just not how to maintain things like KeyProperties linking to image blobs. How do I copy those images over and also maintain this 'FK' relation? I have quite a bit of data and would really like to maintain its structure.
Are there any good documents on database migrations and how they're ideally done?
Any help would be appreciated!!!
Create a Postgres table just for the images (using the bytea type or large objects) and use FK relations to it.
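A minimal sketch of that layout (the posts table and its columns are hypothetical, just to show the FK shape):

```sql
-- A table that holds only the binary image data.
CREATE TABLE images (
    id   bigserial PRIMARY KEY,
    name text,
    data bytea NOT NULL
);

-- An entity that used to reference the image via an NDB KeyProperty
-- now points at the image row with an ordinary foreign key.
CREATE TABLE posts (
    id       bigserial PRIMARY KEY,
    title    text NOT NULL,
    image_id bigint REFERENCES images (id)
);
```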
The general question of how to do database migrations is too broad to answer; please ask a more specific question. You are going to have to write custom code to split apart each entity's properties and convert them into Postgres data types.