I created a database long ago using Django. Now that we are migrating the application, I need all the CREATE TABLE SQL queries that Django ran to create the entire database for our service (around 70-80 tables, each with 30-70 columns on average).
Both servers, old and new, use Postgres for their databases.
But the technology stack is completely different: a third-party proprietary application will host the service instead of Django.
If I start writing all the tables again from scratch, it will take at least a week or two.
Is there any way, either from Postgres or from Django, to generate the CREATE TABLE SQL schema for the entire database, keeping all the relationships as they are?
I also have to make minor modifications to that schema per customer requirements.
P.S. pg_dump won't work, as I need the actual schema itself to get it reviewed by the client.
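One possible route, assuming the project's migration files are still available, is Django's sqlmigrate command, which prints the SQL a given migration would run without executing it (the app label and migration name below are placeholders):

python manage.py sqlmigrate myapp 0001_initial

If the migrations are long gone, pg_dump's schema-only mode may still be worth a look despite the postscript above: it emits plain, reviewable CREATE TABLE statements rather than a binary dump:

pg_dump --schema-only --no-owner mydb > schema.sql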
Related
I am working on a PostgreSQL database. Over the past few months I have added new columns to many tables on my test server, per new requirements, and I have also made changes to functions.
Now I want to make the same changes on my live server without losing data.
I have taken a backup of the test server schema using the answer below:
https://stackoverflow.com/a/7804825/7223676
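One low-tech way to see what needs to change, sketched here with placeholder database names, is to take schema-only dumps of both servers and diff them, then hand-write the corresponding statements for the live server:

pg_dump --schema-only testdb > test_schema.sql
pg_dump --schema-only livedb > live_schema.sql
diff live_schema.sql test_schema.sql

Applying hand-written ALTER TABLE ... ADD COLUMN and CREATE OR REPLACE FUNCTION statements preserves the existing data, unlike restoring a full schema dump.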
I'm trying to run a separate database for development, as my initial product release is coming up, so I'd like to know how to maintain two different databases. I'm using PostgreSQL as my DBMS.
I want the development database and the production database to have exactly the same schema. Is there a way to do this automatically? If I have to do it manually, what would be the best way to update the schema?
Thank you.
I want development database and production database to have exactly the same schema.
Then just create 2 databases with the same schema(s).
Or you can read more about template databases: https://www.postgresql.org/docs/11/manage-ag-templatedbs.html.
The idea is that when you create a new database, it is actually copied from template1; therefore you can edit template1, and every new database will have the schemas/tables that you need.
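A minimal sketch of that approach (the database names are hypothetical); note that edits to template1 only affect databases created after the edit:

-- set up the shared schemas/tables in template1 once, then:
CREATE DATABASE dev_db TEMPLATE template1;
CREATE DATABASE prod_db TEMPLATE template1;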
I'd like to preface this by saying I'm not a DBA, so sorry for any gaps in technical knowledge.
I am working within a microservices architecture, where we have about a dozen or so applications, each supported by its own Postgres database instance (which is in RDS, if that helps). Each of the microservices' databases contains a few tables. It's safe to assume that there are no naming conflicts across any of the schemas/tables, and that there's no sharding of any data across the databases.
One of the issues we keep running into is wanting to analyze/join data across the databases. Right now, we're relying on a third-party tool that caches our data and makes it possible to query across multiple database sources (via the shared cache).
Is it possible to create read-replicas of the schemas/tables from all of our production databases and have them available to query in a single database?
Are there any other ways to configure Postgres or RDS to make joining across our databases possible?
Is it possible to create read-replicas of the schemas/tables from all of our production databases and have them available to query in a single database?
Yes, that's possible and it's actually quite easy.
Set up one Postgres server that acts as the master.
For each remote server, create a foreign server, which you then use to create foreign tables that make the data accessible from the master server.
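A sketch of that setup using postgres_fdw, run on the master (host names, credentials, and table definitions are placeholders):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER users_service
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'users-db.example.com', dbname 'users', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER users_service
    OPTIONS (user 'readonly', password 'secret');

-- either import every table from the remote schema into a local one ...
CREATE SCHEMA users_remote;
IMPORT FOREIGN SCHEMA public FROM SERVER users_service INTO users_remote;

-- ... or declare individual foreign tables by hand
CREATE FOREIGN TABLE users (id bigint, email text)
    SERVER users_service
    OPTIONS (schema_name 'public', table_name 'users');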
If you have multiple tables in multiple servers that should be viewed as a single table in the master, you can set up inheritance to make all those tables appear as one. If you can define a "sharding" key that identifies a distinct attribute between those servers, you can even make Postgres request the data only from the specific server.
All foreign tables can be joined as if they were local tables. Depending on the kind of query, some (or a lot) of the filter and join criteria can even be pushed down to the remote server to distribute the work.
As the Postgres Foreign Data Wrapper is writable, you can even update the remote tables from the master server.
If the remote access and joins are too slow, you can create materialized views based on the remote tables to create a local copy of the data. This, however, means that it's not a real-time copy, and you have to manage the regular refresh of those views.
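A sketch of that local-copy variant, reusing the foreign table from the example above:

CREATE MATERIALIZED VIEW users_local AS
    SELECT * FROM users;

-- must be re-run (e.g. from cron) to pick up remote changes
REFRESH MATERIALIZED VIEW users_local;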
Other (more complicated) options are the BDR project or pglogical. It seems that logical replication will be built into the next Postgres version (to be released at the end of this year).
Or you could use a distributed, shared-nothing system like Postgres-XL (which is probably the most complicated system to set up and maintain).
How can I obtain the creation date or time of an IBM DB2 database without connecting to the specified database first? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can the db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this, here is the background story.
I have a server with a DB2 DBMS that many people and automated scripts use to create databases for temporary tasks and tests. It is never meant to keep data for a long time. However, for one reason or another (e.g., a developer not cleaning up after himself, or tests being stopped forcefully before they can do their cleanup), some databases never get dropped, and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than six months (for example).
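A rough sketch of that cleanup app as a shell loop. It still has to connect to each database (exactly what the question hopes to avoid), so treat it as the fallback built on the syscat.tables query quoted above; the directory parsing and the six-month comparison are assumptions left open:

for db in $(db2 list database directory | awk '/Database name/ {print $4}'); do
    db2 connect to "$db"
    created=$(db2 -x "select min(create_time) from syscat.tables")
    db2 connect reset
    echo "$db oldest object: $created"
    # if "$created" is older than 6 months: db2 drop database "$db"
done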
I am new to the PostgreSQL database. What my Visual C++ application needs to do is create multiple tables and add/retrieve data from them.
Each session of my application should create a new and distinct database. I can use the current date and time for a unique database name.
There should also be an option to delete all the databases.
I have worked out how to connect to a database, create tables, and add data to tables. I am not sure how to make a new database for each run, or how to retrieve the number and names of the databases if the user wants to clear all databases.
Please help.
See the libpq examples in the documentation. The example program shows you how to list databases, and in general how to execute commands against the database. The example code there is trivial to adapt to creating and dropping databases.
Creating a database is a simple CREATE DATABASE SQL statement, same as any other libpq operation. You must connect to a temporary database (usually template1) to issue the CREATE DATABASE, then disconnect and make a new connection to the database you just created.
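In terms of the SQL the application ends up executing, the flow is roughly this (the date-based name is the hypothetical scheme from the question):

-- 1. while connected to template1:
CREATE DATABASE session_20240101_120000;

-- 2. later, to enumerate databases (e.g. for the "clear all" option):
SELECT datname FROM pg_database WHERE NOT datistemplate;

-- 3. once no session is connected to it any more:
DROP DATABASE session_20240101_120000;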
Rather than creating new databases, consider creating new schemas instead. It's much less hassle, since all you need to do is change the search_path or prefix your table references; you don't have to disconnect and reconnect to change schemas. See the documentation on schemas.
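A sketch of that schema-per-session alternative, all over a single connection (the names are placeholders):

CREATE SCHEMA session_20240101_120000;
SET search_path TO session_20240101_120000;
CREATE TABLE results (id serial PRIMARY KEY, value text);

-- "clear all" becomes one statement per schema, with no reconnects:
DROP SCHEMA session_20240101_120000 CASCADE;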
I question the wisdom of your design, though. It is rarely a good idea for applications to be creating and dropping databases (or tables, except temporary tables) as a normal part of their operation. Maybe if you elaborated on why you want to do this, we can come up with solutions that may be easier and/or perform better than your current approach.