Data Import deletes any VIEWS that were previously created - postgresql

I recently created VIEWs in PostgreSQL. I run a daily import script that refreshes the DB with new data by dropping all schemas and tables and then re-creating the schemas and importing the data.
Is there a way of keeping the created VIEWS in the DB?
I didn't realize that the VIEWs would be deleted as well. The reason they are not re-created along with the schemas is that the script reads from a source DB, while the VIEWs exist only on the target.
Are there other ways of circumventing this? Or any workarounds for re-using VIEWs (or something similar) with this methodology of dropping the schemas every day?
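One possible workaround (a sketch, not a definitive solution; the database name `targetdb` and the file name are placeholders, so adjust them to your setup) is to save the view definitions from the system catalog before the import drops the schemas, and replay them once the import finishes:

```shell
# Save CREATE VIEW statements for all user-defined views before the import.
# pg_views.definition already ends in a semicolon, so the generated
# statements can be replayed directly.
psql -d targetdb -At -c "
  SELECT format('CREATE VIEW %I.%I AS %s',
                schemaname, viewname, definition)
  FROM pg_views
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
" > views_backup.sql

# ... run the daily import here (drop/re-create schemas, load data) ...

# Re-create the views afterwards.
psql -d targetdb -f views_backup.sql
```

Note that this only captures plain views; if you also have materialized views, grants, or views that depend on each other in a particular order, you would need to handle those separately (e.g. with `pg_dump`).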
Thanks

Related

Amazon RDS Postgresql snapshot preserves schema but loses all data

Using the AWS RDS console I created a snapshot backup of a PostgreSQL v11 database containing multiple schemas. I then created a new instance from the backup. The process seemed to work fine without error. However, upon inspection of the data in the new instance, I noticed that in only one of my schemas the data was not preserved. The schema structure, tables, indexes, constraints, etc. looked fine, but every table was empty (`select count(*) from schema.table` was 0 for every table in the schema). All other schemas looked fine and contained the expected data.

I looked everywhere (could not find help for this online) and tried many tests myself (changing roles, rebuilding the schema, privileges, much more) while attempting to solve this issue. What would cause my snapshots to preserve the entire schema structure, but lose all of the data itself?
I finally realized that the only difference between the problem schema and the others was that all tables in the problem schema had been created with the `UNLOGGED` keyword. This was done to increase write speed for the millions of rows inserted when the schema was first built. However, when a snapshot is created/restored as described above, the process depends on the WAL files that are written for normal (logged) tables to restore the data. To fix my problem I simply altered all of the tables and set them to be logged (`alter table schema.table set logged`). After this, snapshots worked fine.

For anyone else in the future doing something similar: should unlogged tables be needed for the initial mass population of data to get better write speed, it would be a good idea to change them back to logged after the initial data population (if you plan on using snapshots, replication, or similar). Side note: pg_dump/pg_restore does still work for unlogged tables.
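If many tables are affected, the system catalog can generate the ALTER statements for you. A sketch, assuming the problem schema is named `myschema` (adjust to yours); in `pg_class`, `relpersistence = 'u'` marks unlogged tables:

```sql
-- Generate an ALTER statement for every unlogged ordinary table
-- in the target schema; run the resulting statements afterwards.
SELECT format('ALTER TABLE %I.%I SET LOGGED;', n.nspname, c.relname)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND c.relpersistence = 'u'
  AND n.nspname = 'myschema';
```

Be aware that `SET LOGGED` rewrites each table into the WAL, so it can take a while on large tables.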

Maintaining a development database with exactly the same schema

I'm trying to run a separate database for development, as my initial product release is coming up, so I'd like to know how to maintain two different databases. I'm using PostgreSQL as my DBMS.
I want the development database and the production database to have exactly the same schema. Is there a way to do this automatically? If I have to do it manually, what would be the best way to update the schema?
thank you
I want development database and production database to have exactly the same schema.
Then just create 2 databases with the same schema(s).
Or you can read more about template databases - https://www.postgresql.org/docs/11/manage-ag-templatedbs.html.
The idea is that when you create a new database, it is actually copied from template1, so you can edit template1 and every new database will have the schemas/tables you need.
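As a sketch of the template approach (the database names here are placeholders): create one database holding the shared schema, then copy it whenever a new environment is needed:

```sql
-- Create a dedicated template database and build the schema in it once.
CREATE DATABASE schema_template;
-- (connect to schema_template and create your schemas/tables there)

-- Stamp out identical dev and prod databases from the template.
CREATE DATABASE myapp_dev  TEMPLATE schema_template;
CREATE DATABASE myapp_prod TEMPLATE schema_template;
```

Note that PostgreSQL requires that no other sessions are connected to the template database while it is being copied, and that copying a template duplicates whatever is in it at that moment; keeping two *existing* databases in sync over time is a schema-migration problem, which is usually handled with migration tooling rather than templates.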

How do I share reference PostgreSQL tables between databases?

The system I'm designing has a set of reference tables that rarely have to be updated. New databases will be constantly started to process files that will have to query that information.
What's the best arrangement for coordinating communication between that set of information and the work database? I certainly don't want to duplicate that set of reference information in every new work database. The work databases will likely be deleted once their work is completed.

MySQL Workbench: Deleting multiple tables from Physical Schemas

I'm looking for a smart way of deleting multiple tables from the Physical Schemas view.
I know right-clicking a table and hitting Delete ... does the job, but it doesn't work well for thousands of tables. Selecting multiple tables, right-clicking, and hitting Delete ... doesn't work as I expected: it deletes only one table.
How can I do that? I'm using MySQL Workbench 6.2.5.
That looks like an oversight or design error. You can only delete one table at a time. To have better functionality implemented, file a feature request at http://bugs.mysql.com.

How to add new table to the database using sql workbench

I was creating a MySQL database for medicines. I created a table and needed to add one more table. After creating it, I tried to query the database from MySQL Workbench, but it does not show the table, even though it is present in the EER Model. How can I solve this problem?
Modeling is just the task of abstractly designing your schema and its objects (e.g. tables, views, etc.). It does not actually create these objects. For that, you have to forward engineer your model to a server (see the Database menu). Once done, you can use the Synchronization feature to update either the model or the server (or both) with any changes made.
But keep in mind this is only for the objects, not for any data.