Many Postgres tables showing up empty in the middle of a session though no DML or DDL was executed

In a given schema we can see the following tables:
[screenshot: list of tables in the schema]
The search_path is :
aact=# set search_path to public, ctgov ;
SET
The problem is that in the middle of a psql session a number of these tables just started showing zero rows. I then exited psql and restarted it: those tables still showed as empty.
The interesting thing is that this is not affecting all tables:
[screenshot comparing affected and unaffected tables omitted]
I am not a regular Postgres user and have no clue what happened (or is happening) here. What should I look for?
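One thing worth checking: with public and ctgov both on the search_path, an unqualified table name resolves to the first schema that contains a table of that name, so a table that suddenly looks empty may be a different, empty table of the same name earlier in the path. A few diagnostic queries, using a hypothetical table name studies:
SHOW search_path;
-- list every schema that contains an ordinary table with this name
SELECT n.nspname AS schema, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'studies' AND c.relkind = 'r';
-- compare row counts with explicit schema qualification
SELECT count(*) FROM public.studies;
SELECT count(*) FROM ctgov.studies;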

Related

Cannot drop a schema in PostgreSQL

I am trying to drop the schema masterdata from a Postgres database, but it does not work. I am using PostgreSQL 9.5. I have two databases; one of them has the masterdata schema, which I want to drop. Here is the structure (in DBeaver):
[screenshot: database structure in DBeaver]
My first attempt was simply to execute the SQL statement DROP SCHEMA masterdata in DBeaver, but it tells me that no such schema exists (even though it does show it, as we can see in the picture!). Maybe it does not know which of the two databases to look in for this schema? If so, how do I specify it? I was looking for a way to specify the database, but did not find anything.
My second attempt was to use psql and type the command
psql -U postgres -d bobd -h localhost
Here I explicitly specify which database to use! psql does not answer anything; it just prompts for the next command. And when I type \dn to view the current schemas, the schema masterdata is still there! The data in the tables is also still there. I tried the same with other users instead of postgres (including the owner of the schema to be removed), but the result is the same.
Any idea what I am doing wrong?
Your DBeaver session was probably connected to the wrong database (postgres?).
Do it from the psql session; that is easiest.
Right after \dn showed you that there is indeed such a schema, enter
DROP SCHEMA masterdata;
It may complain that there are objects in that schema. Then you can use
DROP SCHEMA masterdata CASCADE;
Maybe your schema name contains characters that do not display, or display differently (capital letters, perhaps), in DBeaver. I suggest you check the names by running the following query:
SELECT '~~' || nspname || '~~' FROM pg_catalog.pg_namespace;
Try adding quotes around the schema name if you used capital letters when creating the schema.
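For example, if the schema had been created with a quoted mixed-case name (MasterData is hypothetical here), only the quoted form will drop it:
DROP SCHEMA "MasterData";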

SET/RESET command in ALTER DATABASE is not supported

Encountered this issue when trying to modify the search_path of my new Redshift db.
I've migrated the contents of my MySQL db into a Redshift cluster via AWS's Data Migration Service. The data was imported into a schema, let's call it my_schema. When I try to execute queries against the cluster, I have to prefix table names with the schema name,
i.e.
select * from my_schema.my_table
I wanted to change the setup so that I could reference tables directly without needing the prefix. After a bit of looking around, I found that this was possible by modifying the search_path attribute.
First I tried doing this by running
set search_path = "$user", my_schema;
This appeared to work, but then I realized it was simply setting my_schema as the default schema for the current session, whereas I wanted it set at the database level. I found several sources saying that the way to do this was to use the ALTER command, like so:
alter database my_db set search_path = "$user", public, my_schema
However, running this command results in the following error, which somehow shows up in zero Google results:
SET/RESET command in ALTER DATABASE is not supported
I'm baffled that the above error has apparently never been posted about, but I'm mostly interested in figuring out how to resolve my original issue: setting a global default schema for my Redshift cluster.
ALTER DATABASE ... SET is not supported in Redshift. However, you can SET/RESET configuration parameters at the user level using ALTER USER ... SET SEARCH_PATH TO <schema1>,<schema2>;
Please check: http://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_USER.html
http://docs.aws.amazon.com/redshift/latest/dg/r_search_path.html
When you set the search_path to <schema1>,<schema2> for a user in db1, it applies not just to the current session but to all future sessions as well.
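For example (the user and schema names are placeholders):
ALTER USER my_user SET search_path TO "$user", public, my_schema;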

how to restore a postgresql database to the exact same state?

I am trying to:
1. create a snapshot of a PostgreSQL database (using pg_dump),
2. do some random tests, and
3. restore to the exact same state as the snapshot, and do some other random tests.
These steps can happen over many different days. Also, I am in a multi-user environment where I am not the DB admin; in particular, I cannot create a new database.
However, when I restore db using
gunzip -c dump_file.gz | psql my_db
changes in step 2 above remain.
For example, if I make a copy of a table:
create table foo1 as (select * from foo);
and then restore, the copied table foo1 remains there.
Could someone explain how I can restore to the exact same state, as if step 2 never happened?
-- Update --
Following the comments from @a_horse_with_no_name, I tried to use
DROP OWNED BY my_db_user
to drop all my objects before the restore, but I got an error associated with an extension that I cannot control, and my tables remained intact.
ERROR: cannot drop sequence bg_gid_seq because extension postgis_tiger_geocoder requires it
HINT: You can drop extension postgis_tiger_geocoder instead.
Any suggestions?
You have to remove everything that's there by dropping and recreating the database or something like that. pg_dump basically just makes an SQL script that, when applied, will ensure all the tables, stored procs, etc. exist and have their data. It doesn't remove anything.
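If regenerating the dump is an option, a common variant of this is to dump with --clean (plus --if-exists) so the restore script drops each object before recreating it. Note the caveat: it only drops objects that are present in the dump, so a table like foo1 created after the snapshot would still survive:
pg_dump --clean --if-exists my_db | gzip > dump_file.gz
gunzip -c dump_file.gz | psql my_db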
You can use PostgreSQL Schemas.
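A minimal sketch of the schema idea, assuming the pristine data is kept in a schema named snapshot and tests run against a schema named work (both names hypothetical). Note that CREATE TABLE ... AS copies data but not indexes, constraints, or defaults:
DROP SCHEMA IF EXISTS work CASCADE;
CREATE SCHEMA work;
CREATE TABLE work.foo AS SELECT * FROM snapshot.foo;  -- repeat for each table
SET search_path TO work, public;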

restoring database from pg_dump file creates strange tables

I have a backup created like this:
pg_dump dbname > file
I am trying to restore the database (after drop database and create database) like this:
psql dbname < file
What I get is a database full of tables that are created with dbname.tablename instead of just tablename.
How do I restore a Postgres database making sure the tables it creates have just tablename and not dbname.tablename?
Thanks to @Craig Ringer for pointing me in the right direction.
Yes, there was a SET search_path on the original database, and it ended up in the dump. That is what created the tables with schema names prefixed.
Removing or commenting those lines out of the backup script created tables without a schema prefix, which was desirable. But the restore was not complete, and many tables got left out.
So I redid the restore by the usual means, and the tables were created with schema names prefixed. My SQL query scripts broke because they did not specify the schema name every time they queried a table. To fix this, I followed https://stackoverflow.com/a/2875705/1945517:
ALTER ROLE <your_login_role> SET search_path TO dbname;
This fixed the broken queries.
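For reference, a quick way to check whether a plain-format dump manipulates the search_path (file is the dump file name from the question):
grep -n search_path file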

PostgreSQL reset tables to original state

We have a large PostgreSQL dump with hundreds of tables that I can successfully import with pg_restore. We are developing software that inserts into a lot of these tables (~100), and for every run we need to return those tables to their original state (that is, to the content that was in the dump). Restoring the original dump again takes a lot of time, and we just can't wait half an hour before every debugging session. So I need a relatively fast way to revert these tables to the state they are in right after restoring from the dump.
I've tried using pg_restore with the -L switch and selecting these tables, but I get either a duplicate key error when using both --data-only and --clean, or a "cannot drop table X because other objects depend on it" error when using only --clean. Issuing a SET CONSTRAINTS ALL DEFERRED command before pg_restore did not work either. Maybe I have the rows in the table list all wrong; right now it's
491; 1259 39623998 TABLE public some_table some_user
8021; 0 0 COMMENT public TABLE some_table some_user
8022; 0 0 ACL public some_table some_user
for every table and then
6700; 0 39624062 TABLE DATA public some_table postgres
8419; 0 0 SEQUENCE SET public some_table_pk_id_seq some_user
for every table.
We only insert data and don't update existing rows, so deleting all rows above a given ID and resetting the sequences might work, but I really don't want to have to write these commands manually for all hundred tables, and I'm not even sure it would work even if I set CASCADE to delete other objects that depend on a given row.
Does anyone have a better idea of how to handle this?
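As an aside, the per-table commands the poster wants to avoid writing by hand can be generated from the catalog. A sketch that emits one TRUNCATE per table in the public schema; the generated statements could then be run before reloading data with pg_restore --data-only (whether this plays well with the dependencies described above is not guaranteed):
SELECT 'TRUNCATE TABLE ' || quote_ident(schemaname) || '.' || quote_ident(tablename)
       || ' RESTART IDENTITY CASCADE;'
FROM pg_tables
WHERE schemaname = 'public';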
So you are looking for something like a snapshot, in order to be able to revert quickly to a certain state.
I am not aware of any possibility in PostgreSQL to roll back to a certain timestamp.
While searching for a solution, I found two ideas here:
1. Use CREATE DATABASE with the TEMPLATE option.
2. Virtualize your PostgreSQL installation using VMware or VirtualBox, and use the snapshot feature of the virtual machines.
Again, both ideas are copied from the above source (I searched for "postgresql db snapshots").
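A minimal sketch of the template idea (database names are placeholders; CREATE DATABASE ... TEMPLATE requires that nobody else is connected to the template database):
CREATE DATABASE test_run TEMPLATE pristine_db;
-- run the tests against test_run, then throw it away:
DROP DATABASE test_run;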
You can use PITR (point-in-time recovery) to create a restore point before your loads and then take the database back to any point for which you have the WAL logs.
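A hedged sketch of what the PITR approach looks like on PostgreSQL 12 and later (the archive path and timestamp are placeholders, and a base backup plus WAL archiving must already be in place):
# in postgresql.conf of the restored base backup:
restore_command = 'cp /path/to/wal_archive/%f "%p"'
recovery_target_time = '2024-05-01 10:00:00'
# then create an empty recovery.signal file in the data directory and start the server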