Redshift queries do not finish after dropping and recreating tables - amazon-redshift

I have a few tables in a Redshift cluster. When I drop, say, table A and recreate it with its records using CREATE and the COPY command, any query on table A never finishes executing (even a simple select * from A;). I have to reboot the cluster and run the query again for it to finish, whatever kind of query it is. What's the solution here?
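For reference, a minimal sketch of the sequence described, with a hypothetical column list, S3 path, and IAM role:

DROP TABLE a;
CREATE TABLE a (
    id BIGINT,
    val VARCHAR(100)
);
-- Reload the records from S3 (path, role, and format are placeholders)
COPY a FROM 's3://my-bucket/table_a/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
CSV;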

Related

Move table to different schema in postgres

I would like to change the schema of a few tables in my Postgres DB. The problem is that there are long-running queries all the time, and as I understand it, a schema change needs an exclusive lock.
The question is: how can I do it? Of course I can kill all existing queries and try to do the schema change (move the table to a different schema), but there is a big chance that new queries will appear in the meantime.
Thanks for help!
1. run SELECT pg_backend_pid() before running the ALTER TABLE
2. start the ALTER TABLE statement
3. in a second database session, run SELECT pg_blocking_pids(12345), where 12345 is the result from the first query
4. cancel all the blocking transactions found with the previous query with SELECT pg_cancel_backend(23456)
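Put together, the workflow looks like this (pg_blocking_pids() requires PostgreSQL 9.6 or later; the pids, table, and schema names are placeholders):

-- Session 1: note your own backend pid, then start the schema move
SELECT pg_backend_pid();                      -- say it returns 12345
ALTER TABLE my_table SET SCHEMA new_schema;   -- blocks while other queries hold locks

-- Session 2: find who is blocking session 1 and cancel them
SELECT pg_blocking_pids(12345);               -- say it returns {23456}
SELECT pg_cancel_backend(23456);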

Query to find a critical situation in a Greenplum database (4.3.12) based on Postgres version 8.2.15

I need a SELECT query that finds the users and procpids, along with the statements they are running and their waiting state, for every session that accesses a particular table. In other words, it should show which sessions are blocking the table and which are waiting on it.
Thanks.
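A rough sketch of such a query, assuming Greenplum exposes the PostgreSQL 8.2-era catalog columns (procpid, current_query, and waiting in pg_stat_activity, pid in pg_locks); the table name is a placeholder:

-- Sessions holding or waiting for locks on a given table
SELECT a.usename,
       a.procpid,
       a.current_query,
       a.waiting,
       l.mode,
       l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'my_table'::regclass
ORDER BY l.granted DESC;  -- lock holders (potential blockers) first, waiters last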

Move truncated records to another table in Postgresql 9.5

The problem is the following: remove all records from one table and insert them into another.
I have a table that is partitioned by a date criterion. To avoid routing each record to its partition one by one, I collect the data in one table and periodically move it to another table. The copied records have to be removed from the first table. I'm using a DELETE query with RETURNING, but the side effect is that autovacuum has a lot of work to do to clean up the mess in the original table.
I'm trying to achieve the same effect (copy and remove records), but without creating additional work for the vacuum mechanism.
Since I'm removing all rows (a DELETE without a WHERE condition), I was thinking about TRUNCATE, but it does not support a RETURNING clause. Another idea was to somehow configure the table to remove tuples from the page immediately on DELETE, without waiting for vacuum, but I could not find whether that is possible.
Can you suggest something I could use to solve my problem?
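For reference, the copy-and-delete described above can be written as a single statement with a data-modifying CTE (table names are placeholders), though it leaves the same dead tuples behind for autovacuum:

-- Move all rows from table_a to table_b in one statement
WITH moved AS (
    DELETE FROM table_a
    RETURNING *
)
INSERT INTO table_b
SELECT * FROM moved;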
You need to use something like the following. TRUNCATE reclaims the space immediately instead of leaving dead rows behind for autovacuum:
-- Open your transaction
BEGIN;
-- Prevent concurrent writes, but allow concurrent reads
LOCK TABLE table_a IN SHARE MODE;
-- Copy the data from table_a to table_b (CREATE TABLE ... AS also works here)
INSERT INTO table_b SELECT * FROM table_a;
-- Empty table_a
TRUNCATE TABLE table_a;
-- Commit and release the lock
COMMIT;

Clearing records in HBase table

We are creating a disaster recovery system for HBase tables. Because of restrictions, we are not able to use the fancier methods of maintaining a replica of the table. We are using Export/Import statements to get the data into HDFS and using that to create the tables on the DR servers.
While importing the data into the HBase table, we use the truncate command to clear the table and load the data fresh. But the truncate statement takes a long time to delete the rows. Are there any other, more effective ways to clear the entire table?
(truncate takes 33 min for ~2,500,000 records)
disable -> drop -> create the table again, maybe? I don't know whether drop takes too long.
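In the HBase shell, that suggestion would look like the sketch below (table name and column family are placeholders). Note that the shell's truncate command is itself implemented as disable + drop + recreate, so the timings may turn out to be similar:

# Recreate the table with the same column families the original had
disable 'my_table'
drop 'my_table'
create 'my_table', 'cf'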

Postgres and temporary tables

I have a stored procedure that uses a temp table and explicitly drops it when done.
What happens when the same procedure is run at the same time by 2 different sessions? The sessions run as a single user (a webapp).
Will one session interfere with the temp table, and the data inside it, of the other session?
I'm using Postgres 9.0.
In PostgreSQL temporary tables are unique to each session, so no problem.
If I am not totally wrong, the result is one temp table per session for the query in your procedure that creates it, so the two temp tables would not disturb each other.
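A quick way to convince yourself is to run the same statements in two concurrent sessions (table and column names are made up):

-- Session 1
CREATE TEMP TABLE scratch (id int);
INSERT INTO scratch VALUES (1);

-- Session 2, at the same time: this creates its own, independent table
-- (each lives in a per-session pg_temp_N schema)
CREATE TEMP TABLE scratch (id int);
SELECT count(*) FROM scratch;  -- returns 0; session 1's row is not visible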