I have a table in Greenplum (4.3.5.1) that was initially created with a primary key constraint, and I want to remove that constraint. I tried to do so, but the query ran for 2-3 hours and I cancelled it, as no other option was left.
Then I took a backup and tried to drop the table, but that query also ran for 2-3 hours, and I finally cancelled it again.
(While the DROP TABLE query was executing, it showed a RowExclusiveLock on the tables pg_depend, pg_class and pg_type.)
I also tried TRUNCATE, but had the same problem.
Can anyone help with this? What could be the reason, and what would be the best way to resolve it?
Regards
Most likely you hit a locking issue. The first thing to check is pg_locks, which shows the current locks on the table. I bet your table is locked by some process; that is why TRUNCATE and DROP TABLE are hanging. Find the blocking query and terminate it, and then you will be able to easily drop or truncate the target table.
Here is the query that would help you:
select * from pg_locks where relation = 'mytablename'::regclass::oid;
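To go one step further and see who is holding the lock, you can join pg_locks with pg_stat_activity (a sketch; 'mytablename' is a placeholder, and note that the column names below are from recent PostgreSQL releases, while Greenplum 4.x, based on PostgreSQL 8.2, uses procpid and current_query instead of pid and query):

```sql
-- Show which sessions hold or wait for locks on the table, and what they run
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'mytablename'::regclass;

-- Once you have identified the blocking PID, terminate it:
SELECT pg_terminate_backend(12345);  -- 12345 is the blocking PID found above
```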
You should use truncate:
TRUNCATE TABLE table_name;
http://www.postgresql.org/docs/9.1/static/sql-truncate.html
I would like to change the schema of a few tables in my Postgres DB. The problem is that there are long-running queries all the time, and as I understand it, a schema change needs an exclusive lock.
The question is: how can I do it? Of course I can kill all existing queries and try to do the schema change (move the table to a different schema), but there is a big chance that new queries will appear in the meantime.
Thanks for help!
run SELECT pg_backend_pid() before running the ALTER TABLE
start the ALTER TABLE statement
in a second database session, run SELECT pg_blocking_pids(12345), where 12345 is the result from the first query
cancel all the blocking transactions found with the previous query with SELECT pg_cancel_backend(23456)
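The steps above can be sketched as follows (the table, schema, and PIDs are placeholders; pg_blocking_pids() is available from PostgreSQL 9.6 on):

```sql
-- Session 1: note your backend PID, then start the DDL
SELECT pg_backend_pid();                      -- say it returns 12345
ALTER TABLE mytable SET SCHEMA other_schema;  -- blocks while waiting for the lock

-- Session 2: find and cancel whatever is blocking session 1
SELECT pg_blocking_pids(12345);               -- returns the blocking PIDs
SELECT pg_cancel_backend(23456);              -- repeat for each PID returned
```

As soon as the blockers are cancelled, the waiting ALTER TABLE acquires its lock and completes; any queries that arrived after it simply queue behind it.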
We have thousands of tables. A few of these tables are busy at times. If I execute an ALTER statement or create a trigger on those tables while they are busy, I am unable to do it. How can I check whether a table is busy or free before running ALTER or creating a TRIGGER on it in a PostgreSQL database?
The easiest way would be to run
LOCK TABLE mytable NOWAIT;
If you get no error, the ALTER TABLE statement can proceed without waiting.
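Note that the lock taken by LOCK TABLE is held until the end of the transaction, so to be sure nothing sneaks in between the check and the DDL, run both inside one transaction (a sketch; mytable and the ALTER itself are placeholders):

```sql
BEGIN;
LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE NOWAIT;  -- errors immediately if the table is busy
ALTER TABLE mytable ADD COLUMN note text;            -- hypothetical change
COMMIT;
-- If LOCK ... NOWAIT fails, ROLLBACK and retry later.
```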
Query below returns locked objects in a database.
select t.relname, l.locktype, l.page, l.virtualtransaction, l.pid, l.mode, l.granted
from pg_locks l
join pg_stat_all_tables t on l.relation = t.relid
order by l.relation;
The problem is the following: remove all records from one table and insert them into another.
I have a table that is partitioned by a date criterion. To avoid partitioning each record one by one, I collect the data in one table and periodically move it to another table. Copied records have to be removed from the first table. I'm using a DELETE query with RETURNING, but the side effect is that autovacuum has a lot of work to do to clean up the mess in the original table.
I'm trying to achieve the same effect (copy and remove records), but without creating additional work for the vacuum mechanism.
As I'm removing all rows (a DELETE without a WHERE condition), I was thinking about TRUNCATE, but it does not support a RETURNING clause. Another idea was to somehow configure the table to automatically remove a tuple from its page on a delete operation, without waiting for vacuum, but I did not find whether that is possible.
Can you suggest something that I could use to solve my problem?
You need to use something like:
--Open your transaction
BEGIN;
--Prevent concurrent writes, but allow concurrent data access
LOCK TABLE table_a IN SHARE MODE;
--Copy the data from table_a to table_b, you can also use CREATE TABLE AS to do this
INSERT INTO table_b SELECT * FROM table_a;
--Empty table_a
TRUNCATE TABLE table_a;
--Commit and release the lock
COMMIT;
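If you cannot afford to block writers with SHARE mode, the DELETE ... RETURNING approach from the question can at least be collapsed into a single statement with a data-modifying CTE (PostgreSQL 9.1+). Be aware this still produces dead tuples for autovacuum to clean up, which the TRUNCATE approach above avoids:

```sql
-- Move all rows from table_a to table_b in one atomic statement
WITH moved AS (
    DELETE FROM table_a
    RETURNING *
)
INSERT INTO table_b
SELECT * FROM moved;
```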
For background of this please see: Do I need a primary key for my table, which has a UNIQUE (composite 4-columns), one of which can be NULL?
My questions are quite simple:
I have a table that holds product pricing information, synchronised from another database. There is just one field in this table that is controlled by the client, and the rest is dropped and re-inserted on synchronisation every once in a while (e.g. once a day to once a week, manually run by the client using a PHP script).
1) The table has an index on 4 columns (one of which can be null) and another partial index on the 3 not-null columns. When you drop about 90% of your table and re-insert data, is it good to also drop your indexes and re-create them after all the data is in the table, or is it better to simply keep the indexes "as is"?
I have now switched to another approach suggested by Erwin Brandstetter:
CREATE TEMP TABLE pr_tmp AS
SELECT * FROM product_pricebands WHERE my_custom_field IS TRUE;
TRUNCATE product_pricebands;
INSERT INTO product_pricebands SELECT * FROM pr_tmp;
which seems to be working very well, so I'm not sure if I need to drop and recreate my indexes or why I would need to do this. Any suggestions?
2) Also, how do I measure performance of my script? I actually want to know if:
CREATE TEMP TABLE pr_tmp AS
SELECT * FROM product_pricebands WHERE my_custom_field IS TRUE;
TRUNCATE product_pricebands;
INSERT INTO product_pricebands SELECT * FROM pr_tmp;
has better performance than
DELETE FROM product_pricebands WHERE my_custom_field IS TRUE;
Can I tell this via PHP? I tried EXPLAIN ANALYZE, but I'm not sure whether that works for a group of statements like the above.
Many thanks!
If your Postgres version is >= 8.4, you can use the auto_explain module:
http://www.postgresql.org/docs/8.4/static/auto-explain.html
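For example, auto_explain can be loaded and configured per session (a sketch; setting log_min_duration to 0 logs the plan of every statement):

```sql
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log plans for all statements
SET auto_explain.log_analyze = true;    -- include actual run times, like EXPLAIN ANALYZE
-- Now run your script; the plans appear in the server log.
```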
You can also use in your script:
SET log_min_duration_statement = 0;
SET log_duration = on;
to log all statements and timings for this script.
As for question one, I don't know whether it is important to rebuild the indexes, but if the table is big, dropping and recreating the indexes can be a performance improvement:
CREATE TEMP TABLE pr_tmp AS
SELECT *
FROM product_pricebands
WHERE my_custom_field IS TRUE;
-- drop all indexes here
TRUNCATE product_pricebands;
INSERT INTO product_pricebands
SELECT *
FROM pr_tmp;
-- recreate them here
How to delete all the records in SQL Server 2008?
To delete all records from a table without deleting the table.
DELETE FROM table_name (use with care: there is no undo!)
To remove a table
DROP TABLE table_name
You can use this if you have no foreign keys to other tables
truncate table TableName
or
delete TableName
if you want all tables
sp_msforeachtable 'delete ?'
Use the DELETE statement
Delete From <TableName>
Eg:
Delete from Student;
I can see that the other answers above are right, but I'll make your life easy.
I even created an example for you. I added some rows and want to delete them.
Right-click on the table and, as shown in the figure, choose Script Table as > DELETE To > New Query Editor Window:
Another window will open with a script. Delete the WHERE line, because you want to delete all rows. Then click Execute.
To make sure you did it right, right-click on the table and click "Select Top 1000 Rows". Then you can see that the table is empty.
If you want to reset your table, you can do
truncate table TableName
TRUNCATE needs privileges, and you can't use it if your table has dependents (other tables that have foreign keys referencing your table).
For one table
truncate table [table name]
For all tables
EXEC sp_MSforeachtable #command1="truncate table ?"
Delete rows in the Results pane if you want to delete records from the database. If you want to delete all of the rows, you can use a DELETE query.
Delete from Table_name
delete from TableName
isn't always a good practice.
For example, Google BigQuery does not allow a DELETE without a WHERE clause.
use
truncate table TableName
instead
When the table is very large, it's better to delete the table itself with DROP TABLE TableName and recreate it (if you have the CREATE TABLE query), rather than deleting records one by one with a DELETE FROM statement, because that can be time-consuming.
The statement is DELETE FROM YourDatabaseName.SomeTableName; if you want to remove all the records and have sufficient permissions. But you may see errors from the constraints that you defined for your foreign keys, in which case you need to change your constraints before removing the records. Alternatively, there is a command for MySQL (which may work for other databases) to ignore the constraints:
SET foreign_key_checks = 0;
Please be aware that this command will disable your foreign key constraint checks, so it can be dangerous for the relationships you created within your schema.
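In MySQL that would look like the following (a sketch; the database and table names are placeholders, and the checks should be re-enabled immediately afterwards):

```sql
SET foreign_key_checks = 0;                  -- disable FK checking for this session
DELETE FROM YourDatabaseName.SomeTableName;  -- remove all records
SET foreign_key_checks = 1;                  -- re-enable the checks
```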