What is meant by a "logical table?" i.e. Deleted and Inserted. That is, why can't we do something like:
Delete From exampletable
Select * From deleted
In the same session and see the results from the deleted table?
You are probably looking for the OUTPUT clause:
Delete From exampletable
OUTPUT deleted.*
The DELETED and INSERTED tables are created by the SQL engine to hold the rows affected by your data manipulation (DML) statement. Think of them as being similar to a temp table that you would create in a stored procedure to hold interim results.
Once your DML statement has completed, SQL Server doesn't need them anymore, so it "drops" the "temp tables" and they aren't there to query any longer. You can, though, access them using the OUTPUT clause in your DML statement, as @LucaszSzozda explains, because at that point, the engine hasn't dropped them yet.
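For illustration, here is a minimal sketch of capturing the deleted rows with OUTPUT ... INTO so they can still be queried after the statement finishes (the column names id and name are hypothetical and would need to match exampletable):
-- Table variable with hypothetical columns matching exampletable
DECLARE @removed TABLE (id int, name varchar(50));
DELETE FROM exampletable
OUTPUT deleted.id, deleted.name INTO @removed;
-- The DELETED pseudo-table is gone now, but the copied rows are still here:
SELECT * FROM @removed;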
I assume this question has been asked before, but unfortunately I cannot find the answer to my question.
I have a table, and I am using an update statement to update a column. Simultaneously I am running a create table query with a select statement that is retrieving data from the table and column that is also being updated.
My questions are: Can this lead to wrong results in the output of the CREATE TABLE statement? Does the UPDATE query finish first, and then the CREATE TABLE with the SELECT execute? I just know that the CREATE TABLE statement is taking much longer to execute.
In PostgreSQL readers never lock writers and vice versa. This is guaranteed by PostgreSQL's MVCC implementation that keeps old row versions around.
If the updating transaction isn't finished yet, the reading transaction will see the old value, and the result is consistent.
There is nothing inside PostgreSQL that should slow down the SELECT statement noticeably, but of course I/O contention is a possible explanation.
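As a minimal sketch of that behavior (using a hypothetical accounts table), this is what two concurrent sessions would see:
-- Session 1 (the writer):
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 1;  -- not yet committed
-- Session 2 (the reader), while session 1 is still open:
SELECT balance FROM accounts WHERE id = 1;                 -- still sees the old value, no waiting
-- Session 1:
COMMIT;                                                    -- new snapshots now see the updated value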
I am using Redshift. I want a query to delete selected rows from a Redshift table if the table exists, and otherwise just ignore the statement.
Redshift's SQL dialect doesn't contain control-of-flow statements like IF ... THEN, so you are not going to be able to do this in a single SQL statement.
Your application or process will need to first query the Redshift table metadata to determine whether the table exists, e.g.
select 1 from pg_tables where schemaname = 'myschema' and tablename = 'mytable';
If data is returned (i.e. the table exists), the application or process executes the delete statement; if no data is returned, it does nothing. Basically you need to handle the "if this then do this" logic externally to Redshift.
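As a sketch, the external logic boils down to two statements issued by the application (the filter column load_date is purely hypothetical):
select 1 from pg_tables where schemaname = 'myschema' and tablename = 'mytable';
-- only if the query above returned a row does the application then issue:
delete from myschema.mytable where load_date < '2020-01-01';  -- hypothetical filter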
I recommend @Nathan's answer. I would use python/psycopg2 to set up this logic. The first query would check for the table's existence in pg_tables (e.g. SELECT count(1) FROM pg_tables WHERE tablename='foo'), and store the result in a variable. Then you'd check the result of that variable to decide whether to kick off a second query (your delete).
But maybe you don't want to do it in Python. You're just all about Redshift (it's pretty sweet). You could just run the DELETE query in Redshift. If the table is not present, the query fails and nothing happens. If the table is there, you delete your data. There's no harm in generating an error here.
The problem is the following: remove all records from one table and insert them into another.
I have a table that is partitioned by date criteria. To avoid partitioning each record one by one, I'm collecting the data in one table and periodically moving it to another table. Copied records have to be removed from the first table. I'm using a DELETE query with RETURNING, but the side effect is that autovacuum has a lot of work to do to clean up the mess in the original table.
I'm trying to achieve the same effect (copy and remove records), but without creating additional work for vacuum mechanism.
As I'm removing all rows (by a delete without where conditions), I was thinking about TRUNCATE, but it does not support a RETURNING clause. Another idea was to somehow configure the table to automatically remove the tuple from the page on a delete operation, without waiting for vacuum, but I did not find whether that is possible.
Can you suggest something that I could use to solve my problem?
You need to use something like:
--Open your transaction
BEGIN;
--Prevent concurrent writes, but allow concurrent data access
LOCK TABLE table_a IN SHARE MODE;
--Copy the data from table_a to table_b, you can also use CREATE TABLE AS to do this
INSERT INTO table_b SELECT * FROM table_a;
--Empty table_a
TRUNCATE TABLE table_a;
--Commit and release the lock
COMMIT;
I am using the following commands below in postgresql 9.1.3 to move data from a temp staging table to a table being used in a webapp (geoserver) all in the same db. Then dropping the temp table.
TRUNCATE table_foo;
INSERT INTO table_foo
SELECT * FROM table_temp;
DROP TABLE table_temp;
I want to wrap this in a transaction to allow for concurrency. The data set is small (less than 2000 rows) and truncating is faster than deleting.
What is the best way to run these commands in a transaction?
Is creating a function advisable, or writing an UPSERT/MERGE etc. in a CTE?
Would it be better to DELETE all rows then bulk INSERT from temp table instead of TRUNCATE?
In Postgres, which would allow for a rollback: TRUNCATE or DELETE?
The temp table is delivered daily via an ETL scripted in arcpy. How could I automate the truncate/delete/bulk insert parts within Postgres?
I am open to using PL/pgsql, PL/python (or the recommended py for postgres)
Currently I am manually executing the sql commands after the temp staging table is imported into my DB.
Both truncate and delete can be rolled back (which is clearly documented in the manual).
truncate, due to its nature, has some oddities regarding visibility.
See the manual for details: http://www.postgresql.org/docs/current/static/sql-truncate.html (the warning at the bottom)
If your application can live with the fact that table_foo is "empty" during that process, truncate is probably better (again see the big red box in the manual for an explanation). If you don't want the application to notice, you need to use delete.
To run these statements in a transaction simply put them into one:
begin transaction;
delete from table_foo;
insert into table_foo select * from table_temp;
drop table table_temp;
commit;
Whether you do that in a function or not is up to you.
truncate/insert will be faster (than delete/insert) as that minimizes the amount of WAL generated.
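If you do decide to go the function route, a minimal PL/pgSQL sketch (the function name refresh_table_foo is hypothetical; the table names come from the question) could look like this:
CREATE OR REPLACE FUNCTION refresh_table_foo() RETURNS void AS $$
BEGIN
    -- Runs inside the caller's transaction, so the whole refresh commits or rolls back together
    TRUNCATE table_foo;
    INSERT INTO table_foo SELECT * FROM table_temp;
    DROP TABLE table_temp;
END;
$$ LANGUAGE plpgsql;
-- After the ETL has loaded the staging table, run:
-- SELECT refresh_table_foo();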
I'm working on an application that imports data from Access to SQL Server 2008. Currently, I'm using a stored procedure to import the data individually by record. I can't go with a bulk insert or anything like that because the data is inserted into two related tables...I have a bunch of fields that go into the Account table (first name, last name, etc.) and three fields that will each have a record in an Insurance table, linked back to the Account table by the auto-incrementing AccountID that's selected with SCOPE_IDENTITY in the stored procedure.
Performance isn't very good due to the number of round trips to the database from the application. For this and some other reasons I'm planning to instead use a staging table and import the data from there. Reading up on my options for approaching this, a cursor that executes the same insert stored procedure on the data in the staging table would make sense. However, it appears that cursors are evil incarnate and should be avoided.
Is there any way to insert data into one table, retrieve the auto-generated IDs, then insert data for the same records into another table using the corresponding ID, in a set-based operation? Or is a cursor my only option here?
Look at the OUTPUT clause. You should be able to add it to your INSERT statement to do what you want.
BTW, if you need to output columns into the second table that weren't inserted into the first one, then use MERGE instead of INSERT (as suggested in the comment to the original question) as its OUTPUT clause supports referencing other columns from the source table(s). Otherwise, keeping it with an INSERT is more straightforward, and it does give you access to the inserted identity column.
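As a minimal sketch of that MERGE pattern (the staging table and column names StagingTable, FirstName, LastName, and InsuranceCode are hypothetical; Account, Insurance, and AccountID come from the question):
-- Capture the generated AccountID together with a staging column that
-- never touches the Account table
DECLARE @AccountMap TABLE (AccountID int, InsuranceCode varchar(50));
MERGE INTO Account AS tgt
USING StagingTable AS src
    ON 1 = 0  -- never matches, so every staging row takes the insert branch
WHEN NOT MATCHED THEN
    INSERT (FirstName, LastName) VALUES (src.FirstName, src.LastName)
OUTPUT inserted.AccountID, src.InsuranceCode INTO @AccountMap;
-- Now insert the related rows using the captured identity values
INSERT INTO Insurance (AccountID, InsuranceCode)
SELECT AccountID, InsuranceCode FROM @AccountMap;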
I have experimented with inserting multiple records into related tables using data binding. So, try this!
Hopefully this is helpful. Follow this link, How to insert records into related tables, for more information.