How to execute TSQL select without blocking? - sql-server-2008-r2

I have a table which occasionally gets massive (300k+) amounts of rows inserted into it in a batch.
However, if the table is being read from during this insert period, the inserts and selects time out.
Preventing all selects allows the inserts to run just fine.
Is there a way I can allow the selects to happen in a way that doesn't block the inserts?
I'm selecting with READ UNCOMMITTED but that doesn't seem to be enough.
I don't care if the read isn't 100% accurate (with regard to the inserted data); it can miss rows if need be. I just need the select to be fast and not upset the insert. Is this possible?

NOLOCK - http://technet.microsoft.com/en-us/library/aa213026(v=sql.80).aspx
Does this help?
SELECT * FROM [TableName] WITH (NOLOCK)
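The NOLOCK hint is the table-level synonym for the READ UNCOMMITTED isolation level the question already mentions, so a sketch of both forms, using a hypothetical dbo.BatchTable and col1/col2 in place of the real names, looks like this:
-- per-query table hint (hypothetical table and column names)
SELECT col1, col2
FROM dbo.BatchTable WITH (NOLOCK);

-- or the session-wide form the question already tried
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT col1, col2
FROM dbo.BatchTable;
Since the two are equivalent for that table, the hint may behave no differently from the isolation level the question says is already in use.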


PostgreSQL query - show which rows are locked

I would like to query data from a table, and if a row is locked, show it in a different color. Is this possible using PostgreSQL's FOR UPDATE locking?
e.g.
select
*,
(select from pg_x -- link row somehow )
from table
Thank you.
There is no good way to do that. The row locks are stored in the row (system column xmax), but this attribute serves other purposes too, and the flags that determine whether it is indeed a row lock or perhaps a rolled-back update are not exposed via SQL.
There are only unpleasant alternatives:
Use the pageinspect contrib module to examine those flags. That would be a second scan of the table, and such a query doesn't respect MVCC visibility.
Run a second query:
SELECT * FROM atable
FOR UPDATE SKIP LOCKED;
That would lock all rows in the table and be very bad for concurrency.
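If you did go that route, one way to turn it into a "which rows are locked" report is to compare the table against a SKIP LOCKED scan of itself. A sketch, assuming the table has a primary key column id, and bearing in mind that the scan itself locks every row it manages to read:
SELECT t.*
FROM atable t
LEFT JOIN (SELECT id              -- id is assumed to be the primary key
           FROM atable
           FOR UPDATE SKIP LOCKED) AS unlocked
       ON unlocked.id = t.id
WHERE unlocked.id IS NULL;        -- rows the scan could not read are locked by someone else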
Besides, that information would be pretty useless for the user. In a well-written application, row locks are only held for split seconds, so the information would be outdated by the time it reaches the user.

Bloating of pg_attribute caused by repetitive temporary table creations

I have a process which is creating thousands of temporary tables a day to import data into a system.
It is using the form of:
create temp table if not exists test_table_temp as
select * from test_table where 1=0;
This very quickly creates a lot of dead rows in pg_attribute as it is constantly making lots of new columns and deleting them shortly afterwards for these tables. I have seen solutions elsewhere that suggest using on commit delete rows. However, this does not appear to have the desired effect either.
To test the above, you can create two separate sessions on a test database. In one of them, check:
select count(*)
from pg_catalog.pg_attribute;
and also note down the value for n_dead_tup from:
select n_dead_tup
from pg_stat_sys_tables
where relname = 'pg_attribute';
On the other one, create a temp table (you will need another table to select from):
create temp table if not exists test_table_temp on commit delete rows as
select * from test_table where 1=0;
The count from the pg_attribute query immediately goes up, even before we reach the commit. Upon closing the session that created the temp table, the pg_attribute count goes back down, but n_dead_tup goes up, suggesting that vacuuming is still required.
I guess my real question is: have I missed something above, or is the only way of dealing with this issue to vacuum aggressively and take the performance hit that comes with it?
Thanks for any responses in advance.
No, you have understood the situation correctly.
You either need to make autovacuum more aggressive, or you need to use fewer temporary tables.
Unfortunately you cannot change the storage parameters on a catalog table – at least not in a supported fashion that will survive an upgrade – so you will have to do so for the whole cluster.
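Making autovacuum more aggressive for the whole cluster means changing the global settings in postgresql.conf (or with ALTER SYSTEM). A sketch with illustrative values only; they are assumptions to tune against your own workload, not recommendations:
ALTER SYSTEM SET autovacuum_naptime = '15s';             -- wake the autovacuum launcher more often (default 1min)
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;  -- trigger a vacuum after ~1% dead rows (default 0.2)
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;    -- let each worker do more I/O per cost cycle
SELECT pg_reload_conf();                                 -- all three can be applied with a reload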

Postgres - Bulk transferring of data from one table to another

I need to transfer a large amount of data (several million rows) from one table to another. So far I’ve tried doing this….
INSERT INTO TABLE_A (field1, field2)
SELECT field1, field2 FROM TABLE_A_20180807_BCK;
This worked (eventually) for a table with about 10 million rows in it (took 24 hours). The problem is that I have several other tables that need the same process applied and they’re all a lot larger (the biggest is 20 million rows). I have attempted a similar load with a table holding 12 million rows and it failed to complete in 48 hours so I had to cancel it.
Other issues that probably affect performance are 1) TABLE_A has a field based on an auto-generated sequence, 2) TABLE_A has an AFTER INSERT trigger on it that parses each new record and adds a second record to TABLE_B
A number of other threads have suggested doing a pg_dump of TABLE_A_20180807_BCK and then loading the data back into TABLE_A. I’m not sure a pg_dump would actually work for me because I’m only interested in a couple of fields from TABLE_A, not the whole lot.
Instead I was wondering about the following….
Export to a CSV file…..
COPY TABLE_A_20180807_BCK (field1,field2) to 'd:\tmp\dump\table_a.dump' DELIMITER ',' CSV;
Import back into the desired table….
COPY TABLE_A (field1, field2) FROM 'd:\tmp\dump\table_a.dump' DELIMITER ',' CSV;
Is the export/import method likely to be any quicker? I’d like some guidance on this before I start on another job that may take days to run and may not even work any better! The obvious answer of "just try it and see" isn't really an option; I can't afford more downtime!
(this is follow-on question from this, if any background details are required)
Update....
I don't think there are any significant problems with the trigger. Under normal circumstances records are INSERTed into TABLE_A at a rate of about 1000/sec (including trigger time). I think the issue is likely to be the size of the transaction: under normal circumstances records are inserted in blocks of 100 per INSERT, whereas the statement shown above attempts to add 10 million records in a single transaction. My guess is that this is the problem, but I've no way of knowing if it really is, or whether there's a suitable workaround (or whether the export/import method I've proposed would be any quicker).
Maybe I should have emphasized this earlier: every insert into TABLE_A fires a trigger that adds a record to TABLE_B. It's the data in TABLE_B that's the final objective, so disabling the trigger isn't an option! This whole problem came about because I accidentally disabled the trigger for a few days, and the preferred solution to the question 'how to run a trigger on existing rows' seemed to be 'remove the rows and add them back again' - see the original post (link above) for details.
My current attempt involves using the COPY command with a WHERE clause to split the contents of TABLE_A_20180807_BCK into a dozen small files and then re-load them one at a time. This may not give me any overall time saving, but although I can't afford 24 hours of continuous downtime, I can afford 6 hours of downtime for 4 nights.
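For reference, the per-chunk COPY can export a query instead of the whole table. A sketch, where the range on field1 is only a placeholder for however the rows end up being split:
COPY (SELECT field1, field2
      FROM TABLE_A_20180807_BCK
      WHERE field1 >= 1 AND field1 < 1000000)   -- placeholder range; adjust per chunk
TO 'd:\tmp\dump\table_a_chunk_01.dump' DELIMITER ',' CSV;
-- and later, one chunk at a time:
COPY TABLE_A (field1, field2)
FROM 'd:\tmp\dump\table_a_chunk_01.dump' DELIMITER ',' CSV;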
Preparation (if you have access and can restart your server): set checkpoint_segments to 32 or perhaps more. This will reduce the frequency and number of checkpoints during this operation. You can undo it when you're finished. This step is not strictly necessary, but it should speed up writes considerably. A consolidated sketch of all the steps follows after step 5a.
Edit postgresql.conf and set checkpoint_segments to 32 or maybe more.
Step 1: drop/delete all indexes and triggers on table A.
EDIT: Step 1a
ALTER TABLE table_a SET UNLOGGED;
(repeat step 1 for each table you're inserting into)
Step 2. (unnecessary if you do one table at a time)
begin transaction;
Step 3.
INSERT INTO TABLE_A (field1, field2)
SELECT field1, field2 FROM TABLE_A_20180807_BCK;
(repeat step 3 for all tables being inserted into)
Step 4. (unnecessary if you do one table at a time)
commit;
Step 5: re-create the indexes and re-enable the triggers on all tables.
Step 5a.
ALTER TABLE table_a SET LOGGED;
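Pulled together for a single table, the steps above might look something like this sketch. The index name and definition are placeholders for whatever actually exists on TABLE_A, and since the question notes that the TABLE_B trigger ultimately has to fire for these rows, the trigger lines may not apply in this particular case:
-- Step 1 / 1a: remove indexes, disable user triggers, stop WAL-logging the table
DROP INDEX IF EXISTS table_a_field1_idx;        -- placeholder index name
ALTER TABLE table_a DISABLE TRIGGER USER;       -- disables all user-defined triggers on the table
ALTER TABLE table_a SET UNLOGGED;

-- Steps 2-4: the bulk insert itself
BEGIN;
INSERT INTO table_a (field1, field2)
SELECT field1, field2 FROM table_a_20180807_bck;
COMMIT;

-- Step 5 / 5a: put everything back
ALTER TABLE table_a SET LOGGED;
CREATE INDEX table_a_field1_idx ON table_a (field1);  -- placeholder definition
ALTER TABLE table_a ENABLE TRIGGER USER;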

Caching various statistics in a special one row table?

Expecting hundreds of millions of rows and a write-heavy application.
We need to return SELECT COUNT(*) FROM orders and SELECT SUM(amount) FROM orders quite frequently, and both of them are too slow to be run on every request.
We are thinking about adding a special table called stats with just a single row. It has total_orders and total_amount, which we would increase every time we add a new order. Is this kind of SQL "cache" table a practical solution? What does it mean in terms of write performance?
Another option is to use Memcached or Redis, but they can get out of sync and are not persistent. Any other ideas?
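As a concrete illustration of the idea in the question (a sketch only; the column types and the orders schema are assumptions), the single-row table and the per-order update could look like this:
-- the one-row "cache" table; the CHECK constraint keeps it to a single row (assumed types)
CREATE TABLE stats (
    id           int PRIMARY KEY DEFAULT 1 CHECK (id = 1),
    total_orders bigint  NOT NULL DEFAULT 0,
    total_amount numeric NOT NULL DEFAULT 0
);
INSERT INTO stats (id) VALUES (1);

-- run in the same transaction as each new order:
UPDATE stats
SET total_orders = total_orders + 1,
    total_amount = total_amount + 99.95   -- the new order's amount
WHERE id = 1;
The write-side cost is that every order insert now also updates that single row, so concurrent writers serialize on its row lock.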

select * or individual columns

Is there any difference in performance between
select *
from table_name
and
select [col1]
,[col2]
......
,[coln]
from table_name
It is a SQL antipattern to use SELECT *. It is faster for the database when you specify columns (it doesn't have to look them up), and, more importantly, you should never request more columns than you actually need. If you have a join in your query, you have at least one column you don't need (the join column), so SELECT * is always slower to return records in a query with a join, since it returns more information than necessary.
Now all this sounds like a small improvement and in a small system it might be, but as the database grows and gets busy, the performance implications become bigger. There is no excuse for using SELECT *.
SELECT * is also bad for maintenance, especially if you use it to insert records: in an INSERT statement that uses a SELECT, always specify both the columns you are inserting into and the columns in the SELECT. Otherwise it will break if you change the table structure. You may also end up showing the user columns you don't want them to see, such as a GUID added for replication.
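A sketch of the difference, using hypothetical dbo.Orders and dbo.OrdersArchive tables:
-- explicit columns on both sides: keeps working if either table gains a new column (hypothetical tables)
INSERT INTO dbo.OrdersArchive (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM dbo.Orders;

-- SELECT * form: breaks, or silently puts data in the wrong columns,
-- as soon as the two table structures drift apart
INSERT INTO dbo.OrdersArchive
SELECT *
FROM dbo.Orders;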
If you use SELECT * in a view (at least in SQL Server), the view will still not automatically update when the underlying tables change. If someone gets the silly idea to rearrange the column order in the table (yes, I know you shouldn't, but people do this on occasion), using SELECT * might mean that data shows up in the wrong columns in reports or inserts, which can cause problems where things are misinterpreted. I can think of one case where two columns were swapped in a staging table and the social security number became the amount we intended to pay the person giving a speech. You can see how that might really muck up the accounting, except we didn't use SELECT *, so we were safe because the columns kept the same names.
I want to note that you don't even save much development time by using SELECT *. It takes me at most about 15 seconds more to list the column names, even for a large table, as I can drag them over from the object browser in SQL Server (you can get all the columns in one step).
I suppose SELECT * takes a few CPU cycles less for SQL Server to parse, but I don't think it will make a noticeable difference.