The question is simple enough. I have a DataTable object filled with about 200,000 records that needs to be sent into a temporary table in the database. What is the most performant way to do it?
IfxBulkCopy is unfortunately not an option, since the algorithm involved will be part of a larger transaction.
I did some tests earlier and found that calling a stored procedure that wraps INSERT INTO ... VALUES ... is marginally better than issuing the INSERT command directly, but both were still considerably slow.
In my early days of working with Profilics tools on AIX we had insert cursors available, which were pretty fast. Perhaps there is another option in the .NET Informix client (DRDA) that I'm not aware of?
I have some extensive queries (each of them takes around 90 seconds). The good news is that my queries do not change much, so most of them are duplicates. I am looking for a way to cache the query results in PostgreSQL. I have searched for an answer but could not find one (some answers are outdated and some are not clear).
I use an application which is connected to Postgres directly.
The query is a simple SQL query which returns thousands of rows.
SELECT * FROM Foo WHERE field_a<100
Is there any way to cache a query result for at least a couple of hours?
It is possible to cache expensive queries in Postgres using a technique called a "materialized view"; however, given how simple your query is, I'm not sure this will give you much gain.
You may be better off caching this information directly in your application, in memory, or, if possible, caching a further-processed set of data rather than the raw rows.
ref:
https://www.postgresql.org/docs/current/rules-materializedviews.html
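For illustration, a minimal sketch of the materialized-view approach using the query from the question (the view name is made up; refresh it on whatever schedule fits the couple-of-hours requirement):
CREATE MATERIALIZED VIEW foo_cache AS
SELECT * FROM Foo WHERE field_a < 100;

-- run periodically (e.g. from cron) to pick up new rows:
REFRESH MATERIALIZED VIEW foo_cache;

-- the application then queries the cached copy:
SELECT * FROM foo_cache;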
Depending on what your application looks like, a TEMPORARY TABLE might work for you. It is only visible to the connection that created it and it is automatically dropped when the database session is closed.
CREATE TEMPORARY TABLE tempfoo AS
SELECT * FROM Foo WHERE field_a<100;
The downside to this approach is that you get a snapshot of Foo when you create tempfoo. You will not see any new data that gets added to Foo when you look at tempfoo.
Another approach: if you have access to the database, you may be able to significantly speed up your queries by adding an index on field_a.
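For example (the index name is illustrative):
CREATE INDEX idx_foo_field_a ON Foo (field_a);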
From PostgreSQL 9.6 Release Notes:
Only strictly read-only queries where the driving table is accessed via a sequential scan can be parallelized.
My question is: if a CTE (WITH clause) contains only read operations, but its result is used to feed a write operation such as an INSERT or UPDATE, are its sequential scans also disallowed from being parallelized?
I mean, since a CTE is much like a temporary table that only exists for the currently executing query, can I assume that its inner query can take advantage of the brand-new parallel seq-scan of PostgreSQL 9.6? Or is it treated like a subquery and therefore cannot use a parallel scan?
For example, consider this query:
WITH foobarbaz AS (
    SELECT foo FROM bar
    WHERE some_expensive_function(baz)
)
DELETE FROM bar
USING foobarbaz
WHERE bar.foo = foobarbaz.foo
;
Is the foobarbaz calculation expected to be parallelizable, or is it disallowed because of the DELETE statement?
If it isn't allowed, I thought I could replace the CTE with a CREATE TEMPORARY TABLE statement. But I think I would run into the same issue, since CREATE TABLE is a write operation. Am I wrong?
Lastly, one last thing I could try is to perform the read as a pure read-only operation and use its result as input for INSERT and/or UPDATE operations. Outside of a transaction it should work. But the question is: if the read operation and the INSERT/UPDATE are between BEGIN and COMMIT statements, will it be disallowed anyway? I understand they are two completely different statements, but they are in the same transaction on the same Postgres connection.
To be clear, my concern is that I have an awful mass of hard-to-read and hard-to-redesign SQL queries that involve multiple sequential scans with slow function calls and perform complex changes over two tables. The whole process runs in a single transaction because otherwise the mess left by a failure would be totally unrecoverable.
My hope is to be able to parallelize some sequential scans to take advantage of the machine's 8 CPU cores and complete the process sooner.
Please don't answer that I need to fully redesign that mess: I know, and I'm working on it. But it is a large project and we need to keep working on it in the meantime.
Anyway, any suggestion will be appreciated.
EDIT:
Here is a brief report of what I have discovered so far:
As @a_horse_with_no_name says in his comment (thanks), the CTE and the rest of the query form a single DML statement, and if it contains a write operation, even outside the CTE, then the CTE cannot be parallelized (I tested this as well).
I also found this wiki page, which has more concise information about parallel scans than the release notes linked above.
An interesting point I was able to verify thanks to that wiki page is that I need to declare the involved functions as PARALLEL SAFE. I did so and it worked (in a test without writes).
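For reference, the declaration looks roughly like this (the argument type is a guess; use the function's real signature):
ALTER FUNCTION some_expensive_function(integer) PARALLEL SAFE;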
Another interesting point is what @a_horse_with_no_name says in his second comment: use DbLink to perform a pure read-only query. Investigating that a bit, I saw that postgres_fdw, which the wiki explicitly mentions as not supporting parallel scans, provides roughly the same functionality on a more modern, standards-compliant infrastructure.
On the other hand, even if it worked, I would end up getting data from outside the transaction, which would be acceptable for me in some cases but, I think, is not a good idea as a general solution.
Finally, I checked that it is possible to perform a parallel scan in a read-only query inside a transaction, even if the transaction later performs write operations (no exception is raised and I could commit).
...in summary, I think my best bet (if not the only one) is to refactor the script so that it reads the data into memory first and then performs the write operations later in the same transaction.
This will increase I/O overhead but, given the latencies I am dealing with, it will still come out ahead.
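Roughly the shape I have in mind, reusing the names from the example above (:collected_foos is a placeholder bind parameter for the values the client gathers in memory between the two statements):
BEGIN;

-- read step: a stand-alone, read-only SELECT, eligible for a parallel seq-scan
SELECT foo
FROM bar
WHERE some_expensive_function(baz);

-- the client keeps the result in memory and then issues the write step:
DELETE FROM bar
WHERE foo = ANY (:collected_foos);

COMMIT;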
I am writing an app that will use many tables, and I have been told that using stored procs in the app is not the way to go, that it is too slow.
It has been suggested that I use TSQL. I have only used stored procs until now. In what way is using TSQL different, and how can I get up to speed? In fact, is this the way to go for faster data access, or are there other methods?
TSQL is the Microsoft and Sybase SQL dialect, so if you use SQL Server your stored procedures are already written in TSQL.
In most cases, properly written stored procedures outperform ad-hoc queries.
On the other hand, coding procedures requires more skill and debugging is quite a tedious process. It's really hard to give advice without seeing your procedures, but there are some common things that slow down SPs.
The execution plan is generated on the first run, but sometimes the optimal plan depends on the input parameters. See here for more details.
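One common mitigation is to request a fresh plan on every call; a sketch with made-up object names:
CREATE PROCEDURE dbo.GetOrdersByCustomer @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE); -- build the plan for the actual parameter value on each call
END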
Another thing that prevents generating an optimal plan is using conditional branches in the SP body.
For example,
IF (something)
BEGIN
    SELECT ... FROM table1
    INNER JOIN table2 ...
    .....
END
ELSE
BEGIN
    SELECT ... FROM table2
    INNER JOIN table3 ...
    .....
END
should be refactored to
IF (something)
    EXEC proc1; -- create a new SP and move the code from the IF branch there
ELSE
    EXEC proc2; -- create a new SP and move the code from the ELSE branch there
The traditional argument for using SPs was always that they're compiled so they run faster. That hasn't been true for many years but nor is it true, in general, that SPs run slower.
If the reference is to development time rather than runtime then there may be some truth to this but, considering your skills, it may be that learning a new approach would slow you down more than using SPs.
If your system uses Object-Relational Mapping (ORM) then SPs will probably get in your way but then you wouldn't really be using T-SQL either - it'll be done for you.
Stored procs are written in T-SQL, so it's a bit odd that someone would make such a statement.
Daniel is right: ORM is a good option. If you're doing any data-intensive operations (such as parsing content), I'd look at the database first and foremost. You might want to do some reading on SPs, as speed isn't everything... there are other benefits. This was one hit from Google, but you can do more research yourself:
http://msdn.microsoft.com/en-us/library/ms973918.aspx
I'm trying to figure out how I can parallelize some procedural code to create records in a table.
Here's the situation (sorry I can't provide much in the way of actual code):
I have to predict when a vehicle service will be needed, based upon the previous service date, the current mileage, the planned daily mileage and the difference in mileage between each service.
All in all, it's very procedural: for each vehicle I need to take into account its history, its current servicing state, the daily mileage (which can change based on ranges defined in the mileage plan), and the sequence of servicing.
Currently I'm calculating all of this in PHP, and it takes about 20 seconds for 100 vehicles. Since this may in future be expanded to several thousand, 20 seconds is far too long.
So I decided to try to do it in a CLR stored procedure. At first I thought I'd try multithreading it; however, I quickly found out that's not easy to do in the TSQL host. I was advised to let TSQL work out the parallelization itself, yet I have no idea how. If it weren't for the fact that the code needs to create records, I could define it as a function and do:
SELECT dbo.PredictServices([FleetID]) FROM Vehicles
And TSQL should figure out it can parallelize that, but I know of no alternative for procedures.
Is there anything I can do to parallelize this?
The recommendation you received is correct. You simply don't have the .NET Framework's parallelism facilities available in a CLR stored procedure. Also, please keep in mind that the niche for CLR stored procedures is rather narrow, and they can adversely impact SQL Server's performance and scalability.
If I understand the task correctly, you need to compute a function PredictServices for some records and store the results back to the database. In that case CLR stored procedures could be an option, provided PredictServices is just data access or a straightforward transformation of the data. Best practice is to create a WWF (Windows Workflow Foundation) service to perform the computations and call it from PHP; in a Workflow Service you can implement any solution, including one involving parallelism.
Background:
I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP.
I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is as frequently as I reasonably can, but I will be pragmatic: as a benchmark let's say we are hoping for every 15 minutes) and feed it into a data warehouse.
How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP side is active 24x5.5.
Best Solution?
From what I can piece together the most practical solution is as follows:
Create a TRIGGER to write all DML activity to a rotating CSV log file
Perform whatever transformations are required
Use the native DW data pump tool to efficiently pump the transformed CSV into the DW
Why this approach?
TRIGGERs allow selected tables to be targeted rather than being system-wide, the output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable
CSV is easy and fast to transform
It is easy to pump the CSV into the DW
Alternatives considered ....
Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html); a rough config sketch follows this list. The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the administration easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log)
Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... the problem is the OLTP schema would need to be tweaked to support this, and that has many negative side effects
Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner
Using the WAL
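For reference, the native-logging alternative mentioned above would look roughly like this in postgresql.conf on 8.3 (values are illustrative only):
log_destination = 'csvlog'    # CSV-format server log, easier to parse
logging_collector = on
log_statement = 'mod'         # log only data-modifying statements
log_rotation_age = 15min      # rotate roughly in line with the 15-minute target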
Has anyone done this before? Want to share your thoughts?
Assuming that your tables of interest have (or can be augmented with) a unique, indexed, sequential key, then you will get much, much better value out of simply issuing SELECT ... FROM table ... WHERE key > :last_max_key with output to a file, where last_max_key is the last key value from the last extraction (0 if it is the first extraction). This incremental, decoupled approach avoids introducing trigger latency in the insertion datapath (be it custom triggers or modified Slony), and depending on your setup could scale better with the number of CPUs etc. (However, if you also have to track UPDATEs, and the sequential key was added by you, then your UPDATE statements should SET the key column to NULL so it gets a new value and gets picked up by the next extraction. You would not be able to track DELETEs without a trigger.) Is this what you had in mind when you mentioned Talend?
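As a sketch, one extraction pass could look like this (table, key and file path are placeholders; the literal 12345 stands for last_max_key, which your extraction job tracks between runs):
COPY (
    SELECT *
    FROM interesting_table
    WHERE seq_key > 12345
    ORDER BY seq_key
) TO '/tmp/extract_batch.csv' WITH CSV;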
I would not use the logging facility unless you cannot implement the solution above; logging most likely involves locking overhead to ensure log lines are written sequentially and do not overlap/overwrite each other when multiple backends write to the log (check the Postgres source.) The locking overhead may not be catastrophic, but you can do without it if you can use the incremental SELECT alternative. Moreover, statement logging would drown out any useful WARNING or ERROR messages, and the parsing itself will not be instantaneous.
Unless you are willing to parse WALs (including transaction state tracking, and being ready to rewrite the code every time you upgrade Postgres) I would not necessarily use the WALs either -- that is, unless you have the extra hardware available, in which case you could ship WALs to another machine for extraction (on the second machine you can use triggers shamelessly -- or even statement logging -- since whatever happens there does not affect INSERT/UPDATE/DELETE performance on the primary machine). Note that performance-wise (on the primary machine), unless you can write the logs to a SAN, you'd get a comparable performance hit (in terms of thrashing the filesystem cache, mostly) from shipping WALs to a different machine as from running the incremental SELECT.
If you can maintain a 'checksum table' that contains only the ids and a 'checksum', you can quickly select not only the new records but also the changed and deleted records.
The checksum could be any checksum function you like, such as crc32.
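A hypothetical sketch of that idea (all names are invented; md5() is used here because core PostgreSQL has no built-in crc32, and the compared columns are assumed to be text):
CREATE TABLE source_checksums (
    id       integer PRIMARY KEY,
    checksum text NOT NULL
);

-- new or changed rows since the checksums were last refreshed:
SELECT s.id
FROM source_table s
LEFT JOIN source_checksums c ON c.id = s.id
WHERE c.id IS NULL
   OR c.checksum <> md5(s.col_a || '|' || coalesce(s.col_b, ''));

-- deleted rows:
SELECT c.id
FROM source_checksums c
LEFT JOIN source_table s ON s.id = c.id
WHERE s.id IS NULL;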
The new ON CONFLICT clause in PostgreSQL has changed the way I do many updates. I pull the new data (based on a row_update_timestamp) into a temp table, then in one SQL statement INSERT into the target table with ON CONFLICT ... DO UPDATE. If your target table is partitioned then you need to jump through a couple of hoops (i.e. hit the partition table directly). The ETL can happen as you load the temp table (most likely) or in the ON CONFLICT SQL (if trivial). Compared to other "UPSERT" approaches (update, then insert if zero rows, etc.) this shows a huge speed improvement. In our particular DW environment we don't need/want to accommodate DELETEs. Check out the ON CONFLICT docs - it gives Oracle's MERGE a run for its money!
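A sketch of that load pattern with illustrative names (it assumes a unique constraint on dw_foo(id)):
CREATE TEMPORARY TABLE staging_foo (LIKE dw_foo INCLUDING DEFAULTS);

-- ...load staging_foo with rows newer than the last row_update_timestamp...

INSERT INTO dw_foo (id, field_a, row_update_timestamp)
SELECT id, field_a, row_update_timestamp
FROM staging_foo
ON CONFLICT (id) DO UPDATE
SET field_a = EXCLUDED.field_a,
    row_update_timestamp = EXCLUDED.row_update_timestamp;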