I have a CSV file test.csv with a lot of unique records of type (text,int,int), where the text field is no more than 70 characters.
When executing the following statements the speed is usually around 80 MiB/s:
sqlite3 db 'create table test(a text,b int,c int,primary key(a,b,c)) without rowid'
pv test.csv | sqlite3 -init <(echo -e '.mode csv\n.import /dev/stdin test') db
But when executing the following statement again the speed is usually under 100 KiB/s and a lot of UNIQUE constraint failed errors are printed to stderr:
pv test.csv | sqlite3 -init <(echo -e '.mode csv\n.import /dev/stdin test') db
It seems to me that in both cases SQLite needs to check the same constraint, so how come the case where nothing is written to disk is much, much slower than the case where everything is written to disk?
And the most important question: how can I make the second import faster? This database needs to be updated daily, and the records are mostly new, but some of them already exist in the database. This makes the import too slow to be practical.
BTW this is the same with an SSD and an HDD, though the SSD is a bit faster.
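One approach that may be worth testing (a sketch, not a benchmarked answer): import the daily file into an unconstrained temporary staging table first, then merge it into test with a single INSERT OR IGNORE, so duplicates are rejected in one pass over the index instead of row by row while the CSV is parsed. The staging table and the script file name below are made up for illustration:
.mode csv
create temp table staging(a text,b int,c int);
.import /dev/stdin staging
insert or ignore into test select a,b,c from staging;
Saved as import_staged.sql, it would be run the same way as above:
pv test.csv | sqlite3 -init import_staged.sql db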
Related
We tested inserting a 40 MB JSON document into a jsonb column and it took 1.3 GB of memory, while inserting the same 40 MB JSON as text took only 500 MB.
I understand that a JSON tree takes a lot of memory, but why is the multiplier so enormous?
And are there alternatives? Can I prepare the jsonb structure up front, so that the ::jsonb cast is not needed? That cast is where the memory consumption comes from.
Some code fragments to help explaining the idea:
create table my_test (data jsonb, data2 text);
insert_jsonb.sql contains
insert into my_test (data) values ('{.......40mb }'::jsonb);
insert_text.sql contains
insert into my_test (data2) values ('{.......40mb }'::text);
then run it via psql:
psql -f insert_jsonb.sql # takes 1.3 GB of memory
psql -f insert_text.sql  # takes 500 MB of memory
I'm doing a bulk insert from a giant CSV file that needs to be turned into both relational data and a JSONB object at the same time in one table. The problem is that each row needs either an insert or an update, and if it's an update, the JSON object has to be appended to the existing column value. The current setup uses individual INSERT/UPDATE calls and is, of course, horribly slow.
Example Import Command I'm Running:
INSERT INTO "trade" ("id_asset", "trade_data", "year", "month") VALUES ('1925ad09-51e9-4de4-a506-9bccb7361297', '{"28":{"open":2.89,"high":2.89,"low":2.89,"close":2.89}}', 2017, 3) ON CONFLICT ("year", "month", "id_asset") DO
UPDATE SET "trade_data" = "trade"."trade_data" || '{"28":{"open":2.89,"high":2.89,"low":2.89,"close":2.89}}' WHERE "trade"."id_asset" = '1925ad09-51e9-4de4-a506-9bccb7361297' AND "trade"."year" = 2017 AND "trade"."month" = 3;
I've tried wrapping my script in a BEGIN and COMMIT, but it didn't improve performance at all, and I tried a few configuration changes, but they didn't seem to help either.
\set autocommit off;
set schema 'market';
\set fsync off;
\set full_page_writes off;
SET synchronous_commit TO off;
\i prices.sql
This whole thing is extremely slow, and I'm not sure how to rewrite the query without my program loading a huge amount of data into RAM just to emit one giant INSERT/UPDATE command for Postgres, since the related data could be a million rows away or in another file altogether, and the JSON has to be generated without losing the JSON data that's already in the database.
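One way to avoid the per-row statements (a sketch under two assumptions: trade_data is jsonb, and the input has at most one row per (year, month, id_asset); the staging table and file names are invented) is to COPY everything into a staging table and run a single set-based upsert:
-- staging table with the same shape as the target (names are illustrative)
CREATE UNLOGGED TABLE trade_staging (LIKE trade INCLUDING DEFAULTS);

\copy trade_staging (id_asset, trade_data, year, month) FROM 'prices.csv' WITH (FORMAT csv)

-- one set-based upsert instead of millions of single-row INSERT/UPDATE calls
INSERT INTO trade (id_asset, trade_data, year, month)
SELECT id_asset, trade_data, year, month
FROM   trade_staging
ON CONFLICT (year, month, id_asset) DO UPDATE
SET    trade_data = trade.trade_data || EXCLUDED.trade_data;

DROP TABLE trade_staging;
If the file can contain several rows for the same key, aggregate the JSON per key in the SELECT first; otherwise ON CONFLICT will refuse to update the same row twice.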
Simply moving my large SQL file onto the Postgres server with scp and re-running the commands inside psql there made the migration much faster.
I am using psql with a PostgreSQL database and the following copy command:
\COPY isa (np1, np2, sentence) FROM 'c:\Downloads\isa.txt' WITH DELIMITER '|'
I get:
ERROR: extra data after last expected column
How can I skip the lines with errors?
You cannot skip the errors without skipping the whole command, up to and including Postgres 14. There is currently no more sophisticated error handling.
\copy is just a wrapper around SQL COPY that channels results through psql. The manual for COPY:
COPY stops operation at the first error. This should not lead to problems in the event of a COPY TO, but the target table will already have received earlier rows in a COPY FROM. These rows will not be visible or accessible, but they still occupy disk space. This might amount to a considerable amount of wasted disk space if the failure happened well into a large copy operation. You might wish to invoke VACUUM to recover the wasted space.
Bold emphasis mine. And:
COPY FROM will raise an error if any line of the input file contains more or fewer columns than are expected.
COPY is an extremely fast way to import / export data. Sophisticated checks and error handling would slow it down.
There was an attempt to add error logging to COPY in Postgres 9.0 but it was never committed.
Solution
Fix your input file instead.
If you have one or more additional columns in your input file and the file is otherwise consistent, you might add dummy columns to your table isa and drop those afterwards. Or (cleaner with production tables) import to a temporary staging table and INSERT selected columns (or expressions) to your target table isa from there.
Related answers with detailed instructions:
How to update selected rows with values from a CSV file in Postgres?
COPY command: copy only specific columns from csv
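For the staging-table route, a minimal sketch could look like this, assuming the input file consistently has exactly one extra column (the staging table and the name of the stray column are made up):
-- temp table with room for the stray extra field
CREATE TEMP TABLE isa_staging (np1 text, np2 text, sentence text, extra text);

\copy isa_staging FROM 'c:\Downloads\isa.txt' WITH DELIMITER '|'

-- keep only the columns you actually want
INSERT INTO isa (np1, np2, sentence)
SELECT np1, np2, sentence
FROM   isa_staging;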
It is too bad that after 25 years Postgres doesn't have an ignore-errors flag or option for the COPY command. In this era of big data you get a lot of dirty records, and it can be very costly for a project to fix every outlier.
I had to make a workaround this way:
Create a copy of the original table's structure and call it dummy_original_table;
on the dummy table, create a BEFORE INSERT trigger like this:
CREATE OR REPLACE FUNCTION on_insert_in_original_table() RETURNS trigger AS $$
DECLARE
    v_rec RECORD;
BEGIN
    -- we use the trigger to prevent 'duplicate index' errors by returning NULL on duplicates
    SELECT * FROM original_table WHERE primary_key = NEW.primary_key INTO v_rec;
    IF v_rec IS NOT NULL THEN
        RETURN NULL;
    END IF;
    BEGIN
        INSERT INTO original_table(datum, primary_key) VALUES (NEW.datum, NEW.primary_key)
        ON CONFLICT DO NOTHING;
    EXCEPTION
        WHEN OTHERS THEN
            NULL;
    END;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- the trigger itself goes on the dummy table (the trigger name is arbitrary)
CREATE TRIGGER redirect_to_original_table
    BEFORE INSERT ON dummy_original_table
    FOR EACH ROW EXECUTE PROCEDURE on_insert_in_original_table();
Run the copy into the dummy table. No records will be inserted there, but all of them will end up in original_table:
psql dbname -c "\copy dummy_original_table(datum,primary_key) FROM '/home/user/data.csv' delimiter E'\t'"
Workaround: remove the reported errant line using sed and run \copy again
Later versions of Postgres (including Postgres 13) report the line number of the error. You can then remove that line with sed and run \copy again, e.g.:
#!/bin/bash
bad_line_number=5 # assuming line 5 is the bad line
sed "${bad_line_number}d" < input.csv > filtered.csv
[per the comment from @Botond_Balázs]
Here's one solution: import the batch file one line at a time. Performance will be much slower, but it may be sufficient for your scenario:
#!/bin/bash
input_file=./my_input.csv
tmp_file=/tmp/one-line.csv
cat "$input_file" | while read -r input_line; do
  echo "$input_line" > "$tmp_file"
  psql my_database \
    -c "\
    COPY my_table \
    FROM '$tmp_file' \
    DELIMITER '|' \
    CSV;\
    "
done
Additionally, you could modify the script to capture the psql stdout/stderr and exit status, and if the exit status is non-zero, echo $input_line and the captured stdout/stderr to stdout and/or append them to a file.
Let's say I have some customer data like the following saved in a text file:
|Mr |Peter |Bradley |72 Milton Rise |Keynes |MK41 2HQ |
|Mr |Kevin |Carney |43 Glen Way |Lincoln |LI2 7RD | 786 3454
I copied the aforementioned data into my customer table using the following command:
\copy customer(title, fname, lname, addressline, town, zipcode, phone) from 'customer.txt' delimiter '|'
However, as it turns out, there are some extra space characters before and after various parts of the data. What I'd like to do is call trim() before copying the data into the table - what is the best way to achieve this?
Is there a way to call trim() on every value of every row and avoid inserting unclean data in the first place?
Thanks,
I think the best way to go about this is to add a BEFORE INSERT trigger to the table you're inserting into. This way, you can write a stored procedure that executes before every record is inserted and trims whitespace (or does any other transformations you may need) on any columns that need it. When you're done, simply remove the trigger (or leave it, which will improve data integrity if you never want that whitespace in those columns). I think explaining how to create a trigger and stored procedure in PostgreSQL is probably outside the scope of this question, but I will link to the documentation for each.
I think this is the best way because it is simpler than parsing the text file or writing shell code to do this. This kind of sanitization is the kind of thing triggers do very well and very simply; a minimal sketch follows the documentation links below.
Creating a Trigger
Creating a Trigger Function
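For the customer table above, a minimal sketch of such a trigger could look like this (the function and trigger names are made up; btrim() removes leading and trailing spaces):
CREATE OR REPLACE FUNCTION trim_customer_fields() RETURNS trigger AS $$
BEGIN
    -- trim every text column that comes from the import; adjust the list as needed
    NEW.title       := btrim(NEW.title);
    NEW.fname       := btrim(NEW.fname);
    NEW.lname       := btrim(NEW.lname);
    NEW.addressline := btrim(NEW.addressline);
    NEW.town        := btrim(NEW.town);
    NEW.zipcode     := btrim(NEW.zipcode);
    NEW.phone       := btrim(NEW.phone);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_trim_before_insert
    BEFORE INSERT ON customer
    FOR EACH ROW EXECUTE PROCEDURE trim_customer_fields();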
I have a somewhat similar use case in one of my projects. My input files:
have the number of lines in the file as the last line;
need to have a line number added to every line;
need to have a file_id added to every line.
I use the following piece of shell code:
FACT=$( dosql "TRUNCATE tab_raw RESTART IDENTITY;
COPY tab_raw(file_id,lnum,bnum,bname,a_day,a_month,a_year,a_time,etype,a_value)
FROM stdin WITH (DELIMITER '|', ENCODING 'latin1', NULL '');
$(sed -e '$d' -e '=' "$FILE"|sed -e 'N;s/\n/|/' -e 's/^/'$DSID'|/')
\.
VACUUM ANALYZE tab_raw;
SELECT count(*) FROM tab_raw;
" | sed -e 's/^[ ]*//' -e '/^$/d'
)
dosql is a shell function that executes psql with the proper connection info and runs everything that was given as an argument.
As a result of this operation, the $FACT variable holds the total count of inserted records (for error detection).
Later I do another dosql call:
dosql "SET work_mem TO '800MB';
SELECT tab_prepare($DSID);
VACUUM ANALYZE tab_raw;
SELECT tab_duplicates($DSID);
SELECT tab_dst($DSID);
SELECT tab_gaps($DSID);
SELECT tab($DSID);"
to analyze the data and move it from the auxiliary table into the final tables.
I have a really big database (running on PostgreSQL) containing a lot of tables with sophisticated relations between them (foreign keys, on delete cascade and so on).
I need to remove some data from a number of tables, but I'm not sure how much data will actually be deleted from the database due to cascading deletes.
How can I check that I'll not delete data that should not be deleted?
I have a test database - just a copy of real one where I can do what I want :)
The only idea I have is to dump the database before and after and compare the dumps. But that doesn't look convenient.
Another idea: dump the part of the database that, I think, should not be affected by my DELETE statements and compare that part before and after the data removal. But I see no simple way to do it (there are hundreds of tables and the removal involves only ~10 of them). Is there some way to do it?
Any other ideas how to solve the problem?
You can query the information_schema to draw yourself a picture of how the constraints are defined in the database. Then you'll know what is going to happen when you delete. This will be useful not only for this case, but always.
Something like (for constraints)
select table_catalog,table_schema,table_name,column_name,rc.* from
information_schema.constraint_column_usage ccu,
information_schema.referential_constraints rc
where ccu.constraint_name = rc.constraint_name
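If you specifically want to see which foreign keys will cascade on delete, a variation like this (a sketch, untested against your schema) maps each constraint to its referencing and referenced tables together with its delete rule:
SELECT tc.table_name  AS referencing_table,
       ctu.table_name AS referenced_table,
       rc.constraint_name,
       rc.delete_rule
FROM   information_schema.referential_constraints rc
JOIN   information_schema.table_constraints tc
       ON  tc.constraint_schema = rc.constraint_schema
       AND tc.constraint_name   = rc.constraint_name
       AND tc.constraint_type   = 'FOREIGN KEY'
JOIN   information_schema.constraint_table_usage ctu
       ON  ctu.constraint_schema = rc.constraint_schema
       AND ctu.constraint_name   = rc.constraint_name
WHERE  rc.delete_rule = 'CASCADE';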
Using psql, start a transaction, perform your deletes, then run whatever checking queries you can think of. You can then either rollback or commit.
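In psql that could look roughly like this (the table names and the check are placeholders):
BEGIN;

DELETE FROM some_table WHERE id = 123;  -- placeholder delete

-- run whatever checking queries you like here, e.g. row counts of tables
-- that should not have been touched
SELECT count(*) FROM some_other_table;

-- throw the changes away, or COMMIT; if everything looks right
ROLLBACK;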
If the worry is keys left dangling (i.e. pointing to a deleted record), then run the deletion on your test database and use queries to find any keys that now point to invalid targets, as in the sketch below. (While you're doing this you can also make sure the part that should be unaffected did not change.)
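For example, with a hypothetical child_table(parent_id) referencing parent_table(id), the dangling keys left after the test deletion can be found with an anti-join:
-- rows whose foreign key no longer resolves (table and column names are hypothetical)
SELECT c.*
FROM   child_table c
LEFT   JOIN parent_table p ON p.id = c.parent_id
WHERE  c.parent_id IS NOT NULL
AND    p.id IS NULL;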
A better solution would be to spend time mapping out the delete cascades so you know what to expect - knowing how your database works is pretty valuable so the effort spent on this will be useful beyond this particular deletion.
And no matter how sure you are, back the DB up before doing big changes!
Thanks for the answers!
Vinko, your answer is very useful for me and I'll study it deeper.
Actually, for my case, it was enough to compare table counts before and after the record deletion and check which tables were affected by it.
It was done with the simple commands described below:
psql -U U_NAME -h`hostname` -c '\d' | awk '{print $3}' > tables.list
for i in `cat tables.list `; do echo -n "$i: " >> tables.counts; psql -U U_NAME -h`hostname` -t -c "select count(*) from $i" >> tables.counts; done
for i in `cat tables.list `; do echo -n "$i: " >> tables.counts2; psql -U U_NAME -h`hostname` -t -c "select count(*) from $i" >> tables.counts2; done
diff tables.counts tables.counts2