I have a large CSV file which contains over 30 million rows. I need to load this file on a daily basis and identify which of the rows have changed. Unfortunately there is no unique key field, but four of the fields combined can be used to make one. Once I have identified the changed rows I will then want to export the data. I have tried a traditional SQL Server solution, but the performance is so slow it's not going to work. I have therefore been looking at MongoDB, which managed to import the file in about 20 minutes (which is fine). However, I don't have any experience with MongoDB, and more importantly I don't know its best practices. So, my idea is the following:
1. As a one-off, import the data into a collection using mongoimport.
2. Copy all of the unique _ids generated by Mongo into a separate collection.
3. Import the new data into the existing collection, upserting on the four key fields, which should create a new _id for each new and changed row.
4. Compare the 'copy' to the new collection to list out all the changed rows.
5. Export the changed data.
This seems workable to me (a rough sketch of the compare step follows below), but I am hoping there is a much better way to tackle this problem.
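For illustration only, the compare in step 4 might look roughly like the PyMongo sketch below; the database and collection names (csv_diff, data, id_snapshot) are assumptions, not anything from the setup described above.

from pymongo import MongoClient

client = MongoClient()                      # default localhost:27017
db = client["csv_diff"]                     # hypothetical database name

# _ids saved off after the first import (step 2)
known_ids = set(doc["_id"] for doc in db.id_snapshot.find({}, {"_id": 1}))

# Any document whose _id is not in the snapshot was created by the upsert run,
# i.e. a new or changed row under the scheme described above.
changed = [doc for doc in db.data.find() if doc["_id"] not in known_ids]
for doc in changed:
    print(doc)                              # or write out to a file for the export step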
Use Unix sort and diff.
Sort the files on disk:
sort -o new_file.csv -t ',' big_file.csv
sort -o old_file.csv -t ',' yesterday.csv
diff new_file.csv old_file.csv
The commands may need some tweaking.
You can also use MySQL to import the file via LOAD DATA INFILE
(http://dev.mysql.com/doc/refman/5.1/en/load-data.html)
and then create a KEY (or primary key) on the 4 fields.
Then load yesterday's file into a different table and use two SQL statements to compare the tables...
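As a rough illustration of the "two SQL statements" idea, sketched here from Python with PyMySQL: one query finds rows whose four-field key is new, the other finds rows whose key exists but whose other columns changed. The table names (today, yesterday), key columns (k1..k4), value columns (v1, v2) and the connection details are all assumptions.

import pymysql

conn = pymysql.connect(host="localhost", user="loader", password="REPLACE_ME", database="csv_diff")
with conn.cursor() as cur:
    # 1) rows whose four-field key did not exist in yesterday's load
    cur.execute("""
        SELECT t.* FROM today t
        LEFT JOIN yesterday y USING (k1, k2, k3, k4)
        WHERE y.k1 IS NULL
    """)
    new_rows = cur.fetchall()

    # 2) rows whose key exists in both loads but whose other columns differ
    cur.execute("""
        SELECT t.* FROM today t
        JOIN yesterday y USING (k1, k2, k3, k4)
        WHERE (t.v1, t.v2) <> (y.v1, y.v2)
    """)
    changed_rows = cur.fetchall()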
But, diff will work best!
-daniel
Related
I need to write CSV data into MongoDB (version 4.4.4). Right now I'm using MongoEngine as a data layer for my application.
Each CSV has at least 4 million records and 8 columns.
What is the fastest way to bulk insert (if the data doesn't exist yet) or update (if the data is already in the collection)?
Right now I'm doing the following:
for inf in list:  # list: the DailyReport objects built from the CSV
    # .first() returns None when there is no match (.get() raises DoesNotExist instead)
    daily_report = DailyReport.objects(company_code=inf.company_code, date=inf.date).first()
    if daily_report is not None:
        inf.id = daily_report.id  # reuse the existing _id so save() updates instead of inserting
    inf.save()
The list is a list of DailyReports built from the CSV data.
The _id is auto-generated. However, for business purposes the primary keys are the variables company_code (StringField) and date (DateTimeField).
The DailyReport class has a unique compound index made of the following fields: company_code and date.
The previous code traverses the list and, for each DailyReport, looks for an existing DailyReport in the database with the same company_code and date. If one exists, the id from the DailyReport in the database is assigned to the DailyReport built from the CSV data. With the id assigned to the object, the object is saved using the Object.save() method from MongoEngine.
The insert operation is being done one object at a time and is incredibly slow.
Any ideas on how to make this process faster?
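One common way to speed this up (not from the original post) is to skip the per-document MongoEngine round trips and send batched upserts keyed on (company_code, date) through PyMongo's bulk_write. A minimal sketch, in which the database/collection names, the non-key field names, and the batch size are all assumptions:

from pymongo import MongoClient, UpdateOne

collection = MongoClient()["mydb"]["daily_report"]    # hypothetical database/collection names

ops = []
for inf in reports:   # the DailyReport-like objects built from the CSV (the question's "list")
    ops.append(UpdateOne(
        {"company_code": inf.company_code, "date": inf.date},
        {"$set": {"value_a": inf.value_a, "value_b": inf.value_b}},  # hypothetical data columns
        upsert=True,
    ))
    if len(ops) >= 10_000:                             # flush in batches to bound memory
        collection.bulk_write(ops, ordered=False)
        ops = []
if ops:
    collection.bulk_write(ops, ordered=False)

With the unique compound index on (company_code, date) already in place, the filter in each UpdateOne is an index lookup, and ordered=False lets the server keep applying the batch without stopping at the first error.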
I have a PostgreSQL database. I had to extend an existing, big table with a few more columns.
Now I need to fill those columns. I thought I could create a CSV file (out of Excel/Calc) which contains the IDs / primary keys of the existing rows and the data for the new, empty fields. Is it possible to do so? If so, how?
I remember doing exactly this pretty easily using Microsoft SQL Server Management Studio, but for PostgreSQL I am using pgAdmin (though I am of course willing to switch tools if it would be helpful). I tried using the import function of pgAdmin, which uses the COPY function of PostgreSQL, but it seems COPY isn't suitable as it can only create whole new rows.
Edit: I guess I could write a script which loads the CSV and iterates over the rows, using UPDATE. But I don't want to reinvent the wheel.
Edit 2: I've found this question here on SO which provides an answer using a temp table. I guess I will use it, although it's more of a workaround than an actual solution.
PostgreSQL can import data directly from CSV files with COPY statements; however, as you stated, this only works for new rows.
Instead of creating a CSV file you could just generate the necessary SQL UPDATE statements.
Suppose this is the CSV file:
PK;ExtraCol1;ExtraCol2
1;"foo",42
4;"bar",21
Then just produce the following
UPDATE my_table SET ExtraCol1 = 'foo', ExtraCol2 = 42 WHERE PK = 1;
UPDATE my_table SET ExtraCol1 = 'bar', ExtraCol2 = 21 WHERE PK = 4;
You seem to work under Windows, so I don't really know how to accomplish this there (probably with PowerShell), but under Unix you could generate the SQL from a CSV easily with tools like awk or sed. An editor with regular expression support would probably suffice too.
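If a Unix toolchain isn't handy, a few lines of Python do the same job. This sketch assumes the semicolon-delimited layout shown above, a file named extra_columns.csv, and the my_table name from the statements above; the naive quoting is only appropriate for data you trust.

import csv

with open("extra_columns.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter=";"):
        print("UPDATE my_table SET ExtraCol1 = '{}', ExtraCol2 = {} WHERE PK = {};".format(
            row["ExtraCol1"].replace("'", "''"),   # escape single quotes for the SQL literal
            row["ExtraCol2"],
            row["PK"],
        ))

Redirect the output to a .sql file and run it with psql, or pipe it straight into psql.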
I am looking for the easiest way to manually dump a subset of records in an OpenEdge database table in the Progress ".d" file format.
The best way I can imagine is creating an extra test database with the identical schema as the source database, and then copying the subset of records over to the test database using FOR EACH and BUFFER-COPY statements. Then just export the data from the test database using the Dump Data and Definitions > Table Contents (.d file)... menu option.
That seems like a lot of trouble. If you can identify the subset of records in order to do the BUFFER-COPY, then you should also be able to:
OUTPUT TO VALUE( "table.d" ).
FOR EACH table NO-LOCK WHERE someCondition:
    EXPORT table.
END.
OUTPUT CLOSE.
Which is, essentially, what the dictionary "dump data" .d file is, less a few lines of administrivia at the bottom that can be safely omitted for most purposes.
I currently have a Redshift table in our database that has 10 columns, and I want to add another. It's trivial to do this with an ALTER TABLE.
My question: when I do this, will all my old CSV files fail to insert into Redshift (via COPY from S3), given they won't have this new column?
I was hoping the columns would just be NULL vs. it failing on import, but I haven't seen any documentation on this.
Ideally I wish I could specify the actual column name in the header row of the CSV, but I haven't seen if that is possible anywhere.
The FILLRECORD option of the COPY command does that: 'Allows data files to be loaded when contiguous columns are missing at the end of some of the records.'
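For illustration, here is what issuing such a COPY from Python via psycopg2 might look like (Redshift speaks the PostgreSQL wire protocol); the table name, S3 path, IAM role, and connection details are all placeholders.

import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="loader",
    password="REPLACE_ME",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY my_table
        FROM 's3://my-bucket/daily/file.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftCopyRole'
        CSV
        FILLRECORD;
    """)

Trailing columns missing from the old files are then filled in, which should give the NULL-filling behaviour the question is hoping for.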
I'm writing scripts to export some tables to flat files every day. I'm looking at the BCP utility, but I'm not sure it has the kind of features I really need.
For example, I need to output the fields out of order. That is, the 15th field in the MSSQL database should be the 2nd field in the flat file, etc.
More importantly, some of the fields need to be altered. For example, if a certain field is null or contains some special values, I need to replace them with codes.
Is BCP the right tool for this? My gut tells me to do this in Perl instead.
You can write a stored procedure and do all data transformations there.
Then feed this stored procedure to bcp.
It will surely be faster than Perl.
SSIS is fast too; it could be an option if the transformations are very complex.
You can use a query to order and format the columns directly with bcp. From the bcp Utility documentation:
"query" is a Transact-SQL query that returns a result set.
Example:
bcp "SELECT Name FROM AdventureWorks.Sales.Currency" queryout Currency.Name.dat -T -c