Import multiple records from a CSV file into an sqlite3 database - iPhone

I am working on an app in which a huge amount of data needs to be stored in a database and displayed later. The problem is how to insert that much data into the database. I use a .csv file to import the data and store it in a table. Everything is working fine, except that only one row gets inserted into the database. Does anyone have any idea how to insert multiple records into the database?
For information, I am using the Terminal for sqlite3.

You can use .import file table to load a large file into a table from the terminal; your file and table need to be organized so that they have the same columns.
You can also do a search-and-replace in a text editor to turn the file into SQL statements, then run them with the .read filename command.
Type .help in the sqlite3 console for a list of the available commands.
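For example, a minimal sqlite3 shell session (the table name mytable and file name data.csv are placeholders; the table must already exist with matching columns):
.mode csv
.import data.csv mytable
.mode csv switches the separator handling to comma-separated values before the import; the shell's default separator is the pipe character.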

Related

Question/Resolved - "extra data after last expected column" error when trying to import a CSV file into PostgreSQL

Just posting this question and the solution, since it took me forever to figure this out.
Using a CSV file, I was trying to import data into PostgreSQL with pgAdmin. I kept running into the same "extra data after last expected column" error.
The solution that worked for me (instead of using the Import module): COPY tablename (columns) FROM 'file location.csv' CSV HEADER
Since some of the data included commas within a cell, each embedded comma was being counted as the start of a new column.
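A concrete sketch, run from psql (table and column names are hypothetical):
COPY users (id, name, address) FROM '/path/to/file.csv' CSV HEADER;
The CSV option makes COPY honor double-quoted fields, so commas inside a quoted cell no longer start a new column, and HEADER skips the first line of the file.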

Import a CSV file into a new table in MySQL Workbench dynamically

I can import CSV data into an existing table in MySQL Workbench using LOAD DATA INFILE, but what if I have a file with 25 columns? It becomes a pain to create the structure for such a table before importing.
Is there a way to import CSV files without creating the structure first, like PROC IMPORT in SAS?
Yes, there is. Try the new 6.3 release (currently in RC, soon to be GA), which comes with a new table data import/export feature that supports CSV and JSON data. It creates the table on the fly during import.
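For the existing-table route mentioned in the question, a typical LOAD DATA statement looks like this (file path and table name are placeholders; LOCAL reads the file from the client machine):
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
IGNORE 1 LINES skips the header row.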

Dump subset of records in an OpenEdge database table in the ".d" file format

I am looking for the easiest way to manually dump a subset of the records in an OpenEdge database table in the Progress ".d" file format.
The best way I can imagine is creating an extra test database with a schema identical to the source database's, copying the subset of records over with FOR EACH and BUFFER-COPY statements, and then exporting the data from the test database using the Dump Data and Definitions > Table Contents (.d file)... menu option.
That seems like a lot of trouble. If you can identify the subset of records well enough to do the BUFFER-COPY, then you should also be able to:
/* redirect EXPORT output to a file instead of the screen */
OUTPUT TO VALUE( "table.d" ).
FOR EACH table NO-LOCK WHERE someCondition:
    /* write the record in the standard .d export format */
    EXPORT table.
END.
OUTPUT CLOSE.
Which is, essentially, what the dictionary "dump data" .d file is, minus a few lines of administrivia at the bottom that can safely be omitted for most purposes.
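For completeness, a matching sketch for loading such a file back in uses INPUT FROM and IMPORT (same placeholder table and file names as above):
INPUT FROM VALUE( "table.d" ).
REPEAT:
    CREATE table.
    IMPORT table.
END.
INPUT CLOSE.
The REPEAT loop ends automatically when IMPORT reaches the end of the file.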

How should I open a PostgreSQL dump file and add actual data to it?

I have a pretty basic database. I need to load a good-sized list of users into the db. I have the dump file, need to convert it to a .pg file, and then somehow load this data into it.
The data I need to add is in CSV format.
I assume you already have a .pg file, i.e. a database dump in the "custom" format.
PostgreSQL can load data in CSV format using the COPY statement, so the absolute simplest thing to do is just add your data to the database that way.
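For example, from psql you can use the client-side variant, which reads the file from your own machine instead of the server (table and column names are hypothetical):
\copy users (name, email) FROM 'data.csv' CSV HEADER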
If you really must edit your dump, and the file is in the "custom" format, there is unfortunately no way to edit it by hand. However, you can use pg_restore to create a plain SQL script from the custom-format dump and edit that instead; pg_restore with no -d argument generates an SQL script for insertion.
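For example (file names are placeholders):
pg_restore -f dump.sql your_dump.pg
-f writes the generated SQL script to dump.sql; you can then edit it and replay it with psql.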
As suggested by Daniel, the simplest solution is to keep your data in CSV format and just import it into Postgres as is.
If you're trying to merge this CSV data into a third-party Postgres dump file, then you'll need to first convert the data into SQL INSERT statements.
One possible Unix solution (\047 is awk's octal escape for a single quote; this assumes exactly three columns and no embedded commas or quotes in the data):
awk -F, '{printf "INSERT INTO my_tab VALUES (\047%s\047,\047%s\047,\047%s\047);\n",$1,$2,$3}' data.csv
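Given an input line such as 1,Alice,Boston, that produces:
INSERT INTO my_tab VALUES ('1','Alice','Boston');
which can then be piped straight into psql.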

How can I ignore the id column when importing into MySQL via phpMyAdmin?

I need to export data from a table in database A, then import it into an identically structured table in database B. This needs to be done via phpMyAdmin. Here's the problem: no matter what format I choose for the export (CSV or SQL), ALL columns (including the auto-incremented ID field) get exported. Because there's already data in the table in database B, I can't import the ID field with the new records; I need it to import the records and assign new auto-incremented values to them. What settings do I need to use in the export (to choose which columns to export) or in the import (to tell it to ignore the ID column in the file)?
Or should I just export as CSV, then open it in Excel and delete the ID column? Is there a way to tell phpMyAdmin to generate new auto-incremented IDs for the imported records, without it complaining about an incorrect column count in the import file?
EDIT: to clarify, I'm exporting only data, not structure.
Excel is an option for removing the column, and probably the fastest at this point.
But if these databases are on the same server and you have access, you can just do an INSERT INTO databaseB.table (column_list) SELECT column_list FROM databaseA.table, as in the sketch below.
You can also run the SELECT statement to get just the desired columns and then export the results. That export option should be available in recent versions of phpMyAdmin.
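A concrete sketch (table and column names are hypothetical; the id column is left out of both lists so the target table assigns fresh auto-increment values):
INSERT INTO databaseB.mytable (name, email, created_at)
SELECT name, email, created_at FROM databaseA.mytable;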
It is several years since the original question, but this still came out top in a Google search, so I'll share what worked for me:
If I delete the Id column in my CSV and then try to import it, I get the 'Invalid column count in CSV input on line 1.' error.
But if I keep the Id column and change all of the Id values to NULL in Excel (just typing NULL into the cell), then when I import it, the auto-increment fills in the new records with consecutive numbers (presumably starting at the highest existing record Id + 1).
I'm using phpMyAdmin 4.7.0.
Another way:
Go to the import menu for that table.
Add the CSV file (without an ID column).
Pick CSV in the Format section.
In the section where you pick the format of the file (which character separates fields, which character encloses them, etc.) there is a field called Column names. Type the names of the columns you ARE including, separated by commas.