How can I ignore the id column when importing into MySQL via phpMyAdmin?

I need to export data from a table in database A, then import it into an identically-structured table in database B. This needs to be done via phpMyAdmin. Here's the problem: no matter what format I choose for the export (CSV or SQL), ALL columns (including the auto-incremented ID field) get exported. Because there's already data in the table in database B, I can't import the ID field with the new records - I need it to import the records and assign new auto-incremented values to them. What settings do I need to use in either the export (to be able to choose which columns to export) or the import (to tell it to ignore the ID column in the file)?
Or should I just export as CSV, then open in Excel and delete the ID column? Is there a way to tell phpMyAdmin that it should generate new auto-incremented IDs for the records being imported, without it telling me that there's an incorrect column count in the import file?
EDIT: to clarify, I'm exporting only data, not structure.

Excel is an option for removing the column, and probably the fastest at this point.
But if these databases are on the same server and you have access, you can just do an INSERT INTO databaseB.table (column_list) SELECT column_list FROM databaseA.table.
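A minimal sketch of that statement, assuming both databases live on the same MySQL server and using illustrative table/column names:

INSERT INTO databaseB.mytable (name, email)
SELECT name, email
FROM databaseA.mytable;

Because the auto-increment id column is left out of both column lists, MySQL assigns fresh values in databaseB.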
You can also run a SELECT statement that returns just the desired columns and then export the results. That export-results link should be available in recent versions of phpMyAdmin.

It is several years since the original question, but it still came out top in a Google search, so I'll comment on what worked for me:
If I delete the Id column in my CSV and then try to import, I get the 'Invalid column count in CSV input on line 1.' error.
But if I keep the Id column and change all of the Id values to NULL in Excel (just typing NULL into the cell), then when I import, the auto-increment fills in the new records with consecutive numbers (presumably starting with the highest existing record Id + 1).
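For example, the first rows of a hypothetical two-column CSV (Id, Name) would look like this after the edit:

NULL,John Doe
NULL,Jane Roe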
I'm using phpMyAdmin 4.7.0.

Another way is:
Go to the import menu for that table
Add the CSV file (without an ID column)
Pick CSV in the Format section
In the section where you pick the format of the file (separated by which character, which character encloses fields, etc.) there's a field called Column names. Type the names of the columns you ARE including, separated by commas (example below).
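For instance, if the table has id, name, and email (illustrative names) and your CSV carries only the last two, the Column names field would contain:

name,email

phpMyAdmin then maps each CSV column to the named table column and lets id auto-increment.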

Related

How can I export images from a PostgreSQL database?

I have a simple data table.
Schemas>resources>tables>image_data contains the columns
image_id (integer), raw_data (bytea)
How can I export this data to files named based on the image_id? Once I figure that out I want to use the image_id to name the files based on a reference in another table.
So far I've got this:
SELECT image_id, encode(raw_data, 'hex')
FROM resources.image_data;
It generates a CSV with all the images as hex, but I don't know what to do with it.
Data sample:
"image_id";"encode"
166;"89504e470d0a1a0a0000000d49484452000001e0000003160806000000298 ..."

How to assign foreign keys in Access within a table imported from Excel

I will use Access database instead of Excel. But I need to import data from one huge Excel sheet into several pre-prepared normalized tables in Access. In the core Access table I have mainly the foreign keys from other tables (of course some other fields are texts or dates).
How should I perform the import in the easiest way? I cannot perform the import directly, because there is NOT, for example, a "United States" string in the Access field 'Country'; there must be foreign key no. 84 from the table tblCountries. I'm thinking about using the DLOOKUP function in Excel to replace the strings with FKs... Do you know any simpler method?
Thank you, Martin
You don't mention how you will get the Excel data into several Access tables, so I will assume you will import the entire Excel file into ONE large table and then break out the data from there. I assume the imported data may NOT match your existing Access keys (i.e. misspellings, new values, etc.), so you will need to locate those records and make corrections. This will involve creating a number of 'unmatched queries', then a number of 'update queries'; finally you can use append queries to pull data from your import table into its final resting place. Using your example: you have imported 'Country = United States', but you need to relate that value to key "84".
Let’s set some examples:
Assume you imported your Excel data into one large Access table. Also assume your import has three fields you need to get keys for.
You already have several control tables in Access similar to the following:
a. tblRegion: contains RegionCode, RegionName (i.e. 1=Pacific, 2=North America, 3=Asia, …)
b. tblCountry: contains CountryCode, CountryName, Region (i.e. 84 | United States | 2)
c. tblProductType: contains ProdCode, ProductType (i.e. VEH | vehicles; ELE | electrical; etc.)
d. Assume your imported data landed in a table called tblFromExcel, with fields Country, Region, ProductType, and SomeData.
Here are the steps I would take:
If your Excel file does not already have columns to hold the key values (i.e. 84), add them before the import. Or after the import, modify the table to add the columns.
Create an 'Unmatched query' for each key field you need to relate (use 'Query Wizard' > 'Find Unmatched Query Wizard'). This will show you all imported data that does not have a match in your key table, so you can correct those values. i.e.:
SELECT tblFromExcel.Country, tblFromExcel.Region, tblFromExcel.ProductType, tblFromExcel.SomeData
FROM tblFromExcel LEFT JOIN tblCountry ON tblFromExcel.[Country] = tblCountry.[CountryName]
WHERE (((tblCountry.CountryName) Is Null));
Update the FK with matching values:
UPDATE tblCountry
INNER JOIN tblFromExcel ON tblCountry.CountryName = tblFromExcel.Country
SET tblFromExcel.CountryFK = [CountryCode];
Repeat the above unmatched/update steps for all other key fields.
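Once the FK columns are all populated, the final append is straightforward. A hedged sketch, assuming a destination table named tblCoreData (an illustrative name) whose columns line up with the import table:

INSERT INTO tblCoreData ( CountryFK, ProductTypeFK, SomeData )
SELECT CountryFK, ProductTypeFK, SomeData
FROM tblFromExcel;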

Redshift - Adding a column, do we have to change our previous CSVs to include it?

I currently have a Redshift table in our database that has 10 columns, and I want to add another. It's trivial to do this with an ALTER TABLE.
My question - when I do this, will all my old CSV files fail to insert into Redshift (via COPY from S3), given they won't have this new column?
I was hoping the columns would just be NULL vs. it failing on import, but I haven't seen any documentation on this.
Ideally I wish I could specify the actual column name in the header row of the CSV, but I haven't seen if that is possible anywhere.
The FILLRECORD option in the COPY command does exactly that: 'Allows data files to be loaded when contiguous columns are missing at the end of some of the records.'
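A hedged sketch of a COPY using it, with DELIMITER for a simple comma-separated file (the table name, bucket path, and IAM role are placeholders):

COPY my_table
FROM 's3://my-bucket/old-files/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER ','
FILLRECORD;

Old files that stop at the tenth column load anyway, with the missing trailing column filled with NULLs (or zero-length strings, depending on the data type).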

DB2 import CSV with null date

I run this
db2 "IMPORT FROM C:\my.csv OF DEL MODIFIED BY COLDEL, LOBSINFILE DATEFORMAT=\"D/MM/YYYY\" SKIPCOUNT 1 REPLACE INTO scratch.table_name"
However some of my rows have an empty date field, so I get this error:
SQL3191N which begins with """" does not match the user specified DATEFORMAT, TIMEFORMAT, or TIMESTAMPFORMAT. The row will be rejected.
My CSV file looks like this
"XX","25/10/1985"
"YY",""
"ZZ","25/10/1985"
I realise that if I inserted a character instead of a blank string I could use the NULL INDICATORS parameter.
However I do not have access to change the CSV file. Is there a way to import a blank string as a null?
This is an error in your input file. DB2 differentiates between a NULL and a zero-length string. If you need to have NULL dates, a NULL would have no quotes at all, like:
"AA",
If you can't change the format of the input file, you have 2 options:
Insert your data into a staging table (changing the DATE column to a char) and then use SQL to populate the ultimate target table
Write a program to parse ("fix") the input file and then import the resulting fixed data. You can often do this without having to write the entire file out to disk – your program could write to a named pipe, and the DB2 IMPORT (and LOAD) utility is capable of reading from named pipes.
I'm not aware of anything. Yes, ideally that date field should be null.
Probably the best thing to do would be to load the data into a scratch/temp table where that isn't a date column - just leave it as character data (it looks like you're already using a scratch table anyway). It should then be trivial to use a CASE statement to transform the value into a null date when it is blank, while doing your INSERT into the real table.
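A hedged sketch of that INSERT, assuming the CSV was first imported into a staging table scratch.staging whose date column (date_txt, an illustrative name) was declared as plain VARCHAR so the blank strings load without error:

INSERT INTO scratch.table_name (name, dt)
SELECT name,
       CASE
         WHEN date_txt = '' THEN NULL
         ELSE DATE(TO_DATE(date_txt, 'DD/MM/YYYY'))
       END
FROM scratch.staging;

TO_DATE is DB2's synonym for TIMESTAMP_FORMAT, so the 'DD/MM/YYYY' pattern matches the 25/10/1985 values in the sample.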

How to import text file to table with primary key as auto-increment

I have some bulk data in a text file that I need to import into a MySQL table. The table consists of two fields ...
ID (integer with auto-increment)
Name (varchar)
The text file is a large collection of names with one name per line ...
(example)
John Doe
Alex Smith
Bob Denver
I know how to import a text file via phpMyAdmin; however, as far as I understand, I need to import data that has the same number of fields as the target table. Is there a way to import the data from my text file into one field and have the ID field auto-increment automatically?
Thank you in advance for any help.
Another method I use that does not require reordering a table's fields (assuming the auto-increment field is the first column) is as follows:
1) Open/import the text file in Excel (or a similar program).
2) Insert a column before the first column.
3) Set the first cell in this new column with a zero or some other placeholder.
4) Close the file (keeping it in its original text/tab/csv/etc. format).
5) Open the file in a text editor.
6) Delete the placeholder value you entered into the first cell.
7) Close and save the file.
Now you will have a file containing each row of your original file preceded by an empty column, which will be converted into the next relevant auto-increment value upon import via phpMyAdmin.
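Using the question's sample names, the edited file's rows would each start with an empty first column:

,John Doe
,Alex Smith
,Bob Denver

The leading comma is the empty ID column, which phpMyAdmin fills with the next auto-increment value on import.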
Here is the simplest method to date:
Make sure your file does NOT have a header line with the column names. If it does, remove it.
In phpMyAdmin, as usual: go to the Import tab for your table and select your file. Select CSV as the format. Then -- and this is the important part -- in the Format-Specific Options, in the Column names field, fill in the name of the column the data is for, in your case "Name".
This will import the names and auto-increment the id column. You're done!
Tested fine with phpMyAdmin 4.2.7.1.
Not correct -- on import with LOAD DATA INFILE, just create the auto-increment column as the LAST column/field... As it's parsing, if your table is defined with 30 columns but the text file only has 1 (or anything less), it will import the leading columns first, in direct sequence, so ensure your 'terminated by' delimiter is correct between fields (for any future imports). Again, put the auto-increment AFTER the number of columns you know are being imported.
create table YourMySQLTable
( FullName varchar(30) not null,
  SomeOtherFlds varchar(20) not null,
  IDKey int not null AUTO_INCREMENT,
  PRIMARY KEY (IDKey)
);
Notice the IDKey is auto-increment in the last field of the table... regardless of your INPUT stream text file which may have less columns than your final table will actually hold.
Then, import the data via...
LOAD DATA
INFILE 'C:\\SomePath\\WhereTextFileIs\\ActualFile.txt'
INTO TABLE YourMySQLTable
COLUMNS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
The above example is based on a comma-separated list with quotes around each field, such as "myfield1","anotherField","LastField". Also, the line terminator is the CR/LF that typically delimits each row of a text file.
In the sample of your text file having the full name as the single column, all the data would get loaded into "YourMySQLTable", into the FullName column. Since the IDKey is at the END of the list, it will still be assigned auto-increment values starting from 1 and will not have any conflict with the columns from the inbound text.
I just used a TAB as the first field in my text file, then imported it as usual. I got a warning about the ID field but the field incremented as expected...
I just tried this:
In your phpMyAdmin table, match the number of fields you have in your CSV.
Perform the import of the CSV data into your table.
Go to the [Structure] tab and add a new field [At beginning of table] (I assume you want the id field there)
Fill in the [name] attribute as "id",
[length] to "5"
[Index] to "Primary"
Tick the A_I (Auto Increment)
Hit [Go] button
The table should have updated with the id field at the front of all your data with auto-incrementing.
At least this way you don't have to worry about matching fields, etc.
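For reference, a hedged SQL equivalent of those Structure-tab clicks (mytable is a placeholder; the length 5 mirrors the step above):

ALTER TABLE mytable
  ADD COLUMN id INT(5) NOT NULL AUTO_INCREMENT FIRST,
  ADD PRIMARY KEY (id);

MySQL backfills the new id column with consecutive auto-increment values for the rows already imported.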
I've solved that problem by simply adding the column names under Format-Specific Options, without the ID column, because the ID column is auto-increment. In my case it works fine without changing anything in the CSV file; my CSV file has only data inside, no column headers.
If the table columns do not match, I usually add "bogus" fields with empty data where the real data would've been. So, if my table needs "id", "name", "surname", "address", "email" and I only have "id", "name", "surname", I change my CSV file to have all five columns but leave the fields that I do not have data for blank.
This results in a CSV file looking like this:
1,John,Doe,,
2,Jane,Doe,,
I find it simpler than the other methods.