On a Mac, I have a txt file with two columns, one of which corresponds to an autoincrement column in an SQLite table:
, "mytext1"
, "mytext2"
, "mytext3"
When I try to import this file, I get a datatype mismatch error:
.separator ","
.import mytextfile.txt mytable
How should the txt file be structured so that it uses the autoincrement?
Also, how do I enter in text that will have line breaks? For example:
"this is a description of the code below.
The text might have some line breaks and indents. Here's
the related code sample:
for (int i = 0; i < 5; i++){
//do some stuff here
}
this is a little more follow up text."
I need the above inserted into one row. Is there anything special I need to do to the formatting?
For one particular table, I want each row to come from its own file and to import them that way. I'm guessing it's a matter of creating some sort of batch file that runs multiple imports.
Edit
That's exactly the syntax I posted, except with a comma instead of a tab. The missing line break in my post made that less apparent. Anyway, it still gives the mismatch error.
I was looking into the same problem. It looks like I've found an answer to the first part of your question, about importing a file into a table with an ID field.
So yes: create a temporary table without the ID, import your file into it, then do an insert ... select to copy its data into your target table. (Remove the leading commas from mytextfile.txt.)
-- assuming your table is called Strings and
-- was created like this:
-- create table Strings( ID integer primary key, Code text )
create table StringsImport( Code text );
.import mytextfile.txt StringsImport
insert into Strings ( Code ) select * from StringsImport;
drop table StringsImport;
I don't know what to do about the newlines. I've read some mentions that importing in CSV mode (.mode csv) will do the trick, but when I tried it, it did not seem to work.
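If it helps, a minimal sketch of the CSV-mode variant, assuming the multi-line values in the file are wrapped in double quotes (standard CSV quoting) and a reasonably recent sqlite3 shell, which does parse newlines inside quoted fields:
.mode csv
.import mytextfile.csv StringsImport
insert into Strings ( Code ) select * from StringsImport;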
In case anyone is still having issues with this, you can download an SQLite manager.
There are several that allow importing from a CSV file.
Here is one, but a Google search should reveal a few: http://sqlitemanager.en.softonic.com/
I'm in the process of moving data containing long text fields with various punctuation marks (they are actually articles on coding) into SQLite and I've been experimenting with various text imports.
I created a database in SQLite with a table:
CREATE TABLE test (id INTEGER PRIMARY KEY AUTOINCREMENT, textfield TEXT);
then do a backup with .dump.
I then add the text below the "CREATE TABLE" line manually in the resulting .dump file as such:
INSERT INTO test (id, textfield) VALUES (1,'Isn''t it great to have
really long text with various punctuation marks and
newlines');
Change any single quotes to two single quotes (change ' to ''). Note that an index number needs to be added manually (I'm sure there is an AWK/SED command to do it automatically). Change the auto increment number in the "sequence" line in the dump file to one above the last index number you added (I don't have SQLite in front of me to give you the exact line, but it should be obvious).
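For what it's worth, a rough sketch of automating both edits with sed and awk, assuming a hypothetical articles.txt where each line should become one row (multi-line text would need more care); NR supplies the incrementing index and \047 is an escaped single quote:
sed "s/'/''/g" articles.txt | awk '{print "INSERT INTO test VALUES (" NR ",\047" $0 "\047);"}' >> dumpfile.sql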
With the new file, I can then restore it into the database.
Related
The file is not importing after I created the table. The first line of the statement names the table (COPY), the second line gives the path of the file (FROM), and with the WITH clause I am not entirely sure whether some earlier line is needed for it to work, since it is not being highlighted in pink. The import does not go through either with pgAdmin's built-in tool or with the SQL syntax; neither produces the expected output. Here are some screenshots:
So I made another table, this time with a single column, and made sure the column name matched in both the table and the file, and it worked. The earlier attempt had several columns whose names were spelled differently in the table and in the file:
You can try this sequentially...
1. First create the .csv file; the column order in the .csv file is what matters most.
2. Consider the employee_info.csv file below,
and consider your database table employee_info, which contains (emp_id [numeric], emp_name [character], emp_sal [numeric], emp_loc [character]).
Then execute the query below:
a. copy employee_info (emp_id, emp_name, emp_sal, emp_loc) from 'C:\Users\Zbook\Desktop\employee_info.csv' DELIMITER ',' CSV;
Note: ensure that no row in the .csv file has a null (missing) value, like below...
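For illustration, a hypothetical employee_info.csv matching that table might look like this (the values are made up):
101,Alice,50000,Pune
102,Bob,62000,Mumbai
103,Carol,48000,Delhi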
I need to copy a text file which has a confusing delimiter. I believe the delimiter is a space. However, some of the column values are empty, and I cannot tell which column they belong to, which makes it harder to load the data into the database since the spacing doesn't indicate anything. So when I try to COPY, the mapping is not right and I get ERROR: extra data after last expected column.
I have tried changing the delimiter to a comma and so on, but I still get the same error. The command below works when I load some dummy data with a proper delimiter.
COPY usm00070219(HEADREC_ID,YEAR,MONTH,DAY,HOUR,RELTIME,NUMLEV,P_SRC,NP_SRC,LAT,LON) FROM 'D:\....\USM00070219-data.txt' DELIMITER ' ';
This is example data:
It should have 11 columns, but the first row only has 10 values and the empty column cannot be identified. The spacing is not helpful at all!
Is there any way I can separate the columns by character width instead of a delimiter, and force the data to be split at the given positions?
COPY is not made to handle fixed-width text files. I can think of two options:
Load the file as it is into a table with a single text column using COPY. Then use regexp_split_to_array (or fixed-position substr calls) to split it into its components and insert these into another table; see the sketch after these options.
You can use file_fdw to create a foreign table with a single text column like above and operate on that. That saves loading the file into the database.
There is a foreign data wrapper for fixed-width text files that you can try.
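A minimal sketch of the first option; since the columns are fixed-width, slicing by character position with substr() is usually more reliable than splitting on whitespace. The staging table name, target column list, and offsets below are all assumptions you would adapt to your layout:
CREATE TABLE usm_raw (raw_line text);
-- \copy usm_raw FROM 'D:\....\USM00070219-data.txt'
INSERT INTO usm00070219 (headrec_id, year, month, day)
SELECT substr(raw_line, 1, 12),
       nullif(trim(substr(raw_line, 14, 4)), '')::int,
       nullif(trim(substr(raw_line, 19, 2)), '')::int,
       nullif(trim(substr(raw_line, 22, 2)), '')::int
FROM usm_raw;
The remaining columns follow the same pattern; an empty slice simply becomes NULL.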
I have the following table in redshift:
Column | Type
-------+-------------
id     | integer
value  | varchar(255)
I'm trying to copy data in (using Data Pipeline's RedshiftCopyActivity), and the data has the line 1,maybe as the entry being added, but I get back the error 1214: Delimiter not found, with a raw_field_data value of maybe. Is there something I'm missing in the copy parameters?
The entire csv is three lines that goes:
1,maybe
2,no
3,yes
You may want to take a look at the similar question Redshift COPY command delimiter not found.
Make sure your RedshiftCopyActivity configuration includes FORMAT AS CSV from https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-format.html#copy-csv.
Be sure your input data has your configured delimiter between every field, even in the case of nulls.
Be sure you do not have any trailing blank lines.
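For reference, a rough sketch of what the underlying COPY should look like once CSV format is set; the table, bucket, and role names here are placeholders:
COPY mytable
FROM 's3://your-bucket/path/file.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole'
FORMAT AS CSV;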
You can run the following SQL (from the linked question) to see more specific details of what row is causing the problem.
SELECT le.starttime,
d.query,
d.line_number,
d.colname,
d.value,
le.raw_line,
le.err_reason
FROM stl_loaderror_detail d
JOIN stl_load_errors le
ON d.query = le.query
ORDER BY le.starttime DESC;
I need to load an entire file (containing only ASCII text) into the database (DB2 Express edition). The table has only two columns (ID, TEXT). The ID column is the PK, with auto-generated data, whereas TEXT is CLOB(5); I have no idea about the parameter 5, it was entered by default in Data Studio.
Now I need to use the load utility to save a text file (containing 5 MB of data) into a single row, namely in the TEXT column. I do not want the text to be broken into different rows.
Thanks in advance for your answers!
Firstly, you may want to redefine your table: CLOB(5) means you expect 5 bytes in the column, which is hardly enough for a 5 MB file. After that you can use the DB2 IMPORT or LOAD commands with the lobsinfile modifier.
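For example, a sketch of a roomier definition, using hypothetical names chosen to match the IMPORT example further down:
CREATE TABLE yourtable (
  id             INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
  yourclobcolumn CLOB(10M),
  PRIMARY KEY (id)
);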
Create a text file and place LOB Location Specifiers (LLS) for each file you want to import, one per line.
An LLS is a way to tell IMPORT where to find LOB data. It has this format: <file path>[.<offset>.<length>/], e.g. /tmp/lobsource.dta.0.100/ to indicate that the first 100 bytes of the file /tmp/lobsource.dta should be loaded into the particular LOB column. Notice also the trailing slash. If you want to import the entire file, skip the offset and length part. LLSes are placed in the input file instead of the actual data for each row and LOB column.
So, for example:
echo "/home/you/yourfile.txt" > /tmp/import.dat
Since you said the IDs will be auto-generated, you don't need to include them in the input file; just don't forget to use the appropriate command modifier: identitymissing or generatedmissing, depending on how the ID column is defined.
Now you can connect to the database and run the IMPORT command, e.g.
db2 "import from /tmp/import.dat of del
modified by lobsinfile identitymissing
method p (1)
insert into yourtable (yourclobcolumn)"
I split the command onto multiple lines for readability, but you should type it on a single line.
method p (1) means parse the input file and read the column in position 1.
More info in the manual
I have some bulk data in a text file that I need to import into a MySQL table. The table consists of two fields ..
ID (integer with auto-increment)
Name (varchar)
The text file is a large collection of names with one name per line ...
(example)
John Doe
Alex Smith
Bob Denver
I know how to import a text file via phpMyAdmin; however, as far as I understand, I need to import data that has the same number of fields as the target table. Is there a way to import the data from my text file into one field and have the ID field auto-increment automatically?
Thank you in advance for any help.
Another method I use that does not require reordering a table's fields (assuming the auto-increment field is the first column) is as follows:
1) Open/import the text file in Excel (or a similar program).
2) Insert a column before the first column.
3) Set the first cell in this new column with a zero or some other placeholder.
4) Close the file (keeping it in its original text/tab/csv/etc. format).
5) Open the file in a text editor.
6) Delete the placeholder value you entered into the first cell.
7) Close and save the file.
Now you will have a file containing each row of your original file preceded by an empty column, which will be converted into the next relevant auto-increment value upon import via phpMyAdmin.
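Assuming comma separation, the edited file would end up looking something like this, with the empty leading field standing in for the auto-increment column:
,John Doe
,Alex Smith
,Bob Denver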
Here is the simplest method to date:
Make sure your file does NOT have a header line with the column names. If it does, remove it.
In phpMyAdmin, as usual: go to the Import tab for your table and select your file. Select CSV as the format. Then -- and this is the important part -- in the
Format-Specific Options:
...in the Column names field, fill in the name of the column the data is for, in your case "Name".
This will import the names and auto-increment the id column. You're done!
Tested fine with phpMyAdmin 4.2.7.1.
You don't need anything special on import with LOAD DATA INFILE; just create the auto-increment column as the LAST column/field. As it parses, if your table is defined with 30 columns but the text file only has 1 (or anything less), it will fill the leading columns first, in direct sequence, so make sure your field delimiter is correct (for any future imports too). Again, put the auto-increment column AFTER the columns you know are being imported.
create table YourMySQLTable
( FullName varchar(30) not null ,
SomeOtherFlds varchar(20) not null,
IDKey int not null AUTO_INCREMENT,
PRIMARY KEY (IDKey)
);
Notice that IDKey, the auto-increment column, is the last field of the table... regardless of your input text file, which may have fewer columns than your final table actually holds.
Then, import the data via...
LOAD DATA
INFILE 'C:\SomePath\WhereTextFileIs\ActualFile.txt'
INTO TABLE YourMySQLTable
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n' ;
The above example is based on a comma-separated list with quotes around each field, such as
"myfield1","anotherField","LastField". The line terminator is the CR/LF that typical text files use to delimit each row.
In the case of your text file, which has the full name as its single column, all the data would get loaded into YourMySQLTable's FullName column. Since IDKey is at the END of the list, it will still be assigned auto-increment values from 1 upward and will not conflict with the columns from the inbound text.
I just used a TAB as the first field in my text file, then imported it as usual. I got a warning about the ID field but the field incremented as expected...
I just tried this:
In your phpMyAdmin table, match the number of fields you have in your CSV.
Perform the import of the CSV data into your table.
Go to the [Structure] tab and add a new field [At beginning of table] (I assume you want the id field there)
Fill in the [name] attribute as "id",
[length] to "5"
[Index] to "Primary"
Tick the A_I (Auto Increment)
Hit [Go] button
The table should now have the id field at the front of all your data, auto-incrementing.
At least this way you don't have to worry about matching fields, etc.
I solved that problem by simply adding the column names under Format-Specific Options, without the ID column, because the ID column is auto-increment. In my case it works fine without changing anything in the CSV file. My CSV file contains only data, no column headers.
If the table columns do not match, I usually add "bogus" fields with empty data where the real data would have been. So if my table needs "id", "name", "surname", "address", "email" and I only have "id", "name", "surname", I change my CSV file to have "id", "name", "surname", "address", "email" but leave the fields I have no data for blank.
This results in a CSV file looking like this:
1,John,Doe,,
2,Jane,Doe,,
I find it simpler than the other methods.