CONNECT TO server USER myuser USING mypass;
LOAD CLIENT FROM "Text_File.TXT" OF DEL
MODIFIED BY CHARDEL0x22 COLDEL0x09 KEEPBLANKS USEDEFAULTS
TIMESTAMPFORMAT="YYYY-MM-DD HH:MM:SS.UUUUUUUUU" MESSAGES "Log_Text_File.TXT"
INSERT INTO SCHEMA.Table NONRECOVERABLE;
This is my current command (above); the single text file generated is below:
"int" "AND 8 / 2010.
" "int" "int" "string" "2014-03-12 14:52:29" "name" "int"
The error I'm getting is:
SQL3116W The field value in row "F8-8245" and column "6" is missing, but the
target column is not nullable.
SQL3185W The previous error occurred while processing data from row "F8-8245"
of the input file.
I'm using a text qualifier of " (double quote).
It's a tab-delimited file.
I'm not sure why the load is failing, as the 6th column is populated.
Any help would be greatly appreciated.
If your input data file can contain a newline character inside a character-string value, then add DELPRIORITYCHAR to the modified-by list, like this:
MODIFIED BY CHARDEL0x22 COLDEL0x09 DELPRIORITYCHAR
Then retry and check the output. Remember to erase your message file before each load (or archive it) so you only see fresh messages.
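For clarity, the full command with that modifier added would look like this (same options and file names as in the question):

CONNECT TO server USER myuser USING mypass;
LOAD CLIENT FROM "Text_File.TXT" OF DEL
MODIFIED BY CHARDEL0x22 COLDEL0x09 DELPRIORITYCHAR KEEPBLANKS USEDEFAULTS
TIMESTAMPFORMAT="YYYY-MM-DD HH:MM:SS.UUUUUUUUU" MESSAGES "Log_Text_File.TXT"
INSERT INTO SCHEMA.Table NONRECOVERABLE;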
Related
I receive a .csv export every 10 minutes that I'd like to import into a PostgreSQL server. Working with a test CSV, I got everything to work, but I didn't notice that my actual CSV file has a forced ":" at the end of each column header (but not on the first header, for some reason); it's built into the back end of the exporter, so I can't get it removed (I already asked the company). So I added the ":"s to my test CSV as shown in the link.
My INSERT INTO statements no longer work and give me syntax errors. I'm trying to add the rows using the following code:
print("Reading file contents and copying into table...")
with open('C:\\Users\\admin\\Desktop\\test2.csv') as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
columns = next(readCSV) #skips the header row
query = 'insert into test({0}) values ({1})'
query = query.format(','.join(columns), ','.join('?' * len(columns)))
for data in readCSV:
cursor.execute(query, data)
con.commit()
This results in a '42601' syntax error near ":" in the second column header.
The results are the same when I explicitly list the column headers and the ? placeholders in the INSERT INTO statement.
What is the syntax to get the script to accept ":" in column headers? If there's no way, is there a way to scan through the headers and remove the ":" at the end of each?
Because : is a special character, if your column is named year: in the DB, you must double-quote its name --> select "year:" from test;
You are getting a PG error because you are referencing the unquoted column names (insert into test({0})), so add double quotes there.
query = 'insert into test("year:","day:", "etc:") values (...)'
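A minimal sketch of applying that quoting programmatically in the loop above (a hypothetical adaptation; note that the psycopg2 driver expects %s placeholders, while the original snippet used ?):

# "columns" is the header list read from the CSV in the snippet above.
quoted_columns = ','.join('"{0}"'.format(c) for c in columns)  # "year:","day:",...
placeholders = ','.join(['%s'] * len(columns))
query = 'insert into test({0}) values ({1})'.format(quoted_columns, placeholders)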
That being said, it might be simpler to remove every occurrence of : in your CSV's first line.
Much appreciated, JGH and Adrian. I went with your suggestion to remove every occurrence of : by adding the following line after the first columns = ... statement:
columns = [column.strip(':') for column in columns]  # strips ':' from both ends of each header
It worked well.
I am loading a CSV file into a Postgresql database using the Camel SQL component.
The original CSV file header names (columns) are mixed case with spaces, e.g. "Cost Price".
The SQL component refers to an SQL insert statement in a properties file,
e.g.
insert into upload_data(year,month,cost)values(:#year,:#month,:#Cost Price)
I get this error:
Caused by: [org.springframework.jdbc.BadSqlGrammarException - PreparedStatementCallback; bad SQL grammar []; nested exception is org.postgresql.util.PSQLException: ERROR: syntax error at or near ":" at position...
(the position refers to the : before #Cost Price)
If I change the parameter name to cost_price and modify the CSV file, the file is uploaded correctly without error.
I have tried surrounding the parameter with ", ', \", and {} in the insert statement.
Is it possible to use mixed case with spaces in named parameters (using escapes or something), or do I need to intervene and modify the CSV header?
The SQL component does not support this; in fact, it's bad practice to use spaces in header names. So after you read the CSV file, you can change the header names before calling the SQL component.
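Here is a minimal sketch of that rename as a pre-processing step before the route consumes the file (Python shown for brevity; the file name and the lowercase/underscore convention are assumptions, and the same rewrite could live in a Camel Processor instead):

import csv

# Rewrite the header row so "Cost Price" becomes cost_price, etc.
with open('upload_data.csv', newline='') as f:
    rows = list(csv.reader(f))
rows[0] = [name.strip().lower().replace(' ', '_') for name in rows[0]]
with open('upload_data.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)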
It's a Django app in which I'm loading a CSV; the table gets created OK, but copying the CSV to PostgreSQL fails with this error:
psycopg2.DataError: extra data after last expected column
CONTEXT: COPY csvfails, line 1:
Questions already referred to:
"extra data after last expected column" while trying to import a csv file into postgresql
I have tested multiple times with CSVs of different column counts, and I am now sure the column count is not the issue; it's the content of the CSV file, because when I change the content and upload the same CSV, the table gets created and I don't get this error. The content of the CSV file that fails is as seen below. Kindly advise what in this content prompts psycopg2/psql/postgres to give this error.
No, as suggested in the comment, I can't paste even a single row of the CSV file; the imgur image add-in won't allow it. Not sure what to do now.
See below the screenshots from the psql CLI: the table had been created with the correct column count, and I still got the error.
EDIT_1: Further, while saving on my Ubuntu machine using LibreOffice, I unchecked Separator Options >> Separated By >> TAB and SEMICOLON. The CSV was then saved with only Separator Options >> COMMA.
The Python code which might be the culprit is:
with open(path_csv_for_psql, 'r') as f:
    next(f)  # Skip the header row.
    csv_up_cursor.copy_from(f, str(new_table_name), sep=',')
conn.commit()
I thought I read somewhere that the separator parameter passed to copy_from (sep=',' in my code) could be the issue?
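One detail worth checking: copy_from uses PostgreSQL's plain text format, so quoted fields that contain commas are not parsed as CSV and will trigger exactly this "extra data after last expected column" error. A minimal sketch of the same load through copy_expert with real CSV parsing (reusing the names from the snippet above):

with open(path_csv_for_psql, 'r') as f:
    # HEADER true makes COPY skip the header row itself, so no next(f) needed.
    csv_up_cursor.copy_expert(
        'COPY "{0}" FROM STDIN WITH (FORMAT csv, HEADER true)'.format(new_table_name),
        f)
conn.commit()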
I want to write into the table "genome1", which has one column "shingle" (VARCHAR(64)), all the data from a text file.
The data is written in the form: FA, GL, YH, LO, GH, KL, HF...
When executing the COPY command:
COPY genome1(shingle) FROM '/path/to/file/genome1.2.txt' (DELIMITER (','));
There is an error:
ERROR: extra data after last expected column
Changing the command to:
COPY genome1(shingle) FROM '/path/to/file/genome1.2.txt' CSV HEADER DELIMITER ',';
gives nothing (COPY 0). Please help, I do not understand what the problem is.
I figured out how to correct the problem. Postgres would not accept the copy because there were no line breaks; the input data had to be entered one value per line, in this format:
AA
DD
FF
...
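A minimal sketch of that conversion from the comma-separated line to one value per line (the output file name is hypothetical):

# Turn "FA, GL, YH, ..." into one value per line, which is what COPY
# expects for a single-column table.
with open('/path/to/file/genome1.2.txt') as src:
    values = [v.strip() for v in src.read().split(',')]
with open('/path/to/file/genome1.clean.txt', 'w') as dst:
    dst.write('\n'.join(v for v in values if v) + '\n')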
For some reason PostgreSQL won't read my CSV files in the form:
2017-10-20T21:20:00,124.502,CAM[CR][LF]
2017-10-20T21:21:00,124.765,CAM[CR][LF]
(that's an ISO-compliant timestamp, right?) into a table defined as:
CREATE TABLE ext_bsrn.spa_temp (
    spadate TIMESTAMP WITHOUT TIME ZONE,
    spa_azimuth NUMERIC,
    station_id CHAR(3)
) WITH (oids = false);
It returns this error:
ERROR: invalid input syntax for type timestamp: "?2015-01-01T00:00:00"
CONTEXT: COPY spa_temp, line 1, column spadate: "?2015-01-01T00:00:00"
I don't understand why the '?' is shown inside the quotes in the error message; there are no characters before 2015 in my file (I checked it in Notepad++ with non-printing characters shown).
I tried both Windows (CRLF) and Unix (LF) line endings, but neither makes any difference.
I also tried separate date & time columns, but then it just throws a similar error for the date field: "invalid input syntax for type date".
Does line 1 mean the first line or the second line (if there is a Line 0)?
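For what it's worth, a stray ? shown right before the first field in a COPY error message is very often a UTF-8 byte-order mark (BOM) at the start of the file, which editors like Notepad++ do not display as a visible character. A minimal sketch that rewrites the file without it (file names are hypothetical):

# Python's utf-8-sig codec transparently drops a leading BOM if present.
with open('spa_temp.csv', encoding='utf-8-sig') as src, \
        open('spa_temp_nobom.csv', 'w', encoding='utf-8', newline='') as dst:
    dst.write(src.read())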