I'm using pgAdmin 1.18.
I have a copy of a table that I truncated. I simply want to import a CSV file that essentially looks like this:
20151228,12/28/2015,53,12,December,4,2015,1,Monday
20140828,08/28/2014,35,8,August,3,2014,4,Thursday
20150208,02/08/2015,6,2,February,1,2015,7,Sunday
I'm getting an error:
extra data after last expected column
CONTEXT: COPY tblname, line 1: "20151228,12/28/2015,53,12,December,4,2015,1,Monday"
This is the first line it's trying to import. Any suggestions on how to fix this?
From the comments it appears you were using the wrong function in pgAdmin.
If you have an existing table, which you have truncated and wish to load from a CSV file, select the table and then use Tools => Import, select the file and choose format 'CSV'.
There are other options in the import dialog to allow you to skip specified columns, use different quoting options, and specify how to deal with NULL values.
One tip that always trips me up: make sure there is no blank line at the end of the file.
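If you would rather run the load from psql instead of the import dialog, a minimal sketch of the equivalent client-side command is below (the file path is illustrative, and tblname is taken from the error context; there is no HEADER option because the sample file starts directly with data rows):
\copy tblname FROM '/path/to/your_file.csv' WITH (FORMAT csv)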
I'm writing a script to import a .csv file, change some data in it then export it back out.
The problem is the last column in the file isn't getting imported.
I've tested and found that if I delete a column, the last column does get imported. If I add another column, neither of the last two columns get imported.
It seems as though there is a limit of 10 columns, but that would be mind-boggling and would almost certainly come up when Googling. So... what's going on here?
Here's the command I'm using. I've also tried specifying the headers, and that gave the same behavior.
$customers = Import-Csv .\Raw_Data\CUSTOMER.csv #-Header customer_identifier,opening_dt,closing_dt,customer_status,birth_dt,bank_emp_status,cust_seg_code,cust_coh_catgry_code,cust_coh_code,branch_identifier,as_of_date
The header for the file looks like this:
customer_identifier,opening_dt,closing_dt,customer_status,birth_dt,bank_emp_status,cust_seg_code,cust_coh_catgry_code,cust_coh_code,branch_identifier,as_of_date
I've also checked the newlines on the file and they are all CRLF so that shouldn't be an issue either.
UPDATE: In the SQL Workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to run the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVardef today=#"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process but as far as I can tell, I am using this according to the SQL Workbench documentation.
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of workbench that did not support this functionality. Now that I upgraded to build 119 this is working. I'm having other issues but that's a different story....
I am trying to import a large CSV file (~4.5 GB) into Postgres but it keeps throwing the following error:
ERROR: unquoted carriage return found in data
HINT: Use quoted CSV field to represent carriage return.
CONTEXT: COPY abc_complete_file_261115, line 9041959
I opened my CSV in SublimeText2 and jumped to line 9041959, found the URN for the record I needed, then loaded the file in Vim and went to that line. I have hidden characters enabled in Vim (using :set list), so I would expect to see a carriage return ^M somewhere on the line within the data, but the only one I could find is at the end of the line, as expected.
After an entire day of research and having gotten no further with this issue I ended up deleting the record on line 9041959 - this didn't fix the issue.
Then I figured maybe it's something strange going on between records - so I ended up deleting about 5 records on either side of the line that threw the error - but it gave the same error again. (I'll worry about preserving the data later on; right now I'm just trying to import the file so that I can have a look in Postgres.) I made sure that I had saved the changes to the CSV file before rerunning my query, but it just gave the same error.
I feel like I am missing something really really obvious - does anyone have any ideas what might be causing the issue?
I'm using a Mac running El Capitan.
Many thanks
Update 27/11/15
Hi @JakubKania. Sorry for not putting up the query - the reason I didn't was because I am 99.9% sure that the issue is with the csv file rather than the query. A generalised version is:
CREATE TABLE large_file_test(
urn VARCHAR,
forename CHAR(32),
surname CHAR(32));
COPY large_file_test FROM '/Users/Shared/largefile1.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile2.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile3.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
ALTER TABLE large_file_test
ADD CONSTRAINT large_urn
PRIMARY KEY (urn);
ANALYZE large_file_test;
So I am actually trying to load 3 separate files into the Table that I created. The issue is that there seems to be hidden characters in part 1 that are preventing it from importing into Postgres. I haven't tried anything with part 2 or 3 yet.
The easiest way I found to solve this on a Mac running El Capitan is:
1) Open the file with Sublime Text.
2) From the menu, reopen the file with UTF-8 encoding.
3) From the menu, save the file with UTF-8 encoding.
Sublime then normalizes all of the line endings (EOL).
This is likely caused by Windows line endings. Try installing the dos2unix utility and running dos2unix <filename> before executing the COPY command.
In my case, I noticed that the csv file had an extra blank line at the end. After removing it, the file imported properly.
I created a separate folder and gave read/write permissions to "everybody", and that solved this problem as well as the access-denied error when trying to import the file through pgAdmin 4. It seems to have been the "cure all".
Now I just need to find out which user to give these permissions to instead of "everybody".
Using PostgreSQL v 9.6 on Windows 10.
I'm trying to import a .txt file into PostgreSQL. The txt file has 6 columns:
Laboratory_Name Laboratory_ID Facility ZIP_Code City State
And 213 rows.
I'm trying to use \copy to put the contents of this file into a table called doe2 in PostgreSQL using this command:
\copy DOE2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' (DELIMITER(' '))
It gives me this error:
missing data for column "facility"
I've looked all around for what to do when encountering this error and nothing has helped. Has anyone else encountered this?
Three possible causes:
One or more lines of your file have only 4 or fewer space characters (your delimiter).
One or more space characters have been escaped (inadvertently), maybe with a backslash at the end of an unquoted value. For the (default) text format you are using, the manual explains:
Backslash characters (\) can be used in the COPY data to quote data characters that might otherwise be taken as row or column delimiters.
Output from COPY TO or pg_dump would not exhibit any of these faults when reading from a table with matching layout. But maybe your file has been edited or is from a different, faulty source?
You are not using the file you think you are using. The \copy meta-command of the psql command-line interface is a wrapper for COPY and reads files local to the client. If your file lives on the server, use the SQL command COPY instead.
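To illustrate the difference, here is a sketch of both variants against the table from the question; the server-side path is hypothetical and would have to exist on, and be readable by, the database server itself:
-- Client-side: psql reads the file from the machine where psql runs.
\copy doe2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' WITH (FORMAT text, DELIMITER ' ')
-- Server-side: the PostgreSQL backend reads the file from its own filesystem.
COPY doe2 FROM '/path/on/the/server/DOE_inventory5.txt' WITH (FORMAT text, DELIMITER ' ');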
Check the file carefully. In my case, a blank line at the end of the file caused the ERROR: missing data for column. I deleted it, and it worked fine.
Printing the file with line endings made visible might reveal something interesting:
cat -e $filename
I had a similar error. Check the version of pg_dump that was used to export the data and the version of the database you want to insert it into, and make sure they are the same. Also, if the COPY export fails, export the data as INSERT statements instead.
Trying to save a text file from the desktop:
copy my_notes(notes) from '/root/desktop/test_note.txt'
It shows this error:
ERROR: extra data after last expected column
CONTEXT: COPY my_notes, line 3: " cher"
I'm a newbie with PostgreSQL.
COPY expects tab-separated data, with newlines separating rows.
It isn't suitable for just loading a text file into a field. To do that, I suggest using a simple script, say Python with psycopg2.
To read plain text from a file, there is also pg_read_file() - reserved for superusers, though, because of potential security implications. Details in this related answer:
Read data from a text file inside a trigger
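As a rough sketch of that route (assuming superuser rights and that the file has been placed somewhere the server is allowed to read, e.g. inside the data directory; the path below is illustrative):
-- Read the whole file on the server into a single text value and insert it as one row.
INSERT INTO my_notes (notes)
SELECT pg_read_file('test_note.txt');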