ERROR: missing data for column when using \copy in psql - postgresql

I'm trying to import a .txt file into PostgreSQL. The txt file has 6 columns:
Laboratory_Name Laboratory_ID Facility ZIP_Code City State
And 213 rows.
I'm trying to use \copy to put the contents of this file into a table called doe2 in PostgreSQL using this command:
\copy DOE2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' (DELIMITER(' '))
It gives me this error:
missing data for column "facility"
I've looked all around for what to do when encountering this error and nothing has helped. Has anyone else encountered this?

Three possible causes:
1. One or more lines of your file have only 4 or fewer space characters (your delimiter), so there are too few fields to fill all 6 columns. A quick check for this is sketched below the list.
2. One or more space characters have been escaped (inadvertently), maybe with a backslash at the end of an unquoted value. For the (default) text format you are using, the manual explains:
Backslash characters (\) can be used in the COPY data to quote data characters that might otherwise be taken as row or column delimiters.
Output from COPY TO or pg_dump would not exhibit either of these faults when reading from a table with matching layout. But maybe your file has been edited, or comes from a different, faulty source?
3. You are not using the file you think you are using. The \copy meta-command of the psql command-line interface is a wrapper for COPY and reads files local to the client. If your file lives on the server, use the SQL command COPY instead.
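To check for cause 1, you can count the fields on every line (a sketch, assuming a single-space delimiter and 6 expected columns; adjust the path and count to your file):
# print the line number and content of every line that does not split into exactly 6 fields
awk -F'[ ]' 'NF != 6 {print NR": "$0}' /users/nathangroom/desktop/DOE_inventory5.txt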

Check the file carefully. In my case, a blank line at the end of the file caused the ERROR: missing data for column. Once I deleted it, the import worked fine.
Printing the file with visible end-of-line markers might reveal something interesting:
cat -e $filename
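If blank lines turn out to be the problem, you can strip them before loading (a sketch; note that this removes every blank line, not just a trailing one):
# write a copy of the file with all empty lines removed
sed '/^$/d' DOE_inventory5.txt > DOE_inventory5_clean.txt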

I had a similar error. Check the version of pg_dump that was used to export the data and the version of the database you want to insert it into, and make sure they are the same. Also, if the COPY-based export fails, export the data as INSERT statements instead.
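For reference, dumping a single table as INSERT statements instead of COPY could look like this (a sketch; sourcedb is a placeholder for the source database name):
# --inserts makes pg_dump emit one INSERT per row instead of a COPY block
pg_dump --inserts -t doe2 sourcedb > doe2_inserts.sql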

Related

psql copy from read data from script

I am trying to export / import a set of tables from a PostgreSQL database.
I am using psql's copy from with stdin from a script. I have read that data (formerly produced using copy to with stdout) can be read inline and terminated using the escape sequence \..
What the documentation didn't make clear to me is what happens if \. appears in the previously exported data.
Specifically, this section of the documentation (emphasis mine) isn't very clear about that.
For \copy ... from stdin, data rows are read from the same source that
issued the command, continuing until \. is read or the stream reaches
EOF. This option is useful for populating tables in-line within a SQL
script file. For \copy ... to stdout, output is sent to the same place
as psql command output, and the COPY count command status is not
printed (since it might be confused with a data row). To read/write
psql's standard input or output regardless of the current command
source or \o option, write from pstdin or to pstdout.
Can / must a \. appearing in the data be escaped somehow?
I am currently using utf8 encoded text format for the export / import.
I think I found the relevant information in the documentation of the SQL COPY command (TEXT Format section, again emphasis mine):
End of data can be represented by a single line containing just backslash-period (\.). An end-of-data marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed only when copying data to or from client applications using pre-3.0 client protocol.
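You can see this for yourself: COPY ... TO in text format doubles any backslash in the data, so a data line can never consist of a bare \. (a sketch using a throwaway table):
CREATE TEMP TABLE t (x text);
INSERT INTO t VALUES ('\.');  -- a value that looks like the end-of-data marker
\copy t TO STDOUT
-- prints \\. : the backslash is escaped on output, so it cannot be read back as end-of-data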

Unquoted carriage return found in data - Preventing COPY FROM in PostgreSQL

I am trying to import a large csv file (~4.5gb) into Postgres but it keeps throwing the following error:
ERROR: unquoted carriage return found in data
HINT: Use quoted CSV field to represent carriage return.
CONTEXT: COPY abc_complete_file_261115, line 9041959
I opened my csv in SublimeText2 and jumped to line 9041959, found the URN for the record I needed, loaded the file in Vim and went to that line. I have hidden characters enabled in Vim (by using :set list), so I would expect to see a carriage return ^M somewhere on the line within the data, but the only one I could find is at the end of the line, as expected.
After an entire day of research and having gotten no further with this issue I ended up deleting the record on line 9041959 - this didn't fix the issue.
Then I figured maybe it's something strange going on between records - so I ended up deleting about 5 records on either side of the line that threw the error - but it gave the same error again. (I'll worry about preserving the data later on; right now I'm just trying to import the file so that I can have a look in Postgres.) I made sure that I had saved the changes to the csv file before rerunning my query, but it just gave the same error.
I feel like I am missing something really really obvious - does anyone have any ideas what might be causing the issue?
I'm using a Mac running El Capitan.
Many thanks
Update 27/11/15
Hi @JakubKania. Sorry for not putting up the query - the reason I didn't is that I am 99.9% sure the issue is with the csv file rather than the query. A generalised version is:
CREATE TABLE large_file_test(
    urn VARCHAR,
    forename CHAR(32),
    surname CHAR(32));
COPY large_file_test FROM '/Users/Shared/largefile1.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING 'LATIN1');
COPY large_file_test FROM '/Users/Shared/largefile2.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING 'LATIN1');
COPY large_file_test FROM '/Users/Shared/largefile3.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING 'LATIN1');
ALTER TABLE large_file_test
    ADD CONSTRAINT large_urn
    PRIMARY KEY (urn);
ANALYZE large_file_test;
So I am actually trying to load 3 separate files into the table that I created. The issue is that there seem to be hidden characters in part 1 that are preventing it from importing into Postgres. I haven't tried anything with part 2 or 3 yet.
The easiest way I found to solve this on Mac (El Capitan) is:
1) Open the file with Sublime Text.
2) In the File menu, reopen the file with encoding UTF-8.
3) In the File menu, save the file with encoding UTF-8.
Sublime then normalizes all the line endings (EOL).
This likely is caused by Windows line endings. Try installing the utility dos2unix and running dos2unix <filename> before executing the COPY command.
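If you can't install dos2unix, a plain tr pipeline does the same job (a sketch; note it strips every carriage return, including any that are legitimately quoted inside fields):
# delete all CR characters, leaving Unix-style LF line endings
tr -d '\r' < largefile1.csv > largefile1_unix.csv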
In my case, I noticed that the csv file had an extra blank line at the end. After removing it, the file imported properly.
I created a separate folder and gave read/write permissions to "everybody", and that solved this problem as well as the problem of access being denied when trying to import the file through pgAdmin 4. It seems to have been the cure-all.
Now I just need to find out which user to give these permissions to instead of "everybody".
Using PostgreSQL 9.6 on Windows 10.
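If "everybody" feels too broad, the permission that matters is read access for the account the PostgreSQL service runs as. A sketch using icacls (the folder path and the account name are assumptions - check in services.msc which account your postgresql service actually uses):
rem grant the service account read access to the import folder (path and account are examples)
icacls "C:\csv_import" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(CI)R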

pgadmin importing csv file errors

I'm using pgadmin 1.18.
I have a copy of a table that I truncated. I simply want to load an import csv file which essentially looks like this:
20151228,12/28/2015,53,12,December,4,2015,1,Monday
20140828,08/28/2014,35,8,August,3,2014,4,Thursday
20150208,02/08/2015,6,2,February,1,2015,7,Sunday
I'm getting an error:
extra data after last expected column CONTEXT: COPY tblname, line 1:
"20151228,12/28/2015,53,12,December,4,2015,1,Monday"
This is the first line it's trying to import. Any suggestions on how to fix this?
From the comments it appears you were using the wrong function in pgadmin.
If you have an existing table, which you have truncated and wish to load from a CSV file, select the table and then use Tools => Import, select the file and choose format 'CSV'.
There are other options in the import dialog to allow you to skip specified columns, use different quoting options, and specify how to deal with NULL values.
One tip that always trips me up: make sure there is no blank line at the end of the file.
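If you prefer the command line over the pgAdmin dialog, the equivalent is a client-side \copy in psql (a sketch; tblname and the path are placeholders, and add HEADER to the options only if your file has a header row):
\copy tblname FROM '/path/to/import.csv' WITH (FORMAT csv)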

Trying to copy text file to table postgresql

Trying to load a text file from the desktop into a table:
copy my_notes(notes) from '/root/desktop/test_note.txt'
It shows this error:
ERROR: extra data after last expected column
CONTEXT: COPY my_notes, line 3: " cher"
I'm a newbie in postgresql
COPY expects tab-separated data, with newlines separating rows.
It isn't suitable for just loading a text file into a field. To do that, I suggest using a simple script, say python and psycopg2.
To read plain text from a file, there is also pg_read_file() - reserved for superusers, though, because of potential security implications. Details in this related answer:
Read data from a text file inside a trigger
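Building on that suggestion, loading the whole file into a single row could look like this (a sketch; superuser only, and depending on your PostgreSQL version the path may have to lie inside the data directory):
-- read the entire file into one text value and insert it as a single note
INSERT INTO my_notes(notes)
SELECT pg_read_file('/root/desktop/test_note.txt');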

How to use BCP to dump query (CDC function) retrieved data to a text file

I'm trying to use BCP to dump data from a CDC function into a .dat file. I'm using the following query (which works in SQL Server 2008 R2):
USE LEESWIJZER
DECLARE @begin_time datetime
      , @end_time datetime
      , @from_lsn binary(10)
      , @to_lsn binary(10)
SET @end_time = '2013-07-05 12:00:00.000';
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (@from_lsn, @to_lsn, 'all')
(edited for readability, used in BCP as single string)
my BCP string is:
BCP "Query above" queryout "C:\temp\LWRCONTRIBUTIES.dat" -w -t ";|" -r \n -T -S {server\\instance} -o "C:\temp\LWRCONTRIBUTIES.log"
As you can see I want a resulting .dat file in unicode, and a log file. I'm guessing the "ChangeDTS" column added to the function outcome is causing my problem. Error message reads: "[Microsoft][SQL Native Client]Host-file columns may be skipped only when copying into the Server".
It may be resolved using a format file, but since this code needs to run daily, likely more than once a day, and the tables are subject to change, I'm reluctant to constantly adjust my format files (there are hundreds of tables needing the same procedure).
Furthermore, this runs on a client's database, and they won't like me creating views in their database.
Anybody got any idea how I can create a text file (.dat) with a selected number of columns from a cdc function?
Found the answer: regardless of which version of bcp is used, bcp can't handle declarations, it seems. If I edit those out, it works like a charm.
However, according to someone on a different forum, BCP should be able to handle declarations of variables. So I'm happy it works for me now, but still confused why it does now and didn't before.
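Since the fix was removing the declarations, the query passed to BCP might look like this with the variables inlined as function calls (a sketch, untested against this schema):
USE LEESWIJZER
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (
       sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
     , sys.fn_cdc_map_time_to_lsn('largest less than or equal', '2013-07-05 12:00:00.000')
     , 'all')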