This is sort of a general question, as I cannot provide the data I am trying to import. I have exported a CSV file from FileMaker Pro Advanced 15 and am trying to import it into PostgreSQL using pgAdmin III. When importing, the progress bar fills and I am presented with the option to click "Done" (I presume the import has finished). However, after clicking "Done" and looking in the table, none of the "imported" data is actually there. I am not sure why this is; I have looked in many places and have also contacted FileMaker. Does anyone here have any suggestions?
Here is the log for this execution:
2016-06-15 17:40:22 QUERY : COPY query (127.0.0.1:5432):
COPY public."Clients"(name,"createdAt","updatedAt",age,birthday,canreceivetxt,phone,dln,ssn,zip,scndcntctprsn,casetype,incidentloc,dol,pdp,pr,incidentfcts,posncar,pdd,ymmov,advinfo,wtnsnfo,advtick,clitick,policy,adjstrnme,adjstrphne,adjstrfx,insclaimnum,advpol,advinsn,advinsp,trnsprtbprmdc,whtcmpny,medtrtmnt,prvdrs,doi,rltdclaims,wglss,source,notes,locov,dls,email,cai,advins,address,gender,scndrycntctprsonphone,scndryctnctprsnrel,inslimit,checksreceived,disbursement,clientinssettlesamnt,casefeepercent,advsettleamnt,"Casecost",lineitemfees,lastcall)
FROM STDIN
(FORMAT 'csv', DELIMITER ',', HEADER, QUOTE '"', ESCAPE '"', ENCODING 'UTF8')
I'd like to ask whether my syntax is correct for loading a CSV file into a DB2 database. I cannot confirm it myself, as I'm having problems configuring DB2 on my local machine. I'd also like to confirm whether the placement of the double quotes is correct for both dateformat and timeformat.
Below is my code snippet.
LOGFILE=/mnt/bin/log/myLog.txt
db2 "load from /mnt/bin/test.csv of del modified by coldel noeofchar noheader dateformat=\"YYYY-MM-DD\" timeformat=\"HH:MM:SS\" usedefaults METHOD P(1,2,3,4,5) messages $LOGFILE insert_update into myuser.desctb(DESC_ID,START_DATE,START_TIME,END_DATE,END_TIME)"
If you use modified by coldel then you should also specify the delimiter character. If the delimiter really is a comma, then omit the coldel option.
Additionally, insert_update is an option of the IMPORT command (not of the LOAD command), but import is a logged action, which reduces insert throughput. You can use ... replace into ... with the LOAD command instead. Study the docs for the details.
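An untested sketch of how the command from the question might look rewritten for LOAD, with coldel dropped (the delimiter is a plain comma) and insert_update replaced by LOAD's replace into:
# Untested sketch: same options as the question's command, minus coldel,
# using LOAD's "replace into" instead of IMPORT's "insert_update".
db2 "load from /mnt/bin/test.csv of del modified by noeofchar dateformat=\"YYYY-MM-DD\" timeformat=\"HH:MM:SS\" usedefaults METHOD P(1,2,3,4,5) messages $LOGFILE replace into myuser.desctb(DESC_ID,START_DATE,START_TIME,END_DATE,END_TIME)"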
The quoting seems OK, but correctness of the formats depends on data file values.
Refer to the LOAD documentation for details, you should study this page and the related pages.
An alternative to LOAD is the INGEST command (available in current Db2 clients), which has insert, replace, merge, and other options and offers high throughput compared to IMPORT.
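A minimal INGEST sketch reusing the file and table from the question (untested; INGEST defaults to comma-delimited fields, and depending on your setup a RESTART clause may also be required):
# Untested sketch: basic INGEST form with default comma-delimited field handling.
db2 "INGEST FROM FILE /mnt/bin/test.csv FORMAT DELIMITED INSERT INTO myuser.desctb"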
I am trying to import a large csv file (~4.5gb) into Postgres but it keeps throwing the following error:
ERROR: unquoted carriage return found in data
HINT: Use quoted CSV field to represent carriage return.
CONTEXT: COPY abc_complete_file_261115, line 9041959
I opened my CSV in Sublime Text 2 and jumped to line 9041959, found the URN for the record I needed, then loaded the file in Vim and went to that line. I have hidden characters enabled in Vim (using :set list), so I would expect to see a carriage return ^M somewhere on the line within the data, but the only one I could find is at the end of the line, as expected.
After an entire day of research and having gotten no further with this issue I ended up deleting the record on line 9041959 - this didn't fix the issue.
Then I figured maybe it was something strange going on between records, so I ended up deleting about 5 records on either side of the line that threw the error, but it gave the same error again. (I'll worry about preserving the data later on; right now I'm just trying to import the file so that I can have a look in Postgres.) I made sure that I had saved the changes to the CSV file before rerunning my query, but it just gave the same error.
I feel like I am missing something really really obvious - does anyone have any ideas what might be causing the issue?
I'm using a Mac running El Capitan.
Many thanks
Update 27/11/15
Hi @JakubKania. Sorry for not putting up the query; the reason I didn't is that I am 99.9% sure the issue is with the CSV file rather than the query. A generalised version is:
CREATE TABLE large_file_test(
urn VARCHAR,
forename CHAR(32),
surname CHAR(32));
COPY large_file_test FROM '/Users/Shared/largefile1.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile2.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile3.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
ALTER TABLE large_file_test
ADD CONSTRAINT large_urn
PRIMARY KEY (urn);
ANALYZE large_file_test;
So I am actually trying to load 3 separate files into the table that I created. The issue is that there seem to be hidden characters in part 1 that are preventing it from importing into Postgres. I haven't tried anything with part 2 or 3 yet.
The easiest way I found to solve this on a Mac running El Capitan is:
1) Open the file with Sublime Text
2) From the menu, reopen the file with encoding UTF-8
3) From the menu, save the file with encoding UTF-8
Sublime normalizes all the line endings (EOL).
This is likely caused by Windows line endings. Try installing the utility dos2unix and running dos2unix <filename> before executing the COPY command.
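To verify the diagnosis before converting, something along these lines should work (assuming the largefile1.csv name from the question):
grep -c $'\r' largefile1.csv    # counts lines containing a carriage return
dos2unix largefile1.csv         # rewrites CRLF line endings as plain LF, in place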
In my case, I noticed that the CSV file had an extra blank line at the end. After removing it, the file imported properly.
I created a separate folder and gave read/write permissions to "Everybody", and that solved this problem as well as the access-denied problem when trying to import the file through pgAdmin 4. It seems to have been the cure-all.
Now I just need to find out which user to give these permissions to instead of "Everybody".
Using PostgreSQL v 9.6 on Windows 10.
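If you want to script that grant instead of clicking through the Explorer dialogs, a hedged sketch for Windows using icacls (the folder path is hypothetical):
rem Grants modify (read/write) to Everyone, inherited by files and subfolders.
icacls "C:\pg_import" /grant Everyone:(OI)(CI)M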
I'm trying to import a .txt file into PostgreSQL. The txt file has 6 columns:
Laboratory_Name Laboratory_ID Facility ZIP_Code City State
And 213 rows.
I'm trying to use \copy to put the contents of this file into a table called doe2 in PostgreSQL using this command:
\copy DOE2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' (DELIMITER(' '))
It gives me this error:
missing data for column "facility"
I've looked all around for what to do when encountering this error and nothing has helped. Has anyone else encountered this?
Three possible causes:
One or more lines of your file has only 4 or fewer space characters (your delimiter).
One or more space characters have been escaped (inadvertently). Maybe with a backslash at the end of an unquoted value. For the (default) text format you are using, the manual explains:
Backslash characters (\) can be used in the COPY data to quote data
characters that might otherwise be taken as row or column delimiters.
Output from COPY TO or pg_dump would not exhibit any of these faults when reading from a table with matching layout. But maybe your file has been edited or is from a different, faulty source?
You are not using the file you think you are using. The \copy meta-command of the psql command-line interface is a wrapper for COPY and reads files local to the client. If your file lives on the server, use the SQL command COPY instead.
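To illustrate the difference, a sketch using the table and file from the question (the server-side path is hypothetical):
-- client side: the path is resolved on the machine running psql
\copy doe2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' WITH (FORMAT text, DELIMITER ' ')
-- server side: the path must be readable by the PostgreSQL server process
COPY doe2 FROM '/var/lib/postgresql/DOE_inventory5.txt' WITH (FORMAT text, DELIMITER ' ');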
Check the file carefully. In my case, a blank line at the end of the file caused the ERROR: missing data for column. I deleted it, and the import worked fine.
Printing the lines with their end-of-line markers might reveal something interesting:
cat -e $filename
I had a similar error. Check the version of pg_dump that was used to export the data against the version of the database you want to insert it into, and make sure they are the same. Also, if the COPY-based export fails, try exporting the data as INSERT statements instead (e.g., pg_dump --inserts).
I'm trying to import data from a txt file and keep getting a 'Wrong number of data values in row xxx' error. Looking at the text file, everything looks fine but I can't tell what/how Teradata is interpreting it.
So is there a way to view or preview the data from Teradata's perspective? I tried running a SELECT statement, but since the import doesn't finish, nothing is even imported. Which brings me to my next question, is there a way to limit an external-file import to a certain # of rows? Like import just the first 50 rows from the text file?
May I suggest you obtain a copy of Notepad++ or Sublime Text, both of which are free to download, to view the text file. This will allow you to open the text file and identify what in the records is causing you trouble loading the file. You will be able to display non-printable characters and use advanced search techniques to traverse the files looking for problems with the data.
It is possible there is an embedded carriage return, line feed, or other non-printable character that is being interpreted during the import and generating this error.
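If you prefer the command line, you can also preview just the first 50 rows with line endings and non-printable characters made visible (the file name is hypothetical; this shows the raw file, not Teradata's interpretation of it):
head -n 50 yourfile.txt | cat -e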
I want to export my database to an Excel file using PHP. I need PHP source code to do this.
I'm not going to write your whole program for you (that's not what this site is about) but if you have a specific problem, feel free to post another question.
It looks like PHP has a built-in function to write an array as a line of a CSV file: fputcsv. So run your query and, for each row returned, call fputcsv.
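A minimal sketch of that loop, assuming a PDO connection and a hypothetical clients table (adjust the DSN, credentials, and query to your setup):
<?php
// Hypothetical connection details; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'password');
$out = fopen('export.csv', 'w');

$stmt = $pdo->query('SELECT * FROM clients');
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    fputcsv($out, $row); // writes one properly escaped CSV line per row
}
fclose($out);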
Or just use mysqldump, which supports dumping a database to CSV natively.
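For the mysqldump route, something like this should write one CSV-style .txt file per table into the target directory (the database name and directory are hypothetical; note that the MySQL server itself writes the files, so it needs write access to that directory):
mysqldump -u user -p --tab=/tmp/export --fields-terminated-by=',' --fields-enclosed-by='"' mydb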
PLEASE NOTE!
Exporting records to .csv is not the same as exporting records to MS Excel .csv.
First and foremost, the source code is out there; no problem finding it.
The difference, though, is that with Excel, while you still separate fields with commas and enclose them in double quotes ("), Excel escapes a quote (") with an additional quote (so it looks like "").
This means you can't simply use addslashes when trying to export.
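To illustrate the quote-doubling, a tiny hypothetical helper (note that PHP's built-in fputcsv already escapes this way):
<?php
// Excel-style CSV escaping: double any embedded quotes instead of backslash-escaping them.
function excel_csv_field(string $value): string {
    return '"' . str_replace('"', '""', $value) . '"';
}

echo excel_csv_field('He said "hi"'); // prints: "He said ""hi"""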
No harm meant. If you need the source code for a CSV export (there is a lot of code available at php.net), then phpBlocks may be the right tool for you. It exports to CSV without coding, click-and-point like Google's App Inventor.
See: http://www.freegroup.de/software/phpBlocks/demo.html