I/O error while reading BCP format file - sql-server-2008-r2

Today I created a new staging table and a BCP .fmt file. I created some test data and attempted to run the BCP utility from the command line, at which point it failed with an I/O error while reading the BCP format file.
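For context, the invocation was along these lines (the server, database, table, and file names here are placeholders, not the real ones):

bcp MyDatabase.dbo.MyStagingTable in testdata.txt -f staging.fmt -S MyServer -T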
I've got about 20 different format files and staging tables from previous work, and this is the first time I have encountered this error.
How can I fix this error?
Please note: I have added my solution below, but if you have other answers, please add them. The answer was so quirky/obscure that I think it may help others.

Basically, this one was really strange. To make it work, make sure there is an empty line after the last column defined in the format file. I added an extra empty line, resaved the file, and the BCP utility then ran successfully. The example below shows where the extra line goes.
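For illustration, a minimal non-XML format file might look like this (the version number, column names, lengths, and collation are made up for the example; the important part is the empty line after the last column definition):

10.0
3
1   SQLCHAR   0   50   ","      1   FirstName   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   ","      2   LastName    SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   12   "\r\n"   3   StagingId   SQL_Latin1_General_CP1_CI_AS

Without that trailing blank line after the last column row, BCP reported the I/O error; with it, the load ran.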

When trying to save a pgAdmin result to a file (TXT), the result is modified

When I run my query in pgAdmin 4 v5's Query Tool, the data is displayed the way I expect (and this is also what I would like to get in my export file).
Unfortunately, this information is transformed when I save the result to a .TXT file from the Query Tool.
After opening the saved TXT document, I found that '.0' had been appended to numeric values and that my long character values had been rewritten in scientific notation (e.g. 'e+29') up to a certain row.
Can you please tell me how to prevent these transformations?
All,
I found out that the above problem is linked to the version of pgAdmin I was using, specifically pgAdmin 4 v5.
After upgrading to pgAdmin 4 v6.4, the problem no longer appears.
I therefore consider this fixed, even if the cause of the problem remains unknown to me.
Thanks for your help.
Brieuc

Find corrupt data in xlsx file

We are generating xlsx files using a Perl script. The files usually contain thousands of records, which makes spotting errors very difficult.
This process had been working for years without problems.
This week we got a request to check a file that contains errors. When opening it, Excel reported that the file contains errors and asked whether we wanted to repair them.
We do not actually want to recover the data; we want to know which part of the file is corrupt. The error presumably comes from corrupt data, and we want to identify that data.
The log message shows the following:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<recoveryLog xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
<logFileName>error068200_01.xml</logFileName>
<summary>Errors were detected in file 'D:\Temp\20161020\file_name.xlsx'</summary>
<repairedRecords summary="Following is a list of repairs:"><repairedRecord>Repaired Records: Cell information from /xl/worksheets/sheet1.xml part</repairedRecord>
</repairedRecords>
</recoveryLog>
The error presumably comes from corrupt data. Is there any tool or method that helps to spot this corrupt data?
I tried renaming the file to a .zip archive, extracting it, and opening it in an XML editor, but I was not able to find any errors in the XML files.
We also checked that the structure of the different XML files is fine.
Thank you and best regards
As expected, the problem was coming from text cells containing numbers with an E in the middle. I used the following steps to identify the erroneous cells (a rough equivalent of the check is sketched after this list).
1. I wrote a small Java class to read the file. The class checked the cell type and then displayed the value. The Java program threw an exception at some line, "Cannot get a numeric value from a text cell", even though I was correctly checking the cell type before displaying the content.
2. I checked the opened Excel file at that line and found that the cell contained only 'inf'.
3. I opened the file using OpenOffice and looked at the same cells. There they contained 0.
4. I debugged the program generating the data and found out that these cells contained data like '914E5514'. It seems that the E was interpreted by Excel as an exponent. We changed the program to use the format '#' for that cell, and this solved the issue.
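Not the original Java class, but a rough Python sketch of the same kind of scan, using openpyxl (the file name and the pattern are only examples; openpyxl must be installed, and this assumes the file can still be opened at all):

import math
import re
from openpyxl import load_workbook

# Text that Excel may silently reinterpret as a number in scientific
# notation, e.g. '914E5514' -> 9.14e+5516 (which overflows to inf).
EXPONENT_LIKE = re.compile(r'^\d+E\d+$', re.IGNORECASE)

wb = load_workbook('file_name.xlsx')
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            value = cell.value
            if value is None:
                continue
            # Flag text cells whose content looks like an exponent ...
            if isinstance(value, str) and EXPONENT_LIKE.match(value):
                print(ws.title, cell.coordinate, repr(value))
            # ... and numeric cells that have already overflowed to inf,
            # which is what the corrupt cells showed up as in Excel.
            elif isinstance(value, float) and math.isinf(value):
                print(ws.title, cell.coordinate, 'inf')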
Thank you.
Thank you very much, you helped me a lot by pointing out that one particular content item may be the root of the problem.
My corrupted content was https://www.example.com XYZ ... ASDAS
Solution: www.example.com XYZ ... ASDAS
This is something that cannot be handled by Excel. It would be nice to have a list of things that do not work.

Use SQL Workbench to read a variable from a file

UPDATE: In the Workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to execute the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVardef today=#"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process, but as far as I can tell, I am using it according to the SQL Workbench documentation.
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of Workbench that did not support this functionality. Now that I have upgraded to build 119, this is working. I'm having other issues, but that's a different story....

Unquoted carriage return found in data - Preventing COPY FROM in PostgreSQL

I am trying to import a large CSV file (~4.5 GB) into Postgres, but it keeps throwing the following error:
ERROR: unquoted carriage return found in data
HINT: Use quoted CSV field to represent carriage return.
CONTEXT: COPY abc_complete_file_261115, line 9041959
I opened my csv in Sublime Text 2 and jumped to line 9041959, found the URN for the record I needed, then loaded the file in Vim and went to that line. I have hidden characters enabled in Vim (via :set list), so I would expect to see a carriage return ^M somewhere on the line within the data, but the only one I could find was at the end of the line, as expected.
After an entire day of research, having gotten no further with this issue, I ended up deleting the record on line 9041959 - this didn't fix the issue.
Then I figured maybe it's something strange going on between records, so I ended up deleting about 5 records on either side of the line that threw the error - but it gave the same error again. (I'll worry about preserving the data later on; right now I'm just trying to import the file so that I can have a look at it in Postgres.) I made sure that I had saved the changes to the csv file before rerunning my query, but it just gave the same error.
I feel like I am missing something really really obvious - does anyone have any ideas what might be causing the issue?
I'm using a Mac running El Capitan.
Many thanks
Update 27/11/15
Hi @JakubKania. Sorry for not putting up the query - the reason I didn't is that I am 99.9% sure the issue is to do with the csv file rather than the query. A generalised version is:
CREATE TABLE large_file_test(
urn VARCHAR,
forename CHAR(32),
surname CHAR(32));
COPY large_file_test FROM '/Users/Shared/largefile1.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile2.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
COPY large_file_test FROM '/Users/Shared/largefile3.csv' (FORMAT CSV, DELIMITER ',', HEADER, ENCODING LATIN1);
ALTER TABLE large_file_test
ADD CONSTRAINT large_urn
PRIMARY KEY (urn);
ANALYZE large_file_test;
So I am actually trying to load 3 separate files into the table that I created. The issue is that there seem to be hidden characters in part 1 that are preventing it from importing into Postgres. I haven't tried anything with parts 2 or 3 yet.
The easiest way I found to solve this on a Mac (El Capitan) is:
1) Open the file with Sublime Text.
2) From the File menu, reopen the file with UTF-8 encoding.
3) From the File menu, save the file with UTF-8 encoding.
Sublime normalizes all the line endings.
This is likely caused by Windows line endings. Try installing the utility dos2unix and running dos2unix <filename> before executing the COPY command.
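If you would rather locate the offending lines than convert the whole file, a quick check along these lines works too (a Python sketch; the path is a placeholder, and line numbers are counted by newline, so they should roughly match the line reported in the COPY error context):

# Flag lines that contain a carriage return anywhere other than as part of
# the normal line terminator - these are the bare CRs COPY complains about.
path = '/Users/Shared/largefile1.csv'  # placeholder

with open(path, 'rb') as f:
    for line_no, line in enumerate(f, start=1):
        body = line.rstrip(b'\r\n')  # drop the trailing CRLF/LF
        if b'\r' in body:
            print('bare carriage return on line', line_no)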
In my case, I noticed that the csv file had an extra blank line at the end. After removing it, the file imported properly.
I created a separate folder and gave read/write permissions to "everybody", and that solved this problem as well as the access-denied problem when trying to import the file through pgAdmin 4. It seems to have been the "cure all".
Now, just to find out which user I need to give these permissions to instead of "everybody".
Using PostgreSQL v 9.6 on Windows 10.

Matlab error with "load" - any idea what might cause it?

I'm trying to load a text file into Matlab and getting an error message that doesn't make sense to me. The file is a very large text file containing 8 columns of numbers. The error message says:
Number of columns on line 1308295 of ASCII file [filename.txt] must be the same as previous lines.
But as far as I can see there's nothing special about line 1308295. In fact that line has been in the file for months and the code hasn't complained before. So I wondered if there might be something else unrelated that could cause Matlab to give that error?
What kind of text file is it? If it's a data file, you could try the "importdata" function.
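Independent of Matlab, a quick way to confirm whether some line really does have a different number of columns is a check like this (a Python sketch; it assumes whitespace-delimited columns, and the file name is a placeholder):

# Count whitespace-separated columns per line and report any line whose
# count differs from the first line - the condition load is complaining about.
path = 'filename.txt'  # placeholder

expected = None
with open(path) as f:
    for line_no, line in enumerate(f, start=1):
        n_cols = len(line.split())
        if expected is None:
            expected = n_cols
        elif n_cols != expected:
            print('line', line_no, 'has', n_cols, 'columns, expected', expected)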