How to use BCP to dump data retrieved by a query (CDC function) to a text file - sql-server-2008-r2

I'm trying to use BCP to dump data from a CDC function into a .dat file. I'm using the following query (which works in SQL Server 2008 R2):
USE LEESWIJZER
DECLARE @begin_time datetime
      , @end_time datetime
      , @from_lsn binary(10)
      , @to_lsn binary(10)
SET @end_time = '2013-07-05 12:00:00.000';
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (@from_lsn, @to_lsn, 'all')
(edited for readability; used in BCP as a single string)
My BCP string is:
BCP "Query above" queryout "C:\temp\LWRCONTRIBUTIES.dat" -w -t ";|" -r \n -T -S {server\instance} -o "C:\temp\LWRCONTRIBUTIES.log"
As you can see, I want a resulting .dat file in Unicode, plus a log file. I'm guessing the "ChangeDTS" column added to the function's output is causing my problem. The error message reads: "[Microsoft][SQL Native Client]Host-file columns may be skipped only when copying into the Server".
It may be resolved using a format file, but since this code needs to run daily, likely more than once a day, and the tables are subject to change, I'm reluctant to constantly adjust my format files (there are hundreds of tables needing the same procedure).
Furthermore, this runs on a client's database, and they won't like me creating views in it.
Has anybody got any idea how I can create a text file (.dat) with a selected number of columns from a CDC function?

Found the answer: regardless of which version of BCP is used, BCP can't handle variable declarations, it seems. If I edit those out, it works like a charm (a sketch of the inlined query is below).
However, according to someone on a different forum, BCP should be able to handle variable declarations. I'm happy it works for me now, but still confused about why it works now and didn't before.
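For reference, a minimal sketch of the query with the declarations removed by inlining the LSN lookups into the function call (it assumes the same capture instance and cutoff time as above, and is not necessarily the exact statement the poster ended up using):
USE LEESWIJZER
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
     , *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (
         sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
       , sys.fn_cdc_map_time_to_lsn('largest less than or equal', '2013-07-05 12:00:00.000')
       , 'all')
Passed to BCP as a single string, this keeps the whole statement declaration-free.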

Related

How to export full-text files with SQL?

Is there an easy way to import/export full-text fields as files?
One that solves the "load as multiple lines" problem: with SQL's COPY I can only transform a full file into a full table, not into a single text field, because COPY treats each input line as a row.
One that solves the save-back problem: saving the full XML file to the filesystem without any change to its binary representation (preserving its SHA1), and without other external procedures (such as Unix sed).
The main problem is the export, hence the title of this question.
PS: proof that the file is the same after the round trip (import, export back, and compare with the original) can be obtained with sha1sum; see the examples below. So a natural additional demand is to check the same SHA1 in SQL, avoiding an export for simple check tasks.
All examples:
Import a full text file into a full table (not what I need), and test that it can be exported back as the same text. PS: I need to import one file into one field of one row.
Transform the full table into one file (not what I need), and test that it exports as the same text. PS: I need to export one row (of one field) into one file.
Calculate the hash in SQL, the SHA1 of the field. It must be the same as the original when compared... otherwise it is not a solution for me.
The following examples show each problem and a non-elegant workaround.
1. Import
CREATE TABLE ttmp (x text);
COPY ttmp FROM '/tmp/test.xml' ( FORMAT text ); -- the file is broken into one row per line
COPY (SELECT x FROM ttmp) TO '/tmp/test_back.xml' (format TEXT);
Checking that original and "back" have exactly the same content:
sha1sum /tmp/test*.*
570b13fb01d38e04ebf7ac1f73dfad0e1d02b027 /tmp/test_back.xml
570b13fb01d38e04ebf7ac1f73dfad0e1d02b027 /tmp/test.xml
PS: this seems perfect, but the problem here is the use of many rows. A real import solution would load the file into one row (and one field). A real export solution would be an SQL function that produces test_back.xml from a single row (of a single field).
2. Transform full table into one file
Use it to store XML:
CREATE TABLE xtmp (x xml);
INSERT INTO xtmp (x)
SELECT array_to_string(array_agg(x),E'\n')::xml FROM ttmp
;
COPY (select x::text from xtmp) TO '/tmp/test_back2-bad.xml' ( FORMAT text );
... but it does not work, as we can check with sha1sum /tmp/test*.xml: it does not produce the same hash for test_back2-bad.xml.
So we also need a translation from the literal \n back to chr(10), using an external tool (perl, sed or any other): perl -p -e 's/\\n/\n/g' /tmp/test_back2-bad.xml > /tmp/test_back2-good.xml
OK, now test_back2-good.xml has the same hash ("570b13fb..." in my example) as the original.
Using Perl is a workaround; how can this be done without it?
3. The SHA1 of the field
SELECT encode(digest(x::text::bytea, 'sha1'), 'hex') FROM xtmp;
Not solved: it is not the same hash as the original (the "570b13fb..." in my example)... Perhaps the ::text cast enforced an internal representation with \n symbols, so a solution would be a direct cast to bytea, but that is an invalid cast. The other workaround is also not a solution:
SELECT encode(digest( replace(x::text,'\n',E'\n')::bytea, 'sha1' ), 'hex')
FROM xtmp
... I tried CREATE TABLE btmp (x bytea) and COPY btmp FROM '/tmp/test.xml' ( FORMAT binary ), but it fails with "unknown COPY file signature".
COPY isn't designed for this. It's meant to deal with table-structured data, so it can't work without some way of dividing rows and columns; there will always be some characters which COPY FROM interprets as separators, and for which COPY TO will insert some escape sequence if it finds one in your data. This isn't great if you're looking for a general file I/O facility.
In fact, database servers aren't designed for general file I/O. For one thing, anything which interacts directly with the server's file system will require a superuser role. If at all possible, you should just query the table as usual, and deal with the file I/O on the client side.
That said, there are a few alternatives:
The built-in pg_read_file() function, and pg_file_write() from the adminpack module, provide the most direct interface to the file system, but they're both restricted to the cluster's data directory (and I wouldn't recommend storing random user-created files in there).
lo_import() and lo_export() are the only built-in functions I know of which deal directly with file I/O and which have unrestricted access to the server's file system (within the constraints imposed by the host OS), but the Large Object interface is not particularly user-friendly (a rough sketch follows this list)....
If you install the untrusted variant of a procedural language like Perl (plperlu) or Python (plpythonu), you can write wrapper functions for that language's native I/O routines.
There isn't much you can't accomplish via COPY TO PROGRAM if you're determined enough - for one, you could COPY (SELECT 1) TO PROGRAM 'mv <source_file> <target_file>' to work around the limitations of pg_file_write() - though this blurs the line between SQL and external tools somewhat (and whoever inherits your codebase will likely not be impressed...).
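As a rough illustration of the lo_export() route, a sketch only: it assumes the xtmp table from above, a server path the postgres process can write to, and superuser (or equivalent) rights; it also leaves an orphaned large object behind unless you clean it up with lo_unlink().
SELECT lo_export(lo_from_bytea(0, convert_to(x::text, 'UTF8')), '/tmp/sandbox/test_lo.xml') FROM xtmp;
Because nothing here goes through COPY's text format, the newlines in x are written out as-is; any remaining hash difference would come from the content itself (for example a missing trailing newline), not from escaping.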
You can use Python's open(), f.write(), and f.close() inside a plpythonu Postgres function to write to a file.
The language extension would need to be installed.
https://www.postgresql.org/docs/8.3/static/plpython.html
Working example from the mailing list.
https://www.postgresql.org/message-id/flat/20041106125209.55697.qmail%40web51806.mail.yahoo.com#20041106125209.55697.qmail#web51806.mail.yahoo.com
For example, with plpythonu:
CREATE FUNCTION makefile(p_file text, p_content text) RETURNS text AS $$
o=open(args[0],"w")
o.write(args[1])
o.close()
return "ok"
$$ LANGUAGE PLpythonU;
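A quick usage check, assuming a path the server-side postgres process can write to (such as the /tmp/sandbox folder prepared below):
SELECT makefile('/tmp/sandbox/hello.txt', 'hello world');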
PS: for safe implementation see this example.
Preparing
There is a not-so-obvious procedure for using the PL/Python extension. Assuming an Ubuntu server:
In SQL, check SELECT version().
In the terminal, check the versions listed for sudo apt install postgresql-plpython.
Install the correct version, e.g. sudo apt install postgresql-plpython-9.6.
Back in SQL, run CREATE EXTENSION plpythonu.
Testing
/tmp is usable by default; to create or use another folder, e.g. /tmp/sandbox, give it to the postgres user with sudo chown postgres:postgres /tmp/sandbox.
Assume the tables from the question's examples. SQL script, repeating some lines:
DROP TABLE IF EXISTS ttmp;
DROP TABLE IF EXISTS xtmp;
CREATE TABLE ttmp (x text);
COPY ttmp FROM '/tmp/sandbox/original.xml' ( FORMAT text );
COPY (SELECT x FROM ttmp) TO '/tmp/sandbox/test1-good.xml' (format TEXT);
CREATE TABLE xtmp (x xml);
INSERT INTO xtmp (x)
SELECT array_to_string(array_agg(x),E'\n')::xml FROM ttmp
;
COPY (select x::text from xtmp)
TO '/tmp/sandbox/test2-bad.xml' ( FORMAT text );
SELECT makefile('/tmp/sandbox/test3-good.xml',x::text) FROM xtmp;
The sha1sum *.xml output, starting from my original XML file:
4947.. original.xml
4947.. test1-good.xml
949f.. test2-bad.xml
4947.. test3-good.xml

Use SQL Workbench to read a variable from a file

UPDATE: in the workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to run the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVarDef today=@"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process but as far as I can tell, I am using this according to the SQL Workbench documentation.
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of workbench that did not support this functionality. Now that I upgraded to build 119 this is working. I'm having other issues but that's a different story....

ERROR: missing data for column when using \copy in psql

I'm trying to import a .txt file into PostgreSQL. The txt file has 6 columns:
Laboratory_Name Laboratory_ID Facility ZIP_Code City State
And 213 rows.
I'm trying to use \copy to put the contents of this file into a table called doe2 in PostgreSQL using this command:
\copy DOE2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' (DELIMITER(' '))
It gives me this error:
missing data for column "facility"
I've looked all around for what to do when encountering this error and nothing has helped. Has anyone else encountered this?
Three possible causes:
One or more lines of your file have only 4 or fewer space characters (your delimiter).
One or more space characters have been escaped (inadvertently). Maybe with a backslash at the end of an unquoted value. For the (default) text format you are using, the manual explains:
Backslash characters (\) can be used in the COPY data to quote data
characters that might otherwise be taken as row or column delimiters.
Output from COPY TO or pg_dump would not exhibit any of these faults when reading from a table with matching layout. But maybe your file has been edited or is from a different, faulty source?
You are not using the file you think you are using. The \copy meta-command of the psql command-line interface is a wrapper for COPY and reads files local to the client. If your file lives on the server, use the SQL command COPY instead.
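To illustrate the client-side versus server-side distinction from the last point, a sketch (the server-side path is a placeholder):
-- client-side: psql reads the file from the machine where psql is running
\copy doe2 FROM '/users/nathangroom/desktop/DOE_inventory5.txt' (DELIMITER ' ')
-- server-side: the file must exist on the database server and be readable by the postgres process
COPY doe2 FROM '/path/on/server/DOE_inventory5.txt' (DELIMITER ' ');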
Check the file carefully. In my case, a blank line at the end of the file caused the ERROR: missing data for column. I deleted it, and it worked fine.
Printing the blank lines might reveal something interesting:
cat -e $filename
I had a similar error. Check the version of pg_dump that was used to export the data and the version of the database you want to insert it into, and make sure they are the same. Also, if the COPY export fails, export the data as INSERT statements instead.

I/O error while reading BCP format file

Today I created a new staging table and a BCP .fmt file. I created some test data and attempted to run the BCP utility from the command line, and it failed with an I/O error while reading the format file.
I've got about 20 different format files and staging tables from previous work and this is the first time I have encountered this error.
How can I fix this error?
Please note, I have added my solution below, but if you have other answers, please add them in. The answer was so quirky/obscure that I think it may help others.
Basically, this one was really strange. In order for it to work, make sure there is an empty line after the last column defined in the format file. I added an extra empty line, resaved the file, and then the BCP utility ran the file successfully.
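For illustration, a minimal non-XML format file for a hypothetical two-column staging table (the column names, types, and the version number on the first line are placeholders and should match your own table and bcp release); the empty line after the last column definition is the part that mattered here:
10.0
2
1       SQLCHAR       0       100     ";"       1     Col1    SQL_Latin1_General_CP1_CI_AS
2       SQLCHAR       0       100     "\r\n"    2     Col2    SQL_Latin1_General_CP1_CI_AS

Save the file with one blank line after the Col2 row.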

How to collect column headers and data using dbisqlc.exe command

I am trying to query a Sybase ASA database using the dbisqlc.exe command-line on a Windows system and would like to collect the column headers along with the associated table data.
Example:
dbisqlc.exe -nogui -c "ENG=myDB;DBN=dbName;UID=dba;PWD=mypwd;CommLinks=tcpip{PORT=12345}" select * from myTable; OUTPUT TO C:\OutputFile.txt
I would prefer it if this command wrote to stdout; however, that does not appear to be an option, aside from using dbisql.exe, which is not available in my environment.
When I run it in this form, the header and data are generated, but in a format that is hard to parse.
Any assistance would be greatly appreciated.
Try adding the 'FORMAT SQL' clause to the OUTPUT statement. It will give you the select statement containing the column names as well as the data.
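For example, appending the clause to the question's command (an untested sketch with the same placeholder connection string):
dbisqlc.exe -nogui -c "ENG=myDB;DBN=dbName;UID=dba;PWD=mypwd;CommLinks=tcpip{PORT=12345}" select * from myTable; OUTPUT TO C:\OutputFile.txt FORMAT SQL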
In reviewing the output from the following dbisqlc.exe command, it appears as though I can parse the output using perl.
Command:
dbisqlc.exe -nogui -c "ENG=myDB;DBN=dbName;UID=dba;PWD=mypwd;CommLinks=tcpip{PORT=12345}" select * from myTable; OUTPUT TO C:\OutputFile.txt
The output appears to break in odd places when viewed in text editors such as vi or TextPad; however, the output from this command is actually returned with fixed column widths.
The second line of the output is a run of = signs spanning the width of each column. What I did was build a "template" string based on those = runs, which can be passed to Perl's unpack function. I then use this template to build an array of column names and parse the result set using unpack.
This may not be the most efficient method however I think it should give me the results I am looking for.