FreeTDS runs out of memory from DBD::Sybase - perl

When I add
client charset = UTF-8
to my freetds.conf file, my DBD::Sybase program emits:
Out of memory!
and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error.
All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.)
The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses the iconv program to convert between character sets; is it possible that this message is being emitted from iconv?
If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error.
I'm connecting to SQL Server.
What do I need to do to get these queries to return successfully?

I saw this in the freetds.conf file - see if it helps:
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out of memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# (Microsoft servers sometimes pretend TEXT columns are
# 4 GB wide!) If you have this problem, try setting
# 'text size' to a more reasonable limit
text size = 64512

These links seem relevant as well and show how the setting can be changed without modifying the freetds.conf file:
http://lists.ibiblio.org/pipermail/freetds/2002q1/006611.html
http://www.freetds.org/faq.html#textdata
The FAQ is particularly unhelpful, not listing the actual error message.
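If editing freetds.conf (or overriding it as the links above describe) isn't convenient, the same limit can be requested per connection by issuing SET TEXTSIZE yourself once connected. The sketch below is not from the original post: the server name, database, credentials, table, and ntext column are placeholders, and 64512 simply mirrors the freetds.conf example above.

use DBI;

# Connect as usual; the client charset still comes from freetds.conf.
my $dbh = DBI->connect('dbi:Sybase:server=MYSERVER;database=mydb',
                       'username', 'password',
                       { RaiseError => 1 });

# Cap how much data the server sends for text/ntext columns,
# mirroring the "text size = 64512" setting shown above.
$dbh->do('SET TEXTSIZE 64512');

# Placeholder query returning an ntext column.
my $sth = $dbh->prepare('SELECT my_ntext_column FROM my_table');
$sth->execute();
while (my ($value) = $sth->fetchrow_array) {
    print "$value\n";
}
$dbh->disconnect;

If that makes the "Out of memory!" error go away, it at least suggests the huge default TEXT buffer, not iconv, is the culprit.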

Related

DBeaver: Redirect server output to file

I'm using DBeaver to execute a large script file which produces a lot of output (via PostgreSQL's RAISE NOTICE statement). I can see the output in the server output tab; however, the buffer size seems to be limited, so a lot of output is lost at the end of the execution.
Is it somehow possible to either increase the server output tab buffer size or redirect the server output directly to a file?
I was experiencing the same issue as you, and I have been unable to find any setting which limits the output length.
In my case, what I eventually discovered was that there was an error in my script which was causing it to fail silently. It looks like part of the output is missing, but really the script was just terminating prematurely.
I encourage you to consider this option, and check your script for errors. Be aware that errors in the script don't appear in the output log.

Use SQL Workbench to read a variable from a file

UPDATE: in the workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to execute the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVarDef today=@"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process but as far as I can tell, I am using this according to the SQL Workbench documentation.
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of Workbench that did not support this functionality. Now that I've upgraded to build 119, it is working. I'm having other issues, but that's a different story...

Stop Groovy console truncating output?

Is it possible to stop the Groovy console truncating output?
Using the 1.8.4 console, if I execute the following script:
for (i in 0..4000) println i
I get the following output:
01
602
603
...
3999
4000
I can't see any options to preserve all program output.
Expanding upon the accepted answer: groovyConsole uses JAVA_OPTS, so anything you set there will be picked up. For instance, if you wanted to increase the maximum memory to 4 GB and the console output limit to 200,000 characters, you could run this before starting groovyConsole:
export JAVA_OPTS="-Xmx4096m -Dgroovy.console.output.limit=200000"
This file:
https://svn.codehaus.org/groovy/trunk/groovy/groovy-core/src/main/groovy/ui/Console.groovy
contains this code:
// Maximum number of characters to show on console at any time
int maxOutputChars = System.getProperty('groovy.console.output.limit','20000') as int
That seems to be the thing I want to change. There was even a JIRA on this:
https://issues.apache.org/jira/browse/GROOVY-4425
But so far I haven't been able to pass this property as a -D option through groovyConsole or groovyConsole.bat; the started console immediately closes. Will update if/when I figure out how to easily pass this property through to the console.

How to use BCP to dump query (cdc function ) retrieved data to text file

I'm trying to use BCP to dump data from a CDC function into a .dat file. I'm using the following query (which works in SQL Server 2008 R2):
USE LEESWIJZER
DECLARE @begin_time datetime
, @end_time datetime
, @from_lsn binary(10)
, @to_lsn binary(10)
SET @end_time = '2013-07-05 12:00:00.000';
SELECT @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', @end_time);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_LWR_CONTRIBUTIES')
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS ChangeDTS
, *
FROM cdc.fn_cdc_get_net_changes_dbo_LWR_CONTRIBUTIES (@from_lsn, @to_lsn, 'all')
(edited for readability; used in BCP as a single string)
My BCP string is:
BCP "Query above" queryout "C:\temp\LWRCONTRIBUTIES.dat" -w -t ";|" -r \n -T -S {server\\instance} -o "C:\temp\LWRCONTRIBUTIES.log"
As you can see, I want a resulting .dat file in Unicode, plus a log file. I'm guessing the "ChangeDTS" column added to the function outcome is causing my problem. The error message reads: "[Microsoft][SQL Native Client]Host-file columns may be skipped only when copying into the Server".
It may be resolved using a format file, but since this code needs to run daily, likely more than once a day, and the tables are subject to change, I'm reluctant to constantly adjust my format files (there are hundreds of tables needing the same procedure).
Furthermore, this is run on a client's database, and they won't like me creating views in it.
Anybody got any idea how I can create a text file (.dat) with a selected number of columns from a CDC function?
Found the answer: regardless of which version of BCP is used, it seems BCP can't handle declarations. If I edit those out, it works like a charm.
However, according to someone on a different forum, BCP should be able to handle variable declarations. So I'm happy it works for me now, but still confused about why it does now and didn't before.

How can I make log4perl output easier to read?

When using log4perl, the debug log layout that I'm using is:
log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp
This produces very verbose debug logs, for example:
2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed
This works great, and provides excellent debugging data.
However, each line of the debug log contains different function names, PID widths, and so on. This makes each line lay out differently, and makes reading debug logs much harder than it needs to be.
Is there a way in log4perl to format the line so that the debugging metadata (everything up until the actual log message) is padded at the end with spaces/tabs, so that the actual message starts at the same column of text?
You can pad the single fields that make up your entries. For example [pid=%5P] will always give you at least 5 characters for the PID.
The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
There are a few ways to go with this, although you have to figure out which one works best for your situation:
Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.
Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry.
Log to a database instead of a file. For your reports, select just the columns you want to inspect.
Log everything, but write a filter to go through the log file ad-hoc to display just the parts that you want to see, such as only the debugging messages, the entries between certain times, only the entries involving a file, and so on.
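As a rough illustration of that last suggestion (this sketch is not part of the original answer; the script name is made up, and it assumes the level appears as a literal word such as DEBUG on each line, as in the layout from the question):

#!/usr/bin/perl
# logfilter.pl - print only DEBUG entries that mention a given string.
# Usage: perl logfilter.pl SomeModule.pm debug.log
use strict;
use warnings;

my $wanted = shift @ARGV or die "usage: $0 <pattern> [logfile ...]\n";

while (my $line = <>) {
    next unless $line =~ /\bDEBUG\b/;    # keep only debugging messages
    next unless $line =~ /\Q$wanted\E/;  # ...that involve the given file or module
    print $line;
}

The same idea covers the "entries between certain times" case by matching on the leading %d timestamp instead of the level.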