NetBeans SQL: selecting column names with # in the name

I have an odd problem with NetBeans (6.7.1). Using the built-in SQL editor, I cannot select any column defined with a # in its name. It appears that NetBeans is treating this as a comment and never passing it to the underlying connection. Is there a way to change this?
Thanks,
David

If you have any control over the column names, I suggest you remove the # symbols. NetBeans is not the only application that will choke on them.
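If renaming is an option, something along these lines would do it. The table and column names here are made up, and the exact syntax for quoting and renaming identifiers varies a little between databases:
-- Hypothetical example: rename a column called "order#" to "order_no".
-- Double quotes are the standard SQL way to quote an identifier containing
-- special characters; some databases use brackets or backticks instead.
ALTER TABLE orders RENAME COLUMN "order#" TO order_no;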

Related

make tsvector tokenize by space only

I need to create a tsvector that does not split its content by hyphens but ideally only by whitespace.
select to_tsvector('simple','7073-03-001-01 7072-05-003-06')
creates
'-001':3 '-003':7 '-01':4 '-03':2 '-05':6 '-06':8 '7072':5 '7073':1
where I rather want
'7072-05-003-06':2 '7073-03-001-01':1
is this possible somehow?
There is a simple example of a parser called test_parser which seems to do what you want. It was last documented in the 9.4 manual; after that it is only documented in the source tree. These test extensions aren't always installed, so you might need to take special steps to get it, depending on how you installed PostgreSQL, what your OS is, and whether you are really using an EOL version.
CREATE EXTENSION test_parser;
CREATE TEXT SEARCH CONFIGURATION test (parser = testparser);
ALTER TEXT SEARCH CONFIGURATION test ADD MAPPING FOR word WITH simple;
SELECT * FROM to_tsvector('test', '7073-03-001-01 7072-05-003-06');
to_tsvector
---------------------------------------
'7072-05-003-06':2 '7073-03-001-01':1
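Assuming a table docs with a text column body (names made up for illustration), a query against the new configuration would then look something like:
SELECT *
FROM docs
WHERE to_tsvector('test', body) @@ plainto_tsquery('test', '7073-03-001-01');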

Eclipse navigator and project explorer sort the numbers in file names wrong

For instance, when I have the following files:
pro0.cpp
pro1.cpp
pro2.cpp
pro3.cpp
pro10.cpp
pro11.cpp
I expect to see them in the above order, but Eclipse sorts them like this:
pro0.cpp
pro1.cpp
pro10.cpp
pro11.cpp
pro2.cpp
pro3.cpp
I searched around but failed to find any related information about this issue. Is it not an issue at all? Or has this problem only occurred to me?
This is working as intended. These views simply sort the file names using the result of the Java String.compareTo method, which compares the strings character by character, left to right. It does not try to look for numbers in the strings, which gives the result you see.
Some file viewers (macOS Finder, for one) do look for numbers in the file names and sort using the whole number. This is quite a bit more complex, and the Eclipse views don't try to do it.

Use SQL Workbench to read a variable from a file

UPDATE: in the workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to execute the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVardef today=#"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process, but as far as I can tell I am using it according to the SQL Workbench documentation.
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of Workbench that did not support this functionality. Now that I have upgraded to build 119, it is working. I'm having other issues, but that's a different story....

Progress 4GL: creating a .xlsx file without Excel

Version: 10.2b
I want to create a .xlsx file with Progress, but the machine this will run on doesn't have Excel.
Can someone point me in the right direction on how to do this?
Is there a library already written that can do something like this?
Thanks for any help!
The project was moved to the Free DocxFactory Project.
It was rewritten in C++ with Progress 4GL/ABL wrappers and a tutorial.
It is 300x faster, a lot of new features were added (including barcodes, paging features, etc.),
and it's completely free for private and commercial use without any time or feature limits.
HTH
You might find this to be useful: http://www.oehive.org/project/libooxml although it appears that there is nothing there right now. There might also be an older version of that code here: http://www.oehive.org/project/lib
Also -- in many cases the need to provide data to Excel can be satisfied with a Tab or Comma delimited file.
Another trick is to create an HTML table fragment. Excel imports those quite nicely.
A super simple example of how to export a semicolon-delimited file from a temp-table. In 90% of cases this is enough Excel support - at least it has been for me.
DEFINE STREAM strCsv.
DEFINE TEMP-TABLE ttExample NO-UNDO
    FIELD col1 AS CHARACTER
    FIELD col2 AS INTEGER.
CREATE ttExample.
ASSIGN ttExample.col1 = "ABC"
       ttExample.col2 = 123.
CREATE ttExample.
ASSIGN ttExample.col1 = "DEF"
       ttExample.col2 = 456.
OUTPUT STREAM strCsv TO VALUE("c:\test\test.csv").
FOR EACH ttExample NO-LOCK:
    /* EXPORT STREAM directs the rows to strCsv rather than the unnamed default output. */
    EXPORT STREAM strCsv DELIMITER ";" ttExample.
END.
OUTPUT STREAM strCsv CLOSE.

Disabling the PostgreSQL 8.4 tsvector parser's `file` token type

I have some documents that contain sequences such as radio/tested, which I would like to match in queries like
select * from doc
where to_tsvector('english', body) @@ to_tsquery('english', 'radio')
Unfortunately, the default parser treats radio/tested as a file token (despite this being a Windows environment), so it doesn't match the above query. When I run ts_debug on it, I can see that it's recognized as a file, and the lexeme ends up being radio/tested rather than the two lexemes radio and test.
Is there any way to configure the parser not to look for file tokens? I tried
ALTER TEXT SEARCH CONFIGURATION public.english
DROP MAPPING FOR file;
...but it didn't change the output of ts_debug. If there's some way of disabling file tokens, or at least having it recognize both the file and all the words that it thinks make up the directory names along the way, or a way to get it to treat slashes as hyphens or spaces (without the performance hit of regexp_replaceing them myself), that would be really helpful.
I think the only way to do what you want is to create your own parser :-( Copy wparser_def.c to a new file, remove from the parse tables (actionTPS_Base and the ones following it) the entries that relate to files (TPS_InFileFirst, TPS_InFileNext, etc.), and you should be set. I think the main difficulty is making the module conform to PostgreSQL's C idiom (PG_FUNCTION_INFO_V1 and so on). Have a look at contrib/test_parser/ for an example.
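Once such a module compiles and is installed, the SQL side of wiring it in would look roughly like the following. This is only a sketch modeled on contrib/test_parser: the function, parser, and library names (nofile_start, nofile_parser, and so on) are hypothetical, the library path depends on how you build and install the module, and you would add mappings for every token type your parser still emits, not just the two shown.
-- Register the C entry points of the custom parser (hypothetical names;
-- the signatures mirror PostgreSQL's text search parser API).
CREATE FUNCTION nofile_start(internal, int4)
    RETURNS internal AS '$libdir/nofile_parser' LANGUAGE C STRICT;
CREATE FUNCTION nofile_gettoken(internal, internal, internal)
    RETURNS internal AS '$libdir/nofile_parser' LANGUAGE C STRICT;
CREATE FUNCTION nofile_end(internal)
    RETURNS void AS '$libdir/nofile_parser' LANGUAGE C STRICT;
CREATE FUNCTION nofile_lextype(internal)
    RETURNS internal AS '$libdir/nofile_parser' LANGUAGE C STRICT;
-- Create the parser itself and a configuration that uses it.
CREATE TEXT SEARCH PARSER nofile_parser (
    START    = nofile_start,
    GETTOKEN = nofile_gettoken,
    END      = nofile_end,
    LEXTYPES = nofile_lextype
);
CREATE TEXT SEARCH CONFIGURATION english_nofile (PARSER = nofile_parser);
ALTER TEXT SEARCH CONFIGURATION english_nofile
    ADD MAPPING FOR asciiword, word WITH english_stem;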