Sybase: Incorrect syntax near 'go' in an 'IF EXISTS' block - tsql

This is my SQL statement:
IF EXISTS (select 1 from sysobjects where name = 'PNL_VALUE_ESTIMATE')
drop table dbo.PNL_VALUE_ESTIMATE
go
isql bails out with this error message:
Msg 102, Level 15, State 1:
Server 'DB_SERVER', Line 3:
Incorrect syntax near 'go'.
But the SQL statement looks correct to me. What's wrong?
The Sybase version is 15.

Try this:
IF EXISTS (select 1 from sysobjects where name = 'PNL_VALUE_ESTIMATE')
drop table dbo.PNL_VALUE_ESTIMATE
go
or this:
IF EXISTS (select 1 from sysobjects where name = 'PNL_VALUE_ESTIMATE')
BEGIN
drop table dbo.PNL_VALUE_ESTIMATE
END
go
or this:
IF EXISTS (select 1 from sysobjects where name = 'PNL_VALUE_ESTIMATE')
BEGIN
select 1
END
go
Does any of these work?

GO is not a keyword of T-SQL, but of the editor.
SSMS (among others) uses it as a separator between the batches of commands it sends to the database server. Executing it inside a stored procedure, or anywhere the server itself has to parse it, won't work.
edit: Maybe it works with Sybase, but I think it'll need to be uppercase in that case.
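A quick way to see that GO is only a client-side batch separator, not a server statement (a sketch for isql or SSMS; the exact error text varies by server):
DECLARE @n int
SELECT @n = 1
go
-- the first batch ended at "go", so @n no longer exists here and this
-- second batch fails with something like "Must declare variable '@n'"
SELECT @n
go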

From the documentation, the GO statement is a command of the editor you're using, not SQL itself:
GO is not a Transact-SQL statement; it is a command recognized by the
sqlcmd and osql utilities and SQL Server Management Studio Code
editor.
That said, Sybase's isql client also supports the GO command.
I've had the same problem, but with SQL Server Management Studio. The issue is that the editor does not support mixed newline types around certain statements, GO being one of them. In Management Studio, for example, only Windows-style newlines (CR+LF) are accepted, and when I used the Linux format (LF) it gave exactly the same error as yours above.
Text editors such as Notepad++ (what I use) have an option for which type of end-of-line character is used by default (Windows CR+LF, Linux LF, Mac CR).
Try checking which newline character(s) are being used in your statements to see if that fixes the problem.

Shouldn't the object reference be
dbo..PNL_VALUE_ESTIMATE
since you haven't given a database name? If you include the object owner, you need .. to skip the database name.
I'd go:
EXEC('DROP TABLE dbo..PNL_VALUE_ESTIMATE')
in the true branch as well, because the DROP TABLE is compiled with the rest of the batch, and if the table isn't there you'll still get a failure.
Do you even need dbo? If your SQL always runs as dbo, just leave it out.
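Putting those suggestions together, a minimal sketch (the type = 'U' filter, restricting the check to user tables, is an extra precaution not in the original post):
IF EXISTS (SELECT 1 FROM sysobjects WHERE name = 'PNL_VALUE_ESTIMATE' AND type = 'U')
EXEC('DROP TABLE PNL_VALUE_ESTIMATE')
go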


How to export full-text files with SQL?

Is there an easy way to import/export full-text fields as files? One
that solves the "load as multiple lines" problem: trying with SQL's COPY I can only transform a full file into a full table, not into a single text field, because COPY treats each line as a row.
And one that solves the save-back problem: saving the full XML file to the filesystem without changes in its binary representation (preserving the SHA1), and without other external procedures (such as Unix sed).
The main problem is on export, hence the title of this page.
PS: the "proof of same file" in the round trip (import, export back and compare with the original) can be obtained with sha1sum; see the examples below. So a natural demand is also to check the same SHA1 by SQL, avoiding an export for simple check tasks.
All examples
Import a full text file into a full table (not what I need), and test that it can be exported back as the same text. PS: I need to import one file into one field of one row.
Transform the full table into one file (not what I need), and test that it can be exported as the same text. PS: I need one row (of one field) into one file.
Calculate the hash by SQL, the SHA1 of the field. It must be the same when compared... else it is not a solution for me.
The following examples show each problem and a non-elegant workaround.
1. Import
CREATE TABLE ttmp (x text);
COPY ttmp FROM '/tmp/test.xml' ( FORMAT text ); -- breaks the file into one row per line
COPY (SELECT x FROM ttmp) TO '/tmp/test_back.xml' (format TEXT);
Checking that original and "back" have exactly the same content:
sha1sum /tmp/test*.*
570b13fb01d38e04ebf7ac1f73dfad0e1d02b027 /tmp/test_back.xml
570b13fb01d38e04ebf7ac1f73dfad0e1d02b027 /tmp/test.xml
PS: this seems perfect, but the problem here is the use of many rows. A real import solution would load the file into one row (and one field). A real export solution would be a SQL function that produces test_back.xml from a single row (of a single field).
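(A hedged sketch of such a one-row import, using the pg_read_file() function that an answer below discusses; note its server-side path restrictions, and the table name here is illustrative:)
CREATE TABLE one_row (x text);
INSERT INTO one_row (x) SELECT pg_read_file('/tmp/test.xml');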
2. Transform full table into one file
Use it to store XML:
CREATE TABLE xtmp (x xml);
INSERT INTO xtmp (x)
SELECT array_to_string(array_agg(x),E'\n')::xml FROM ttmp
;
COPY (select x::text from xtmp) TO '/tmp/test_back2-bad.xml' ( FORMAT text );
... But it does not work, as we can check with sha1sum /tmp/test*.xml: it does not produce the same result for test_back2-bad.xml.
So we also translate the literal \n back into chr(10), using an external tool (Perl, sed or any other): perl -p -e 's/\\n/\n/g' /tmp/test_back2-bad.xml > /tmp/test_back2-good.xml
Ok, now test_back2-good.xml has the same hash ("570b13fb..." in my example) as the original.
Using Perl is a workaround; how can this be done without it?
3. The SHA1 of the field
SELECT encode(digest(x::text::bytea, 'sha1'), 'hex') FROM xtmp;
Not solved: it is not the same hash as the original file (the "570b13fb..." in my example)... Perhaps the ::text cast enforced an internal representation with \n symbols, so a solution would be a direct cast to bytea, but that is an invalid cast. The other workaround is also not a solution:
SELECT encode(digest( replace(x::text,'\n',E'\n')::bytea, 'sha1' ), 'hex')
FROM xtmp
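(For the record, a valid text-to-bytea conversion does exist via convert_to(); it avoids the invalid cast, though whether the hash then matches the file still depends on how the xml-to-text cast renders the content:)
SELECT encode(digest(convert_to(x::text, 'UTF8'), 'sha1'), 'hex') FROM xtmp;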
... I also tried CREATE TABLE btmp (x bytea) and COPY btmp FROM '/tmp/test.xml' ( FORMAT binary ), but it fails with an error ("unknown COPY file signature").
COPY isn't designed for this. It's meant to deal with table-structured data, so it can't work without some way of dividing rows and columns; there will always be some characters which COPY FROM interprets as separators, and for which COPY TO will insert some escape sequence if it finds one in your data. This isn't great if you're looking for a general file I/O facility.
In fact, database servers aren't designed for general file I/O. For one thing, anything which interacts directly with the server's file system will require a superuser role. If at all possible, you should just query the table as usual, and deal with the file I/O on the client side.
That said, there are a few alternatives:
The built-in pg_read_file() function, and pg_file_write() from the adminpack module, provide the most direct interface to the file system, but they're both restricted to the cluster's data directory (and I wouldn't recommend storing random user-created files in there).
lo_import() and lo_export() are the only built-in functions I know of which deal directly with file I/O and which have unrestricted access to the server's file system (within the constraints imposed by the host OS), but the Large Object interface is not particularly user-friendly (see the sketch after this list).
If you install the untrusted variant of a procedural language like Perl (plperlu) or Python (plpythonu), you can write wrapper functions for that language's native I/O routines.
There isn't much you can't accomplish via COPY TO PROGRAM if you're determined enough - for one, you could COPY (SELECT 1) TO PROGRAM 'mv <source_file> <target_file>' to work around the limitations of pg_file_write() - though this blurs the line between SQL and external tools somewhat (and whoever inherits your codebase will likely not be impressed...).
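To illustrate the Large Object route from the list above, a minimal sketch (assumes PostgreSQL 9.4+ for lo_from_bytea(), a superuser, and the xtmp table from the question; the target path is server-side and illustrative):
DO $$
DECLARE loid oid;
BEGIN
-- build a large object from the single field, export it server-side, then clean up
SELECT lo_from_bytea(0, convert_to(x::text, 'UTF8')) INTO loid FROM xtmp;
PERFORM lo_export(loid, '/tmp/test_lo.xml');
PERFORM lo_unlink(loid);
END $$;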
You can use plpythonu's open(), write() and close() within a Postgres function to write to a file.
The language extension needs to be installed.
https://www.postgresql.org/docs/8.3/static/plpython.html
Working example from the mailing list.
https://www.postgresql.org/message-id/flat/20041106125209.55697.qmail%40web51806.mail.yahoo.com#20041106125209.55697.qmail#web51806.mail.yahoo.com
For example, with plpythonu:
CREATE FUNCTION makefile(p_file text, p_content text) RETURNS text AS $$
# args[] holds the call arguments: args[0] = p_file, args[1] = p_content
o = open(args[0], "w")
o.write(args[1])
o.close()
return "ok"
$$ LANGUAGE plpythonu;
PS: for a safe implementation, see this example.
Preparing
There is a not-so-obvious procedure for using the PL/Python extension. Supposing an Ubuntu server:
In SQL, check SELECT version().
On the terminal, check the versions listed by sudo apt install postgresql-plpython.
Install the correct version, e.g. sudo apt install postgresql-plpython-9.6.
Back in SQL, run CREATE EXTENSION plpythonu.
Testing
/tmp is the default; to create or use another folder, e.g. /tmp/sandbox, use sudo chown postgres:postgres /tmp/sandbox.
Suppose the tables from the question's examples. The SQL script, repeating some lines:
DROP TABLE IF EXISTS ttmp;
DROP TABLE IF EXISTS xtmp;
CREATE TABLE ttmp (x text);
COPY ttmp FROM '/tmp/sandbox/original.xml' ( FORMAT text );
COPY (SELECT x FROM ttmp) TO '/tmp/sandbox/test1-good.xml' (format TEXT);
CREATE TABLE xtmp (x xml);
INSERT INTO xtmp (x)
SELECT array_to_string(array_agg(x),E'\n')::xml FROM ttmp
;
COPY (select x::text from xtmp)
TO '/tmp/sandbox/test2-bad.xml' ( FORMAT text );
SELECT makefile('/tmp/sandbox/test3-good.xml',x::text) FROM xtmp;
The sha1sum *.xml output for my original XML file:
4947.. original.xml
4947.. test1-good.xml
949f.. test2-bad.xml
4947.. test3-good.xml

Use SQL Workbench to read a variable from a file

UPDATE: in the workbench/J log file I am seeing this error:
ERROR Variable names may only contain characters (a-z, A-Z), numbers and underscores
I'm sure this is what is causing my process to fail, but I have no idea why because my variables are named appropriately. I've tried renaming them a few times just in case and the same thing happens.
ORIGINAL POST:
I am working on an automated process to dump the contents of a Postgres query to a text file and FTP it to someone. The process I have been using successfully is a Windows batch script that runs SQL Workbench to run the query, write the entire contents of the table to a text file, and FTP it.
Now I want to be able to use WBVarDef to load a variable from a text file and use it in my query. For reference, the variable is the unique id of the last record that was FTPed. This is the code I have:
WBVarDef -variable=id -contentFile=id.txt;
WBVarDef today=#"select to_char(current_date,'mmddyyyy')";
WBExport -type=text
-file='c:/CLP/FTP/$[today]circ_trans.txt'
-delimiter='|'
-quoteAlways=true
-lineEnding=crlf
-encoding=utf8;
SELECT
*
FROM
transactions
WHERE
transactions.id > $[id]
ORDER BY
transactions.id;
The only thing new here is the reference to the text file that contains the id on the first line. This completely breaks the process, but as far as I can tell I am using it according to the SQL Workbench documentation.
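For reference, id.txt is expected to hold nothing but the id value on its first line; a single line such as 12345, a hypothetical value, is enough.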
Any help would be greatly appreciated.
I have figured this one out. I was running an older version of Workbench that did not support this functionality. Now that I've upgraded to build 119, it is working. I'm having other issues, but that's a different story....

SQL Anywhere v10 Syntax error near OUTPUT

I'm attempting to output a table to an outside file. I've found a few questions about this and followed the answers from there without any luck.
SELECT *
FROM transactions;
OUTPUT TO 'C:\Users\administrator\Desktop\Test.txt'
That is the statement I've been using. I've attempted different variations of formatting and file types, such as .csv, with no change.
Which produces:
ErrorCode : 102
SQLState : 42W04
Message : SQL Anywhere Error -131: Syntax error near 'OUTPUT' on line 1
SQL =
OUTPUT TO 'C:\Users\administrator\Desktop\Test.txt'
Appreciate all your help
Are you running this through dbisql, or in a different application? OUTPUT TO is a dbisql command, not a SQL statement recognized by the database server. You can use the UNLOAD statement in any application to have the server create the file.
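For example, a minimal sketch of the UNLOAD alternative (untested here; the backslashes are doubled because SQL Anywhere treats \ as an escape character in strings, and the file is written by the database server, so the path must be valid on the server machine):
UNLOAD
SELECT * FROM transactions
TO 'C:\\Users\\administrator\\Desktop\\Test.txt';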
Disclaimer: I work for SAP in SQL Anywhere engineering.

MySQL Workbench 5.2.47 CE: EDIT database.table command doesn't work

When I type the following command in the SQL editor I get Error Code: 1064
EDIT my_database.my_table;
but the command
SELECT * from my_database.my_table;
works fine.
Thanks
The EDIT command was only a temporary workaround until we had proper parsing in place to determine whether a query result can be edited. The keyword has not been supported for a year or more.

Netbeans SQL select column names with # in the name

I have an odd problem with NetBeans (6.7.1). Using the built-in SQL editor, I cannot select any column defined with a # in its name. It appears that NetBeans is treating this as a comment and never passing it to the underlying connection. Is there a way to change this?
Thanks,
David
If you have any control over the column names, I suggest you remove the # symbols. NetBeans is not the only application that will choke on them.
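If the schema is yours to change, the rename might look like this (MySQL-style syntax, on the assumption that # acting as a comment marker points to a MySQL connection; the column name and type are hypothetical):
ALTER TABLE my_table CHANGE COLUMN `qty#` qty_num INT;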