SQL Anywhere: Watcom SQL or T-SQL

A general question.
I am developing for Sybase SQL Anywhere 10. For backwards compatibility reasons, almost all our stored procedures are written in Transact-SQL.
Are there any advantages or disadvantages to using T-SQL instead of the Watcom dialect?

Advantages of TSQL:
greater compatibility with Sybase ASE and Microsoft SQL Server
Disadvantages of TSQL:
some statements and functionality are only available in Watcom-SQL procedures. Some examples:
greater control over EXECUTE IMMEDIATE behavior in Watcom-SQL
LOAD TABLE, UNLOAD TABLE, REORGANIZE (among others) are only available in Watcom-SQL
the FOR statement for looping over the results of a query and automatically declaring variables to contain the values is very useful, but not available in TSQL (see the sketch after this list)
error reporting is less consistent, since TSQL procedures are assumed to handle their own errors, while Watcom-SQL procedures report errors immediately; Watcom-SQL procedures can also contain an EXCEPTION clause to handle errors
statements are not delimited by semi-colons, so TSQL procedures are more difficult to parse (and read). Syntax errors can sometimes fail to point to the actual location of the error
no ability to explicitly declare the result set of a procedure
no support for row-level triggers in TSQL
event handlers can only be written using Watcom-SQL
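To make those points concrete, here is a minimal Watcom-SQL sketch (hypothetical table and column names) showing the FOR loop, which implicitly declares a variable for each select-list alias, together with an EXCEPTION clause; neither is available in TSQL procedures:

    CREATE PROCEDURE raise_prices()
    BEGIN
        -- FOR implicitly declares product_id and product_price
        -- from the select-list aliases
        FOR price_loop AS price_cur CURSOR FOR
            SELECT id AS product_id, price AS product_price
            FROM products
        DO
            UPDATE products
            SET price = product_price * 1.05
            WHERE id = product_id;
        END FOR;
    EXCEPTION
        -- errors are reported immediately unless handled here
        WHEN OTHERS THEN
            ROLLBACK;
    END;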
The documentation for SQL Anywhere T-SQL compatibility is available online. There are some database options that change behaviour to more closely match what you would expect from Sybase ASE. In addition, there are some functions that can be used to translate from one syntax to another.
Note that if you want to start adding statements in the Watcom dialect into an existing stored procedure, you will need to change the SP so that it is entirely written in the Watcom dialect. You cannot mix syntaxes in a SP, trigger, or batch.

What KM said - on the other hand, the "Watcom" dialect is much closer to ISO/ANSI-standard SQL, so that dialect is more likely to match other products and to appeal to people familiar with the SQL standards.

If you ever try to port to SQL Server (or you go for a job working with SQL Server), Sybase T-SQL is very close to SQL Server T-SQL. Sybase and Microsoft joined up back in the day, so the cores of those languages are very similar.

Related

How to prevent SQL injection on Athena?

I am working on a Spring Java program that accesses data from Athena, but I found that the Athena JDBC driver does not support PreparedStatement. Does anyone have an idea how to avoid SQL injection on Athena?
Update: I originally answered this question in 2018, and since then Athena now supports query parameters.
Below is my original answer:
You'll have to format your SQL query as a string before you prepare the query, and include variables by string concatenation.
In other words, welcome to PHP programming circa 2005! :-(
This puts the responsibility on you and your application code to ensure the variables are safe, and don't cause SQL injection vulnerabilities.
For example, you can cast variables to numeric data types before you interpolate them into your SQL.
Or you can create an allowlist when it's possible to declare a limited set of values that may be allowed. If you accept input, check it against the allowlist. If the input is not in the allowlist, don't use it as part of your SQL statement.
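To illustrate the casting point with a sketch (hypothetical table and column names): if the application casts user input to an integer before interpolating it, the worst an attacker can inject is a number; raw interpolation is what opens the hole:

    -- Input "42" cast to an integer, then interpolated: safe
    SELECT * FROM access_logs WHERE user_id = 42;

    -- Raw interpolation of the input "1 OR 1=1": injection succeeds
    SELECT * FROM access_logs WHERE user_id = 1 OR 1=1;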
I recommend you give feedback to the AWS Athena project and ask them when they will provide support for SQL query parameters in their JDBC driver. Email them at Athena-feedback@amazon.com
See also this related question: AWS Athena JDBC PreparedStatement
Athena now has support for prepared statements (this was not the case when the question was asked).
That being said, prepared statements aren't the only way to guard against SQL injection attacks in Athena, and SQL injection attacks aren't as serious as they are in a database.
Athena is just a query engine, not a database. While dropping a table can be disruptive, tables are just metadata, and the data is not dropped along with it.
Athena's API does not allow multiple statements in the same execution, so you can't sneak a DROP TABLE foo into a statement without completely replacing the query.
Athena does not, by design, have any capability of deleting data. Athena has features that can create new data, such as CTAS, but it will refuse to write into an existing location and cannot overwrite existing data.
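For completeness, a minimal sketch of Athena's SQL-level prepared statements (hypothetical table and column names); the parameter is bound at execution time rather than concatenated into the query text:

    -- Create the prepared statement (run as its own execution)
    PREPARE get_orders FROM
    SELECT order_id, total
    FROM orders
    WHERE customer_id = ?;

    -- Run it with a bound parameter (a separate execution)
    EXECUTE get_orders USING 12345;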

Matlab's JDBC/ODBC SQL behaviour differs from Access 2010

This is a hybrid question & public service announcement. The basic question is whether there are convenient and/or efficient workarounds to the limitations listed below. I spent half the morning discovering that one cannot just transplant SQL code from Access to Matlab. I've boiled it down to 3 points so far.
Can't use double quotes in SQL statements to avoid collision with Matlab's string delimiter. The Matlab code for the SQL strings can become quite complicated, especially if the SQL strings already use repeated single quotes to represent a single quote within a string constant.
Must always specify a source table from which to query. What won't work is "SELECT #2015-07-28#". One basically needs to create a 1-row dummy table.
Must always select at least one named field in the table being queried. An asterisk does not seem to suffice.
The above limitations do not exist when submitting SQL code using the Access Query Designer (either in SQL 92 mode or not), nor do these limitations exist when submitting SQL code using VBA via CurrentProject.Connection.Execute.
Hopefully, this saves someone else some time in learning about these differences. And if anyone has found a workaround, that would be appreciated. Note that the above is in the context of using the JDBC/ODBC bridge (the 3rd of 3 illustrated configurations in the drivers documentation). The database toolbox documentation for directly connecting to an Access file using code (rather than setting up a data source using the GUI) only describes a code pattern that uses the JDBC/ODBC bridge. This is described in the example "Connect to Microsoft Access Using a File DSN" on the "Connect to database" page. I'd like to stick to this approach because I want to be able to directly specify the source *.accdb file without jumping through the GUI hoops of setting up a data source.
I've posted this to:
Stack overflow
Usenet
While I have not connected Matlab to Access, I have connected Access databases to PHP, Python, SAS, R, even Excel without any issues with quotes or asterisks. As for the required source table in queries: that is simply due to the Jet/ACE SQL dialect, as each dialect has its particular rules. Your query above should work for a one-row, one-column output of such a date once a source table is supplied. SQL Server does not require a source table, but Oracle does. Access dates are in the MM/DD/YYYY format, requiring # delimiters in conditional statements; MySQL uses dates in the YYYY-MM-DD format, requiring single quotes.
Are you using the Windows-installed Jet/ACE ODBC drivers? Do note that these drivers are software external to any application (not part of Access) and can be used by any client interfacing with any data source. As an aside, many are not aware that Access's backend database engine, Jet/ACE, is actually a Windows technology and not restricted to Access; PC users without Access installed can still use this engine. Hence, connection strings are standard across ODBC calls, and you can apply the same principles to Matlab database connections.
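For example, a minimal Jet/ACE sketch (hypothetical names) of the one-row dummy-table workaround and of the dialect-specific date literals:

    -- Create a one-row helper table once, then select constants from it
    CREATE TABLE Dual (id INTEGER);
    INSERT INTO Dual (id) VALUES (1);
    SELECT #2015-07-28# AS SomeDate FROM Dual;

    -- Date literals: Jet/ACE uses # delimiters and MM/DD/YYYY
    SELECT * FROM Orders WHERE OrderDate = #07/28/2015#;
    -- MySQL / ANSI style would instead be: OrderDate = '2015-07-28'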
With that said, my suggestions/workarounds:
properly learn the SQL dialect for the connecting database and check whether the query runs; optimize or rewrite as needed
try escaping the various punctuation marks not compliant in Matlab strings, or use the ASCII counterparts: char(34) for double quote; char(39) for single quote; char(42) for asterisk
if select statements are limited in Matlab strings, create a temp table from a query (use a make-table query, SELECT * INTO newtable FROM query, outside of Matlab), then connect to this new table (see the sketch after this list)
use an intermediary coding language (VBA, Python, C#), by user trigger or command line, to connect to the database and export the needed dataset, then automate Matlab to import it
export the query to a flat-file format (csv, txt, etc.) for Matlab import
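A minimal sketch of the make-table workaround mentioned above (hypothetical names), run inside Access before connecting from Matlab:

    -- Materialize the complex query into a plain table
    SELECT q.* INTO MatlabExport
    FROM MyComplexQuery AS q;

    -- Matlab then only needs a trivial, bridge-friendly query
    SELECT SomeField FROM MatlabExport;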

Behaviour of fetchrow_hashref in Perl

I am trying to execute a procedure from Perl and store the results in a text file (on Windows). I am using DBI's fetchrow_hashref() to fetch row results. The stored procedure that I am trying to execute returns more than 5 million rows. I want to know what happens behind the scenes - particularly during the fetchrow_hashref() call. For example: Perl executes the procedure, the procedure returns all the affected rows and keeps them in a pool (on the database side or on the calling machine's side?), and then Perl fetches the rows from the result set one by one. Does it happen that way, or something else?
This is a difficult question to answer as you've not said which Perl database driver you are using. I'm assuming you are using DBD::ODBC and the MS SQL Server ODBC Driver in this answer.
When you call prepare on the SQL calling the procedure, the ODBC driver sends it to MS SQL Server, where the procedure is parsed. On calling execute, the procedure is started (how you progress through the procedure depends on a lot of things). Assuming the first thing in your procedure is a select, a cursor will be created for the query and MS SQL Server will start sending the rows back to the ODBC driver (it uses the TDS protocol). In the meantime, DBD::ODBC will make ODBC calls to the driver which tell it there is a result-set (SQLNumResultCols returns a non-zero value). DBD::ODBC will then query the driver for the types of the columns in the result-set and bind them (SQLBindCol).
Each time you call fetchrow_hashref, DBD::ODBC will call SQLFetch; the ODBC driver will read a row from the socket and copy the data to the bound buffers.
There is an important thing to realise here: MS SQL Server will usually write a lot of rows to the socket up front, even though the ODBC driver is probably not reading them yet. As a result, if you close your statement, the driver has to read a lot of rows from the socket and throw them away. If you use a non-standard cursor or enable Multiple Active Statements in the driver, then rows are sent back to the driver one at a time, so the ODBC driver can ask the server to move forward or backward in the result-set, or request a row from result-set 1 and then result-set 2.
There are other areas that are a little unusual when using procedures, such as whether nocount is enabled or not, and the fact that you progress through your procedure's statements using SQLMoreResults (odbc_more_results). Also, the output parameters of a procedure are not available until SQLMoreResults returns false.
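As an aside, a minimal T-SQL sketch (hypothetical names) of why nocount matters: without SET NOCOUNT ON, each DML statement in the procedure produces an extra "rows affected" result that the client has to step over with SQLMoreResults before reaching the real result-set:

    CREATE PROCEDURE dbo.get_big_rows AS
    BEGIN
        -- Suppress the "N rows affected" messages
        SET NOCOUNT ON;
        -- Without NOCOUNT this UPDATE would emit an extra result
        UPDATE dbo.run_stats SET last_run = GETDATE();
        -- The result-set the Perl client actually wants
        SELECT id, name FROM dbo.big_table;
    END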
You may find Multiple Active Statements (MAS) and DBD::ODBC of some interest, as may some of the other articles. You may also want to read about the TDS protocol.

How compatible is Blackfish database with TSQL?

I'm using a sqlServer database that has stored procedures, and I want to use an in-memory database to unit test my code.
I've looked at a few - including VistaDB, which looks amazing but expensive - and Blackfish seems to be the only possibility so far. Before using it, though, I'd like to know exactly how compatible it is with TSQL - obviously my existing stored procedures use TSQL, so it's important that the in-memory db I use can handle this.
Thanks
Short Answer: Not Very
Long Answer:
Whilst Blackfish is SQL-92 compliant, you’re bound to run into stuff that worked on your T-SQL database but won’t work on Blackfish.
I'd strongly recommend SQL Server Compact 4.0 (or Express at a pinch); Compact can be easily bundled and has a tiny footprint (a roughly 3 MB installer, around 18 MB on disk).
For instance, T-SQL flow control might differ from Blackfish flow control - not really relevant for selects, inserts, updates, etc., but if you have T-SQL logic gates in stored procedures, I don't think these will port to Blackfish. Blackfish supports stored procedures, but they are written in other native languages (mainly Delphi). A good example from the documentation:
http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/bfsql/storedprocedures_xml.html
Very different from the T-SQL procedures used in MS SQL.
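For illustration, a minimal T-SQL sketch (hypothetical names) of the kind of procedural flow control that is unlikely to port, since Blackfish procedures are written in a host language rather than in SQL:

    CREATE PROCEDURE dbo.check_stock @product_id INT AS
    BEGIN
        IF EXISTS (SELECT 1 FROM dbo.stock
                   WHERE id = @product_id AND qty > 0)
            SELECT 'in stock' AS status;
        ELSE
            SELECT 'out of stock' AS status;
    END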

Is writing eSQL database independent or not?

Using EF we can use LINQ to read data, which is rather simple (especially using fluent calls), but we have less control unless we write eSQL on our own.
Is writing eSQL actually data store independent code?
So if we decide to change data store, can the same statements still be used?
Does writing eSQL strings in your code pose any serious security threats, similar to writing TSQL statements as plain strings in C# code? (That is why stored procedures are recommended.) Could we still move eSQL scripts outside of the code and use some other technique to make them a bit more secure?
ESQL is database independent in general, so it can be used like LINQ to Entities.
But please be aware that it has more serious limitations: it has no DML or DDL, and no DB-specific abilities.
The main ESQL disadvantage is that even a simple query of a couple of lines can be translated into a monstrous SQL query for a particular DBMS, so one should check that the generated SQL is appropriate and analyze whether it is optimal.
ESQL will not be executed directly on a database, it will be translated to SQL.
EF security discussion usually starts from connection string protection, then model security is discussed, and only after that is query protection analyzed. It's up to the developer to decide whether a particular query should be protected.
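For illustration, a minimal Entity SQL sketch (hypothetical model and property names): the query targets the conceptual model rather than a specific database, and binding @country as a parameter, rather than concatenating user input into the string, avoids the injection risk raised above:

    -- Entity SQL against the conceptual model; EF translates it to store SQL
    SELECT VALUE c
    FROM MyModel.Customers AS c
    WHERE c.Country = @country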