Is the Unicode prefix N still needed in SQL Compact Edition?

At least in previous versions of SQL Server, you had to prefix Unicode string constants with an "N" to have them treated as Unicode. Thus,
select foo from bar where fizz = N'buzz'
(See "Server-Side Programming with Unicode" for SQL Server 2005 "from the horse's mouth" documentation.)
We have an application that is using SQL Compact Edition and I am wondering if that is still necessary. From the testing I am doing, it appears to be unneeded. That is, the following SQL statements both behave identically in SQL CE, but the second one fails in SQL Server 2005:
select foo from bar where foo=N'າຢວ'
select foo from bar where foo='າຢວ'
(I hope I'm not swearing in some language I don't know about...)
I'm wondering if that is because all strings are treated as Unicode in SQL CE, or if perhaps the default code page is now Unicode-aware.
If anyone has seen any official documentation, either yea or nay, I'd appreciate it.
I know I could go the safe route and just add the "N"s, but there's a lot of code that would need to be changed, and if I don't need to, I don't want to! Thanks for your help!

SQL CE was originally developed for Windows CE, which is purely Unicode. As a result, SQL CE only supports Unicode string types (nchar, nvarchar, ntext), so string literals are already treated as Unicode and the "N" prefix is unnecessary.
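For example, a minimal sketch using the question's table (in SQL CE the column can only be a Unicode type such as nvarchar):
create table bar (foo nvarchar(50))
insert into bar (foo) values (N'າຢວ')
-- Both of these return the row in SQL CE, because the un-prefixed literal is
-- still Unicode. In SQL Server 2005 the second literal is first narrowed
-- through the database code page, so it can fail to match.
select foo from bar where foo = N'າຢວ'
select foo from bar where foo = 'າຢວ'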

Is double escaping in postgres enough to prevent SQL injections/attacks? (Alternative to using parameters) [duplicate]

I realize that parameterized SQL queries are the optimal way to sanitize user input when building queries, but I'm wondering what is wrong with taking the user input, escaping any single quotes, and surrounding the whole string with single quotes. Here's the code:
sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
Any single quote the user enters is replaced with two single quotes, which eliminates the user's ability to end the string, so anything else they may type, such as semicolons, percent signs, etc., will all be part of the string and not actually executed as part of the command.
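For example (an illustrative query; the real table names don't matter here), input like O'Brien would end up embedded as:
select * from users where name = 'O''Brien'   -- the doubled quote is read back as one literal apostrophe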
We are using Microsoft SQL Server 2000, for which I believe the single-quote is the only string delimiter and the only way to escape the string delimiter, so there is no way to execute anything the user types in.
I don't see any way to launch an SQL injection attack against this, but I realize that if this were as bulletproof as it seems to me someone else would have thought of it already and it would be common practice.
What's wrong with this code? Is there a way to get an SQL injection attack past this sanitization technique? Sample user input that exploits this technique would be very helpful.
UPDATE:
I still don't know of any way to effectively launch a SQL injection attack against this code. A few people suggested that a backslash would escape one single-quote and leave the other to end the string so that the rest of the string would be executed as part of the SQL command, and I realize that this method would work to inject SQL into a MySQL database, but in SQL Server 2000 the only way (that I've been able to find) to escape a single-quote is with another single-quote; backslashes won't do it.
And unless there is a way to stop the escaping of the single-quote, none of the rest of the user input will be executed because it will all be taken as one contiguous string.
I understand that there are better ways to sanitize input, but I'm really more interested in learning why the method I provided above won't work. If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it.
First of all, it's just bad practice. Input validation is always necessary, but it's also always iffy.
Worse yet, blacklist validation is always problematic; it's much better to explicitly and strictly define what values/formats you accept. Admittedly, this is not always possible - but to some extent it must always be done.
Some research papers on the subject:
http://www.imperva.com/docs/WP_SQL_Injection_Protection_LK.pdf
http://www.it-docs.net/ddata/4954.pdf (Disclosure, this last one was mine ;) )
https://www.owasp.org/images/d/d4/OWASP_IL_2007_SQL_Smuggling.pdf (based on the previous paper, which is no longer available)
Point is, any blacklist you do (and too-permissive whitelists) can be bypassed. The last link to my paper shows situations where even quote escaping can be bypassed.
Even if these situations do not apply to you, it's still a bad idea. Moreover, unless your app is trivially small, you're going to have to deal with maintenance, and maybe a certain amount of governance: how do you ensure that it's done right, everywhere, all the time?
The proper way to do it:
Whitelist validation: type, length, format or accepted values
If you want to blacklist, go right ahead. Quote escaping is good, but within context of the other mitigations.
Use Command and Parameter objects, to preparse and validate
Call parameterized queries only (a minimal sketch follows this list).
Better yet, use Stored Procedures exclusively.
Avoid using dynamic SQL, and don't use string concatenation to build queries.
If using SPs, you can also limit permissions in the database to executing the needed SPs only, and not access tables directly.
You can also easily verify that the entire codebase only accesses the DB through SPs...
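A minimal T-SQL sketch of the parameterized-query point (table and column names are illustrative; from client code you would use Command/Parameter objects, but the idea is the same - the value never becomes part of the SQL text):
DECLARE @userName NVARCHAR(20)
SET @userName = N'O''Brien'   -- the (possibly hostile) value lives in a variable, not in the SQL string
EXEC sp_executesql
    N'select * from USERS where username = @name',
    N'@name NVARCHAR(20)',
    @name = @userName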
Okay, this response will relate to the update of the question:
"If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it."
Now, besides the MySQL backslash escaping - and taking into account that we're actually talking about MSSQL - there are actually three possible ways of still SQL injecting your code:
sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
Take into account that these will not all be valid at all times, and are very dependent on your actual code around it:
1. Second-order SQL injection - if an SQL query is rebuilt based upon data retrieved from the database after escaping, the data is concatenated unescaped and may be indirectly SQL-injected.
2. String truncation - (a bit more complicated) - the scenario is that you have two fields, say a username and password, and the SQL concatenates both of them. And both fields (or just the first) have a hard limit on length. For instance, the username is limited to 20 characters. Say you have this code:
username = left(Replace(sInput, "'", "''"), 20)
Then what you get is the username, escaped, and then trimmed to 20 characters. The problem here: I'll stick my quote in the 20th character (e.g. after 19 a's), and your escaping quote will be trimmed (it would be the 21st character). Then the SQL
sSQL = "select * from USERS where username = '" + username + "' and password = '" + password + "'"
combined with the aforementioned malformed username, results in the password value already sitting outside the quotes, so it is executed as SQL directly (see the sketch after this list).
3. Unicode smuggling - In certain situations, it is possible to pass a high-level Unicode character that looks like a quote, but isn't - until it gets to the database, where suddenly it is. Since it isn't a quote when you validate it, it will go through easily... See my previous response for more details, and the link to the original research.
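Here is a T-SQL sketch of the truncation case (column sizes and names are illustrative):
DECLARE @rawUser NVARCHAR(50), @rawPass NVARCHAR(50), @username NVARCHAR(20), @sql NVARCHAR(400)
SET @rawUser = N'aaaaaaaaaaaaaaaaaaa'''   -- 19 filler characters plus one quote
SET @rawPass = N'or 1=1 --'               -- contains no quotes, so escaping leaves it untouched
-- Escape the quotes, then trim to the 20-character limit:
-- the doubled quote is cut back down to a single, unbalanced quote.
SET @username = LEFT(REPLACE(@rawUser, '''', ''''''), 20)
SET @sql = N'select * from USERS where username = ''' + @username
         + N''' and password = ''' + REPLACE(@rawPass, '''', '''''') + N''''
PRINT @sql
-- select * from USERS where username = 'aaaaaaaaaaaaaaaaaaa'' and password = 'or 1=1 --'
-- The '' is now an escaped quote INSIDE the username literal, so the literal swallows
-- " and password = '", and "or 1=1 --" runs as bare SQL.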
In a nutshell: Never do query escaping yourself. You're bound to get something wrong. Instead, use parameterized queries, or if you can't do that for some reason, use an existing library that does this for you. There's no reason to be doing it yourself.
I realize this is a long time after the question was asked, but ..
One way to launch an attack on the 'quote the argument' procedure is with string truncation.
According to MSDN, in SQL Server 2000 SP4 (and SQL Server 2005 SP1), a too long string will be quietly truncated.
When you quote a string, the string increases in size: every apostrophe is doubled.
This can then be used to push parts of the SQL outside the buffer. So you could effectively trim away parts of a where clause.
This would probably be mostly useful in a 'user admin' page scenario where you could abuse the 'update' statement to not do all the checks it was supposed to do.
So if you decide to quote all the arguments, make sure you know what goes on with the string sizes and see to it that you don't run into truncation.
I would recommend going with parameters. Always. Just wish I could enforce that in the database. And as a side effect, you are more likely to get better cache hits because more of the statements look the same. (This was certainly true on Oracle 8)
I've used this technique when dealing with 'advanced search' functionality, where building a query from scratch was the only viable answer. (Example: allow the user to search for products based on an unlimited set of constraints on product attributes, displaying columns and their permitted values as GUI controls to reduce the learning threshold for users.)
In itself it is safe AFAIK. As another answerer pointed out, however, you may also need to deal with backslash escaping (albeit not when passing the query to SQL Server using ADO or ADO.NET, at least -- can't vouch for all databases or technologies).
The snag is that you really have to be certain which strings contain user input (always potentially malicious), and which strings are valid SQL queries. One of the traps is if you use values from the database -- were those values originally user-supplied? If so, they must also be escaped. My answer is to try to sanitize as late as possible (but no later!), when constructing the SQL query.
However, in most cases, parameter binding is the way to go -- it's just simpler.
Input sanitization is not something you want to half-ass. Use your whole ass. Use regular expressions on text fields. TryCast your numerics to the proper numeric type, and report a validation error if it doesn't work. It is very easy to search for attack patterns in your input, such as ' --. Assume all input from the user is hostile.
It's a bad idea anyway as you seem to know.
What about something like escaping the quote in the string like this: \'
Your replace would result in: \''
If the backslash escapes the first quote, then the second quote has ended the string.
Simple answer: It will work sometimes, but not all the time.
You want to use white-list validation on everything you do, but I realize that's not always possible, so you're forced to go with the best guess blacklist. Likewise, you want to use parametrized stored procs in everything, but once again, that's not always possible, so you're forced to use sp_execute with parameters.
There are ways around any usable blacklist you can come up with (and some whitelists too).
A decent writeup is here: http://www.owasp.org/index.php/Top_10_2007-A2
If you need to do this as a quick fix to give you time to get a real one in place, do it. But don't think you're safe.
There are only two ways to be safe from SQL injection, no exceptions: prepared statements or parameterized stored procedures.
If you have parameterised queries available you should be using them at all times. All it takes is for one query to slip through the net and your DB is at risk.
Patrick, are you adding single quotes around ALL input, even numeric input? If you have numeric input, but are not putting the single quotes around it, then you have an exposure.
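For example (illustrative names), if the "number" an attacker supplies is 1 OR 1=1, no quote is ever needed:
select * from USERS where user_id = 1 OR 1=1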
Yeah, that should work right up until someone runs SET QUOTED_IDENTIFIER OFF and uses a double quote on you.
Edit: It isn't as simple as not allowing the malicious user to turn off quoted identifiers:
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server automatically set QUOTED_IDENTIFIER to ON when connecting. This can be configured in ODBC data sources, in ODBC connection attributes, or OLE DB connection properties. The default for SET QUOTED_IDENTIFIER is OFF for connections from DB-Library applications.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are captured and used for subsequent invocations of that stored procedure.
SET QUOTED_IDENTIFIER also corresponds to the QUOTED_IDENTIFIER setting of ALTER DATABASE.
SET QUOTED_IDENTIFIER is set at parse time. Setting at parse time means that if the SET statement is present in the batch or stored procedure, it takes effect, regardless of whether code execution actually reaches that point; and the SET statement takes effect before any statements are executed.
There are a lot of ways QUOTED_IDENTIFIER could be off without you necessarily knowing it. Admittedly, this isn't the smoking-gun exploit you're looking for, but it's a pretty big attack surface. Of course, if you also escaped double quotes - then we're back where we started. ;)
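A minimal sketch of what the setting changes (this is just the quote semantics, not an exploit in itself):
SET QUOTED_IDENTIFIER OFF
select "hello"   -- returns the string hello; with QUOTED_IDENTIFIER ON this fails,
                 -- because "hello" is then parsed as a column name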
Your defence would fail if:
the query is expecting a number rather than a string
there is any other way to represent a single quotation mark, including:
an escape sequence such as \039
a unicode character
(in the latter case, it would have to be something which is expanded only after you've done your replace)
What ugly code all that sanitisation of user input would be! Then the clunky StringBuilder for the SQL statement. The prepared statement method results in much cleaner code, and the SQL Injection benefits are a really nice addition.
Also why reinvent the wheel?
Rather than changing a single quote to (what looks like) two single quotes, why not just change it to an apostrophe, a quote, or remove it entirely?
Either way, it's a bit of a kludge... especially when you legitimately have things (like names) which may use single quotes...
NOTE: Your method also assumes everyone working on your app always remembers to sanitize input before it hits the database, which probably isn't realistic most of the time.
I'm not sure about your case, but I just encountered a case in MySQL where Replace(value, "'", "''") not only fails to prevent SQL injection, but actually causes the problem.
If an input ends with \', it's fine without the replace, but once the trailing ' is doubled, the \ escapes the first of the two quotes and the string literal never closes, which causes an SQL error.
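A MySQL-flavoured trace of that case (illustrative value; assumes MySQL's default backslash escaping):
-- Attacker input:        abc\'
-- After the Replace:     abc\''
-- Wrapped in quotes:     'abc\'''
select 'abc\'' as unreplaced    -- parses fine: \' is an escaped quote, the value is abc'
-- select 'abc\''' as replaced  -- uncommented, this is a syntax error: \' and '' are both
--                                 read as escaped quotes, so the literal never closes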
While you might find a solution that works for strings, for numerical predicates you need to also make sure they're only passing in numbers (simple check is can it be parsed as int/double/decimal?).
It's a lot of extra work.
It might work, but it seems a little hokey to me. I'd recommend verifying that each string is valid by testing it against a regular expression instead.
Yes, you can, if...
After studying the topic, I think input sanitized as you suggested is safe, but only under these rules:
you never allow string values coming from users to become anything other than string literals (i.e. avoid giving a configuration option like: "Enter additional SQL column names/expressions here:"). For value types other than strings (numbers, dates, ...): convert them to their native data types and provide a routine for creating an SQL literal from each data type.
SQL statements are problematic to validate
you either use nvarchar/nchar columns (and prefix string literals with N) OR limit values going into varchar/char columns to ASCII characters only (e.g. throw exception when creating SQL statement)
this way you avoid the automatic apostrophe conversion from NCHAR(700) to CHAR(39) (and maybe other similar Unicode hacks); see the sketch after this list
you always validate value length to fit actual column length (throw exception if longer)
there was a known defect in SQL Server that allowed the SQL error thrown on truncation to be bypassed (leading to silent truncation)
you ensure that SET QUOTED_IDENTIFIER is always ON
beware, it takes effect at parse time, i.e. even in inaccessible sections of code
Complying with these 4 points, you should be safe. If you violate any of them, a way for SQL injection opens.
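As an illustration of the apostrophe-conversion point in the list above (T-SQL; collation/code-page dependent, so it may not reproduce everywhere - NCHAR(700) is U+02BC, MODIFIER LETTER APOSTROPHE):
select NCHAR(700) as unicode_apostrophe,
       CAST(NCHAR(700) as varchar(10)) as narrowed   -- best-fit conversion often yields ' (CHAR(39))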

How to display unicode char in kdb console

I am following the example here: https://code.kx.com/q/kb/unicode/, and it all works as expected, but it is still very inconvenient to work with unicode in a table:
q)select from t
sym name text ..
-----------------------------------------------------------------------------..
apples 蘋果 "\303\277\310\325\322\273\314O\271\373, \341t\311\372\337h\353x\..
bananas 香蕉 "\317\343\275\266\264\254\312\307\322\273\265\300\277\311\277\33..
oranges 橙 "\217\304\267\360\301_\300\357\337_\326\335\201\355\265\304\365r..
Is there a way to display the unicode chars properly in the kdb console? I know we could use symbols, but I believe symbol types require different storage and may not be appropriate for free text, so I would like to see a way for char types to work.
Or are there any other tools that could enable us to work with unicode smoothly? I've tried qStudio; unicode seems not to be supported at all, even for symbol types.
Thanks.
Studio for kdb+ displays tables with unicode characters properly, and so does any JetBrains IDE with the KDB+ Studio plugin installed (the plugin is based on Studio's code). Both are cross-platform.
If you are on Windows, try QInsightPad 2.1, it should handle Unicode properly too.

How to query with unicode input value in Firebird 2.1? [duplicate]

This question already has answers here:
How to use non-ascii character string literals in firebird sql statements?
I made a simple SELECT statement, like this:
select t.no_qult, t.desc_qult
from qualities_type t
where t.name_qult = 'Lỗi vải'
The field name_qult uses the UNICODE_FSS charset. The problem is that it didn't work with the unicode input value Lỗi vải (Vietnamese); it only works when I use the plain text Lá»—i vải.
Does anyone know how to query with a unicode input value?
Do not use literals. Don't put data into the query text; put it outside of the query as "parameters".
This has a number of benefits, like more reliable parsing, more type checking, more safety, and often more speed (you can prepare the query once, and then run it many times, changing only the parameter values).
How you code the parameters in SQL queries depends upon the library you use in your programming language for connecting to Firebird. See http://bobby-tables.com/ for some examples. The following are three often used conventions to try:
SELECT .... WHERE t.name_qult = ? -- natively supported by Firebird, index-based access to parameters
SELECT .... WHERE t.name_qult = :NAME_PARAM -- BDE / Delphi style
SELECT .... WHERE t.name_qult = #NAME_PARAM -- MS SQL / .Net style
I do not know which flavours are supported in languages and programs you use.
IB Expert uses Delphi libraries, hence it uses option #2.
Programs written in Java tend to use option #1.
Additionally, in your connection options check that the "connection charset" is set to UTF-8 or to some Vietnamese codepage that can transfer all those specific characters.
The UNICODE_FSS charset is outdated. It would be better to move to the UTF8 charset wherever possible.
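For example, a sketch of how the table from the question might be declared with UTF8 columns (column sizes are illustrative):
create table qualities_type (
    no_qult   integer,
    name_qult varchar(100) character set utf8,
    desc_qult varchar(500) character set utf8
)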

Plone 4.0.5 and Unicode confusion

First of all, I'm using FreeBSD 8.1, Plone 4.0.5 and testing both Data.fs and RelStorage 1.5.0b2 (PostgreSQL 9.0.3). I'm from Denmark and we use Danish letters ("æøå").
I'm confused about encoding, but my initial guess is that the best way to go is with Unicode (utf-8). What is the correct way to configure FreeBSD, Plone (and products) and PostgreSQL to handle Danish letters? I've already been told that the encoding does not matter for PostgreSQL.
I've seen comments about site.py and sitecustomize.py when googling for errors - please comment.
Thanks.
Nikolaj G.
Plone and all its add-ons support Unicode by default, you don't need to configure the encoding at any level.
Even when using RelStorage, we only store binary data inside the SQL database and no strings, so there's no de/encoding taking place at this level.
Changing the Python default encoding in site.py or sitecustomize.py is actually harmful and you should not do this. It will only mask actual programming errors inside the code base and can lead to inconsistent data.
Inside the codebase we do use a mixture of both Unicode and utf-8 encoded strings. So generally your code will have to be written in a way to handle both of these. This is unfortunate but a side-effect of us slowly migrating to proper Unicode at all levels.

How to insert asian characters using Squirrel Sql

I'm running Squirrel-SQL on Ubuntu.
I cannot write Chinese characters in Squirrel, but I can write them in another text editor and copy+paste them into Squirrel. However, when I run the update and then select the data I just inserted, the characters I wrote show up as question marks.
When I insert the data from a web interface, or when I right click on results and choose "make editable", I can paste in the data which will show up fine when I select again.
This tells me that the database saves the characters fine, and Squirrel is capable of displaying the characters fine. The problem seems to be in the SQL text editor.
Anyone have this problem before?
I finally found the answer! It looks like Hibernate was doing some extra work for me (via the web interface or Squirrel's "make editable" option on results) that I wasn't aware was necessary. The problem was actually a syntax issue specific to Microsoft SQL Server: I needed to prefix the string literal I wish to insert with the letter 'N'.
For example:
update title_product
set synopsis = N'我很高兴 test'
where title_product_id = 26
This converts Chinese and English characters correctly. Yay.
I still cannot write Chinese characters directly into Squirrel, though; I have to copy+paste from another editor.