What qualifies as a token in CodeMirror

In CodeMirror, cm.getTokenAt(pos) returns the token at the given position. What exactly does a token mean?
For example, when you look for a token in the line "System.out.println", it returns the token println, whereas when you search for a token in "System.out.", it returns ".".
Also, is a token language-dependent, so that different results are obtained for different languages?

A token is whatever the mode writer wants it to be. In the context of a CodeMirror mode, a token is a sequence of text that should be highlighted as a unit. A token is one "thing": that might mean a single character (such as an opening or closing brace), or a function or variable name.
Tokens are completely dependent on the language mode, though most languages use the same rough definition (functions, variables, syntax characters, comments, etc.).


How to properly parse a multi-character token in tree-sitter scanner function

Tree-sitter allows you to use an external scanner for tokens that are tricky to parse or that depend on specific state, like multiline strings.
The scanner takes a convenient lexer object with several methods that allow you to "scan" the document looking for the proper token characters.
Two of the key parts of this lexer are lookahead, which tells you the next character the lexer is "looking at", and advance, which moves the lexer pointer to the next character.
However, after reading the docs and several other parsers that make use of this, it's still not clear to me whether calling these methods will "affect" the overall tree-sitter parse or whether they are just local to my function invocation.
Especially tricky is trying to parse a multi-character token (more than 2 characters, in fact), because you need to advance the lexer, consuming the potential next characters that may be part of other tokens. One possible escape is to just return false after consuming the characters and let tree-sitter go to the next step in the parsing, but that may skip other valid tokens that potentially depend on the characters I already consumed.
Of course I can move this parsing to the bottom of the scan function, but then other, shorter tokens may shadow this longer one and also produce an incorrect parse.
As far as I know, there is no way to "rewind" the parser to undo the "consumption" of the characters, so I am not sure how to deal with this.
The tokens that I'm trying to parse are {js| for string opening and |js} for string closing.
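For reference, here is a minimal sketch of how such a delimiter could be matched in an external scanner. The grammar name (mylang) and the token enum are hypothetical placeholders; lookahead, advance, result_symbol, and mark_end are the actual TSLexer fields from tree_sitter/parser.h.

#include <tree_sitter/parser.h>
#include <stdbool.h>

/* Hypothetical token type; it must line up with the `externals` list in grammar.js. */
enum TokenType { JS_STRING_OPEN };

bool tree_sitter_mylang_external_scanner_scan(void *payload, TSLexer *lexer,
                                              const bool *valid_symbols) {
    (void)payload;
    if (!valid_symbols[JS_STRING_OPEN]) return false;

    /* Match the four-character opening delimiter {js| one character at a time.
       There is no way to rewind within a scan call: once we have advanced past
       a character, this invocation is committed, so on a mismatch all we can
       do is return false. */
    static const char delim[] = "{js|";
    for (const char *p = delim; *p != '\0'; p++) {
        if (lexer->lookahead != (int32_t)(unsigned char)*p) return false;
        lexer->advance(lexer, false);
    }
    lexer->result_symbol = JS_STRING_OPEN;
    return true;
}

One mechanism that helps here is lexer->mark_end(lexer): characters advanced past after calling it are examined but not included in the emitted token, which gives a limited form of peeking beyond the token's end.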

Is double escaping in postgres enough to prevent SQL injections/attacks? (Alternative to using parameters)

I realize that parameterized SQL queries are the optimal way to sanitize user input when building queries that contain user input, but I'm wondering what is wrong with taking user input, escaping any single quotes, and surrounding the whole string with single quotes. Here's the code:
sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
Any single quote the user enters is replaced with double single quotes, which eliminates the user's ability to end the string, so anything else they may type, such as semicolons, percent signs, etc., will all be part of the string and not actually executed as part of the command.
We are using Microsoft SQL Server 2000, for which I believe the single-quote is the only string delimiter and the only way to escape the string delimiter, so there is no way to execute anything the user types in.
I don't see any way to launch an SQL injection attack against this, but I realize that if this were as bulletproof as it seems to me someone else would have thought of it already and it would be common practice.
What's wrong with this code? Is there a way to get an SQL injection attack past this sanitization technique? Sample user input that exploits this technique would be very helpful.
UPDATE:
I still don't know of any way to effectively launch a SQL injection attack against this code. A few people suggested that a backslash would escape one single-quote and leave the other to end the string so that the rest of the string would be executed as part of the SQL command, and I realize that this method would work to inject SQL into a MySQL database, but in SQL Server 2000 the only way (that I've been able to find) to escape a single-quote is with another single-quote; backslashes won't do it.
And unless there is a way to stop the escaping of the single-quote, none of the rest of the user input will be executed because it will all be taken as one contiguous string.
I understand that there are better ways to sanitize input, but I'm really more interested in learning why the method I provided above won't work. If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it.
First of all, it's just bad practice. Input validation is always necessary, but it's also always iffy.
Worse yet, blacklist validation is always problematic; it's much better to explicitly and strictly define what values/formats you accept. Admittedly, this is not always possible, but to some extent it must always be done.
Some research papers on the subject:
http://www.imperva.com/docs/WP_SQL_Injection_Protection_LK.pdf
http://www.it-docs.net/ddata/4954.pdf (Disclosure, this last one was mine ;) )
https://www.owasp.org/images/d/d4/OWASP_IL_2007_SQL_Smuggling.pdf (based on the previous paper, which is no longer available)
Point is, any blacklist you do (and too-permissive whitelists) can be bypassed. The last link to my paper shows situations where even quote escaping can be bypassed.
Even if these situations do not apply to you, it's still a bad idea. Moreover, unless your app is trivially small, you're going to have to deal with maintenance, and maybe a certain amount of governance: how do you ensure that it's done right, everywhere, all the time?
The proper way to do it:
Whitelist validation: type, length, format, or accepted values.
If you want to blacklist, go right ahead. Quote escaping is good, but only within the context of the other mitigations.
Use Command and Parameter objects to pre-parse and validate.
Call parameterized queries only (a minimal sketch follows this list).
Better yet, use stored procedures exclusively.
Avoid dynamic SQL, and don't use string concatenation to build queries.
If using SPs, you can also limit permissions in the database to executing the needed SPs only, with no direct access to the tables.
You can then also easily verify that the entire codebase only accesses the DB through SPs...
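As an illustration only, here is the shape of a parameterized query using SQLite's C API as a stand-in (the question's environment would use ADO Command and Parameter objects instead, but the idea is identical): the user input is bound as a value and never spliced into the SQL text.

#include <sqlite3.h>

/* user_input may contain quotes, backslashes, anything: it is bound
   as a value, never concatenated into the SQL string. */
int find_user(sqlite3 *db, const char *user_input) {
    sqlite3_stmt *stmt;
    const char *sql = "SELECT id FROM users WHERE username = ?";
    int id = -1;

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
        return -1;
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);
    sqlite3_finalize(stmt);
    return id;
}

Because the statement text never changes, the server can also cache the plan, which is the cache-hit benefit mentioned further down.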
Okay, this response will relate to the update of the question:
"If anyone knows of any specific way to mount a SQL injection attack against this sanitization method I would love to see it."
Now, besides the MySQL backslash escaping, and taking into account that we're actually talking about MSSQL, there are actually three possible ways of still SQL injecting your code:
sSanitizedInput = "'" & Replace(sInput, "'", "''") & "'"
Take into account that these will not all be valid at all times; they are very dependent on the actual code around them:
1. Second-order SQL injection - if an SQL query is rebuilt based upon data retrieved from the database after escaping, the data is concatenated unescaped and may be indirectly SQL-injected.
2. String truncation (a bit more complicated) - the scenario is that you have two fields, say a username and a password, and the SQL concatenates both of them, and both fields (or just the first) have a hard limit on length. For instance, the username is limited to 20 characters. Say you have this code:
username = left(Replace(sInput, "'", "''"), 20)
What you get is the username, escaped, and then trimmed to 20 characters. The problem here: I'll stick my quote in the 20th character (e.g. after 19 a's); Replace doubles it, and your escaping quote gets trimmed away (it falls in the 21st character). Then the SQL
sSQL = "select * from USERS where username = '" + username + "' and password = '" + password + "'"
combined with the aforementioned malformed username results in the password already being outside the quotes, so it can just contain the payload directly (a short demonstration follows this list).
3. Unicode smuggling - in certain situations, it is possible to pass a high-level Unicode character that looks like a quote but isn't, until it gets to the database, where suddenly it is. Since it isn't a quote when you validate it, it will go through easily... See my previous response for more details, and the link to the original research.
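To make point 2 concrete, here is a small, self-contained C demonstration of the escape-then-truncate bug; escape_quotes stands in for Replace(sInput, "'", "''"), and the escaped[20] = '\0' line stands in for left(..., 20).

#include <stdio.h>
#include <string.h>

/* Double every single quote, as Replace(s, "'", "''") would. */
static void escape_quotes(const char *in, char *out, size_t outsz) {
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && j + 2 < outsz; i++) {
        out[j++] = in[i];
        if (in[i] == '\'')
            out[j++] = '\'';
    }
    out[j] = '\0';
}

int main(void) {
    const char *input = "aaaaaaaaaaaaaaaaaaa'";  /* 19 a's + quote: 20 chars */
    char escaped[64];

    escape_quotes(input, escaped, sizeof escaped); /* 21 chars: quote doubled */
    escaped[20] = '\0';                            /* left(..., 20) cuts the doubled quote in half */
    printf("%s\n", escaped);                       /* ends in a lone, unescaped quote */
    return 0;
}

After truncation the doubled quote is back to a single quote, which pairs with the literal closing quote in the SQL string and pushes the password field outside the quotes.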
In a nutshell: Never do query escaping yourself. You're bound to get something wrong. Instead, use parameterized queries, or if you can't do that for some reason, use an existing library that does this for you. There's no reason to be doing it yourself.
I realize this is a long time after the question was asked, but...
One way to launch an attack on the 'quote the argument' procedure is with string truncation.
According to MSDN, in SQL Server 2000 SP4 (and SQL Server 2005 SP1), a too long string will be quietly truncated.
When you quote a string, the string increases in size. Every apostrophe is repeated.
This can then be used to push parts of the SQL outside the buffer. So you could effectively trim away parts of a where clause.
This would probably be mostly useful in a 'user admin' page scenario where you could abuse the 'update' statement to not do all the checks it was supposed to do.
So if you decide to quote all the arguments, make sure you know what goes on with the string sizes and see to it that you don't run into truncation.
I would recommend going with parameters. Always. Just wish I could enforce that in the database. And as a side effect, you are more likely to get better cache hits because more of the statements look the same. (This was certainly true on Oracle 8)
I've used this technique when dealing with 'advanced search' functionality, where building a query from scratch was the only viable answer. (Example: allow the user to search for products based on an unlimited set of constraints on product attributes, displaying columns and their permitted values as GUI controls to reduce the learning threshold for users.)
In itself it is safe, AFAIK. As another answerer pointed out, however, you may also need to deal with backslash escaping (albeit not when passing the query to SQL Server using ADO or ADO.NET, at least; I can't vouch for all databases or technologies).
The snag is that you really have to be certain which strings contain user input (always potentially malicious), and which strings are valid SQL queries. One of the traps is if you use values from the database -- were those values originally user-supplied? If so, they must also be escaped. My answer is to try to sanitize as late as possible (but no later!), when constructing the SQL query.
However, in most cases, parameter binding is the way to go -- it's just simpler.
Input sanitation is not something you want to half-ass. Use your whole ass. Use regular expressions on text fields. TryCast your numerics to the proper numeric type, and report a validation error if it doesn't work. It is very easy to search for attack patterns in your input, such as ' --. Assume all input from the user is hostile.
It's a bad idea anyway as you seem to know.
What about something like escaping the quote in the string like this: \'
Your replace would result in: \''
If the backslash escapes the first quote, then the second quote has ended the string.
Simple answer: It will work sometimes, but not all the time.
You want to use whitelist validation on everything you do, but I realize that's not always possible, so you're forced to go with the best-guess blacklist. Likewise, you want to use parameterized stored procs for everything, but once again, that's not always possible, so you're forced to use sp_execute with parameters.
There are ways around any usable blacklist you can come up with (and some whitelists too).
A decent writeup is here: http://www.owasp.org/index.php/Top_10_2007-A2
If you need to do this as a quick fix to give you time to get a real one in place, do it. But don't think you're safe.
There are only two ways to do it, no exceptions, to be safe from SQL injection: prepared statements or parameterized stored procedures.
If you have parameterised queries available you should be using them at all times. All it takes is for one query to slip through the net and your DB is at risk.
Patrick, are you adding single quotes around ALL input, even numeric input? If you have numeric input, but are not putting the single quotes around it, then you have an exposure.
Yeah, that should work right up until someone runs SET QUOTED_IDENTIFIER OFF and uses a double quote on you.
Edit: It isn't as simple as not allowing the malicious user to turn off quoted identifiers:
The SQL Server Native Client ODBC driver and SQL Server Native Client OLE DB Provider for SQL Server automatically set QUOTED_IDENTIFIER to ON when connecting. This can be configured in ODBC data sources, in ODBC connection attributes, or OLE DB connection properties. The default for SET QUOTED_IDENTIFIER is OFF for connections from DB-Library applications.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are captured and used for subsequent invocations of that stored procedure.
SET QUOTED_IDENTIFIER also corresponds to the QUOTED_IDENTIFER setting of ALTER DATABASE.
SET QUOTED_IDENTIFIER is set at parse time. Setting at parse time means that if the SET statement is present in the batch or stored procedure, it takes effect, regardless of whether code execution actually reaches that point; and the SET statement takes effect before any statements are executed.
There's a lot of ways QUOTED_IDENTIFIER could be off without you necessarily knowing it. Admittedly - this isn't the smoking gun exploit you're looking for, but it's a pretty big attack surface. Of course, if you also escaped double quotes - then we're back where we started. ;)
Your defence would fail if:
the query is expecting a number rather than a string
there is any other way to represent a single quotation mark, including:
an escape sequence such as \039
a Unicode character
(in the latter case, it would have to be something which is expanded only after you've done your replace)
What ugly code all that sanitisation of user input would be! Then the clunky StringBuilder for the SQL statement. The prepared statement method results in much cleaner code, and the SQL Injection benefits are a really nice addition.
Also why reinvent the wheel?
Rather than changing a single quote to (what looks like) two single quotes, why not just change it to an apostrophe, a quote, or remove it entirely?
Either way, it's a bit of a kludge... especially when you legitimately have things (like names) which may use single quotes...
NOTE: Your method also assumes everyone working on your app always remembers to sanitize input before it hits the database, which probably isn't realistic most of the time.
I'm not sure about your case, but I just encountered a case in MySQL where Replace(value, "'", "''") not only fails to prevent SQL injection, but actually causes it.
If an input ends with \', it's fine without the replace; but when the trailing ' is doubled to '', the \ escapes the first of the two quotes, leaving the second quote unescaped to terminate the string, which breaks the SQL.
While you might find a solution that works for strings, for numeric predicates you also need to make sure they only contain numbers (a simple check: can the value be parsed as an int/double/decimal?).
It's a lot of extra work.
It might work, but it seems a little hokey to me. I'd recommend verifying that each string is valid by testing it against a regular expression instead.
Yes, you can, if...
After studying the topic, I think input sanitized as you suggest is safe, but only under these rules:
1. You never allow string values coming from users to become anything other than string literals (i.e. avoid configuration options like "Enter additional SQL column names/expressions here:"). For value types other than strings (numbers, dates, ...), convert them to their native data types and provide a routine to build a SQL literal from each data type (a sketch follows this list).
SQL statements are problematic to validate.
2. You either use nvarchar/nchar columns (and prefix string literals with N), OR you limit values going into varchar/char columns to ASCII characters only (e.g. throw an exception when creating the SQL statement).
This way you avoid the automatic apostrophe conversion from CHAR(700) to CHAR(39) (and maybe other similar Unicode hacks).
3. You always validate the value length to fit the actual column length (throw an exception if it is longer).
There was a known defect in SQL Server allowing the error thrown on truncation to be bypassed (leading to silent truncation).
4. You ensure that SET QUOTED_IDENTIFIER is always ON.
Beware: it takes effect at parse time, i.e. even in inaccessible sections of code.
Complying with these 4 points, you should be safe. Violate any of them, and a way for SQL injection opens.
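For the "convert to native data types" rule above, the idea is a strict whole-string parse that rejects anything else; a minimal C sketch (the function name is mine):

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Whitelist-style numeric validation: accept the value only if the
   entire string parses as a base-10 long. */
bool parse_user_long(const char *input, long *out) {
    char *end;
    errno = 0;
    long v = strtol(input, &end, 10);
    if (errno != 0 || end == input || *end != '\0')
        return false;  /* overflow, empty input, or trailing junk */
    *out = v;
    return true;
}

A value that passes is then formatted back into the SQL by your own code, so no user-controlled text ever reaches the statement.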

Lex - Do the rules only match to the tokens or over the sequence of tokens?

In Lex, there is a set of rules defined. Do the rules only apply to tokens that are delimited by spaces and such, or will a rule match any block in the line buffer that fits it?
For example, if I want to write a rule that matches a function header in a program (e.g., void sum()) with a pattern such as "void "[a-zA-Z]+"()", should this rule match within a line? Can rules that span more than a single token work in Lex?
When you call yylex, it finds the longest match starting at the current buffer pointer. It does not search for a token. It will match spaces if (and only if) the rule can match space characters.
Lex has no idea of what a token is other than "a sequence of characters which match a rule", so the question about whether a rule can span more than a token is meaningless. By definition, anything which matches a rule is a token.
There must always be some rule which matches, since the scanner will never match anything which doesn't start at the current buffer pointer. By default, lex adds a rule (if necessary) at the end which matches any character and echoes it to yyout. Unless you're writing a transducer, that is almost certainly not what you want, so I always recommend that you add
%option nodefault
(assuming you are actually using flex, which is by far the most common lex implementation); that will suppress the default rule and give you a warning if it is possible that no rule matches the input. Then you can define your own fallback rule, which might be something like:
.|\n { return yytext[0]; }
or which might throw a scanner error.
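For completeness, here is a minimal driver loop showing tokens coming out of the scanner one by one; yylex and yytext are the standard flex-generated symbols, and the token codes are whatever your actions return.

#include <stdio.h>

extern int yylex(void);   /* generated by flex from your rules */
extern char *yytext;      /* text of the most recent match */

/* Link with -lfl, or add %option noyywrap to the .l file. */
int main(void) {
    int tok;
    while ((tok = yylex()) != 0)      /* 0 means end of input */
        printf("token %d: '%s'\n", tok, yytext);
    return 0;
}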

lex default token definition syntax

I guess this is a simple question, but I have found no reference. I have a small lex file defining some tokens from a string and altering them (actually converting them to uppercase).
Basically it is a list of commands like this:
word {setToUppercase(yytext);}
Where setToUppercase is a procedure to change case and store it.
I need to recover the complete input string, with the altered words. Is there a way to define a default token (the rest of the tokens) so I can associate it with unaltered storage into an output string?
You can do that in one shot with:
.|\n {save_str(yytext);}
or, equivalently, with two separate rules:
. {save_str(yytext);}
\n {save_str(yytext);}
This way all characters, including newlines, are handled. Since lex prefers the longest match (and, on ties, the earliest rule), these one-character catch-alls only fire when none of your real rules match, so place them after the others. I said it was an easy one.

Special characters in WCF web API URLs

I have a web service which uses the WCF Web API to create a RESTful service. This service expects many different values in the URL path, separated by commas. This works perfectly for simple data, e.g. someone's name or a numeric value. However, I have a field on the client side (a Java-based BlackBerry app) which allows a user to freely type data that includes characters such as . or /, which messes up my whole URL.
Even when I replace the characters with their hex values, e.g. a / with %2F, the problem persists.
Does anyone know a way either to represent these characters in a URL so that they are ignored when the address is resolved, or better yet, a way to tell the URL that the following characters should be ignored, the way quotation marks would work?
You can use a URL-encoding function. URL encoding is the process of converting a string into a valid URL format, which means the URL contains only what are termed alpha | digit | safe | extra | escape characters.
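A minimal percent-encoder in C, keeping only the RFC 3986 unreserved characters (alphanumerics plus - _ . ~) and escaping everything else, including / and ,:

#include <ctype.h>
#include <stddef.h>

/* Percent-encode `in` into `out` (outsz bytes, always NUL-terminated). */
void url_encode(const char *in, char *out, size_t outsz) {
    static const char hex[] = "0123456789ABCDEF";
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && j + 4 < outsz; i++) {
        unsigned char c = (unsigned char)in[i];
        if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
            out[j++] = (char)c;
        } else {
            out[j++] = '%';
            out[j++] = hex[c >> 4];
            out[j++] = hex[c & 0x0F];
        }
    }
    out[j] = '\0';
}

Note that many servers reject or pre-decode an encoded slash (%2F) in the path by default (Apache, for instance, refuses it unless AllowEncodedSlashes is enabled), which is likely why the problem persisted; moving free-form user text into a query-string parameter instead of the path is often the more robust fix.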