Get the ResultSet of an SQL injection - sql-injection

Suppose the server-side code is something like this:
String id = getIdFromHttpRequest();
String value = getValueFromHttpRequest();
String query = "INSERT INTO users VALUES ('" + id + "', '" + value + "');";
ResultSet rs = SQL.doQuery(query); // I know this isn't the exact syntax, but the point is clear
Well, the injection is easy: I can make it execute an SQL command, but the problem is that I want to see the result set (I inject a SELECT command).
Is there a way of doing so?

You probably cannot achieve this.
As you know, an INSERT statement has no result set, even if you use SQL injection. At best, you could make it execute a SELECT as a scalar subquery. It's not hard to spoof your example to execute the following:
INSERT INTO users VALUES ('8675309', '' || (SELECT ...blah blah...) || '');
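For illustration only, the value parameter that yields a statement like the one above might look something like this (the table and column in the inner SELECT are hypothetical placeholders):
' || (SELECT secret FROM secure_table WHERE id = 1) || '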
But that still would not return a result set, because INSERT never has a result set.
You would need to execute a second query to do that. Some query interfaces do support multi-query in a single call to doQuery(), but this is not always true (depends on the brand of database you use, and possibly some configuration options).
INSERT INTO users VALUES (...whatever...);
SELECT * FROM secure_table WHERE (id = '8675309');
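If the interface does allow stacked statements, an injected id along these lines could produce that pair (a hypothetical payload; the trailing -- comments out the remainder of the original INSERT):
8675309', 'x'); SELECT * FROM secure_table WHERE (id = '8675309'); --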
With SQL injection, you can manipulate the SQL, but you can't manipulate the rest of the code in the application that runs the SQL. In the example you show, the app is designed to run an INSERT query, not an INSERT followed by a SELECT. The app would have no reason to fetch a result set after executing an INSERT.
It's hard to imagine how you could use SQL injection alone to trick the code you show into fetching and displaying a result set.
I don't think it is possible to use SQL injection to read data by exploiting a non-reading query.

Related

PostgreSQL, allow filtering by non-existent fields

I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something is in the DB. Before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist and I could get some results.
Is it possible to handle such cases and return whatever results are possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be simple like the previous example and can be much more complex like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would check within the information schema first:
select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = 'table_name';
and then build the query based on the result.
Why don't you get a list of columns that are in the table first? Like this
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which can be obtained by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation or short-circuiting, so you get an error if a non-existent column is referenced.
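A minimal PL/pgSQL sketch of that construction, assuming a hypothetical table named items and the two candidate columns from the example; only the columns that actually exist end up in the generated WHERE clause:
DO $$
DECLARE
    cond text;
    n    bigint;
BEGIN
    -- Keep only the candidate columns that really exist in the table.
    SELECT string_agg(format('%I = 10', column_name), ' OR ')
      INTO cond
      FROM information_schema.columns
     WHERE table_name = 'items'
       AND column_name IN ('size', 'length');

    IF cond IS NOT NULL THEN
        -- Run the dynamically assembled query; here we just count matches.
        EXECUTE 'SELECT count(*) FROM items WHERE ' || cond INTO n;
        RAISE NOTICE 'matching rows: %', n;
    END IF;
END $$;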

SQL Server openrowset() test column count in MS Access table

Haven't found an answer via Google. I need to execute this code from a SQL Server stored proc.
I have a folder with 100+ Access DBs with a table called tblReports. Some of the Access DBs have an extra column in tblReports called AdminReport.
I need to capture the extra column if it exists, thus I need to test how many columns are in tblReports so that I can use an if/else statement in the sp to generate the correct SQL based on the column count.
I'd love to read your thoughts, here's the relevant snippet.
set @sql = 'Insert into CustomerServiceIntranet.dbo.ReportCriteria
(UserInfo,RptNbr,RptType,RptDesc,GroupCDBrk,ClientCDBrk,CategoryCDBrk,
UserIDBrk,UnitCDBrk,WrkTypeBrk,StatCDBrk,StatDatBrk,
ExperBrk,GroupList,ClientList,CategoryList,UserIDList,BusAreaList,
WrkTypList,StatusList,QueueList,ReviewDay,ReviewDayNA,
ErrorImpact,DateRange,DataSource,RptPathFile)'
+ 'Select ''' + @userfilename + ''', ors.* '
+ 'from (select * From Openrowset(''Microsoft.ACE.OLEDB.12.0'','''
+ @CurrentName
+ ''';''Admin'';,''select * from tblReports'')) ors'
The standard approach would be to link to tblReports by calling DoCmd.TransferDatabase. You would then be able to count the number of fields in the table before embarking on any SQL. At the end of the loop you would delete the link by calling DoCmd.DeleteObject.
It certainly looks neater than what you are trying to do.
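If you'd rather stay entirely on the SQL Server side instead of the DoCmd approach, a hedged sketch of one way to test the column count is to pull zero rows through OPENROWSET into a temp table and count its columns from tempdb metadata (the path and the expected base count of 27 columns here are hypothetical):
DECLARE @CurrentName nvarchar(260) = N'C:\reports\example.accdb'  -- hypothetical path
DECLARE @sql nvarchar(max), @colCount int

-- Probe the shape of tblReports without reading any data rows.
SET @sql = N'SELECT TOP (0) * INTO ##tblReports_probe '
         + N'FROM OPENROWSET(''Microsoft.ACE.OLEDB.12.0'','''
         + @CurrentName
         + N''';''Admin'';'''',''select * from tblReports'')'
EXEC (@sql)

-- Count the columns the probe produced.
SELECT @colCount = COUNT(*)
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID(N'tempdb..##tblReports_probe')

IF @colCount > 27  -- hypothetical base column count: the extra AdminReport column is present
    PRINT 'tblReports includes AdminReport'
ELSE
    PRINT 'tblReports has the base column set'

DROP TABLE ##tblReports_probe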

Conditional dynamic SQL with cursor

I have a query which uses a cursor to cycle through the results of a select statement.
The select statement in short selects all of the records from a mapping table I have. One of the columns is 'SourceTableName'.
I use this field to generate some dynamic SQL.
I am looking to add a parameter to the stored procedure that wraps this, which will allow me to only create dynamic SQL for the 'SourceTableName' that I want - IF I pass in a 'SourceTableNameFilter'.
I am stuck with some logic which wraps my dynamic SQL.
IF @SourceTableNameFilter (SP parameter) = @SourceTableName (from mapping table)
BEGIN
Generate and execute some dynamic SQL based on the SourceTableName.
The problem is, I want this to work on all tables that come back from a select against 'SourceTableName', BUT if a @SourceTableNameFilter parameter is present and not null, then only generate dynamic SQL for the rows in the cursor which match my filter parameter.
Is there a way for me to accomplish this with an IF statement without copying the logic inside the IF/ELSE twice?
FETCH NEXT FROM TABLECUR INTO @SourceTableName
,@SourceInColumn
,@SourceOutColumn
,@TargetTableName
,@TargetLookupColumn
,@TargetLookupResultColumn
,@MappingTableID
WHILE (@@FETCH_STATUS <> -1)
BEGIN
IF (@@FETCH_STATUS <> -2)
BEGIN
IF (@SourceTableName = @SourceTableNameFilter)
--GENERATE DYNAMIC SQL
ELSE
--GENERATE DYNAMIC SQL FOR ALL RECORDS
The generated dynamic SQL string is the same in both the IF and the ELSE. Is there any way to change the conditions so that I'm not duplicating the dynamic SQL generation, and so that no dynamic SQL is generated when @SourceTableName != @SourceTableNameFilter?
Thank you
Consider adding this logic to the cursor definition, rather than having that logic within the processing of each cursor record.
So if the cursor is normally:
DECLARE MY_CURSOR Cursor FOR
SELECT SourceTableName, SourceInColumn, SourceOutColumn
,TargetTableName, TargetLookupColumn
,TargetLookupResultColumn, MappingTableID
FROM MappingTable
--get source tables when filter is specified; otherwise get all
WHERE (SourceTableName = @SourceTableNameFilter) OR (LEN(ISNULL(@SourceTableNameFilter, '')) = 0)
Now you can execute your business logic within the cursor without having to detect whether a filter was applied; the cursor is loaded with only the records you need to care about. It sounds, from the question, like the business logic is the same whether or not the filter was passed in. If that is incorrect, or if it doesn't satisfy your requirement, please comment.
Knowing nothing about the dynamic sql you're building, I'd recommend doing something along the lines of:
SET @DynamicCommand = '<whatever, first part>'
+ CASE WHEN @SourceTableNameFilter IS NULL
       THEN '<no special action, perhaps just empty string>'
       ELSE '<add conditional text dependent upon contents of @SourceTableNameFilter>'
  END
+ '<whatever, second part>'
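Putting the two suggestions together, a rough sketch of the whole loop might look like this (assuming @SourceTableNameFilter is the new procedure parameter and MappingTable is the mapping table from the question; the body of the dynamic statement is just a placeholder):
DECLARE @SourceTableName sysname, @MappingTableID int, @sql nvarchar(max)

DECLARE TABLECUR CURSOR LOCAL FAST_FORWARD FOR
    SELECT SourceTableName, MappingTableID
    FROM MappingTable
    -- A NULL/empty filter means "all tables"; otherwise keep only the requested one.
    WHERE LEN(ISNULL(@SourceTableNameFilter, '')) = 0
       OR SourceTableName = @SourceTableNameFilter

OPEN TABLECUR
FETCH NEXT FROM TABLECUR INTO @SourceTableName, @MappingTableID

WHILE @@FETCH_STATUS = 0
BEGIN
    -- The dynamic SQL is generated in exactly one place now.
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@SourceTableName) + N';'  -- placeholder body
    EXEC sys.sp_executesql @sql

    FETCH NEXT FROM TABLECUR INTO @SourceTableName, @MappingTableID
END

CLOSE TABLECUR
DEALLOCATE TABLECUR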

Rewrite this query to make it more scalable

I have a page on my site which has multiple drop down boxes as filters.
So the SQL procedure for that page would be something like this
IF @Filter1 = 0 AND @Filter2 = 0 AND @Filter3 = 0
BEGIN
SELECT * FROM Table1
END
ELSE IF @Filter1 = 1 AND @Filter2 = 0 AND @Filter3 = 0
BEGIN
SELECT * FROM Table2
END
At the beginning, there were only a few results per filter so there weren't that many permutations. However, more filters have been added such that there are over 20 IF ELSE checks now.
So if each filter has 5 options, I will need to do 5*5*5 = 125 IF ELSE checks to return data dependent on the the filters.
Update
The first filter alters the WHERE condition, the second filter adds more tables to the result set, and the third filter alters the ORDER BY condition.
How can I make this query more scalable, such that I don't have to write a new bunch of IF ELSE statements to check for every condition every time a new filter is added to the list, besides using dynamic SQL...
You pretty much have to have a rule table with formulas (maybe bitwise flags), construct a query that plugs the variable data from that table into a string to form the SQL, and then use dynamic SQL to run it.
As much as I dislike dynamic SQL, this may be the time for it. You can build the query a little at a time, then execute it at the end.
If you're unfamiliar, the syntax is something like:
DECLARE #SQL VARCHAR(1000)
SELECT #SQL = 'SELECT * FROM ' + 'SOME_TABLE'
EXEC(#SQL)
Make sure you deal with SQL injection attacks, proper spacing, etc.
In this case, I'd do my best to put this logic in application code, but that's not always possible. If you're using LINQ-to-SQL or another LINQ framework, you should be able to do this safely, but it may take some creativity to get the LINQ query built properly.
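Expanding that dynamic SQL idea to the three filters described in the update, a hedged sketch might look like this (the table, column, and join names are hypothetical; sp_executesql keeps the filter value itself parameterized):
DECLARE @sql nvarchar(max) = N'SELECT t1.* FROM Table1 AS t1'

IF @Filter2 = 1                      -- second filter: pull in more tables
    SET @sql = @sql + N' JOIN Table2 AS t2 ON t2.Table1ID = t1.ID'

SET @sql = @sql + N' WHERE 1 = 1'
IF @Filter1 <> 0                     -- first filter: narrow the WHERE clause
    SET @sql = @sql + N' AND t1.Category = @Filter1'

IF @Filter3 = 1                      -- third filter: change the ordering
    SET @sql = @sql + N' ORDER BY t1.CreatedDate DESC'
ELSE
    SET @sql = @sql + N' ORDER BY t1.ID'

-- Only one statement to maintain; a new filter adds a line instead of multiplying the branches.
EXEC sys.sp_executesql @sql, N'@Filter1 int', @Filter1 = @Filter1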
You can set up a bunch of views, one for each "filter" and then select from the appropriate view based on which "filter" was selected.

SQL join from multiple tables

We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source table may be one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):
select o.[UID]
,o.[description]
,i.[data]
from [output].dbo.[description] as o
left join (select [UID]
,[data]
from
[output.sourcedb].dbo.datatable
) as i
on i.[UID] = o.[UID];
Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
Try using the exec function, then specify the select as a string, adding variables for database names and tables where appropriate. Simple example:
DECLARE @dbName VARCHAR(255), @tableName VARCHAR(255), @colName VARCHAR(255)
...
EXEC('SELECT * FROM ' + @dbName + '.dbo.' + @tableName + ' WHERE ' + @colName + ' = 1')
No, the table must be known at the time you prepare the query. Otherwise how would the query optimizer know what indexes it might be able to use? Or whether the table you reference even has a UID column?
You'll have to do this in stages:
Fetch the sourcedb value from your output database in one query.
Build an SQL query string, interpolating the value you fetched in the first query into the FROM clause of the second query.
Be careful to check that this value contains a legitimate database name. For instance, filter out non-alpha characters or apply a regular expression or look it up in a whitelist. Otherwise you're exposing yourself to a SQL Injection risk.
Execute the new SQL string you built with exec(), as @user353852 suggests.
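A hedged sketch of those stages, assuming a hypothetical sourcedb column in the output table, the names from the pseudo-SQL above, and that a single source database is looked up per run rather than per row:
DECLARE @sourceDb sysname, @sql nvarchar(max)

-- Stage 1: read the source database name stored in the output DB.
SELECT TOP (1) @sourceDb = sourcedb
FROM [output].dbo.[description]

-- Stage 2: whitelist check - only proceed if it names a real database.
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = @sourceDb)
BEGIN
    -- Stage 3: interpolate the validated, quoted name and execute.
    SET @sql = N'SELECT o.[UID], o.[description], i.[data]
                 FROM [output].dbo.[description] AS o
                 LEFT JOIN ' + QUOTENAME(@sourceDb) + N'.dbo.datatable AS i
                   ON i.[UID] = o.[UID];'
    EXEC sys.sp_executesql @sql
END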