Finding the length of the longest column in a database - SELECT subquery

My skills in SQL are limited. I have a database (SQLBase in this case) with a couple of LONGVAR columns, and I'm searching for the actual length of the data in all columns that have that particular type.
SELECT tbname,name FROM sysadm.syscolumns WHERE coltype='LONGVAR';
The above statement works. It gives me all tables and the respective column names that have a LONGVAR datatype. Now I would like to take these results and search through all the respective tables (the rows, so the actual data) to find the lengths of the respective LONGVAR columns (to find the maximum, for instance, or those above a certain limit).
I have the idea that it can be solved with a subquery or nested SELECT statement, but I have no idea how to formulate it.

I don't have any real knowledge of SQLBase, so I may be off-base here, but if I were trying to do this on SQL Server, a simple approach would be something like the following:
SELECT
    tbname,
    name,
    'SELECT ''' + tbname + ''' AS TableName, ''' + name + ''' AS ColumnName, MAX(LEN(' + name + ')) AS ColumnLength FROM ' + tbname + ' -- add a WHERE clause here if needed' AS Query
FROM sysadm.syscolumns
WHERE coltype='LONGVAR';
This will output a set of values, which you could then copy/paste into a new query editor window and examine before running.
Other, more complex solutions would involve dynamic SQL that automatically executes each of these statements; but again, not knowing much about SQLBase, this is where I would start.
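For example, with a hypothetical table docs that has a LONGVAR column body, the Query column of one output row would contain something like the following, ready to paste and run (note that SQLBase's own length function may be named differently from SQL Server's LEN):
SELECT 'docs' AS TableName, 'body' AS ColumnName, MAX(LEN(body)) AS ColumnLength FROM docs -- add a WHERE clause here if needed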

Related

PostgreSQL: allow filtering by non-existent fields

I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something exists in the DB. Before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist, and I could get some results.
Is it possible to handle such cases and return what is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be as simple as the previous example, or much more complex, like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would try checking within the information schema first:
select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = 'table_name';
And then build the query based on the result.
Why don't you get a list of the columns that are in the table first? Like this:
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which you can get by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation or short-circuiting, so you get an error if a non-existent column is referenced.
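As a minimal sketch of that approach (assuming a hypothetical table items and the candidate columns size and length):
-- Step 1: find which of the candidate columns actually exist
select column_name
from information_schema.columns
where table_name = 'items'
  and column_name in ('size', 'length');
-- Step 2: suppose only "size" came back; the application then
-- builds and runs the query with the missing predicates removed:
select * from items where size = 10;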

Get the ResultSet of an SQL injection

Suppose the server-side code is something like this:
String id = getIdFromHttpRequest();
String value = getValueFromHttpRequest();
String query = "INSERT INTO users VALUES ('" + id + "', '" + value + "');";
ResultSet rs = SQL.doQuery(query); // I know it's not the real syntax, but the point is clear
Well, the injection is easy: I can make it execute an SQL command. But the problem is that I want to see the result set (I inject a SELECT command).
Is there a way of doing so?
You probably cannot achieve this.
As you know, an INSERT statement has no result set, even if you use SQL injection. At best, you could make it execute a SELECT as a scalar subquery. It's not hard to modify your example to execute the following:
INSERT INTO users VALUES ('8675309', '' || (SELECT ...blah blah...) || '');
But that still would not return a result set, because INSERT never has a result set.
You would need to execute a second query to do that. Some query interfaces do support running multiple statements in a single call to doQuery(), but this is not always true (it depends on the brand of database you use, and possibly on some configuration options).
INSERT INTO users VALUES (...whatever...);
SELECT * FROM secure_table WHERE (id = '8675309');
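For instance, a purely hypothetical payload for the id parameter such as
x', 'y'); SELECT * FROM secure_table WHERE (id = '8675309'); --
would, after concatenation, produce the pair of statements above, with the trailing -- commenting out the leftover of the original INSERT. Whether the second statement actually runs is exactly the multi-query question.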
With SQL injection, you can manipulate the SQL, but you can't manipulate the rest of the code in the application that runs the SQL. In the example you show, the app is designed to run an INSERT query, not an INSERT followed by a SELECT. The app would have no reason to fetch a result set after executing an INSERT.
It's hard to imagine how you could use SQL injection alone to trick the code you show into fetching and displaying a result set.
I don't think it is possible to use SQL injection to read data by exploiting a non-reading query.

syb_describe in DBD::Sybase

I am looking to extract the Sybase datatype for all the columns in a table. When I try to achieve this using $sth->{TYPE}, I get a numeric version of the datatype (i.e. instead of the Sybase datatype varchar, I get 0).
From the DBD::Sybase documentation, I noticed that the SYBTYPE attribute of the syb_describe function might produce what I am looking for. But it seems my understanding was not correct: SYBTYPE also reports the datatype only in numeric form.
Is there any way to fetch the textual representation of the actual Sybase datatype (instead of the number)?
It sounds like you wish to reverse engineer the create table definition. Here is an SQL script you can use for Sybase or SQL Server tables.
select c.name,
       "type(size)" = case
           when t.name in ("char", "varchar")
               then t.name + "(" + rtrim(convert(char(3), c.length)) + ")"
           else t.name
       end,
       "null" = case
           when convert(bit, (c.status & 8)) = 0 then "NOT NULL"
           else "NULL"
       end
from syscolumns c, systypes t
where c.id = object_id("my_table_name")
  and c.usertype *= t.usertype
order by c.colid
go
Note: This could still be edited with a nawk script to create a real SQL schema file.
The nawk script would strip the header, add "create table my_table_name", add commas, strip the footer and add a "go".
Good SQL, good night!
I found a workaround (note: this does not answer the question, though):
What I did was simply join the sysobjects, systypes and syscolumns system tables.
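A minimal sketch of that join (the table name is a placeholder, and this assumes the usual Sybase system tables):
select o.name as table_name,
       c.name as column_name,
       t.name as type_name
from sysobjects o, syscolumns c, systypes t
where c.id = o.id
  and t.usertype = c.usertype
  and o.name = "my_table_name"
order by c.colid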

SQL Server openrowset() test column count in MS Access table

Haven't found an answer via Google. I need to execute this code from a SQL Server stored proc.
I have a folder with 100+ Access DBs, each with a table called tblReports. Some of the Access DBs have an extra column in tblReports called AdminReport.
I need to capture the extra column if it exists; thus, I need to test how many columns are in tblReports so that I can use an if/else statement in the sp to generate the correct SQL based on the column count.
I'd love to read your thoughts; here's the relevant snippet.
set @sql = 'Insert into CustomerServiceIntranet.dbo.ReportCriteria
    (UserInfo,RptNbr,RptType,RptDesc,GroupCDBrk,ClientCDBrk,CategoryCDBrk,
    UserIDBrk,UnitCDBrk,WrkTypeBrk,StatCDBrk,StatDatBrk,
    ExperBrk,GroupList,ClientList,CategoryList,UserIDList,BusAreaList,
    WrkTypList,StatusList,QueueList,ReviewDay,ReviewDayNA,
    ErrorImpact,DateRange,DataSource,RptPathFile) '
    + 'Select ''' + @userfilename + ''', ors.* '
    + 'from (select * From Openrowset(''Microsoft.ACE.OLEDB.12.0'','''
    + @CurrentName
    + ''';''Admin'';,''select * from tblReports'')) ors'
The standard approach would be to link to tblReports by calling DoCmd.TransferDatabase. You would then be able to count the number of fields in the table before embarking on any SQL. At the end of the loop, you would delete the link by calling DoCmd.DeleteObject.
It certainly looks neater than what you are trying to do.
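If you'd rather stay inside the stored procedure, here is a hedged T-SQL sketch of one way to run the test: pull zero rows through OPENROWSET into a temp table and check whether AdminReport came along. The file path is hypothetical, and the approach assumes sp_executesql and COL_LENGTH are available:
DECLARE @CurrentName varchar(255);
DECLARE @probe nvarchar(max);
DECLARE @hasAdmin int;
SET @CurrentName = 'C:\reports\example.accdb';  -- hypothetical path to one of the Access DBs
-- Build a batch that copies zero rows (structure only) into a temp table,
-- then checks whether the optional AdminReport column exists in it.
SET @probe = N'select top 0 ors.* into #probe '
    + N'from Openrowset(''Microsoft.ACE.OLEDB.12.0'','''
    + @CurrentName
    + N''';''Admin'';,''select * from tblReports'') ors; '
    + N'select @hasAdmin = case when col_length(''tempdb..#probe'', ''AdminReport'') is null then 0 else 1 end;';
EXEC sp_executesql @probe, N'@hasAdmin int output', @hasAdmin = @hasAdmin OUTPUT;
IF @hasAdmin = 1
    PRINT 'tblReports includes AdminReport';  -- build the INSERT that includes the extra column
ELSE
    PRINT 'tblReports has no AdminReport';    -- build the INSERT without it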

SQL join from multiple tables

We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source table may be any one of several individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):
select o.[UID]
,o.[description]
,i.[data]
from [output].dbo.[description] as o
left join (select [UID]
,[data]
from
[output.sourcedb].dbo.datatable
) as i
on i.[UID] = o.[UID];
Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
Try using the exec function: specify the select as a string, adding variables for database names and tables where appropriate. A simple example:
DECLARE @dbName VARCHAR(255), @tableName VARCHAR(255), @colName VARCHAR(255)
...
EXEC('SELECT * FROM ' + @dbName + '.dbo.' + @tableName + ' WHERE ' + @colName + ' = 1')
No, the table must be known at the time you prepare the query. Otherwise, how would the query optimizer know what indexes it might be able to use? Or whether the table you reference even has a UID column?
You'll have to do this in stages:
1. Fetch the sourcedb value from your output database in one query.
2. Build an SQL query string, interpolating the value you fetched into the FROM clause of the second query. Be careful to check that this value contains a legitimate database name: for instance, filter out non-alpha characters, apply a regular expression, or look it up in a whitelist. Otherwise you're exposing yourself to a SQL injection risk.
3. Execute the new SQL string you built with exec(), as #user353852 suggests. A sketch of all three stages follows.
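A minimal T-SQL sketch of those stages, with hypothetical names throughout (a sourcedb column on the description table, a fixed UID, and QUOTENAME/sp_executesql available):
DECLARE @sourceDb sysname, @sql nvarchar(max);
-- Stage 1: fetch the source database name recorded in the output DB
SELECT @sourceDb = sourcedb
FROM [output].dbo.[description]
WHERE [UID] = 42;  -- hypothetical row
-- Stage 2: validate the name against the server's catalog (guards against injection)
IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = @sourceDb)
    RAISERROR('Unknown source database', 16, 1);
ELSE
BEGIN
    -- Stage 3: build and execute the dynamic join
    SET @sql = N'SELECT o.[UID], o.[description], i.[data] '
             + N'FROM [output].dbo.[description] AS o '
             + N'LEFT JOIN ' + QUOTENAME(@sourceDb) + N'.dbo.datatable AS i '
             + N'ON i.[UID] = o.[UID];';
    EXEC sp_executesql @sql;
END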