We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that reads from the output DB and JOINs it to data in one of the source DBs. However, the source data may live in any one of several individual tables :( The name of the source DB is stored in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):
select o.[UID]
      ,o.[description]
      ,i.[data]
from [output].dbo.[description] as o
left join (select [UID]
                 ,[data]
           from [output.sourcedb].dbo.datatable
          ) as i
    on i.[UID] = o.[UID];
Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
Try using the EXEC statement, specifying the SELECT as a string and concatenating in variables for database names and tables where appropriate. Simple example:
DECLARE @dbName VARCHAR(255), @tableName VARCHAR(255), @colName VARCHAR(255)
...
EXEC('SELECT * FROM ' + @dbName + '.dbo.' + @tableName + ' WHERE ' + @colName + ' = 1')
No, the table must be known at the time you prepare the query. Otherwise, how would the query optimizer know what indexes it might be able to use? Or whether the table you reference even has a UID column?
You'll have to do this in stages (a concrete sketch follows these steps):
1. Fetch the sourcedb value from your output database in one query.
2. Build an SQL query string, interpolating the value you fetched in the first query into the FROM clause of the second query. Be careful to check that this value contains a legitimate database name: for instance, filter out non-alphanumeric characters, apply a regular expression, or look it up in a whitelist. Otherwise you're exposing yourself to a SQL injection risk.
3. Execute the new SQL string you built with EXEC() as @user353852 suggests.
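To make those stages concrete, here's a minimal T-SQL sketch under some assumptions: the output DB's [description] table stores the source database name in a column called [sourcedb], each source DB exposes dbo.datatable with a [UID] column, and @uid is a hypothetical input identifying the row being processed:

DECLARE @sourceDb SYSNAME, @sql NVARCHAR(MAX), @uid INT = 1; -- @uid is a hypothetical input

-- Stage 1: fetch the source database name for the row in question
SELECT @sourceDb = o.[sourcedb]
FROM [output].dbo.[description] AS o
WHERE o.[UID] = @uid;

-- Whitelist check: only proceed if the value names a real database
IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = @sourceDb)
BEGIN
    RAISERROR('Unknown source database: %s', 16, 1, @sourceDb);
    RETURN;
END;

-- Stage 2: interpolate the (quoted) name into the FROM clause and execute
SET @sql = N'SELECT o.[UID], o.[description], i.[data]
FROM [output].dbo.[description] AS o
LEFT JOIN ' + QUOTENAME(@sourceDb) + N'.dbo.datatable AS i
    ON i.[UID] = o.[UID]
WHERE o.[UID] = @uid;';

EXEC sp_executesql @sql, N'@uid INT', @uid = @uid;

QUOTENAME plus the sys.databases lookup covers the whitelist advice in step 2, and everything except the database name stays parameterized.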
I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something is in the DB at all; before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist, even though the size column could exist and could give me some results.
Is it possible to handle such cases and return whatever is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and are not created by me directly; I can only modify them.
That means the query can be as simple as the previous example, or much more complex, like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would check the information schema first:
select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = 'table_name';
and then build the query based on the result.
Why don't you get a list of the columns that are in the table first? Like this:
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except to construct an SQL string from the list of available columns, which you can get by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation and no short-circuiting, so you get an error as soon as a non-existing column is referenced.
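To illustrate that construction step, here is a hedged PostgreSQL sketch; the table name mytable is a hypothetical stand-in, and the candidate conditions come from the question's example:

-- Keep only the conditions whose columns actually exist, then assemble a WHERE fragment
SELECT string_agg(format('%I = 10', column_name), ' OR ')
FROM information_schema.columns
WHERE table_name = 'mytable'
  AND column_name IN ('size', 'length');
-- If only "size" exists, this yields: size = 10

The client (the Go code, say) can then splice the returned fragment into the final query it executes.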
I am new to using cursors for looping through a set of rows, but so far I have always had prior knowledge of which columns I am about to read. E.g.:
DECLARE db_cursor CURSOR FOR
SELECT Column1, Column2
FROM MyTable
DECLARE @ColumnOne VARCHAR(50), @ColumnTwo VARCHAR(50)
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @ColumnOne, @ColumnTwo
...
But the tables I am about to read into my key/value table have no fixed structure, and I need to be able to process them one row at a time. How, using a nested cursor, can I loop through all the columns of the fetched row and process them according to their type and name?
T-SQL cursors are not really designed to read data from tables of unknown structure. The two possibilities I can think of to achieve something in that direction are:
First, read the column names of the unknown table from the Information Schema Views (see System Information Schema Views (Transact-SQL)), then use dynamic SQL to create the cursor (a sketch follows these two options).
If you simply want to get all the columns as one large string value, you might also try a simple SELECT * FROM TABLE_NAME FOR XML AUTO and further process the retrieved data for your purposes (see FOR XML (SQL Server)).
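For the first option, here's a minimal hedged sketch; MyTable is a hypothetical table name:

DECLARE @table SYSNAME = N'MyTable';
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Assemble a comma-separated, quoted column list for the table
SELECT @cols = STUFF((
    SELECT N', ' + QUOTENAME(COLUMN_NAME)
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @table
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')), 1, 2, N'');

-- Declare the cursor in dynamic SQL; GLOBAL lets it survive the EXEC scope
SET @sql = N'DECLARE db_cursor CURSOR GLOBAL FOR SELECT ' + @cols
         + N' FROM ' + QUOTENAME(@table) + N';';
EXEC sp_executesql @sql;
OPEN db_cursor;

Note the remaining catch: FETCH ... INTO still needs one variable per column, so for a truly unknown structure you end up back at dynamic SQL or the XML route described next.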
SQL is not very good at dealing with sets generically. In most cases you must know column names, data types, and much more in advance. But there is XQuery: you can transform any SELECT into XML rather easily and use its powerful facilities for dealing with generic structures. I would not recommend this, but it might be worth a try:
CREATE PROCEDURE dbo.Get_EAV_FROM_SELECT
(
    @SELECT NVARCHAR(MAX)
)
AS
BEGIN
    DECLARE @tmptbl TABLE(TheContent XML);
    DECLARE @cmd NVARCHAR(MAX) = N'SELECT (' + @SELECT + N' FOR XML RAW, ELEMENTS XSINIL);';
    INSERT INTO @tmptbl EXEC(@cmd);
    SELECT r.value('*[1]/text()[1]','nvarchar(max)') AS RowID
          ,c.value('local-name(.)','nvarchar(max)') AS ColumnKey
          ,c.value('text()[1]','nvarchar(max)') AS ColumnValue
    FROM @tmptbl t
    CROSS APPLY t.TheContent.nodes('/row') A(r)
    CROSS APPLY A.r.nodes('*[position()>1]') B(c);
END;
GO
EXEC Get_EAV_FROM_SELECT @SELECT='SELECT TOP 10 o.object_id,o.* FROM sys.objects o';
GO
--Clean-Up for test purpose
DROP PROCEDURE Get_EAV_FROM_SELECT;
The idea in short:
The SELECT is passed into the procedure as a string. Inside the SP we build a statement dynamically and create XML from it.
The very first column is taken to be the row's ID; if it isn't (as with sys.objects), we can write the SELECT so as to force it that way.
The inner SELECT reads each row and returns a classic EAV list.
My skills in SQL are limited:
I have a database (SQLBase in this case) that has a couple of LONGVAR columns.
I'm searching for the actual length of the data in all columns that have a particular type.
SELECT tbname,name FROM sysadm.syscolumns where coltype='LONGVAR';
The above statement works. It gives me all tables and the respective column names that have a LONGVAR datatype. Now I would like to take these data and search through all the respective tables (the rows, i.e. the data) to find the lengths of the respective LONGVAR columns (to find the maximum, for instance, or values above a certain limit).
I have the idea that this can be solved with a subquery or nested SELECT statement, but I have no idea how to formulate it.
I don't have any real knowledge of SQLBase, so I may be off base here, but if I were trying to do this on SQL Server, a simple approach would be to do something like the following:
SELECT
tbname,
name,
'SELECT ''' + tbname + ''' AS TableName, ''' + name + ''' AS ColumnName, MAX(LEN(' + name + ')) AS ColumnLength FROM ' + tbname + ' -- add a WHERE clause here if needed' AS Query
FROM sysadm.syscolumns
WHERE coltype='LONGVAR';
This will output a set of values, which you could then copy/paste into a new query editor window and examine before running.
Other, more complex solutions would involve dynamic SQL that automatically executes each of these statements; but again, not knowing much about SQLBase, this is where I would start.
I haven't found an answer via Google. I need to execute this code from a SQL Server stored proc.
I have a folder with 100+ Access DBs, each of which has a table called tblReports. Some of the Access DBs have an extra column in tblReports called AdminReport.
I need to capture the extra column if it exists, so I need to test how many columns are in tblReports; that lets me use an if/else statement in the sp to generate the correct SQL based on the column count.
I'd love to read your thoughts; here's the relevant snippet.
set @sql = 'Insert into CustomerServiceIntranet.dbo.ReportCriteria
            (UserInfo,RptNbr,RptType,RptDesc,GroupCDBrk,ClientCDBrk,CategoryCDBrk,
             UserIDBrk,UnitCDBrk,WrkTypeBrk,StatCDBrk,StatDatBrk,
             ExperBrk,GroupList,ClientList,CategoryList,UserIDList,BusAreaList,
             WrkTypList,StatusList,QueueList,ReviewDay,ReviewDayNA,
             ErrorImpact,DateRange,DataSource,RptPathFile)'
          + 'Select '''+ @userfilename + ''', ors.* '
          + 'from (select * From Openrowset(''Microsoft.ACE.OLEDB.12.0'','''
          + @CurrentName
          + ''';''Admin'';,''select * from tblReports'')) ors'
The standard approach would be to link to tblReports by calling DoCmd.TransferDatabase. You would then be able to count the number of fields in the table before embarking on any SQL. At the end of the loop you would delete the link by calling DoCmd.DeleteObject.
It certainly looks neater than what you are trying to do.
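If you would rather stay entirely on the SQL Server side, a hedged alternative (not what the answer above proposes) is to probe the column set through OPENROWSET itself; the global temp table ##probe is an illustrative name, and @CurrentName is the variable from the question's snippet:

DECLARE @probe NVARCHAR(MAX), @hasAdminReport INT;

IF OBJECT_ID('tempdb..##probe') IS NOT NULL DROP TABLE ##probe;

-- Pull a single row so SQL Server materializes the table's real column set
SET @probe = N'SELECT * INTO ##probe FROM Openrowset(''Microsoft.ACE.OLEDB.12.0'','''
           + @CurrentName + N''';''Admin'';,''select top 1 * from tblReports'')';
EXEC (@probe);

-- Check whether this particular Access file has the extra column
SELECT @hasAdminReport = COUNT(*)
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID('tempdb..##probe')
  AND name = 'AdminReport';

IF @hasAdminReport = 1
    PRINT 'tblReports includes AdminReport'; -- branch your INSERT accordingly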
I have a procedure that returns the rows whose IDs match the passed argument, e.g. 1,5,7,9:
ALTER PROCEDURE [dbo].[get_data]
    @MyCodes as varchar(max) = ''
AS
BEGIN
    DECLARE @query as nvarchar(max)
    set @query = 'SELECT name FROM user WHERE id IN (@p_MyCodes)'
    exec sp_executesql @query,
        N'@p_MyCodes varchar(max)',
        @p_MyCodes = @MyCodes
END
That procedure generates the error "Error converting data type varchar to numeric." when I pass an argument such as 3,7,5.
What's wrong?
I don't think this is going to accomplish what you are expecting it to. The error you are getting occurs because SQL Server can't convert the string '3,7,5' to a number (note that it is NOT trying to parse out your individual values).
Two ways to get what you want:
1) Create a table-valued function that takes a CSV string and returns the results (I'm sure there are many on the web; here's a related question: Split function equivalent in T-SQL?). This is nice because you can get rid of the SP_EXECUTESQL call entirely; your query becomes (a minimal sketch of such a function follows after this answer):
SELECT name FROM user WHERE id IN (SELECT value FROM dbo.f_Split(@MyCodes))
2) Change your SET to something like:
set @query = 'SELECT name FROM user WHERE id in (' + @MyCodes + ')'
I don't recommend #2; it opens a SQL injection hole.
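For completeness, here is one minimal hedged sketch of such a split function; the name dbo.f_Split matches the query in option 1, and integer IDs are assumed:

CREATE FUNCTION dbo.f_Split (@list VARCHAR(MAX))
RETURNS @result TABLE (value INT)
AS
BEGIN
    -- Walk the string comma by comma, casting each piece to INT
    DECLARE @pos INT = CHARINDEX(',', @list);
    WHILE @pos > 0
    BEGIN
        INSERT INTO @result (value) VALUES (CAST(LEFT(@list, @pos - 1) AS INT));
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        SET @pos = CHARINDEX(',', @list);
    END;
    -- Whatever remains after the last comma is the final value
    IF LEN(@list) > 0
        INSERT INTO @result (value) VALUES (CAST(@list AS INT));
    RETURN;
END;

A loop-based function like this is fine for short lists; for heavy use, the set-based splitters linked elsewhere in this thread will perform better.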
You cannot pass the ID list as a parameter. You could create the SQL statement by concatenation:
set @query = 'SELECT name FROM user WHERE id IN (' + @MyCodes + ')'
exec sp_executesql @query
Note, though, that this disables any kind of execution plan reuse and opens the door to SQL injection.
A better solution would be to split the list into a temp table (or table variable) and use a JOIN. Last year, I wrote a blog post about different ways to split strings in T-SQL:
http://florianreischl.blogspot.com/2009/09/high-performance-string-split-functions.html
You can't use a comma-separated string with the IN operator; you have to use the actual values. So you either have to split the string up and put the values in a temporary table, or concatenate the string into the query:
set @query = 'SELECT name FROM user WHERE id IN (' + @MyCodes + ')'
Note that this opens up a potential security hole for SQL injection. You should not do this if you don't have full control over where the string comes from.