I am trying to dynamically set my SSIS Default Buffer Max Rows property by running the following query in an Execute SQL Task on the PreExecute event of a Data Flow task:
SELECT
SUM (max_length) [row_length]
FROM sys.tables t
JOIN sys.columns c
ON t.object_id=c.object_id
JOIN sys.schemas s
ON t.schema_id=s.schema_id
WHERE t.name = 'TableName'
AND s.name = 'SchemaName'
I am attempting to be clever and use the name of my Data Flow Task (which is the same as my Destination table name) as a parameter in my Execute SQL Task by referencing the System::TaskName variable.
The problem is that everything seems to validate correctly, but when it comes to executing the process, the PreExecute phase fails.
I have since found out that this is because System::TaskName is not available during anything other than the Execute phase.
Are there any alternative variables / methods that I can use other than dumping a load of script tasks that manually change a variable into my flow?
Thanks,
Daniel
Found the answer:
I have used System::SourceName instead.
Daniel
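For reference, the row length returned by a query like the one above usually feeds a buffer-sizing calculation along these lines. This is only a sketch: the 10 MB figure is the usual DefaultBufferSize default, and the row length is a made-up example value.

```python
# Sketch of the DefaultBufferMaxRows calculation an SSIS package might apply.
# DEFAULT_BUFFER_SIZE reflects the usual 10 MB DefaultBufferSize default;
# row_length stands in for the SUM(max_length) value returned by the query.
DEFAULT_BUFFER_SIZE = 10 * 1024 * 1024  # bytes
row_length = 250                        # example row length in bytes

default_buffer_max_rows = DEFAULT_BUFFER_SIZE // row_length
print(default_buffer_max_rows)  # 41943
```

The resulting value would then be assigned to the Data Flow task's DefaultBufferMaxRows property via an expression or variable.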
I am currently comparing the performance of PostgreSQL with several other SQL systems. I am aware of the \timing option for timing queries. However, I would very much like to automate the process of logging each executed statement together with its timing. I imagine there is a simple way to do this?
Let's say I run:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear'
I want to automatically save into a text file:
CREATE TABLE t1 AS
SELECT itemID, prodCategory
FROM products
WHERE prodCategory = 'footwear'
SELECT 7790
Time: 10.884 ms
If OS specifics are needed, I am using macOS.
I just learned that you can use the script command:
script filename
This saves everything that is printed on your screen. If timing is on, it records the queries along with the query time outputs.
To stop recording, simply type exit.
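An alternative to capturing the terminal session is to do the timing and logging from a small driver script. Here is a minimal sketch using Python's standard-library sqlite3 module as a stand-in for a PostgreSQL driver such as psycopg2; the table, data, and log file name are invented:

```python
import sqlite3
import time

def run_and_log(conn, sql, log_path="query_log.txt"):
    """Execute one statement, then append the SQL text and its elapsed time to a log file."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    with open(log_path, "a") as log:
        log.write(sql + "\n")
        log.write(f"Time: {elapsed_ms:.3f} ms\n\n")
    return rows

# Demo against an in-memory database with made-up data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (itemID INTEGER, prodCategory TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1, "footwear"), (2, "hats")])
rows = run_and_log(conn, "SELECT itemID FROM products WHERE prodCategory = 'footwear'")
```

The log file then contains each statement followed by its "Time: ... ms" line, matching the format the question asks for.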
I recently began working in Toad for DB2, and I am trying to design some templates where I use a specific term across multiple queries.
i.e.
SELECT * FROM TABLE1 WHERE PERSON = 'X';
SELECT * FROM TABLE2 WHERE PERSON = 'X';
SELECT * FROM TABLE3 WHERE PERSON = 'X';
and so on...
I was hoping to figure out a way to set a single variable so that I can write the code more like:
SET PERS_CODE = 'X'
SELECT * FROM TABLE1 WHERE PERSON = :PERS_CODE;
SELECT * FROM TABLE2 WHERE PERSON = :PERS_CODE;
SELECT * FROM TABLE3 WHERE PERSON = :PERS_CODE;
and so on...
The difficulty is that I rarely run all the queries as a batch. I run one query at a time to research how best to write code for BA modifications, review the results, and note the terms that need to change and the terms that uniquely identify records. So I run them ad hoc, and I don't want to run them as a batch or as a single procedure.
Instead, I am looking for a way to define the variable so that when I run any single query it picks up the variable's definition, without needing to put the SET statement on each individual line, without running all the queries as a batch (or as a single procedure), and without using the Project Manager Bind Variables table.
My Research
I have come across the variable definitions table in the Project Manager; however, this is problematic because I would need to remember to change the variable on another screen for each work item. I want to define the variables within a single Editor window so I can switch between Editor windows that use the same variable name (but different values) without having to remember to reset the value.
I have also come across the "--TOAD:" syntax, which defines the variable from within the Editor window. However, the definition has to sit with the code being run; if I needed to run the query for TABLE2 and not TABLE1, I wouldn't be able to use the variable because the definition would be above the TABLE1 query code.
i.e.
--TOAD: SET PERS_CODE = 'X'
SELECT * FROM TABLE1 WHERE PERSON = :PERS_CODE;
SELECT * FROM TABLE2 WHERE PERSON = :PERS_CODE;
This wouldn't allow me to use the variable definition for TABLE2 without running the TABLE1 query or modifying the code each time.
You may find substitution variables to be more appropriate here.
SELECT * FROM TABLE1 WHERE PERSON = &&PERS_CODE;
SELECT * FROM TABLE2 WHERE PERSON = &&PERS_CODE;
When you execute one of those statements, you are prompted for a value for PERS_CODE. When you run multiple statements as a script, all of them reuse the same value. Substitution variables can be defined with either a single ampersand or a double ampersand: with a single ampersand you are prompted for each occurrence in your script, even when the variables share a name, while a double ampersand prompts once and every other occurrence reuses the value. Toad's Editor remembers the last used value, so you can continue running queries in an ad hoc manner and only have to reset the value when you want to test a different one.
If PERS_CODE is a string then use it in your query with quotes.
SELECT * FROM TABLE1 WHERE PERSON = '&&PERS_CODE';
EDIT: For performance reasons bind variables are better (:VAR_NAME) but for your needs I think substitution variables will make things easier for you.
I have a function (stored procedure) defined in a database that I would like to edit.
I think one way of doing this is to dump the function definition to a SQL file, edit the SQL file, then replace the definition in the database with the edited version.
Is it possible to do this (dump the definition to a SQL file)?
What I have been doing in the past is to use psql to connect to the database, run \df+ on the function, copy the output to a text file, and massage the text so it looks like a function declaration; but this is time consuming, and I'm wondering if there is a sleeker way of doing it.
I am using PostgreSQL 9.1 if it matters.
EDIT:
I accepted Mike Buland's answer because he provided the correct answer in his comment, which was to run \ef function in psql.
This is actually listed in a previous question:
SELECT proname, prosrc
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_proc p
ON pronamespace = n.oid
WHERE nspname = 'public';
List stored functions that reference a table in PostgreSQL
You should be able to use this on the command line or with a client to read the current text of the proc and do anything you'd like with it :)
I hope that helps
You would also need the function arguments:
SELECT p.proname
, pg_catalog.pg_get_function_arguments(p.oid) as params
, p.prosrc
FROM pg_catalog.pg_proc p
WHERE oid = 'myschema.myfunc'::regproc;
Or, to make it unambiguous for functions with parameters:
WHERE oid = 'myschema.myfunc(text)'::regprocedure;
Or you can use pgAdmin to do what you describe a lot more comfortably. It displays the complete SQL script to recreate objects and has an option to copy that to the edit window automatically. Edit and execute.
I think you need to take a step back and see the root issue here, which is that you're not using version control to version your files (database objects). There are plenty of free options, Git and Mercurial to name a few.

So use psql, or the query Mike provided, to dump the structure out and put it in version control. Going forward, check out from version control and make edits there. You should be deploying code from this version control system to the database server.

It is also useful to reconcile, automatically and on a regular basis, that the code you have in version control matches the code in the database. In theory, if a strict process is in place, code that isn't checked into source control should never make it to the database, and you never have to wonder whether source control matches what the database has. However, I don't trust that people with admin access won't abuse their privilege, so I put steps in place to check this. If someone is found to be abusing their privileges, that can be dealt with in other ways.
Currently my development environment uses SQL Server Express 2008 R2 and VS2010 for my project development.
My question is best explained with a scenario:
Development goal:
I am developing Windows services in .NET C# for something like data mining or data warehousing.
That means two or more databases are involved.
My scenario is like this:
I have a database with a table called SQL_Stored, which has a column named QueryToExec.
The first idea that came to mind was to write a stored procedure, so I tried to come up with one named Extract_Sources with two parameters, @ID and @TableName.
My first step is to select the SQL that needs to be executed from the SQL_Stored table. I tried to get the SQL using a simple SELECT statement such as:
Select Download_Sql As Query From SQL_Stored
Where ID = @ID AND TableName = @TableName
Is it possible to get the result this way, or is there another way to do so?
My second step is to execute the SQL retrieved from the SQL_Stored table. Is it possible to execute the query selected in the preceding step of this particular stored procedure?
Do I need to create a variable to store the SQL?
Thank you, and I appreciate all your help. Please don't hesitate to point out my errors or mistakes, because I can learn from them.
PS_1:I am sorry for my poor English.
PS_2:I am new to stored procedure.
LiangCk
Try this:
DECLARE @download_sql VARCHAR(MAX)
Select
@download_sql = Download_Sql
From
SQL_Stored
Where
AreaID = @AreaID
AND TableName = @TableName
EXEC (@download_sql)
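The fetch-then-execute pattern above can be sanity-checked outside T-SQL; here is a minimal sketch using Python's sqlite3 module, with schema and data invented to mirror the question:

```python
import sqlite3

# Made-up schema mirroring the question: a table holding SQL text keyed by area/table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SQL_Stored (AreaID INTEGER, TableName TEXT, Download_Sql TEXT);
    CREATE TABLE Sales (amount INTEGER);
    INSERT INTO Sales VALUES (10), (20);
    INSERT INTO SQL_Stored VALUES (1, 'Sales', 'SELECT SUM(amount) FROM Sales');
""")

# Step 1: look up the statement text, like SELECT @download_sql = Download_Sql ...
stored_sql = conn.execute(
    "SELECT Download_Sql FROM SQL_Stored WHERE AreaID = ? AND TableName = ?",
    (1, "Sales"),
).fetchone()[0]

# Step 2: execute the fetched text, the counterpart of EXEC (@download_sql)
total = conn.execute(stored_sql).fetchone()[0]
print(total)  # 30
```

As in the T-SQL version, the statement text is just data until the second step runs it, so the usual cautions about executing stored SQL (injection, permissions) apply.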
Is it possible to declare a variable within a View? For example:
Declare @SomeVar varchar(8) = 'something'
gives me the syntax error:
Incorrect syntax near the keyword 'Declare'.
You are correct. Local variables are not allowed in a VIEW.
You can set a local variable in a table valued function, which returns a result set (like a view does.)
http://msdn.microsoft.com/en-us/library/ms191165.aspx
e.g.
CREATE FUNCTION dbo.udf_foo()
RETURNS @ret TABLE (col INT)
AS
BEGIN
DECLARE @myvar INT;
SELECT @myvar = 1;
INSERT INTO @ret SELECT @myvar;
RETURN;
END;
GO
SELECT * FROM dbo.udf_foo();
GO
You could use WITH to define your expressions. Then do a simple Sub-SELECT to access those definitions.
CREATE VIEW MyView
AS
WITH MyVars (SomeVar, Var2)
AS (
SELECT
'something' AS 'SomeVar',
123 AS 'Var2'
)
SELECT *
FROM MyTable
WHERE x = (SELECT SomeVar FROM MyVars)
EDIT: I tried using a CTE in my previous answer, which was incorrect, as pointed out by @bummi. This option should work instead:
Here's one option using a CROSS APPLY, to kind of work around this problem:
SELECT st.Value, Constants.CONSTANT_ONE, Constants.CONSTANT_TWO
FROM SomeTable st
CROSS APPLY (
SELECT 'Value1' AS CONSTANT_ONE,
'Value2' AS CONSTANT_TWO
) Constants
@datenstation had the correct concept. Here is a working example that uses a CTE to define the variables:
CREATE VIEW vwImportant_Users AS
WITH params AS (
SELECT
varType='%Admin%',
varMinStatus=1)
SELECT status, name
FROM sys.sysusers, params
WHERE status > varMinStatus OR name LIKE varType
SELECT * FROM vwImportant_Users
also via JOIN
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers INNER JOIN params ON 1=1
WHERE status > varMinStatus OR name LIKE varType
also via CROSS APPLY
WITH params AS ( SELECT varType='%Admin%', varMinStatus=1)
SELECT status, name
FROM sys.sysusers CROSS APPLY params
WHERE status > varMinStatus OR name LIKE varType
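The params-CTE idea is portable SQL; as a quick sanity check, here is the join form exercised with Python's sqlite3 module (table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (status INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(2, "Admin_Jo"), (0, "guest")])

# A CTE holds the "variables"; joining it in makes them visible to the WHERE clause.
rows = conn.execute("""
    WITH params AS (SELECT '%Admin%' AS varType, 1 AS varMinStatus)
    SELECT status, name
    FROM users, params
    WHERE status > varMinStatus OR name LIKE varType
""").fetchall()
print(rows)  # [(2, 'Admin_Jo')]
```

Because the params CTE produces exactly one row, the comma join (or INNER JOIN ... ON 1=1) simply attaches the values to every row without changing the row count.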
Yes, this is correct: you can't have variables in views (there are other restrictions too).
Views can be used in cases where the result can be expressed as a single SELECT statement.
Using functions as spencer7593 mentioned is a correct approach for dynamic data. For static data, a more performant approach, and one consistent with SQL data design (versus the anti-pattern of writing massive procedural code in sprocs), is to create a separate table with the static values and join to it. This is extremely beneficial from a performance perspective, since the SQL engine can build effective execution plans around a JOIN, and you have the potential to add indexes as well if needed.
The disadvantage of using functions (or any inline calculated values) is the callout happens for every potential row returned, which is costly. Why? Because SQL has to first create a full dataset with the calculated values and then apply the WHERE clause to that dataset.
Nine times out of ten you should not need dynamically calculated cell values in your queries. It's much better to figure out what you will need, design a data model that supports it, populate that model with semi-dynamic data (via batch jobs, for instance), and let the SQL engine do the heavy lifting via standard SQL.
What I do is create a view that performs the same select as the table variable and reference that view from the second view. A view can select from another view, and this achieves the same result.
How often do you need to refresh the view? I have a similar case where the new data comes once a month; then I have to load it, and during the loading process I have to create new tables. At that moment I alter my view to account for the changes.
I used as base the information in this other question:
Create View Dynamically & synonyms
In there, two approaches are proposed:
using synonyms;
using dynamic SQL to create the view (this is what helped me achieve my result).