Why does my typed dataset not like temporary tables? - tsql

I am attempting to add a TableAdapter for a stored procedure in my SQL Server 2005 Express database. The stored procedure, however, uses a temporary table called #temp. When creating the table adapter, Visual Studio complains "Unknown Object '#temp'" and says that the stored procedure returns 0 columns. This is a problem because I use that stored procedure with a Crystal Report and need those columns.
How can I fix this?

Bizarre. According to this, you add
IF 1=0 BEGIN
    SET FMTONLY OFF
END
to the SP right after its AS keyword, and it works. Visual Studio now has no problem with it. I have no idea why it works like this, or why it would work, but it does.
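For context, the usual explanation is that Visual Studio discovers a procedure's result-set metadata by running it with SET FMTONLY ON, a mode in which statements like CREATE TABLE #temp are not actually executed (hence "Unknown Object '#temp'") while conditional branches are still processed, so the IF 1=0 block switches FMTONLY back off during metadata discovery only. Below is a minimal sketch of where the trick sits; the procedure, table, and column names are made up for illustration:

CREATE PROCEDURE dbo.GetReportData
AS
BEGIN
    -- Never true at run time, but still processed under SET FMTONLY ON,
    -- forcing a real execution during metadata discovery.
    IF 1=0 BEGIN
        SET FMTONLY OFF
    END

    CREATE TABLE #temp
    (
        SomeText1 nvarchar(255),
        SomeText2 nvarchar(255)
    )

    INSERT INTO #temp (SomeText1, SomeText2)
    VALUES (N'a', N'b')

    SELECT SomeText1, SomeText2 FROM #temp

    DROP TABLE #temp
END
GO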

This may be an old thread with an accepted answer, but when someone reads the stored procedure later and sees that code, they really won't understand why it's there. There is another way to do this properly, and it is to simply declare the table as a table variable, like this:
DECLARE @temp TABLE
(
SomeText1 nvarchar(255),
SomeText2 nvarchar(255)
)
Also, don't forget to remove the DROP TABLE at the end.
PS: If you really do need the temporary table (because you have to CREATE it dynamically, for instance), then you have to use the code given in the previous answer. Hope this helps.
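For comparison, here is a sketch of a whole procedure written with the table variable instead; the names are again made up, and note that no DROP TABLE is needed at the end:

CREATE PROCEDURE dbo.GetReportData
AS
BEGIN
    DECLARE @temp TABLE
    (
        SomeText1 nvarchar(255),
        SomeText2 nvarchar(255)
    )

    INSERT INTO @temp (SomeText1, SomeText2)
    VALUES (N'a', N'b')

    SELECT SomeText1, SomeText2 FROM @temp
    -- no DROP TABLE: the variable goes out of scope on its own
END
GO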

Related

On INSERT to a table INSERT data in connected tables

I have two tables that have a column named id_user in common. These two tables are created by my Drupal webpage at some point (I don't know when, because I didn't create the Netbeans project).
I checked on the internet and found that by adding REFERENCES 1sttable (id_user) to the second table, it should copy the value from the 1sttable (where a row is always created when a new user arrives) to the id_user value of the 2ndtable (which I don't know at what point is created). Is that correct?
If it's not correct I would like to know a way in pgAdmin that could make me synchronize those tables, or at least create both of them in the same moment.
The problem I have is that the new user has a new row on 1sttable automatically as soon as he registers, while to get a new row on 2ndtable it needs some kind of "activation" like inserting all of the data. What I'm looking for is a way that as soon as there is a new row in the 1sttable, it automatically creates the new row on the other table too. I don't know how to make it more clear (English is not my native language).
The solution you gave me seems clear for the question, but the problem is a little bigger: the two tables hold different kinds of data. One is in MySQL, with the user data (Drupal's default for users); then I have two in PostgreSQL, both with the same primary key (id_user):
the first has 118 columns, most of them real or integer;
the second has 50 columns, with mixed types.
The web application I'm using needs both of these rows with all the values NOT EMPTY (otherwise I get a NullPointerException) in order to work, so what I'm searching for is (I think):
when the user registers in Drupal (entering his email), the two filled-in rows are created automatically, so that the web application works as soon as the email is stored in MySQL. Is that possible? Is it well explained?
My environment is:
windows server 2008 enterprise edition
glassfish 2.1
netbeans 6.7.1
drupal 6.17
postgresql 8.4
mysql 5.1.48
pgAdmin is just the GUI. You mean PostgreSQL, the RDBMS.
A foreign key constraint like the one you have only enforces that no value can be used that isn't present in the referenced column. You can use ON UPDATE CASCADE or ON DELETE CASCADE to propagate changes from the referenced column, but you cannot create new rows with it like you describe. You picked the wrong tool.
What you describe could be achieved with a trigger. Another, more complex way would be a RULE. Go with a trigger here.
In PostgreSQL you need a trigger function, mostly using plpgsql, and a trigger on a table that makes use of it.
Something like:
CREATE OR REPLACE FUNCTION trg_insert_row_in_tbl2()
  RETURNS trigger AS
$func$
BEGIN
   INSERT INTO tbl2 (my_id, col1)
   VALUES (NEW.my_id, NEW.col1);   -- more columns?

   RETURN NEW;   -- return value doesn't matter much for an AFTER trigger
END
$func$ LANGUAGE plpgsql;
And a trigger AFTER INSERT on tbl1:
CREATE TRIGGER insaft
AFTER INSERT ON tbl1
FOR EACH ROW EXECUTE PROCEDURE trg_insert_row_in_tbl2();
You might want to read about using Drupal hooks to add extra code to be run when a user is registered. Once you know how to use hooks, you can write code (in a module) to insert a corresponding record in the 2nd table. A good candidate hook to use here would be hook_user for Drupal 6 or hook_user_insert for Drupal 7.
The REFERENCES you read about is part of an SQL command to define a foreign key constraint from the second table to the first. This is not strictly necessary to solve your problem, but it can help in keeping your database consistent. I suggest you read up on database structures and constraints if you want to learn more on this topic.
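If you do add the constraint for consistency, the DDL would look something like this hypothetical example (firsttable/secondtable stand in for your real names, since identifiers cannot start with a digit):

ALTER TABLE secondtable
    ADD CONSTRAINT secondtable_id_user_fkey
    FOREIGN KEY (id_user) REFERENCES firsttable (id_user)
    ON UPDATE CASCADE;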

Creating Stored Procedures that can work with different tables

I need to use the same stored procedures against many tables in my DB, all with the same structure. This is data loaded from customers, with one table per customer, and the data needs calculations/checks run before it's loaded into our Data Warehouse.
So far these are the options and issues I've found and I'm looking for a better pattern/approach.
1. Create a view that points to the table I want to process and have the SPs talk to that view. This works well (especially once I'd worked out how to create views 'automagically' based on their columns), but the view can only point to one table at a time, forcing the system to deal with one customer at a time.
2. Use dynamic SQL within each SP - this makes the SPs much harder to read/debug and for those reasons has been ruled out.
3. Create a partitioned view across all the tables and then use a parameterised table function to return just the data we're interested in - ah, but then I can't update the data, as the function returns a table that can only be used for SELECT.
4. Use dynamic SQL inside a function (can't be done) to create a view (which also can't be done)... give up.
5. Within the SP, create a temp table over the target table using dynamic SQL - but then the temp table only exists in the session that runs the dynamic SQL, not the 'parent' session that's running the SP... give up.
6. Create a global temp table using dynamic SQL to avoid the scope issue of option 5, then run the SP against the global temp table. Still runs into the single-customer issue.
7. Create the view as in option 1 within a transaction, run all the SPs, and then commit - works fine for one user, but any others are now blocked trying to create a new view of the same name.
8. Use a temporary view... can't in T-SQL.
9. Move all the code into .NET - but we have environment issues where T-SQL is much easier to host/run.
I know I'm not the only person who has this problem. Have any of you good people solved it? Please help.
Maybe your approach is wrong. I will go into details in a while, but it seems that your problem can be solved using SSIS.
-- Updated answer:
First, the big picture:
The most workable way to process the tables dynamically is to use a script instead of a stored procedure. If the target table is chosen at runtime, you will not get any of the performance advantages of stored procedures (i.e. cached execution plans) anyway. A SQL script can easily be pointed at a different table at runtime by using placeholders and replacing them before execution.
The script can be loaded from the filesystem, a variable, a text column in a table, etc. The loading step consists of reading the script content into a string variable, and it happens once.
The next step is the preparation stage, which is executed once for each table to be processed. Its main job is to replace the table placeholder with the name of the table currently being processed. It is also possible to substitute parameter values here, like any parameters you would otherwise pass into the SP you already wrote.
The last step is the execution of the script. As it is already loaded into a variable and the placeholders have been replaced with the current table name, you can safely run an Execute SQL Task with the SQL variable as its input. This, of course, happens once for each table you want to process.
Ok. Now let's see this in action.
This is a sample database model:
CREATE TABLE [dbo].[t_n](
[id] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50) NOT NULL,
[start] [datetime] NULL,
CONSTRAINT [PK_t_n] PRIMARY KEY CLUSTERED ([id] ASC)
) ON [PRIMARY]
where t_n represents any table (t_1, t_2, t_3, etc).
This is your current stored procedure:
CREATE PROCEDURE SpProcessT_n
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM [t_1];
END
GO
Now, transform this stored procedure into a SQL script, placing a placeholder where the table name goes:
SET NOCOUNT ON;
SELECT * FROM [$table_name];
I chose to save this in a .sql file in the filesystem to keep the POC as simple as possible.
Next, create an SSIS package like this:
These are the settings I chose to set up the loop:
And this is how you assign the table name to a variable called, appropriately, table_name:
This is the setup of the script task; here you see that the variable table_name has read-only access, while a new variable called SqlExec has read/write access:
And this is its Main function:
// Requires: using System.IO; and using System.Text.RegularExpressions;
public void Main()
{
    // The table for the current loop iteration
    String Table_Name = Dts.Variables["table_name"].Value.ToString();
    String SqlScript;
    Regex reg = new Regex(@"\$table_name", RegexOptions.Compiled);

    // Load the script template from disk
    using (var f = File.OpenText(@"c:\sqlscript.sql")) {
        SqlScript = f.ReadToEnd();
        f.Close();
    }

    // Swap the placeholder for the real table name
    SqlScript = reg.Replace(SqlScript, Table_Name);
    Dts.Variables["SqlExec"].Value = SqlScript;
    Dts.TaskResult = (int)ScriptResults.Success;
}
Notice that the Dts variable SqlExec now contains the SQL script that will be executed. Now set the following options in your Execute SQL Task:
Successfully tested on MSSQL 2008; if you put an INSERT inside the script file, you will see new rows appear in each table.
Hope this helps!
If your application can afford to be one cut-off day late, then you can have a nightly scheduled job run an SSIS package that consolidates all 150+ tables into one single huge table. Since the results of queries against that huge table will then be one 'date' stale, this solution will not include any rows that have only recently been loaded.
You can actually time the run of this package. If it is still amazingly fast, say within 30 minutes, then you can opt to run it every few hours, for example during the start of the work day, the lunch break, and the end of the day. This way you have nearly fresh data to query.
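As a rough sketch of the consolidation the package would perform (table and column names are invented; a real package would normally loop over the customer tables from metadata instead of hardcoding them):

TRUNCATE TABLE dbo.ConsolidatedCustomerData;

INSERT INTO dbo.ConsolidatedCustomerData (CustomerTable, Col1, Col2)
SELECT 'Customer1', Col1, Col2 FROM dbo.Customer1
UNION ALL
SELECT 'Customer2', Col1, Col2 FROM dbo.Customer2;
-- ... and so on for the other 150+ tables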
Write a partitioned view including table names?
SELECT 'TableName', t.* FROM TableName t
UNION ALL
SELECT 'TableName2', t.* FROM TableName2 t
Then write a single INSTEAD OF trigger which uses dynamic SQL for the writes (less testing is involved with that use of dynamic SQL, because you'd just write the simple CRUD operations once for all tables, I'd think). A sketch of the idea follows.
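A minimal sketch of the idea, assuming two identically-structured customer tables with invented names and columns. The trigger stages the inserted rows in a temp table because dynamic SQL cannot see the inserted pseudo-table directly:

CREATE VIEW dbo.AllCustomerData AS
SELECT 'Customer1' AS TableName, t.Col1, t.Col2 FROM dbo.Customer1 t
UNION ALL
SELECT 'Customer2' AS TableName, t.Col1, t.Col2 FROM dbo.Customer2 t
GO

CREATE TRIGGER trg_AllCustomerData_ins ON dbo.AllCustomerData
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT * INTO #ins FROM inserted;   -- visible to the dynamic SQL below

    DECLARE @tbl sysname, @sql nvarchar(max);
    DECLARE tbl_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT TableName FROM #ins;
    OPEN tbl_cur;
    FETCH NEXT FROM tbl_cur INTO @tbl;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- QUOTENAME guards the identifier; the WHERE clause routes each
        -- row back to the underlying table it was addressed to
        SET @sql = N'INSERT INTO dbo.' + QUOTENAME(@tbl)
                 + N' (Col1, Col2) SELECT Col1, Col2 FROM #ins'
                 + N' WHERE TableName = ' + QUOTENAME(@tbl, '''') + N';';
        EXEC (@sql);
        FETCH NEXT FROM tbl_cur INTO @tbl;
    END
    CLOSE tbl_cur;
    DEALLOCATE tbl_cur;
END
GO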
I would not do this with SQL. What you are describing sounds like a traditional ETL situation.
Since all of the customer tables are the same, I would create a table in the data warehouse with all the columns from the client table, a surrogate key column, and a type identifier. You have the option of creating a "staging" table here that only holds data during the ETL process, or of working on a single "live" table. I would create the staging table.
Then within an SSIS package (don't worry, you can still schedule it from SQL Server Agent; it hasn't totally left the DB server), start the ETL process...
E(xtract): copy the data from your source into the staging table in the data warehouse. You most likely want to use a sub-package within a foreach loop, changing the name of the table to process from an external store (most people would say put this in the warehouse, but it's up to you).
T(ransform): run the calculations/checks you were talking about, but do it on the whole set...
L(oad): copy it to your real table within the data warehouse.
There are a couple of things I would NOT do.
1. Modify the data in the source table.
2. Try to do this in T-SQL. It's just not what T-SQL is good at.
If you need more detail on this approach, I would probably ask the question with some Business Intelligence tags. I'll be traveling for the next week or so, but I will try to look at the comments to clear anything up if you need me to.
I am fairly certain that the standard way to solve this is dynamic SQL in each SP (your option 2), which has already been ruled out.
Your goal is generic, multi-table SQL; I don't see how you can accomplish that without sacrificing some efficiency and readability.

Problem with Entity Framework 4, Complex Types, StoredProcs, and temp tables

I am skinning my knees on Entity Framework 4 and running into a slight problem.
I have some stored procedures that I am pulling into my EDMX. When I create complex types from these procs, EF has no problem getting the column information, except in one place. After being puzzled for a while, I figured out that my temporary table being populated was causing the problem. Actually, it is simply the INSERT into the temp table that causes it; I'm not actually populating it with any information.
While I know that I can manually create a complex type then map the function to that type, I would like to be able to just let EF take care of it for me. Does anyone know what I am doing wrong?
Below is a sample proc that doesn't work. Run it in a DB and add the proc to your EDMX, then try to get the column information in the "Add Function Import" screen: nothing is returned. Comment out the INSERT into the temp table, get the column information again, and it works.
Thanks,
Steve
CREATE PROCEDURE dbo.TestProc
AS
SET NOCOUNT ON
CREATE TABLE #TempTable(
StartDate datetime
)
INSERT INTO #TempTable
SELECT null
DROP TABLE #TempTable
SELECT 1 AS ReturnValue
SET NOCOUNT OFF
GO
A few things to try:
1. Use table variables instead -> maybe the import wizard prefers that?
2. Name your return fields.
3. Try using the following stored proc (untested... just thinking out loud...):
CREATE PROCEDURE dbo.Foo
AS
SET NOCOUNT ON

DECLARE @ResultTable TABLE (SomeId INTEGER)

INSERT INTO @ResultTable (SomeId)
SELECT DISTINCT Id -- or you can rename this field to anything...
FROM SomeExistingTableWhichHasAnIdentityField

-- return the rows so the wizard can see named result-set columns
SELECT SomeId FROM @ResultTable
GO
Try that and see if the wizard refreshes now.
--
Attempt #2 :)
OK... when the EF designer/wizard/whatever fails to figure out EXACTLY what my stored proc is supposed to be returning, I usually do the following:
1. Make sure the stored procedure doesn't exist at all in the EF designer/context, etc. (so you have a clean starting point).
2. Open up your stored procedure and comment out EVERYTHING after the procedure definition with /* ... */.
eg..
ALTER PROCEDURE dbo.Foo
(
    @Bar1 INT,
    @Bar2 TINYINT
    -- whatever you have as your optional input arguments
)
AS
SET NOCOUNT ON
/*
.... everything in here is commented out
*/
GO
Now ...
3. Add a forced fake return in the stored proc, which (more or less) just defines the output structure/fields.
eg..
ALTER PROCEDURE dbo.Foo
(
    @Bar1 INT,
    @Bar2 TINYINT
    -- whatever you have as your optional input arguments
)
AS
SET NOCOUNT ON

SELECT 1 AS Id, 1 AS UserId, 1 AS SomeOtherId,
       CAST('AAA' AS NVARCHAR(350)) AS Name
       -- etc etc etc.
/*
.... everything in here is commented out
*/
GO
and then ...
4. Add this stored proc to your EF designer/wizard/etc. Now the correct fields should be 'determined' by the designer. AWESOME. Yes, the values are all hardcoded, but that's OK (so far).
5. Once you're happy that EF is now updated correctly, go back to your stored proc and remove the hardcoded SELECT we added in the step above. Then remove the /* ... */ markers that commented out the real code. You should now have your original stored proc back.
... and now EF is updated and doesn't know we've changed the plumbing of your stored proc.
win :)
does this work for ya?
Here is a variation of Pure.Krome's excellent answer. Rather than commenting out your sproc code, create a new view that consists of only the "fake" select statement described by Pure. The view will be used to create an entity. The view entity then becomes the container for the stored procedure results.
Create View dbo.FooWrapperView as
Select IsNull(CAST(-999 as Int), -999) as IntFieldName, --IsNull disallows nulls, so the EF designer will make this the primary key.
       NullIf(CAST('AAA' as VarChar(20)), '') as VarChar20FieldName, --NullIf allows nulls, so the EF designer will NOT make this part of the primary key.
       NullIf(CAST('AAA' as VarChar(42)), '') as VarChar42FieldName,
       NullIf(CAST(1.1 as DECIMAL(8, 5)), 0) as Decimal85FieldName
In the entity designer right-click and choose "Update Model From Database" then select your wrapper view (and the sproc if you haven't done so already). This will create the entity mapped to the bogus wrapper view. The designer picks the primary key based on the view's IsNull and NullIf statements (details here). Find the sproc in the model browser. Right-click it and select "Add Function Import...". Under "Returns a collection of" select Entities. Choose your view entity and click OK. Now when your stored procedure is called it will dump the results into your view entity.
MyProject.MyEntities myContext = new MyProject.MyEntities();
FooWrapperViewEntity myFooEntity = myContext.usp_FOO(myRecordID).FirstOrDefault();
First you have to create a normal stored procedure that does not use the temp table; this stored procedure should return all the column names (normal table + temp table). You will then be able to create the complex type in your EDMX.
For more, see this.

How do you generate INSERT stored procedures for all tables in a SQL2008 database?

I have a bunch of tables and I need to create basic INSERT stored procedures for all of them.
Does anyone have anything that does this or a good start to do this?
We use SSMS Tool Pack. Great tool once you get it configured how you like. It will generate all of your CRUD for you.
Once you get it setup you just right click on a table and generate crud. BOOM. You got it all done for you.
Another nice thing about this tool is that it integrates into SSMS.
Take a look http://www.ssmstoolspack.com/
Thanks,
Mike
I wrote a stored proc:
CREATE PROCEDURE [dbo].[pDBCreateInsert]
    @schemaname varchar(max) = 'dbo',
    @tablename varchar(max)
...
I can then call:
EXEC pDBCreateInsert @tablename = 'myTable'
and it creates a stored proc called dbo.myTableInsert that does the insert.
Doing all tables would be easy with a cursor, but I never needed to.
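For completeness, a sketch of what that cursor could look like, assuming the @schemaname/@tablename signature shown above:

DECLARE @schema sysname, @table sysname;

DECLARE tbl_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, t.name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

OPEN tbl_cur;
FETCH NEXT FROM tbl_cur INTO @schema, @table;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- generate an insert proc for each user table in turn
    EXEC dbo.pDBCreateInsert @schemaname = @schema, @tablename = @table;
    FETCH NEXT FROM tbl_cur INTO @schema, @table;
END
CLOSE tbl_cur;
DEALLOCATE tbl_cur;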

SQL query to list all dependent entities

A SQL database has hundreds of tables, stored procedures and functions.
I am trying to put together an SQL query that will return all the dependencies of a given set of tables. Is there a way to accomplish this using SQL Server Management Studio without writing queries?
Update: simplified the question to get to the point.
In SSMS, just right click on the table and choose "View Dependencies". As far as scripting, take a look at this article.
EDIT: In SSMS you can only view dependencies for one object at a time, because the stored procedure that is run to view them only takes one database object. So to script multiple objects, you'd simply need to use multiple lines of EXEC sp_depends @objname = N'DATABASE.OBJECT'; for each of the tables/views/stored procedures/functions that you want dependencies for. One approach is a script like the following, which builds the unique list of all dependent objects that will have to be included:
CREATE TABLE #dependents (obj_name nvarchar(255), obj_type nvarchar(255))
-- Do this for every primary object you're concerned with finding dependents for
INSERT INTO #dependents (obj_name, obj_type)
EXEC sp_depends @objname = N'DATABASE.OBJECT'
-- ...
SELECT DISTINCT obj_name, obj_type
FROM #dependents
DROP TABLE #dependents
I just blogged something similar to this that might help:
Knowing What to Test When Changing a SQL Server Object.
Another approach would be to right click the database and select "Tasks" and then "Generate Scripts...", check the checkbox "Script all objects in the selected database". This will give you a giant text file that you can then search.