Using SQL server 2008 R2, I'm getting the error:
Msg 311, Level 16, State 1, Procedure ad_user, Line 28
Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables.
The purpose of the trigger is to update a user-group table when a new user is inserted. I've only included the SQL up to the point where the error occurs. What confuses me is that if I remove one of the integer declarations, I don't get the same error (just errors about the variable not being declared).
CREATE trigger [dbo].[ad_user] on [dbo].[tps_user]
FOR INSERT
AS
DECLARE @UserGuid uniqueidentifier
DECLARE @EndUserTypeGuid uniqueidentifier
DECLARE @UserTypeGuid uniqueidentifier
DECLARE @saGuid uniqueidentifier
DECLARE @GroupGuid uniqueidentifier
DECLARE @NewUser VarChar(250)
DECLARE @deptnum VarChar(250)
DECLARE @locnum VarChar(250)
DECLARE @CN VarChar(250)
DECLARE @NewOU VarChar(250)
DECLARE @pos1 integer
DECLARE @pos2 integer
BEGIN
    SELECT @EndUserTypeGuid = tps_guid FROM tps_user_type WHERE tps_name = 'EndUser'
    SELECT @saGuid = tps_guid FROM tps_user WHERE tps_title = 'SA'
    SELECT @UserGuid = tps_guid,
           @UserTypeGuid = tps_user_type_guid,
           @NewUser = tps_title,
           @deptnum = usr_departmentnumber,
           @locnum = usr_locationnumber,
           @CN = usr_ou
    FROM inserted
    IF @UserTypeGuid = @EndUserTypeGuid
    BEGIN
        SELECT @GroupGuid = tps_guid FROM tps_group WHERE usr_departmentnumber = @deptnum
        IF @GroupGuid IS NOT NULL
        BEGIN
            IF @UserGuid NOT IN (SELECT tps_user_id
                                 FROM tps_user_group WHERE tps_group_id = @GroupGuid)
            BEGIN
                -- Remove the user from other groups
                DELETE FROM tps_user_group WHERE tps_user_id = @UserGuid;
                -- Create Customer Group Membership from department
                INSERT INTO tps_user_group(tps_user_id, tps_group_id, tps_creation_user_guid,
                    tps_last_update_user_guid, tps_creation_date, tps_last_update)
                VALUES(@UserGuid, @GroupGuid, @saGuid, @saGuid, GetDate(), GetDate());
END
END
END
END
I have tested this, and the error is raised at compile time, not at run time, as soon as you explicitly reference such a column in the inserted/deleted pseudo-tables. Unfortunately, I think you will have to change the data types of these columns in the underlying table in order to use them, since you can't just apply conversions to the columns in inserted.
What is blocking the client from upgrading these columns to proper, first-class data types? text, ntext, and image have been deprecated since SQL Server 2005 for many good reasons, including this one.
You'll need to rewrite your trigger anyway, because it is currently not multi-row safe. It won't break once the data types are corrected; it will just pick an arbitrary row from inserted and ignore the rest. Triggers in SQL Server fire per statement, not per row, so the trigger needs to treat inserted as a set, not as a single row.
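For illustration only, once the column data types are fixed, a set-based version might take roughly this shape (a sketch built from the column names in your code, approximating rather than reproducing the single-row logic; not tested against your schema):
-- Sketch: treat inserted as a set instead of pulling single values into variables
CREATE TRIGGER [dbo].[ad_user] ON [dbo].[tps_user]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @EndUserTypeGuid uniqueidentifier, @saGuid uniqueidentifier;
    SELECT @EndUserTypeGuid = tps_guid FROM tps_user_type WHERE tps_name = 'EndUser';
    SELECT @saGuid = tps_guid FROM tps_user WHERE tps_title = 'SA';
    -- Remove all affected end users from their current groups
    DELETE ug
    FROM tps_user_group AS ug
    JOIN inserted AS i ON i.tps_guid = ug.tps_user_id
    WHERE i.tps_user_type_guid = @EndUserTypeGuid;
    -- Re-create group membership from department, one row per inserted end user
    INSERT INTO tps_user_group (tps_user_id, tps_group_id, tps_creation_user_guid,
                                tps_last_update_user_guid, tps_creation_date, tps_last_update)
    SELECT i.tps_guid, g.tps_guid, @saGuid, @saGuid, GETDATE(), GETDATE()
    FROM inserted AS i
    JOIN tps_group AS g ON g.usr_departmentnumber = i.usr_departmentnumber
    WHERE i.tps_user_type_guid = @EndUserTypeGuid;
END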
Related
I would like to create a trigger function inside my database which checks whether the newly "inserted" value (max_bid) is at least 1 greater than the largest max_bid value currently in the table.
If this is the case, the max_bid value inside the table should be updated, although not with the newly "inserted" value, but instead it should be increased by 1.
For instance, if max_bid is 10 and the newly "inserted" max_bid is 20, the max_bid value inside the table should be increased by +1 (in this case 11).
I tried to do it with a trigger, but unfortunately it doesn't work. Please help me solve this problem.
Here is my code:
CREATE TABLE bidtable (
mail_buyer VARCHAR(80) NOT NULL,
auction_id INTEGER NOT NULL,
max_bid INTEGER,
PRIMARY KEY (mail_buyer)
);
CREATE OR REPLACE FUNCTION max_bid()
RETURNS TRIGGER LANGUAGE PLPGSQL AS $$
DECLARE
current_maxbid INTEGER;
BEGIN
SELECT MAX(max_bid) INTO current_maxbid
FROM bidtable WHERE NEW.auction_id = OLD.auction_id;
IF (NEW.max_bid < (current_maxbid + 1)) THEN
RAISE EXCEPTION 'error';
RETURN NULL;
END IF;
UPDATE bidtable SET max_bid = (current_maxbid + 1)
WHERE NEW.auction_id = OLD.auction_id
AND NEW.mail_buyer = OLD.mail_buyer;
RETURN NEW;
END;
$$;
CREATE OR REPLACE TRIGGER max_bid_trigger
BEFORE INSERT
ON bidtable
FOR EACH ROW
EXECUTE PROCEDURE max_bid();
Thank you very much for your help.
In a trigger function that is called for an INSERT operation, the implicit OLD record variable is null, which is probably the cause of "unfortunately it doesn't work".
Trigger function
In a case like this there is a much easier solution. First of all, disregard the value for max_bid upon input because you require a specific value in all cases. Instead, you are going to set it to that specific value in the function. The trigger function can then be simplified to:
CREATE OR REPLACE FUNCTION set_max_bid() -- Function name different from column name
RETURNS TRIGGER LANGUAGE PLPGSQL AS $$
BEGIN
SELECT MAX(max_bid) + 1 INTO NEW.max_bid
FROM bidtable
WHERE auction_id = NEW.auction_id;
RETURN NEW;
END; $$;
That's all there is to it for the trigger function. Update the trigger to the new function name and it should work.
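For completeness, re-creating the trigger against the new function might look like this (same table and timing as in your code):
-- Point the existing trigger at the new function
DROP TRIGGER IF EXISTS max_bid_trigger ON bidtable;
CREATE TRIGGER max_bid_trigger
    BEFORE INSERT ON bidtable
    FOR EACH ROW
    EXECUTE PROCEDURE set_max_bid();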
Concurrency
As several comments to your question pointed out, you run the risk of getting duplicates. This will currently not generate an error because you do not have an appropriate constraint on your table. Avoiding duplicates requires a table constraint like:
UNIQUE (auction_id, max_bid)
You cannot deal with any concurrency issue in the trigger function because the INSERT operation will take place after the trigger function completes with a RETURN NEW statement. What would be the most appropriate way to deal with this depends on your application. Your options are table locking to block any concurrent inserts, or looping in a function until the insert succeeds.
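For example, the table-locking option might be sketched like this (the values are made up; EXCLUSIVE mode still allows plain reads but serializes concurrent inserters):
BEGIN;
-- Block other writers so that MAX(max_bid) + 1 in the trigger cannot race
LOCK TABLE bidtable IN EXCLUSIVE MODE;
INSERT INTO bidtable (mail_buyer, auction_id)  -- max_bid is filled in by the BEFORE INSERT trigger
VALUES ('buyer@example.com', 42);
COMMIT;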
Avoid the concurrency issue altogether
If you can change the structure of the bidtable table, you can get rid of the whole concurrency issue by changing your business logic to not require the max_bid column. The max_bid column appears to indicate the order in which bids were placed for each auction_id. If that is the case then you could add a serial column to your table and use that to indicate order of bids being placed (for all auctions). That serial column could then also be the PRIMARY KEY to make your table more agile (no indexing on a large text column). The table would look something like this:
CREATE TABLE bidtable (
id SERIAL PRIMARY KEY,
mail_buyer VARCHAR(80) NOT NULL,
auction_id INTEGER NOT NULL
);
You can drop your trigger and trigger function and just depend on the proper id value being supplied by the system.
The bids for a specific action can then be extracted using a straightforward SELECT:
SELECT id, mail_buyer
FROM bidtable
WHERE auction_id = xxx
ORDER BY id;
If you require a max_bid-like value (the id values increment over the full set of auctions, not per auction), you can use a simple window function:
SELECT mail_buyer, row_number() OVER (PARTITION BY auction_id ORDER BY id) AS max_bid
FROM bidtable
WHERE auction_id = xxx;
I am trying to remove duplicated data from some of our databases based upon unique ids. All deleted data should be stored in a separate table for auditing purposes. Since it concerns quite a few databases and different schemas and tables, I wanted to start using variables to reduce the chance of errors and the amount of work it will take me.
This is the best example query I could think of, but it doesn't work:
do $$
declare #source_schema varchar := 'my_source_schema';
declare #source_table varchar := 'my_source_table';
declare #target_table varchar := 'my_target_schema' || source_table || '_duplicates'; --target schema and appendix are always the same, source_table is a variable input.
declare #unique_keys varchar := ('1', '2', '3')
begin
select into #target_table
from #source_schema.#source_table
where id in (#unique_keys);
delete from #source_schema.#source_table where export_id in (#unique_keys);
end ;
$$;
The query syntax works with hard-coded values.
Most of the time my variables are interpreted as columns, or not recognized at all. :(
You need to create and then call a plpgsql procedure with input parameters:
CREATE OR REPLACE PROCEDURE duplicates_suppress
(my_target_schema text, my_source_schema text, my_source_table text, unique_keys text[])
LANGUAGE plpgsql AS
$$
BEGIN
EXECUTE FORMAT(
'WITH list AS (INSERT INTO %1$I.%3$I_duplicates SELECT * FROM %2$I.%3$I WHERE array[id] <# %4$L :: integer[] RETURNING id)
DELETE FROM %2$I.%3$I AS t USING list AS l WHERE t.id = l.id', my_target_schema, my_source_schema, my_source_table, unique_keys :: text) ;
END ;
$$ ;
The procedure duplicates_suppress inserts into my_target_schema.my_source_table || '_duplicates' the rows from my_source_schema.my_source_table whose id is in the array unique_keys, and then deletes these rows from the table my_source_schema.my_source_table.
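For example, with the sample values from the question (and assuming the my_target_schema.my_source_table_duplicates target table already exists), the call might look like:
CALL duplicates_suppress('my_target_schema', 'my_source_schema', 'my_source_table',
                         ARRAY['1', '2', '3']);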
See the test result in dbfiddle.
As has been commented, you need some kind of dynamic SQL: a FUNCTION, PROCEDURE, or DO statement to do it on the server.
You should be comfortable with PL/pgSQL. Dynamic SQL is no beginners' toy.
Example with a PROCEDURE, like Edouard already suggested. If you want to wrap it in an outer transaction (as you very well might), you'll need a FUNCTION instead. See:
When to use stored procedure / user-defined function?
CREATE OR REPLACE PROCEDURE pg_temp.f_archive_dupes(_source_schema text, _source_table text, _unique_keys int[], OUT _row_count int)
LANGUAGE plpgsql AS
$proc$
-- target schema and appendix are always the same, source_table is a variable input
DECLARE
_target_schema CONSTANT text := 's2'; -- hardcoded
_target_table text := _source_table || '_duplicates';
_sql text := format(
'WITH del AS (
DELETE FROM %I.%I
WHERE id = ANY($1)
RETURNING *
)
INSERT INTO %I.%I TABLE del', _source_schema, _source_table
, _target_schema, _target_table);
BEGIN
RAISE NOTICE '%', _sql; -- debug
EXECUTE _sql USING _unique_keys; -- execute
GET DIAGNOSTICS _row_count = ROW_COUNT;
END
$proc$;
Call:
CALL pg_temp.f_archive_dupes('s1', 't1', '{1, 3}', 0);
db<>fiddle here
I made the procedure temporary, since I assume you don't need to keep it permanently. Create it once per database. See:
How to create a temporary function in PostgreSQL?
Passed schema and table names are case-sensitive strings! (Unlike unquoted identifiers in plain SQL.) Either way, be wary of SQL-injection when concatenating SQL dynamically. See:
Are PostgreSQL column names case-sensitive?
Table name as a PostgreSQL function parameter
Made _unique_keys type int[] (array of integer) since your sample values look like integers. Use the actual data type of your id columns!
The variable _sql holds the query string, so it can easily be inspected before actually executing it; RAISE NOTICE '%', _sql; serves that purpose.
I suggest commenting out the EXECUTE line until you are sure.
I made the PROCEDURE return the number of processed rows. You didn't ask for that, but it's typically convenient. At hardly any cost. See:
Dynamic SQL (EXECUTE) as condition for IF statement
Best way to get result count before LIMIT was applied
Last, but not least, use DELETE ... RETURNING * in a data-modifying CTE. Since that has to find the rows only once, it comes at about half the cost of a separate SELECT and DELETE. And it's perfectly safe. If anything goes wrong, the whole transaction is rolled back anyway.
Two separate commands can also run into concurrency issues or race conditions which are ruled out this way, as DELETE implicitly locks the rows to delete. Example:
Replicating data between Postgres DBs
Or you can build the statements in a client program like psql and use \gexec. Example:
Filter column names from existing table for SQL DDL statement
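A minimal inline sketch of that idea in psql, reusing the sample names s1, t1, s2 and keys {1,3} from above (the s2.t1_duplicates target table must already exist):
-- Build the single data-modifying statement as text, then let \gexec run it
SELECT format(
  'WITH del AS (
     DELETE FROM %1$I.%2$I
     WHERE id = ANY(%4$L::int[])
     RETURNING *
   )
   INSERT INTO %3$I.%2$I_duplicates TABLE del',
  's1', 't1', 's2', '{1,3}')
\gexec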
Based on Erwin's answer, minor optimization...
create or replace procedure pg_temp.p_archive_dump
(_source_schema text, _source_table text,
_unique_key int[],_target_schema text)
language plpgsql as
$$
declare
_row_count bigint;
_target_table text := '';
BEGIN
_target_table := quote_ident(_source_table) || '_' || array_to_string(_unique_key, '_');
raise notice 'the deleted table records will store in %.%',_target_schema, _target_table;
execute format('create table %I.%I as select * from %I.%I limit 0',_target_schema, _target_table,_source_schema,_source_table );
execute format('with mm as ( delete from %I.%I where id = any (%L) returning * ) insert into %I.%I table mm'
,_source_schema,_source_table,_unique_key, _target_schema, _target_table);
GET DIAGNOSTICS _row_count = ROW_COUNT;
RAISE notice 'rows influenced, %',_row_count;
end
$$;
If your _unique_key array is not too long, this solution also creates the target table for you. Obviously you need to create the target schema yourself.
If your unique key list is too long, you can customize the code to rename the dumped table properly.
Let's call it:
call pg_temp.p_archive_dump('s1','t1', '{1,2}','s2');
s1 is the source schema, t1 is the source table, {1,2} are the unique keys you want to extract to the new table, and s2 is the target schema.
I am converting the SQL Server procedure and table below, which store and generate a sequence, to PostgreSQL.
Can anyone guide me on how to do this in Postgres (via a table and a function like this) and not via a sequence, nextval, or currval?
Sequence table
IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'testtable')
CREATE TABLE dbo.testtable(Sequence int NOT NULL )
go
IF NOT EXISTS (SELECT * FROM testtable)
INSERT INTO testtable VALUES (-2147483648)
go
Sequence generating proc
CREATE PROCEDURE test_proc
AS
SET NOCOUNT ON
DECLARE @iReturn int
BEGIN TRANSACTION
SELECT @iReturn = Sequence FROM schema.test (TABLOCKX) -- set exclusive table lock
UPDATE schema.test SET Sequence = ( Sequence + 1 )
COMMIT TRANSACTION
SELECT @iReturn
RETURN @iReturn
go
grant execute on schema.test to public
go
Disclaimer: using a sequence is the only scalable and efficient way to generate unique numbers.
Having said that, it is possible to implement your own sequence generator. The only situation where makes any sense is, if you are required to generate gapless numbers. If you do not have such a requirement, use a sequence.
You need one table that stores the values of the sequences. I usually use one table with a row for each "generator" that avoids costly table locks.
create table seq_generator
(
entity varchar(30) not null primary key,
seq_value integer default 0 not null
);
insert into seq_generator (entity) values ('testsequence');
Then create a function to increment the sequence value:
create or replace function next_value(p_entity varchar)
returns integer
as
$$
update seq_generator
set seq_value = seq_value + 1
where entity = lower(p_entity)
returning seq_value;
$$
language sql;
To obtain the next sequence value, e.g. inside an insert:
insert into some_table
(id, ...)
values
(next_value('testsequence'), ...);
Or make it a default value:
create table some_table
(
id integer not null primary key default next_value('testsequence'),
...
);
The UPDATE increments and locks the row in a single statement returning the new value for the sequence. If the calling transaction commits, the update to seq_generator will also be committed. If the calling transaction rolls back, the update will roll back as well.
If a second transaction calls next_value() for the same entity, it has to wait until the first transaction commits or rolls back.
So access to the generator is serialized through this function. Only one transaction at a time can do that.
If you need a second gapless sequence, just insert a new row into the seq_generator table.
This will seriously affect performance if you use it in an environment that does a lot of concurrent inserts.
The only reason that would justify this is a legal requirement to have a gapless number. In every other case you should really, really use a native Postgres sequence.
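For comparison, the native alternative recommended above is just a plain sequence-backed column (a minimal sketch; table and column names are illustrative only):
-- A serial column is backed by a real sequence: scalable, but it may leave gaps
CREATE TABLE some_other_table
(
    id      serial PRIMARY KEY,
    payload text
);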
I am writing a trigger in plpgsql for Postgres 9.1. I need to be able to capture the column names that were issued in the SET clause of an UPDATE so I can record the specified action in an audit table. The examples in the Postgres documentation are simple and inadequate for my needs. I have searched the internet for days and I am unable to find any other examples that try to achieve what I want to do here.
I am on a tight schedule to resolve this soon. I don't know Tcl so pl/Tcl is out of the question for me at this point. pl/Perl may work but I don't know where to start with it. Also I wanted to find a way to accomplish this in pl/pgsql if at all possible for portability and maintenance. If someone can recommend a pl/Perl solution to this I would be grateful.
Here is the table structure of the target table that will be audited:
Note: There are many other columns in the record table but I have not listed them here in order to keep things simple. But the trigger should be able to record changes to any of the columns in the row.
CREATE TABLE record (
record_id integer NOT NULL PRIMARY KEY,
lastname text,
frstname text,
dob date,
created timestamp default NOW(),
created_by integer,
inactive boolean default false
);
create sequence record_record_id_seq;
alter table record alter record_id set default nextval('record_record_id_seq');
Here is my audit table:
CREATE TABLE record_audit (
id integer NOT NULL PRIMARY KEY,
operation char(1) NOT NULL, -- U, I or D
source_column text,
source_id integer,
old_value text,
new_value text,
created_date timestamp default now(),
created_by integer
);
create sequence record_audit_id_seq;
alter table record_audit alter id set default nextval('record_audit_id_seq');
My goal is to record INSERTS and UPDATES to the record table in the record_audit table that will detail not only what the target record_id was (source_id) that was updated and what column was updated (source_column), but also the old_value and the new_value of the column.
I understand that the column values will have to be CAST() to text. I believe I can access the old_value and new_value through NEW and OLD, but I am having difficulty figuring out how to obtain the column names used in the SET clause of the UPDATE query. I need the trigger to add a new record to the record_audit table for every column specified in the SET clause. Note: there are no DELETE actions, as records are simply UPDATEd to inactive = 't' (and thus recorded in the audit table).
Here is my trigger so far (obviously incomplete). Please forgive me, I am learning pl/pgsql as I go.
-- Trigger function for record_audit table
CREATE OR REPLACE FUNCTION audit_record() RETURNS TRIGGER AS $$
DECLARE
insert_table text;
ref_col text; --how to get the referenced column name??
BEGIN
--
-- Create a new row in record_audit depending on the operation (TG_OP)
--
IF (TG_OP = 'INSERT') THEN
-- old_value and new_value are meaningless for INSERTs. Just record the new ID.
INSERT INTO record_audit
(operation,source_id,created_by)
VALUES
('I', NEW.record_id, NEW.created_by);
ELSIF (TG_OP = 'UPDATE') THEN
FOR i in 1 .. TG_ARGV[0] LOOP
ref_col := TG_ARGV[i].column; -- I know .column doesn't exist but what to use?
INSERT INTO record_audit
(operation, source_column, source_id, old_value, new_value, created_by)
VALUES
('U', ref_col, NEW.record_id, OLD.ref_col, NEW.ref_col, NEW.created_by);
END LOOP;
END IF;
RETURN NULL; -- result is ignored anyway since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER record_audit_trig
AFTER INSERT OR UPDATE on record
FOR EACH ROW EXECUTE PROCEDURE audit_record();
Thanks for reading this long and winding question!
You cannot get this information at the PL level; it is probably possible in C.
A good enough solution is based on the changed fields in the NEW and OLD records. You can get the list of columns from the system catalogs for the table the trigger is attached to.
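For illustration, a rough sketch of that approach (it records the columns whose values actually changed, which is not necessarily the literal list in the SET clause; the audit columns are taken from your record_audit table, but this is untested):
-- Sketch: compare OLD and NEW column by column, using the catalog for the column list
CREATE OR REPLACE FUNCTION audit_record() RETURNS trigger AS $$
DECLARE
    col    text;
    oldval text;
    newval text;
BEGIN
    IF (TG_OP = 'INSERT') THEN
        INSERT INTO record_audit (operation, source_id, created_by)
        VALUES ('I', NEW.record_id, NEW.created_by);
    ELSIF (TG_OP = 'UPDATE') THEN
        FOR col IN
            SELECT attname
            FROM pg_attribute
            WHERE attrelid = TG_RELID   -- the table the trigger is attached to
              AND attnum > 0
              AND NOT attisdropped
        LOOP
            -- Pull the old and new value of this column as text
            EXECUTE format('SELECT ($1).%I::text, ($2).%I::text', col, col)
            INTO oldval, newval
            USING OLD, NEW;
            IF oldval IS DISTINCT FROM newval THEN
                INSERT INTO record_audit
                       (operation, source_column, source_id, old_value, new_value, created_by)
                VALUES ('U', col, NEW.record_id, oldval, newval, NEW.created_by);
            END IF;
        END LOOP;
    END IF;
    RETURN NULL;  -- AFTER trigger, result is ignored
END;
$$ LANGUAGE plpgsql;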
I have the following stored procedure, and when I attempt a Function Import it says my stored procedure returns no columns. What am I missing? Any suggestions?
The Proc:
ALTER PROCEDURE [healthc].[ev_kc_Products_Search]
(
@SearchString VARCHAR(1000)
)
AS
SET NOCOUNT ON
DECLARE @SQL VARCHAR(max),
@SQL1 VARCHAR(max),
@Tag VARCHAR(5)
CREATE TABLE #T
( ID INT,
VendorName VARCHAR(255),
ItemName VARCHAR(255),
Type VARCHAR(2),
Sequence TINYINT
)
SET @SQL = '
INSERT #T
SELECT VendorID ID,
Name VendorName,
NULL ItemName,
''V'' Type,
0 Sequence
FROM tblVendors
WHERE '+REPLACE(@SQL1,@Tag,'Name')+'
UNION ALL
BLAH BLAH BLAH'
EXEC(@SQL)
SELECT ID, VendorName, ItemName, Type FROM #T
Try adding this line to the beginning of your stored procedure:
SET FMTONLY OFF
You can remove this after you have finished importing.
What's happening here behind the scenes?
1. While doing the function import -> Get Column Information, Visual Studio executes the stored proc with all the param values as NULL (you can cross-check this through MS SQL Profiler).
2. From step 1, the stored proc's resulting columns are returned with their data type and length info.
3. Once the column info is fetched, clicking on the 'Create New Complex Type' button creates the complex type for the SP in question.
In your case, the stored proc's params are not nullable, hence the Visual Studio call fails and returns no columns.
How to handle this?
IF (1=0)
BEGIN
    SET FMTONLY OFF
    IF @param1 IS NULL AND @param2 IS NULL
    BEGIN
        SELECT
            CAST(NULL AS varchar(10)) AS Column1,
            CAST(NULL AS bit) AS Column2,
            CAST(NULL AS decimal) AS Column3
    END
END
To be precise (in your case):
IF (1=0)
BEGIN
    SET FMTONLY OFF
    IF @SearchString IS NULL
    BEGIN
        SELECT
            CAST(NULL AS int) AS ID,
            CAST(NULL AS varchar(255)) AS VendorName,
            CAST(NULL AS varchar(255)) AS ItemName,
            CAST(NULL AS varchar(2)) AS Type
    END
END
Reference: http://mysoftwarenotes.wordpress.com/2011/11/04/entity-framework-4-%E2%80%93-the-selected-stored-procedure-returns-no-columns-part-2/
To complete and simplify @benshabatnoam's answer, just put the following code at the beginning:
IF (1=2)
SET FMTONLY OFF
Note: it works in EF 6.1.3 and Visual Studio 2015 Update 3
If you are using a temporary table, the entity model (EDMX) can't understand what is going on.
So return an empty result with the column names: comment out the body of your stored procedure, execute it in SQL Server Management Studio, then get the complex type in Visual Studio. After saving, return your stored procedure to its original state (uncommented, that is).
Good luck!
You're having this problem due to the temp table. All you need to do is:
1. Alter your stored procedure to return the select statement without the temp table.
2. Go to the function import and get the column information.
3. Alter your stored procedure back to the original.
I'd like to add something to Sudhanshu Singh's answer: It works very well, but if you have more complex structures, combine it with a table declaration.
I have used the following successfully (place it at the very beginning of your stored procedure):
CREATE PROCEDURE [dbo].[MyStoredProc]
AS
BEGIN
SET NOCOUNT ON;
IF (1=0) -- it never gets executed, but EF deduces the structure from it
BEGIN
SET FMTONLY OFF
BEGIN
-- declaration + dummy query
-- to allow EF obtain complex data type:
DECLARE @MyStoredProcResult TABLE(
ID INT,
VendorName VARCHAR(255),
ItemName VARCHAR(255),
Type VARCHAR(2),
Sequence TINYINT
);
SELECT * FROM @MyStoredProcResult WHERE (1=0)
END
END
-- your code follows here (SELECT ... FROM ...)
-- this code must return the same columns/data types
--
-- if you require a temp table / table variable like the one above
-- anyway, add the results during processing to @MyStoredProcResult
-- and then your last statement in the SP can be
-- SELECT * FROM @MyStoredProcResult
END
Note that the 1=0 guarantees that it never gets executed, but EF deduces the structure from it.
After you have saved your stored procedure, open the EDMX file in Visual Studio, refresh the data model, and go to the Entity Framework model browser. In the model browser, locate your stored procedure, open the "Edit Function Import" dialog, select "Returns a collection of ... Complex", then click the "Get Column Information" button.
It should show the structure as defined above. If it does, click on "Create New Complex Type", and it will create one with the name of the stored procedure, e.g. "MyStoredProc_Result" (with "_Result" appended).
Now you can select it in the combobox of "Returns a collection of ... Complex" on the same dialog.
Whenever you need to update something, update the SP first, then you can come back to the Edit Function Import dialog and click on the "Update" button (you don't need to re-create everything from scratch).
As a quick and dirty way to make EF find the columns, comment out the where clause in your stored proc (maybe add a TOP 1 to stop it returning everything), add the proc to the EF and create the Complex Type, then uncomment the where clause again.
I had this issue; what I had to do was create a User-Defined Table Type and return that.
CREATE TYPE T1 AS TABLE
( ID INT,
VendorName VARCHAR(255),
ItemName VARCHAR(255),
Type VARCHAR(2),
Sequence TINYINT
);
GO
Your Stored Procedure will now look like this:
ALTER PROCEDURE [healthc].[ev_kc_Products_Search]
(
@SearchString VARCHAR(1000)
)
AS
SET NOCOUNT ON
DECLARE @SQL VARCHAR(max),
@SQL1 VARCHAR(max),
@Tag VARCHAR(5),
@T [schema].T1
SET @SQL = 'SELECT VendorID ID,
Name VendorName,
NULL ItemName,
''V'' Type,
0 Sequence
FROM tblVendors
WHERE '+REPLACE(@SQL1,@Tag,'Name')+'
UNION ALL
BLAH BLAH BLAH'
INSERT INTO @T
EXEC(@SQL)
SELECT ID, VendorName, ItemName, Type FROM @T
Just add the SELECT statement without the quotes, execute the stored procedure, then update the model, edit your function import and get the column information. This should populate the new columns. Update the result set, go back to your stored procedure, remove the SELECT list you just added, and execute the stored procedure again. This way your columns will get populated in the result set.
See below where to add the SELECT list without quotes.
ALTER PROCEDURE [healthc].[ev_kc_Products_Search]
(
@SearchString VARCHAR(1000)
)
AS
SET NOCOUNT ON;
SELECT VendorID ID,
Name VendorName,
NULL ItemName,
'V' Type,
0 Sequence
FROM tblVendors
DECLARE @SQL VARCHAR(max),
@SQL1 VARCHAR(max),
@Tag VARCHAR(5)
CREATE TABLE #T
( ID INT,
VendorName VARCHAR(255),
ItemName VARCHAR(255),
Type VARCHAR(2),
Sequence TINYINT
)
SET @SQL = '
INSERT #T
SELECT VendorID ID,
Name VendorName,
NULL ItemName,
''V'' Type,
0 Sequence
FROM tblVendors
WHERE '+REPLACE(@SQL1,@Tag,'Name')+'
UNION ALL
BLAH BLAH BLAH'
EXEC(@SQL)
SELECT ID, VendorName, ItemName, Type FROM #T
I hope this helps someone out there.
This is the only correct answer; it can be found here:
https://stackoverflow.com/a/27960583/511273
Basically, EF knows that the procedure is always going to return the number of affected rows, or -1 if NOCOUNT is on, or whatever integer the SQL stored procedure returns via RETURN <some integer>. For that reason, no matter what stored procedure you import, the type will always be nullable<int>. You can only return an integer from a SQL stored procedure. So EF gives you a way to edit your function. I would imagine that if you edited it manually you would overwrite it on refresh, but I can't confirm that. Either way, this is the facility provided by EF to deal with this issue.
Click on your .edmx file. It has to be the one you selected in the Solution Explorer. Select Model Browser (right beside the Solution Explorer tab, above Properties). Expand Function Imports, locate your stored procedure, right-click it, and click Edit. Select your return type: it can either be a primitive type, or you can click Get Complex Type. Click Get Column Information. I have confirmed this survives a model refresh.
Why can you only return an integer from a stored procedure? I don't really know, but this return definition explains that you can only return an integer:
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/return-transact-sql?view=sql-server-ver15
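To illustrate the distinction (the procedure name is made up purely for this example): the RETURN value is just an integer status code, while the shape EF imports comes from the result set the procedure SELECTs.
-- RETURN carries only an int status code; the importable columns come from SELECT
CREATE PROCEDURE dbo.usp_ReturnDemo
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CAST('this is the result set' AS varchar(50)) AS Col1;  -- shapes the imported complex type
    RETURN 42;  -- only ever an integer
END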