Is it possible to include a set of 'constant' values in a T-SQL stored procedure? I have a situation where I'm using an integer field to store bit values, and I have a small set of 'constant' values that I use to insert/select against that field:
DECLARE @CostsCalculated int = 32
DECLARE @AggregatedCalculated int = 64
--Set the CostsCalculated bit
update MyTable set DataStatus = ISNULL(DataStatus, 0) | @CostsCalculated
where Id = 10
--How many rows have that bit set?
select count(*) from MyTable where ISNULL(DataStatus, 0) & @CostsCalculated = @CostsCalculated
I could repeat the same set of DECLAREs at the top of every SP, but I'd rather include the code, which would let me change the values in one place as new bit values are added.
Off the top of my head, you can't include constants like that.
How many constants are you talking about, though? Instead of declared constants, I suppose you could create a function for each constant you want, and call the function instead of @CostsCalculated, but I'm not sure how realistic that is.
Alternately, store the values in a designated table.
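For illustration, here's a minimal sketch of the function-per-constant idea (the function name is made up for this example):

CREATE FUNCTION dbo.fn_CostsCalculated() RETURNS int
AS
BEGIN
    RETURN 32; -- change the value here, in one place
END
GO

--Usage: no per-procedure DECLARE needed
update MyTable set DataStatus = ISNULL(DataStatus, 0) | dbo.fn_CostsCalculated()
where Id = 10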
Related
I've defined a user-defined table type - call it TrackRefsTable.
Having declared two variables
DECLARE @FOO1 AS TrackRefsTable
DECLARE @FOO2 AS TrackRefsTable
Is there any way to set one to t'other? The obvious
SET @FOO2 = @FOO1
doesn't work, as this assignment method only appears to work for scalar variables, so you get the error
Must declare the scalar variable "@FOO1"
I would hope to avoid having to do INSERT statements to move data from one to the other, as this can be an expensive operation.
DECLARE @FOO1 AS TrackRefsTable
DECLARE @FOO2 AS TrackRefsTable
-- INSERT INTO @FOO1 here
SET @FOO2 = @FOO1
So my issue was that the SP in which I implemented this would retrieve relatively unstructured data and then try to apply sorting and filtering to it. To squeeze maximum performance out of this, we sometimes had to populate a table variable @FOO1 but then apply sorting or filtering with the results going into @FOO2, before joining it to an actual data table to retrieve further column data. If performance weren't such a big deal, I would have taken the simpler option of creating a single variable @FOOFinal into which all the data would be placed before a single JOIN to get the remaining data. But INSERT INTO @FOOFinal SELECT * FROM @FOO1 (for example) costs precious milliseconds, so that wasn't acceptable.
Ultimately, the solution was to simply create a separate SP in which we do the JOIN from such a table variable to the other data. Because the table variable was declared as a user-defined table type, we could (thanks to the fact that we no longer support anything older than SQL Server 2008) use that table type as a parameter of the SP. The solution then is to simply call that SP with either @FOO1 or @FOO2 as the parameter being passed in, which obviates the need to assign one to the other.
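A rough sketch of that approach, reusing the TrackRefsTable type from above (the procedure, table, and column names are invented for the example):

CREATE PROCEDURE dbo.JoinTrackRefs
    @Refs TrackRefsTable READONLY -- table-valued parameters must be READONLY
AS
BEGIN
    SELECT d.*
    FROM DataTable AS d
    INNER JOIN @Refs AS r ON r.TrackId = d.TrackId
END
GO

-- Call it with whichever table variable holds the current set:
EXEC dbo.JoinTrackRefs @Refs = @FOO1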
Good Afternoon All,
Can anyone advise if I can dynamically declare and assign values to variables in the scenario described below?
I've written a stored procedure (sproc) that calculates the % of members in subgroups of an organization.
I know there are 7 subgroups. I store the results of the percentage calculation in 7 variables which I use later in the sproc. Each variable is named according to the subgroup name.
Naturally this means that if the name or number of subgroups changes, I have to rewrite parts of the sproc.
I believe dynamic SQL could be used to allow the sproc to adjust to changes in the subgroups, but I'm not sure how to set up dynamic SQL for this scenario. Can anyone offer an example or guidance?
What you're proposing goes against the grain in SQL Server. Your concern about having to rewrite later kinda speaks to this...so you're on the right track to be concerned.
Generally, you'd want to make your results into some kind of set-oriented thing...table-like...where one column has the name of the subgroup and the other column has the calculated value.
You might find table-valued functions more appropriate for your problem...but it's hard to say...we're not deep on specifics in this question.
Dynamic SQL is almost always the last resort. It seems fun, but has all sorts of issues...not the least of which is addressing the results in a programmatically safe and consistent way.
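For example, a set-oriented version of the calculation might return one row per subgroup instead of seven named variables (table and column names are invented for the example):

SELECT
    m.SubgroupName,
    100.0 * COUNT(*) / SUM(COUNT(*)) OVER () AS PercentOfMembers
FROM Members AS m
GROUP BY m.SubgroupName

If a subgroup is renamed or an eighth one appears, the query doesn't change.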
You can follow this simple example to see how to do that:
declare @sql nvarchar(max) = ''
declare @outerX int = 0 -- the variable you want to set from dynamic SQL
declare @i int = 0 -- loop counter
while @i <= 6
begin
set @sql = 'select @x = ' + CAST(@i as varchar)
exec sp_executesql @sql, N'@x int OUTPUT', @x = @outerX output
set @i = @i + 1
print @outerX
end
Output will be
0
1
2
3
4
5
6
I am using PostgreSQL 9.4, and while writing functions I want to use self-defined error codes (int). However, I may want to change the exact numeric values later.
For instance
-1 means USER_NOT_FOUND.
-2 means USER_DOES_NOT_HAVE_PERMISSION.
I can define these in a table codes_table(code_name::text, code_value::integer) and use them in functions as follows
(SELECT codes_table.code_value FROM codes_table WHERE codes_table.code_name = 'USER_NOT_FOUND')
Is there another way to do this? Maybe global variables?
Postgres does not have global variables.
However you can define custom configuration parameters.
To keep things clear define your own parameters with a given prefix, say glb.
This simple function will make it easier to place the parameter in queries:
create or replace function glb(code text)
returns integer language sql as $$
select current_setting('glb.' || code)::integer;
$$;
set glb.user_not_found to -1;
set glb.user_does_not_have_permission to -2;
select glb('user_not_found'), glb('user_does_not_have_permission');
User-defined parameters are local in the session, therefore the parameters should be defined at the beginning of each session.
Building on @klin's answer, there are a couple of ways to persist a configuration parameter beyond the current session. Note that these require superuser privileges.
To set a value for all connections to a particular database:
ALTER DATABASE db SET abc.xyz = 1;
You can also set a server-wide value using the ALTER SYSTEM command, added in 9.4. It only seems to work for user-defined parameters if they have already been SET in your current session. Note also that this requires a configuration reload to take effect.
SET abc.xyz = 1;
ALTER SYSTEM SET abc.xyz = 1;
SELECT pg_reload_conf();
Pre-9.4, you can accomplish the same thing by adding the parameter to your server's postgresql.conf file. In 9.1 and earlier, you also need to register a custom variable class.
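For example, a sketch of the postgresql.conf entries (the parameter name abc.xyz is just an illustration):

# 9.2 and 9.3: any namespaced parameter can be set directly
abc.xyz = '1'

# 9.1 and earlier: the class must be registered first
custom_variable_classes = 'abc'
abc.xyz = '1'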
You can use a trick and declare your variables as a 1-row CTE, which you then CROSS JOIN to the rest. See example:
WITH
variables AS (
SELECT 'value1'::TEXT AS var1, 10::INT AS var2
)
SELECT t.*, v.*
FROM
my_table AS t
CROSS JOIN variables AS v
WHERE t.random_int_column = var2;
PostgreSQL does not support global variables at the DB level. Why not add them yourself:
CREATE TABLE global_variables (
key text not null PRIMARY KEY,
value text
);
INSERT INTO global_variables (key, value) VALUES ('error_code_for_spaceship_engine', '404');
If the values may be of different types, consider using json as the type of value, but then deserialization code is required for each type.
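Reading a value back is then a lookup plus a cast (a sketch using the table above):

SELECT value::integer AS error_code
FROM global_variables
WHERE key = 'error_code_for_spaceship_engine';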
You can use this
-- assumes the schema already exists: CREATE SCHEMA globals;
CREATE OR REPLACE FUNCTION globals.maxCities()
RETURNS integer AS
$$SELECT 100 $$ LANGUAGE sql IMMUTABLE;
.. and directly use globals.maxCities() in the code.
We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values.
We use the following Oracle trigger snippet to determine what to do with the Surrogate key on insert:
IF :NEW.SurrogateKey IS NULL THEN
SELECT SurrogateKey_SEQ.NEXTVAL INTO :NEW.SurrogateKey FROM DUAL;
END IF;
If the supplied surrogate key is null then get a value from the nominated sequence, else pass the supplied surrogate key through to the row.
I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none that use the notion of a sequence generator like Oracle and other SQL-92 compliant DBs do.
Anybody know of a really efficient way to do this in SQL Server T-SQL? By the way, we're using SQL Server 2008 if that's any help.
You may want to look at IDENTITY. This gives you a column for which the value will be determined when you insert the row.
This may mean that you have to insert the row, and determine the value afterwards, using SCOPE_IDENTITY().
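A minimal sketch (table and column names are invented for the example):

CREATE TABLE MyTable
(
    SurrogateKey int IDENTITY(1, 1) PRIMARY KEY,
    Payload varchar(50)
);

INSERT INTO MyTable (Payload) VALUES ('first row');
SELECT SCOPE_IDENTITY(); -- the key generated by the insert above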
There is also an article on simulating Oracle Sequences in SQL Server here: http://www.sqlmag.com/Articles/ArticleID/46900/46900.html?Ad=1
Identity is one approach, although it will generate unique identifiers at a per table level.
Another approach is to use unique identifiers, in particular using NEWSEQUENTIALID(), which ensures the generated ID is always bigger than the last. The problem with this approach is you are no longer dealing with integers.
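For example (a sketch; note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint):

CREATE TABLE MyTable
(
    SurrogateKey uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
    Payload varchar(50)
);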
The closest way to emulate the Oracle method is to have a separate table with a counter field, and then write a stored procedure that reads this field, increments it, and returns the value (a plain user-defined function can't do this, since functions aren't allowed to update tables).
Here is a way to do it using a table to store your last sequence number. The stored proc is very simple, most of the stuff in there is because I'm lazy and don't like surprises should I forget something so...here it is:
----- Create the sequence value table.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[SequenceTbl]
(
[CurrentValue] [bigint]
) ON [PRIMARY]
GO
-----------------Create the stored procedure
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE procedure [dbo].[sp_NextInSequence](@SkipCount BigInt = 1)
AS
BEGIN
BEGIN TRANSACTION
DECLARE @NextInSequence BigInt;
IF NOT EXISTS
(
SELECT
CurrentValue
FROM
SequenceTbl
)
INSERT INTO SequenceTbl (CurrentValue) VALUES (0);
SELECT TOP 1
@NextInSequence = ISNULL(CurrentValue, 0) + 1
FROM
SequenceTbl WITH (HoldLock);
UPDATE SequenceTbl WITH (UPDLOCK)
SET CurrentValue = @NextInSequence + (@SkipCount - 1);
COMMIT TRANSACTION
RETURN @NextInSequence
END;
GO
--------Use the stored procedure in Sql Manager to retrive a test value.
declare @NextInSequence BigInt
exec @NextInSequence = sp_NextInSequence;
--exec @NextInSequence = sp_NextInSequence <skipcount>;
select NextInSequence = @NextInSequence;
-----Show the current table value.
select * from SequenceTbl;
The astute will notice that there is an (optional) parameter for the stored proc. This is to allow the caller to reserve a block of IDs when the caller has more than one record that needs a unique ID: using the SkipCount, the caller need make only a single call for however many IDs are needed.
The entire "IF NOT EXISTS...INSERT INTO..." block can be removed if you remember to insert a record when the table is created. If you also remember to insert that record with a value (your seed value - a number which will never be used as an ID), you can also remove the ISNULL(...) portion of the select and just use CurrentValue + 1.
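In other words, a sketch of that simplification - seed the table once, right after creating it:

INSERT INTO SequenceTbl (CurrentValue) VALUES (0); -- seed value, never handed out as an ID

-- ...and the SELECT inside the procedure reduces to:
SELECT TOP 1 @NextInSequence = CurrentValue + 1 FROM SequenceTbl WITH (HoldLock);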
Now, before anyone makes a comment, please note that I am a software engineer, not a dba! So, any constructive criticism concerning the use of "Top 1", "With (HoldLock)" and "With (UPDLock)" is welcome. I don't know how well this will scale but this works OK for me so far...
Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human readable; it's an alphanumeric code.
What we need to do, is figure out the possible distinct values of this code, call another stored procedure to cross reference these discrete values and then update the result set with the newly deciphered values:
declare @lookup_code char(8)   -- type assumed; match the lookup_code column
declare @xref_code varchar(40) -- type assumed; match proc_code_xref's OUTPUT parameter
declare c_lookup_codes cursor for
select distinct lookup_code
from #workinprogress
open c_lookup_codes
while(1=1)
begin
fetch c_lookup_codes into @lookup_code
if @@sqlstatus<>0
begin
break
end
exec proc_code_xref @lookup_code, @xref_code OUTPUT
update #workinprogress
set xref = @xref_code
where lookup_code = @lookup_code
end
close c_lookup_codes
deallocate cursor c_lookup_codes
Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing?
NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows, that there are 100 distinct values of lookup_code, and finally, that it is not possible to have a table with the xref values in it, as the logic in proc_code_xref is too arcane.
You have to have an XRef table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table, along the lines of the sketch below.
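A rough sketch of that one-time population (table names and types here are assumed, not from the original post):

create table xref_codes (lookup_code char(8), xref varchar(40))

declare @lookup_code char(8), @xref_code varchar(40)

select distinct lookup_code
into #codes
from source_table -- wherever the 100 known values live

while (select count(*) from #codes) > 0
begin
select @lookup_code = min(lookup_code) from #codes
exec proc_code_xref @lookup_code, @xref_code OUTPUT
insert into xref_codes values (@lookup_code, @xref_code)
delete #codes where lookup_code = @lookup_code
end

After that, the main query can simply join to xref_codes instead of looping.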
Unless you are willing to duplicate the code in the xref proc, there is no way to avoid using a cursor.
They say that if you must use a cursor, then you must have done something wrong ;-) Here's a solution without a cursor:
declare @lookup_code char(8)
declare @xref_code varchar(40) -- type assumed; match proc_code_xref's OUTPUT parameter
select distinct lookup_code
into #lookup_codes
from #workinprogress
while 1=1
begin
select @lookup_code = lookup_code from #lookup_codes
if @@rowcount = 0 break
exec proc_code_xref @lookup_code, @xref_code OUTPUT
update #workinprogress -- write the decoded value back to the result set
set xref = @xref_code
where lookup_code = @lookup_code
delete #lookup_codes
where lookup_code = @lookup_code
end