How can I store the result of a comparison in a variable - TSQL

I want to print a simple statement:
print (1=1)
I expect the result to be TRUE or 1, but SQL Server tells me:
Incorrect syntax near '='.
Why is that? The same thing happens with a statement like this:
declare @test bit
set @test = (1=1)
In summary, how can I "see" what is returned from a comparison without using an IF statement?
Update: The reason I'm asking is that I'm trying to debug why the following statement
declare @AgingAmount smallint
set @AgingAmount = 500
select Amount, datediff(day,Batch.SubmitDate,getdate()) as Aging from myreport
where datediff(day,Batch.SubmitDate,getdate()) > @AgingAmount
returns all rows, even those with an aging of 300.
So I wanted to test whether datediff(day,datesubmited,getdate()) > 500 returns true or false, but I could not find a way to display the result of this comparison.

Although SQL Server has the concept of a boolean type, and it understands expressions that resolve to a boolean in IF and WHERE clauses, it does not support declaring boolean variables or parameters. The bit data type cannot store the result of a boolean expression directly, even though it looks suspiciously like one.
The nearest you can get to a boolean data type is this:
-- Store the result of a boolean test.
declare @result bit
select @result = case when <boolean expression> then 1 else 0 end
-- Make use of the above result somewhere else.
if @result = 1
...
else
...
To add to the confusion, SQL Server Management Studio treats bit like boolean when displaying results, and ADO.NET maps bit to System.Boolean when passing data back and forth.
Update: To answer your latest question, use the case when ... then 1 else 0 end syntax in the select statement, as sketched below.
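For example, applied to the query from your update (a minimal sketch; the table and column names are carried over from the question as-is, and the IsOverAging alias is just illustrative):
declare @AgingAmount smallint
set @AgingAmount = 500
select Amount,
       datediff(day, Batch.SubmitDate, getdate()) as Aging,
       case when datediff(day, Batch.SubmitDate, getdate()) > @AgingAmount then 1 else 0 end as IsOverAging
from myreport
Each row now shows the result of the comparison next to the Aging value it was computed from, which makes it easy to see why unexpected rows pass or fail the filter.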

Related

Using [Flags] with multiple values in .Net 5.0.4 Entity Framework query

I have an enum with the [Flags] attribute, like this:
[Flags]
public enum FlagStatus
{
Value1 = 1
, Value2 = 2
, Value3 = 4
}
And my EF query looks like this:
x => x.Status.HasFlag(flagStatus)
Now if I set flagStatus = FlagStatus.Value1, the query works fine (since I set the value to 1):
@__request_Status_0 = 1
WHERE ([o].[StatusId] & @__request_Status_0) = @__request_Status_0
But if I set flagStatus = FlagStatus.Value1 | FlagStatus.Value3, the query returns no results, as the translated SQL looks like this:
@__request_Status_0 = 5
WHERE ([o].[StatusId] & @__request_Status_0) = @__request_Status_0
And since 5 is not a valid Id in the Status field, no rows are returned.
So the question now is this: isn't .HasFlag supposed to be supported by .NET 5 EF, or is the bitwise operation for some reason limited to one value? And if so, why have bitwise operation support at all?
I probably missed something, but I just don't see it.
Isn't .HasFlag supposed to be supported by .Net5 EF
Supported means translated to SQL instead of throwing a runtime exception, so apparently it is.
is bitwise operation for some reason limited to one value
No, it's not. But when used with multiple values, it has a different meaning than what you seem to be expecting. The documentation of the Enum.HasFlag CLR method says that it returns
true if the bit field or bit fields that are set in flag are also set in the current instance; otherwise, false.
and then in remarks:
The HasFlag method returns the result of the following Boolean expression.
thisInstance And flag = flag
which is exactly what EF Core is doing.
Translating it into simple words, it checks whether ALL bits set in flags are also present in value (the equivalent of an All operation on sets), while you seem to be expecting Any semantics.
In short, HasFlag is for All. There is no dedicated method for Any, so you should use its direct bitwise-operation equivalent, which is
(value & flags) != 0
In your sample
x => (x.Status & flagStatus) != 0
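For comparison, the two predicates differ roughly like this in the translated SQL (illustrative, reusing the parameter name from the question):
-- HasFlag / All semantics: every bit set in the parameter must also be set in the column
WHERE ([o].[StatusId] & @__request_Status_0) = @__request_Status_0
-- Any semantics: at least one bit set in the parameter is set in the column
WHERE ([o].[StatusId] & @__request_Status_0) <> 0
With StatusId = 1 and the parameter = 5 (Value1 | Value3), the first predicate fails (1 & 5 = 1, which is not 5) while the second succeeds (1 is non-zero), which is the behavior you are after.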

Postgres: Returning Results or Error from Stored Functions

I am struggling to figure out how best to handle returning results or errors to my application from Postgres stored functions.
Consider the following contrived pseudocode example:
app.get_resource(_username text)
RETURNS <???>
BEGIN
IF ([ ..user exists.. ] = FALSE) THEN
RETURN 'ERR_USER_NOT_FOUND';
END IF;
IF ([ ..user has permission.. ] = FALSE) THEN
RETURN 'ERR_NO_PERMISSION';
END IF;
-- Return the full user object.
RETURN QUERY( SELECT 1
FROM app.resources
WHERE app.resources.owner = _username);
END
The function can fail with a specific error or succeed and return 0 or more resources.
At first I tried creating a custom type to always use as a standard return type in each function:
CREATE TYPE app.appresult AS (
success boolean,
error text,
result anyelement
);
Postgres does not allow this however:
[42P16] ERROR: column "result" has pseudo-type anyelement
I then discovered OUT parameters and attempted the following uses:
CREATE OR REPLACE FUNCTION app.get_resource(
IN _username text,
OUT _result app.appresult -- Custom type
-- {success bool, error text}
)
RETURNS SETOF record
AS
$$
BEGIN
IF 1 = 1 THEN -- just a test
_result.success = false;
_result.error = 'ERROR_ERROR';
RETURN NULL;
END IF;
RETURN QUERY(SELECT * FROM app.resources);
END;
$$
LANGUAGE 'plpgsql' VOLATILE;
Postgres doesn't like this either:
[42P13] ERROR: function result type must be app.appresult because of OUT parameters
I also tried a similar function but reversed: returning a custom app.appresult object and declaring the OUT parameter as SETOF record. This was not allowed either.
Lastly, I looked into Postgres exception handling using
RAISE EXCEPTION 'ERR_MY_ERROR';
So in the example function, I'd just raise this error and return.
This resulted in the driver sending back the error as:
"ERROR: ERR_MY_ERROR\nCONTEXT: PL/pgSQL function app.test(text) line 6 at RAISE\n(P0001)"
This is easy enough to parse but doing things this way feels wrong.
What is the best way to solve this problem?
Is it possible to have a custom AppResult object that I could return?
Something like:
{ success bool, error text, result <whatever type> }
//Edit 1 //
I think I'm leaning more towards @Laurenz Albe's solution.
My main goal is simple: call a stored procedure which can return either an error or some data.
Using RAISE seems to accomplish this and the C++ driver allows easy checking for an error condition returned from a query.
if ([error code returned from the query] == 90100)
{
// 1. Parse out my overly verbose error from the raw driver
// error string.
// 2. Handle the error.
}
I'm also wondering about using custom SQLSTATE codes instead of parsing the driver string.
Throwing '__404' might mean that during the course of my SP's execution, it could not continue because some needed record was not found.
When calling the SQL function from my app, I have a general idea of what failing with a '__404' would mean and how to handle it. This avoids the additional step of parsing the driver error string.
I can also see the potential of this being a bad idea.
Bedtime reading:
https://www.postgresql.org/docs/current/static/errcodes-appendix.html
This is slightly opinion based, but I think that throwing an error is the best and most elegant solution. That is what errors are for!
To distinguish various error messages, you could use SQLSTATEs that start with 6, 8 or 9 (these are not used), then you don't have to depend on the wording of the error message.
You can raise such an error with
RAISE EXCEPTION SQLSTATE '90001' USING MESSAGE = 'my own error';
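A calling function (or a quick test in psql) can then branch on the SQLSTATE rather than the wording of the message. A minimal PL/pgSQL sketch of raising and trapping such a custom code, using a DO block purely for demonstration:
DO $$
BEGIN
    RAISE EXCEPTION SQLSTATE '90001' USING MESSAGE = 'my own error';
EXCEPTION
    WHEN SQLSTATE '90001' THEN
        RAISE NOTICE 'caught custom error: %', SQLERRM;
END;
$$;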
We do something similar to what you're trying to do, but we use TEXT rather than ANYELEMENT, because (almost?) any type can be cast to TEXT and back. So our type looks something like:
(errors our_error_type[], result TEXT)
The function which returns this stores errors in the errors array (it's just some custom error type), and can store the result (cast to text) in the result field.
The calling function knows what type it expects, so it can first check the errors array to see if any errors were returned, and if not it can cast the result value to the expected return type.
As a general observation, I think exceptions are more elegant (possibly because I come from a C# background). The only problem is that exception handling in plpgsql is (relatively) slow, so it depends on the context: if you're running something many times in a loop, I would prefer a solution that doesn't use exception handling; if it's a single call, and/or especially when you want it to abort, I prefer raising an exception. In practice we use both at various points throughout our call stacks.
And as Laurenz Albe pointed out, you're not meant to "parse" exceptions, so much as raise an exception with specific values in specific fields, which the function that catches the exception can then extract and act on directly.
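For example, you can attach structured fields when raising the error and read them back with GET STACKED DIAGNOSTICS in the handler. A sketch, where the '90404' code and the detail string are purely illustrative:
DO $$
DECLARE
    _msg text;
    _detail text;
BEGIN
    RAISE EXCEPTION 'user not found'
        USING ERRCODE = '90404', DETAIL = 'username=joe';
EXCEPTION
    WHEN SQLSTATE '90404' THEN
        GET STACKED DIAGNOSTICS _msg = MESSAGE_TEXT,
                                _detail = PG_EXCEPTION_DETAIL;
        RAISE NOTICE 'caught: % (%)', _msg, _detail;
END;
$$;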
As an example of the cast-to-TEXT approach:
Setup:
CREATE TABLE my_table (id INTEGER, txt TEXT);
INSERT INTO my_table VALUES (1,'blah');
CREATE TYPE my_type AS (result TEXT);
CREATE OR REPLACE FUNCTION my_func()
RETURNS my_type AS
$BODY$
DECLARE
m my_type;
BEGIN
SELECT my_table::TEXT
INTO m.result
FROM my_table;
RETURN m;
END
$BODY$
LANGUAGE plpgsql STABLE;
Run:
SELECT (m.result::my_table).*
FROM my_func() AS m
Result:
| id | txt |
-------------
| 1 | blah |

I can create a stored procedure with invalid user-defined function names in it

I just noticed that I could alter my stored procedure code to include a misspelled user-defined function.
I only noticed the problem the first time I executed the SP.
Is there any way to get a compile error when an SP includes an invalid user-defined function name in it?
At compile time? No.
You can, however, use some of SQL's dependency objects (if using MS SQL) to find problems just after deployment, or as part of your beta testing. Aaron Bertrand has a pretty nice article rounding up the options, depending upon the version of SQL Server.
Here is an example using the SQL Server 2008 sys object called sql_expression_dependencies:
CREATE FUNCTION dbo.scalarTest
(
@input1 INT,
@input2 INT
)
RETURNS INT
AS
BEGIN
-- Declare the return variable here
DECLARE @ResultVar int
-- Add the T-SQL statements to compute the return value here
SELECT @ResultVar = @input1 * @input2
-- Return the result of the function
RETURN @ResultVar
END
GO
--Fn Works!
SELECT dbo.ScalarTest(2,2)
GO
CREATE PROCEDURE dbo.procTest
AS
BEGIN
SELECT TOP 1 dbo.scalarTest(3, 3) as procResult
FROM sys.objects
END
GO
--Sproc Works!
EXEC dbo.procTest
GO
--Remove a dependency needed by our sproc
DROP FUNCTION dbo.scalarTest
GO
--Does anything have a broken dependency? YES
SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name,
referenced_entity_name, *
FROM sys.sql_expression_dependencies
WHERE referenced_id IS NULL --dependency is missing
GO
--Does it work? No
EXEC dbo.procTest
GO

Assigning a whole DataStructure its nullind array

Some context before the question.
Imagine file FileA having around 50 fields of different types. Instead of all programs using the file directly, I tried having a service program, so the file could only be accessed through that service program. The programs calling the service would then receive a data structure based on the file structure, as an ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Datastructure shared by service program :
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs :
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5i 0 Dim(50) <-- Since 50 fields in fileA
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So, that procedure will return a datastructure filled with a record from FileA if it finds it.
Now my question: is there any way I can assign %nullind(field) = *On without specifying EACH of the 50 fields of my file?
Something like a loop
i = 1;
DoW (i <= 50);
if nullind(i) = -1;
%nullind(datastructure.field) = *On;
endif;
i++;
EndDo;
Because let's face it, it'd be a pain to go through each field of each file every time.
I know a simple chain(n) could do the trick
chain(n) PI_IDKey FileA FileADS;
but I was really looking to do it with SQL.
Thank you for your advice!
OS Version : 7.1
First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
You're creating a proc that you'll only write once. It's not worth saving the effort of listing the 50 fields. Maybe if you had many programs using this proc and you had to create the list each time it'd be a slight help to use SELECT *, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.
For old-school RPG, just include the null flags in the data structure populated by the select statement.
select col1, case when col1 is null then 1 else 0 end, col2, case when col2 is null then 1 else 0 end, etc. into :dsfilewithnull from f where f.id = :id;
For old-school RPG that can't handle nulls, remove them with the select statement.
select coalesce(col1,0), coalesce(col2,' '), coalesce(col3, :lowdate) into :dsfile from f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
nullInd Ind Dim(50);
end-ds;
nullIndDs = *all'0';
The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd S 5i 0 dim(50)
/free
NullInd(*) = 1 ;
Nullind(*) = 0 ;
*inlr = *on ;
return ;
/end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.

SQL Report Builder 3.0 Multi Value Parameter

I have a report that is required to accept number ranges and comma-delimited values in a text field parameter. The parameter is for 'Account Type', and they want to be able to enter "1,2,5-9" and have that resolve to the integer values 1,2,5,6,7,8,9. I know how to do this with a single value, but never with a range.
The example code I would use for a single value is:
SELECT
arcu.vwARCUAccount.AccountType
,arcu.vwARCUAccount.ACCOUNTNUMBER
FROM
arcu.vwARCUAccount
WHERE
arcu.vwARCUAccount.AccountType = @AccountType
Any information would be extremely helpful. Someone on my team already estimated the work and said it could be done, without even realising that they wanted a range, so now I am stuck figuring it out. I bet everyone here has been in my position, so I thank everyone in advance.
There are several things you want to do.
A. Set up a parameter with data type 'Integer' and ensure you select the checkbox 'Allow multiple values'. Name it 'Ints'. Hit OK for now.
This essentially sets the parameter up as an array that can hold more than one value of the data type you chose.
B. Create a simple dataset called 'values', like so (the final select is needed so the dataset actually returns the rows):
declare @Ints table ( id int);
insert into @Ints values (1),(2),(5),(6),(7),(8),(9);
select id from @Ints;
C. Go back to your parameter from step A and open its properties. Select 'Available Values' in the side pane. Choose the radio button 'Get values from a query'. Set the dataset to 'values' and both the value and label fields to 'id'.
You have now bound your parameter array to the values you specified. However, a user does NOT have to choose just one or all of these; they can choose one or many of them.
D. You need to set up your main dataset (which I assume you already did before coming here). For the purpose of my example I will make up a simple one. I create a dataset called 'person':
declare @Table table ( personID int identity, person varchar(8));
insert into @Table values ('Brett'),('Brett'),('Brett'),('John'),('John'),('Peter');
Select *
from @Table
where PersonID in (@Ints)
The important part is the predicate:
where PersonID in (@Ints)
This tells the dataset that it depends on the user selecting one or more values for this array parameter.
I'm not completely proficient with T-SQL, but what about using pattern matching? See LIKE (Transact-SQL).
Such as:
arcu.vwARCUAccount.AccountType like '[' + replace(@AccountType, ',','') + ']'
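To illustrate with a worked example of the expression above: with @AccountType = '1,2,5-9', removing the commas yields the pattern
arcu.vwARCUAccount.AccountType like '[125-9]'
which matches any single character that is 1, 2, or in the range 5-9. Note that this trick only holds while every account type is a single digit; a two-digit value such as 12 would not be matched correctly.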
This could work. As a broad brush, try:
1. Remove the account type filter from the T-SQL.
2. Create a VB function that takes a number (this being the account type), tests whether it is covered by the parameter string supplied by the user, and returns 1 or 0. Vb Functions
Function test(ByVal myin As Integer, ByVal mylimits As String) As Integer
    ' Returns 1 if myin matches any value or range in mylimits, else 0
    Dim mysplit As String() = Split(mylimits, ",")
    Dim mysplit2 As String()
    ' Look through all entries separated by a ","
    For Each s As String In mysplit
        ' Does there exist a range, i.e. "-"?
        If s Like "*-*" Then
            mysplit2 = Split(s, "-")
            ' Is the value within this range?
            If myin >= CInt(mysplit2(0)) AndAlso myin <= CInt(mysplit2(1)) Then
                Return 1
            End If
        ' Is the value equal to the number?
        ElseIf CInt(s) = myin Then
            Return 1
        End If
    Next
    Return 0
End Function
3. Create a filter on the dataset using the VB function, with the account type as the input, equal to 1:
=Code.test(Fields!AccountType.Value, Parameters!MyPar.Value)