I have a stored procedure as below:
ALTER PROCEDURE [dbo].[usp_testProc]
(@Product NVARCHAR(200) = '',
@BOMBucket NVARCHAR(100) = '')
--exec usp_testProc '','1'
AS
BEGIN
SELECT *
FROM Mytbl x
WHERE 1 = 1
AND (@Product IS NULL OR @Product = '' OR x.PRODUCT = @Product)
AND (@BOMBucket IS NULL OR @BOMBucket = '' OR CAST(x.BOMBucket AS NVARCHAR(100)) IN (IIF(@BOMBucket != '12+', @BOMBucket, '13,14')))
END
Everything else is working fine except when I pass the bucket value 12+. Ideally it should show the results for buckets 13 and 14, but the result set is blank.
I know IN expects the values as ('13','14'), but somehow I am not able to fit that into the procedure.
You can express that logic in a Boolean expression.
...
(@BOMBucket <> '12+'
AND cast(x.BOMBucket AS nvarchar(100)) = @BOMBucket
OR @BOMBucket = '12+'
AND cast(x.BOMBucket AS nvarchar(100)) IN ('13', '14'))
...
But casting the column prevents indexes from being used. You should rather cast the other operand, like in:
x.BOMBucket = cast(@BOMBucket AS integer)
But then you have the problem that the input need not be a string representing an integer; it can be any string, which would cause an error when casting. In newer SQL Server versions you could circumvent that with try_cast(), but not in 2012 as far as I know. Maybe you should rethink your approach overall and pass a table variable with the wanted BOMBucket values as integers instead.
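A minimal sketch of that table-variable idea using a table-valued parameter (available since SQL Server 2008, so it works on 2012); the type name dbo.BucketList and the reworked signature are assumptions, not your existing code:
CREATE TYPE dbo.BucketList AS TABLE (BOMBucket INT PRIMARY KEY);
GO
ALTER PROCEDURE [dbo].[usp_testProc]
(@Product NVARCHAR(200) = '',
@BOMBuckets dbo.BucketList READONLY)
AS
BEGIN
SELECT *
FROM Mytbl x
WHERE (@Product IS NULL OR @Product = '' OR x.PRODUCT = @Product)
AND (NOT EXISTS (SELECT 1 FROM @BOMBuckets) -- an empty list means "no bucket filter"
     OR x.BOMBucket IN (SELECT BOMBucket FROM @BOMBuckets))
END
GO
-- The caller resolves '12+' to the concrete buckets before calling:
DECLARE @b dbo.BucketList;
INSERT INTO @b (BOMBucket) VALUES (13), (14);
EXEC dbo.usp_testProc @Product = '', @BOMBuckets = @b;
This keeps x.BOMBucket free of casts, so an index on it can still be used.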
I am trying to get a macro to run, but I'm not sure if it will resolve since I don't have a connection to my database for a little while. I want to know whether the macro is written correctly and will resolve the states on each pass through the code (i.e., do it repetitively and create a table for each state).
The second thing I would like to know is whether I can use a macro variable in a FROM statement. For example, let entpr be the database that I'm pulling from. Would the following resolve correctly:
proc sql;
select * from entpr.&state.; /*Do I need the . after &state?*/
The rest of my code:
libname mdt ".........";
%let state = ny il ar ak mi;
proc sql;
create table mdt.&state._members
as select
corp_ent_cd
,mkt_sgmt_admnstn_cd
,fincl_arngmt_cd
,aca_ind
,prod_type
,cvyr
,cvmo
,sum(1) as mbr_cnt
from mbrship1_&state.
group by 1,2,3,4,5,6,7;
quit;
If &state contains ny il ar ak mi, then as written the FROM statement in your code will resolve to: from mbrship1_ny il ar ak mi, which is invalid SQL syntax.
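You can see the resolution for yourself with a quick %put; this is just a sketch and reads nothing from the database:
%let state = ny il ar ak mi;
%put from mbrship1_&state.;   /* writes: from mbrship1_ny il ar ak mi */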
My guess is that you're wanting to run the SQL statement for each of the following tables:
mbrship1_ny
mbrship1_il
mbrship1_ar
mbrship1_ak
mbrship1_mi
In which case the simplest macro would look something like this:
%macro do_sql(state=);
proc sql;
create table mdt.&state._members
as select
...
from mbrship1_&state
group by 1,2,3,4,5,6,7;
quit;
%mend;
%do_sql(state=ny);
%do_sql(state=il);
%do_sql(state=ar);
%do_sql(state=ak);
%do_sql(state=mi);
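If you'd rather not repeat the call five times, a small driver loop can generate the calls; this is a sketch that assumes the %do_sql macro above (the name run_all is made up):
%macro run_all(states=);
    %local i st;
    %do i = 1 %to %sysfunc(countw(&states));
        %let st = %scan(&states, &i);
        %do_sql(state=&st)
    %end;
%mend;

%run_all(states=ny il ar ak mi)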
As to your question regarding whether or not to include the period: the rule is that if the character following your macro variable is not A-Z, a-z, 0-9, or the underscore, then the period is optional. Those are the valid characters for a macro variable name, so as long as the next character is not one of them, SAS can tell where the macro variable name finishes and you don't need the period. Some people always include it; personally I leave it out unless it's required.
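A quick illustration with %put (a sketch; nothing is read from a library):
%let state = ny;
%put mdt.&state._members;    /* the period ends the name: mdt.ny_members */
%put from mbrship1_&state;   /* no period needed, the semicolon is not a name character */
%put mbrship1_&state._x;     /* without the period, SAS would look for &state_x instead */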
When selecting data from multiple tables whose names themselves encode some data (in your case the state), you can stack the data with:
UNION ALL in SQL
SET in Data step
As long as you are stacking data, you should also add a new column to the query selection that tracks the state.
Consider this pattern for stacking in SQL
data one;
do index = 1 to 10; do _n_ = 1 to 2; output; end; end;
run;
data two;
do index = 101 to 110; do _n_ = 1 to 2; output; end; end;
run;
proc sql;
create table want as
select
source, index
from
(select 'one' as source, * from one)
union all
(select 'two' as source, * from two)
;
quit;
The pattern can be abstracted into a template for SQL source code that will be generated by a macro.
%macro my_ultimate_selector (out=, inlib=, prefix=, states=);
%local index n state;
%let n = %sysfunc(countw(&states));
proc sql;
create table &out as
select
state
, corp_ent_cd
, mkt_sgmt_admnstn_cd
, fincl_arngmt_cd
, aca_ind
, prod_type
, cvyr
, cvmo
, count(*) as state_7dim_level_cnt
from
%* ----- use the UNION ALL pattern for stacking data -----;
%do index = 1 %to &n;
%let state = %scan(&states, &index);
%if &index > 1 %then %str(UNION ALL);
(select "&state" as state, * from &inlib..&prefix.&state.)
%end;
group by 1,2,3,4,5,6,7,8 %* this seems to be too much grouping ?;
;
quit;
%mend;
%my_ultimate_selector (out=work.want, inlib=mdt, prefix=mbrship1_, states=ny il ar ak mi)
If the columns of the inlib tables are not identical with regard to column order and type, use a UNION ALL CORRESPONDING to have the SQL procedure line up the columns for you.
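For example, the stacking fragment would then read (a sketch; in PROC SQL the CORR keyword comes before ALL):
(select "ny" as state, * from mdt.mbrship1_ny)
union corr all
(select "il" as state, * from mdt.mbrship1_il)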
I have a table called users in PostgreSQL. I want to get all the users in the table from my Laravel application.
I can get them from the table directly as:
$testData = DB::table('users')->get();
output is:
[{"id":1,"name":"Chris Sevilleja","username":"sevilayha","email":"chris#scotch.io","password":"$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn.\/q","created_at":"2016-09-07 09:32:41","updated_at":"2016-09-07 09:32:41"},{"id":2,"name":"Bemagoni chandrashekar","username":"chandrashekar","email":"chandrashekar#zessta.com","password":"$2y$08$QG9JsAerYp3UYpXNNyImhuz\/6hiWv8XpURpJX1uJ.hAm8l1RG2JrC","created_at":"2016-09-07 09:32:41","updated_at":"2016-09-07 09:32:41"},{"id":3,"name":"dwefr","username":"ewre","email":"ewrt#egyfrhgt.com","password":"$2y$08$s2XBZvoAvpEqjjhbx8QPw.eSe5TIuJC25XbaFkTskzAKAGi99QWga","created_at":"2016-09-07 09:34:27","updated_at":"2016-09-07 09:34:27"},{"id":4,"name":"r3t54y6u677i","username":"retrytyu","email":"etyru#dddj.com","password":"$2y$08$Xb0Xm4KwdSwcBSHX0F6ETOWU60X.NO5D7\/uwVv\/xUAXTUS8LSPCLu","created_at":"2016-09-07 09:46:26","updated_at":"2016-09-07 09:46:26"},{"id":5,"name":"r3t45y6u","username":"ertrh","email":"435t4y65#sss.com","password":"$2y$08$36DLu49nZ2YOWtU.c625meiyi3\/fmsHxiTuxIU9z9UcyrbIpFSKKW","created_at":"2016-09-07 10:02:05","updated_at":"2016-09-07 10:02:05"},{"id":6,"name":"ewrtryuyj","username":"ryt","email":"wrewtey#sssks.com","password":"$2y$08$GULHtX3GGXPGgm8gA9yDbeawZlQ5QwD2TX7nrCvEU4j7jrSgPWAQO","created_at":"2016-09-07 10:04:17","updated_at":"2016-09-07 10:04:17"},{"id":7,"name":"chandu","username":"chandu","email":"ch1235hdhd#dhdjd.com","password":"$2y$08$gpAhcl\/Sg.lGvb.zk.I\/m.PfcttGI6OPFMsMxQVm15dYOtQDvIWSG","created_at":"2016-09-07 10:06:18","updated_at":"2016-09-07 10:06:18"},{"id":8,"name":"dewfergt","username":"erwrgf","email":"fgf#dgrf.com","password":"$2y$08$ikYAXV1prZsEj2MPxXM4S.Tqn160Jv25cFOQLghK8ptFiSBaIFGZO","created_at":"2016-09-07 10:07:20","updated_at":"2016-09-07 10:07:20"},{"id":9,"name":"rteryt","username":"wrwter","email":"wretr#gjjd.com","password":"$2y$08$pkOUBl1NlNdBShNWiklya.0zlPzPrEH2edCfvdCiHLnj80GY1sdtm","created_at":"2016-09-08 07:11:46","updated_at":"2016-09-08 07:11:46"},{"id":10,"name":"Raghu","username":"raghu","email":"raghu#gdgdjd.com","password":"$2y$08$m9wke.vvTTZytYw91I4\/q.qxKoCobLOW7dbCvs66xFyJyy2R9phni","created_at":"2016-09-10 10:06:40","updated_at":"2016-09-10 10:06:40"},{"id":11,"name":"wewert","username":"ewretr","email":"ewrtey#vsss.com","password":"$2y$08$IXD0eXYTPPGE1MfALonEFey0lr\/KBMZ0.3AIO3sWVgu7IZdWhXwTG","created_at":"2016-09-12 07:48:58","updated_at":"2016-09-12 07:48:58"}]
Through a PostgreSQL function call:
My PostgreSQL function:
-- Function: public."RegiterUsers2"()
-- DROP FUNCTION public."RegiterUsers2"();
CREATE OR REPLACE FUNCTION public."RegiterUsers2"()
RETURNS SETOF users AS
'select * from users'
LANGUAGE sql VOLATILE
COST 100
ROWS 1000;
ALTER FUNCTION public."RegiterUsers2"()
OWNER TO postgres;
Laravel code:
$allUserData = DB::select('SELECT public."RegiterUsers2"()');
output:
[{"RegiterUsers2":"(1,\"Chris Sevilleja\",sevilayha,chris#scotch.io,$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn.\/q,\"2016-09-07 09:32:41\",\"2016-09-07 09:32:41\")"},{"RegiterUsers2":"(2,\"Bemagoni chandrashekar\",chandrashekar,chandrashekar#zessta.com,$2y$08$QG9JsAerYp3UYpXNNyImhuz\/6hiWv8XpURpJX1uJ.hAm8l1RG2JrC,\"2016-09-07 09:32:41\",\"2016-09-07 09:32:41\")"},{"RegiterUsers2":"(3,dwefr,ewre,ewrt#egyfrhgt.com,$2y$08$s2XBZvoAvpEqjjhbx8QPw.eSe5TIuJC25XbaFkTskzAKAGi99QWga,\"2016-09-07 09:34:27\",\"2016-09-07 09:34:27\")"},{"RegiterUsers2":"(4,r3t54y6u677i,retrytyu,etyru#dddj.com,$2y$08$Xb0Xm4KwdSwcBSHX0F6ETOWU60X.NO5D7\/uwVv\/xUAXTUS8LSPCLu,\"2016-09-07 09:46:26\",\"2016-09-07 09:46:26\")"},{"RegiterUsers2":"(5,r3t45y6u,ertrh,435t4y65#sss.com,$2y$08$36DLu49nZ2YOWtU.c625meiyi3\/fmsHxiTuxIU9z9UcyrbIpFSKKW,\"2016-09-07 10:02:05\",\"2016-09-07 10:02:05\")"},{"RegiterUsers2":"(6,ewrtryuyj,ryt,wrewtey#sssks.com,$2y$08$GULHtX3GGXPGgm8gA9yDbeawZlQ5QwD2TX7nrCvEU4j7jrSgPWAQO,\"2016-09-07 10:04:17\",\"2016-09-07 10:04:17\")"},{"RegiterUsers2":"(7,chandu,chandu,ch1235hdhd#dhdjd.com,$2y$08$gpAhcl\/Sg.lGvb.zk.I\/m.PfcttGI6OPFMsMxQVm15dYOtQDvIWSG,\"2016-09-07 10:06:18\",\"2016-09-07 10:06:18\")"},{"RegiterUsers2":"(8,dewfergt,erwrgf,fgf#dgrf.com,$2y$08$ikYAXV1prZsEj2MPxXM4S.Tqn160Jv25cFOQLghK8ptFiSBaIFGZO,\"2016-09-07 10:07:20\",\"2016-09-07 10:07:20\")"},{"RegiterUsers2":"(9,rteryt,wrwter,wretr#gjjd.com,$2y$08$pkOUBl1NlNdBShNWiklya.0zlPzPrEH2edCfvdCiHLnj80GY1sdtm,\"2016-09-08 07:11:46\",\"2016-09-08 07:11:46\")"},{"RegiterUsers2":"(10,Raghu,raghu,raghu#gdgdjd.com,$2y$08$m9wke.vvTTZytYw91I4\/q.qxKoCobLOW7dbCvs66xFyJyy2R9phni,\"2016-09-10 10:06:40\",\"2016-09-10 10:06:40\")"},{"RegiterUsers2":"(11,wewert,ewretr,ewrtey#vsss.com,$2y$08$IXD0eXYTPPGE1MfALonEFey0lr\/KBMZ0.3AIO3sWVgu7IZdWhXwTG,\"2016-09-12 07:48:58\",\"2016-09-12 07:48:58\")"}]
I need it like this:
{"id":1,
"name":"Chris Sevilleja",
"username":"sevilayha",
"email":"chris#scotch.io",
"password":"$2y$08$i\/ATa68ierqRL47ZxHX4EesJGEcdtKPckZs8GDGpYS.IR4aaQn",
"created_at":"2016-09-07 09:32:41",
"updated_at":"2016-09-07 09:32:41"},
So what is wrong with the Postgres function, and how can I get output like the one above?
When you call a function that returns many columns and you want each one as a separate column, you must call it in the form:
SELECT col1, col2, ... FROM function(...)
Or:
SELECT * FROM function(...)
So in your case you simply want:
$allUserData = DB::select('SELECT * FROM public."RegiterUsers2"()');
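The difference is easy to see if you run both forms directly against PostgreSQL (a sketch for psql or any SQL client):
SELECT public."RegiterUsers2"();        -- one column per row, holding the whole record as a composite value
SELECT * FROM public."RegiterUsers2"(); -- the record expanded into the columns of the users table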
I need to validate dynamic fields from a table. For example:
CREATE TRIGGER BU_TPROYECTOS FOR TPROYECTOS
BEFORE UPDATE AS
DECLARE VARIABLE vCAMPO VARCHAR(64);
BEGIN
/*In then table "TCAMPOS" are the fields to validate*/
for Select CAMPO from TCAMPOS where TABLA = 'TPROYECTOS' and ACTUALIZA = 'V' into :vCAMPO do
Begin
if (New.:vCAMPO <> Old.:vCampo) then
/*How do I reference New.Field1, New.Field2, ... dynamically based on what the query returns?*/
End;
END ;
The question is: how can I put the name of the field that the query returns into the above code?
I.e., if the query returns field1 and field5, I would want the trigger to contain:
if (New.Field1 <> Old.Field1) or (New.Field5 <> Old.Field5) then
There is no such feature in Firebird. You will need to create (and preferably generate) triggers that reference all fields hard-coded. If the underlying table or the validation requirements change, you will need to recreate the trigger to take the added or removed fields into account.
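A minimal sketch of what such a generated trigger could look like, assuming TCAMPOS currently lists FIELD1 and FIELD5 (illustrative names); IS DISTINCT FROM is used so NULLs compare sanely, and SET TERM is only needed in isql:
SET TERM ^ ;
CREATE OR ALTER TRIGGER BU_TPROYECTOS FOR TPROYECTOS
BEFORE UPDATE AS
BEGIN
  IF (NEW.FIELD1 IS DISTINCT FROM OLD.FIELD1
      OR NEW.FIELD5 IS DISTINCT FROM OLD.FIELD5) THEN
  BEGIN
    /* validation logic for the changed fields goes here */
  END
END^
SET TERM ; ^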
I'm trying to figure out a way to store metadata about a column without repeating myself.
I'm currently working on a generic dimension-loading SSIS package that will handle all my dimensions. It currently does:
Create a temporary table identical to the given table name in parameters (this is a generic stored procedure that receives the table name as a parameter and then does: select top 0 * into ##[INSERT ORIGINAL TABLE NAME HERE] from [INSERT ORIGINAL TABLE NAME HERE]).
==> Here we insert custom code for this particular dimension that will first query the data from a data source and get my delta, then transform the data and finally load it into my temporary table.
Merge the temporary table into my original table with a T-SQL MERGE, taking care of type 1 and type 2 fields accordingly.
My problem right now is that I have to maintain a table with all the fields in it, just to store metadata telling my scripts whether each field is type 1 or type 2... this is nonsense; I can get the same data (minus the type 1/type 2 flag) from sys.columns/sys.types.
I was ultimately thinking about renaming my fields to include their type in the name, such as:
FirstName_T2, LastName_T2, Sex_T1 (well, I know this can be type2, let's not fall into that debate here).
What would you guys do with that? My solution (using a table with that metadata) is currently in place and working, but it's obvious that duplicating information from the system tables into a custom table is nonsense, just for a simple type 1/type 2 flag.
UPDATE: I also thought about creating user-defined types like varchar => t1_varchar, t2_varchar, etc. This sounds a bit clunky too...
Everything you need should already be in INFORMATION_SCHEMA.COLUMNS.
I can't follow your reasoning for not using the provided tables/views...
Edit: As scarpacci mentioned, this is somewhat portable if needed.
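For instance, a quick sketch of what it already exposes (the table name is illustrative):
select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA = 'dbo'
and TABLE_NAME = 'DimDivision'
order by ORDINAL_POSITION;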
I know this is bad, but I will post an answer to my own question... Thanks to GBN for the help tho!
I am now storing "flags" in the "description" field of my columns. For example, I can store a flag this way: "TYPE_2_DATA".
Then, I use this query to get the flag back for each and every column :
select columns.name as [column_name]
,types.name as [type_name]
,extended_properties.value as [column_flags]
from sys.columns
inner join sys.types
on columns.system_type_id = types.system_type_id
left join sys.extended_properties
on extended_properties.major_id = columns.object_id
and extended_properties.minor_id = columns.column_id
and extended_properties.name = 'MS_Description'
where object_id = ( select id from sys.sysobjects where name = 'DimDivision' )
and is_identity = 0
order by column_id
Now I can store metadata about columns without having to create a separate table. I use what's already in place and I don't repeat myself. I'm not sure this is the best possible solution yet, but it works and is far better than duplicating information.
In the future, I will be able to use this field to store more metadata, such as: "TYPE_2_DATA|ANOTHER_FLAG|ETC|OH BOY!".
UPDATE:
I now store the information in separate extended properties. You can manage extended properties with the sp_addextendedproperty and sp_updateextendedproperty stored procedures. I have created a simple stored procedure that helps me update those values regardless of whether they currently exist or not:
create procedure [dbo].[UpdateSCDType]
@tablename nvarchar(50),
@fieldname nvarchar(50),
@scdtype char(1),
@dbschema nvarchar(25) = 'dbo'
as
begin
declare @already_exists int;
if ( @scdtype = '1' or @scdtype = '2' )
begin
select @already_exists = count(1)
from sys.columns
inner join sys.extended_properties
on extended_properties.major_id = columns.object_id
and extended_properties.minor_id = columns.column_id
and extended_properties.name = 'Scd_Type' -- must match the name used with sp_addextendedproperty below
where object_id = (select sysobjects.id from sys.sysobjects where sysobjects.name = @tablename)
and columns.name = @fieldname
if ( @already_exists = 0 )
begin
exec sys.sp_addextendedproperty
@name = N'Scd_Type',
@value = @scdtype,
@level0type = N'SCHEMA',
@level0name = @dbschema,
@level1type = N'TABLE',
@level1name = @tablename,
@level2type = N'COLUMN',
@level2name = @fieldname
end
else
begin
exec sys.sp_updateextendedproperty
@name = N'Scd_Type',
@value = @scdtype,
@level0type = N'SCHEMA',
@level0name = @dbschema,
@level1type = N'TABLE',
@level1name = @tablename,
@level2type = N'COLUMN',
@level2name = @fieldname
end
end
end
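A usage sketch (the column name is just an example; DimDivision is the table from the query above):
exec dbo.UpdateSCDType @tablename = 'DimDivision', @fieldname = 'DivisionName', @scdtype = '2';

-- read the flags back:
select columns.name as [column_name], extended_properties.value as [scd_type]
from sys.columns
left join sys.extended_properties
on extended_properties.major_id = columns.object_id
and extended_properties.minor_id = columns.column_id
and extended_properties.name = 'Scd_Type'
where columns.object_id = object_id('dbo.DimDivision');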
Thanks again