How to find a specific parameter value from a job in SQL Server - tsql

I am using the command below to find the script used in a job step.
Requirement:
SELECT SUBSTRING(command, 44, 13),
command
FROM msdb.dbo.sysjobsteps
WHERE command LIKE '%--Should provide the default location%';
In the job I would like to find a specific parameter value.
Let's assume a job step has a command like the one below:
exec sp_backup @backuplocation='c:\temp\',@overwrite='Y' ...
In the above command I would like to find the @backuplocation parameter value, i.e. 'c:\temp\'. The parameter name is constant in each job.

Assuming that the parameter name is constant in each job, this should do it:
SELECT F1.command,
O.splitdata
FROM
(
SELECT *,
CAST('<X>'+REPLACE(SUBSTRING(F.command, PATINDEX('%@backuplocation%', F.command), LEN(F.command)), ',', '</X><X>')+'</X>' AS XML) AS xmlfilter
FROM msdb.dbo.sysjobsteps F
WHERE command LIKE '%--Should provide the default location%'
) F1
CROSS APPLY
(
SELECT fdata.D.value('.', 'varchar(100)') AS splitdata
FROM f1.xmlfilter.nodes('X') AS fdata(D)
) O;
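Under the hood, the XML trick above just takes the text from the parameter name onward and splits it at each comma. A minimal Python sketch of the same logic (the command text is the hypothetical one from the question):

```python
def extract_param(command: str, param: str) -> str:
    """Mimic PATINDEX + comma-split: take the text from the parameter
    name onward and keep the piece before the next comma."""
    idx = command.find(param)
    if idx == -1:
        return ""
    return command[idx:].split(",")[0]

# Hypothetical job-step command from the question
cmd = "exec sp_backup @backuplocation='c:\\temp\\',@overwrite='Y'"
print(extract_param(cmd, "@backuplocation"))  # @backuplocation='c:\temp\'
```

Each `<X>` node produced by the SQL corresponds to one comma-separated piece; the first piece is the parameter assignment you are after.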
So, for example, if I run it on my machine against our server:
SELECT F1.command,
O.splitdata
FROM
(
SELECT *,
CAST('<X>'+REPLACE(SUBSTRING(F.command, PATINDEX('%@db_nm%', F.command), LEN(F.command)), ',', '</X><X>')+'</X>' AS XML) AS xmlfilter
FROM msdb.dbo.sysjobsteps F
WHERE F.command like 'exec dba..mnt_sts @db_nm=''msdb'', @sts_object_recompile_fg= 0'
) F1
CROSS APPLY
(
SELECT fdata.D.value('.', 'varchar(100)') AS splitdata
FROM f1.xmlfilter.nodes('X') AS fdata(D)
) O;
This will return the command alongside each comma-separated piece, starting at @db_nm.

How to write a loop condition to get the sub-subfolder names in PostgreSQL

I am trying to get the names of folders that are nested in subfolders.
Example:
folder_id  folder_name  parent_folder_id
1          F1           0
2          F2           1
3          F3           2
4          F4           3
Now I am trying to get the F4 name along with its parent folder names, like F1/F2/F3/F4.
I am getting the parent_folder_id based on folder_id and wrote a loop condition. Here is my function:
for vrecord in (select parent_folder_id from public."VOfficeApp_filefolder"
where folder_id = ip_folder_id)
loop
return query
select (SELECT array_to_json(array_agg(row_to_json(b))) FROM
(select folder_name from public."VOfficeApp_filefolder"
where folder_id = v_id)b)as path;
end loop;
Aggregate functions do not build hierarchical results; for that you need a recursive CTE. Once the hierarchy is built, you can then convert it to JSON. The following function does that. It takes the folder name you are interested in; after all, that name is only used at the last minute anyway, to eliminate the rest of the hierarchy that was built.
create or replace function path_to_folder(target_folder_name text)
returns json
language sql
as $$
with recursive folder_path (id, folder_name, path) as
( select folder_id, folder_name,folder_name || '/'
from folders
where parent_folder_id = 0
union all
select f.folder_id, f.folder_name, fp.path || f.folder_name || '/'
from folders f
join folder_path fp on (f.parent_folder_id = fp.id)
)
, bjc (folder_name, path) as
( select folder_name, path
from folder_path
where folder_name = target_folder_name
) -- select * from bjc;
select json_agg(row_to_json((folder_name, path)))
from bjc
group by folder_name;
$$;
Note: the second CTE, bjc (Before Json Conversion), is probably not needed, but I hate JSON (imho a complexity I'd rather not deal with). You could move the WHERE clause from it into the JSON construction, but I always like to see results before converting.
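The walk the recursive CTE performs can be sketched in Python against the sample rows from the question: start at the roots (parent_folder_id = 0) and append each child's name, exactly as the base and recursive members of folder_path do.

```python
# Sample rows from the question: (folder_id, folder_name, parent_folder_id)
folders = [(1, "F1", 0), (2, "F2", 1), (3, "F3", 2), (4, "F4", 3)]

def build_paths(rows):
    """Breadth-first walk equivalent to the recursive CTE: roots seed the
    result with 'name/', each iteration appends the child's name and '/'."""
    by_parent = {}
    for fid, name, parent in rows:
        by_parent.setdefault(parent, []).append((fid, name))
    paths = {}
    frontier = [(fid, name + "/") for fid, name in by_parent.get(0, [])]
    while frontier:
        nxt = []
        for fid, path in frontier:
            paths[fid] = path
            for cid, cname in by_parent.get(fid, []):
                nxt.append((cid, path + cname + "/"))
        frontier = nxt
    return paths

print(build_paths(folders)[4])  # F1/F2/F3/F4/
```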
Side note: Postgres 9.2 is obsolete, having gone out of support in November 2017. You should seriously consider upgrading.

Copy into snowflake table from raw data file using Perl DBI

There's not much info out there for Perl DBI and Snowflake, so I'll give this a shot. I have a raw file whose headers are contained in line 1. This exact COPY INTO command works from the Snowflake GUI. I'm not sure if I can just take this exact command and put it into a Perl prepare and execute.
COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA$FILENAME,'/',4) as SEAT_ID,
$1:auction_id_64 as AUCTION_ID_64,
DATEADD(S,$1:date_time,'1970-01-01') as DATE_TIME,
$1:user_tz_offset as USER_TZ_OFFSET,
$1:creative_width as CREATIVE_WIDTH,
$1:creative_height as CREATIVE_HEIGHT,
$1:media_type as MEDIA_TYPE,
$1:fold_position as FOLD_POSITION,
$1:event_type as EVENT_TYPE
FROM @DBTABLE.lnd.S3_STAGE_READY/pr/data/standard/data_dt=20200825/00/STANDARD_FILE.gz.parquet)
pattern = '.*.parquet' file_format = (TYPE = 'PARQUET' SNAPPY_COMPRESSION = TRUE)
ON_ERROR = 'SKIP_FILE_10%'
my $SQL = "COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA\$FILENAME,'/',4) as SEAT_ID,
\$1:auction_id_64 as AUCTION_ID_64,
DATEADD(S,\$1:date_time,'1970-01-01') as DATE_TIME,
\$1:user_tz_offset as USER_TZ_OFFSET,
\$1:creative_width as CREATIVE_WIDTH,
\$1:creative_height as CREATIVE_HEIGHT,
\$1:media_type as MEDIA_TYPE,
\$1:fold_position as FOLD_POSITION,
\$1:event_type as EVENT_TYPE
FROM \@DBTABLE.lnd.S3_STAGE_READY/pr/data/standard/data_dt=20200825/00/STANDARD_FILE.gz.parquet)
pattern = '.*.parquet' file_format = (TYPE = 'PARQUET' SNAPPY_COMPRESSION = TRUE)
ON_ERROR = 'SKIP_FILE_10%'";
my $sth = $dbh->prepare($SQL);
$sth->execute;
Looking at the output from Snowflake, I see this error:
syntax error line 3 at position 4 unexpected '?'.
syntax error line 4 at position 13 unexpected '?'.
COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA$FILENAME,'/',4) as SEAT_ID,
$1? as AUCTION_ID_64,
DATEADD(S,$1?,'1970-01-01') as DATE_TIME,
$1? as USER_TZ_OFFSET,
$1? as CREATIVE_WIDTH,
$1? as CREATIVE_HEIGHT,
$1? as MEDIA_TYPE
Do I need to create bind variables for each of the columns? I usually pull in the data from the file and put it into variables, but this is different, as I can't read the raw file first; it has to come directly from the COPY INTO command.
Any help would be appreciated.
DBI was interpreting the : as a bind variable marker, rather than as a path into the variant. I used bracket notation instead, like the following:
my $SQL = "COPY INTO DBTABLE.LND_LND_STANDARD_DATA FROM (
SELECT SPLIT_PART(METADATA\$FILENAME,'/',4) as SEAT_ID,
\$1['auction_id_64'] as AUCTION_ID_64,
DATEADD(S,\$1['date_time'],'1970-01-01') as DATE_TIME,
\$1['user_tz_offset'] as USER_TZ_OFFSET,
\$1['creative_width'] as CREATIVE_WIDTH,
etc...
That worked.
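The failure mode can be sketched outside Perl: a driver that scans SQL text for `:name` bind markers will mangle variant paths, while bracket notation contains no colon and passes through untouched. A rough Python imitation of that scanning behavior (this is an illustration, not DBI's real parser):

```python
import re

def mock_bind_scan(sql: str) -> str:
    """Crudely imitate a driver replacing :name bind markers with '?'."""
    return re.sub(r":([A-Za-z_]\w*)", "?", sql)

print(mock_bind_scan("SELECT $1:auction_id_64 as AUCTION_ID_64"))
# SELECT $1? as AUCTION_ID_64
print(mock_bind_scan("SELECT $1['auction_id_64'] as AUCTION_ID_64"))
# SELECT $1['auction_id_64'] as AUCTION_ID_64
```

This matches the `$1?` fragments in the Snowflake error output above: every `:column` path had been rewritten to a placeholder before the statement reached the server.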

Hive FAILED: Parse Exception line 3:39 mismatched input

I'm trying to make a query that checks whether any row has a salary that is 10000 higher than the average salary for its department, but when I try to run it I get this error:
FAILED: ParseException line 3:39 mismatched input 'SELECT' expecting ) near '''' in expression specification
This is the query I'm using:
set AVERAGES ='SELECT ROLE, AVG(AnnualSalary) From Salaries GROUP BY ROLE';
SELECT ROLE, AVG(AnnualSalary) FROM Salaries
GROUP BY ROLE, AnnualSalary HAVING AnnualSalary > ('${hiveconf:AVERAGES}' + 10000);
Currently, Hive does not support storing a query result in a variable.
You can use a window function to achieve this:
select * from
(
  select *,
         avg(AnnualSalary) over(partition by ROLE) as role_avg
  from Salaries
) a
where AnnualSalary > role_avg + 10000
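What the window function does can be sketched in Python with made-up rows: compute each role's average once, attach it to every row of that role, then keep the rows exceeding it by more than 10000.

```python
# Illustrative data only: (role, annual_salary)
salaries = [
    ("dev", 50000), ("dev", 55000), ("dev", 90000),
    ("qa", 40000), ("qa", 41000),
]

def over_threshold(rows, margin=10000):
    """avg(...) over (partition by ROLE): per-role average, then filter
    rows whose salary exceeds that average by more than `margin`."""
    totals = {}
    for role, sal in rows:
        t, n = totals.get(role, (0, 0))
        totals[role] = (t + sal, n + 1)
    avgs = {role: t / n for role, (t, n) in totals.items()}
    return [(r, s) for r, s in rows if s > avgs[r] + margin]

print(over_threshold(salaries))  # [('dev', 90000)]
```

Unlike the attempted HAVING clause, no row is grouped away: the average is computed per partition while every row stays available for the comparison.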

Repeated use of a parameter for multiple UDFs in FROM throws an invalid column name error

When using multiple table-valued functions in a query like the one beneath, SSMS throws an error. Also, the [Date] parameter of [PRECALCPAGES_asof] is underlined in red.
I am trying to understand why this fails. I think it might be related to the way the SQL Server engine works. I have looked into the documentation on MSDN but unfortunately I do not know what to look for. Why does this happen, and is there a way around it?
Query
SELECT
[Date]
, COUNT(*)
FROM
[Warehouse].[dbo].[DimDate]
CROSS APPLY
[PROJECTS_asof]([Date])
INNER JOIN
[PRECALCPAGES_asof]([Date]) ON [PRECALCPAGES_asof].[PROJECTID] = [PROJECTS_asof].[PROJECTID]
GROUP BY
[Date]
Error
Msg 207, Level 16, State 1, Line 9
Invalid column name 'Date'.
Functions
CREATE FUNCTION [ProfitManager].[PROJECTS_asof]
(
@date DATETIME
)
RETURNS TABLE AS
RETURN
(
SELECT
[PROJECTID]
, [PROJECT]
, ...
FROM
Profitmanager.[PROJECTS_HISTORY]
WHERE
[RowStartDate] <= @date
AND
[RowEndDate] > @date
)
GO
CREATE FUNCTION [ProfitManager].[PRECALCPAGES_asof]
(
@date DATETIME
)
RETURNS TABLE AS
RETURN
(
SELECT
[PAGEID]
, [PAGENAME]
, ...
FROM
Profitmanager.[PRECALCPAGES_HISTORY]
WHERE
[RowStartDate] <= @date
AND
[RowEndDate] > @date
)
GO
You can't use columns from other tables as arguments to a function in a JOIN; only APPLY brings the outer row's columns into scope. Use CROSS APPLY for both functions and move the matching condition into a WHERE clause:
SELECT
[Date]
, COUNT(*)
FROM
[Warehouse].[dbo].[DimDate]
CROSS APPLY
[PROJECTS_asof]([Date])
CROSS APPLY
[PRECALCPAGES_asof]([Date])
WHERE
[PRECALCPAGES_asof].[PROJECTID] = [PROJECTS_asof].[PROJECTID]
GROUP BY
[Date]
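CROSS APPLY works here because it evaluates the function once per outer row, with that row's columns in scope, and pairs the row with every result; a plain JOIN has no outer row to pull [Date] from. A rough Python sketch of that per-row evaluation (table contents and function body are made up):

```python
# Stand-in for [DimDate]
dim_date = ["2020-01-01", "2020-01-02"]

def projects_asof(date):
    """Stand-in for the table-valued function: returns rows valid on `date`."""
    return [{"PROJECTID": 1}] if date == "2020-01-01" else []

# CROSS APPLY: for each outer row, call the function with that row's value
# and keep one pair per returned row (rows with no results drop out).
pairs = [(d, p) for d in dim_date for p in projects_asof(d)]
print(pairs)  # [('2020-01-01', {'PROJECTID': 1})]
```

The nested comprehension mirrors the execution order: the outer table drives the loop, so its [Date] value is always available when the function runs.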

SQL Scalar function element not recognized in TSQL program

I have an input DB2 table with two columns, loan_number and debt_to_income; the table's name is #Input_Table. I am trying to test the function by running a SQL program against this table. The problem is that the function's output column is not being recognized in the SQL program for some reason; maybe I have been looking at it too long? I need to validate that the output is ordered by the debt_to_income field.
Here is the function code:
ALTER FUNCTION [dbo].[FN_DTI_BANDS]
(
-- the parameters for the function here
@FN_DTI_Band decimal(4,3)
)
RETURNS varchar(16)
AS
BEGIN
declare @Return varchar(16)
select @Return =
Case
when @FN_DTI_Band is NULL then ' Missing'
WHEN @FN_DTI_Band = 00.00 then ' Missing'
When @FN_DTI_Band <= 0.31 then 'Invalid'
When @FN_DTI_Band between 0.31 and 0.34 then '31-34'
When @FN_DTI_Band between 0.34 and 0.38 then '34-38'
When @FN_DTI_Band >= 0.38 then '38+'
else null end
-- Return the result of the function
RETURN @Return
END
Here is the T-SQL test program:
SELECT loan_number,dbo.FN_DTI_BANDS(debt_to_income)as FN_DTI_Band
from #Input_table
SELECT COUNT(*), FN_DTI_Band
FROM #Input_table
GROUP BY FN_DTI_Band
ORDER BY FN_DTI_Band
Here is the error:
Msg 207, Level 16, State 1, Line 7
Invalid column name 'FN_DTI_Band'.
Msg 207, Level 16, State 1, Line 5
Invalid column name 'FN_DTI_Band'.
Can someone help me spot what I am overlooking? Thank you!
The table #Input_table does not have a column called FN_DTI_Band; only the result of the first SELECT statement has that column name. You need to make the first SELECT statement a subquery of the second. Something like this:
SELECT COUNT(*), T.FN_DTI_Band
FROM
(
SELECT loan_number,dbo.FN_DTI_BANDS(debt_to_income) as FN_DTI_Band
from #Input_table
) T
GROUP BY T.FN_DTI_Band
ORDER BY T.FN_DTI_Band
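The subquery pattern works because the band is computed per row first and only then grouped and counted. The banding CASE itself can be sketched in Python; like CASE, the first matching branch wins, which is what resolves the overlapping BETWEEN boundaries (0.31, 0.34, 0.38 each appear in two branches):

```python
from collections import Counter

def dti_band(dti):
    """Mirrors the CASE expression in the question; first match wins."""
    if dti is None or dti == 0.0:
        return " Missing"
    if dti <= 0.31:
        return "Invalid"
    if dti <= 0.34:
        return "31-34"
    if dti <= 0.38:
        return "34-38"
    return "38+"

# Hypothetical rows standing in for #Input_table: (loan_number, debt_to_income)
rows = [("ln1", 0.29), ("ln2", 0.33), ("ln3", 0.45)]
counts = Counter(dti_band(d) for _, d in rows)  # GROUP BY the computed band
print(dict(counts))  # {'Invalid': 1, '31-34': 1, '38+': 1}
```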
Try prepending "dbo" onto the name of the function.
Select Count(*), dbo.FN_DTI_Band
From....