I'm trying to set the columnName, databaseName, schemaName, etc. dynamically based on a temporary table, but I can't seem to make it work. I've tried the code below:
Create Table #test (databaseName varchar(128), schemaName varchar(128), tableName varchar(128), columnName varchar(128), dataTypeName varchar(128));
INSERT INTO #test VALUES ('testDatabase', 'testSchema', 'testTable', 'priceColumn', 'int');
SELECT
    CASE
        WHEN dataTypeName = 'int'
            THEN SELECT MAX(columnName) FROM CONCAT(databaseName, '.', schemaName, '.', tableName)
        ELSE 0
    END
FROM #test;
DROP TABLE #test;
The expected result is that the subquery below takes each row of the #test table and queries the table identified by those values, so it returns the maxPrice from that table:
SELECT MAX(columnName) FROM CONCAT(databaseName, '.', schemaName, '.', tableName)
Are you trying to aggregate on a calculated field while preserving the query flow?
You can try something like this.
SELECT MAX(FullName)
FROM
(
    SELECT FullName = databaseName + '.' + schemaName + '.' + columnName,
           *
    FROM #test
) AS A
WHERE DataTypeName = 'int'
Another example
SELECT MAX(FullNameInt),
       MAX(FullNameOther)
FROM
(
    SELECT FullNameInt = CASE WHEN DataTypeName = 'int' THEN databaseName + '.' + schemaName + '.' + columnName ELSE NULL END,
           FullNameOther = CASE WHEN DataTypeName <> 'int' THEN databaseName + '.' + schemaName + '.' + columnName ELSE NULL END,
           *
    FROM #test
) AS A
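Note that the concatenated name is still just a string; to actually run MAX() against the table it identifies, dynamic SQL is needed. A minimal sketch, assuming a single matching row in #test with the layout above and name values that are safe to quote with QUOTENAME:
DECLARE @db sysname, @schema sysname, @tbl sysname, @col sysname, @sql nvarchar(max);

SELECT @db = databaseName, @schema = schemaName, @tbl = tableName, @col = columnName
FROM #test
WHERE dataTypeName = 'int';

-- Build the statement against the dynamically named table and execute it
SET @sql = N'SELECT MAX(' + QUOTENAME(@col) + N') AS maxPrice FROM '
         + QUOTENAME(@db) + N'.' + QUOTENAME(@schema) + N'.' + QUOTENAME(@tbl) + N';';

EXEC sp_executesql @sql;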
I've got a procedure which runs for too long.
It parses queries into arrays and then searches for intersections with objects in the database.
In the first temp table I split every statement into an array.
The second combines all possible database objects into arrays.
In the third I look for intersections between the arrays.
Currently this procedure analyzes a 3-month time period.
I don't want to reduce that period, though I may have to if nothing else helps.
I've read that a GIN index on an array may help. What do you think?
Maybe you have done it another way?
Database: PostgreSQL 11
CREATE TEMP TABLE temp_array_data
AS
(
SELECT id,
pid,
regexp_split_to_array(query, '\s+') as query
FROM t_stat_session
WHERE query_start::DATE BETWEEN pdtQueryDateFrom AND pdtQueryDateTo
AND duration IS NOT NULL
);
CREATE TEMP TABLE temp_sys_objects_data
AS
(
SELECT string_to_array(schemaname || '.' || tablename, '.') object_arr1,
string_to_array(schemaname || '.' || tablename, ',') object_arr2,
schemaname,
tablename object_name,
'T' AS object_type
FROM pg_catalog.pg_tables
UNION ALL
SELECT string_to_array(schemaname || '.' || viewname, '.') object_arr1,
string_to_array(schemaname || '.' || viewname, ',') object_arr2,
schemaname,
viewname object_name,
'VW' AS object_type
FROM pg_catalog.pg_views
UNION ALL
SELECT string_to_array(schemaname || '.' || matviewname, '.') object_arr1,
string_to_array(schemaname || '.' || matviewname, ',') object_arr2,
schemaname,
matviewname object_name,
'MVW' AS object_type
FROM pg_catalog.pg_matviews
);
CREATE TEMP TABLE temp_data_for_final
AS
(
SELECT id,
pid,
schemaname,
object_name,
object_type,
1 cnt
FROM temp_array_data adta,
temp_sys_objects_data
WHERE (ARRAY [object_arr1] && ARRAY [query] OR ARRAY [object_arr2] <@ ARRAY [query])
);
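For reference, a GIN index on the array column would be created as below. This is only a sketch: whether it helps depends on the planner being able to use it, which requires writing the predicate directly against the indexed column with && / @> rather than wrapping both sides in another ARRAY[...] constructor, and on the index being cheap enough to build on the temp table each run.
-- Sketch: index the parsed-query arrays so the overlap/containment operators
-- can use the index instead of a full scan (default array GIN opclass).
CREATE INDEX idx_temp_array_data_query
    ON temp_array_data USING gin (query);

ANALYZE temp_array_data;

-- The join condition would then be phrased against the column itself, e.g.:
--   WHERE query && object_arr1 OR query @> object_arr2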
I have been using a standard block of T-SQL for auditing various tables for some time now. However, I now have a problem when running the trigger on a new table: "Error converting data type varchar to numeric". This occurs when running the EXEC (@sql) line. I've determined that the code for @sql is:
insert Audit_AppointmentsWS
(Type,
TableName,
PK,
FieldName,
OldValue,
NewValue,
UpdateDate,
UserName)
SELECT 'U',
'AppointmentsWorkshop',
+convert(varchar(100), coalesce(i.UniqueID,d.UniqueID)),
'[JobHours]',
convert(varchar(1000),d.[JobHours]),
convert(varchar(1000),i.[JobHours]),
'20220816 12:32:43:410',
'DELLXPS\ian'
from #ins i full outer join #del d on i.UniqueID = d.UniqueID where ISNULL(i.[JobHours],'') <> ISNULL(d.[JobHours],'')
I've tried deleting the trigger & the audit table and then recreating them but no joy. I've also tried copying an existing trigger and just changing the table details but I still get the same error. I'm completely stumped on this and would appreciate some feedback. Many thanks in advance!
Here is the trigger:
/****** Object: Trigger [dbo].[tr_AppointmentsWS] Script Date: 16/08/2022 12:02:10 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create TRIGGER [dbo].[tr_AppointmentsWS] ON [dbo].AppointmentsWorkshop FOR UPDATE, DELETE
AS
DECLARE @bit INT ,
        @field INT ,
        @maxfield INT ,
        @char INT ,
        @fieldname VARCHAR(128) ,
        @TableName VARCHAR(128) ,
        @AuditTable VARCHAR(128) ,
        @PKCols VARCHAR(MAX) ,
        @sql VARCHAR(2000),
        @UpdateDate VARCHAR(21) ,
        @UserName VARCHAR(128) ,
        @Type CHAR(1) ,
        @PKSelect VARCHAR(MAX)
--Changes required:
-- 1. Change the name of the trigger and the table, above
-- 2. Change @TableName to match the table to be audited
-- 3. Change the @AuditTable to the table holding the changes
SELECT @TableName = 'AppointmentsWorkshop'
SELECT @AuditTable = 'Audit_AppointmentsWS'
-- date and user
SELECT @UserName = SYSTEM_USER ,
       @UpdateDate = CONVERT(VARCHAR(8), GETDATE(), 112) + ' ' + CONVERT(VARCHAR(12), GETDATE(), 114)
-- Action
IF EXISTS (SELECT * FROM inserted)
    IF EXISTS (SELECT * FROM deleted)
        SELECT @Type = 'U'
    ELSE
        SELECT @Type = 'I'
ELSE
    SELECT @Type = 'D'
-- get list of columns
SELECT * INTO #ins FROM inserted
SELECT * INTO #del FROM deleted
-- Get primary key columns for full outer join
SELECT @PKCols = COALESCE(@PKCols + ' and', ' on') + ' i.' + c.COLUMN_NAME + ' = d.' + c.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
  AND CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND c.TABLE_NAME = pk.TABLE_NAME
  AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
-- Get primary key select for insert
SELECT @PKSelect = COALESCE(@PKSelect+'+','') + '+convert(varchar(100), coalesce(i.' + COLUMN_NAME + ',d.' + COLUMN_NAME + '))'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
WHERE pk.TABLE_NAME = @TableName
  AND CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND c.TABLE_NAME = pk.TABLE_NAME
  AND c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
IF @PKCols IS NULL
BEGIN
    RAISERROR('no PK on table %s', 16, -1, @TableName)
    RETURN
END
SELECT @field = 0, @maxfield = MAX(COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + @TableName), COLUMN_NAME, 'ColumnID'))
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @TableName
WHILE @field < @maxfield
BEGIN
    SELECT @field = MIN(COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + @TableName), COLUMN_NAME, 'ColumnID'))
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = @TableName
      AND COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + @TableName), COLUMN_NAME, 'ColumnID') > @field
    SELECT @bit = (@field - 1) % 8 + 1
    SELECT @bit = POWER(2, @bit - 1)
    SELECT @char = ((@field - 1) / 8) + 1
    IF SUBSTRING(COLUMNS_UPDATED(), @char, 1) & @bit > 0 OR @Type IN ('I','D')
    BEGIN
        SELECT @fieldname = COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @TableName
          AND COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + @TableName), COLUMN_NAME, 'ColumnID') = @field
        SELECT @sql = 'insert ' + @AuditTable + '
            (Type,
            TableName,
            PK,
            FieldName,
            OldValue,
            NewValue,
            UpdateDate,
            UserName)
            SELECT ''' + @Type + ''','''
            + @TableName + ''',' + @PKSelect
            + ',''[' + @fieldname + ']'''
            + ',convert(varchar(1000),d.[' + @fieldname + '])'
            + ',convert(varchar(1000),i.[' + @fieldname + '])'
            + ',''' + @UpdateDate + ''''
            + ',''' + @UserName + ''''
            + ' from #ins i full outer join #del d'
            + @PKCols
            + ' where ISNULL(i.[' + @fieldname + '],'''') <> ISNULL(d.[' + @fieldname + '],'''')' --Skip identical values and excludes NULLS vs empty strings
        EXEC (@sql)
    END
END
Well, I finally figured it out. The error is generated for columns of data type decimal, and it is down to the ISNULL section of the last SELECT. I've fixed it by checking for the decimal type and then using the following code (which uses a zero rather than an empty string):
+ ' where ISNULL(i.[' + @fieldname + '],''0'') <> ISNULL(d.[' + @fieldname + '],''0'')' --Skip identical values; NULLs compare as 0 for these columns
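As far as I can tell, the underlying cause is that ISNULL returns the data type of its first argument, and an empty string cannot be implicitly converted to decimal (SELECT ISNULL(CAST(NULL AS decimal(10,2)), '') fails with the same message, while the int equivalent does not). A sketch of how the type check might be wired into the trigger's loop, assuming the @TableName and @fieldname variables from the code above; this is illustrative rather than the exact fix used:
-- Look up the column's data type and choose the ISNULL replacement accordingly.
DECLARE @datatype VARCHAR(128), @nullsub VARCHAR(10)

SELECT @datatype = DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
  AND COLUMN_NAME = @fieldname

-- Numeric columns compare against '0'; everything else keeps the empty string.
SET @nullsub = CASE WHEN @datatype IN ('decimal', 'numeric', 'money', 'smallmoney', 'float', 'real')
                    THEN '''0''' ELSE '''''' END

-- ...and in the dynamic SQL:
-- + ' where ISNULL(i.[' + @fieldname + '],' + @nullsub + ') <> ISNULL(d.[' + @fieldname + '],' + @nullsub + ')'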
Context: I am exploring a new database (in MS SQL Server), and I want to know, for each table, all columns that have NULL values.
I.e. the result would look something like this:
table   column   nulls
Tbl1    Col1     8
I have found this code here on Stack Overflow, which builds a table of table and column names (everything except the WHERE clause, which is my addition).
I tried to filter for NULLs in the WHERE clause, but then the result ends up empty, and I see why: I am checking whether the column name itself is NULL, not its contents. But I can't figure out how to proceed.
select schema_name(tab.schema_id) as schema_name,
tab.name as table_name,
col.name as column_name
from sys.tables as tab
inner join sys.columns as col
on tab.object_id = col.object_id
left join sys.types as t
on col.user_type_id = t.user_type_id
-- in this WHERE clause I am trying to filter for NULLs, but I get an empty result, and I know there are NULLs
where col.name is null
order by schema_name, table_name, column_id
I also tried this (see 4th line):
select schema_name(tab.schema_id) as schema_name,
tab.name as table_name,
col.name as column_name
,(select count(*) from tab.name where col.name is null) as countnulls
from sys.tables as tab
inner join sys.columns as col
on tab.object_id = col.object_id
left join sys.types as t
on col.user_type_id = t.user_type_id
order by schema_name, table_name, column_id
The last one returns the error "Invalid object name 'tab.name'".
A column name can't be NULL, but if you mean a nullable column (a column that accepts NULL values) that has at least one NULL value, you can use the following statement:
declare @schema varchar(255), @table varchar(255), @col varchar(255), @cmd varchar(max)
DECLARE getinfo cursor for
SELECT schema_name(tab.schema_id) as schema_name, tab.name, col.name from sys.tables as tab
inner join sys.columns as col on tab.object_id = col.object_id
where col.is_nullable = 1
order by schema_name(tab.schema_id), tab.name, col.name
OPEN getinfo
FETCH NEXT FROM getinfo into @schema, @table, @col
WHILE @@FETCH_STATUS = 0
BEGIN
    set @schema = QUOTENAME(@schema)
    set @table = QUOTENAME(@table)
    set @col = QUOTENAME(@col)
    SELECT @cmd = 'IF EXISTS (SELECT 1 FROM ' + @schema + '.' + @table + ' WHERE ' + @col + ' IS NULL) BEGIN SELECT ''' + @schema + ''' as schemaName, ''' + @table + ''' as tablename, ''' + @col + ''' as columnName, * FROM ' + @schema + '.' + @table + ' WHERE ' + @col + ' IS NULL end'
    EXEC(@cmd)
    FETCH NEXT FROM getinfo into @schema, @table, @col
END
CLOSE getinfo
DEALLOCATE getinfo
This uses a cursor over all nullable columns in every table in the database, then checks whether each column has at least one NULL value; if it does, it selects the schema name, table name, column name, and all records that have a NULL value in that column.
If you only want the count of NULLs, you can use the following statement:
declare @schema varchar(255), @table varchar(255), @col varchar(255), @cmd varchar(max)
DECLARE getinfo cursor for
SELECT schema_name(tab.schema_id) as schema_name, tab.name, col.name from sys.tables as tab
inner join sys.columns as col on tab.object_id = col.object_id
where col.is_nullable = 1
order by schema_name(tab.schema_id), tab.name, col.name
OPEN getinfo
FETCH NEXT FROM getinfo into @schema, @table, @col
WHILE @@FETCH_STATUS = 0
BEGIN
    set @schema = QUOTENAME(@schema)
    set @table = QUOTENAME(@table)
    set @col = QUOTENAME(@col)
    SELECT @cmd = 'IF EXISTS (SELECT 1 FROM ' + @schema + '.' + @table + ' WHERE ' + @col + ' IS NULL) BEGIN SELECT ''' + @schema + ''' as schemaName, ''' + @table + ''' as tablename, ''' + @col + ''' as columnName, count(*) as nulls FROM ' + @schema + '.' + @table + ' WHERE ' + @col + ' IS NULL end'
    EXEC(@cmd)
    FETCH NEXT FROM getinfo into @schema, @table, @col
END
CLOSE getinfo
DEALLOCATE getinfo
This uses a cursor over all nullable columns in every table in the database, then checks whether each column has at least one NULL value; if it does, it selects the schema name, table name, column name, and the count of records that have a NULL value in that column.
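If only the counts are needed and the instance is SQL Server 2017 or later, a cursor-free variant is also possible. This is a sketch rather than part of the answer above: it assumes STRING_AGG is available and adds a HAVING filter so columns without NULLs are skipped.
DECLARE @sql nvarchar(max)

-- Build one UNION ALL query over every nullable column, then run it once.
SELECT @sql = STRING_AGG(CAST(
           'SELECT ''' + schema_name(tab.schema_id) + ''' AS schemaName, '''
         + tab.name + ''' AS tableName, '''
         + col.name + ''' AS columnName, COUNT(*) AS nulls FROM '
         + QUOTENAME(schema_name(tab.schema_id)) + '.' + QUOTENAME(tab.name)
         + ' WHERE ' + QUOTENAME(col.name) + ' IS NULL HAVING COUNT(*) > 0'
           AS nvarchar(max)), ' UNION ALL ')
FROM sys.tables AS tab
INNER JOIN sys.columns AS col ON tab.object_id = col.object_id
WHERE col.is_nullable = 1

EXEC sp_executesql @sql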
I have a database with some tables in it, and I want to generate insert and select statements dynamically without needing a case statement in the select clause for each table.
The select statement is quite easy; the challenge is the insert one, because I have to deal with each column and its data type. I managed to overcome it by means of a case statement, but I think it's hard work for tables with lots of columns and for databases with many tables.
I would like to avoid hardcoding table and column names for every table and column that needs a dynamically generated SQL command.
I worked out the following select statement for the tables of the database (testDB) I have:
use testDB;
go
set dateformat dmy;
select
'select * from ' + s.name + '.' + t.name + ';' as cmd_select,
'insert into ' + s.name + '.' + t.name + ' (' +
stuff(( select ', ' + column_name
from information_schema.columns
where table_name = t.name and ordinal_position > 1
order by ordinal_position
for xml path(''), type).value('.', 'nvarchar(max)'),
1, 2, '')
+ ') values (' +
case t.name
when 'Person' then '''xxx'''
when 'WeightHistory' then '0.0, ''' + convert(varchar, current_timestamp, 103) + ''', ''' + left(convert(varchar, current_timestamp, 108), 5) + ''', 1'
when 'WorkTime' then '''' + convert(varchar, current_timestamp, 103) + ''', ''' + left(convert(varchar, current_timestamp, 108), 5) + ''', 1, null'
when 'TimeReference' then '''07:00'', ''' + convert(varchar, current_timestamp, 103) + ''', null'
end
+ ');'as cmd_insert --,
--t.lob_data_space_id
--, s.name, t.name, *
from sys.tables as t
inner join sys.schemas as s on t.schema_id = s.schema_id
where t.lob_data_space_id = 0; /* tables that don't have LOB columns (sysdiagrams, varchar(max), xml, etc.) */
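For the Person table, for example, the generated commands should come out as something like this (derived from the query above and the table definitions further below):
-- cmd_select
select * from Person.Person;
-- cmd_insert
insert into Person.Person (Name) values ('xxx');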
What exactly I want to know is:
Is there a better way of making the dynamically generated insert statement without using a case statement for each table and column of the database?
ADDITIONAL INFO
The table definitions for the above code are the following:
if not exists (select * from sys.tables where lower(name) = N'person' )
begin
create table Person.Person (
PersonID int
constraint PK_Person
primary key
identity (1, 1),
Name varchar(100)
);
end;
go
if not exists (select * from sys.tables where lower(name) = N'weighthistory' )
begin
create table dbo.WeightHistory (
WeightHistoryID int
constraint PK_WeightHistory
primary key
identity (1, 1),
MeasureValue money,
MeasureDate date,
MeasureTime time(0),
PersonID int,
constraint FK_Weight_Person foreign key (PersonID) references Person.Person (PersonID)
);
end;
go
if not exists (select * from sys.tables where lower(name) = N'worktime')
begin
create table WorkTime (
WorkTimeID int
constraint PK_WorkTime primary key
identity(1, 1),
WorkDate date,
WorkTime time(0),
PersonID int,
TimeReferenceID int,
constraint FK_WorkTime_Person foreign key (PersonID) references Person.Person (PersonID) on delete cascade,
constraint FK_WorkTime_Reference foreign key (TimeReferenceID) references Work.Timereference (TimeReferenceID)
);
end;
go
if not exists (select * from sys.tables where lower(name) = N'timereference')
begin
create table Work.TimeReference (
TimeReferenceID int
constraint PK_TimeReferene primary key
identity (1, 1),
WorkTime time(0),
WorkTimeStartDate date,
WorkTimeEndDate date
);
end;
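One possible direction (a sketch of my own, not something from the post above) is to drive the generated VALUES list from each column's data type in information_schema.columns instead of from the table name, so no per-table case is needed; the placeholder values are illustrative only:
select
    'insert into ' + s.name + '.' + t.name + ' (' +
    stuff(( select ', ' + c.column_name
            from information_schema.columns as c
            where c.table_schema = s.name and c.table_name = t.name and c.ordinal_position > 1
            order by c.ordinal_position
            for xml path(''), type).value('.', 'nvarchar(max)'),
          1, 2, '')
    + ') values (' +
    stuff(( select ', ' + case
                    when c.data_type in ('int', 'bigint', 'smallint', 'tinyint') then '1'
                    when c.data_type in ('money', 'decimal', 'numeric', 'float', 'real') then '0.0'
                    when c.data_type = 'date' then '''' + convert(varchar, current_timestamp, 103) + ''''
                    when c.data_type = 'time' then '''' + left(convert(varchar, current_timestamp, 108), 5) + ''''
                    else '''xxx'''
                end
            from information_schema.columns as c
            where c.table_schema = s.name and c.table_name = t.name and c.ordinal_position > 1
            order by c.ordinal_position
            for xml path(''), type).value('.', 'nvarchar(max)'),
          1, 2, '')
    + ');' as cmd_insert
from sys.tables as t
inner join sys.schemas as s on t.schema_id = s.schema_id
where t.lob_data_space_id = 0;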
I have successfully constructed the output I was looking for by using dynamic SQL to create a pivot table with dynamically created column names.
My code is:
IF OBJECT_ID('tempdb..#TempDB') IS NOT NULL
DROP TABLE #TempDB
SELECT CASEID, FORMNAME, NAME, VALUE INTO #TempDB FROM dbo.EFORM WHERE FORMNAME='IncidentReporting'
IF OBJECT_ID('tempdb..#TempDB1') IS NOT NULL
DROP TABLE #TempDB1
SELECT DISTINCT Name INTO #TempDB1 FROM #TempDB
DECLARE @columns varchar(max)
DECLARE @query varchar(max)
SELECT @columns = COALESCE(@columns + ',[' + cast([Name] as varchar(100)) + ']',
                           '[' + cast([Name] as varchar(100)) + ']')
FROM #TempDB1
SET @query = 'SELECT * FROM #TempDB AS PivotData '
SET @query = @query +
    'PIVOT (MAX(VALUE) FOR [NAME] IN (' + @columns + ')) AS p'
EXEC (@query)
This successfully gives me results like:
CASEID        FORMNAME           Column1  Column2  Column3
501000000621  IncidentReporting  Value1   Valuea   Valuev
501000000622  IncidentReporting  Value2   Valueb   Valuew
601000000126  IncidentReporting  Value3   Valuec   Valuex
601000000127  IncidentReporting  Value4   Valued   Valuey
601000000128  IncidentReporting  Value5   Valuee   Valuez
These results, output from the @query variable, are in exactly the format that I want a table of these results to be in.
Can anyone tell me how to get the results that are in the @query variable into a standard SQL table?
I have tried a statement like this, but I get the message "Incorrect syntax near ' + @columns + '":
SELECT *
INTO #TempDB4
FROM (SELECT * FROM #TempDB AS PivotData
      PIVOT (MAX(VALUE) FOR [NAME] IN (' + @columns + ')) AS p)
Many thanks in advance.
In your existing code, add the INTO to this line:
SET @query = 'SELECT * FROM #TempDB AS PivotData '
so that you get:
SET @query = 'SELECT * INTO #TempDB4 FROM #TempDB AS PivotData '
Or add an INSERT in the same manner.
To get your unsuccessful query to work as you expect, you'd have to turn it into dynamic SQL, much like your successful query, and call it using EXEC or sp_executesql.
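For example, the complete dynamic statement might look like the sketch below. One caveat that is easy to hit: a local temp table created inside EXEC lives only in that batch's scope, so the sketch uses a global temp table (##PivotResult, a name of my own choosing) to keep the result visible afterwards; a permanent table would work the same way.
DECLARE @columns varchar(max), @query varchar(max)

SELECT @columns = COALESCE(@columns + ',[' + cast([Name] as varchar(100)) + ']',
                           '[' + cast([Name] as varchar(100)) + ']')
FROM #TempDB1

-- ##PivotResult survives the EXEC scope, unlike a local #temp table created inside it
SET @query = 'SELECT * INTO ##PivotResult FROM #TempDB AS PivotData ' +
             'PIVOT (MAX(VALUE) FOR [NAME] IN (' + @columns + ')) AS p'
EXEC (@query)

SELECT * FROM ##PivotResult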