Display formatted JSON in SSRS report - tsql

I have a table where one field is a JSON string.
"CX.UW.001": "03/08/2017", "CX.UW.001.AUDIT": "admin",
I want to produce an SSRS report where it appears in readable format like:
CX.UW.001: 03/08/2017
CX.UW.001.AUDIT: admin
Is it possible?

If you are looking for multiple records, just about any parse/split function will do, or you can use a simple CROSS APPLY in concert with a little XML:
Declare @YourTable table (ID int, JSON varchar(max))
Insert Into @YourTable values
(1,'"CX.UW.001": "03/08/2017", "CX.UW.001.AUDIT": "admin"')

Select A.ID
      ,DisplayAs = replace(B.RetVal,'"','')
 From  @YourTable A
 Cross Apply (
    Select RetSeq = Row_Number() over (Order By (Select null))
          ,RetVal = LTrim(RTrim(B.i.value('(./text())[1]', 'varchar(max)')))
    From  (Select x = Cast('<x>' + replace((Select replace(A.JSON,',','§§Split§§') as [*] For XML Path('')),'§§Split§§','</x><x>')+'</x>' as xml).query('.')) as X
    Cross Apply x.nodes('x') AS B(i)
 ) B
Returns
ID DisplayAs
1 CX.UW.001: 03/08/2017
1 CX.UW.001.AUDIT: admin
Or, if you want the string to wrap:
Select A.ID
      ,DisplayAs = replace(replace(JSON,',',char(13)),'"','')
 From  @YourTable A
Returns
1 CX.UW.001: 03/08/2017
CX.UW.001.AUDIT: admin
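If the report textbox does not treat a bare carriage return as a line break, a common workaround is to emit a CR+LF pair instead; a small sketch, assuming your renderer behaves that way:
Select A.ID
      ,DisplayAs = replace(replace(JSON,',',char(13)+char(10)),'"','')
 From  @YourTable A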

Right-click that field, choose Expression, locate Text in the Common Functions category, and use the Replace function. The syntax should be something like:
=Replace(Fields!Yours.Value,"""","")
Or in TSQL:
Select Replace(JSON_COLUMN,'"','')
From table

How to use OPENJSON on multiple rows

I have a temp table with multiple rows in it, and each row has a column called Categories, which contains a very simple JSON array of ids for categories in a different table.
A few example rows of the temp table:
Id Name Categories
---------------------------------------------------------------------------------------------
'539f7e28-143e-41bb-8814-a7b93b846007' Test 1 ["category1Id", "category2Id", "category3Id"]
'f29e2ecf-6e37-4aa9-aa56-4a351d298bfc' Test 2 ["category1Id", "category2Id"]
'34e41a0a-ad92-4cd7-bf5c-8df6bfd6ed5c' Test 3 NULL
Now what I would like to do is to select all of the category ids from all of the rows in the temp table.
What I have is the following, but it's not working; it's giving me this error:
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
SELECT
c.Id
,c.[Name]
,c.Color
FROM
dbo.Category as c
WHERE
c.Id in (SELECT [value] FROM OPENJSON((SELECT Categories FROM #TempTable)))
and c.IsDeleted = 0
I guess it makes sense that it's failing, because I'm selecting multiple rows and need to parse each row's respective category ids JSON. I'm just not sure what to do or change to get the results that I want. Thank you in advance for any help.
You'd need to use CROSS APPLY like so:
SELECT id ,
name ,
t.Value AS category_id
FROM #temp
CROSS APPLY OPENJSON(categories, '$') t;
And then, you can JOIN to your Categories table using the category_id column, something like this:
SELECT id ,
name ,
t.Value AS category_id,
c.*
FROM #temp
CROSS APPLY OPENJSON(categories, '$') t
LEFT JOIN Categories c ON c.Id = t.Value
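Note that CROSS APPLY drops rows whose Categories value is NULL (such as Test 3 above), since OPENJSON returns no rows for them. If those rows should still appear in the output with a NULL category, an OUTER APPLY should do it; a minimal sketch, using the same names as above:
SELECT t.Id ,
       t.Name ,
       j.[value] AS category_id   -- NULL when the row has no Categories JSON
FROM #temp t
OUTER APPLY OPENJSON(t.Categories, '$') j;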

Empty data in SSRS Reports

I'm working on SSRS reports. I was able to see the data/result of my stored procedure. Unfortunately, when I used the same procedure as the dataset for my report I was unable to see the data; instead I'm getting 0 records. What might be the reasons? My report structure is as in the image below:
My current result:
Below is my procedure:
ALTER Proc [dbo].[SP_Get_CIPPSubjectMarks_New_HTSTEST]
--7,'1,17,8','2537,2555,2558,2568'
(
@ReportId int=7,
@SubjectId varchar(200),
@SectionId varchar(200)
)
AS
BEGIN
Create table #temp (Name Varchar(500), Class varchar(50), Section Varchar(20),
    enrollno varchar(500), SubjectName varchar(500), TermName varchar(500),
    TestName varchar(500), TestGroupName varchar(500), Weightage int,
    IsWeight bit, Marks varchar(20), MaxMarks int, IsAbsent bit,
    SubjectOrder Varchar(200))
Insert into
#temp(Name,Class,Section,enrollNo,SubjectName,
TermName,TestName,TestGroupName,Weightage,IsWeight,Marks,
MaxMarks,IsAbsent,SubjectOrder)
SELECT DISTINCT CONCAT(d.name,' ',d.surname),cls.Value,sec.Value,
e.enroll_no,
CASE WHEN ISNULL(cxs.subject_alias,'')='' THEN CASE WHEN rtv.value='Second
Language' then '2ND LANGUAGE:' + b.Name WHEN
rtv.value='Third Language' THEN '3rd Language:'+b.name else b.name end
ELSE cxs.subject_alias end as SubjectName,
z.str_termname,c.str_termtestname,i.str_testgroupname,
i.str_testweightage,i.is_weighted_average,a.marks,max_marks,a.is_absent,
CASE WHEN rtv.value='Second Language' THEN 'Second Language' WHEN
rtv.Value='Third Language' THEN 'Third Language' When
ISNULL(cxs.subject_alias,'')='' THEN b.Name
ELSE cxs.subject_alias end as SubOrder
FROM marks_entry_HTS a JOIN subject b ON a.fk_subject_id=b.Id and a.marks
is not null
LEFT JOIN subjectCategory_HTS l ON l.Id= b.subject_categoryID
JOIN class_term_test_mapping_HTS c ON a.fk_class_term_test_mapping_id=c.id
-- added by me
JOIN class_term_test_category_HTS ctc on c.fk_termcategoryid = ctc.id
JOIN reference_type_value rtv ON rtv.id=a.fk_subject_type_id
-- close
JOIN Term_Test_Subject_AssessmentType_HTS m ON m.fk_term_testID=c.id and
m.fk_SubjectID=b.Id
JOIN class_report_types_mapping_test_HTS k ON
k.fk_class_term_test_mapping_id=c.Id
JOIN class_term_mapping_HTS z ON z.id=k.fk_class_term_mapping_id
JOIN Term_Test_Testgroup_aggregate_HTS i ON
i.Id=c.fk_testgroup_aggregateID
JOIN TestGroup_HTS j on j.Id=i.fk_TestGroupID
JOIN student d ON a.fk_student_id=d.Id JOIN student_enroll_no e ON
e.fk_student_id=d.id and IsNULL(e.is_deleted,0)=0
JOIN student_academic f on f.fk_student_enroll_no_id=e.id and
f.fk_academic_year_id=c.fk_academic_year_id
JOIN reference_type_value cls on cls.Id=f.fk_class_id
LEFT JOIN reference_type_value sec ON sec.Id=f.fk_section_id
LEFT JOIN max_marks_entry_HTS h on h.id=a.fk_max_marks_entry_id
join class_xref_subjects cxs ON h.fk_subject_id=cxs.fk_subject_id and
cxs.fk_subject_type_id=h.fk_subject_type_id and
IsNull(cxs.is_deleted,0)=0
and cxs.fk_class_id=h.fk_class_id and
cxs.fk_academic_year_campus_id=h.fk_academic_year_campus_id and
cxs.fk_curriculum_segment_id=h.fk_curriculum_segment_id
where k.fk_class_report_types_mapping_id=@ReportId
and h.fk_section_id in (select * from SplitStringByChar(@SectionId,','))
and a.fk_subject_id in (select * from SplitStringByChar(@SubjectId,','))
select Name,Class,Section,enrollNo,SubjectName,TermName,
TestGroupName as TestName,
Case WHEN IsWeight=1 THEN Round(Cast(((avg(CAST(Marks as
float)/cast(MaxMarks as float)))*Weightage) as decimal(10,0)),0)
else Round(Cast(((cast(max(Marks) as float)/cast(max(MaxMarks) as
float))*Weightage) as decimal(10,0)),0) ENd as Marks ,
SubjectOrder ,sum(maxmarks) as maxmarksare INTO #temp1 from #temp
GROUP BY Name,Class,Section,enrollNo,SubjectName,
TermName,SubjectOrder,IsWeight,Weightage,TestGroupName
Insert into #temp
(Name,Class,Section,enrollNo,SubjectName,
TermName,TestName,Marks,SubjectOrder)
select
Name,Class,Section,enrollNo,SubjectName,TermName,'Total',
SUM(Marks),SubjectOrder from #temp1
GROUP BY Name,Class,Section,enrollNo,SubjectName,TermName,SubjectOrder
Insert into #temp
(Name,Class,SubjectName,Section,enrollNo,TermName,TestName,Marks)
select Name,Class,'Total',Section,enrollNo,TermName,'Total Marks',SUM(Marks)
from #temp1
GROUP BY Name,Class,Section,enrollNo,TermName
Insert into #temp
(Name,Class,SubjectName,Section,enrollNo,TermName,TestName,Marks)
select
Name,Class,'Total',Section,enrollNo,TermName,'Percentage',
SUM(Marks)*100/sum(maxmarksare) from #temp1
GROUP BY Name,Class,Section,enrollNo,TermName
select * from #temp
drop table #temp
drop table #temp1
end
My procedure result is as in the image below:
As there are a number of things that could be wrong, I would do the following.
Create a copy of your report.
Remove the existing dataset and tablix if you want.
Create some new datasets that just get the basic data (e.g. SELECT TOP 10 * FROM marks_entry_HTS). Do not use your parameters yet, as we just want to test that we can get basic data.
Add some tables to your report to show that the data is being returned.
Add a dataset to test that you are passing and parsing parameters correctly, using a dataset query like select * from SplitStringByChar(@SectionId,','), and then put a tablix on your report to show the results (a sketch of this follows below).
Try trimming your parameter values, SET @SectionId = LTRIM(RTRIM(@SectionId)), to make sure you're not handling a leading or trailing space incorrectly in your split function.
If any part does not work, run a trace on the SQL Server as you run the report and look at exactly what is being executed on the server.
I know you will think a lot of these steps are unnecessary, but take the time to do them; at least then you are certain and can exclude such basic checks from your investigation.
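A minimal sketch of that parameter-test dataset, run first in SSMS with the same values the report would pass (assuming SplitStringByChar is your existing split function):
DECLARE @SectionId varchar(200) = '2537,2555,2558,2568';
DECLARE @SubjectId varchar(200) = '1,17,8';
-- trim first, as suggested above, so stray spaces don't break the split
SET @SectionId = LTRIM(RTRIM(@SectionId));
SET @SubjectId = LTRIM(RTRIM(@SubjectId));
SELECT * FROM SplitStringByChar(@SectionId, ',');
SELECT * FROM SplitStringByChar(@SubjectId, ',');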

SQL Server 2008 R2 query related to replacement of data

I have a scenario wherein I have to remove all the strings except a, b, or c.
My sample table is as follows:
Id Product
------------------
1. a,b,Da,c
2. Ty,a,b,c
3. a,sds,b
Sample output
Id Product
----------------
1. a,b,c
2. a,b,c
3. a,b
My current version is Microsoft SQL Server 2008 R2
This should help you out. As I state in the comments, I make use of Jeff Moden's DelimitedSplit8K, as you're using an older version of SQL Server. If you were using 2016+, you would have access to STRING_SPLIT. I also normalise your data, as storing delimited data is almost always a bad idea.
CREATE TABLE #Sample (id int, Product varchar(20));
INSERT INTO #Sample
VALUES (1,'a,b,Da,c'),
(2,'Ty,a,b,c'),
(3,'a,sds,b');
GO
--The first problem you have is you're storing delimited data
--You really should be storing each item on a separate row.
--This is, however, quite easy to do. I'm going to use a different
--table, however, you can change this fairly easily for your
--needs.
CREATE TABLE #Sample2 (id int, Product varchar(2));
GO
--You can split the data out by using a Splitter.
--My personal preference is Jeff Moden's DelimitedSplit8K
--which I've linked to above.
INSERT INTO #Sample2 (id, Product)
SELECT id, Item AS Product
FROM #Sample S
CROSS APPLY dbo.DelimitedSplit8K(S.Product,',') DS
WHERE DS.Item IN ('a','b','c');
GO
--And hey presto! Your normalised data, and without the unwanted values
SELECT *
FROM #Sample2;
GO
DROP TABLE #Sample;
DROP TABLE #Sample2;
If you have to keep the delimited format, you can use STUFF and FOR XML PATH:
WITH Split AS(
SELECT id,
Item AS Product,
ItemNumber
FROM #Sample S
CROSS APPLY dbo.DelimitedSplit8K(S.Product,',') DS
WHERE DS.Item IN ('a','b','c'))
SELECT id,
STUFF((SELECT ',' + Product
FROM Split sq
WHERE sq.id = S.id
ORDER BY ItemNumber
FOR XML PATH('')),1,1,'')
FROM Split S
GROUP BY id;
This will also do the job, using XML only:
select * into #t from (values('a,b,Da,c'),('Ty,a,b,c'),('a,sds,b'))v(Product)
;
with x as (
SELECT t.Product, st.sProduct
FROM #t t
cross apply (
SELECT CAST(N'<root><r>' + REPLACE(t.Product,',', N'</r><r>') + N'</r></root>' as xml) xProduct
)xt
cross apply (
select CAST(r.value('.','NVARCHAR(MAX)') as nvarchar) sProduct
from xt.xProduct.nodes(N'//root/r') AS RECORDS(r)
) st
where st.sProduct in ('a', 'b', 'c')
)
select distinct x.Product, REVERSE(SUBSTRING(REVERSE(cleared.cProduct), 2, 999)) cleared
from x
cross apply ( select (
select distinct ref.sProduct + ','
from x ref
where ref.Product = x.Product
for xml path('') )
)cleared(cProduct)
;
drop table #t

How to dynamically specify columns in the UNPIVOT operator

I currently have the following query:
WITH History AS (
SELECT
kz.*,
kz.__$operation AS operation,
map.tran_begin_time as beginT,
map.tran_end_time as endT
FROM cdc.fn_cdc_get_all_changes_dbo_EXT_GeolObject_KategZalezh(sys.fn_cdc_get_min_lsn('dbo_EXT_GeolObject_KategZalezh'), sys.fn_cdc_get_max_lsn(), 'all') AS kz
INNER JOIN [cdc].[lsn_time_mapping] map
ON kz.[__$start_lsn] = map.start_lsn
where kz.GUID_BalanceHC_Zalezh = 'DDA9AB3A-A0AF-4623-9362-0000C8C83D63'
),
UnpivotedValues AS(
SELECT guid, GUID_another, field, val, operation, beginT, endT
FROM History
UNPIVOT ( [val] FOR field IN
(
area,
oilwidthmin,
oilwidthmax,
efectivwidthmin,
efectivwidthmax,
etc...
))t
),
UnpivotedWithLastValue AS (
SELECT
*,
--Use LAG() to get the last value for the same field
LAG(val, 1) OVER (PARTITION BY guid, GUID_another, field ORDER BY BeginT) LastVal
FROM UnpivotedValues
)
SELECT * FROM UnpivotedWithLastValue WHERE val <> LastVal OR LastVal IS NULL ORDER BY guid
This query returns the changed values for a single table that has CDC (Change Data Capture) enabled.
I want to create a stored procedure that receives the columns to be unpivoted, and the cdc function (e.g. cdc.fn_cdc_get_all_...) as parameters and returns the result set.
The results for these tables must be joined in one report.
In my case parameter 1 is cdc.fn_cdc_get_all_changes_dbo_EXT_GeolObject_KategZalezh(sys.fn_cdc_get_min_lsn('dbo_EXT_GeolObject_KategZalezh'), sys.fn_cdc_get_max_lsn(), 'all'). This is the CDC function.
How should I send the list of fields that I want in the result? What should that string look like?
Also, is there a way to do this without dynamic SQL? Dynamic SQL is not the best solution for performance.
As you know SQL Server is declarative by design and does not support macro substitution.
UNPIVOT would clearly be more performant, but here is a simplified example of an unpivot which does not require dynamic SQL, only a little XML.
Example
Let's assume your table/results look like this:
You may notice that we only specify the key fields to EXCLUDE in the final WHERE.
Declare @YourData table (ID int,Active bit,First_Name varchar(50),Last_Name varchar(50),EMail varchar(50),Salary decimal(10,2))
Insert into @YourData values
(1,1,'John','Smith','john.smith@email.com',85600),
(2,0,'Jane','Doe' ,'jane.doe@email.com',83200)
;with cte as (
-- Replace with your Complex Query
Select * from @YourData
)
Select A.ID
,A.Active
,C.*
From cte A
Cross Apply (Select XMLData=cast((Select A.* for XML RAW) as xml)) B
Cross Apply (
Select Item = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','varchar(max)')
From XMLData.nodes('/row') C1(n)
Cross Apply C1.n.nodes('./#*') C2(attr)
Where attr.value('local-name(.)','varchar(100)') not in ('ID','Active')
) C
Returns

Row concatenation with FOR XML, but with multiple columns?

I often use queries like:
SELECT *
FROM ThisTable
OUTER APPLY (SELECT (SELECT SomeField + ' ' AS [data()]
FROM SomeTable
WHERE SomeTable.ID = ThisTable.ID
FOR XML PATH ('')) AS ConcatenatedSomeField) A
I often want to get multiple concatenated fields from this table, instead of just one. I could logically do this:
SELECT *
FROM ThisTable
OUTER APPLY (SELECT (SELECT SomeField + ' ' AS [data()]
FROM SomeTable
WHERE SomeTable.ID = ThisTable.ID
FOR XML PATH ('')) AS ConcatenatedSomeField) A
OUTER APPLY (SELECT (SELECT SomeField2 + ' ' AS [data()]
FROM SomeTable
WHERE SomeTable.ID = ThisTable.ID
FOR XML PATH ('')) AS ConcatenatedSomeField2) B
OUTER APPLY (SELECT (SELECT SomeField3 + ' ' AS [data()]
FROM SomeTable
WHERE SomeTable.ID = ThisTable.ID
FOR XML PATH ('')) AS ConcatenatedSomeField3) C
But it looks crappy and error-prone when anything needs to be updated; also, SomeTable is often a long list of joined tables, so it could have performance implications querying the same tables over and over.
Is there a better way to do this?
Thanks.
You could do something like this. Instead of immediately sending the XML value to a string, this query uses the TYPE keyword to return an xml type object which can then be queried. The three query functions search the XML object for all instances of the SomeField element and return a new XML object containing just those values. Then the value function strips out the XML tags surrounding the values and passes them into a varchar(max).
SELECT ThisTable.ID
,[A].query('/SomeField').value('/', 'varchar(max)') AS [SomeField_Combined]
,[A].query('/SomeField2').value('/', 'varchar(max)') AS [SomeField2_Combined]
,[A].query('/SomeField3').value('/', 'varchar(max)') AS [SomeField3_Combined]
FROM ThisTable
OUTER APPLY (
SELECT (
SELECT SomeField + ' ' AS [SomeField]
,SomeField2 + ' ' AS [SomeField2]
,SomeField3 + ' ' AS [SomeField3]
FROM SomeTable
WHERE SomeTable.ID = ThisTable.ID
FOR
XML PATH('')
,TYPE
) AS [A]
) [A]
You can create a CLR User-Defined Aggregate Function that does the concatenation for you.
Your code would then look like this instead.
select S.ID,
dbo.Concat(S.SomeField1),
dbo.Concat(S.SomeField2),
dbo.Concat(S.SomeField3)
from SomeTable as S
group by S.ID
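Note that dbo.Concat here is not a built-in function: the aggregate itself has to be implemented in .NET, deployed, and registered before that query will run. A registration sketch, assuming a hypothetical assembly MyClrAggregates exposing an aggregate class named Concat, might look like:
-- Enable CLR once per instance (requires appropriate permissions)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
-- Hypothetical assembly name and path; adjust to your build output
CREATE ASSEMBLY MyClrAggregates
FROM 'C:\clr\MyClrAggregates.dll'
WITH PERMISSION_SET = SAFE;
CREATE AGGREGATE dbo.Concat (@value nvarchar(max))
RETURNS nvarchar(max)
EXTERNAL NAME MyClrAggregates.[Concat];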
This is the same answer as I gave here: https://dba.stackexchange.com/questions/125771/multiple-column-concatenation/
The OP of that question referenced the answer given here. You can see below that sometimes the simplest answer can be the best. If SomeTable is multiple tables then I would go ahead and put it into a CTE to avoid having the same complex code multiple times.
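For instance, a sketch of that CTE approach (SomeTable and SomeOtherTable below are hypothetical stand-ins for the long list of joined tables) might look like:
;WITH Src AS (
    -- write the expensive join once
    SELECT st.ID, st.SomeField, st.SomeField2
    FROM SomeTable st
    JOIN SomeOtherTable o ON o.ID = st.ID
)
SELECT s.ID,
       STUFF((SELECT ', ' + x.SomeField
              FROM Src x
              WHERE x.ID = s.ID
              FOR XML PATH('')), 1, 2, '') AS ConcatenatedSomeField,
       STUFF((SELECT ', ' + x.SomeField2
              FROM Src x
              WHERE x.ID = s.ID
              FOR XML PATH('')), 1, 2, '') AS ConcatenatedSomeField2
FROM Src s
GROUP BY s.ID;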
I ran a few tests using a little over 6 mil rows. With an index on the ID column.
Here is what I came up with.
Your initial query:
SELECT * FROM (
SELECT t.id,
stuff([M].query('/name').value('/', 'varchar(max)'),1,1,'') AS [SomeField_Combined1],
stuff([M].query('/car').value('/', 'varchar(max)'),1,1,'') AS [SomeField_Combined2]
FROM dbo.test t
OUTER APPLY(SELECT (
SELECT id, ','+name AS name
,','+car AS car
FROM test WHERE test.id=t.id
FOR XML PATH('') ,type)
AS M)
M ) S
GROUP BY id, SomeField_Combined1, SomeField_Combined2
This one ran for ~23 minutes.
I then ran this version, which is the one I first learned. In some ways it seems like it should take longer, but it doesn't.
SELECT test.id,
STUFF((SELECT ', ' + name
FROM test ThisTable
WHERE test.id = ThisTable.id
FOR XML PATH ('')),1,2,'') AS ConcatenatedSomeField,
STUFF((SELECT ', ' + car
FROM test ThisTable
WHERE test.id = ThisTable.id
FOR XML PATH ('')),1,2,'') AS ConcatenatedSomeField2
FROM test
GROUP BY id
This version ran in just over 2 minutes.