I often use queries like:
SELECT *
FROM ThisTable
OUTER APPLY (SELECT (SELECT SomeField + ' ' AS [data()]
                     FROM SomeTable
                     WHERE SomeTable.ID = ThisTable.ID
                     FOR XML PATH ('')) AS ConcatenatedSomeField) A
I often want to get multiple concatenated fields from this table, instead of just one. I could logically do this:
SELECT *
FROM ThisTable
OUTER APPLY (SELECT (SELECT SomeField + ' ' AS [data()]
                     FROM SomeTable
                     WHERE SomeTable.ID = ThisTable.ID
                     FOR XML PATH ('')) AS ConcatenatedSomeField) A
OUTER APPLY (SELECT (SELECT SomeField2 + ' ' AS [data()]
                     FROM SomeTable
                     WHERE SomeTable.ID = ThisTable.ID
                     FOR XML PATH ('')) AS ConcatenatedSomeField2) B
OUTER APPLY (SELECT (SELECT SomeField3 + ' ' AS [data()]
                     FROM SomeTable
                     WHERE SomeTable.ID = ThisTable.ID
                     FOR XML PATH ('')) AS ConcatenatedSomeField3) C
But this looks messy and is error-prone whenever anything needs to be updated; also, SomeTable is often a long list of joined tables, so hitting the same tables over and over could have performance implications.
Is there a better way to do this?
Thanks.
You could do something like this. Instead of immediately sending the XML value to a string, this query uses the TYPE keyword to return an xml type object, which can then be queried. The three query functions search the xml object for all instances of the SomeField element and return a new xml object containing just those values. Then the value function strips out the xml tags surrounding the values and returns them as a varchar(max).
SELECT ThisTable.ID
    ,[A].query('/SomeField').value('/', 'varchar(max)') AS [SomeField_Combined]
    ,[A].query('/SomeField2').value('/', 'varchar(max)') AS [SomeField2_Combined]
    ,[A].query('/SomeField3').value('/', 'varchar(max)') AS [SomeField3_Combined]
FROM ThisTable
OUTER APPLY (
    SELECT (
            SELECT SomeField + ' ' AS [SomeField]
                ,SomeField2 + ' ' AS [SomeField2]
                ,SomeField3 + ' ' AS [SomeField3]
            FROM SomeTable
            WHERE SomeTable.ID = ThisTable.ID
            FOR XML PATH('')
                ,TYPE
            ) AS [A]
    ) [A]
You can create a CLR User-Defined Aggregate Function that does the concatenation for you.
Your code would then look like this instead.
select S.ID,
dbo.Concat(S.SomeField1),
dbo.Concat(S.SomeField2),
dbo.Concat(S.SomeField3)
from SomeTable as S
group by S.ID
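For completeness, here is a rough sketch of how such an aggregate might be registered once the CLR assembly is built and deployed. The assembly name, file path, and class name below are hypothetical; adjust them to whatever your CLR project actually produces, and note that CLR must be enabled on the instance.
-- Hypothetical names/path; requires sp_configure 'clr enabled', 1.
CREATE ASSEMBLY ConcatAggregates
FROM 'C:\clr\ConcatAggregates.dll'
WITH PERMISSION_SET = SAFE;
GO

CREATE AGGREGATE dbo.Concat (@value nvarchar(max))
RETURNS nvarchar(max)
EXTERNAL NAME ConcatAggregates.[Concat];   -- assembly_name.[class_name]
GO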
This is the same answer as I gave here: https://dba.stackexchange.com/questions/125771/multiple-column-concatenation/
The OP of that question referenced the answer given here. As you can see below, sometimes the simplest answer is the best. If SomeTable is really multiple joined tables, I would put it into a CTE so the same complex code isn't repeated multiple times.
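As a minimal sketch of that CTE idea (the commented-out extra join and OtherTable are purely illustrative, not part of the original schema):
;WITH SomeTableCte AS (
    -- The long list of joined tables is written once, here.
    SELECT ST.ID, ST.SomeField, ST.SomeField2, ST.SomeField3
    FROM SomeTable ST
    -- JOIN OtherTable OT ON OT.ID = ST.ID
)
SELECT ThisTable.ID,
       STUFF((SELECT ', ' + SomeField
              FROM SomeTableCte
              WHERE SomeTableCte.ID = ThisTable.ID
              FOR XML PATH ('')), 1, 2, '') AS ConcatenatedSomeField
FROM ThisTable;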
I ran a few tests using a little over 6 million rows, with an index on the ID column.
Here is what I came up with.
Your initial query:
SELECT *
FROM (
    SELECT t.id,
           stuff([M].query('/name').value('/', 'varchar(max)'), 1, 1, '') AS [SomeField_Combined1],
           stuff([M].query('/car').value('/', 'varchar(max)'), 1, 1, '') AS [SomeField_Combined2]
    FROM dbo.test t
    OUTER APPLY (SELECT (
                     SELECT id, ',' + name AS name
                           ,',' + car AS car
                     FROM test
                     WHERE test.id = t.id
                     FOR XML PATH(''), type
                     ) AS M
                ) M
    ) S
GROUP BY id, SomeField_Combined1, SomeField_Combined2
This one ran for ~23 minutes.
I then ran this version, which is the one I first learned. In some ways it seems like it should take longer, but it doesn't.
SELECT test.id,
       STUFF((SELECT ', ' + name
              FROM test ThisTable
              WHERE test.id = ThisTable.id
              FOR XML PATH ('')), 1, 2, '') AS ConcatenatedSomeField,
       STUFF((SELECT ', ' + car
              FROM test ThisTable
              WHERE test.id = ThisTable.id
              FOR XML PATH ('')), 1, 2, '') AS ConcatenatedSomeField2
FROM test
GROUP BY id
This version ran in just over 2 minutes.
Looking to pivot/transpose with T-SQL (or something else) a table with multiple rows per item number, one row per Code (unit of measure).
It would have to be dynamic, as there could be a lot of different unit of measure codes per item.
Current data table:
select [Item No_], Code, [Qty_ per Unit of Measure], Weight, Cubage
from [mycompany$Item Unit of Measure]
where [Item No_] in ('007967','007968')
Desired output would be:
Additional info
We have a table that holds all the possible Unit of Measure codes, which could perhaps be used in the final code:
select
Code
from [mycompany$Unit of Measure]
How to achieve this, and what would the SQL code look like?
Suggested solution from @Larnu:
DECLARE @cols AS NVARCHAR(MAX);
DECLARE @query AS NVARCHAR(MAX);

SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(Code)
                      FROM [mycompany$Unit of Measure]
                      FOR XML PATH(''), TYPE
                      ).value('.', 'NVARCHAR(MAX)')
                , 1, 1, '');

SELECT @query =
    'SELECT *
     FROM
     (
         SELECT
             o.[Item No_],
             p.Code,
             o.Weight
         FROM [mycompany$Item Unit of Measure] AS o
         INNER JOIN [mycompany$Unit of Measure] AS p ON o.Code = p.Code
     ) AS t
     PIVOT
     (
         MAX(Weight)
         FOR Code IN ( ' + @cols + ' )
     ) AS p;';

EXECUTE (@query);
However, this only gives the max value for Weight and not Cubage. It also doesn't meet the desired end result, as the column headers are not tagged as CODE.Weight, CODE.Cubage, etc. (PALLET.Weight, PALLET.Cubage).
Screenshot of results with above code:
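One possible direction (a hedged sketch under the assumption that each Item No_/Code pair has at most one row, not a tested solution for this schema): skip PIVOT and instead build one conditional aggregate per Code for each measure, so every Code yields both a Weight and a Cubage column named the way the desired output shows (e.g. PALLET.Weight, PALLET.Cubage):
DECLARE @cols NVARCHAR(MAX);
DECLARE @query NVARCHAR(MAX);

-- One MAX(CASE ...) per Code for Weight and one for Cubage,
-- aliased as [<Code>.Weight] / [<Code>.Cubage].
SELECT @cols = STUFF((SELECT ','
                           + ' MAX(CASE WHEN Code = ''' + Code + ''' THEN Weight END) AS ' + QUOTENAME(Code + '.Weight')
                           + ','
                           + ' MAX(CASE WHEN Code = ''' + Code + ''' THEN Cubage END) AS ' + QUOTENAME(Code + '.Cubage')
                      FROM [mycompany$Unit of Measure]
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SELECT @query =
    'SELECT [Item No_], ' + @cols + '
     FROM [mycompany$Item Unit of Measure]
     GROUP BY [Item No_];';

EXECUTE (@query);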
I have the following query to pull the content of a LOB field (HL_DELETED_RECORD, which contains no more than 25 characters), and it works fine by itself:
SELECT REPLACE(REPLACE( CONVERT(VARCHAR(20), HL_DELETED_RECORD), '&', '&'), 'ý', ' ') AS FAC_CHANGES,
HL_REC_ID
FROM SEC_HIST WITH(NOLOCK)
When I include this query as part of a CTE, there are no errors, but the LOB text data does not display at all:
WITH sec_audit AS
(
    SELECT *...
),
hist_logs AS
(
    SELECT REPLACE(REPLACE(CONVERT(VARCHAR(20), HL_DELETED_RECORD), '&', '&'), 'ý', ' ') AS FAC_CHANGES,
           HL_REC_ID
    FROM SEC_HIST WITH(NOLOCK)
)
SELECT *
FROM sec_audit
INNER JOIN hist_logs WITH(NOLOCK) ON HL_REC_ID = HL_RECORD_ID
The LOB column is defined as: [HL_DELETED_RECORD] text NULL
And here's an HL_DELETED_RECORD data sample: 36371ý1025074ýLEC
Which is converted as follows: 36371 1025074 LEC
Anyone else experience this? Thanks!
I know this question has been asked before here, but for SnowSQL in particular, is there a function similar to STUFF to combine two values into a single record? I basically want to be able to use this query:
SELECT ISSUE_ID,
       STUFF((SELECT ', ' + AFFECTS_VERSION
              FROM VW_JIRA_ISSUES
              WHERE ISSUE_ID = T.ISSUE_ID
              FOR XML PATH (''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
           AS VERSIONS
FROM VW_JIRA_ISSUES AS T
GROUP BY ISSUE_ID
How about Snowflake's INSERT() function? I understand it is basically the same as MySQL's INSERT() function which in turn is the equivalent of STUFF() in SQL Server.
References:
https://docs.snowflake.net/manuals/sql-reference/functions/insert.html
https://database.guide/whats-the-mysql-equivalent-of-stuff-in-sql-server/
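As a hedged illustration of the documented signature INSERT(<base_expr>, <pos>, <len>, <insert_expr>), something like this should replace the two characters starting at position 3:
-- Positions are 1-based, so this should return 'abZZZef'.
SELECT INSERT('abcdef', 3, 2, 'ZZZ');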
select issue_id,
listagg(AFFECTS_VERSION, ', ') within group (order by issue_id desc)
FROM VW_JIRA_ISSUES
group by issue_id
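If duplicate versions can show up for an issue, LISTAGG also accepts DISTINCT (per the Snowflake docs, the WITHIN GROUP ordering then has to use the same column), for example:
select issue_id,
       listagg(distinct AFFECTS_VERSION, ', ') within group (order by AFFECTS_VERSION) as versions
from VW_JIRA_ISSUES
group by issue_id;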
I have a table where one field is a JSON string:
"CX.UW.001": "03/08/2017", "CX.UW.001.AUDIT": "admin",
I want to produce an SSRS report where it appears in a readable format, like:
CX.UW.001: 03/08/2017
CX.UW.001.AUDIT: admin
Is it possible?
If you are looking for multiple records, just about any parse/split function will do, or you can use a simple CROSS APPLY in concert with a little XML:
Declare @YourTable table (ID int, JSON varchar(max))
Insert Into @YourTable values
(1,'"CX.UW.001": "03/08/2017", "CX.UW.001.AUDIT": "admin"')

Select A.ID
      ,DisplayAs = replace(B.RetVal,'"','')
 From @YourTable A
 Cross Apply (
              Select RetSeq = Row_Number() over (Order By (Select null))
                    ,RetVal = LTrim(RTrim(B.i.value('(./text())[1]', 'varchar(max)')))
              From (Select x = Cast('<x>' + replace((Select replace(A.JSON,',','§§Split§§') as [*] For XML Path('')),'§§Split§§','</x><x>') + '</x>' as xml).query('.')) as X
              Cross Apply x.nodes('x') AS B(i)
             ) B
Returns
ID DisplayAs
1 CX.UW.001: 03/08/2017
1 CX.UW.001.AUDIT: admin
Or, if you want the string to wrap:
Select A.ID
      ,DisplayAs = replace(replace(JSON,',',char(13)),'"','')
 From @YourTable A
Returns
1 CX.UW.001: 03/08/2017
CX.UW.001.AUDIT: admin
Right-click that field, choose Expression, locate Text in the Common Functions category, and use the Replace function. The syntax should be something like:
Replace(Fields!Yours.Value, """", "")
Or in T-SQL:
Select Replace(JSON_COLUMN,'"','')
From table
I'm trying to generate a report that gives me data from 3 different tables. I used a UNION first; now I need to get one column from the last table. I tried a JOIN, but it breaks my code.
Here's what I want:
Select the document number, patient full name (in one column), and patient account number where patient zip = '45142', then add the location of the claim (trans or history) once you have the first part.
This is the bulk of my data:
(SELECT DOCUMENT_NUMBER, TRANS_TYPE,
        PATIENT_LAST_NAME + ', ' + PATIENT_FIRST_NAME AS NAME,
        PATIENT_ZIP
 FROM HCFA_M
 WHERE PATIENT_ZIP LIKE '45142%')
UNION
(SELECT DOCUMENT_NUMBER, TRANS_TYPE,
        PATIENT_LAST_NAME + ', ' + PATIENT_FIRST_NAME AS NAME,
        PATIENT_ZIP
 FROM UB_M
 WHERE PATIENT_ZIP LIKE '45142%')
ORDER BY NAME ASC, TRANS_TYPE
The table I need the last column from is:
SELECT LOCATION
FROM DOCUMENT_M
Going to guess on the join condition:
SELECT HCFA_M.DOCUMENT_NUMBER, TRANS_TYPE,
       PATIENT_LAST_NAME + ', ' + PATIENT_FIRST_NAME AS NAME,
       PATIENT_ZIP, LOCATION
FROM HCFA_M
JOIN DOCUMENT_M
    ON DOCUMENT_M.DOCUMENT_NUMBER = HCFA_M.DOCUMENT_NUMBER
WHERE PATIENT_ZIP LIKE '45142%'
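If you need that same LOCATION column on top of the original UNION of both tables, one way (a sketch, assuming DOCUMENT_NUMBER is the join key in all three tables) is to wrap the UNION in a derived table and join it to DOCUMENT_M:
SELECT U.DOCUMENT_NUMBER, U.TRANS_TYPE, U.NAME, U.PATIENT_ZIP, D.LOCATION
FROM (
    SELECT DOCUMENT_NUMBER, TRANS_TYPE,
           PATIENT_LAST_NAME + ', ' + PATIENT_FIRST_NAME AS NAME,
           PATIENT_ZIP
    FROM HCFA_M
    WHERE PATIENT_ZIP LIKE '45142%'
    UNION
    SELECT DOCUMENT_NUMBER, TRANS_TYPE,
           PATIENT_LAST_NAME + ', ' + PATIENT_FIRST_NAME AS NAME,
           PATIENT_ZIP
    FROM UB_M
    WHERE PATIENT_ZIP LIKE '45142%'
) AS U
JOIN DOCUMENT_M AS D
    ON D.DOCUMENT_NUMBER = U.DOCUMENT_NUMBER
ORDER BY U.NAME ASC, U.TRANS_TYPE;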