Performance issue retrieving data from an XML attribute using XQuery - T-SQL

I have a history table that holds the transaction records of a master table; it uses an XML column to store that transaction information. The table structure with data looks as follows.
In the Content column the data is stored as XML like below.
<Answers>
<AnswerSet>
<Answer questionId="ProductCode">S3404</Answer>
<Answer questionId="ProductName">Parabolic Triple</Answer>
<Answer questionId="LegacyOptionID" selectedvalue="1389">1389</Answer>
<Answer questionId="LegacyContentID" selectedvalue="624">624</Answer>
<Answer questionId="LegacyPageID" selectedvalue="355">355</Answer>
<Answer questionId="LegacyParentID" selectedvalue="760">760</Answer>
</AnswerSet>
</Answers>
The structure is the same in all rows, but the data in the Answer nodes differs. I want to get the row that has ProductCode="S3404" and the newest CreatedDate.
I have created a query like this:
select n2.*
from nodehistory n2
CROSS APPLY n2.content.nodes('Answers/AnswerSet') T(c)
WHERE c.value('./Answer[@questionId="ProductCode"][1]','varchar(100)') = 'S3404'
ProductCode is unique for every nodeid, but this query returns more than one row per nodeid: this is a transaction table, so the same XML can be stored multiple times. To handle that I need something like an ORDER BY CreatedDate DESC, but the query is already slow, I think because of the XML processing.
Could we instead first get
Select Top 1 nodeid from NodeHistory order by CreatedDate desc
and then search the XML part?
Any ideas on a more suitable approach for better performance?

If you're not doing much else with the XML data then .exist should be more efficient than .value; I think there's a note in BOL regarding this. You could also use sql:variable to make this more generic, e.g. something like:
declare @productCode varchar(20) = 'S3404'
select n2.*
from nodehistory n2
inner join ( select max(id) id from nodehistory group by nodeId ) maxId ON n2.id = maxId.id
where n2.content.exist('Answers/AnswerSet/Answer[@questionId="ProductCode"][.=sql:variable("@productCode")]') = 1
I've used a subquery to limit the resultset to the max(id) per nodeId. Your requirement might be slightly different but you get the idea.
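If the newest row per nodeId by CreatedDate is what you really need (rather than max(id)), a hedged variant of the same idea, assuming the id, nodeId, CreatedDate and content columns described in the question, would rank the rows first and then apply the .exist filter:
declare @productCode varchar(20) = 'S3404'

select latest.*
from (
    select nh.*,
           row_number() over (partition by nh.nodeId order by nh.CreatedDate desc) as rn
    from nodehistory nh
) latest
where latest.rn = 1
  and latest.content.exist('Answers/AnswerSet/Answer[@questionId="ProductCode"][.=sql:variable("@productCode")]') = 1
Whether this beats the max(id) join depends on your indexes; it is only a sketch of the CreatedDate ordering the question mentions.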
In terms of performance, XML indexes can transform SQL/XML queries, but at a cost: for storage you will need between 2 and 5 times the size of the original table, so you'll have to weigh that up against your data. If you do decide to go with XML indexes, then a PROPERTY index should help this type of query, e.g.
-- create the primary XML index
CREATE PRIMARY XML INDEX xmlidx_nodehistory ON nodehistory(content)
GO
CREATE XML INDEX xmlprpidx_nodehistory ON nodehistory(content)
USING XML INDEX xmlidx_nodehistory FOR PROPERTY
go
declare @productCode varchar(20) = 'S3404'
select n2.*
from nodehistory n2
inner join ( select max(id) id from nodehistory group by nodeId ) maxId ON n2.id = maxId.id
where n2.content.exist('Answers/AnswerSet/Answer[@questionId="ProductCode"][.=sql:variable("@productCode")]') = 1
See these great articles for more SQL XML performance tuning ideas:
Performance Optimizations for the XML Data Type in SQL Server 2005
http://msdn.microsoft.com/en-us/library/ms345118.aspx
XML Indexes in SQL Server 2005
http://msdn.microsoft.com/en-us/library/ms345121(SQL.90).aspx

Related

Is it OK to store a transactional primary key on a data warehouse dimension table to relate fact and dim tables?

I have a data source (a Postgres transactional system) like this (simplified; the actual tables have more fields than this):
Then I need to create an ETL pipeline, where the required report is something like this:
order number (from sales_order_header)
item name (from sales_order_lines)
batch shift start & end (from receiving_batches)
delivered quantity, approved received quantity, rejected received quantity (from receiving_inventories)
My design for fact-dim tables is this (simplified).
What I don't know about is the optimal ETL design.
Let's focus on how to insert the fact, and on the relationship between the fact and dim_sales_orders.
If I have staging tables like these:
The ETL runs daily. After 22:00, there will be no more receiving, so I can run the ETL at 23:00.
Then I can just fetch data from sales_order_header and sales_order_lines, so at 23:00 the script can run, something like:
INSERT INTO staging_sales_orders
SELECT
    order_number,
    item_name
FROM
    sales_order_header soh,
    sales_order_lines sol
WHERE
    soh.sales_order_id = sol.sales_order_header_id
    AND date_trunc('day', sol.created_timestamp) = date_trunc('day', now());
And the fact table load can run at 23:30, with this query:
SELECT
soh.order_number,
rb.batch_shift_start,
rb.batch_shift_end,
sol.item_name,
ri.delivered_quantity,
ri.approved_received_quantity,
ri.rejected_received_quantity
FROM
receiving_batches rb,
receiving_inventories ri,
sales_order_lines sol,
sales_order_header soh
WHERE
rb.batch_id = ri.batch_id
AND ri.sales_order_line_id = sol.sales_order_line_id
AND sol.sales_order_header_id = soh.sales_order_id
AND date_trunc('day', sol.created_timestamp) = date_trunc('day', now())
But how do I optimally load the data, particularly into the fact table?
My approach:
1) Select from staging_sales_orders and insert the rows into dim_sales_orders, using an auto-increment primary key.
2) Before inserting into fact_receiving_inventories, I need to know the dim_sales_order_id. So in that case, I select:
SELECT
dim_sales_order_id
FROM
dim_sales_orders dso
WHERE
order_number = staging_row.order_number
AND item_name = staging_row.item_name
and then insert into the fact table.
Now, what I'm doubtful about is point 2 (selecting from the existing dim). Here I select based on two varchar columns, which should be a performance hit. Since the source is in normalized form, I'm thinking of modifying the staging tables by adding sales_order_line_id to both of them. Then, in point 2 above, I can just do:
SELECT
dim_sales_order_id
FROM
dim_sales_orders dso
WHERE
sales_order_line_id = staging_row.sales_order_line_id
But as a consequence, I will need to add sales_order_line_id to dim_sales_orders, which I don't see commonly done in tutorials. I mean, adding the transactional table's PK can technically be done, since I can access the data source. But is it good fact-dim design to add such a transactional field (especially since it is a PK)?
Or is there any other approach, rather than selecting the existing dim based on two varchars?
How do I optimally select dimension ids for fact tables?
Thanks
It is practically mandatory to include the source PK/BK (business key) in a dimension.
The standard process is to load your dims and then load your facts. For the fact loads you translate the source data to the appropriate dim surrogate keys (SKs) with lookups to the dims using the PK/BK.
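A minimal sketch of that pattern, using the table and column names from the question, with two assumptions flagged in the comments: the staging tables carry sales_order_line_id as you proposed, and a hypothetical staging_receiving_inventories table holds the day's receiving rows:
-- 1) Load the dimension, carrying the source business key (sales_order_line_id)
--    alongside the auto-increment surrogate key dim_sales_order_id.
INSERT INTO dim_sales_orders (sales_order_line_id, order_number, item_name)
SELECT s.sales_order_line_id, s.order_number, s.item_name
FROM staging_sales_orders s            -- assumes sales_order_line_id was added to staging
WHERE NOT EXISTS (
    SELECT 1
    FROM dim_sales_orders d
    WHERE d.sales_order_line_id = s.sales_order_line_id
);

-- 2) Load the fact, translating the business key into the surrogate key
--    with a lookup join against the dimension (no varchar matching needed).
INSERT INTO fact_receiving_inventories
    (dim_sales_order_id, delivered_quantity, approved_received_quantity, rejected_received_quantity)
SELECT d.dim_sales_order_id,
       r.delivered_quantity,
       r.approved_received_quantity,
       r.rejected_received_quantity
FROM staging_receiving_inventories r   -- hypothetical staging table for the day's receiving rows
JOIN dim_sales_orders d
  ON d.sales_order_line_id = r.sales_order_line_id;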

Use SQL to Evaluate XML string broken into several rows

I have an application that stores a single XML record broken up into 3 separate rows, I'm assuming due to length limits. The first two rows each max out the storage at 4000 characters and unfortunately don't break at the same place for each record.
I'm trying to find a way to combine the three rows into a complete XML record that I can then extract data from.
I've tried concatenating the rows but can't find a data type or anything else that will let me pull the three rows into a single readable XML record.
I'm up against several limitations: we have select-only access to the DB, and I'm stuck using plain SQL because I don't have enough access to implement any kind of external program to pull the data out and manipulate it with something else.
Any ideas would be very appreciated.
Without sample data and desired results, we can only offer a possible approach.
Since you are on 2017, you have access to string_agg()
Here I am using ID as the proper sequence.
I should add that try_convert() will return a NULL if the conversion to XML fails.
Example
Declare @YourTable table (ID int, SomeCol varchar(4000))
Insert Into @YourTable values
 (1,'<root><name>XYZ Co')
,(2,'mpany</')
,(3,'name></root>')
Select try_convert(xml, string_agg(SomeCol,'') within group (order by ID))
From @YourTable
Returns
<root>
<name>XYZ Company</name>
</root>
EDIT: SQL Server 2014 option
Select try_convert(xml,(Select '' + SomeCol
                        From @YourTable
                        Order By ID
                        For XML Path(''), TYPE).value('.', 'varchar(max)')
                       )
Or Even
Declare @S varchar(max) = ''
Select @S = @S + SomeCol
From @YourTable
Order By ID
Select try_convert(xml, @S)
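Since the goal is to extract data from the reassembled record, the combined value can also be queried once it is typed as xml. A small sketch continuing the @YourTable example above (the /root/name path is just the one from this sample data):
Declare @xml xml = (
    Select try_convert(xml, string_agg(SomeCol,'') within group (order by ID))
    From @YourTable
)

Select @xml.value('(/root/name)[1]', 'varchar(100)') as CompanyName   -- returns 'XYZ Company'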

Redshift large 'in' clause best practices

We have a query in which a list of parameter values is provided in the "IN" clause. Some time back this query failed to execute because the data in the "IN" clause got quite large, and the resulting query exceeded Redshift's 16 MB query size limit. We then tried processing the data in batches so as to limit the data and not breach the 16 MB limit.
My question is: what are the factors/pitfalls to keep in mind while supplying such large data to the "IN" clause of a query, and is there any alternative way to deal with it?
If you have control over how you are generating your code, you could split it up as follows.
First, submit code to drop and recreate a filter table:
drop table if exists myfilter;
create table myfilter (filter_text varchar(max));
The second step is to populate the filter table in chunks of a suitable size, e.g. 1000 values at a time:
insert into myfilter
values ({{myvalue1}}), ({{myvalue2}}), ({{myvalue3}}); -- etc., one row per value, up to 1000 rows per statement
Repeat the above step until you have all of your values inserted.
Then use that filter table as follows:
select * from master_table
where some_value in (select filter_text from myfilter);
drop table myfilter;
A large IN list is not best practice in itself; it's better to use joins for large lists:
construct a virtual table as a subquery
join your target table to the virtual table
like this
with
your_list as (
select 'first_value' as search_value
union select 'second_value'
...
)
select ...
from target_table t1
join your_list t2
on t1.col=t2.search_value

Hive: How to do a SELECT query to output a unique primary key using HiveQL?

I have a dataset with the following schema which I want to transform into a table that can be exported to SQL. I am using Hive. The input is as follows:
call_id,stat1,stat2,stat3
1,a,b,c,
2,x,y,z,
3,d,e,f,
1,j,k,l,
The output table needs to have call_id as its primary key so it needs to be unique. The output schema should be
call_id,stat2,stat3,
1,b,c, or (1,k,l)
2,y,z,
3,e,f,
The problem is that when I use the DISTINCT keyword in the Hive query, the DISTINCT applies to all the columns combined. I want to apply the DISTINCT operation only to the call_id. Something along the lines of
SELECT DISTINCT(call_id), stat2,stat3 from intable;
However, this is not valid in Hive (I am not well-versed in SQL either).
The only legal query seems to be
SELECT DISTINCT call_id, stat2,stat3 from intable;
But this returns multiple rows with the same call_id, because the other columns differ and each row as a whole is distinct.
NOTE: There is no arithmetic relation between a,b,c,x,y,z, etc. So any trick of averaging or summing is not viable.
Any ideas how I can do this?
One quick idea, not the best one, but it will do the job (note that Hive's split() takes a regular expression, so the pipe has to be escaped):
hive> create table temp1 (a int, b string);
hive> insert overwrite table temp1
      select call_id, max(concat(stat1,'|',stat2,'|',stat3)) from intable group by call_id;
hive> insert overwrite table intable
      select a, split(b,'\\|')[0], split(b,'\\|')[1], split(b,'\\|')[2] from temp1;
"I want to apply the DISTINCT operation only to the call_id"
But how will Hive then know which row to eliminate?
Without knowing the amount of data or the size of the stat fields you have, the following query can do the job:
select distinct i1.call_id, i1.stat2, i1.stat3 from (
select call_id, MIN(concat(stat1, stat2, stat3)) as smin
from intable group by call_id
) i2 join intable i1 on i1.call_id = i2.call_id
AND concat(i1.stat1, i1.stat2, i1.stat3) = i2.smin;
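If your Hive version has windowing functions (0.11 and later), a hedged alternative is to rank the rows per call_id and keep one row per group; which row survives is arbitrary unless the ORDER BY picks it deliberately:
SELECT call_id, stat2, stat3
FROM (
    SELECT call_id, stat2, stat3,
           row_number() OVER (PARTITION BY call_id ORDER BY stat1) AS rn
    FROM intable
) t
WHERE rn = 1;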

Detecting duplicate values in a column of a DataTable while traversing through it

I have a DataTable with Id (Guid) and Name (string) columns. I traverse the DataTable, run validation criteria on the Name (say, it should contain only letters and numbers) and then add the corresponding Id to a List if the Name passes the validation.
Something like below:
List<Guid> validIds = new List<Guid>();
foreach (DataRow row in DataTable1.Rows)
{
    if (IsValid(row["Name"]))
    {
        validIds.Add((Guid)row["Id"]);
    }
}
In addition to this validation, I should also check that the Name is not repeated anywhere in the DataTable (treating different casings as duplicates); if it is repeated, I should not add the corresponding Id to the List.
Things I am thinking/have thought about:
1) I can have another List, check for the Name in it and, if it already exists, skip adding the corresponding Guid.
2) I cannot use HashSet as that would treat "Test" and "test" as different strings and not duplicates.
3) Copy the DataTable to another one that holds only the distinct names (I haven't tried this and the code might be incorrect; please correct me wherever possible):
DataTable copiedDataTable = DataTable1.DefaultView.ToTable(true, "Name");
copiedDataTable.CaseSensitive = true;
I would loop through the original DataTable and check for the existence of the Name in copiedDataTable; if it exists, I won't add the Id to the List.
Is there any better or more optimal way to achieve this? I always need to keep performance in mind. Although there are many related questions on SO, I didn't find a problem similar to this one; if you could point me to a similar question, that would be helpful.
EDIT: The number of records might vary from 2000 to 3000.
Thanks
If you are looking to prevent duplicates, it may be grueling work, and I don't know how many records you're dealing with at a time... If it's a small set, I'd consider doing a query before each attempted insert from your LIVE source, based on
select COUNT(*) as CountOnFile from ProductionTable where UPPER(name) = UPPER(name from live data).
If the result set CountOnFile > 0, don't add.
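As a hedged, concrete version of that pre-check (ProductionTable and its Name column come from the snippet above; @candidateName is just an illustrative parameter for the value coming from your live data):
declare @candidateName varchar(100) = 'Test'   -- illustrative value from the live data

select COUNT(*) as CountOnFile
from ProductionTable
where UPPER(Name) = UPPER(@candidateName)
-- if CountOnFile > 0, skip the insert for this row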
If you are dealing with a large dataset, like a bulk import, I would pull all the data into a temp table, then do a query with NOT IN... something like:
create table OkToBeAdded as
select distinct upper( TempTable.Name ) as Name, GUID
from TempTable
where upper( TempTable.Name )
NOT IN ( select upper( LiveTable.Name )
from LiveTable
where upper( TempTable.Name ) = upper( LiveTable.Name )
);
insert into LiveTable ( Name, GUID )
select Name, GUID from OkToBeAdded;
Obviously, the SQL is a sample and would need to be adjusted for your specific back-end source.
/* I did this entirely in SQL and avoided ADO.NET */
/* I pass the CSV of valid object Ids and split it into a table */
DECLARE @TableTemp TABLE
(
    TempId uniqueidentifier
)
INSERT INTO @TableTemp
SELECT cast(Data AS uniqueidentifier) AS ID FROM dbo.Split1(@ValidObjectIdsAsCSV, ',')
/* Self-join with Table1 for any duplicate rows and update the column value */
UPDATE A
SET IsValidated = 1
FROM Table1 AS A INNER JOIN @TableTemp AS Temp
    ON A.ID = Temp.TempId
WHERE NOT EXISTS (SELECT B.Name, Count(B.Name) FROM Table1 AS B
                  WHERE A.Name = B.Name
                  GROUP BY B.Name HAVING Count(B.Name) > 1)