Optimization of Sybase 15.5 union query - tsql

I'm having trouble optimizing the following query on Sybase ASE 15.5. Does anyone know how I could improve it? Each of the tables involved has about 30 million rows. I have tried my best to optimize it, but it still takes a long time (1.5 hours).
create table #tmp1( f_id smallint, a_date smalldatetime )
create table #tmp2( f_id smallint, a_date smalldatetime )

insert #tmp1
select f_id, a_date = max( a_date )
FROM audit_table
WHERE i_date = @pIDate
group by f_id

insert #tmp2
select f_id, a_date = max( a_date )
FROM n_audit_table
WHERE i_date = @pIDate
group by f_id
create table #tmp(
t_account varchar(32) not null,
t_id varchar(32) not null,
product varchar(64) null
)
insert into #tmp
select t_account, t_id, product
FROM audit_table nt, #tmp1 a
WHERE nt.i_date = @pIDate
and nt.a_date = a.a_date
and nt.f_id = a.f_id
union
select t_account, t_id, product
FROM n_audit_table t, #tmp2 a
WHERE t.i_date = @pIDate
and t.a_date = a.a_date
and t.f_id = a.f_id
Both tables have indexes on (i_date, a_date, f_id). The showplan for the long-running statement is below.
QUERY PLAN FOR STATEMENT 2 (at line 24).
Optimized using Serial Mode
STEP 1
The type of query is INSERT.
10 operator(s) under root
|ROOT:EMIT Operator (VA = 10)
|
| |INSERT Operator (VA = 9)
| | The update mode is direct.
| |
| | |HASH UNION Operator (VA = 8) has 2 children.
| | | Using Worktable1 for internal storage.
| | | Key Count: 3
| | |
| | | |NESTED LOOP JOIN Operator (VA = 3) (Join Type: Inner Join)
| | | |
| | | | |SCAN Operator (VA = 0)
| | | | | FROM TABLE
| | | | | #tmp1
| | | | | a
| | | | | Table Scan.
| | | | | Forward Scan.
| | | | | Positioning at start of table.
| | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | With LRU Buffer Replacement Strategy for data pages.
| | | |
| | | | |RESTRICT Operator (VA = 2)(5)(0)(0)(0)(0)
| | | | |
| | | | | |SCAN Operator (VA = 1)
| | | | | | FROM TABLE
| | | | | | audit_table
| | | | | | nt
| | | | | | Index : IX_audit_table
| | | | | | Forward Scan.
| | | | | | Positioning by key.
| | | | | | Keys are:
| | | | | | i_date ASC
| | | | | | a_date ASC
| | | | | | Using I/O Size 2 Kbytes for index leaf pages.
| | | | | | With LRU Buffer Replacement Strategy for index leaf pages.
| | | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | | With LRU Buffer Replacement Strategy for data pages.
| | |
| | | |NESTED LOOP JOIN Operator (VA = 7) (Join Type: Inner Join)
| | | |
| | | | |SCAN Operator (VA = 4)
| | | | | FROM TABLE
| | | | | #tmp2
| | | | | a
| | | | | Table Scan.
| | | | | Forward Scan.
| | | | | Positioning at start of table.
| | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | With LRU Buffer Replacement Strategy for data pages.
| | | |
| | | | |RESTRICT Operator (VA = 6)(5)(0)(0)(0)(0)
| | | | |
| | | | | |SCAN Operator (VA = 5)
| | | | | | FROM TABLE
| | | | | | n_audit_table
| | | | | | t
| | | | | | Index : IX_n_audit_table
| | | | | | Forward Scan.
| | | | | | Positioning by key.
| | | | | | Keys are:
| | | | | | i_date ASC
| | | | | | a_date ASC
| | | | | | Using I/O Size 2 Kbytes for index leaf pages.
| | | | | | With LRU Buffer Replacement Strategy for index leaf pages.
| | | | | | Using I/O Size 2 Kbytes for data pages.
| | | | | | With LRU Buffer Replacement Strategy for data pages.
| |
| | TO TABLE
| | #tmp
| | Using I/O Size 2 Kbytes for data pages.
Total estimated I/O cost for statement 2 (at line 24): 29322945.

I doubt it's a union issue; the underlying queries are the more probable troublemaker.
I suggest you start by adding indexes on your temp tables:
create table #tmp1( f_id smallint, a_date smalldatetime )
-- a table can have only one clustered index, so make it a composite one
Create clustered index IX1Temp on #tmp1( f_id, a_date )
...
Also, I don't see much sense in #tmp1 and #tmp2 the way you use them; you could use a derived table (or a CTE, where supported) instead. I would also recommend trying PARTITION BY (a window function) instead of the GROUP BY, as sketched below.
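A minimal sketch of that PARTITION BY idea, for illustration only: Sybase ASE 15.5 itself has no OVER() clause, so treat this as a standard-SQL sketch for a platform (or later version) that supports window functions; the column names are taken from the question.
select t_account, t_id, product
from ( select t_account, t_id, product, a_date,
              max( a_date ) over ( partition by f_id ) as max_a_date
       from audit_table
       where i_date = @pIDate ) x
where a_date = max_a_date
-- the window function computes the per-f_id maximum a_date inline,
-- so no temp table and no self-join are needed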

According to the query execution plan, the problem is the table scans on the temporary tables.
Please get the execution plan for the following query:
insert into #tmp
select t_account, t_id, product
FROM
    audit_table nt,
    (
        select f_id, a_date = max( a_date )
        FROM audit_table
        WHERE i_date = @pIDate
        group by f_id
    ) a
WHERE
    nt.i_date = @pIDate
    and nt.a_date = a.a_date
    and nt.f_id = a.f_id
union
select t_account, t_id, product
FROM
    n_audit_table t,
    (
        select f_id, a_date = max( a_date )
        FROM n_audit_table
        WHERE i_date = @pIDate
        group by f_id
    ) a
WHERE
    t.i_date = @pIDate
    and t.a_date = a.a_date
    and t.f_id = a.f_id

How many rows end up in each of the temporary tables?
It looks like the temporary tables could be replaced by using HAVING; I would need to test it, as it is always tricky when your GROUP BY is on a single column but you need more columns in the output (see the sketch below).
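A rough, untested sketch of that HAVING idea, relying on Sybase's Transact-SQL extended GROUP BY (which allows columns in the select list and HAVING clause that are not in the GROUP BY):
select t_account, t_id, product
from audit_table
where i_date = @pIDate
group by f_id
having a_date = max( a_date )
-- the extended GROUP BY joins each group's max( a_date ) back to the
-- detail rows, keeping only the rows at the group maximum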
Try running the statement with SET STATISTICS PLANCOST ON and SET STATISTICS IO ON, as that gives a good idea of the number of pages scanned and whether Sybase is going wrong somewhere while optimizing the query.
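For example (both SET options exist in ASE 15.x; run them in the same session that executes the query):
set statistics plancost on
set statistics io on
go
-- then execute the insert ... union statement and compare estimated vs.
-- actual row and page counts in the PLANCOST tree, along with the
-- logical and physical reads reported per table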

Related

How to convert row into column in PostgreSQL of below table

I am trying to convert the Trace table into the Result table in PostgreSQL. The table holds a huge amount of data.
I have a table named Trace:
entity_id | ts | key | bool_v | dbl_v | str_v | long_v |
---------------------------------------------------------------------------------------------------------------
1ea815c48c5ac30bca403a1010b09f1 | 1593934026155 | temperature | | | | 45 |
1ea815c48c5ac30bca403a1010b09f1 | 1593934026155 | operation | | | Normal | |
1ea815c48c5ac30bca403a1010b09f1 | 1593934026155 | period | | | | 6968 |
1ea815c48c5ac30bca403a1010b09f1 | 1593933202984 | temperature | | | | 44 |
1ea815c48c5ac30bca403a1010b09f1 | 1593933202984 | operation | | | Reverse | |
1ea815c48c5ac30bca403a1010b09f1 | 1593933202984 | period | | | | 3535 |
Trace Table
I want to convert the above table into the following table in PostgreSQL:
Output Table: Result
entity_id | ts | temperature | operation | period |
----------------------------------------------------------------------------------------|
1ea815c48c5ac30bca403a1010b09f1 | 1593934026155 | 45 | Normal | 6968 |
1ea815c48c5ac30bca403a1010b09f1 | 1593933202984 | 44 | Reverse | 3535 |
Result Table
Have you tried this yet?
select entity_id, ts,
max(long_v) filter (where key = 'temperature') as temperature,
max(str_v) filter (where key = 'operation') as operation,
max(long_v) filter (where key = 'period') as period
from trace
group by entity_id, ts;

Sorting Issue with Underscore in Postgres

I'm trying to sort the data below, but Postgres returns the wrong sort order.
Can someone please help me out here? How can I get properly sorted data?
Here is the query I wrote:
SELECT * FROM TempTable ORDER BY a_test ASC NULLS FIRST;
and it returns results like this:
| BB001217 |
| BB001217_000010 |
| BB001217_000011 |
| BB001217_00002 |
| BB001217_00003 |
| BB001218 |
| BB001219 |
| BB001220 |
| BB001220_000010 |
| BB001220_000011 |
| BB001220_00002 |
| BB001220_00003 |
| BB001220_00004 |
| BB001220_00005 |
| BB001220_00006 |
And I expected results in the following form:
| BB001217 |
| BB001217_00002 |
| BB001217_00003 |
| BB001217_000010 |
| BB001217_000011 |
| BB001218 |
| BB001219 |
| BB001220 |
| BB001220_00002 |
| BB001220_00003 |
| BB001220_00004 |
| BB001220_00005 |
| BB001220_00006 |
| BB001220_000010 |
| BB001220_000011 |
From PostgreSQL v10 on you could use an ICU collation that provides “natural sorting”:
CREATE COLLATION english_natural (
    LOCALE = 'en-US-u-kn-true',
    PROVIDER = icu
);

SELECT *
FROM TempTable
ORDER BY a_test COLLATE english_natural ASC NULLS FIRST;
You are storing numbers in a VARCHAR column, so the sorting is character-based, where '10' is considered smaller than '2'.
You need to split the column into two parts, convert the second part to a number, and sort on both:
SELECT *
FROM temptable
ORDER BY split_part(a_test, '_', 1),
         nullif(split_part(a_test, '_', 2), '')::int ASC NULLS FIRST;
Online example: https://rextester.com/RNU44666

T-SQL : Pivot table without aggregate

I am trying to understand how to pivot data within T-SQL but can't seem to get it working. I have the following table structure
+-------------------+-----------------------+
| Name | Value |
+-------------------+-----------------------+
| TaskId | 12417 |
| TaskUid | XX00044497 |
| TaskDefId | 23 |
| TaskStatusId | 4 |
| Notes | |
| TaskActivityIndex | 0 |
| ModifiedBy | Orange |
| Modified | /Date(1554540200000)/ |
| CreatedBy | Apple |
| Created | /Date(2121212100000)/ |
| TaskPriorityId | 40 |
| OId | 2 |
+-------------------+-----------------------+
I want to pivot the Name column values into columns. Expected output:
+--------+------------------------+-----------+--------------+-------+-------------------+------------+-----------------------+-----------+-----------------------+----------------+-----+
| TASKID | TASKUID | TASKDEFID | TASKSTATUSID | NOTES | TASKACTIVITYINDEX | MODIFIEDBY | MODIFIED | CREATEDBY | CREATED | TASKPRIORITYID | OID |
+--------+------------------------+-----------+--------------+-------+-------------------+------------+-----------------------+-----------+-----------------------+----------------+-----+
| | | | | | | | | | | | |
| 12417 | XX00044497 | 23 | 4 | | 0 | Orange | /Date(1554540200000)/ | Apple | /Date(2121212100000)/ | 40 | 2 |
+--------+------------------------+-----------+--------------+-------+-------------------+------------+-----------------------+-----------+-----------------------+----------------+-----+
Is there an easy way of doing it? The columns are fixed (not dynamic).
Any help is appreciated.
Try this:
select * from yourtable
pivot
(
    min(value)
    for Name in ([TaskId], [TaskUid], [TaskDefId], [TaskStatusId], [Notes],
                 [TaskActivityIndex], [ModifiedBy], [Modified], [CreatedBy],
                 [Created], [TaskPriorityId], [OId])
) as pivotable
You can also use CASE expressions; a sketch is at the end of this answer.
Note that you must use an aggregate function in a PIVOT.
If you want to learn more, here is the reference:
https://learn.microsoft.com/en-us/sql/t-sql/queries/from-using-pivot-and-unpivot?view=sql-server-2017
Output (I only tried three columns):
DB<>Fiddle
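A minimal sketch of the CASE-expression alternative mentioned above; yourtable is the placeholder table name from the answer, and the Name values come from the question:
select
    min(case when Name = 'TaskId'    then Value end) as TaskId,
    min(case when Name = 'TaskUid'   then Value end) as TaskUid,
    min(case when Name = 'TaskDefId' then Value end) as TaskDefId
    -- ...repeat the pattern for the remaining Name values
from yourtable;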

Add columns but keep a specific id

I have a table "Listing" that looks like this:
| listing_id | amenities |
|------------|--------------------------------------------------|
| 5629709 | {"Air conditioning",Heating, Essentials,Shampoo} |
| 4156372 | {"Wireless Internet",Kitchen,"Pets allowed"} |
And another table "Amenity" like this:
| amenity_id | amenities |
|------------|--------------------------------------------------|
| 1 | Air conditioning |
| 2 | Kitchen |
| 3 | Heating |
Is there a way to join the two tables in a new one "Listing_Amenity" like this:
| listing_id | amenities |
|------------|-----------|
| 5629709 | 1 |
| 5629709 | 3 |
| 4156372 | 2 |
You could use unnest:
CREATE TABLE Listing_Amenity AS
SELECT l.listing_id, a.amenity_id
FROM Listing l,
     unnest(l.amenities) AS sub(elem)
JOIN Amenity a
  ON a.amenities = sub.elem;
db<>fiddle demo

PostgreSQL - How to do a Loop on a column

I am struggling to write a loop in Postgres, and Postgres functions are not my strong suit.
I have the following table on postgres:
| portfolio_1 | total_risk |
|----------------|------------|
| Top 10 Bets | |
| AAPL34 | 2,06699 |
| DISB34 | 1,712684 |
| PETR4 | 0,753324 |
| PETR3 | 0,087767 |
| VALE3 | 0,086346 |
| LREN3 | 0,055108 |
| AMZO34 | 0,0 |
| Bottom 10 Bets | |
| AAPL34 | 0,0 |
What I'm trying to do is get the values after the "Top 10 Bets" row and before the "Bottom 10 Bets" row.
My goal is the following result:
| portfolio_1 | total_risk |
|-------------|------------|
| AAPL34 | 2,06699 |
| DISB34 | 1,712684 |
| PETR4 | 0,753324 |
| PETR3 | 0,087767 |
| VALE3 | 0,086346 |
| LREN3 | 0,055108 |
| AMZO34 | 0,0 |
So my goal is to remove the "Top 10 Bets" row, the "Bottom 10 Bets" row, and the repeated AAPL34 row that follows "Bottom 10 Bets".
The number of rows is variable (I'm importing it from an Excel file), so I need a loop to do this, right?
SQL tables and result sets represent unordered sets. There is no "before" or "after" unless rows explicitly provide that information.
Let me assume that you have such a column, which I will call id for convenience.
Then you can do this in several ways. Here is one:
select t.*
from t
where t.id > (select min(t2.id) from t t2 where t2.portfolio_1 = 'Top 10 Bets')
  and t.id < (select max(t2.id) from t t2 where t2.portfolio_1 = 'Bottom 10 Bets');