Complicated query I need to optimize - oracle-sqldeveloper

The query below takes 116 seconds to show its output.
I want to know if there is a better way to write it:
SELECT dp.party_number,
       dp.status,
       dp.party_name,
       dp.attribute14,
       hps.party_site_number,
       (SELECT hs.party_name
        FROM ar.hz_parties hs
        WHERE hs.party_id = hps.party_id) AS rep_name,
       hps.status
FROM ar.hz_parties dp,
     ar.hz_relationships rl,
     ar.hz_party_sites hps
WHERE dp.party_type = 'ORGANIZATION'
AND   dp.status = 'A'
AND   dp.hq_branch_ind = 'DP'
AND   dp.party_id = rl.subject_id(+)
AND   rl.relationship_code(+) = 'GTM_DP_OF_REP_PARTY_SITE'
AND   rl.status(+) = 'A'
AND   rl.directional_flag(+) = 'F'
AND   TRUNC(SYSDATE) BETWEEN TRUNC(rl.start_date(+)) AND TRUNC(rl.end_date(+))
AND   rl.object_id = hps.party_site_id(+);
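One rewrite worth trying (a sketch only, not verified against your data): express the (+) outer joins and the scalar subquery as ANSI LEFT JOINs, so the optimizer sees a single join tree, and check whether the TRUNC() calls on rl.start_date/rl.end_date are preventing index use:
-- Sketch of the same query using ANSI joins; verify it returns identical rows before relying on it.
SELECT dp.party_number,
       dp.status,
       dp.party_name,
       dp.attribute14,
       hps.party_site_number,
       hs.party_name AS rep_name,
       hps.status
FROM ar.hz_parties dp
LEFT JOIN ar.hz_relationships rl
       ON  rl.subject_id = dp.party_id
       AND rl.relationship_code = 'GTM_DP_OF_REP_PARTY_SITE'
       AND rl.status = 'A'
       AND rl.directional_flag = 'F'
       AND TRUNC(SYSDATE) BETWEEN TRUNC(rl.start_date) AND TRUNC(rl.end_date)
LEFT JOIN ar.hz_party_sites hps
       ON  hps.party_site_id = rl.object_id
LEFT JOIN ar.hz_parties hs
       ON  hs.party_id = hps.party_id
WHERE dp.party_type = 'ORGANIZATION'
AND   dp.status = 'A'
AND   dp.hq_branch_ind = 'DP';
With this shape, gathering statistics on the joined tables and checking the execution plan for full scans on ar.hz_relationships is the usual next step.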

Related

Query works, but not in function

I've been trying to get this to work in a function; it works from the command line only:
SELECT d.cta_id, d.cta_name, d.cta_marketing_text, d.cta_text,
c.category_id, c.category_name,
d.cta_link, d.adid, d.product_type, d.global_cta, d.active_flag,
TO_CHAR(d.date_created,'MM/DD/YYYY') AS date_created,
TO_CHAR(d.date_modified,'MM/DD/YYYY') AS date_modified,
TO_CHAR(p.placement_date,'MM/DD/YYYY') AS placement_date
FROM cta_article_data d
JOIN (SELECT p.cta_id, p.placement_date
FROM cta_placement_dates p
WHERE p.placement_date = CURRENT_DATE) AS p
ON d.cta_id = p.cta_id
JOIN (SELECT c.category_id, c.category_name
FROM categories c) AS c
ON c.category_id = ANY(STRING_TO_ARRAY(d.category_id,',')::int[]);
Calling the function:
SELECT retrieve_cta_data();
fails with this error (truncated):
ON d.cta_id = p.cta_id'" returned 3 columns
CONTEXT: PL/pgSQL function retrieve_current_cta() line 10 at assignment
I'm really stuck here; can anyone spot anything?
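One common cause of a "returned 3 columns" error at an assignment is a multi-column SELECT being assigned to a single variable inside the function. The function body isn't shown in the post, so this is only a hedged sketch of the usual fix: declare the function RETURNS TABLE and stream the rows with RETURN QUERY instead of assigning them (the function name and column list below are assumptions):
-- Hypothetical sketch; the real function body, name, and column types are not shown in the post.
CREATE OR REPLACE FUNCTION retrieve_current_cta()
RETURNS TABLE (cta_id integer, cta_name text, placement_date text)  -- extend with the remaining columns
LANGUAGE plpgsql
AS $$
BEGIN
    -- RETURN QUERY returns the whole result set; no single-variable assignment is involved.
    RETURN QUERY
    SELECT d.cta_id,
           d.cta_name,
           TO_CHAR(p.placement_date, 'MM/DD/YYYY')
    FROM cta_article_data d
    JOIN cta_placement_dates p
      ON p.cta_id = d.cta_id
     AND p.placement_date = CURRENT_DATE;
END;
$$;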

Optimizing Postgres query with timestamp filter

I have a query:
SELECT DISTINCT ON (analytics_staging_v2s.event_type, sent_email_v2s.recipient, sent_email_v2s.sent) sent_email_v2s.id, sent_email_v2s.user_id, analytics_staging_v2s.event_type, sent_email_v2s.campaign_id, sent_email_v2s.recipient, sent_email_v2s.sent, sent_email_v2s.stage, sent_email_v2s.sequence_id, people.role, people.company, people.first_name, people.last_name, sequences.name as sequence_name
FROM "sent_email_v2s"
LEFT JOIN analytics_staging_v2s ON sent_email_v2s.id = analytics_staging_v2s.sent_email_v2_id
JOIN people ON sent_email_v2s.person_id = people.id
JOIN sequences on sent_email_v2s.sequence_id = sequences.id
JOIN users ON sent_email_v2s.user_id = users.id
WHERE "sent_email_v2s"."status" = 1
AND "people"."person_type" = 0
AND (sent_email_v2s.sequence_id = 1888) AND (sent_email_v2s.sent >= '2016-03-18')
AND "users"."team_id" = 1
When I run EXPLAIN ANALYZE on it, I get:
Then, if I change the query as follows (just removing the sent_email_v2s.sent >= '2016-03-18' filter):
SELECT DISTINCT ON (analytics_staging_v2s.event_type, sent_email_v2s.recipient, sent_email_v2s.sent) sent_email_v2s.id, sent_email_v2s.user_id, analytics_staging_v2s.event_type, sent_email_v2s.campaign_id, sent_email_v2s.recipient, sent_email_v2s.sent, sent_email_v2s.stage, sent_email_v2s.sequence_id, people.role, people.company, people.first_name, people.last_name, sequences.name as sequence_name
FROM "sent_email_v2s"
LEFT JOIN analytics_staging_v2s ON sent_email_v2s.id = analytics_staging_v2s.sent_email_v2_id
JOIN people ON sent_email_v2s.person_id = people.id
JOIN sequences on sent_email_v2s.sequence_id = sequences.id
JOIN users ON sent_email_v2s.user_id = users.id
WHERE "sent_email_v2s"."status" = 1
AND "people"."person_type" = 0
AND (sent_email_v2s.sequence_id = 1888) AND "users"."team_id" = 1
When I run EXPLAIN ANALYZE on this query, the results are:
EDIT:
The results above from today are about as I expected. When I ran this last night, however, the difference created by including the timestamp filter was about 100x slower (0.5s -> 59s). The EXPLAIN ANALYZE from last night showed all of the time increase to be attributed to the first unique/sort operation in the query plan above.
Could there be some kind of caching issue here? I am worried now that there might be something else going on (transiently) that might make this query take 100x longer since it happened at least once.
Any thoughts are appreciated!
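One thing worth checking (a sketch only; the index name below is made up and it is not known which indexes already exist): a composite index covering both the equality filter on sequence_id and the range filter on sent lets the planner resolve the WHERE clause from one index, and a fresh ANALYZE makes the row estimate for the date range more reliable.
-- Hypothetical index; confirm nothing equivalent exists before creating it.
CREATE INDEX idx_sent_email_v2s_sequence_sent
    ON sent_email_v2s (sequence_id, sent);

-- Refresh planner statistics so the new index and the current data distribution are considered.
ANALYZE sent_email_v2s;
If the slow plan reappears only occasionally, comparing the two EXPLAIN (ANALYZE, BUFFERS) outputs would also show whether the difference is cache-related.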

Forming JPQL query from native SQL query for multiple views with CASE condition and join statements

Kindly help in converting this native query to a JPQL query for multiple views with a CASE condition and join statements. Tables c1 and c3 are views. I am trying to get the current and pending information from c1.
Please find the query below:
SELECT c3.eqip_id AS EQIP_ID,
CASE WHEN c1.inst_ts IS NULL OR c1.sent_ts > c1.inst_ts THEN c1.ver_nm END AS PEND,
CASE WHEN c1.sent_ts IS NULL OR c1.sent_ts > c1.inst_ts THEN c1.ver_nm END AS CURRENT,
c1.trm_ver_hist_id AS TRM_VER_HIST_ID
FROM trm_ver_hist_vw c1
JOIN (SELECT MAX(trm_ver_hist_id) AS TRM_VER_HIST_ID, dvc_id, status
      FROM trm_ver_hist_vw
      WHERE ver_typ_id = 1 AND status IN ('C', 'P')
      GROUP BY dvc_id, status) c2
ON c1.trm_ver_hist_id = c2.trm_ver_hist_id
JOIN trm_dtl_vw c3 ON c1.dvc_id = c3.trm_id
WHERE c3.co_actv_ind = 'Y' AND c3.mach_hdwr_asscn_ind = 'Y' AND pin = 'ABC'
Can anyone please help me convert this query to JPA/JPQL?
It seems very complex and data specific. I would recommend using a native SQL query.
The JPA spec does not allow selects in the FROM clause, although EclipseLink does have some support for this.
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Querying/JPQL#Sub-selects_in_FROM_clause

zend_db obtain "select ..., (select ...) as x from ... "

I have a query that I can't build with Zend_Db_Select:
SELECT `f`.*,
(SELECT Sum(x) AS `y`
FROM z AS pf
WHERE pf.q_id = f.id) AS w
FROM f ...
WHERE ...
GROUP BY `f`.`id`
So at the moment I'm running it manually with $db->fetchAll($sql).
How do I obtain
select f.* , (select ...) as `something` from ...
I was thinking of using ->columns('f.*, (select...)'), but it didn't work.
It could maybe work with a LEFT JOIN if I do (select ..., id) and then join on that id, but I wanted to obtain THIS very SQL query. Is it possible?
thanks
I would recommend the JOIN. You might get better performance, as subselects are usually hard for the database to optimize. It is also easy to write this with Zend_Db_Select. Alternatively, a new Zend_Db_Expr might work for this:
$select = $db->select()
    ->from('f', array(
        'f.foo',
        'f.bar',
        // correlated subquery selected under the column alias "w"
        'w' => new Zend_Db_Expr('(SELECT SUM(x) FROM z AS pf WHERE pf.q_id = f.id)'),
    ))
    ->where(...);
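For the JOIN form recommended above, the underlying SQL might look like this (a sketch; the real columns of z and the remaining WHERE conditions are unknown):
-- Aggregate z once per q_id, then join the totals back on f.id.
SELECT f.*,
       pf.y AS w
FROM   f
LEFT JOIN (SELECT q_id, SUM(x) AS y
           FROM   z
           GROUP  BY q_id) AS pf
       ON pf.q_id = f.id
-- WHERE ...
GROUP BY f.id;
This avoids running the correlated subselect once per row of f, which is usually where the performance difference comes from.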

How to determine the size of a Full-Text Index on SQL Server 2008 R2?

I have a SQL Server 2008 R2 database in which some tables have a Full-Text Index defined. I'd like to know how to determine the size of the index of a specific table, in order to control and predict its growth.
Is there a way of doing this?
The catalog view sys.fulltext_index_fragments keeps track of the size of each fragment, regardless of catalog, so you can take the SUM this way. This assumes the limitation of one full-text index per table is going to remain the case. The following query will get you the size of each full-text index in the database, again regardless of catalog, but you could use the WHERE clause if you only care about a specific table.
SELECT
[table] = OBJECT_SCHEMA_NAME(table_id) + '.' + OBJECT_NAME(table_id),
size_in_KB = CONVERT(DECIMAL(12,2), SUM(data_size/1024.0))
FROM sys.fulltext_index_fragments
-- WHERE table_id = OBJECT_ID('dbo.specific_table_name')
GROUP BY table_id;
Also note that if the count of fragments is high you might consider a reorganize.
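For reference, the reorganize itself is a single statement (the catalog name here is only a placeholder):
-- Merges the full-text index fragments of the named catalog.
ALTER FULLTEXT CATALOG ft_catalog_name REORGANIZE;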
If you are after a specific catalogue, use SSMS:
- Click on [Database] and expand the objects
- Click on [Storage]
- Right-click on {Specific Catalogue}
- Choose Properties and click.
In the General tab you will find the Catalogue Size = 'nn'.
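If you prefer a query over the SSMS dialog, FULLTEXTCATALOGPROPERTY can report the same figure; a sketch (it returns the catalog's logical size in megabytes):
SELECT name AS catalog_name,
       FULLTEXTCATALOGPROPERTY(name, 'IndexSize') AS size_in_MB
FROM sys.fulltext_catalogs;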
I use something similar to this (which will also calculate the size of XML indexes, ..., if present):
SELECT S.name,
SO.name,
SIT.internal_type_desc,
rows = CASE WHEN GROUPING(SIT.internal_type_desc) = 0 THEN SUM(SP.rows)
END,
TotalSpaceGB = SUM(SAU.total_pages) * 8 / 1048576.0,
UsedSpaceGB = SUM(SAU.used_pages) * 8 / 1048576.0,
UnusedSpaceGB = SUM(SAU.total_pages - SAU.used_pages) * 8 / 1048576.0,
TotalSpaceKB = SUM(SAU.total_pages) * 8,
UsedSpaceKB = SUM(SAU.used_pages) * 8,
UnusedSpaceKB = SUM(SAU.total_pages - SAU.used_pages) * 8
FROM sys.objects SO
INNER JOIN sys.schemas S ON S.schema_id = SO.schema_id
INNER JOIN sys.internal_tables SIT ON SIT.parent_object_id = SO.object_id
INNER JOIN sys.partitions SP ON SP.object_id = SIT.object_id
INNER JOIN sys.allocation_units SAU ON (SAU.type IN (1, 3)
AND SAU.container_id = SP.hobt_id)
OR (SAU.type = 2
AND SAU.container_id = SP.partition_id)
WHERE S.name = 'schema'
--AND SO.name IN ('TableName')
GROUP BY GROUPING SETS(
(S.name,
SO.name,
SIT.internal_type_desc),
(S.name, SO.name), (S.name), ())
ORDER BY S.name,
SO.name,
SIT.internal_type_desc;
This will generally give numbers higher than sys.fulltext_index_fragments, but when combined with the sys.partitions of the table, it will add up to the numbers returned by EXEC sys.sp_spaceused @objname = N'schema.TableName';.
Tested with SQL Server 2016, but documentation says it should be present since 2008.