Optimizing a WHERE clause with a dateadd function - tsql

For the business I am working in, I would like to get information on our customers. The base information I have on these customers is as follows:
Activation_Date stored in a Loans table, datatype is datetime.
ActivityDate stored in a CustomerLoanDailyActivity_Information table (a daily loans table, for those interested: it is part of a datamart and stores, for each day that a customer has been active with our company, how much they have paid into their loan. So if a customer has an Activation_Date of 15-03-2017, there are ActivityDates in the CustomerLoanDailyActivity_Information table from 15-03-2017 up until now, and for each ActivityDate another column, Sum_Paid_To_Date, records how much has been paid up until that ActivityDate). Datatype of ActivityDate is date.
What I would like to know is how much each customer has paid 1, 2, 3, etc. months after their Activation_Date. So the query would look something like the following (slightly pseudo-code; the more important part is the WHERE clause).
SELECT
cldai.Sum_Paid_To_Date,
cldai.ActivityDate,
cldai.Customer_Account_Number
FROM
CustomerLoanDailyActivity_Information cldai
INNER JOIN
Loans l ON l.Customer_Account_Number = cldai.Customer_Account_Number
WHERE
(cldai.ActivityDate = CAST(l.Activation_Date AS date)
OR
cldai.ActivityDate = DATEADD(month, 1, CAST(l.Activation_Date AS date))
OR
cldai.ActivityDate = DATEADD(month, 2, CAST(l.Activation_Date AS date))
OR
cldai.ActivityDate = DATEADD(month, 3, CAST(l.Activation_Date AS date))
)
ORDER BY
l.Customer_Account_Number, cldai.ActivityDate ASC
So the problem is that this query is really slow (because of the WHERE clause and because the cldai table is big, ~6 GB) and exits before any data is retrieved. Here are a couple of problems I have heard about, and possible solutions, but none have worked so far.
The CAST function makes the query really slow because its result is compared with the ActivityDate column, which is indexed. I used CONVERT before, but that was also really slow. I feel like I need the convert/cast, though, because ActivityDate is of type date and Activation_Date is of type datetime, so the time part of Activation_Date could prevent matches with ActivityDate (e.g. the Activation_Date for a given customer is 15-03-2017 09:00:00, so it would never match ActivityDate 15-03-2017, because that might be converted to the datetime 15-03-2017 00:00:00, which will never be equal because of the time part); a quick check of this cast behaviour is sketched below.
I have to use "DateTime" evaluations, which has been suggested as a solution, but I have no clue how to apply this correctly.
I can't look at the execution plan because the DBA has blocked me from seeing that.
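For what it's worth, a quick check with hypothetical literal dates shows that casting the datetime side down to date (as the query above does) drops the time part, so the equality can still match:
-- Hypothetical dates for illustration: the datetime 2017-03-15 09:00:00, cast to date,
-- equals the plain date 2017-03-15, so the time part does not prevent a match.
SELECT CASE
           WHEN CAST(CAST('2017-03-15 09:00:00' AS datetime) AS date) = CAST('2017-03-15' AS date)
               THEN 'match'
           ELSE 'no match'
       END AS result;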
Any ideas on how to make this query perform more quickly? Any help would be greatly appreciated.

So a massive speedup was obtained by using a LEFT JOIN instead of an INNER JOIN and by not ordering the data on the server but on the client side. This reduced the query time from about an hour and 10 minutes to about 1 minute. It seems unbelievable but it's what happened.
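For reference, a minimal sketch of the reworked query as described (same tables and columns as above; the sort is then done on the client rather than in an ORDER BY):
SELECT
    cldai.Sum_Paid_To_Date,
    cldai.ActivityDate,
    cldai.Customer_Account_Number
FROM
    CustomerLoanDailyActivity_Information cldai
LEFT JOIN
    Loans l ON l.Customer_Account_Number = cldai.Customer_Account_Number
WHERE
    (cldai.ActivityDate = CAST(l.Activation_Date AS date)
     OR cldai.ActivityDate = DATEADD(month, 1, CAST(l.Activation_Date AS date))
     OR cldai.ActivityDate = DATEADD(month, 2, CAST(l.Activation_Date AS date))
     OR cldai.ActivityDate = DATEADD(month, 3, CAST(l.Activation_Date AS date)))
-- no ORDER BY: ordering is done client-side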
Regards,
Tim.

If you are guaranteed to have a record for each day, you could use the row_number() function to assign row numbers within each group of customer loan repayment records, and then retrieve rows 1, 31, 61 and 91 (see the sketch below)? This would avoid any date manipulation.
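A minimal sketch of that idea, assuming exactly one activity row per customer per day and 30-day "months" (table and column names follow the question; the 1/31/61/91 offsets are an assumption):
;WITH numbered AS (
    SELECT
        cldai.Customer_Account_Number,
        cldai.ActivityDate,
        cldai.Sum_Paid_To_Date,
        -- number each customer's activity rows from their first day of activity
        ROW_NUMBER() OVER (
            PARTITION BY cldai.Customer_Account_Number
            ORDER BY cldai.ActivityDate
        ) AS rn
    FROM CustomerLoanDailyActivity_Information cldai
)
SELECT Customer_Account_Number, ActivityDate, Sum_Paid_To_Date
FROM numbered
WHERE rn IN (1, 31, 61, 91);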

How about splitting this up into two steps? Step one: build a table with the four dates for each customer. Step two: join this to your main CustomerLoanDailyActivity_Information table on date and customer account number. The second step would then have a much simpler join, just an = between ActivityDate and the date entry in the table you have built (see the sketch below).
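A rough sketch of the two-step idea (the #ActivationDates temp table and the 0-3 month offsets are assumptions; a permanent table would work the same way):
-- Step one: one row per customer per target date (activation date plus 0, 1, 2, 3 months).
SELECT
    l.Customer_Account_Number,
    DATEADD(month, m.n, CAST(l.Activation_Date AS date)) AS TargetDate
INTO #ActivationDates
FROM Loans l
CROSS JOIN (VALUES (0), (1), (2), (3)) AS m(n);

-- Step two: a plain equality join against the big activity table.
SELECT
    cldai.Sum_Paid_To_Date,
    cldai.ActivityDate,
    cldai.Customer_Account_Number
FROM CustomerLoanDailyActivity_Information cldai
INNER JOIN #ActivationDates ad
    ON  ad.Customer_Account_Number = cldai.Customer_Account_Number
    AND ad.TargetDate = cldai.ActivityDate;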

Related

How to get data according to a user defined date in Tableau?

I am creating a Tableau dashboard and it should load data according to the date the user enters. This is the logic I am trying to implement.
select distinct XX.ACCOUNT_NUM, (A.TOTAL_BILLED_TOT-A.TOTAL_PAID_TOT)/1000 arrears, ((A.TOTAL_BILLED_TOT-A.TOTAL_PAID_TOT)/1000 - XX.SUM_BILL ) TILL_REF_JUL20
FROM
(
select /*+ parallel (bill ,40 ) */ distinct account_num, sum(INVOICE_NET_mny + INVOICE_TAX_mny)/1000 SUM_BILL
from bill
where account_num in (Select account_num from MyTable)
and trunc(bill_dtm) > ('07-Aug-2021') -- Ref month + 1 month; 07 is a constant, only month and year change
and cancellation_dtm is null
group by account_num
)xx , account a
where xx.ACCOUNT_NUM = A.account_num
Here is what I tried. First I created a parameter called ref_Month. Then I created a calculated field with this.
[Ref_Month]=MAKEDATE(DATETRUNC('year',(DATEADD('month', 1, [Ref_Month])), 07))
But I am getting an error. I am using a live connection. Is there any method to achieve this?
I don't understand what this statement is trying to do:
[Ref_Month]=MAKEDATE(DATETRUNC('year',(DATEADD('month', 1, [Ref_Month])), 07))
If Ref_Month is a date, which it has to be for DATEADD to be a valid operation, why not make it the 7th of whatever starting month and year you are starting with? Then DATEADD('month',1, [Ref_Month]) will be a month later, still on the 7th. So you don't need DATETRUNC or MAKEDATE at all.
That said, how, when, and where are you trying to COMPUTE a PARAMETER, let alone based on itself?

I need to add up lots of values between date ranges as quickly as possible using PostgreSQL, what's the best method?

Here's a simple example of what I'm trying to do:
CREATE TABLE daily_factors (
factor_date date,
factor_value numeric(3,1));
CREATE TABLE customer_date_ranges (
customer_id int,
date_from date,
date_to date);
INSERT INTO
daily_factors
SELECT
t.factor_date,
(random() * 10 + 30)::numeric(3,1)
FROM
generate_series(timestamp '20170101', timestamp '20210211', interval '1 day') AS t(factor_date);
WITH customer_id AS (
SELECT generate_series(1, 100000) AS customer_id),
date_from AS (
SELECT
customer_id,
(timestamp '20170101' + random() * (timestamp '20201231' - timestamp '20170101'))::date AS date_from
FROM
customer_id)
INSERT INTO
customer_date_ranges
SELECT
d.customer_id,
d.date_from,
(d.date_from::timestamp + random() * (timestamp '20210211' - d.date_from::timestamp))::date AS date_to
FROM
date_from d;
So I'm basically making two tables:
a list of daily factors, one for every day from 1st Jan 2017 until today's date;
a list of 100,000 "customers" all who have a date range between 1st Jan 2017 and today, some long, some short, basically random.
Then I want to add up the factors for each customer in their date range, and take the average value.
SELECT
cd.customer_id,
AVG(df.factor_value) AS average_value
FROM
customer_date_ranges cd
INNER JOIN daily_factors df ON df.factor_date BETWEEN cd.date_from AND cd.date_to
GROUP BY
cd.customer_id;
Having a non-equi join on a date range is never going to be pretty, but is there any way to speed this up?
The only index I could think of was this one:
CREATE INDEX performance_idx ON daily_factors (factor_date);
It makes a tiny difference to the execution time. When I run this locally I'm seeing around 32 seconds with no index, and around 28s with the index.
I can see that this is a massive bottleneck in the system I'm building, but I can't think of any way to make things faster. The ideas I did have were:
instead of using daily factors I could largely get away with monthly ones, but now I have the added complexity of "whole months and partial months" to work with. It doesn't seem like it's going to be worth it for the added complexity, e.g. "take 7 whole months for Feb to Aug 2020, then 10/31 of Jan 2020 and 15/30 of September 2020";
I could pre-calculate every average I will ever need, but with 1,503 factors (and that will increase with each new day), that's already 1,128,753 numbers to store (assuming we ignore zero date ranges and that my maths is right). Also, my real-world system has an extra level of complexity, a second identifier with 20 possible values, so this would mean having c.20 million numbers to pre-calculate. Also, the number of values to store keeps growing every day (roughly quadratically in the number of days);
I could take this work out of the database, and do it in code (in memory), as it seems like a relational database might not be the best solution here?
Any other suggestions?
The classic way to deal with this is to store running sums of factor_value, not (or in addition to) individual values. Then you just look up the running sum at the two end points (actually at the end, and one before the start), and take the difference. And of course divide by the count, to turn it into an average. I've never done this inside a database, but there is no reason it can't be done there.
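A minimal sketch of that idea in PostgreSQL, using the example tables above (daily_factors_cum is a hypothetical precomputed table; the one-day offset handles the inclusive range):
-- Precompute running sums and a day counter over the factor calendar.
CREATE TABLE daily_factors_cum AS
SELECT
    factor_date,
    SUM(factor_value) OVER (ORDER BY factor_date) AS running_sum,
    ROW_NUMBER()      OVER (ORDER BY factor_date) AS day_number
FROM daily_factors;

-- Average over [date_from, date_to] = (sum at end - sum before start) / number of days.
SELECT
    cd.customer_id,
    (t.running_sum - COALESCE(f.running_sum, 0))
        / (t.day_number - COALESCE(f.day_number, 0)) AS average_value
FROM customer_date_ranges cd
JOIN daily_factors_cum t ON t.factor_date = cd.date_to
LEFT JOIN daily_factors_cum f ON f.factor_date = cd.date_from - 1;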

Excluding date(s) from a report

Monday this week (26th August) was a public holiday in the UK. I need to exclude this date from some of my reports.
Simple enough, but I have one database where the date is broken into 15-minute segments, for example:
2019-08-26 08:30:00.000
2019-08-26 16:15:00.000
The only way I can work out to exclude all these dates from the report would be to use NOT IN, for example:
AND a.mydate NOT IN ('2019-08-26 08:30:00.000', '2019-08-26 16:15:00.000')
Which seems quite a ponderous way of doing it to me.
There's also:
and a.mydate NOT BETWEEN '2019-08-26 08:30:00.000' and '2019-08-26 19:30:00.000'
but any improvement on that would be useful to know about, as there will be more, perhaps many more, public holidays to exclude going forward.
I do not have a separate table with holiday dates in it, so I need to script as per the above.
Thank you
If you have discrete values, you could add them to a table, do a left join, and put a filter in the WHERE clause saying the exclusion table's Id should be null (meaning the join failed).
Something like:
SELECT
mydata
FROM
mytable mt LEFT JOIN Holidays h ON mt.mydate = h.date
WHERE
h.Id IS NULL
This is just an alternative way made possible because you have discrete values; I'm not sure if it's actually faster or not...
This worked better:
and cast(a.mydate as date) not in
( '2019-08-26',
'2019-12-25',
'2019-12-26',
'2020-01-01',
. . . )

Year over year monthly sales

I am using SQL Server 2008 R2. Here is the query I have that returns monthly sales totals by zip code, per store.
select
left(a.Zip, 5) as ZipCode,
s.Store,
datename(month,s.MovementDate) as TheMonth,
datepart(year,s.MovementDate) as TheYear,
datepart(mm,s.MovementDate) as MonthNum,
sum(s.Dollars) as Sales,
count(*) as [TxnCount],
count(distinct s.AccountNumber) as NumOfAccounts
from
dbo.DailySales s
inner join
dbo.Accounts a on a.AccountNumber = s.AccountNumber
where
s.SaleType = 3
and s.MovementDate > '1/1/2016'
and isnull(a.Zip, '') <> ''
group by
left(a.Zip, 5),
s.Store,
datename(month, s.MovementDate),
datepart(year, s.MovementDate),
datepart(mm, s.MovementDate)
Now I'd like to add columns that compare sales, TxnCount, and NumOfAccounts to the same month the previous year for each zip code and store. I also would like each zip code/store combo to have a record for every month in the range; so zeros if null.
I do have a calendar table that I tried to use to get all months, but I ran into problems because of my "where" statements.
I know that both of these issues (comparing to previous year and including all dates in a date range) have been asked and answered before, and I've gotten them to work before myself, but this particular one has me running in circles. Any help would be appreciated.
I hope this is clear enough.
Thanks,
Tim
Treat the query you have above as a data source. Run it as a CTE covering the period you want to report, plus that period minus 12 months (to get the historic data). Call it SalesPerMonth.
Then write a query that gets all the months you need from your calendar table as another CTE. These are the reporting months only, not the previous year. Call it MonthsToReport.
Get a list of every valid zip code / store combo, probably a SELECT DISTINCT from the SalesPerMonth CTE. This gives you only combos that have at least one sale in the period (or the historical period; you probably also want ones that sold last year but not this year). That is another CTE, StoreZip.
Finally, your main query cross joins the StoreZip results with MonthsToReport, which gives you the one row per StoreZip/month combo you are looking for. Left join twice to the SalesPerMonth data, once for the month and once for the data one year previous. Use ISNULL to change any null values (no data) to zero. A sketch is below.
Instead of CTEs, you could also do it as separate queries, storing the results in temp tables instead. This may work better for large amounts of data.
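A rough sketch of that layout (the Calendar table's CalendarDate column, the 2016 report start, and all names not in the original query are assumptions):
;WITH SalesPerMonth AS (
    -- the original query, started 12 months earlier to cover the prior-year period
    SELECT
        LEFT(a.Zip, 5) AS ZipCode,
        s.Store,
        DATEPART(year, s.MovementDate) AS TheYear,
        DATEPART(mm, s.MovementDate) AS MonthNum,
        SUM(s.Dollars) AS Sales,
        COUNT(*) AS TxnCount,
        COUNT(DISTINCT s.AccountNumber) AS NumOfAccounts
    FROM dbo.DailySales s
    INNER JOIN dbo.Accounts a ON a.AccountNumber = s.AccountNumber
    WHERE s.SaleType = 3
      AND s.MovementDate >= '20150101'          -- report start minus one year
      AND ISNULL(a.Zip, '') <> ''
    GROUP BY LEFT(a.Zip, 5), s.Store,
             DATEPART(year, s.MovementDate), DATEPART(mm, s.MovementDate)
),
MonthsToReport AS (
    SELECT DISTINCT
        DATEPART(year, c.CalendarDate) AS TheYear,
        DATEPART(mm, c.CalendarDate) AS MonthNum
    FROM dbo.Calendar c
    WHERE c.CalendarDate >= '20160101'
),
StoreZip AS (
    SELECT DISTINCT ZipCode, Store FROM SalesPerMonth
)
SELECT
    sz.ZipCode, sz.Store, m.TheYear, m.MonthNum,
    ISNULL(cur.Sales, 0)          AS Sales,
    ISNULL(prev.Sales, 0)         AS SalesLastYear,
    ISNULL(cur.TxnCount, 0)       AS TxnCount,
    ISNULL(prev.TxnCount, 0)      AS TxnCountLastYear,
    ISNULL(cur.NumOfAccounts, 0)  AS NumOfAccounts,
    ISNULL(prev.NumOfAccounts, 0) AS NumOfAccountsLastYear
FROM StoreZip sz
CROSS JOIN MonthsToReport m
LEFT JOIN SalesPerMonth cur
    ON  cur.ZipCode = sz.ZipCode AND cur.Store = sz.Store
    AND cur.TheYear = m.TheYear  AND cur.MonthNum = m.MonthNum
LEFT JOIN SalesPerMonth prev
    ON  prev.ZipCode = sz.ZipCode AND prev.Store = sz.Store
    AND prev.TheYear = m.TheYear - 1 AND prev.MonthNum = m.MonthNum;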

How to optimize a batch pivotization?

I have a datetime list (which, for some reason, I call the column date) containing over 1k datetimes.
adates:2017.10.20T00:02:35.650 2017.10.20T01:57:13.454 ...
For each of these dates I need to select the data from some table, then pivot by a column t, i.e. expiry, add the corresponding date datetime as a column to the pivoted table, and stitch the pivots for all the dates together. Note that I need to be able to identify which pivot corresponds to which date, and that's why I do it one by one:
fPivot:{[adate;accypair]
t1:select from volatilitysurface_smile where date=adate,ccypair=accypair;
mycols:`atm`s10c`s10p`s25c`s25p;
t2:`t xkey 0!exec mycols#(stype!mid) by t:t from t1;
t3:`t xkey select distinct t,tenor,xi,volofvol,delta_type,spread from t1;
result:ej[`t;t2;t3];
:result}
I then call this function for every datetime in adates as follows:
raze {[accypair;adate] `date xcols update date:adate from fPivot[adate;accypair] }[`EURCHF] #/: adates;
This takes about 90s. I wonder if there is a better way, e.g. doing one big pivot rather than running one pivot per date and then stitching it all together. The big issue I see is that I have no apparent way to include the date attribute as part of the pivot, and the date cannot be lost, otherwise I can't reconcile the results.
If you haven't been to the wiki page on pivoting then it may be a good start. There is a section on a general pivoting function that makes some claims to being somewhat efficient:
One user reports:
This is able to pivot a whole day of real quote data, about 25 million quotes over about 4000 syms and an average of 5 levels per sym, in a little over four minutes.
As for general comments, I would say that the ej is unnecessary as it is a more general version of ij, allowing you to specify the key column. As both t2 and t3 have the same keying I would instead use:
t2 ij t3
Which may give you a very minor performance boost.
OK, I solved the issue by creating a batch version of the pivot that keeps the date (datetime) field in the group-by needed to pivot, i.e. changing by t:t from ... to by date:date,t:t from .... It went from 90s down to 150 milliseconds.
fBatchPivot:{[adates;accypair]
t1:select from volatilitysurface_smile where date in adates,ccypair=accypair;
mycols:`atm`s10c`s10p`s25c`s25p;
t2:`date`t xkey 0!exec mycols#(stype!mid) by date:date,t:t from t1;
t3:`date`t xkey select distinct date,t,tenor,xi,volofvol,delta_type,spread from t1;
result:0!(`date`t xasc t2 ij t3);
:result}