I need some help splitting a daily balance based on min and max ranges.
I have a balance table with an account whose balance on one particular day is 19447.83.
I have a range table where, for every product, the balance is split as follows:
Range Table
Product_Code Product_Description Min_Range Max_Range Interest_Rate
2000-0100 Saving 0 4999.99 0.01
2000-0100 Saving 5000 9999.99 0.02
2000-0100 Saving 10000 49999.99 0.03
2000-1111 Senior Savings 0 4999.99 0.03
2000-1111 Senior Savings 5000 9999.99 0.04
2000-1111 Senior Savings 10000 49999.99 0.05
Balance Table
Date Balance Product_Code Product_Description AccountNo
28/02/2019 19447.83 2000-0100 Saving 3059123
27/02/2019 19557.61 2000-0100 Saving 3059123
26/02/2019 19976.01 2000-0100 Saving 3059123
25/02/2019 20530.91 2000-0100 Saving 3059123
28/02/2019 12345 2000-1111 Senior Savings 4059123
27/02/2019 5456 2000-1111 Senior Savings 4059123
26/02/2019 9999 2000-1111 Senior Savings 4059123
25/02/2019 7893 2000-1111 Senior Savings 4059123
The balance of 19447.83 on 28/02/2019 should be split into:
0 to 4999.99 0.01
5000 to 9999.99 0.02
10000 to 19447.83 0.03
This is fairly basic joining and arithmetic:
declare @r table (Product_Code varchar(20), Product_Description varchar(20), Min_Range decimal(10,2), Max_Range decimal(10,2), Interest_Rate decimal(10,2));
insert into @r values
    ('2000-0100', 'Saving',         0,     4999.99,  0.01),
    ('2000-0100', 'Saving',         5000,  9999.99,  0.02),
    ('2000-0100', 'Saving',         10000, 49999.99, 0.03),
    ('2000-1111', 'Senior Savings', 0,     4999.99,  0.03),
    ('2000-1111', 'Senior Savings', 5000,  9999.99,  0.04),
    ('2000-1111', 'Senior Savings', 10000, 49999.99, 0.05);

declare @b table (BalanceDate date, Balance decimal(10,2), Product_Code varchar(20), Product_Description varchar(20), AccountNo int);
insert into @b values
    ('20190228', 19447.83, '2000-0100', 'Saving',         3059123),
    ('20190227', 19557.61, '2000-0100', 'Saving',         3059123),
    ('20190226', 19976.01, '2000-0100', 'Saving',         3059123),
    ('20190225', 20530.91, '2000-0100', 'Saving',         3059123),
    ('20190228', 12345,    '2000-1111', 'Senior Savings', 4059123),
    ('20190227', 5456,     '2000-1111', 'Senior Savings', 4059123),
    ('20190226', 9999,     '2000-1111', 'Senior Savings', 4059123),
    ('20190225', 7893,     '2000-1111', 'Senior Savings', 4059123);
select b.AccountNo
,b.Product_Code
,b.Product_Description
,b.BalanceDate
,b.Balance
,r.Min_Range
,r.Max_Range
,r.Interest_Rate
,case when b.Balance > r.Max_Range
      then r.Max_Range - r.Min_Range   -- balance fills this tier completely
      else b.Balance - r.Min_Range     -- balance ends inside this tier
 end as Split_Balance
,r.Interest_Rate * case when b.Balance > r.Max_Range
      then r.Max_Range - r.Min_Range
      else b.Balance - r.Min_Range
 end as Split_Balance_Interest
from @b as b
join @r as r
on b.Product_Code = r.Product_Code
and b.Balance > r.Min_Range
order by b.AccountNo
,b.BalanceDate;
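For the 28/02/2019 balance of 19447.83, the query returns three rows for account 3059123 (the amounts are straightforward arithmetic from the sample data; interest values are shown before any rounding):

Min_Range  Max_Range  Interest_Rate  Split_Balance  Split_Balance_Interest
0          4999.99    0.01           4999.99        49.9999
5000       9999.99    0.02           4999.99        99.9998
10000      49999.99   0.03           9447.83        283.4349

Note that the splits sum to 19447.81 rather than 19447.83: each full tier contributes Max_Range - Min_Range (4999.99, not 5000), so the 0.01 gap between tiers is dropped. If the splits must sum exactly to the balance, cap each tier at the next tier's Min_Range instead of its own Max_Range.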
I'm only three months into SQL, so excuse the level of ignorance.
I've got stuck on something and I'm not sure the process I'm following is correct.
I have values for price and quantity. I'm trying to average the price, or create smaller ranges and then add up the quantities within each range.
Price Quantity
20289.7 0.001
21320 1.798
20259.4 1.724
20365 2.1
21066.6 0.055
20517.8 0.002
20836.9 0.037
Let's say, for every $50 range on the left, how many orders are placed in that range?
I've seen something like the following:
SELECT *
FROM (
SELECT CASE
WHERE price BETWEEN 15000 AND 18000 then '10 -18'
WHERE price BETWEEN 18000 AND 19000 then '18 - 19'
WHERE price BETWEEN 19000 AND 20000 then '19 - 20'
but I'm not sure if this is the correct path.
This is the correct path, but the syntax is incorrect:
SELECT
    CASE
        WHEN price < 15000 THEN '<15'
        WHEN price <= 18000 THEN '15 - 18'  -- branches are evaluated in order,
        WHEN price <= 19000 THEN '18 - 19'  -- so each one starts where the
        WHEN price <= 20000 THEN '19 - 20'  -- previous one left off
        ELSE '>20'
    END AS price_range,
    sum(quantity) AS sum_quantity
FROM
    orders
GROUP BY
    CASE
        WHEN price < 15000 THEN '<15'
        WHEN price <= 18000 THEN '15 - 18'
        WHEN price <= 19000 THEN '18 - 19'
        WHEN price <= 20000 THEN '19 - 20'
        ELSE '>20'
    END;
Always include an ELSE branch as a catch-all, in case a new price gets inserted that falls outside the defined ranges.
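Since the question actually asks for $50-wide buckets, hard-coding one CASE branch per bucket doesn't scale. A minimal sketch of a computed bucket instead, assuming the same orders table with price and quantity columns:

SELECT
    FLOOR(price / 50) * 50      AS range_start,  -- inclusive lower edge of the $50 bucket
    FLOOR(price / 50) * 50 + 50 AS range_end,    -- exclusive upper edge
    COUNT(*)                    AS order_count,
    SUM(quantity)               AS sum_quantity
FROM orders
GROUP BY FLOOR(price / 50) * 50
ORDER BY range_start;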
I have a big table with many rows. A data example is the following:
Currency  Value  Value_in_NOK
USD       100    800
USD       200    1600
SEK       120    108
USD       400    3200
SEK       240    216
USD       300    2400
EUR       15     150
EUR       30     300
The converted value is always in NOK.
What I want is to use a SELECT statement to create a distinct list of currencies, including NOK, with the currency rate calculated from the first row for each distinct currency:
Currency  Currency_Rate
USD       8.000
SEK       0.900
EUR       10.000
NOK       1.000
Assuming there is some column in your table that defines the order of rows, for example a timestamp (ts):
select Currency,
       array_agg(round(Value_in_NOK / Value, 3) order by ts limit 1)[offset(0)] as Currency_Rate
from your_table
group by Currency
union all
select 'NOK', 1.000
If applied to the sample data in your question, the output matches the expected result above.
This is the code I ended up with, and it works perfectly:
SELECT
  Opportunity_First_year_value_Currency,
  ARRAY_AGG(
    ROUND(SAFE_CAST(Opportunity_First_year_value_converted AS NUMERIC)
          / SAFE_CAST(Opportunity_First_year_value AS NUMERIC), 5)
    ORDER BY Opportunity_Close_Date DESC
    LIMIT 1
  )[OFFSET(0)] AS Currency_Rate
FROM
  `JOINED_Opportunity`
WHERE
  SAFE_CAST(Opportunity_First_year_value_converted AS NUMERIC) > 0
GROUP BY
  Opportunity_First_year_value_Currency
UNION ALL
SELECT
  'NOK',
  1.00000
I have data that looks like this:

MILEAGE  January  February  ...  December
0        0        0
2        0.066    0.052
3        0.081    0
5        0        0.062
6        0.080    0
...
813      0        0         and so on
I want the data to look like this:

Mileage  January  February  ...  December
0        (total for mileage less than or equal to zero, for each month)
2000     (total for mileage up to 2000, for each month)
4000     (total for mileage up to 4000, for each month)
6000     (total for mileage up to 6000, for each month)
8000
10000
12000
14000
... (continuing in 2000 increments, up to)
50000
Thank you very much for your help. I am using SQL Server 2008 R2 and I'm not sure how to achieve this.
You can use a recursive CTE to build the lookup table on the fly, as I show below. HOWEVER, you would only want to do this if it is an ad-hoc, once-in-a-while task. If you are going to do it often (say, every month), just create the table as you would any other table; then you can add an index and join to it.
The more common thing to do is to have a counting table with 0, 1, 2, 3, etc. up to some large number. Then you could get your result with SELECT val*2000 FROM counting_table WHERE val*2000 <= 50000. A counting table like this is general purpose, so it can be reused for many similar cases (see the sketch after the query below).
WITH mile_table as
(
    -- use a recursive CTE to generate the numbers 0 to 50000 in steps of 2000
    SELECT 0 as mileage
    UNION ALL
    SELECT mile_table.mileage + 2000
    FROM mile_table
    WHERE mile_table.mileage + 2000 <= 50000
)
SELECT mile_table.mileage,
       sum(a.January) as January,
       sum(a.February) as February,
       -- ....
       sum(a.December) as December
FROM mile_table
JOIN your_table a ON a.mileage <= mile_table.mileage  -- running total up to each threshold
GROUP BY mile_table.mileage
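The counting-table alternative mentioned above might look like this (a sketch; the table name and row source are illustrative):

-- One-time setup: a reusable counting table holding 0, 1, 2, ... 1000.
CREATE TABLE counting_table (val INT PRIMARY KEY);

INSERT INTO counting_table (val)
SELECT TOP (1001) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM sys.all_objects;  -- any table with enough rows works as a row source

-- The mileage thresholds then come straight off the counting table:
SELECT val * 2000 AS mileage
FROM counting_table
WHERE val * 2000 <= 50000;

Once the counting table exists, you can index it and join to it exactly as you would with a permanent mileage table.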
Try out something like this:
DECLARE @i as INT = -2000
CREATE TABLE #T
(
    MileageFrom INT,
    MileageTo INT
)
WHILE (@i <= 50000)
BEGIN
    INSERT INTO #T
    VALUES (@i, @i + 2000)
    SET @i = @i + 2000
END

SELECT
    B.MileageTo,
    SUM(January) as January,
    SUM(February) as February,
    -- ... the remaining months ...
    SUM([December]) as [December]
FROM YourTable A
JOIN #T B ON A.Mileage > B.MileageFrom AND A.Mileage <= B.MileageTo  -- half-open buckets, so boundary values are not counted twice
GROUP BY B.MileageTo
You just need to tweak the starting MileageFrom value a bit if you also need to include values less than -2000.
Hi, I want to show the result set in ascending order. I have created an SQL Fiddle for the same.
select amount_range as amount_range, count(*) as number_of_items,
sum(amount) as total_amount
from (
select *,case
when amount between 0.00 and 2500.00 then '<=$2,500.00'
when amount between 2500.01 and 5000.00 then '$2,500.01 - $5,000.00'
when amount between 5000.01 and 7500.00 then '$5,000.01 - $7,500.00'
when amount between 7500.01 and 10000.00 then '$7,500.01 - $10,000.00'
else '>$10,000.01' end as amount_range
from Sales ) a
group by amount_range order by amount_range;
My Results should be like
<=$2,500.00 4 5000
$2,500.01 - $5,000.00 3 12000
$5,000.01 - $7,500.00 2 13000
$7,500.01 - $10,000.00 1 10000
>$10,000.01 1 15000
The easiest method is to sort by a value from each grouping, for example the minimum amount:
select amount_range as amount_range,
count(*) as number_of_items,
sum(amount) as total_amount
from (
select *,case
when amount between 0.00 and 2500.00 then '<=$2,500.00'
when amount between 2500.01 and 5000.00 then '$2,500.01 - $5,000.00'
when amount between 5000.01 and 7500.00 then '$5,000.01 - $7,500.00'
when amount between 7500.01 and 10000.00 then '$7,500.01 - $10,000.00'
else '>$10,000.01' end as amount_range
from Sales ) a
group by amount_range
order by min(amount);
In Postgres, your subquery could also return an array where the first element is the desired position and the second is the string describing the bucket. Then, the outer query can ORDER BY your positioning value.
select amount_range[2] as amount_range,
count(*) as number_of_items,
sum(amount) as total_amount
from (
select *,case
when amount between 0.00 and 2500.00 then ARRAY['1','<=$2,500.00']
when amount between 2500.01 and 5000.00 then ARRAY['2','$2,500.01 - $5,000.00']
when amount between 5000.01 and 7500.00 then ARRAY['3', '$5,000.01 - $7,500.00']
when amount between 7500.01 and 10000.00 then ARRAY['4', '$7,500.01 - $10,000.00']
else ARRAY['5','>$10,000.01'] end as amount_range
from Sales ) a
group by amount_range
order by amount_range[1];
The first method happens to be simpler for your example. The second method would be useful if you were bucketing by something more complicated than ranges.
I have a table like this:
+------------+------------------+
|temperature |Date_time_of_data |
+------------+------------------+
| 4.5 |9/15/2007 12:12:12|
| 4.56 |9/15/2007 12:14:16|
| 4.44 |9/15/2007 12:16:02|
| 4.62 |9/15/2007 12:18:23|
| 4.89 |9/15/2007 12:21:01|
+------------+------------------+
The data-set contains more than 1000 records and I want to check for the minimum variability.
For every 30 minutes if the variance of temperature doesn't exceed 0.2, I want all the temperature values of that half an hour replaced by NULL.
Here is a SELECT to get the start of a period for every record:
SELECT temperature,
Date_time_of_data,
date_trunc('hour', Date_time_of_data)+
CASE WHEN date_part('minute', Date_time_of_data) >= 30
THEN interval '30 minutes'
ELSE interval '0 minutes'
END as start_of_period
FROM your_table
It truncates the timestamp to the hour (9/15/2007 12:12:12 becomes 9/15/2007 12:00:00)
and then adds 30 minutes if the original minute part was 30 or more.
Next, use start_of_period to group the results and get the min and max temperature for every group:
SELECT temperature,
       Date_time_of_data,
       max(temperature) OVER (PARTITION BY start_of_period) as max_temp,
       min(temperature) OVER (PARTITION BY start_of_period) as min_temp
FROM (previous_select_here)
Next, filter out the records where the spread (max minus min) is more than 0.2:
SELECT temperature,
       Date_time_of_data
FROM (previous_select_here)
WHERE (max_temp - min_temp) <= 0.2
And finally update your table
UPDATE your_table
SET temperature = NULL
WHERE Date_time_of_data IN (previous_select_here)
You may need to correct some mistakes in these queries before they work; I haven't tested them.
And you can simplify them if you need to.
P.S. If instead you need to filter out the data with a spread of less than 0.2, you can simply create a VIEW from the third SELECT with
WHERE (max_temp - min_temp) > 0.2
and use the VIEW instead of the table.
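Putting the steps together, an end-to-end version might look like this (a sketch under the same assumptions: the table is called your_table, and "variance" here means the max-min spread within each half hour):

UPDATE your_table
SET temperature = NULL
WHERE Date_time_of_data IN (
    SELECT Date_time_of_data
    FROM (
        SELECT Date_time_of_data,
               max(temperature) OVER (PARTITION BY start_of_period) AS max_temp,
               min(temperature) OVER (PARTITION BY start_of_period) AS min_temp
        FROM (
            SELECT temperature,
                   Date_time_of_data,
                   date_trunc('hour', Date_time_of_data)
                     + CASE WHEN date_part('minute', Date_time_of_data) >= 30
                            THEN interval '30 minutes'
                            ELSE interval '0 minutes'
                       END AS start_of_period
            FROM your_table
        ) AS periods
    ) AS spreads
    WHERE max_temp - min_temp <= 0.2
);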
This query should do the job:
with intervals as (
    select
        date_trunc('hour', Date_time_of_data)
          + interval '30 min' * floor(date_part('minute', Date_time_of_data) / 30) as valid_interval
    from T
    group by 1
    having var_samp(temperature) > 0.2
)
select * from T
where
    date_trunc('hour', Date_time_of_data)
      + interval '30 min' * floor(date_part('minute', Date_time_of_data) / 30)
      in (select valid_interval from intervals)
The inner query (labeled as intervals) returns the start times of the half-hour intervals whose sample variance exceeds 0.2 (having var_samp(temperature) > 0.2). The date_trunc ... expression buckets Date_time_of_data into half-hour intervals.
The query returns nothing on the provided dataset.
create table T (temperature float8, Date_time_of_data timestamp without time zone);
insert into T values
(4.5, '2007-9-15 12:12:12'),
(4.56, '2007-9-15 12:14:16'),
(4.44, '2007-9-15 12:16:02'),
(4.62, '2007-9-15 12:18:23'),
(4.89, '2007-9-15 12:21:01')
;
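To perform the UPDATE the question actually asks for, the same interval test can be inverted — a sketch, assuming "minimum variability" means a sample variance of at most 0.2 within the half hour (the CTE name is illustrative):

with low_var_intervals as (
    select
        date_trunc('hour', Date_time_of_data)
          + interval '30 min' * floor(date_part('minute', Date_time_of_data) / 30) as low_var_interval
    from T
    group by 1
    having var_samp(temperature) <= 0.2
)
update T
set temperature = NULL
where date_trunc('hour', Date_time_of_data)
        + interval '30 min' * floor(date_part('minute', Date_time_of_data) / 30)
      in (select low_var_interval from low_var_intervals);

On the five sample rows this nulls every temperature, since their sample variance is about 0.03.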