How do I calculate the final price in table items based on criteria? The criteria would be set in table modifier. I was thinking about a LEFT JOIN, but is it possible in a stored procedure to choose the type of modifier? Every record would have only one criterion; they can't be combined.
items:
code | price_general
---------------------------------
BIKE | 50
CAR | 300
BOAT | 600
PLANE | 1200
modifier:
type | item | amount
----------------------------------
PERC | CAR | 20 (add 20% on top)
FIXE | BOAT | 700 (fixed price 700)
ADD | PLANE | 10 (add +10 value)
result should look like this
code | price_general | price_final
-------------------------------------------------
BIKE | 50 | 50
CAR | 300 | 360
BOAT | 600 | 700
PLANE | 1200 | 1210
Is this possible in TSQL? Or should I add additional business logic to C# code?
Thank you for your ideas.
You could use a CASE statement with your logic. Something like:
SELECT i.code, i.price_general,
       CASE m.type
         WHEN 'PERC' THEN i.price_general * (1 + m.amount / 100.0)
         WHEN 'FIXE' THEN m.amount
         WHEN 'ADD'  THEN i.price_general + m.amount
         ELSE i.price_general
       END AS price_final
FROM items i LEFT JOIN modifier m ON m.item = i.code;
Apologies for any typos or syntax errors; I use PostgreSQL, but I hope you get the idea...
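A quick way to sanity-check this shape of query is to run it against an in-memory SQLite database. This is a sketch using the question's table and column names; note the division by 100.0 so an integer amount doesn't truncate, and that the join matches modifier.item to items.code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (code TEXT, price_general REAL);
CREATE TABLE modifier (type TEXT, item TEXT, amount REAL);
INSERT INTO items VALUES ('BIKE', 50), ('CAR', 300), ('BOAT', 600), ('PLANE', 1200);
INSERT INTO modifier VALUES ('PERC', 'CAR', 20), ('FIXE', 'BOAT', 700), ('ADD', 'PLANE', 10);
""")

rows = conn.execute("""
SELECT i.code,
       i.price_general,
       CASE m.type
         WHEN 'PERC' THEN i.price_general * (1 + m.amount / 100.0)
         WHEN 'FIXE' THEN m.amount
         WHEN 'ADD'  THEN i.price_general + m.amount
         ELSE i.price_general            -- no modifier row: keep the general price
       END AS price_final
FROM items i
LEFT JOIN modifier m ON m.item = i.code
""").fetchall()

for code, general, final in rows:
    print(code, general, final)
```

The unmatched BIKE row falls through the LEFT JOIN with a NULL m.type and lands in the ELSE branch, which is what keeps its price unchanged.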
Split the amount into an up (multiplier) column and a fixed column:
just put 0 in up if you want an all-fixed price,
and 0 in fixed if you want a pure multiplier.
select items.code
     , items.price_general
     , case when modifier.item is null
            then items.price_general
            else items.price_general * modifier.up + modifier.fixed
       end as [price_final]
from items
left join modifier
on modifier.item = items.code
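As a sketch of this idea (the up/fixed column names are this answer's, not the question's), each modifier type from the question collapses to a multiply-then-add pair: PERC 20 becomes up = 1.2, fixed = 0; FIXE 700 becomes up = 0, fixed = 700; ADD 10 becomes up = 1, fixed = 10. A runnable SQLite check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (code TEXT, price_general REAL);
CREATE TABLE modifier (item TEXT, up REAL, fixed REAL);
INSERT INTO items VALUES ('BIKE', 50), ('CAR', 300), ('BOAT', 600), ('PLANE', 1200);
-- PERC 20  -> multiply by 1.2, add 0
-- FIXE 700 -> multiply by 0,   add 700
-- ADD 10   -> multiply by 1,   add 10
INSERT INTO modifier VALUES ('CAR', 1.2, 0), ('BOAT', 0, 700), ('PLANE', 1, 10);
""")

rows = conn.execute("""
SELECT items.code,
       items.price_general,
       CASE WHEN modifier.item IS NULL
            THEN items.price_general
            ELSE items.price_general * modifier.up + modifier.fixed
       END AS price_final
FROM items
LEFT JOIN modifier ON modifier.item = items.code
""").fetchall()
```

The appeal of this design is that every modifier reduces to the same arithmetic, so the query needs no per-type branching once the rows are loaded.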
Try this
SELECT I.CODE,
       I.PRICE_GENERAL,
       CASE
         WHEN M.TYPE = 'PERC' THEN I.PRICE_GENERAL * ( 1 + M.AMOUNT / 100.0 )
         WHEN M.TYPE = 'FIXE' THEN M.AMOUNT
         WHEN M.TYPE = 'ADD'  THEN I.PRICE_GENERAL + M.AMOUNT
         ELSE I.PRICE_GENERAL
       END AS PRICE_FINAL
FROM   ITEMS I
       LEFT OUTER JOIN MODIFIER M
              ON M.ITEM = I.CODE
I have an unusual problem I'm trying to solve with SQL where I need to generate sequential numbers for partitioned rows but override specific numbers with values from the data, while not breaking the sequence (unless the override causes a number to be used greater than the number of rows present).
I feel I might be able to achieve this by selecting the rows where I need to override the generated sequence value and the rows where I don't, then unioning them together and somehow using COALESCE to get the desired dynamically generated sequence value; or maybe there's some way I can utilise a recursive CTE.
I've not been able to solve this problem yet, but I've put together a SQL Fiddle which provides a simplified version:
http://sqlfiddle.com/#!17/236b5/5
The desired_dynamic_number is what I'm trying to generate and the generated_dynamic_number is my current work-in-progress attempt.
Any pointers around the best way to achieve the desired_dynamic_number values dynamically?
Update:
I'm almost there using lag:
http://sqlfiddle.com/#!17/236b5/24
step-by-step demo: db<>fiddle
SELECT
*,
COALESCE( -- 3
first_value(override_as_number) OVER w -- 2
, 1
)
+ row_number() OVER w - 1 -- 4, 5
FROM (
SELECT
*,
SUM( -- 1
CASE WHEN override_as_number IS NOT NULL THEN 1 ELSE 0 END
) OVER (PARTITION BY grouped_by ORDER BY secondary_order_by)
as grouped
FROM sample
) s
WINDOW w AS (PARTITION BY grouped_by, grouped ORDER BY secondary_order_by)
Create a new subpartition within your partitions: this cumulative sum creates a unique group id for every group of records that starts with a non-NULL override_as_number followed by NULL records. So, for instance, your rows (AAA, d) to (AAA, f) belong to the same subpartition/group.
first_value() gives the first value of such subpartition.
The COALESCE ensures a non-NULL result from the first_value() function if your partition starts with a NULL record.
row_number() - 1 creates a row count within a subpartition, starting with 0.
Adding the first_value() of a subpartition to the row count creates your result: beginning with the one non-NULL record of a subpartition (plus a row count of 0), each following NULL record adds 1, and so forth.
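The whole approach can be checked end to end with a small script. This is a sketch that ports the query to an in-memory SQLite database (SQLite 3.28+ for the named WINDOW clause); the sample rows are reconstructed from the question's fiddle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (grouped_by TEXT, secondary_order_by TEXT, override_as_number INTEGER);
INSERT INTO sample VALUES
 ('AAA','a',1),('AAA','b',NULL),('AAA','c',3),('AAA','d',3),
 ('AAA','e',NULL),('AAA','f',NULL),('AAA','g',999),
 ('XYZ','a',NULL),('ZZZ','a',NULL),('ZZZ','b',NULL);
""")

rows = conn.execute("""
SELECT grouped_by, secondary_order_by,
       COALESCE(first_value(override_as_number) OVER w, 1)
         + row_number() OVER w - 1 AS dynamic_number
FROM (
  SELECT *,
         -- running count of non-NULL overrides = subpartition id
         SUM(CASE WHEN override_as_number IS NOT NULL THEN 1 ELSE 0 END)
           OVER (PARTITION BY grouped_by ORDER BY secondary_order_by) AS grouped
  FROM sample
) s
WINDOW w AS (PARTITION BY grouped_by, grouped ORDER BY secondary_order_by)
ORDER BY grouped_by, secondary_order_by
""").fetchall()
```

Each subpartition restarts the count at its override value (or at 1 via the COALESCE when the partition starts with NULLs), which is exactly the desired_dynamic_number behaviour.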
The query below gives the exact result, but you should verify it against all combinations:
select c.*,COALESCE(c.override_as_number,c.act) as final FROM
(
select b.*, dense_rank() over(partition by grouped_by order by grouped_by, actual) as act from
(
select a.*,COALESCE(override_as_number,row_num) as actual FROM
(
select grouped_by , secondary_order_by ,
dense_rank() over ( partition by grouped_by order by grouped_by, secondary_order_by ) as row_num
,override_as_number,desired_dynamic_number from fiddle
) a
) b
) c ;
column "final" is the result
grouped_by | secondary_order_by | row_num | override_as_number | desired_dynamic_number | actual | act | final
------------+--------------------+---------+--------------------+------------------------+--------+-----+-------
AAA | a | 1 | 1 | 1 | 1 | 1 | 1
AAA | b | 2 | | 2 | 2 | 2 | 2
AAA | c | 3 | 3 | 3 | 3 | 3 | 3
AAA | d | 4 | 3 | 3 | 3 | 3 | 3
AAA | e | 5 | | 4 | 5 | 4 | 4
AAA | f | 6 | | 5 | 6 | 5 | 5
AAA | g | 7 | 999 | 999 | 999 | 6 | 999
XYZ | a | 1 | | 1 | 1 | 1 | 1
ZZZ | a | 1 | | 1 | 1 | 1 | 1
ZZZ | b | 2 | | 2 | 2 | 2 | 2
(10 rows)
Hope this helps!
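This nested dense_rank() approach can also be checked mechanically. Here is a sketch against an in-memory SQLite database (data reconstructed from the result table above), asserting that the final column matches the desired values; as the answer says, verify it against your own edge cases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fiddle (grouped_by TEXT, secondary_order_by TEXT, override_as_number INTEGER);
INSERT INTO fiddle VALUES
 ('AAA','a',1),('AAA','b',NULL),('AAA','c',3),('AAA','d',3),
 ('AAA','e',NULL),('AAA','f',NULL),('AAA','g',999),
 ('XYZ','a',NULL),('ZZZ','a',NULL),('ZZZ','b',NULL);
""")

rows = conn.execute("""
SELECT c.grouped_by, c.secondary_order_by,
       COALESCE(c.override_as_number, c.act) AS final
FROM (
  SELECT b.*,
         -- re-rank over the override-adjusted values to close gaps
         dense_rank() OVER (PARTITION BY grouped_by ORDER BY actual) AS act
  FROM (
    SELECT a.*,
           COALESCE(override_as_number, row_num) AS actual
    FROM (
      SELECT grouped_by, secondary_order_by, override_as_number,
             dense_rank() OVER (PARTITION BY grouped_by
                                ORDER BY secondary_order_by) AS row_num
      FROM fiddle
    ) a
  ) b
) c
ORDER BY c.grouped_by, c.secondary_order_by
""").fetchall()
```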
The real-world problem I was trying to solve did not have a nicely ordered secondary_order_by column; instead it was something a bit more randomised (a created timestamp).
For the benefit of people who stumble across this question with a similar problem to solve, a colleague solved it using a cartesian join, and I'm posting their solution below. The solution is Snowflake SQL, which should be possible to adapt to Postgres. It does fall down on higher override_as_number values, though, unless the 1000 in from table(generator(rowcount => 1000)) is increased to something suitably high.
The SQL:
with tally_table as (
select row_number() over (order by seq4()) as gen_list
from table(generator(rowcount => 1000))
),
base as (
select *,
IFF(override_as_number IS NULL, row_number() OVER(PARTITION BY grouped_by, override_as_number order by random),override_as_number) as rownum
from "SANDPIT"."TEST"."SAMPLEDATA" order by grouped_by,override_as_number,random
) --select * from base order by grouped_by,random;
,
cart_product as (
select *
from tally_table cross join (Select distinct grouped_by from base ) as distinct_grouped_by
) --select * from cart_product;
,
filter_product as (
select *,
row_number() OVER(partition by cart_product.grouped_by order by cart_product.grouped_by,gen_list) as seq_order
from cart_product
where CONCAT(grouped_by,'~',gen_list) NOT IN (select concat(grouped_by,'~',override_as_number) from base where override_as_number is not null)
) --select * from try2 order by 2,3 ;
select base.grouped_by,
base.random,
base.override_as_number,
base.answer, -- This is hard coded as test data
IFF(override_as_number is null, gen_list, seq_order) as computed_answer
from base inner join filter_product on base.rownum = filter_product.seq_order and base.grouped_by = filter_product.grouped_by
order by base.grouped_by,
random;
In the end I went for a simpler solution using a temporary table and cursor to inject override_as_number values and shuffle other numbers.
I am new to PostgreSQL and am having trouble wrapping my mind around why I am getting the results I see.
I perform the following query:
SELECT
name AS region_name,
COUNT(tripsq1.id) AS trips,
COUNT(DISTINCT user_id) AS unique_users,
COUNT(case when consumed_at = start_at then tripsq1.id end) AS first_day,
(SUM(case when consumed_at = start_at then tripsq1.id end)::NUMERIC(6,4))/COUNT(tripsq1.id)::NUMERIC(6,4) AS percent_on_first_day
FROM promotionsq1
INNER JOIN couponsq1
ON promotion_id = promotionsq1.id
INNER JOIN tripsq1
ON couponsq1.id = coupon_id
INNER JOIN regionsq1
ON regionsq1.id = region_id
WHERE promotion_name = 'TestPromo'
GROUP BY region_name;
and get the following result
region_name | trips | unique_users | first_day | percent_on_first_day
-------------------+-------+--------------+-----------+-----------------------
A | 3 | 2 | 1 | 33.3333333333333333
B | 1 | 1 | 0 |
C | 1 | 1 | 1 | 2000.0000000000000000
The first row's percentage is calculated correctly, while the third row's percentage is 20 times what it should be. percent_on_first_day should be 100.00, since it is 100.0 * 1/1.
Any help would be greatly appreciated
I'm suspecting that the issue is because of this code:
SUM(case when consumed_at = start_at then tripsq1.id end)
This tells me you are summing the ids, which is meaningless. You probably want:
SUM(case when consumed_at = start_at then 1 end)
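The difference is easy to reproduce. This sketch uses an in-memory SQLite database with a single hypothetical trip whose id is 20, chosen to mirror the 20x inflation in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tripsq1 (id INTEGER, consumed_at TEXT, start_at TEXT);
-- one trip, consumed on its start date, with id 20
INSERT INTO tripsq1 VALUES (20, '2020-01-01', '2020-01-01');
""")

wrong, right = conn.execute("""
SELECT SUM(CASE WHEN consumed_at = start_at THEN id END),  -- sums the id itself
       SUM(CASE WHEN consumed_at = start_at THEN 1 END)    -- counts the row
FROM tripsq1
""").fetchone()

print(wrong, right)
```

With one matching row, the first expression returns the id (20) while the second returns 1, so any percentage built on the first version is inflated by whatever the id happens to be.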
I have two tables: one with grid squares, with integer columns x and y over the natural numbers, and another with points that lie on the grid created by the first table. Example schema:
Grid Table
id | x | y
------------
123 | 1 | 1
234 | 1 | 2
345 | 2 | 1
456 | 2 | 2
Then, the points table:
id | x | y
----------------
12 | 1.23 | 1.23
23 | 2.89 | 1.55
Currently, using this query:
SELECT g.* FROM grid as g, points as p
WHERE p.id=23 AND floor(p.x)=g.x AND floor(p.y)=g.y;
I get the expected result, which is the grid square in which the point with id 23 resides (grid with id 345); However, when the table grid has 10,000,000 rows (the current situation I'm in), this query is incredibly slow, i.e. on the order of a few seconds.
I've found a workaround for this, but it's ugly:
SELECT g.* FROM grid as g, points as p
WHERE p.id=23 AND (p.x-.5)::integer=g.x AND (p.y-.5)::integer=g.y;
I get the expected result again, and in 11ms, but this feels hacky. Are there cleaner ways to do this? Any help is appreciated!
You can use a CTE, as it is evaluated only once (note that since Postgres 12 a CTE may be inlined; add MATERIALIZED if you need to force that behaviour).
WITH p2 AS (select floor(p.x) x,
floor(p.y) y
from points p
where p.id=23)
SELECT g.*
FROM grid g
INNER JOIN p2
ON p2.x=g.x and p2.y=g.y
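Here is a runnable sketch of this CTE approach against an in-memory SQLite database with the question's sample rows; floor() is registered from Python since not every SQLite build ships the math functions:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite only has floor() when built with math functions, so supply our own
conn.create_function("floor", 1, math.floor)

conn.executescript("""
CREATE TABLE grid (id INTEGER, x INTEGER, y INTEGER);
CREATE TABLE points (id INTEGER, x REAL, y REAL);
INSERT INTO grid VALUES (123,1,1),(234,1,2),(345,2,1),(456,2,2);
INSERT INTO points VALUES (12,1.23,1.23),(23,2.89,1.55);
""")

rows = conn.execute("""
WITH p2 AS (
  SELECT floor(p.x) AS x, floor(p.y) AS y
  FROM points p
  WHERE p.id = 23
)
SELECT g.*
FROM grid g
INNER JOIN p2 ON p2.x = g.x AND p2.y = g.y
""").fetchall()
```

For the performance side of the original question, the usual fix is an index on grid(x, y) so the single floored point row becomes an index lookup rather than a scan.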
I've searched the forums, and while I see similar posts, they only address pieces of the full query I need to formulate (array_agg, WHERE EXISTS, joins, etc.). If the question I'm posting has been answered, I will gladly accept references to those threads.
I did find this thread, which is very similar to what I need, except it is for MySQL, and I kept running into errors trying to convert it to Postgres syntax. Hoping someone can help me put everything together. Here's the scenario:
Attribute
attrib_id | attrib_name
UserAttribute
user_id | attrib_id | value
Here's a small example of what the data looks like:
Attribute
attrib_id | attrib_name
-----------------------
1 | attrib1
2 | attrib2
3 | attrib3
4 | attrib4
5 | attrib5
UserAttribute -- there can be up to 15 attrib_id's/value's per user_id
user_id | attrib_id | value
----------------------------
101 | 1 | valueA
101 | 2 | valueB
102 | 1 | valueC
102 | 2 | valueD
103 | 1 | valueA
103 | 2 | valueB
104 | 1 | valueC
104 | 2 | valueD
105 | 1 | valueA
105 | 2 | valueB
Here's what I'm looking for
Result
user_id | attrib1_value | attrib2_value
--------------------------------------------------------
101 | valueA | valueB
102 | valueC | valueD
103 | valueA | valueB
104 | valueC | valueD
105 | valueA | valueB
As shown, I'm looking for single rows that contain:
- user_id from the UserAttribute table
- attribute values from the UserAttribute table
Note: I only need attribute values from the UserAttribute table for two specific attribute names in the Attribute table
Again, any help or reference to an existing solution would be greatly appreciated.
UPDATE:
#ronin provided a query that gets the results desired:
SELECT ua.user_id
,MAX(CASE WHEN a.attrib_name = 'attrib1' THEN ua.value ELSE NULL END) AS attrib_1_val
,MAX(CASE WHEN a.attrib_name = 'attrib2' THEN ua.value ELSE NULL END) AS attrib_2_val
FROM UserAttribute ua
JOIN Attribute a ON (a.attrib_id = ua.attrib_id)
WHERE a.attrib_name IN ('attrib1', 'attrib2')
GROUP BY ua.user_id;
To build on that, I tried to add some LIKE pattern matching within the WHEN condition (against ua.value), but everything ends up in the FALSE branch. I will start a new question to see if that can be incorporated if I cannot figure it out. Thanks all for the help!!
If each attribute only has a single value for a user, you can start by making a sparse matrix:
SELECT user_id
,CASE WHEN attrib_id = 1 THEN value ELSE NULL END AS attrib_1_val
,CASE WHEN attrib_id = 2 THEN value ELSE NULL END AS attrib_2_val
FROM UserAttribute;
Then compress the matrix using an aggregate function:
SELECT user_id
,MAX(CASE WHEN attrib_id = 1 THEN value ELSE NULL END) AS attrib_1_val
,MAX(CASE WHEN attrib_id = 2 THEN value ELSE NULL END) AS attrib_2_val
FROM UserAttribute
GROUP BY user_id;
In response to the comment, searching by attribute name rather than id:
SELECT ua.user_id
,MAX(CASE WHEN a.attrib_name = 'attrib1' THEN ua.value ELSE NULL END) AS attrib_1_val
,MAX(CASE WHEN a.attrib_name = 'attrib2' THEN ua.value ELSE NULL END) AS attrib_2_val
FROM UserAttribute ua
JOIN Attribute a ON (a.attrib_id = ua.attrib_id)
WHERE a.attrib_name IN ('attrib1', 'attrib2')
GROUP BY ua.user_id;
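A quick check of this pivot shape, sketched against an in-memory SQLite database with a subset of the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Attribute (attrib_id INTEGER, attrib_name TEXT);
CREATE TABLE UserAttribute (user_id INTEGER, attrib_id INTEGER, value TEXT);
INSERT INTO Attribute VALUES (1,'attrib1'),(2,'attrib2'),(3,'attrib3');
INSERT INTO UserAttribute VALUES
 (101,1,'valueA'),(101,2,'valueB'),
 (102,1,'valueC'),(102,2,'valueD'),
 (103,1,'valueA'),(103,2,'valueB');
""")

rows = conn.execute("""
SELECT ua.user_id,
       MAX(CASE WHEN a.attrib_name = 'attrib1' THEN ua.value END) AS attrib_1_val,
       MAX(CASE WHEN a.attrib_name = 'attrib2' THEN ua.value END) AS attrib_2_val
FROM UserAttribute ua
JOIN Attribute a ON a.attrib_id = ua.attrib_id
WHERE a.attrib_name IN ('attrib1', 'attrib2')
GROUP BY ua.user_id
ORDER BY ua.user_id
""").fetchall()
```

MAX() here is only a device to collapse the sparse matrix: each CASE yields the value in one row of the group and NULL in the others, and MAX() picks the single non-NULL survivor.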
Starting with Postgres 9.4 you can use the simpler aggregate FILTER clause:
SELECT user_id
,MAX(value) FILTER (WHERE attrib_id = 1) AS attrib_1_val
,MAX(value) FILTER (WHERE attrib_id = 2) AS attrib_2_val
FROM UserAttribute
WHERE attrib_id IN (1,2)
GROUP BY 1;
For more than a few attributes or for top performance, look to crosstab() from the additional module tablefunc (Postgres 8.3+). Details here:
PostgreSQL Crosstab Query
What about something like this:
select ua.user_id, ua.value as attrib_value1, ua2.value as attrib_value2
from UserAttribute ua
left join Attribute a on a.attrib_id = ua.attrib_id and a.attrib_id in (1,2)
left join UserAttribute ua2 on ua2.user_id = ua.user_id and ua2.attrib_id > ua.attrib_id
left join Attribute a2 on a2.attrib_id = ua2.attrib_id and a2.attrib_id in (1,2)
I am new to JasperReports and I am trying to generate a pie chart using iReport 5.1.0.
I have counts of days taken, which should compute the percentages of the 3 slices, but what should I give in the Key Expression and Label Expression?
I am trying to customize the 3 slice labels as "Within 5 days", "More than 5 days" and "Tested but not referred".
I am getting counts through this code
SELECT SUM(subSet.days_taken <= 5) AS within_5_days,
SUM(subSet.days_taken > 5) AS more_than_5,
SUM(subSet.date_referred IS NULL) as not_yet_referred
FROM (select p.patient_id,
(CASE
WHEN st.smear_result <> 'NEGATIVE' OR st.gxp_result = 'MTB+' THEN (DATEDIFF(r.date_referred, MIN(st.date_smear_tested)))
ELSE
(CASE
WHEN st.smear_result = 'NEGATIVE' OR st.gxp_result = 'MTB-' THEN (DATEDIFF(r.date_referred, MAX(st.date_smear_tested)))
END) END) as days_taken,
r.date_referred as date_referred
from patient as p
left outer join sputum_test as st on p.patient_id = st.patient_id
left outer join referral as r on r.patient_id = st.patient_id
where p.suspected_by is not null
and (p.patient_status = 'SUSPECT' or
p.patient_status = 'CONFIRMED')
group by p.patient_id)
as subSet
This is also the DataSet run I am using.
Your help will be really appreciated.
What you do now is produce three columns in one row, so you probably get something similar to the following:
--------------------------------------------------
| within_5_days | more_than_5 | not_yet_referred |
--------------------------------------------------
| 4 | 5 | 8 |
--------------------------------------------------
However the pie chart won't accept it in that format. Instead you want this:
-------------------------
| Type | Summ |
-------------------------
|within_5_days | 4 |
|more_than_5 | 5 |
|not_yet_referred| 8 |
-------------------------
With that you can use "Type" as your label expression and "Summ" as your value expression. So you would have to change the query to something like this:
select CASE
         WHEN subSet.date_referred IS NULL THEN 'not_yet_referred'
         WHEN subSet.days_taken <= 5 THEN 'within_5_days'
         WHEN subSet.days_taken > 5 THEN 'more_than_5'
       END AS Type, 1 AS Summ ...
Then you can SUM(Summ) with a GROUP BY Type.
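The reshape can be sketched like this, run here against an in-memory SQLite database with hypothetical pre-computed days_taken values (the real query derives them with DATEDIFF):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- days_taken / date_referred pre-computed for the sketch
CREATE TABLE subSet (patient_id INTEGER, days_taken INTEGER, date_referred TEXT);
INSERT INTO subSet VALUES
 (1, 3, '2020-01-04'),
 (2, 4, '2020-01-05'),
 (3, 8, '2020-01-09'),
 (4, NULL, NULL);
""")

rows = conn.execute("""
SELECT CASE
         WHEN date_referred IS NULL THEN 'not_yet_referred'
         WHEN days_taken <= 5 THEN 'within_5_days'
         ELSE 'more_than_5'
       END AS Type,
       COUNT(*) AS Summ
FROM subSet
GROUP BY Type
ORDER BY Type
""").fetchall()
```

Checking date_referred IS NULL first matters: an unreferred patient has a NULL days_taken, so neither numeric comparison would match it. The one-row-per-slice shape is exactly what the pie chart's label and value expressions expect.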