How to ignore repeated values when summing? - jasper-reports

I am new to JasperReports. I have one report with tabular data; the table looks like this:
ID | Product | Amount | Discount
----------------------------------
1  | ABC     | 12000  | 10
1  | ABC     | 12000  | 10
1  | ABC     | 12000  | 10
2  | XYZ     | 5000   | 20
2  | XYZ     | 5000   | 20
2  | XYZ     | 5000   | 20
2  | XYZ     | 5000   | 20
3  | PQR     | 13000  | 25
3  | PQR     | 13000  | 25
I want to use the built-in functionality of JasperReports, i.e. leaving "Print Repeated Values" unchecked. The display then hides the repeated values, but when I sum the Amount column it still adds every row. I want the total to be 12000 + 5000 + 13000 only.
How can I get that result?
I want to get a result like this:
ID | Product | Amount | Discount
----------------------------------
1  | ABC     | 12000  | 10
   | ABC     |        |
   | ABC     |        |
2  | XYZ     | 5000   | 20
   | XYZ     |        |
   | XYZ     |        |
   | XYZ     |        |
3  | PQR     | 13000  | 25
   | PQR     |        |
----------------------------------
Total          30000    55
----------------------------------
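The usual JasperReports fix is to create a report group on the ID field and sum with incrementType="Group", so each group's Amount is added once regardless of how many detail rows repeat it. A minimal JRXML sketch (the names IdGroup and TotalAmount, and the java.math.BigDecimal class, are assumptions to adapt to your report):

```xml
<!-- group that breaks whenever the ID field changes -->
<group name="IdGroup">
    <groupExpression><![CDATA[$F{ID}]]></groupExpression>
</group>

<!-- incremented once per IdGroup, not once per detail row -->
<variable name="TotalAmount" class="java.math.BigDecimal"
          calculation="Sum" incrementType="Group" incrementGroup="IdGroup">
    <variableExpression><![CDATA[$F{Amount}]]></variableExpression>
</variable>
```

Put $V{TotalAmount} in the summary band; combined with the unchecked "Print Repeated Values" option, the display and the total then agree (12000 + 5000 + 13000 = 30000).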

Related

PostgreSQL: get the latest date of each record, and count rows within N days before that date

Please help me write a query; I'm at a dead end.
There are 2 tables:
“Trains”:
+----+---------+
| id | numbers |
+----+---------+
| 1 | 101 |
| 2 | 102 |
| 3 | 103 |
| 4 | 104 |
| 5 | 105 |
+----+---------+
“Passages”:
+----+--------------+-------+---------------------+
| id | train_number | speed | date_time |
+----+--------------+-------+---------------------+
| 1 | 101 | 26 | 2021-11-10 16:26:30 |
| 2 | 101 | 28 | 2021-11-12 16:26:30 |
| 3 | 102 | 24 | 2021-11-14 16:26:30 |
| 4 | 103 | 27 | 2021-11-15 16:26:30 |
| 5 | 101 | 29 | 2021-11-16 16:26:30 |
+----+--------------+-------+---------------------+
The goal is: for each train number in the Trains table, take the latest passage date (max date_time) from the Passages table, and count the passages within "the last date for each train" minus N days - as I understand it, date_time - interval 'N days'. I should get something like:
+----+--------+---------------------+----------------+
| id | train | last_passage | count_passages |
+----+--------+---------------------+----------------+
| 1 | 101 | 2021-11-10 16:26:30 | 2 |
| 2 | 102 | 2021-11-14 16:26:30 | 1 |
| 3 | 103 | 2021-11-15 16:26:30 | 1 |
| 4 | 104 | null | 0 |
| 5 | 105 | null | 0 |
+----+--------+---------------------+----------------+
P.S. "count_passages" - for example, the passages within the last passage date minus 4 days.
I tried using WHERE ... IN, but I can't put together a correct query.
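A sketch of one way to do it: compute each train's latest passage in a derived table, LEFT JOIN it to Trains so trains without passages keep a NULL, and count recent passages with a correlated subquery. In PostgreSQL the date test would be p.date_time >= lp.last_passage - interval '4 days'; the demo below runs the same logic on sqlite (via Python) using datetime(), with table and column names taken from the question. Note that taking "last" literally as MAX(date_time) gives 2021-11-16 for train 101, not the 2021-11-10 shown in the sample output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trains (id INTEGER, numbers INTEGER);
CREATE TABLE passages (id INTEGER, train_number INTEGER, speed INTEGER, date_time TEXT);
INSERT INTO trains VALUES (1,101),(2,102),(3,103),(4,104),(5,105);
INSERT INTO passages VALUES
  (1,101,26,'2021-11-10 16:26:30'),
  (2,101,28,'2021-11-12 16:26:30'),
  (3,102,24,'2021-11-14 16:26:30'),
  (4,103,27,'2021-11-15 16:26:30'),
  (5,101,29,'2021-11-16 16:26:30');
""")

rows = conn.execute("""
SELECT t.numbers AS train,
       lp.last_passage,
       -- passages in the 4 days up to (and including) the latest one
       (SELECT COUNT(*) FROM passages p
        WHERE p.train_number = t.numbers
          AND p.date_time >= datetime(lp.last_passage, '-4 days')) AS count_passages
FROM trains t
LEFT JOIN (SELECT train_number, MAX(date_time) AS last_passage
           FROM passages
           GROUP BY train_number) lp ON lp.train_number = t.numbers
ORDER BY t.numbers
""").fetchall()

for r in rows:
    print(r)
```

Trains 104 and 105 come out with a NULL last_passage and a count of 0, as in the desired output.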

SQL Server 2008 R2 - converting columns to rows, with all values in one column

I am having a hard time wrapping my head around the pivot/unpivot concepts, and I'm hoping someone can help or give me some guidance on how to approach my problem.
Here is a simplified sample table I have
+-------+------+------+------+------+------+
| SAUID | COM1 | COM2 | COM3 | COM4 | COM5 |
+-------+------+------+------+------+------+
| 1 | 24 | 22 | 100 | 0 | 45 |
| 2 | 34 | 55 | 789 | 23 | 0 |
| 3 | 33 | 99 | 5552 | 35 | 4675 |
+-------+------+------+------+------+------+
The end result I am looking for a table result similar below
+-------+-----------+-------+
| SAUID | OCCUPANCY | VALUE |
+-------+-----------+-------+
| 1 | COM1 | 24 |
| 1 | COM2 | 22 |
| 1 | COM3 | 100 |
| 1 | COM4 | 0 |
| 1 | COM5 | 45 |
| 2 | COM1 | 34 |
| 2 | COM2 | 55 |
| 2 | COM3 | 789 |
| 2 | COM4 | 23 |
| 2 | COM5 | 0 |
| 3 | COM1 | 33 |
| 3 | COM2 | 99 |
| 3 | COM3 | 5552 |
| 3 | COM4 | 35 |
| 3 | COM5 | 4675 |
+-------+-----------+-------+
I'm looking around, but most of the examples seem to use PIVOT, and I'm having a hard time applying that to my case since I need all the values in one column.
I'm hoping to experiment with some hardcoded queries to get familiar with my example, but my actual tables have ~100 columns, with varying numbers of SAUID rows per table; it looks like this will require dynamic SQL?
Thanks for the help in advance.
Use UNPIVOT:
SELECT u.SAUID, u.OCCUPANCY, u.VALUE
FROM yourTable t
UNPIVOT
(
    VALUE FOR OCCUPANCY IN (COM1, COM2, COM3, COM4, COM5)
) u
ORDER BY u.SAUID, u.OCCUPANCY;
Demo
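As an aside, UNPIVOT is T-SQL-specific; the portable equivalent is one SELECT per COMx column glued together with UNION ALL, and for ~100 columns you would typically generate that column list with dynamic SQL (e.g. from sys.columns on SQL Server). A sketch of the UNION ALL form, run here on sqlite via Python just to show the shape of the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourTable (SAUID INTEGER, COM1 INTEGER, COM2 INTEGER,"
             " COM3 INTEGER, COM4 INTEGER, COM5 INTEGER)")
conn.executemany("INSERT INTO yourTable VALUES (?,?,?,?,?,?)",
                 [(1, 24, 22, 100, 0, 45),
                  (2, 34, 55, 789, 23, 0),
                  (3, 33, 99, 5552, 35, 4675)])

# One branch per COMx column; works on any SQL engine.
rows = conn.execute("""
SELECT SAUID, 'COM1' AS OCCUPANCY, COM1 AS "VALUE" FROM yourTable
UNION ALL SELECT SAUID, 'COM2', COM2 FROM yourTable
UNION ALL SELECT SAUID, 'COM3', COM3 FROM yourTable
UNION ALL SELECT SAUID, 'COM4', COM4 FROM yourTable
UNION ALL SELECT SAUID, 'COM5', COM5 FROM yourTable
ORDER BY SAUID, OCCUPANCY
""").fetchall()

print(rows[:5])  # the five rows for SAUID = 1
```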

Add the values of score column for a specific user considering time decay factor

I asked a question at the following link, and one of the members helped me solve most of it (calculating column t and column pre_score). But I need to calculate one more column; I explained the details in the link.
Previous question
In summary, how can I calculate the intellectual-capital column using column t and column pre_score? The intellectual-capital column takes the pre_score from all previous competitions and multiplies each pre_score by e^(number of days that have passed since that competition / 500). In this example each user has at most 2 previous competitions, but in my dataset there may be more than 200, so I need a query that considers all scores from past competitions and the time that has passed since each one.
--> the value of e is approximately 2.71828
| competitionId | UserId | t   | pre_score | intellectual-capital                            |
|---------------|--------|-----|-----------|-------------------------------------------------|
| 1             | 100    |     |           |                                                 |
| 2             | 100    | -4  | 3000      | 3000*POWER(e, -4/500)                           |
| 3             | 100    | -5  | 4000      | 3000*POWER(e, -9/500) + 4000*POWER(e, -5/500)   |
| 1             | 200    |     |           |                                                 |
| 4             | 200    | -19 | 3000      | 3000*POWER(e, -19/500)                          |
| 1             | 300    |     |           |                                                 |
| 3             | 300    | -9  | 3000      | 3000*POWER(e, -9/500)                           |
| 4             | 300    | -10 | 1200      | 3000*POWER(e, -19/500) + 1200*POWER(e, -10/500) |
| 1             | 400    |     |           |                                                 |
| 2             | 400    | -4  | 3000      | 3000*POWER(e, -4/500)                           |
| 3             | 400    | -5  | 4000      | 3000*POWER(e, -9/500) + 4000*POWER(e, -5/500)   |
This is the result:
| prev_score | intellectual_capital | competitionsId | UserId | date | score | day_diff | t | prev_score |
|------------|----------------------|----------------|--------|----------------------|-------|----------|--------|------------|
| (null) | (null) | 1 | 100 | 2015-01-01T00:00:00Z | 3000 | -4 | (null) | (null) |
| 3000 | 2976.09 | 2 | 100 | 2015-01-05T00:00:00Z | 4000 | -5 | -4 | 3000 |
| 4000 | 6936.29 | 3 | 100 | 2015-01-10T00:00:00Z | 1200 | (null) | -5 | 4000 |
| (null) | (null) | 1 | 200 | 2015-01-01T00:00:00Z | 3000 | -19 | (null) | (null) |
| 3000 | 2888.13 | 4 | 200 | 2015-01-20T00:00:00Z | 1000 | (null) | -19 | 3000 |
| (null) | (null) | 1 | 300 | 2015-01-01T00:00:00Z | 3000 | -9 | (null) | (null) |
| 3000 | 2946.48 | 3 | 300 | 2015-01-10T00:00:00Z | 1200 | -10 | -9 | 3000 |
| 1200 | 4122.72 | 4 | 300 | 2015-01-20T00:00:00Z | 1000 | (null) | -10 | 1200 |
| (null) | (null) | 1 | 400 | 2015-01-01T00:00:00Z | 3000 | -4 | (null) | (null) |
| 3000 | 2976.09 | 2 | 400 | 2015-01-05T00:00:00Z | 4000 | -5 | -4 | 3000 |
| 4000 | 6936.29 | 3 | 400 | 2015-01-10T00:00:00Z | 1200 | (null) | -5 | 4000 |
It was produced by this query, which now contains e:
with Primo as (
    select *
         , datediff(day, lead([date], 1) over (partition by userid order by [date]), [date]) day_diff
    from Table1
)
, Secondo as (
    select *
         , lag(day_diff, 1) over (partition by userid order by [date]) t
         , lag(score, 1) over (partition by userid order by [date]) prev_score
    from Primo
)
select prev_score
     , sum(prev_score * power(2.71828, t / 500.0)) over (partition by userid order by [date]) intellectual_capital
     , competitionsId, UserId, [date], score, day_diff, t, prev_score
from Secondo
Demo
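For reference, the question's formula column decays each previous score by the cumulative days elapsed (e.g. 3000*POWER(e, -9/500) + 4000*POWER(e, -5/500) for user 100 at competition 3), whereas the running sum in the query decays each score by its own single-step gap t only, which is why it prints 6936.29 rather than ~6906.68. A small Python sketch of the question's rule, useful for checking expected values (the function name is mine):

```python
E = 2.71828  # the approximation of Euler's number used in the question

def intellectual_capital(history):
    """history: (t, pre_score) pairs for a user's previous competitions,
    oldest first; t is the (negative) day gap to the next competition.
    Each pre_score is decayed by the total days elapsed since it."""
    total = 0.0
    elapsed = 0
    for t, score in reversed(history):   # walk backwards in time
        elapsed += t                     # cumulative (negative) days
        total += score * E ** (elapsed / 500.0)
    return total

# user 100 at competition 3: 3000*POWER(e,-9/500) + 4000*POWER(e,-5/500)
print(round(intellectual_capital([(-4, 3000), (-5, 4000)]), 2))  # prints 6906.68
```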

Split postgres records into groups based on time fields

I have a table with records that look like this:
| id | coord-x | coord-y | time |
---------------------------------
| 1 | 0 | 0 | 123 |
| 1 | 0 | 1 | 124 |
| 1 | 0 | 3 | 125 |
The time column represents a time in milliseconds. What I want to do is find all the coord-x, coord-y pairs as a set of points for a given timeframe for a given id. For any given id, the combination (coord-x, coord-y, time) is unique.
What I need to do, however, is group these points as long as they're at most n milliseconds apart. So if I have this:
| id | coord-x | coord-y | time |
---------------------------------
| 1 | 0 | 0 | 123 |
| 1 | 0 | 1 | 124 |
| 1 | 0 | 3 | 125 |
| 1 | 0 | 6 | 140 |
| 1 | 0 | 7 | 141 |
I would want a result similar to this:
| id | points | start-time | end-time |
| 1 | (0,0), (0,1), (0,3) | 123 | 125 |
| 1 | (0,6), (0,7) | 140 | 141 |
I do have PostGIS installed on my database. The times I posted above are not representative; I kept them small just as a sample. The time is just a millisecond timestamp.
The tricky part is picking the expression inside your GROUP BY. If n = 5, you can do something like time / 5. To match the example exactly, the query below uses (time - 3) / 5. Once you group it, you can aggregate them into an array with array_agg.
SELECT
    array_agg(("coord-x", "coord-y")) AS points,
    min(time) AS time_start,
    max(time) AS time_end
FROM "<your_table>"
WHERE id = 1
GROUP BY (time - 3) / 5;
Here is the output:
+---------------------------+--------------+------------+
| points | time_start | time_end |
|---------------------------+--------------+------------|
| {"(0,0)","(0,1)","(0,3)"} | 123 | 125 |
| {"(0,6)","(0,7)"} | 140 | 141 |
+---------------------------+--------------+------------+
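Note that fixed buckets like (time - 3) / 5 only approximate "at most n ms apart": a run that straddles a bucket boundary gets split. The robust pattern is gaps-and-islands: flag a row whenever its gap to the previous row exceeds n, then number the groups with a running sum of the flags. A sketch, run on sqlite via Python (sqlite 3.25+ for window functions; the same query works in PostgreSQL with array_agg in place of group_concat):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE pts (id INTEGER, "coord-x" INTEGER, "coord-y" INTEGER, time INTEGER)')
conn.executemany("INSERT INTO pts VALUES (1,?,?,?)",
                 [(0, 0, 123), (0, 1, 124), (0, 3, 125), (0, 6, 140), (0, 7, 141)])

rows = conn.execute("""
WITH flagged AS (
    -- 1 whenever the gap to the previous row exceeds n = 5 ms
    SELECT *, CASE WHEN time - lag(time) OVER (PARTITION BY id ORDER BY time) > 5
                   THEN 1 ELSE 0 END AS new_grp
    FROM pts WHERE id = 1
),
numbered AS (
    -- running sum of the flags gives a group number per island
    SELECT *, SUM(new_grp) OVER (PARTITION BY id ORDER BY time) AS grp
    FROM flagged
)
SELECT id,
       group_concat('(' || "coord-x" || ',' || "coord-y" || ')') AS points,
       MIN(time) AS time_start,
       MAX(time) AS time_end
FROM numbered
GROUP BY id, grp
ORDER BY time_start
""").fetchall()

for r in rows:
    print(r)
```

Unlike the fixed-bucket version, this keeps a group open for as long as consecutive points stay within n ms of each other, regardless of where the run starts.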

Add multiple sort positions to documents

I have documents with the fields: _id, title, one, two, positionOne, positionTwo.
For example:
_id | title | one  | two  | positionOne | positionTwo
1   | aaa   | 12,2 | 10   |             |
2   | bbb   | 3,2  | 12,2 |             |
3   | ccc   | 12,6 | 2    |             |
4   | ddd   | 3    | 4    |             |
5   | eee   | 5,5  | 5    |             |
And I would like to receive:
_id | title | one  | two  | positionOne | positionTwo
1   | aaa   | 12,2 | 10   | 2           | 3
2   | bbb   | 3    | 12,2 | 4           | 2
3   | ccc   | 12,6 | 2    | 1           | 5
4   | ddd   | 3    | 4    | 4           | 4
5   | eee   | 5,5  | 15   | 3           | 1
I would like to assign the positions ranked from max to min.
Is this possible with Mongo queries to the db alone?
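On MongoDB 5.0+ this can be done entirely in the database with $setWindowFields: a $denseRank window sorted by {one: -1} (and a second one sorted by {two: -1}), with $merge or $out to write the positions back. The ranking rule that matches the desired output (ties such as bbb and ddd share a position) is a dense rank over the descending values; a small Python sketch of just that logic, with the question's decimal-comma values written as decimals:

```python
def dense_positions(docs, field):
    """Dense rank, 1 = largest value; equal values share a position
    (the same semantics as MongoDB's $denseRank)."""
    ranked = {v: i + 1
              for i, v in enumerate(sorted({d[field] for d in docs}, reverse=True))}
    return {d["_id"]: ranked[d[field]] for d in docs}

# values taken from the question's desired-output table ("12,2" -> 12.2)
docs = [
    {"_id": 1, "title": "aaa", "one": 12.2, "two": 10},
    {"_id": 2, "title": "bbb", "one": 3,    "two": 12.2},
    {"_id": 3, "title": "ccc", "one": 12.6, "two": 2},
    {"_id": 4, "title": "ddd", "one": 3,    "two": 4},
    {"_id": 5, "title": "eee", "one": 5.5,  "two": 15},
]

print(dense_positions(docs, "one"))  # {1: 2, 2: 4, 3: 1, 4: 4, 5: 3}
print(dense_positions(docs, "two"))  # {1: 3, 2: 2, 3: 5, 4: 4, 5: 1}
```

Both printed mappings match the positionOne and positionTwo columns in the desired output above.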