JasperReports - state name on y axis of time series chart

I have built a time series chart in JasperReports Studio with the following dataset:
"status" : [
{"value": 0, "timestamp":"16:30:07"},
{"value": 0, "timestamp":"17:31:07"},
{"value": 1, "timestamp":"18:32:07"},
{"value": 1, "timestamp":"19:33:07"},
{"value": 1, "timestamp":"20:34:07"},
{"value": 1, "timestamp":"21:35:07"},
{"value": 2, "timestamp":"22:36:07"},
{"value": 0, "timestamp":"23:37:07"}
]
and it works fine.
Now, I want to "translate" the value field in my dataset to 0 = Off, 1 = Middle, 2 = On and show those labels on my y-axis.
The end result should show the state names on the y-axis instead of the numeric values (the original post illustrated this with before/after screenshots).
Does anyone know how this can be done?
Thanks in advance.
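One approach worth trying (not from the original thread, just a sketch): JasperReports charts are rendered by JFreeChart, so a chart customizer can swap the numeric range axis for a SymbolAxis that maps integer positions to labels. The class name StateAxisCustomizer below is made up for illustration; it would be registered on the chart element via its customizerClass attribute in the JRXML.
import net.sf.jasperreports.engine.JRChart;
import net.sf.jasperreports.engine.JRChartCustomizer;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.SymbolAxis;

// Replaces the numeric y-axis with symbolic labels: 0 -> Off, 1 -> Middle, 2 -> On
public class StateAxisCustomizer implements JRChartCustomizer {
    public void customize(JFreeChart chart, JRChart jasperChart) {
        SymbolAxis stateAxis = new SymbolAxis(null, new String[] { "Off", "Middle", "On" });
        stateAxis.setGridBandsVisible(false); // SymbolAxis draws shaded bands by default
        chart.getXYPlot().setRangeAxis(stateAxis);
    }
}
SymbolAxis labels the integer tick positions 0, 1, 2 with the given strings, so it fits a 0/1/2 state value directly; values outside that range would need extra handling.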

Related

Calculate differences between consecutive Kafka messages in one Topic

I have some temperature sensors which generate Kafka messages to a Kafka topic (my-sensors-topic). The messages generally look like below.
{"Offset": 7, "Id": 1, "Time": 1643718777898, "Value": 21}
{"Offset": 6, "Id": 1, "Time": 1643718768592, "Value": 20}
{"Offset": 5, "Id": 2, "Time": 1643718755443, "Value": 21}
{"Offset": 4, "Id": 3, "Time": 1643718746678, "Value": 21}
{"Offset": 3, "Id": 4, "Time": 1643718733408, "Value": 22}
{"Offset": 2, "Id": 2, "Time": 1643718709450, "Value": 20}
{"Offset": 1, "Id": 3, "Time": 1643718667375, "Value": 22}
{"Offset": 0, "Id": 1, "Time": 1643718386944, "Value": 19}
What I want to do is, for a newly generated message:
{"Offset": 8, "Id": 2, "Time": 1643719318393, "Value": 21}
First, find the last existing message that has the same Id. In this case:
{"Offset": 5, "Id": 2, "Time": 1643718755443, "Value": 21}
because it is the last existing message with Id 2.
Second, I want to compute the "Time" difference (in milliseconds) between these two messages.
If the difference is greater than 60000, it counts as an error for this sensor, and I need to create a message recording the error and write it to another Kafka topic (my-sensors-error-topic).
The created message might look like:
{"Id": 2, "Time_lead": 1643719318393, "Time_lag": 1643718755443, "Letancy": 1562950}
//Latency is calculated by (Time_lead-Time_lag)
So later I can count the records in my-sensors-error-topic by (sensor) Id to know how many errors occurred for each sensor.
From my own investigation, to implement this scenario I need to use the Kafka Streams Processor API with a state store. Some examples implement the Processor interface, while others use transform().
Which approach is better for my scenario, and how do I implement it?
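Either works; transform() is the convenient route when the result should stay in a KStream and flow on to another topic. Below is a minimal sketch of that approach with several simplifying assumptions not in the question: the stream is keyed by sensor Id as a String, the value carries just the "Time" long, default serdes are configured, and the store name last-ts-store is illustrative. (In recent Kafka Streams releases transform() is deprecated in favor of process(), but the logic is the same.)
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class SensorLatencyTopology {

    // Remembers the last "Time" per sensor Id in a state store; when the gap to
    // the previous message exceeds 60000 ms it emits a JSON error record,
    // otherwise it emits nothing.
    static class LatencyTransformer implements Transformer<String, Long, KeyValue<String, String>> {
        private KeyValueStore<String, Long> lastTime;

        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            lastTime = (KeyValueStore<String, Long>) context.getStateStore("last-ts-store");
        }

        public KeyValue<String, String> transform(String id, Long time) {
            Long lag = lastTime.get(id);  // "Time" of the previous message with this Id, if any
            lastTime.put(id, time);       // remember the current "Time" for the next message
            if (lag != null && time - lag > 60_000L) {
                String error = String.format(
                        "{\"Id\": %s, \"Time_lead\": %d, \"Time_lag\": %d, \"Latency\": %d}",
                        id, time, lag, time - lag);
                return KeyValue.pair(id, error);
            }
            return null;                  // returning null emits no record
        }

        public void close() { }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.addStateStore(Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("last-ts-store"),
                Serdes.String(), Serdes.Long()));

        builder.<String, Long>stream("my-sensors-topic")
               .transform(LatencyTransformer::new, "last-ts-store")
               .to("my-sensors-error-topic");
        // builder.build() then goes into new KafkaStreams(topology, props) as usual
    }
}
The Processor-interface variant is the same logic, but forwarding via context.forward() instead of a return value; the two are functionally equivalent here.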

Timestamp microsecond precision is reset to 000 when Kafka deserializes data to create SQL insert (?)

I am using an Avro schema for the data and have a timestamp field named 'time' that looks like this:
{"name": "time", "type":
{"type": "long", "logicalType": "timestamp-micros"}},
The logical type could alternatively be timestamp-millis, but I want microseconds included, which is why I chose timestamp-micros.
What is passed in here is:
{'time': datetime.datetime(2022, 1, 10, 6, 52, 53, 511281, tzinfo=)}
The problem is that when this is deserialized I get something like
2022, 1, 10, 6, 52, 53, 511000
inserted into the database. This is shaving off the micros.
What is this related to?
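The trailing 000 is the signature of a conversion that passes through milliseconds somewhere between the Avro decoder and the insert. A plain-Java illustration using the value from the question (assuming UTC):
long micros   = 1641797573511281L; // 2022-01-10 06:52:53.511281 as timestamp-micros
long millis   = micros / 1000;     // 1641797573511 -- the trailing 281 is dropped here
long restored = millis * 1000;     // 1641797573511000 -> shows up as ...53.511000
So it is worth checking which component in the path maps the logical type to a millisecond-based representation (for example java.util.Date, or a connector whose timestamp handling is millisecond-only) instead of carrying the raw long through; that is where the micros are lost.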

get array of all json field keys in postgresql

I have a table called user, and inside the table there is a field called friends; this field has json type and holds values like the following example:
{"blockList": {"199": {"date": 1453197190, "status": 1}, "215": {"date": 1459325611, "status": 1}, "219": {"date": 1454244074, "status": 1}, "225": {"date": 1453981312, "status": 1}, "229": {"date": 1459327685, "status": 1}}, "followers": {"211": {"date": 1452503369}, "219": {"date": 1452764627}, "334": {"date": 1456396375}}, "following": {"215": {"date": 1459325619}, "219": {"date": 1453622322}, "226": {"date": 1454244887}, "229": {"date": 1459327691}}, "friendList": {"213": {"date": 1453622410, "type": 2, "status": 1}, "214": {"date": 1452763643, "status": 1}, "215": {"date": 1455606872, "type": 2, "status": 2}, "218": {"date": 1453280047, "status": 1}, "219": {"date": 1453291227, "status": 2}, "221": {"date": 1453622410, "type": 2, "status": 1}, "224": {"date": 1453380152, "type": 2, "status": 1}, "225": {"date": 1453709357, "type": 2, "status": 2}, "226": {"date": 1454244088, "type": 2, "status": 1}, "229": {"date": 1454326745, "type": 2, "status": 2}}}
This record has a blockList object containing objects for blocked users.
What I need is to return an array of all blockList keys, like this:
["199", "215", "219", "225", "229"]
How can I write a plpgsql function to do that (return all object keys in an array)?
I'm a beginner in PostgreSQL and would appreciate some help.
Use json_object_keys to get a set containing the outermost keys of a json object (so you'll need to select the object under the blockList key, which you can do with friends->'blockList'), and use array_agg to aggregate them into an array:
SELECT ARRAY_AGG(f)
FROM (
SELECT json_object_keys(friends->'blockList') f
FROM users
) u;
┌───────────────────────┐
│ array_agg │
├───────────────────────┤
│ {199,215,219,225,229} │
└───────────────────────┘
(1 row)
Note: if you're using the jsonb type (and not json), you'll need the jsonb_object_keys function instead.
SELECT array_agg(ks) FROM (
SELECT json_object_keys(friends->'blockList') AS ks
FROM users
) x
I have created a SQL fiddle here to demonstrate.
Note: user is a reserved word, so I have called the table users.
I'm late to the party, but I would like to propose the following way, borrowed from Erwin here:
SELECT ARRAY(SELECT json_object_keys(friends->'blockList')) FROM users;

How do I return an array of objects from jsonb?

I have the following table:
CREATE TABLE mytable (
id serial PRIMARY KEY
, employee text UNIQUE NOT NULL
, data jsonb
);
With the following data:
INSERT INTO mytable (employee, data)
VALUES
('Jim', '{"sales_tv": [{"value": 10, "yr": "2010", "loc": "us"}, {"value": 5, "yr": "2011", "loc": "europe"}, {"value": 40, "yr": "2012", "loc": "asia"}], "sales_radio": [{"value": 11, "yr": "2010", "loc": "us"}, {"value": 8, "yr": "2011", "loc": "china"}, {"value": 76, "yr": "2012", "loc": "us"}], "another_key": "another value"}'),
('Rob', '{"sales_radio": [{"value": 7, "yr": "2014", "loc": "japan"}, {"value": 3, "yr": "2009", "loc": "us"}, {"value": 37, "yr": "2011", "loc": "us"}], "sales_tv": [{"value": 4, "yr": "2010", "loc": "us"}, {"value": 18, "yr": "2011", "loc": "europe"}, {"value": 28, "yr": "2012", "loc": "asia"}], "another_key": "another value"}')
Notice that there are other keys in there besides just "sales_tv" and "sales_radio". For the queries below I just need to focus on "sales_tv" and "sales_radio".
I'm trying to return a list of objects for Jim for anything that starts with "sales_". In each object within the list I just need to return the value and the yr (ignoring "loc" or any other keys), e.g.:
employee | sales_
Jim | {"sales_tv": [{"value": 10, "yr": "2010"}, {"value": 5, "yr": "2011"}, {"value": 40, "yr": "2012"}],
"sales_radio": [{"value": 11, "yr": "2010"}, {"value": 8, "yr": "2011"}, {"value": 76, "yr": "2012"}]}
I am able to get each of the values, but without the year, and not in the list format I'd like:
SELECT t.employee, json_object_agg(a.k, d.value) AS sales
FROM mytable t
, jsonb_each(t.data) a(k,v)
, jsonb_to_recordset(a.v) d(yr text, value float)
WHERE t.employee = 'Jim'
AND a.k LIKE 'sales_%'
GROUP BY 1
Results:
employee | sales
---------- | --------
Jim | { "sales_tv" : 10, "sales_tv" : 5, "sales_tv" : 40, "sales_radio" : 11, "sales_radio" : 8, "sales_radio" : 76 }
The principle is the same as in the question you asked yesterday (this one being yesterday's second query): peel away the layers of hierarchy in your json data, then re-assemble the data you are interested in into whatever new json format you need.
SELECT employee, json_object_agg(k, jarr) AS sales
FROM (
SELECT t.employee, a.k,
json_agg(json_build_object('value', d.value, 'yr', d.yr)) AS jarr
FROM mytable t,
jsonb_each(t.data) a(k, v),
jsonb_to_recordset(a.v) d(yr text, value float)
WHERE t.employee = 'Jim'
AND a.k like 'sales_%'
GROUP BY 1, 2) sub
GROUP BY 1;
In the FROM clause you break down the JSON hierarchy with functions like jsonb_each and jsonb_to_recordset. As the latter's name already implies, each of these produces a set of records that you can work with just as you would with any other table and its columns. In the select list you pick out the required data and apply the appropriate aggregate functions, json_agg and json_object_agg, to piece the JSON result back together. For every level of hierarchy you need one aggregate function, and therefore one level of sub-query.

CouchDB: query reduced value on complex key with timeframe

An application user can perform different tasks, each kind of task having a unique identifier, and each user activity is recorded to the database.
So we have the following Event entity kept in the database:
{
"user_id": 1,
"task_id": 2,
"event_dt": [
2013, 11, 15, 10, 0, 0, 0
]
}
I need to know how many tasks of each type were performed by a particular user during a particular timeframe. The timeframe might be quite long (e.g. a rolling chart for the last year is requested).
For better understanding, the map function might be something like:
emit([doc.user_id, doc.task_id, doc.event_dt], 1)
and it might be queried using group_level=2 (or group_level=1 in case just the number of user events is needed).
Is it possible to answer the above question with a single view query using the map/reduce mechanism? Or do I have to use list functionality (though that may cause performance issues)?
Just use the flat key [doc.user_id, doc.task_id].concat(doc.event_dt), i.e. emit([doc.user_id, doc.task_id].concat(doc.event_dt), 1), since it simplifies the request and grouping logic:
with group_level=1: you get the number of tasks per user for all time
with group_level=2: the count per task id per user for all time
with group_level=3: the same as above, but within a specific year
with group_level=4: the same as above, but also grouped by month
etc. down to days, hours, minutes and seconds
For instance, the result for group_level=3 may be:
{"rows":[
{"key": ["user1", "task1", 2012], "value": 3},
{"key": ["user1", "task2", 2013], "value": 14},
{"key": ["user1", "task3", 2013], "value": 15},
{"key": ["user2", "task1", 2012], "value": 9},
{"key": ["user2", "task4", 2012], "value": 26},
{"key": ["user2", "task4", 2013], "value": 53},
{"key": ["user3", "task1", 2013], "value": 5}
]}