Combine two cron expressions - quartz-scheduler

I have the following cron expression in Siddhi (WSO2 DAS):
define trigger periodicalTriggerStream at '0 0/15 * * * ?';
This expression runs without problems, firing every 15 minutes:
15, 30, 45, ...
I also need my trigger to fire when I start Siddhi, i.e. at:
0, 15, 30, 45
Is it possible to combine two expressions, like this?
define trigger periodicalTriggerStream at '0 0/15 * * * ?';
define trigger periodicalTriggerStream at 'start';

Event triggers generate events on an event stream with the same name as the trigger, having a single attribute named "triggered_time" of type long. Once a trigger emits an event, it behaves just like an event stream, so we can route both the cron events and the start event into one stream and use that:
define trigger cronTriggerStream at '0 0/15 * * * ?';
define trigger startTriggerStream at 'start';
from cronTriggerStream
insert into periodicalTriggerStream;
from startTriggerStream
insert into periodicalTriggerStream;
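For completeness, here is a hedged sketch of consuming the merged stream; the downstream stream name printStream below is illustrative, not part of the original answer. Every trigger event carries the single triggered_time (long) attribute, so the merged stream can be queried like any other stream:
from periodicalTriggerStream
select triggered_time
insert into printStream;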

Related

Dynamics SL: Adding Custom Tables

So I have been trying to add a custom table to a preexisting screen in Dynamics SL and I can't seem to get anything to work. Currently I have this in the Form_Load event:
Private Sub Form1_Load()
    ' Register the buffer/null record structures and open a SQL cursor for the custom table
    Call VBA_SetAddr("bSOShipLot_Alias", bSOShipLot_Alias, nSOShipLot_Alias, LenB(bSOShipLot_Alias))
    Call SqlCursorEx(CSR_SOShipLot_Alias, NOLEVEL, "CSR_SOShipLot_Alias", "SOShipLot_Alias", "SOShipLot_Alias")
End Sub
I tried adding a cursor variable to the module page:
Public CSR_SOShipLot_Alias As Integer
but this just crashes the screen. The documentation for this sort of thing is scarce; I've looked through all the SDK documentation we have and come up with almost nothing pertaining to this.
To add a new table through the Customization Manager, do the following:
1. Add a module file containing the definition of the custom table you're trying to add (see the example below):
Option Explicit
Attribute VB_Name = "FCGenKeyValDH"

Type FCGenKeyVal
    Comments As String * 250
    Crtd_DateTime As SDate
    Crtd_Prog As String * 8
    Crtd_User As String * 10
    Key1 As String * 30
    Key2 As String * 30
    Key3 As String * 30
    LinkedTable As String * 30
    LUpd_DateTime As SDate
    LUpd_Prog As String * 8
    LUpd_User As String * 10
    Purpose As String * 30
    Id As Long
    Status As String * 1
    User1 As String * 30
    User2 As String * 30
    User3 As Double
    User4 As Double
    User5 As String * 10
    User6 As String * 10
    User7 As SDate
    User8 As SDate
    Value As String * 30
End Type

Public bFCGenKeyVal As FCGenKeyVal, nFCGenKeyVal As FCGenKeyVal
2. In the Form_Load() event, add the reference to the buffer table and the SQL cursor:
Call VBA_SetAddr("bFCGenKeyVal", bFCGenKeyVal, nFCGenKeyVal, LenB(bFCGenKeyVal))
Call SqlCursorEx(CSR_FCGenKeyVal, NOLEVEL, "CSR_FCGenKeyVal", "FCGenKeyVal", "FCGenKeyVal")
3. In the declaration file, add the cursor variable:
Public CSR_FCGenKeyVal As Integer
4. Save changes, close the screen, and reopen it. The table should appear in the Add Object wizard, which tells you the custom table is linked to the screen.

IIF in Postgres

I am attempting to convert an MS Access query to a Postgres statement so I can use it in SSRS. It seems to work great except for the IIF statement.
SELECT labor_sort_1.ncm_id
,IIf(labor_sort_1.sortby_employeeid = 3721
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops)
, IIf(labor_sort_1.sortby_employeeid = 3722
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops)
, IIf(labor_sort_1.sortby_employeeid = 3755, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops)
, ((labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops)))) AS labor_cost
FROM ...
It returns the following message:
function iif(boolean, interval, interval) does not exist
How would I solve this problem?
You'll need to switch the logic over to a CASE expression. CASE expressions are standard across most RDBMSs, so they're worth learning. In your case (pun intended) it would translate to:
CASE
    WHEN labor_sort_1.sortby_employeeid = 3721
        THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops
    WHEN labor_sort_1.sortby_employeeid = 3722
        THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
    WHEN labor_sort_1.sortby_employeeid = 3755
        THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
    ELSE
        (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops
END AS labor_cost
This is a lot cleaner since you don't have to wrestle with nested iif() calls, and if you ever need to add more employee IDs to the list of hard-coded labor costs, it's no big deal.
You might also find it advantageous to use the IN condition instead, so you only need two WHEN clauses:
CASE
    WHEN labor_sort_1.sortby_employeeid = 3721
        THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 29 * labor_sort_1.number_of_ops
    WHEN labor_sort_1.sortby_employeeid IN (3722, 3755)
        THEN (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 24 * labor_sort_1.number_of_ops
    ELSE
        (labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime) * 24 * 17 * labor_sort_1.number_of_ops
END AS labor_cost
Also, you could move the CASE expression into the equation so the logic only needs to determine whatever number you wish to multiply by:
(labor_sort_1.MaxUpdatedAt - labor_sort_1.MinNCMScanTime)
* 24
* CASE
WHEN labor_sort_1.sortby_employeeid = 3721 THEN 29
WHEN labor_sort_1.sortby_employeeid IN (3722,3755) THEN 24
ELSE 17
END
* labor_sort_1.number_of_ops AS labor_cost
Same as #Daniel's answer, but generalizing to any datatype.
CREATE or replace FUNCTION iIF(
    condition boolean,       -- IF condition
    true_result anyelement,  -- THEN
    false_result anyelement  -- ELSE
) RETURNS anyelement AS $f$
  SELECT CASE WHEN condition THEN true_result ELSE false_result END
$f$ LANGUAGE SQL IMMUTABLE;

SELECT iif(0=1, 1, 2);
SELECT iif(0=0, 'Hello'::text, 'Bye'); -- need to say that the string is text
This is handy when you are building a public snippets library.
NOTE about IMMUTABLE and "PL/pgSQL vs SQL".
The IMMUTABLE clause is very important for code snippets like this because, as the Guide says, it "allows the optimizer to pre-evaluate the function when a query calls it with constant arguments".
PL/pgSQL is the preferred language, except for "pure SQL" functions like this one. For JIT optimization (and sometimes for parallelism) SQL functions can be optimized better: it is something like copy/pasting a small piece of code instead of making a function call.
Important conclusion: after optimization this function is as fast as the approach in #JNevill's answer; it will compile to (exactly) the same internal representation. So, although IIF is not standard for PostgreSQL, it can be standard for your projects through a centralized and reusable "library of snippets", like pg_pubLib.
I know this has been sitting around for a while, but another option is to create a user-defined function. If you happen to stumble upon this in your internet searches, this may be a solution for you.
CREATE FUNCTION IIF(
    condition boolean, true_result TEXT, false_result TEXT
) RETURNS TEXT LANGUAGE plpgsql AS $$
BEGIN
    IF condition THEN
        RETURN true_result;
    ELSE
        RETURN false_result;
    END IF;
END
$$;

SELECT IIF(2=1, 'dan the man', 'false foobar');
Should text not tickle your fancy, then try function overloading, as sketched below.
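For instance, here is a hedged sketch of an overloaded variant for integers (the signature below is illustrative, not from the original answer; PostgreSQL picks the right function based on the argument types):

CREATE FUNCTION IIF(
    condition boolean, true_result INTEGER, false_result INTEGER
) RETURNS INTEGER LANGUAGE plpgsql AS $$
BEGIN
    IF condition THEN
        RETURN true_result;
    ELSE
        RETURN false_result;
    END IF;
END
$$;

SELECT IIF(2=1, 10, 20); -- returns 20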

PipelineDB: continuous view output stream unexpectedly shows same (old) and (new) values

I am using PipelineDB 0.9.7u3.
I played a little with continuous view output streams to find out whether I could build a new continuous view from just some of the updates.
This is my test case.
CREATE STREAM stream_test(ticketid text, val int, status text);
-- simple continuous view on stream_test
CREATE CONTINUOUS VIEW cv_test AS
SELECT
ticketid,
min(val) as v0,
keyed_min(val, status) as v0_status
FROM stream_test
GROUP BY ticketid;
-- continuous view to keep cv_test's updates and insertions
CREATE CONTINUOUS VIEW cv_test_upin AS
SELECT
(new).ticketid,
(old).v0 as oldV0,
(old).v0_status as oldV0Status,
(new).v0 as newV0,
(new).v0_status as newV0Status
FROM output_of('cv_test');
-- continuous view to keep just some cv_test's updates
CREATE CONTINUOUS VIEW cv_test_up AS
SELECT
(new).ticketid,
(old).v0 as oldV0,
(old).v0_status as oldV0Status,
(new).v0 as newV0,
(new).v0_status as newV0Status
FROM output_of('cv_test')
WHERE (old).v0 != (new).v0;
Let's put some data.
INSERT INTO stream_test VALUES
('t1', 124, 'open'),
('t2', 190, 'pending');
And as expected:
select * from cv_test;
"t2";190;"pending"
"t1";124;"open"
select * from cv_test_upin;
"t2";;"";190;"pending"
"t1";;"";124;"open"
select * from cv_test_up;
Then, some updates.
INSERT INTO stream_test VALUES
('t2', 160, 'waiting'),
('t1', 100, 'pending');
And as expected:
select * from cv_test;
"t2";160;"waiting"
"t1";100;"pending"
select * from cv_test_upin;
"t2";;"";190;"pending"
"t1";;"";124;"open"
"t2";190;"pending";160;"waiting"
"t1";124;"open";100;"pending"
select * from cv_test_up;
"t2";190;"pending";160;"waiting"
"t1";124;"open";100;"pending"
Now, some new data and some updates.
INSERT INTO stream_test VALUES
('t2', 90, 'spam'),
('t3', 140, 'open'),
('t1', 80, 'closed');
select * from cv_test; returned as expected, but select * from cv_test_upin; did not:
...
"t2";160;"waiting";90;"spam"
"t3";;"";140;"open"
"t1";80;"closed";80;"closed"
I expected the last "t1" row to be "t1";100;"pending";80;"closed".
Bug or expected behaviour?
Thanks.
After digging into this, it appears that you have indeed discovered some unexpected behavior, and most likely it's a bug. We are going to resolve it shortly; here is the issue:
https://github.com/pipelinedb/pipelinedb/issues/1797
After it's resolved we will publish an updated release.

Esper: "time_order" seems not working

I'm using Esper (the event processing engine), the EPL query is:
select * from Event.ext:time_order(timestamp_event, 10000 minutes) where duration > 10
But the output is not ordered by "timestamp_event":
id int = 1, timestamp_event= 1412686800000, duration = 30
id int = 4, timestamp_event= 1412685900000, duration = 70
id int = 2, timestamp_event= 1412688600000, duration = 45
id int = 3, timestamp_event= 1412689500000, duration = 60
id int = 5, timestamp_event= 1412636400000, duration = 15
Why doesn't the "time_order(timestamp_event, 10000 minutes)" instruction work?
I think the problem is in the Esper configuration; let's consider a simple query:
select * from Event.win:time(10 sec) order by id_event
This is the code of the "update" method of the UpdateListener:
public void update(EventBean[] newEvents, EventBean[] oldEvents) {
    EventBean event = newEvents[0];
    System.out.println("id event = " + event.get("id_event")
            + ", timestamp_event = " + ((Long) event.get("timestamp_event")).toString());
}
But the output is not ordered by "id_event"!
id event = 1, timestamp_event = 1412686800000
id event = 4, timestamp_event = 1412687700000
id event = 2, timestamp_event = 1412687100000
id event = 3, timestamp_event = 1412687400000
id event = 5, timestamp_event = 1412688000000
It seems the "order by" clause doesn't work either; how is that possible?
The documentation says to select the rstream, since the events leaving the window are ordered, not the events entering it. See http://esper.codehaus.org/esper-5.0.0/doc/reference/en-US/html_single/index.html#view-time-order
You need to define some sort of time or length constraint. Your statement is simply returning all the events entering the time_order window.
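For instance (a minimal sketch, not from the original answer, with a shorter window just for illustration), selecting the remove stream of the ordered window emits events in timestamp order as they leave the window:

select rstream * from Event.ext:time_order(timestamp_event, 10 sec) where duration > 10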
A statement like this for example would give you all the events in the right order, every 1 minute:
select * from Event.ext:time_order(timestamp_event, 10000 minutes)
where duration > 10 output snapshot every 1 minute
Or, you could define a data window and insert events into it like this:
create window OrderedEvents.ext:time_order(timestamp_event, 10000 minutes) as select * from Event;
insert into OrderedEvents select * from Event;
You can then use ad-hoc queries against it, and they will return the events in the correct order (although you could achieve the same with a win:time(10000 minutes) and adding an order by timestamp_event to your ad-hoc query).
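For example, a fire-and-forget query against that named window might look like this (a sketch only, executed via the runtime's executeQuery API):

select * from OrderedEvents where duration > 10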

Quartz scheduler cron

I have a job to be scheduled every "min" minutes ("min" is a variable that indicates the number of minutes).
I tried the following syntax for the Quartz scheduler:
String expr = "0 0/"+min+" * * * ?";
The problem is that the job only fires after "min" minutes have passed, and I want it to be scheduled NOW and then every "min" minutes.
Can someone help me please?
Thanks in advance,
Regards
I found a solution by using this expression:
String expr = "2 "+Integer.toString(Calendar.getInstance().get(Calendar.MINUTE)+1)+"/"+min+" * * * ?";
in order to schedule it "about" now and then every min minutes.
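Alternatively, if you are on Quartz 2.x, a plain SimpleTrigger avoids the cron arithmetic entirely. This is only a hedged sketch, not part of the original solution; jobDetail and scheduler are assumed to exist elsewhere:

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;
import org.quartz.Trigger;

Trigger trigger = newTrigger()
        .startNow()                          // fire immediately
        .withSchedule(simpleSchedule()
                .withIntervalInMinutes(min)  // then repeat every "min" minutes
                .repeatForever())
        .build();
scheduler.scheduleJob(jobDetail, trigger);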
Hope it helps.
Bye