Execute every 5 minutes with Chime scheduler

I am new to Vert.x. My application needs a scheduler that checks a status every five minutes. I am trying to use Chime, which is a time scheduler verticle. The documentation shows how to listen to time events and create a scheduler with this piece of code:
eventBus.send<JsonObject>(
    "chime",
    JsonObject {
        "operation" -> "create",
        "name" -> "my scheduler:my timer",
        "description" -> JsonObject {
            "type" -> "cron",
            "seconds" -> "0",
            "minutes" -> "30",
            "hours" -> "16",
            "days of month" -> "*",
            "months" -> "*",
            "days of week" -> "Sunday"
        }
    }
);
where * represents "any". How can I configure the eventBus message so that the scheduler executes every five minutes?

For such a simple trigger, you could use an interval timer:
{
    "type": "interval",
    "delay": 300
}
The delay is a value in seconds.
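Chime accepts the same JSON messages from any Vert.x language, so if you are not writing Ceylon, a minimal Java sketch could look like the following (this assumes the Chime verticle is already deployed and listening on the "chime" address, and that fire events are delivered to the timer's full name; the scheduler and timer names here are just examples):
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class FiveMinuteTimer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Ask Chime to create an interval timer that fires every 300 seconds (5 minutes).
        JsonObject createTimer = new JsonObject()
            .put("operation", "create")
            .put("name", "my scheduler:five minute timer")
            .put("description", new JsonObject()
                .put("type", "interval")
                .put("delay", 300));

        vertx.eventBus().send("chime", createTimer);

        // Listen for the timer fire events on the timer's full name address.
        vertx.eventBus().<JsonObject>consumer("my scheduler:five minute timer",
            message -> System.out.println("timer fired: " + message.body()));
    }
}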

Related

Set delay for tumbling window trigger on a pipeline

I'm trying to create a tumbling window trigger that runs every 1 hour, with a 10-minute delay before the pipeline starts executing.
I created a test trigger with time interval of 5 minutes and delay of 10 minutes.
I expected the pipeline to run every 15 minutes (5 min interval + 10 min delay).
What I actually see in the Monitor section (Pipeline runs and Trigger runs) is that it runs every 5 minutes.
Shouldn't the delay postpone the pipeline execution?
Am I doing something wrong here?
Updated
Here's my trigger template:
{
    "name": "[concat(parameters('factoryName'), '/trigger_test')]",
    "type": "Microsoft.DataFactory/factories/triggers",
    "apiVersion": "2018-06-01",
    "properties": {
        "annotations": [],
        "runtimeState": "Started",
        "pipeline": {
            "pipelineReference": {
                "referenceName": "exportData",
                "type": "PipelineReference"
            },
            "parameters": {}
        },
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Minute",
            "interval": 5,
            "startTime": "2021-07-25T07:46:00Z",
            "delay": "00:10:00",
            "maxConcurrency": 50,
            "retryPolicy": {
                "intervalInSeconds": 30
            },
            "dependsOn": []
        }
    },
    "dependsOn": [
        "[concat(variables('factoryId'), '/pipelines/exportData')]"
    ]
}
I haven't found a concrete example, and the docs are not very clear in terms of terminology.
From what I understand, when one trigger window finishes running, the next trigger window starts running regardless of the delay specified.
According to the docs, "the delay doesn't alter the window startTime", which I assume means what I have described above.
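That reading matches what the Monitor shows for the test trigger (5-minute interval, 10-minute delay): each window still opens every interval, and the delay only pushes back when that window's run starts, so runs remain 5 minutes apart. A rough illustration of that interpretation, just the date arithmetic and not ADF's actual scheduler:
import java.time.Duration;
import java.time.OffsetDateTime;

public class TumblingWindows {
    public static void main(String[] args) {
        OffsetDateTime startTime = OffsetDateTime.parse("2021-07-25T07:46:00Z");
        Duration interval = Duration.ofMinutes(5);   // "frequency": "Minute", "interval": 5
        Duration delay = Duration.ofMinutes(10);     // "delay": "00:10:00"

        for (int i = 0; i < 4; i++) {
            OffsetDateTime windowStart = startTime.plus(interval.multipliedBy(i));
            OffsetDateTime windowEnd = windowStart.plus(interval);
            // Assumption: a window's run is kicked off at windowEnd + delay, so consecutive
            // runs are still <interval> apart; the delay shifts them, it does not stack.
            OffsetDateTime runStart = windowEnd.plus(delay);
            System.out.printf("window [%s, %s) -> run starts at %s%n",
                    windowStart, windowEnd, runStart);
        }
    }
}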

Azure Data Factory trigger: pipeline has to trigger every December on the second Friday from the end of the month

A pipeline has to trigger every December on the second Friday from the end of the month.
I am trying to do this using a scheduled trigger in ADF (see Trigger Definition), with:
Start date of Dec 1st 2021
Recurrence of 12 months
No end date
Advanced recurrence option of weekdays, with occurrence as -2 and day as Friday.
"name": "Dec_Last_But_One_Friday",
"properties": {
"annotations": [],
"runtimeState": "Stopped",
"pipelines": [
{
"pipelineReference": {
"referenceName": "pipeline_test_triggers",
"type": "PipelineReference"
}
}
],
"type": "ScheduleTrigger",
"typeProperties": {
"recurrence": {
"frequency": "Month",
"interval": 12,
"startTime": "2021-12-01T14:24:00Z",
"timeZone": "UTC",
"schedule": {
"monthlyOccurrences": [
{
"day": "Friday",
"occurrence": -2
}
]
}
Is this the right way? How do I know that it will definitely trigger every year on the second Friday from the end of December? Is there a way to see the future schedules in ADF?
Thank you!
Viewing future ADF schedules: this feature does not currently exist in Data Factory V2.
Since the advanced recurrence options are not perfect, it is better to check once a year.
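One way to sanity-check what occurrence -2 resolves to for a given year is to run the calendar arithmetic locally; this only verifies the date math, not ADF's scheduler itself (a rough Java sketch):
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class SecondFridayFromEnd {
    public static void main(String[] args) {
        for (int year = 2021; year <= 2025; year++) {
            // dayOfWeekInMonth(-2, FRIDAY) resolves to the second-to-last Friday of the
            // month, which is what "occurrence": -2, "day": "Friday" is meant to express.
            LocalDate date = LocalDate.of(year, 12, 1)
                    .with(TemporalAdjusters.dayOfWeekInMonth(-2, DayOfWeek.FRIDAY));
            System.out.println(year + " -> " + date);
        }
    }
}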

GSuite Integration Admin SDK Report API date format

I am currently working with the GSuite Admin SDK Report API. I am able to send the request successfully and get the response.
Now, the issue is that I am not able to identify the date format returned by Activities.list().
Here is a snippet:
"events": [
{
"type": "event_change",
"name": "create_event",
"parameters": [
{
"name": "event_id",
"value": "jdlvhwrouwovhuwhovvwuvhw"
},
{
"name": "organizer_calendar_id",
"value": "abc#xyz.com"
},
{
"name": "calendar_id",
"value": "abc#xyz.com"
},
{
"name": "target_calendar_id",
"value": "abc#xyz.com"
},
{
"name": "event_title",
"value": "test event 3"
},
{
"name": "start_time",
"intValue": "63689520600"
},
{
"name": "end_time",
"intValue": "63689524200"
},
{
"name": "user_agent",
"value": "Mozilla/5.0"
}
]
}
]
Note: please have a look at start_time and end_time and let me know if you have any idea about their format.
Please share some info, and let me know if any other information is needed.
I ran into this same question when parsing google calendar logs.
The time format they use is the number of seconds since January 1st, 0001 (0001-01-01).
I never found documentation where they referenced that time format. Google uses this instead of epoch for some of their app logs.
You can find an online calculator here https://www.epochconverter.com/seconds-days-since-y0
Use the one under "Seconds Since 0001-01-01 AD" and not the one under year zero.
So your start_time of "63689520600" converts to March 30, 2019 5:30:00 AM GMT.
If you want start_time to be in epoch time, you can subtract 62135596800 seconds from the number: 62135596800 is January 1, 1970 12:00:00 AM (the Unix epoch) when counted as seconds since 0001-01-01, so subtracting it from start_time gives you the number of seconds since the epoch.
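For example, in Java the conversion described above could look like this (assuming the values really are seconds counted from 0001-01-01, which is what the 62135596800-second offset implies):
import java.time.Instant;

public class GSuiteTimestamps {
    // Seconds between 0001-01-01T00:00:00Z and the Unix epoch (1970-01-01T00:00:00Z).
    private static final long EPOCH_OFFSET_SECONDS = 62_135_596_800L;

    public static void main(String[] args) {
        long startTime = 63_689_520_600L;  // "start_time" intValue from the report
        Instant instant = Instant.ofEpochSecond(startTime - EPOCH_OFFSET_SECONDS);
        System.out.println(instant);       // prints 2019-03-30T05:30:00Z
    }
}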
Hope this helps.

Fiware Orion CB subscriptions throttling

I have created this subscription:
curl localhost:1026/v2/subscriptions -s -S -H 'Accept: application/json' | python -mjson.tool
[
    {
        "description": "Update room temperature",
        "expires": "2020-04-05T14:00:00.00Z",
        "id": "5b104ace028f2284c5517f51",
        "notification": {
            "attrs": [
                "temperature"
            ],
            "attrsFormat": "normalized",
            "http": {
                "url": "http://MyUrl/getSub"
            },
            "lastNotification": "2018-05-31T19:19:42.00Z",
            "metadata": [
                "5b019ae132232812eccb6d50",
                "device",
                "16",
                "Auto",
                "30",
                "greater"
            ],
            "timesSent": 1
        },
        "status": "active",
        "subject": {
            "condition": {
                "attrs": [
                    "temperature"
                ]
            },
            "entities": [
                {
                    "id": "5aff0eef23102126a4aeeea2",
                    "type": "room"
                }
            ]
        },
        "throttling": 60
    }
]
Even though I have set the throttling to 60 (1 minute, if I understand it right), when I change the value of the temperature, Orion sends me a notification even if the change happened before the one-minute mark (for example, when I change the temperature value every 10 seconds). Shouldn't a notification be sent only if a change occurred after 60 seconds, or am I misunderstanding something?
What you expect seems to be the right behaviour: if the subscription has a throttling of 60 seconds, you shouldn't receive any new notification until 60 seconds have passed since the previous one.
Possible causes:
You have another subscription in place that is being triggered. But I understand this is not the case, as such a subscription would be shown in the GET /v2/subscriptions result.
There is a bug in Orion which causes throttling to be ignored. In that case, it would be interesting to run the same test with a subscription created using NGSIv1 (POST /v1/subscribeContext) in order to know the reach of the bug.
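As a reference for that second test, a minimal Java sketch posting roughly the same subscription through the NGSIv1 endpoint might look like this (the entity id/type, attribute and notification URL are taken from the subscription above; the duration is illustrative):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubscribeV1 {
    public static void main(String[] args) throws Exception {
        // NGSIv1 equivalent of the NGSIv2 subscription shown above, with a 60-second
        // throttling expressed as an ISO 8601 duration ("PT60S").
        String payload = """
            {
              "entities": [ { "type": "room", "isPattern": "false", "id": "5aff0eef23102126a4aeeea2" } ],
              "attributes": [ "temperature" ],
              "reference": "http://MyUrl/getSub",
              "duration": "P1M",
              "notifyConditions": [ { "type": "ONCHANGE", "condValues": [ "temperature" ] } ],
              "throttling": "PT60S"
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:1026/v1/subscribeContext"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}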

Old notifications pushed immediately when subscription is created

I'm using Fiware Orion Context Broker version 0.20.
I notice that when I create a context subscription, my provided endpoint immediately gets notified about changes to the corresponding context elements that happened before I created the subscription.
To clarify: (note: I used these steps with a clean database)
I started the accumulator from the test package /usr/share/contextBroker/tests/accumulator-server.py 1028 /accumulate on
Created a context element using http://localhost:1026/v1/updateContext
{
    "contextElements": [
        {
            "type": "Room",
            "isPattern": "false",
            "id": "Room1",
            "attributes": [
                {
                    "name": "temperature",
                    "value": "20"
                }
            ]
        }
    ],
    "updateAction": "APPEND"
}
Then I created the subscription using http://localhost:1026/v1/subscribeContext
{
    "entities": [
        {
            "type": "Room",
            "isPattern": "true",
            "id": ".*"
        }
    ],
    "attributes": [
        "temperature"
    ],
    "reference": "http://localhost:1028/accumulate",
    "duration": "P1M",
    "notifyConditions": [
        {
            "type": "ONCHANGE",
            "condValues": [
                "temperature"
            ]
        }
    ],
    "throttling": "PT5S"
}
I immediately receive the following content in the accumulator
POST http://localhost:1028/accumulate
Content-Length: 472
User-Agent: orion/0.20.0 libcurl/7.19.7
Host: localhost:1028
Accept: application/xml, application/json
Content-Type: application/json
{
    "subscriptionId" : "55521671985dc3976b879780",
    "originator" : "localhost",
    "contextResponses" : [
        {
            "contextElement" : {
                "type" : "Room",
                "isPattern" : "false",
                "id" : "Room1",
                "attributes" : [
                    {
                        "name" : "temperature",
                        "type" : "",
                        "value" : "20"
                    }
                ]
            },
            "statusCode" : {
                "code" : "200",
                "reasonPhrase" : "OK"
            }
        }
    ]
}
Furthermore, if I create multiple context elements before adding the subscription, they are all part of the contextResponses in the notification.
For my use case, this behavior is undesirable. The subscriptions are very dynamic (they come and go often throughout the lifecycle of the application), and I do not want the entire history every time I create a subscription. I only want to be notified about changes starting from the moment I created the subscription, not a history.
Did I overlook something in the documentation and can I resolve this by changing the contents of the subscription request? If not, is this generally accepted behavior for the context broker or just a plain bug?
It is the expected behaviour, as described in the manual:
You may wonder why accumulator-server.py is getting this message if you don't actually do any update. This is because the Orion Context Broker considers the transition from "non existing subscription" to "subscribed" as a change.
We understand that for some use cases this is not convenient. However, behaving in the opposite way would ruin other use cases which need to know the "initial state" before starting to get notifications corresponding to actual changes. The best solution to make everybody happy is to make this configurable, so each client can choose what it prefers. This feature is currently in our roadmap (see this issue in github.com).
While this gets implemented in Orion, in your case a possible workaround is to just ignore the first notification received for a subscription (you can identify the subscription a notification belongs to by the subscriptionId field in the notification payload). All the following notifications belonging to that subscription will correspond to actual changes.
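A minimal sketch of that workaround, keeping a set of subscription ids whose initial notification has already been dropped (the class and method names are just illustrative; a real receiver would need to persist the set across restarts):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class InitialNotificationFilter {
    // Subscription ids for which the first (initial) notification has already been dropped.
    private final Set<String> seenSubscriptions = ConcurrentHashMap.newKeySet();

    /**
     * Call with the "subscriptionId" field of each incoming notification payload.
     * Returns false for the first notification of a subscription (the initial one
     * Orion sends on creation) and true for every later one.
     */
    public boolean shouldProcess(String subscriptionId) {
        // Set.add returns true only when the id was not present before.
        return !seenSubscriptions.add(subscriptionId);
    }
}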
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).