Not sure why Google Calendar is returning the wrong events by date

I've been trying to make use of the Google Calendar API. It uses RFC 3339 timestamps, so I've started by modifying the provided example, 2011-06-03T10:00:00-07:00.
Right now, I'm trying to fetch the events between two dates that I've modified manually (maybe that's where the error is?):
timeMin: 2018-05-26T10:00:00-07:00
timeMax: 2018-06-03T10:00:00-07:00
However, the returned events are not even within that date range. For example, these are the start and end of one of the objects returned:
"start": {
"dateTime": "2018-04-30T18:00:00-04:00",
"timeZone": "America/New_York"
},
"end": {
"dateTime": "2018-04-30T19:00:00-04:00",
"timeZone": "America/New_York"
},
Why is this happening? There are even some events from 2017.

Your calendar is probably in a different time zone. The returned timestamp indicates a -04:00 offset, while you originally set the range at -07:00.
Modify it to look like this:
{
"end": {
"dateTime": "2012-07-12T10:30:00.0z"
},
"start": {
"dateTime": "2012-07-12T09:30:00.0z"
}
}
The trailing z denotes UTC (Zulu time), so the range no longer depends on the calendar's time zone.
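If the request is built in code rather than by editing timestamps by hand, generating the RFC 3339 bounds programmatically avoids offset mistakes. A minimal Python sketch (the commented events.list call only illustrates where timeMin/timeMax go; credentials and the service object are assumed to exist already):

from datetime import datetime, timezone

# Build the query window in UTC so it does not depend on whichever
# offset the calendar itself uses.
time_min = datetime(2018, 5, 26, 10, 0, 0, tzinfo=timezone.utc).isoformat()
time_max = datetime(2018, 6, 3, 10, 0, 0, tzinfo=timezone.utc).isoformat()
# -> "2018-05-26T10:00:00+00:00" / "2018-06-03T10:00:00+00:00"

# events = service.events().list(calendarId="primary",
#                                timeMin=time_min, timeMax=time_max,
#                                singleEvents=True,
#                                orderBy="startTime").execute()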

Related

AWS EventBridge Input Transform to change date format

I'm trying to call my own API method to monitor AWS EC2 status directly from an EventBridge rule.
For that, I need to use an input transformer to adjust the payload to the one I have implemented in my API (I don't want to change the API interface, since it is also used for other clouds).
But what I can't find is how to change the date format...
{
"version": "0",
"id": "7bf73129-1428-4cd3-a780-95db273d1602",
"detail-type": "EC2 Instance State-change Notification",
"source": "aws.ec2",
"account": "123456789012",
"time": "2015-11-11T21:29:54Z",
"region": "us-east-1",
"resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"],
"detail": {
"instance-id": "i-abcd1111",
"state": "pending"
}
}
Can I map this "time" field to the corresponding Unix time?
For example, "time": "1659476412627"
Regards
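For reference, the conversion being asked about (RFC 3339 to Unix time in milliseconds, like the 1659476412627 example) is simple to do in code if it can happen on the receiving side or in a small Lambda in front of the API; that placement is an assumption rather than something the input transformer is confirmed to support. A Python sketch:

from datetime import datetime, timezone

def iso_to_unix_millis(ts: str) -> str:
    """Convert an EventBridge-style timestamp ("2015-11-11T21:29:54Z")
    to Unix time in milliseconds as a string."""
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return str(int(dt.timestamp() * 1000))

print(iso_to_unix_millis("2015-11-11T21:29:54Z"))  # -> 1447277394000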

G Suite Integration Admin SDK Reports API date format

Hi guys, I am currently working with the G Suite Admin SDK Reports API. I am able to send the request and get the response successfully.
Now, the issue is that I am not able to identify the date format returned by Activities.list().
Here is a snippet:
"events": [
{
"type": "event_change",
"name": "create_event",
"parameters": [
{
"name": "event_id",
"value": "jdlvhwrouwovhuwhovvwuvhw"
},
{
"name": "organizer_calendar_id",
"value": "abc#xyz.com"
},
{
"name": "calendar_id",
"value": "abc#xyz.com"
},
{
"name": "target_calendar_id",
"value": "abc#xyz.com"
},
{
"name": "event_title",
"value": "test event 3"
},
{
"name": "start_time",
"intValue": "63689520600"
},
{
"name": "end_time",
"intValue": "63689524200"
},
{
"name": "user_agent",
"value": "Mozilla/5.0"
}
]
}
]
Note: Please have a look at start_time and end_time and let me know if you have any idea about the format.
Please share some info, and let me know if any other information is needed.
I ran into this same question when parsing Google Calendar logs.
The time format they use is the number of seconds since January 1st, 0001 (0001-01-01).
I never found documentation where they reference that time format. Google uses this instead of epoch time for some of their app logs.
You can find an online calculator here https://www.epochconverter.com/seconds-days-since-y0
Use the one under "Seconds Since 0001-01-01 AD" and not the one under year zero.
So your start_time of "63689520600" converts to March 30, 2019 5:30:00 AM GMT.
If you want start_time to be in epoch, you could subtract 62135596800 seconds from the number. 62135596800 converts to January 1, 1970 12:00:00 AM when counting the number of seconds since 0001-01-01. Subtracting 62135596800 from the start_time would give you the number of seconds since January 1, 1970 12:00:00 AM AKA Epoch Time.
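In code, the subtraction described above is all that's needed; a minimal Python sketch (the helper name is only illustrative):

from datetime import datetime, timezone

SECONDS_0001_TO_EPOCH = 62135596800  # seconds from 0001-01-01 to 1970-01-01

def gsuite_time_to_datetime(value: str) -> datetime:
    """Convert a Reports API intValue (seconds since 0001-01-01) to a UTC datetime."""
    epoch_seconds = int(value) - SECONDS_0001_TO_EPOCH
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

print(gsuite_time_to_datetime("63689520600"))  # -> 2019-03-30 05:30:00+00:00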
Hope this helps.

Azure Data Factory: copy previous day's data based on SliceStart date

Can somebody let me know how to get the previous day's data (i.e. 2017-07-28, etc.) from my on-premises file system if my pipeline's start and end dates are
"start": "2017-07-29T00:00:00Z",
"end": "2017-08-03T00:00:00Z"
My pipeline's input is "FileSystemSource" and output is "AzureDataLakeStore". I have tried the below JSON in my copy pipeline as input:
"inputs": [
{
"name": "OnPremisesFileInput2"
"startTime": "Date.AddDays(SliceStart, -1)",
"endTime": "Date.AddDays(SliceEnd, -1)"
}
]
I have also tried defining "offset" in the input and output datasets and in the pipeline as follows
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "-1.00:00:00",
"style": "StartOfInterval"
},
"scheduler": {
"frequency": "Day",
"interval": 1,
"offset": "-1.00:00:00",
"style": "StartOfInterval"
},
None of the above seems to be working. Could someone help me?
I think a good strategy here is to think of yesterday's output as today's input. Azure Data Factory lets you run activities one after another in sequence, using different data sources.
There's good documentation here.
With an example like this, you can either have temporary storage in between the two activities, or use your main input data source with a filter to get only yesterday's slice.
Your offset should be positive.
"availability": {
"frequency": "Day",
"interval": 1,
"offset": "01:00:00",
"style": "EndOfInterval"
}
In this case, it will run, for example, on September 7th at 1:00 AM UTC and will process the slice from September 6th 0:00 UTC to September 7th 0:00 UTC, which is yesterday's slice.
Your input dataset should be configured to use the SliceStart for the naming of the file
"partitionedBy": [
{
"name": "Slice",
"value": {
"type": "DateTime",
"date": SliceStart",
"format": "yyyymmdd"
}
}],
"typeProperties": {
"fileName": "{slice}.csv",
}
It would look for a 20170906.csv file when executed on September 7th.

Is there a possibility to have another timestamp as a dimension in Druid?

Is it possible to have a Druid datasource with 2 (or multiple) timestamps in it?
I know that Druid is a time-based DB and I have no problem with the concept, but I'd like to add another dimension that I can work with like a timestamp.
E.g. user retention: the metric is of course tied to a certain date, but I also need to create cohorts based on users' registration dates and roll those dates up to weeks or months, or filter to only certain time periods...
If the functionality is not supported, are there any plug-ins? Any dirty solutions?
Although I'd rather wait for an official implementation of full timestamp-dimension support in Druid, I've found the 'dirty' hack I was looking for.
DataSource Schema
First things first, I wanted to know how many users logged in each day, while being able to aggregate by date/month/year cohorts.
Here's the data schema I used:
"dataSchema": {
"dataSource": "ds1",
"parser": {
"parseSpec": {
"format": "json",
"timestampSpec": {
"column": "timestamp",
"format": "iso"
},
"dimensionsSpec": {
"dimensions": [
"user_id",
"platform",
"register_time"
],
"dimensionExclusions": [],
"spatialDimensions": []
}
}
},
"metricsSpec": [
{ "type" : "hyperUnique", "name" : "users", "fieldName" : "user_id" }
],
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "HOUR",
"queryGranularity": "DAY",
"intervals": ["2015-01-01/2017-01-01"]
}
},
So the sample data should look something like this (each record is a login event):
{"user_id": 4151948, "platform": "portal", "register_time": "2016-05-29T00:45:36.000Z", "timestamp": "2016-06-29T22:18:11.000Z"}
{"user_id": 2871923, "platform": "portal", "register_time": "2014-05-24T10:28:57.000Z", "timestamp": "2016-06-29T22:18:25.000Z"}
As you can see, my "main" timestamp against which I calculate these metrics is the timestamp field, while register_time is only a dimension stored as a string in ISO 8601 UTC format.
Aggregating
And now for the fun part: I've been able to aggregate by timestamp (date) and register_time (date again) thanks to the time format extraction function.
The query looks like this:
{
"intervals": "2016-01-20/2016-07-01",
"dimensions": [
{
"type": "extraction",
"dimension": "register_time",
"outputName": "reg_date",
"extractionFn": {
"type": "timeFormat",
"format": "YYYY-MM-dd",
"timeZone": "Europe/Bratislava" ,
"locale": "sk-SK"
}
}
],
"granularity": {"timeZone": "Europe/Bratislava", "period": "P1D", "type": "period"},
"aggregations": [{"fieldName": "users", "name": "users", "type": "hyperUnique"}],
"dataSource": "ds1",
"queryType": "groupBy"
}
Filtering
The solution for filtering is based on the JavaScript extraction function, with which I can transform the date to Unix time and use it inside, for example, a bound filter:
{
"intervals": "2016-01-20/2016-07-01",
"dimensions": [
"platform",
{
"type": "extraction",
"dimension": "register_time",
"outputName": "reg_date",
"extractionFn": {
"type": "javascript",
"function": "function(x) {return Date.parse(x)/1000}"
}
}
],
"granularity": {"timeZone": "Europe/Bratislava", "period": "P1D", "type": "period"},
"aggregations": [{"fieldName": "users", "name": "users", "type": "hyperUnique"}],
"dataSource": "ds1",
"queryType": "groupBy"
"filter": {
"type": "bound",
"dimension": "register_time",
"outputName": "reg_date",
"alphaNumeric": "true"
"extractionFn": {
"type": "javascript",
"function": "function(x) {return Date.parse(x)/1000}"
}
}
}
I've tried to filter it 'directly' with a JavaScript filter, but I haven't been able to convince Druid to return the correct records, although I've double-checked the function in various JavaScript REPLs. But hey, I'm no JavaScript expert.
Unfortunately, Druid has only one timestamp column that can be used for rollup, and it currently treats all the other columns as strings (except metrics, of course), so you can add another string column with timestamp values, but the only thing you can do with it is filtering.
I guess you might be able to hack it that way.
Hopefully in the future Druid will allow different types of columns, and maybe timestamp will be one of them.
Another solution is to add a longMin-type metric for the timestamp and store the epoch time in that field, or to convert the datetime to a number and store that (e.g. 31st March 2021 08:00 becomes 310320210800).
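A small sketch of that second suggestion in Python, assuming the ddMMyyyyHHmm layout implied by the example (the helper name is only illustrative):

from datetime import datetime

def datetime_to_long(dt: datetime) -> int:
    """Encode a datetime as a number in the ddMMyyyyHHmm layout from
    the example (31st March 2021 08:00 -> 310320210800)."""
    return int(dt.strftime("%d%m%Y%H%M"))

print(datetime_to_long(datetime(2021, 3, 31, 8, 0)))  # -> 310320210800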
As of Druid 0.22, the documentation states that secondary timestamps should be handled/parsed as dimensions of type long. Secondary timestamps can be parsed to longs at ingestion time with a transformSpec and transformed back, if needed, at query time (link).

Query value between two other values in Firebase

If I have the following structure in Firebase:
{
"images": {
"first_image": {
"url": "https://ima.ge/first_image",
"uploader": "john",
"start": "1465920841",
"end": "1466920841"
},
"second_image": {
"url": "https://ima.ge/second_image",
"uploader": "louis",
"start": "1465920987",
"end": "1476920987"
},
"third_image": {
"url": "https://ima.ge/third_image",
"uploader": "peter",
"start": "1465920990",
"end": "1484920990"
}
}
}
Is it possible to query, for example, all the images where the current timestamp is between "start" and "end"? So if the current timestamp (in seconds since the Unix epoch) is 1475920987, it would return second_image and third_image but not first_image, because the timestamp is not in the range of its start and end.
I'm afraid this is not possible, because in other Stack Overflow questions users have pointed out that multiple queries are not possible. So if I want to build a voting system where the user can decide when others are allowed to vote, is it possible to show users only votable images, even if multiple queries are not allowed?
I am using the iOS and Android Firebase SDKs but a language specific answer is not necessary.
You can combine queryStarting(atValue:) and queryEnding(atValue:) with queryOrdered(byChild:) in a single query.
So you need something like:
let query = Database.database().reference().child("images")
    .queryOrdered(byChild: "start")
    .queryStarting(atValue: <startTimestamp>)
    .queryEnding(atValue: <endTimestamp>)
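Since the question says a language-specific answer is not necessary, the same combined range query might look like this with the Firebase Admin SDK for Python (a sketch; it assumes the Admin SDK has already been initialized with a databaseURL, and the bound values are taken from the question's data):

from firebase_admin import db

# App initialization is assumed to have happened elsewhere, e.g.:
# firebase_admin.initialize_app(options={"databaseURL": "https://<project>.firebaseio.com"})

start_timestamp = "1465920841"  # lower bound, from the question's data
end_timestamp = "1484920990"    # upper bound

# Same idea as the Swift snippet above: order by the "start" child and
# bound the range with start_at / end_at (a single-field range query).
images = (
    db.reference("images")
    .order_by_child("start")
    .start_at(start_timestamp)
    .end_at(end_timestamp)
    .get()
)
print(images)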