AWS EventBridge Input Transformer to change date format

I'm trying to call my own API method to monitor AWS EC2 status directly from an EventBridge rule.
For that, I need to use the input transformer to adjust the payload to the one I have implemented in my API (I don't want to change the API interface, since it is also used for other clouds).
But what I can't find out how to do is change the date format. The event looks like this:
{
  "version": "0",
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:29:54Z",
  "region": "us-east-1",
  "resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"],
  "detail": {
    "instance-id": "i-abcd1111",
    "state": "pending"
  }
}
Can I map this "time" field to the corresponding Unix time?
For example: "time": "1659476412627"
Regards
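For reference, an input transformer on a rule target takes roughly this shape (a minimal sketch; the target field names instanceId and state are assumptions about the API payload). InputPathsMap extracts values from the event and InputTemplate substitutes them verbatim, so in this sketch the time value arrives in the same ISO-8601 format it has in the event:

{
  "InputPathsMap": {
    "instance": "$.detail.instance-id",
    "state": "$.detail.state",
    "eventTime": "$.time"
  },
  "InputTemplate": "{\"instanceId\": \"<instance>\", \"state\": \"<state>\", \"time\": \"<eventTime>\"}"
}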

Related

Purchase event has been uploaded via Offline Conversions API, but the Event Set does not show any data

I followed the guide here: https://developers.facebook.com/docs/marketing-api/offline-conversions
Unlike "regular" Event Sets, which includes a "Test Events" tab in its dashboard, offline event sets don't seem to have this feature. You must either upload a CSV or call the API.
However, the offline event set shows no data coming from the API at all; the history tab only shows the CSV uploads, which were "last received 10 days ago". It doesn't even include the test upload I made today.
Is this a bug? How long should I wait for the data to appear in the events manager for my offline event?
Sample call
POST https://graph.facebook.com/v15.0/<offline_event_set_id>/events?access_token=<system_user_access_token>
{
  "upload_tag": "store_data",
  "data": [
    {
      "match_keys": {
        "em": "<hashed>",
        "ph": "<hashed>"
      },
      "currency": "PHP",
      "value": 100,
      "event_id": "test",
      "event_name": "Purchase",
      "event_time": "1669633380",
      "custom_data": {
        "event_source": "in_store"
      },
      "action_source": "physical_store",
      "order_id": "test",
      "data_processing_options": []
    }
  ]
}
The response is as follows:
{"id":"<offline_event_set_id>","num_processed_entries":1}
This seems to indicate that the event was uploaded successfully, but that event never shows up in the Overview tab of that offline event set.
Would appreciate any insights/guides elsewhere/answers, I've spent a few days on this with no success.
The "error": I was encoding the event_time as a string, whereas Facebook expects this value to be an integer. After updating my POST body to correct that, the events started showing up within minutes in the Overview tab.
{
  "upload_tag": "store_data",
  "data": [
    {
      "match_keys": {
        "em": "<hashed>",
        "ph": "<hashed>"
      },
      "currency": "PHP",
      "value": 100,
      "event_id": "test",
      "event_name": "Purchase",
      "event_time": 1669633380, // <-- The only change was removing the quotes
      "custom_data": {
        "event_source": "in_store"
      },
      "action_source": "physical_store",
      "order_id": "test",
      "data_processing_options": []
    }
  ]
}
I really wish Facebook had returned some kind of error or warning, but at least I found the issue. Be careful with your data types, people!

Fanout data from Kafka to S3 using kafka connect

My Kafka topic receives a wrapper payload in JSON format. The wrapper payload looks like this:
{
  "format": "wrapper",
  "time": 1626814608000,
  "events": [
    {
      "id": "item1",
      "type": "product1",
      "count": 200
    },
    {
      "id": "item2",
      "type": "product2",
      "count": 300
    }
  ],
  "metadata": {
    "schema": "schema-1"
  }
}
I should export this to S3. But the catch is, I should not store the wrapper; instead, I should store the individual events based on the item.
For example, it should be stored in S3 as follows:
bucket/product1:
{"id": "item1", "type": "product1", "count": 200}
bucket/product2:
{"id": "item2", "type": "product2", "count": 300}
If you notice, the input is the wrapper with those events internally. However, my output should be each of those individual events stored in S3 in the same bucket with the product type as prefix.
My question is: is it possible to use Kafka Connect to do this? I see it has Single Message Transforms (SMTs), which seem to be a way to mutate data inside a record, but not to fan out the way I want. Even the signature looks like R => R:
https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/transforms/Transformation.java
So based on my research, it does not seem possible. But I want to check if I am missing something before using a different option.
Kafka Connect transforms accept one record and output one record, so they cannot fan a single record out into several.
You need a stream processor such as Kafka Streams, using its branch or flatMap functions, to split an array of events into multiple records or route them to multiple topics.
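For what it's worth, here is a minimal Kafka Streams sketch of the flatMap approach. The topic names, the use of Jackson for JSON parsing, and keying the output records by the event's "type" field are my assumptions; it unwraps each wrapper record and emits one record per inner event, so a downstream S3 sink can partition by product type.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class WrapperFanout {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wrapper-fanout");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("wrapper-topic", Consumed.with(Serdes.String(), Serdes.String()))
               // flatMap lets one input record produce zero or more output records.
               .flatMap((key, value) -> {
                   List<KeyValue<String, String>> records = new ArrayList<>();
                   try {
                       JsonNode wrapper = MAPPER.readTree(value);
                       for (JsonNode event : wrapper.get("events")) {
                           // Key each record by its "type" (product1, product2, ...) so the
                           // sink can use that value as the S3 prefix.
                           records.add(KeyValue.pair(event.get("type").asText(), event.toString()));
                       }
                   } catch (Exception e) {
                       // Skip malformed payloads in this sketch.
                   }
                   return records;
               })
               .to("events-by-product", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }
}

From there, one option is to let the Confluent S3 sink connector consume the output topic and partition by the "type" field in the value (FieldPartitioner), or to route each product type to its own topic with branch/split and sink each topic separately.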

REST Postman: How to use data files in Runner for more than one endpoint

I would like to use the Runner in Postman to run/test a whole collection of endpoints. Each endpoint should get different parameter or request body data on each iteration.
So far I have figured out the data file usage for one endpoint; see https://learning.postman.com/docs/running-collections/working-with-data-files/
But is there a way to provide data for more than one endpoint, where the endpoints need different variables, in the same run?
example:
[GET]categories/:categoryId?lang=en
[GET]articles/?filter[height]=10,40&sort[name]=desc
Data file for the first endpoint:
[
  {
    "categoryId": 1123,
    "lang": "en"
  },
  {
    "categoryId": 3342,
    "lang": "de"
  }
]
Data file for the second endpoint:
[
  {
    "filter": "height",
    "filterValue": "10,40",
    "sort": "name",
    "sortDir": "desc"
  },
  {
    "filter": "material",
    "filterValue": "chrome",
    "sort": "relevance",
    "sortDir": "asc"
  }
]
Right now, there is no way to add more than one data file: https://community.postman.com/t/pass-multiple-data-files-to-a-collection/899
My suggestion is:
Separate each endpoint that needs a data file into a different collection.
Use newman as a library to run them all.

No GitHub label IDs?

We have GitHub Enterprise set up locally, but I can't for the life of me get label IDs using this API call: GET /repos/:owner/:repo/labels
I just end up getting the url, name, and color values:
{
  "url": "https://ourdomain/api/v3/repos/username/repo/labels/bug",
  "name": "bug",
  "color": "ee0701"
}
The official documentation, however, shows that I should be getting the id and default values as well:
{
  "id": 208045946,
  "url": "https://api.github.com/repos/octocat/Hello-World/labels/bug",
  "name": "bug",
  "color": "f29513",
  "default": true
}
I'm sure I'm just missing some setting somewhere, but can't seem to find anything.
Thanks in advance.

GET Values from a custom field via JIRA REST API

I would like to GET all drop-down options for a custom field. For system fields, I use the following URI:
http://localhost:8080/rest/api/2/project/XXXX/components
(for components, versions, etc.; basically system fields), so I tried the following for a custom field:
http://localhost:8080/rest/api/2/project/XXXX/customfield_10000
and got a 404 error. I'm not sure what I'm doing wrong, as I've been googling for the past 19 hours. The best search result I got was the following documentation: JIRA Developers Documentation.
Please assist; I'm not sure what I'm missing.
You can get that information either from the createmeta or editmeta REST resources.
Use editmeta if you want to retrieve the available options when editing a specific issue. E.g.
GET /rest/api/2/issue/TEST-123/editmeta
Use createmeta when you want to retrieve the options for a project in combination with an issue type. E.g.
GET /rest/api/2/issue/createmeta?projectKeys=MYPROJ&issuetypeNames=Bug&expand=projects.issuetypes.fields
The custom fields with options will be returned like this:
"customfield_12345": {
"schema": {
"type": "string",
"custom": "com.atlassian.jira.plugin.system.customfieldtypes:select",
"customId": 12345
},
"name": "MySelectList",
"allowedValues": [
{
"self": "http://jira.url/rest/api/2/customFieldOption/14387",
"value": "Green",
"id": "14387"
},
{
"self": "http://jira.url/rest/api/2/customFieldOption/14384",
"value": "Blue",
"id": "14384"
}
]
}