Postman: How to use data files in the Runner for more than one endpoint

I like to use the Runner in Postman to run/test a whole collection of endpoints. Each endpoint should get different parameter or request body data on each iteration.
So far I have figured out how to use a data file for one endpoint. See https://learning.postman.com/docs/running-collections/working-with-data-files/
But is there a way to provide data for more than one endpoint, where the endpoints need different variables, in the same run?
Example:
[GET]categories/:categoryId?lang=en
[GET]articles/?filter[height]=10,40&sort[name]=desc
Datafile for first endpoint:
[{
  "categoryId": 1123,
  "lang": "en"
},
{
  "categoryId": 3342,
  "lang": "de"
}]
Datafile for second endpoint:
[{
  "filter": "height",
  "filterValue": "10,40",
  "sort": "name",
  "sortDir": "desc"
},
{
  "filter": "material",
  "filterValue": "chrome",
  "sort": "relevance",
  "sortDir": "asc"
}]

Right now, there is no way to pass more than one data file to a collection run. https://community.postman.com/t/pass-multiple-data-files-to-a-collection/899
My suggestion is:
Separate each endpoint that needs a data file into its own collection.
Use newman as a library to run them all, as sketched below.
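If it helps, here is a minimal sketch of that approach using newman's Node API (newman.run and its iterationData option are the real API; the collection and data file names are placeholders for your two endpoints):

// Hypothetical runner: one newman.run per collection, each with its own data file.
import * as newman from 'newman';

const runs = [
  { collection: 'categories.postman_collection.json', iterationData: 'categories-data.json' },
  { collection: 'articles.postman_collection.json', iterationData: 'articles-data.json' },
];

// Run the collections one after another so the CLI output stays readable.
(async () => {
  for (const run of runs) {
    await new Promise<void>((resolve, reject) => {
      newman.run(
        {
          collection: run.collection,
          iterationData: run.iterationData, // per-collection data file
          reporters: ['cli'],
        },
        (err) => (err ? reject(err) : resolve())
      );
    });
  }
})();

This way each collection keeps its own iteration data, and a single script still drives the whole run.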

Related

ADF - Loop through a large JSON file in a dataflow

We currently receive some metadata information from a third-party supplier in the form of a JSON file.
The JSON file contains definitions of some tables which need to be loaded into SQL via ADF.
The JSON file looks like this; it's a list of tables and their data types:
"Tables": [
{
"name": "account",
"description": "account",
"$type": "LocalEntity",
"attributes": [
{
"dataType": "guid",
"maxLength": "-1",
"name": "Id"
},
{
"dataType": "string",
"maxLength": "250",
"name": "name"
}
]
},
{
"name": "customer",
"description": "account",
"$type": "LocalEntity",
"attributes": [
{
"dataType": "guid",
"maxLength": "-1",
"name": "Id"
},
{
"dataType": "string",
"maxLength": "100",
"name": "name"
}
]
}
]
What we need to do is loop through this JSON and, via an ADF data flow, create the required tables in the destination database.
We initially designed the pipeline with a Lookup activity that loads the JSON file and then passes the output to a ForEach loop. This worked really well while we had only a small JSON file, but once we started using real data the JSON file exceeded the Lookup activity's 4 MB limit and the activity threw an error.
We then tried a mapping data flow, loading the JSON as a source and setting the sink to a cache whose output goes to a variable that we then loop through; again this works with smaller datasets, but as soon as the dataset is large enough it can't be parsed into an output.
I am sure this should be easy to do but I just can't get my head around it!
Here is a sample procedure to loop through a large JSON file in a data flow.
Create a linked service and a dataset pointing to the JSON file path.
Use that dataset as the source in the data flow.
A Flatten transformation will then get the input columns from the source; pick the required array in its Unroll by option.
Create a linked service and dataset for the sink path.
Attach the data flow to a Data Flow activity in the pipeline.
You will get the result as expected in the SQL database.

Fan out data from Kafka to S3 using Kafka Connect

My Kafka topic gets a wrapper payload in JSON format. The wrapper payload looks like this:
{
  "format": "wrapper",
  "time": 1626814608000,
  "events": [
    {
      "id": "item1",
      "type": "product1",
      "count": 200
    },
    {
      "id": "item2",
      "type": "product2",
      "count": 300
    }
  ],
  "metadata": {
    "schema": "schema-1"
  }
}
I should export this to S3. But the catch here is that I should not store the wrapper; instead, I should store the individual events based on the item.
For example, it should be stored in S3 as follows:
bucket/product1:
{"id": "item1", "type": "product1", "count": 200}
bucket/product2:
{"id": "item2", "type": "product2", "count": 300}
If you notice, the input is the wrapper with those events inside it. However, my output should be each of those individual events stored in S3, in the same bucket, with the product type as the prefix.
My question is: is it possible to use Kafka Connect to do this? I see it has Single Message Transforms, which seem to be a way to mutate data inside an object, but not to fan out the way I want. Even the signature looks like R => R:
https://github.com/apache/kafka/blob/trunk/connect/api/src/main/java/org/apache/kafka/connect/transforms/Transformation.java
So based on my research it does not seem possible, but I want to check whether I am missing something before using a different option.
Transforms accept one event and output one event.
You need to use a stream processor, such as the Kafka Streams branch or flatMap functions, to split an array of events into multiple events or multiple topics.
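To make that concrete, here is a minimal sketch of the fan-out done with a plain consumer/producer (kafkajs) rather than the Kafka Streams DSL; the broker address and topic names are invented for the example, and an S3 sink connector would then consume the resulting topic and partition by the event key or type:

// Hypothetical fan-out: consume the wrapper topic, re-publish each inner event
// keyed by its product type so a downstream sink can group/partition by it.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'wrapper-fanout', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'wrapper-fanout' });
const producer = kafka.producer();

async function main() {
  await Promise.all([consumer.connect(), producer.connect()]);
  await consumer.subscribe({ topic: 'wrapper-events' });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const wrapper = JSON.parse(message.value!.toString());
      // One outgoing record per inner event; the product type becomes the key
      // (it could just as well become a per-type topic name).
      await producer.send({
        topic: 'events-by-product',
        messages: wrapper.events.map((event: any) => ({
          key: event.type,
          value: JSON.stringify(event),
        })),
      });
    },
  });
}

main().catch(console.error);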

AWS EventBridge Input Transform to change date format

I'm trying to call my own API method to monitor AWS EC2 status directly from an EventBridge rule.
For that, I need to use the Input Transformer to adjust the payload to the one that I have implemented in my API (I don't want to change the API interface since it is also running for other clouds).
But what I can't find out how to do is change the date format...
{
  "version": "0",
  "id": "7bf73129-1428-4cd3-a780-95db273d1602",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "123456789012",
  "time": "2015-11-11T21:29:54Z",
  "region": "us-east-1",
  "resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"],
  "detail": {
    "instance-id": "i-abcd1111",
    "state": "pending"
  }
}
Can I map this "time" field to the corresponding Unix time?
For example: "time": "1659476412627"
Regards

Add files to Salesforce CMS channel folder via Connect API?

I'm developing an integration that will programmatically create product entries in Salesforce, and part of that process needs to be the addition of product images. I'm using the Connect API and am able to make a GET call to the right folder like this (I've scrambled the IDs and whatnot for this example):
https://example.salesforce.com/services/data/v52.0/connect/cms/delivery/channels/0591G0000000006/contents/query?folderId=9Pu1M000000fxUMSYI
That returns a payload like this:
{
  "currentPageUrl": "/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000006/contents/query?page=0&pageSize=250",
  "items": [
    {
      "contentKey": "MCZ2YVCGLNSBETNIG5P5QMIS4KNA",
      "contentNodes": {
        "source": {
          "fileName": "PET Round.jpg",
          "isExternal": false,
          "mediaType": "Image",
          "mimeType": "image/jpeg",
          "nodeType": "MediaSource",
          "referenceId": "05T0R000005MthL",
          "resourceUrl": "/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000007/media/MCY2YVCGLNSBETNIG5P4QMIS4KNA/content",
          "unauthenticatedUrl": "/cms/delivery/media/MCZ2YVCGLNSBETNIG5P4QMIS4KNA",
          "url": "/cms/delivery/media/MCY2YVCGLNSBETNIG5P4QMIS4KNA"
        },
        "title": {
          "nodeType": "NameField",
          "value": "844333"
        }
      },
      "contentUrlName": "844333",
      "language": "en_US",
      "managedContentId": "20T0R0000008U9qUAE",
      "publishedDate": "2021-08-18T16:20:57.000Z",
      "title": "844333",
      "type": "cms_image",
      "typeLabel": "Image",
      "unauthenticatedUrl": "/cms/delivery/v52.0/0DB1G0000008tfOWAU/contents/20Y0R0000008y9qUAE?oid=00D0R000000OI7GUAW"
    }
  ]
}
I am also able to retrieve images by contentKey with a GET call like this:
https://example.salesforce.com/services/data/v52.0/connect/cms/delivery/channels/0ap1G0000000007/media/MCZ2ZVCGLNSBETMIG5P4QMIS4KNA/content
What I can't work out is how to add new image files to that channel folder programmatically. Does anyone know what the endpoint should look like and what parameters it should have? I'm having trouble finding anything for this specific scenario in the docs, but surely there's a way.
Thanks!

GET Values from a custom field via JIRA REST API

I would like to GET all the drop-down options for a custom field. For system fields, I use the following URI:
http://localhost:8080/rest/api/2/project/XXXX/components
(for components, versions, etc.; basically system fields), so I tried the following for a custom field:
http://localhost:8080/rest/api/2/project/XXXX/customfield_10000
and got a 404 error. I'm not sure what I'm doing wrong, as I've been googling for the past 19 hours. The best search result I got was the following documentation: JIRA Developers Documentation
Please assist; I'm not sure what I'm missing.
You can get that information either from the createmeta or editmeta REST resources.
Use editmeta if you want to retrieve the available options when editing a specific issue. E.g.
GET /rest/api/2/issue/TEST-123/editmeta
Use createmeta when you want to retrieve the options for a project in combination with an issue type. E.g.
GET /rest/api/2/issue/createmeta?projectKeys=MYPROJ&issuetypeNames=Bug&expand=projects.issuetypes.fields
The custom fields with options will be returned like this:
"customfield_12345": {
"schema": {
"type": "string",
"custom": "com.atlassian.jira.plugin.system.customfieldtypes:select",
"customId": 12345
},
"name": "MySelectList",
"allowedValues": [
{
"self": "http://jira.url/rest/api/2/customFieldOption/14387",
"value": "Green",
"id": "14387"
},
{
"self": "http://jira.url/rest/api/2/customFieldOption/14384",
"value": "Blue",
"id": "14384"
}
]
}
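For illustration, a minimal sketch that calls editmeta and pulls those option values out of the response; the base URL, credentials, issue key and field id below are placeholders, and only the editmeta path and the allowedValues shape come from the response above:

// Hypothetical helper: read the drop-down options of a select-list custom field.
const baseUrl = 'http://localhost:8080';
const auth = Buffer.from('user:password').toString('base64');

async function getCustomFieldOptions(issueKey: string, fieldId: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/rest/api/2/issue/${issueKey}/editmeta`, {
    headers: { Authorization: `Basic ${auth}`, Accept: 'application/json' },
  });
  if (!res.ok) throw new Error(`editmeta request failed: ${res.status}`);
  const meta: any = await res.json();
  // allowedValues holds the select-list options, as shown in the JSON above.
  const field = meta.fields?.[fieldId];
  return (field?.allowedValues ?? []).map((option: { value: string }) => option.value);
}

getCustomFieldOptions('TEST-123', 'customfield_12345').then(console.log);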