Convert JSON to string in a Velocity Template - aws-api-gateway

Is there a utility to convert JSON to a string in an AWS API Gateway Mapping Template (Velocity)?
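For what it's worth, API Gateway's $util does not offer a direct JSON-to-string helper, but $util.escapeJavaScript() is the usual way to embed a JSON document as an escaped string value. A minimal sketch, assuming the whole request body should become a single string field (payloadAsString is a made-up name; the trailing .replaceAll is the caveat AWS documents, since escapeJavaScript escapes single quotes as \', which is not valid JSON):
#set($body = $input.json('$')) ## entire request body, serialized as JSON
{
    "payloadAsString": "$util.escapeJavaScript($body).replaceAll("\\'", "'")"
}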

Related

Mapping template for API Gateway to Firehose results in invalid JSON

I am using API Gateway to receive data from a webhook and push it to Kinesis. Using a mapping template, I create a JSON object using data from the webhook payload.
The problem is, when I encode it to base64 and Kinesis decodes it, it ends up becoming invalid JSON: the ":" characters get replaced by "=".
Here's my mapping template:
#set ($account = $util.parseJson($input.params().header.get("some-field")).account) ## fetching a field from a header
{
    "StreamName": "$input.params('stream-name')",
    "Data": "$util.base64Encode(
        {"Payload": "$input.json('$.payload')",
        "Account": "$account",
        "Event": "$input.json('$.event')"
        })",
    "PartitionKey": "$account"
}
What's happening is that the Data JSON object gets corrupted when Firehose decodes it for inline parsing (for dynamic partitioning). I get:
{
    Payload={"a":1,"b":2},
    Account=abcdehdds,
    Event="abc"
}
Notice the = instead of :, making this invalid JSON.
What's strange is that when the data is pushed from Firehose to S3 without any dynamic partitioning, it's perfect JSON.
I'm at my wits' end trying to solve this. Would appreciate any help.
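What's most likely happening: Velocity parses the { ... } written inside $util.base64Encode(...) as a VTL map literal, so base64Encode receives a java.util.Map and encodes its toString() output, which prints entries as key=value. A common workaround (a sketch, not from the original post) is to build the inner JSON as a VTL string first and encode that string; the doubled double-quotes are Velocity's escape for a literal quote inside a string, and $input.json() already returns serialized JSON, so its results are embedded unquoted:
#set ($account = $util.parseJson($input.params().header.get("some-field")).account)
## Build the inner JSON as one string, then base64-encode it.
#set ($data = "{""Payload"": $input.json('$.payload'), ""Account"": ""$account"", ""Event"": $input.json('$.event')}")
{
    "StreamName": "$input.params('stream-name')",
    "Data": "$util.base64Encode($data)",
    "PartitionKey": "$account"
}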

Azure Data Factory - removing backslashes from a string - replace function does not work

I have a Web activity passing data to an API in Data Factory.
The API only accepts a malformed JSON fragment:
[
    {
        "RowNumber":1,
        "Tag":"ddddd",
        "LastUpdateDateTime":"2022-07-26T13:14:28Z"
    }
]
Data Factory won't allow this to be converted to JSON with the json() function, as it's not valid JSON.
But if it's sent in the body as a string, all the double quotes are escaped:
[
    {
        \"RowNumber\":1,
        \"Tag\":\"ddddd\",
        \"LastModifiedDate\":\"2022-07-26T13:14:28Z\"
    }
]
Is there any way I can remove the backslashes in Data Factory without having to do all this in an Azure Function?
It seems replace() does remove the backslashes when sending the data between activities; it just puts them back in if I try to view the output of the activity!
Using replace() fixed the issue.
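For illustration, a minimal sketch of the kind of replace() expression that works here (the variable name rowJson is hypothetical). In ADF expression strings a backslash is an ordinary character, so '\"' is the two-character sequence backslash-quote:
@replace(variables('rowJson'), '\"', '"')
The escaped quotes shown when inspecting an activity's output are often just the monitoring view re-serializing the string, which is why the payload between activities can already be correct even though the preview shows backslashes.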

API Gateway Access Log using CloudFormation

I need to enable custom access logging in API Gateway. The CloudFormation is written in YAML, but the documentation shows the custom log format in JSON, XML, and similar formats; nothing mentions how to set the access log format in YAML. Does anyone know how to do it?
From the CloudFormation user guide, the Format attribute requires your input to be a String.
DestinationArn: String
Format: String
For example:
DestinationArn: !Sub ${ApiAccessLogGroup.Arn}
Format: "{ 'requestId':'$context.requestId', 'ip': '$context.identity.sourceIp', 'caller':'$context.identity.caller', 'user':'$context.identity.user','requestTime':'$context.requestTime', 'xrayTraceId':'$context.xrayTraceId', 'wafResponseCode':'$context.wafResponseCode', 'httpMethod':'$context.httpMethod','resourcePath':'$context.resourcePath', 'status':'$context.status','protocol':'$context.protocol', 'responseLength':'$context.responseLength' }"
To simplify your String or make it look better, use !Sub as described in this post.
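For reference, a minimal sketch of how that can look inline in YAML on an AWS::ApiGateway::Stage (logical names are assumptions). !Sub only substitutes ${...} references, so the $context.* variables pass through untouched, and the folded scalar (>-) joins the lines into a single-line JSON string:
AccessLogSetting:
  DestinationArn: !Sub ${ApiAccessLogGroup.Arn}
  Format: !Sub >-
    { "requestId":"$context.requestId",
    "ip":"$context.identity.sourceIp",
    "httpMethod":"$context.httpMethod",
    "status":"$context.status" }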

What is the appropriate way to build JSON within a Data Factory Pipeline

In my earlier post, SQL Server complains about invalid json, I was advised to use an 'appropriate method' for building a JSON string, which is to be inserted into a SQL Server table for logging purposes. In that post, I was using string concatenation to build the JSON string.
What are the appropriate tools/functions to build JSON within a Data Factory pipeline? I've looked into the json() and string() functions, but they would still rely on concatenation.
Clarification: I'm trying to generate a logging message that looks like the example below. Right now I'm using string concatenation to generate the logging JSON. Is there a better, more elegant (but lightweight) way to generate the JSON data?
{ "EventType": "DataFactoryPipelineRunActivity",
"DataFactoryName":"fa603ea7-f1bd-48c0-a690-73b92d12176c",
"DataFactoryPipelineName":"Import Blob Storage Account Key CSV file into generic SQL table using Data Flow Activity Logging to Target SQL Server",
"DataFactoryPipelineActivityName":"Copy Generic CSV Source to Generic SQL Sink",
"DataFactoryPipelineActivityOutput":"{runStatus:{computeAcquisitionDuration:316446,dsl: source() ~> ReadFromCSVInBlobStorage ReadFromCSVInBlobStorage derive() ~> EnrichWithDataFactoryMetadata EnrichWithDataFactoryMetadata sink() ~> WriteToTargetSqlTable,profile:{ReadFromCSVInBlobStorage:{computed:[],lineage:{},dropped:0,drifted:1,newer:1,total:1,updated:0},EnrichWithDataFactoryMetadata:{computed:[],lineage:{},dropped:0,drifted:1,newer:6,total:7,updated:0},WriteToTargetSqlTable:{computed:[],lineage:{__DataFactoryPipelineName:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__DataFactoryPipelineName]}]},__DataFactoryPipelineRunId:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__DataFactoryPipelineRunId]}]},id:{mapped:true,from:[{source:ReadFromCSVInBlobStorage,columns:[id]}]},__InsertDateTimeUTC:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__InsertDateTimeUTC]}]},__DataFactoryName:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__DataFactoryName]}]},__FileName:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__FileName]}]},__StorageAccountName:{mapped:false,from:[{source:EnrichWithDataFactoryMetadata,columns:[__StorageAccountName]}]}},dropped:0,drifted:1,newer:0,total:7,updated:7}},metrics:{WriteToTargetSqlTable:{rowsWritten:4,sinkProcessingTime:1436,sources:{ReadFromCSVInBlobStorage:{rowsRead:4}},stages:[{stage:3,partitionTimes:[621],bytesWritten:0,bytesRead:24,streams:{WriteToTargetSqlTable:{type:sink,count:4,partitionCounts:[4],cached:false},EnrichWithDataFactoryMetadata:{type:derive,count:4,partitionCounts:[4],cached:false},ReadFromCSVInBlobStorage:{type:source,count:4,partitionCounts:[4],cached:false}},target:WriteToTargetSqlTable,time:811}]}}},effectiveIntegrationRuntime:DefaultIntegrationRuntime (East US)}",
"DataFactoryPipelineRunID":"63759585-4acb-48af-8536-ae953efdbbb0",
"DataFactoryPipelineTriggerName":"Manual",
"DataFactoryPipelineTriggerType":"Manual",
"DataFactoryPipelineTriggerTime":"2019-11-05T15:27:44.1568581Z",
"Parameters":{
"StorageAccountName":"fa603ea7",
"FileName":"0030_SourceData1.csv",
"TargetSQLServerName":"5a128a64-659d-4481-9440-4f377e30358c.database.windows.net",
"TargetSQLDatabaseName":"TargetDatabase",
"TargetSQLUsername":"demoadmin"
},
"InterimValues":{
"SchemaName":"utils",
"TableName":"vw_0030_SourceData1.csv-2019-11-05T15:27:57.643"
}
}
You can use Data Flow; it helps you build the JSON string within a pipeline in Data Factory.
Here's the Data Flow tutorial: Mapping data flow JSON handling.
It can help you with:
Creating JSON structures in Derived Column
Source format options
Hope this helps.
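For illustration, the "Creating JSON structures in Derived Column" part of that tutorial relies on the data flow expression language's @(...) constructor: you add a Derived Column (here named EventLog, a made-up name) whose expression builds a structure, and a JSON sink then serializes it, so no string concatenation is involved. A sketch using a few field names from the logging message above ($PipelineName and $RunId are assumed data flow parameters):
@(
    EventType = 'DataFactoryPipelineRunActivity',
    DataFactoryPipelineName = $PipelineName,
    DataFactoryPipelineRunID = $RunId
)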

jose4j JWT claims set attribute type other than String object

I have been using jose4j version 0.6.0 for JSON Web Token (JWT) generation. All is good up until token generation and token verification. A JWT's claims payload can have a number of elements like version, tokenId, issuer, permissions, etc. I'm passing a TokenPermissions object, which is a standard object in the oneM2M release 2 specification, i.e.:
JwtClaims claims = new JwtClaims();
claims.setIssuer("DAS#ServiceProvider");
claims.setAudience("CSE001");
// ...
TokenPermissions tokenPerms = new TokenPermissions();
TokenPermission tokenPerm = new TokenPermission();
tokenPerm.getResourceIDs().add("RXYZ");
tokenPerm.setPrivileges(setOfAcr); // setOfAcr is another oneM2M object
tokenPerms.getPermission().add(tokenPerm);
claims.setClaim("permissions", tokenPerms);
The above snippet generates the following JWT claims set:
{iss=DAS#ServiceProvider, aud=CSE001, exp=1508999613, jti=H1wm_yaOe61Co-wND7wBAw#DAS#CDOT-SP, iat=1508996013, nbf=1508995953, sub=subject, email=mail#example.com, groups=[group-one, other-group, group-three], version=1.0.0, permissions=cdot.onem2m.resource.xsd.TokenPermissions#7f3b97fd}
The whole token passes signature and claims validation, but when I try to typecast the permissions attribute to TokenPermissions it throws an error.
tokenPermsObject = jwtClaims.getClaimValue("permissions",TokenPermissions.class);
It throws the error below:
org.jose4j.jwt.MalformedClaimException: The value of the 'permissions' claim is not the expected type (xyz.xsd.TokenPermissions#7f3b97fd - Cannot cast java.lang.String to xyz.xsd.TokenPermissions.TokenPermissions)
What types of claims objects can be passed in a jose4j JWT? Do I mandatorily have to pass text in the claims set? Any help would be highly appreciated.
jose4j's JSON processing was derived from the JSON.simple toolkit and is fairly basic in how it converts between JSON and Java objects. It will do strings, numbers, booleans, maps and lists.
If you want/need to use a more sophisticated JSON library, you can use setPayload(...) on JsonWebSignature when creating the JWT and give it the JSON string you've produced elsewhere. And when consuming a JWT, String getRawJson() on JwtClaims will give you the JSON string payload that you can hand off to some other lib.
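A minimal sketch of both sides of that suggestion, assuming an HMAC-signed token and using Jackson as the external JSON library (TokenPermissionsDto and the key bytes are hypothetical stand-ins, not from the original question):
import com.fasterxml.jackson.databind.ObjectMapper;
import org.jose4j.jws.AlgorithmIdentifiers;
import org.jose4j.jws.JsonWebSignature;
import org.jose4j.jwt.JwtClaims;
import org.jose4j.jwt.consumer.JwtConsumer;
import org.jose4j.jwt.consumer.JwtConsumerBuilder;
import org.jose4j.keys.HmacKey;
import java.nio.charset.StandardCharsets;
import java.security.Key;
import java.util.List;
import java.util.Map;

public class RawJsonClaims {

    // Hypothetical stand-in for the oneM2M TokenPermissions type.
    public static class TokenPermissionsDto {
        public List<String> resourceIDs = List.of("RXYZ");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical 256-bit shared secret; use real key management in practice.
        Key key = new HmacKey("0123456789abcdef0123456789abcdef".getBytes(StandardCharsets.UTF_8));
        ObjectMapper mapper = new ObjectMapper();

        JwtClaims claims = new JwtClaims();
        claims.setIssuer("DAS#ServiceProvider");
        claims.setAudience("CSE001");
        claims.setExpirationTimeMinutesInTheFuture(60);
        // Convert the complex object to plain maps/lists so jose4j's basic
        // JSON processing serializes it as a JSON subtree, not via toString().
        claims.setClaim("permissions", mapper.convertValue(new TokenPermissionsDto(), Map.class));

        // setPayload(...) accepts whatever JSON string you hand it.
        JsonWebSignature jws = new JsonWebSignature();
        jws.setPayload(claims.toJson());
        jws.setAlgorithmHeaderValue(AlgorithmIdentifiers.HMAC_SHA256);
        jws.setKey(key);
        String jwt = jws.getCompactSerialization();

        // On the consuming side, validate as usual, then re-parse the claim
        // (or the whole getRawJson() payload) with the richer library.
        JwtConsumer consumer = new JwtConsumerBuilder()
                .setExpectedIssuer("DAS#ServiceProvider")
                .setExpectedAudience("CSE001")
                .setVerificationKey(key)
                .build();
        JwtClaims parsed = consumer.processToClaims(jwt);
        TokenPermissionsDto perms =
                mapper.convertValue(parsed.getClaimValue("permissions"), TokenPermissionsDto.class);
        System.out.println(parsed.getRawJson());
        System.out.println(perms.resourceIDs);
    }
}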