Specifying Numbers in VTL on AWS API Gateway - aws-api-gateway

Doc Ref: http://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html
In AWS VTL one specifies dictionary fields in a model schema like this:
"field1" : {"type":"string"},
"field2" : {"type":"number"},
and so a mapping template can populate such fields thus:
#set($inputRoot = $input.path('$'))
"questions" : [
#foreach($elem in $inputRoot)
  {
    "field1" : "$elem.field1",
    "field2" : $elem.field2
  }#if($foreach.hasNext),#end
#end
]
However... my iOS app complains that the received data isn't in JSON format. If I add quotes around $elem.field2, iOS accepts the JSON, but then every field arrives as a string.
My Lambda function is returning a standard JSON list of dictionaries with field2 defined as an integer.
But APIG returns all my fields as strings, wrapped in {} with a type prefix:
{S=some text}
{N=10000000500}
So I can see that field2 isn't a number but a string {N=10000000500}.
How do I handle numbers in this system?

This is undocumented, but you can simply specify the type tag seen above (S, N) after the field name in a mapping template:
#set($inputRoot = $input.path('$'))
"questions" : [
#foreach($elem in $inputRoot)
  {
    "field1" : "$elem.field1.S",
    "field2" : $elem.field2.N
  }#if($foreach.hasNext),#end
#end
]
Note that string fields still need to be wrapped in quotes.
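For comparison, here is a minimal Python sketch of the same unwrapping the template performs (sample values are taken from the question; the exact shape of the response is an assumption):
import json

# DynamoDB-typed JSON, matching the {S=...}/{N=...} output seen above
raw = '[{"field1": {"S": "some text"}, "field2": {"N": "10000000500"}}]'

items = json.loads(raw)
questions = [
    # .S is already a string; .N arrives as a string and must be converted
    {"field1": e["field1"]["S"], "field2": int(e["field2"]["N"])}
    for e in items
]
print(json.dumps({"questions": questions}))
# {"questions": [{"field1": "some text", "field2": 10000000500}]}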

Related

Velocity in AWS Api Gateway: how to access array of objects

So in AWS Api Gateway, I'm querying my DynamoDB table and get this JSON as a reply:
https://pastebin.com/GpQady4Z
So, Items is an array of 3 objects. I need to extract the properties of those objects: TS, Key and CamID.
I'm using Velocity in the Integration Response. Here is my Mapping Template:
#set($count = $input.json('$.Count'))
#set($items = $input.json('$.Items'))
{
  "count" : $count,
  "items" : $items,
  "first_item": $items[0]
},
The result from API Gateway:
{
  "count" : 3,
  "items" : [{"TS":{"N":"1599050893346"},"Key":{"S":"000000/000000_2020-08-02-12.48.13.775-CEST.mp4"},"CamID":{"S":"000000"}},{"TS":{"N":"1599051001832"},"Key":{"S":"000000/000000_2020-08-02-12.50.01.220-CEST.mp4"},"CamID":{"S":"000000"}},{"TS":{"N":"1599051082769"},"Key":{"S":"000000/000000_2020-08-02-12.51.22.208-CEST.mp4"},"CamID":{"S":"000000"}}],
  "first_item":
}
first_item always comes back empty.
Whereas with a plain array like this:
#set($foo = [ 42, "a string", 21, $myVar ])
"test" : $foo[0]
"test" returns 42
Why does my code not work on an array of objects?
$items is a JSON string (not a JSON object), so $items[0] doesn't make sense.
If you want to access just the first item, use $input.json('$.Items[0]').
If you want to iterate over the items, convert the JSON string to an object first using $util.parseJson().
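The distinction is the same as in this small Python sketch (an analogy only, not API Gateway code): indexing a string gives you a character, so you must parse it first.
import json

# What $input.json('$.Items') hands back: a string that contains JSON
items = '[{"TS": {"N": "1599050893346"}}, {"TS": {"N": "1599051001832"}}]'

print(items[0])              # '[' -- indexing a string yields a character

parsed = json.loads(items)   # analogous to $util.parseJson($items)
print(parsed[0])             # {'TS': {'N': '1599050893346'}} -- the first object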

Pig's AvroStorage LOAD removes unicode chars from input

I am using Pig to read Avro files and normalize/transform the data before writing it back out. The Avro files have records of the form:
{
  "type" : "record",
  "name" : "KeyValuePair",
  "namespace" : "org.apache.avro.mapreduce",
  "doc" : "A key/value pair",
  "fields" : [ {
    "name" : "key",
    "type" : "string",
    "doc" : "The key"
  }, {
    "name" : "value",
    "type" : {
      "type" : "map",
      "values" : "bytes"
    },
    "doc" : "The value"
  } ]
}
I have used the AvroTools command-line utility in conjunction with jq to dump the first record to JSON:
$ java -jar avro-tools-1.8.1.jar tojson part-m-00000.avro | ./jq --compact-output 'select(.value.pf_v != null)' | head -n 1 | ./jq .
{
  "key": "some-record-uuid",
  "value": {
    "pf_v": "v1\u0003Basic\u0001slcvdr1rw\u001a\u0004v2\u0003DayWatch\u0001slcva2omi\u001a\u0004v3\u0003Performance\u0001slc1vs1v1w1p1g1i\u0004v4\u0003Fundamentals\u0001snlj1erwi\u001a\u0004v5\u0003My Portfolio\u0001svr1dews1b2b3k1k2\u001a\u0004v0\u00035"
  }
}
I run the following pig commands:
REGISTER avro-1.8.1.jar
REGISTER json-simple-1.1.1.jar
REGISTER piggybank-0.15.0.jar
REGISTER jackson-core-2.8.6.jar
REGISTER jackson-databind-2.8.6.jar
DEFINE AvroLoader org.apache.pig.piggybank.storage.avro.AvroStorage();
AllRecords = LOAD 'part-m-00000.avro'
USING AvroLoader()
AS (key: chararray, value: map[]);
Records = FILTER AllRecords BY value#'pf_v' is not null;
SmallRecords = LIMIT Records 10;
DUMP SmallRecords;
The corresponding record in the output of the last command above is as follows:
...
(some-record-uuid,[pf_v#v03v1Basicslcviv2DayWatchslcva2omiv3Performanceslc1vs1v1w1p1g1i])
...
As you can see, the unicode chars have been removed from the pf_v value. The unicode characters are actually being used as delimiters in these values, so I will need them in order to fully parse the records into their desired normalized state. The unicode characters are clearly present in the encoded .avro file (as demonstrated by dumping the file to JSON). Is anybody aware of a way to get AvroStorage to not remove the unicode chars when loading records?
Thank you!
Update:
I have also performed the same operation using Avro's python DataFileReader:
import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter
reader = DataFileReader(open("part-m-00000.avro", "rb"), DatumReader())
for rec in reader:
    if 'some-record-uuid' in rec['key']:
        print rec
        print '--------------------------------------------'
        break
reader.close()
This prints a dict in which the characters show up as hex escapes, i.e. they are preserved (which is preferable to removing them entirely):
{u'value': {u'pf_v': 'v0\x033\x04v1\x03Basic\x01slcvi\x1a\x04v2\x03DayWatch\x01slcva2omi\x1a\x04v3\x03Performance\x01slc1vs1v1w1p1g1i\x1a'}, u'key': u'some-record-uuid'}
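For reference, once the control characters survive loading, the value can be taken apart. Here is a minimal Python sketch; the delimiter roles (\x04 between entries, \x03 and \x01 within an entry, \x1a as a terminator) are inferred from the dumped value, not from any schema:
# Delimiter roles below are inferred from the dump, not from a schema.
pf_v = 'v1\x03Basic\x01slcvdr1rw\x1a\x04v2\x03DayWatch\x01slcva2omi\x1a'

for entry in pf_v.split('\x04'):
    if not entry:
        continue
    version, rest = entry.split('\x03', 1)
    # Not every entry necessarily contains a \x01, so pad defensively
    name, codes = (rest.split('\x01', 1) + [''])[:2]
    print(version, name, codes.rstrip('\x1a'))
# v1 Basic slcvdr1rw
# v2 DayWatch slcva2omi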

Mongodb: Indexing field of sub-document that can be either text or array

I have a collection of documents representing messages. Each message has multiple fields that change from message to message. They are stored in a "fields" array of sub-documents.
Each element in this array contains the label and value of a field.
Some fields may contain long lists of strings (IP addresses, URLs, etc.) - each string appears in a new line within that field. Lists can be thousands of lines long.
For that purpose, each element also stores a "type" - type 1 represents standard text, while type 2 represents a list. When a field is type 2, the "value" in the sub-document is an array holding the list items.
It looks something like this:
"fields" : [
{
"type" : 1,
"label" : "Observed on",
"value" : "01/09/2016"
},
{
"type" : 1,
"label" : "Indicator of",
"value" : "Malware"
},
{
"type" : 2,
"label" : "Relevant IP addresses",
"value" : [
"10.0.0.0",
"190.15.55.21",
"11.132.33.55",
"109.0.15.3"
]
}
]
I want all field values to be searchable and indexed, whether they are stored as a plain string or inside an array within "value".
Would setting up a standard index on "fields.value" index both type 1 and type 2 content? Do I need to set up two indexes?
Thanks in advance!
When creating a new index, MongoDB automatically switches to a multikey index if it encounters an array in the indexed field of any document.
Which means that simply:
collection.createIndex( { "fields.value": 1 } )
should work just fine.
see: https://docs.mongodb.com/v3.2/core/index-multikey/
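A minimal pymongo sketch of the same idea (database, collection, and sample values are placeholders taken from the question):
from pymongo import MongoClient, ASCENDING

coll = MongoClient().mydb.messages  # hypothetical db/collection names

# One index covers both shapes; MongoDB makes it multikey automatically
# for documents where fields.value holds an array.
coll.create_index([("fields.value", ASCENDING)])

# Matches a type 1 (string) value:
print(coll.count_documents({"fields.value": "Malware"}))
# Matches an element of a type 2 (array) value:
print(coll.count_documents({"fields.value": "10.0.0.0"}))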

How to send nested array parameters using Alamofire's multipart form data

How can I send these parameters as multipart form data?
let dictionary = [
    "user": [
        "email": "\(email!)",
        "token": "\(loginToken!)"
    ],
    "photo_data": [
        "name": "Toko Tokoan1",
        "avatar_photo": photo,
        "background_photo": photo,
        "phone": "0222222222",
        "addresses": [[
            "address": "Jalan Kita",
            "provinceid": 13,
            "cityid": 185,
            "postal": "45512"
        ]],
        "banks": [[
            "bank_name": "PT Bank BCA",
            "account_number": "292993122",
            "account_name": "Tukiyeum"
        ]]
    ]
]
I tried the code below, but I can't encode the value (which is an NSDictionary) to UTF-8:
for (key, value) in current_user {
    if key == "avatar_photo" || key == "background_photo" {
        multipartFormData.appendBodyPart(fileURL: value.data(using: String.Encoding.utf8)!, name: key) // error: value is an NSDictionary
    } else {
        multipartFormData.appendBodyPart(data: value.data(using: String.Encoding.utf8)!, name: value) // error: value is an NSDictionary
    }
}
The value passed to appendBodyPart cannot be used like that because it's an NSDictionary, not a String. What is the right way to put these parameters into multipartFormData?
It is allowed to have nested multiparts.
The use of a Content-Type of multipart in a body part within another multipart entity is explicitly allowed. In such cases, for obvious reasons, care must be taken to ensure that each nested multipart entity must use a different boundary delimiter.
RFC 1341
So you have to do the same as you did at the outer level: simply loop through the contents of the dictionary, generating key-value pairs. Obviously you have to set a different part delimiter, so the client can distinguish a nested part boundary from a top-level part boundary.
Maybe it is easier to send the whole structure as application/json.
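To illustrate that last suggestion, here is a hedged sketch in Python with the requests library (endpoint, file names, and values are placeholders): serialize the nested structure once and send it as a single application/json part alongside the binary photos.
import json
import requests

payload = {
    "user": {"email": "user@example.com", "token": "abc123"},
    "photo_data": {
        "name": "Toko Tokoan1",
        "phone": "0222222222",
        "addresses": [{"address": "Jalan Kita", "provinceid": 13}],
    },
}

files = {
    # One part carries the whole nested structure as JSON...
    "data": ("data.json", json.dumps(payload), "application/json"),
    # ...and the binary photos travel as separate parts.
    "avatar_photo": ("avatar.jpg", open("avatar.jpg", "rb"), "image/jpeg"),
}
requests.post("https://example.com/upload", files=files)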

MongoDB: query on nested JSON

Sample JSON object:
{ "_id" : ObjectId( "55887982498e2bef5a5f96db" ),
"a" : "x",
"q" : "null",
"p" : "",
"s" : "{\"f\":{\"b\":[\"I\"]},\"time\":\"fs\"}" }
I need all documents where time = "fs".
My query :
{"s":{"time" : "fs"}}
The above returns zero documents, but matching documents do exist.
There are two problems here. First of all, s is clearly a string, so your query cannot work. You can use $regex as below, but it won't be very efficient:
{s: {$regex: '"time"\:"fs"'}}
I would suggest converting the s fields to proper documents. You can use JSON.parse to do it. Documents can be updated based on their current value using db.foo.find().snapshot().forEach. See this answer for details.
The second problem is that your query is simply wrong. To match the nested time field you should use dot notation:
{"s.time" : "fs"})