Why does the Apigee Key/Value Map have such a format?

I'm trying to understand why Apigee uses this format for Key/Value Maps. When creating a map you should POST JSON like this:
{
  "name": "Map_name",
  "entry": [
    {
      "name": "Key1",
      "value": "value_one"
    },
    {
      "name": "Key2",
      "value": "value_two"
    }
  ]
}
Note that entry is an array.
When you're accessing a Key/Value Map you should use a policy like this:
<KeyValueMapOperations mapIdentifier="Map_name">
  <Scope>environment</Scope>
  <Get assignTo="foo_variable" index="2">
    <Key>
      <Parameter>Key2</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
As you see, you need to specify both the key name and an index! Isn't that redundant? Accessing values by index is a bit inconvenient, and on top of that it is 1-based (so Pascal!). Why should I even care about the indices?

I think each key is a multi-valued array within the map, so each key can hold more than one value. The array index identifies a value within that multi-valued key, not an entry in the entire map.
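If that is the case, the map can be modeled as a dictionary of lists, with the policy's 1-based index selecting a value within a single key. A minimal sketch of that mental model (not Apigee's actual implementation):

```python
# Hypothetical model of an Apigee KVM: each key holds a list of values,
# and the policy's 1-based index selects one value within that key.
kvm = {
    "Key1": ["value_one"],
    "Key2": ["value_two", "another_value"],  # a multi-valued key
}

def kvm_get(kvm, key, index=1):
    """Return the index-th value (1-based) stored under key."""
    return kvm[key][index - 1]

print(kvm_get(kvm, "Key2", 1))  # value_two
print(kvm_get(kvm, "Key2", 2))  # another_value
```

Under this model the index is never about the map as a whole, which is why `index="2"` on a single-valued key would simply fail.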


Azure Functions (PowerShell): How to use table input binding?

I have a very simple input binding in my PowerShell function.
# Input bindings are passed in via param block.
param($Request, $table, $TriggerMetadata)
I am somewhat of a beginner in PowerShell, though I have plenty of experience in other languages. I do understand the Azure Table data model in general.
Can someone please explain the structure of the binding in terms of PowerShell? Is it a hashtable, a dictionary, an array of entities, or of properties? What (exactly) is an entity or a property (again, in terms of PowerShell objects)?
Actually, I simply want to look up an entity that has only one property, by PartitionKey and RowKey, in the simplest way possible:
$value = $table['mykey']
Thanks a lot!
What (exactly) is an entity or a property? (again, in terms of PowerShell objects)
As far as I know:
An entity in Azure Table Storage is a set of properties, similar to a database row (i.e. key-value pairs).
Every entity has a composite primary key (the combination of PartitionKey and RowKey) and also a timestamp.
PartitionKey and RowKey are properties present on every entity.
Azure Table Storage does not enforce any schema across the entities that collectively make up a table, so different entities can have different sets of properties.
Partition Key:
A partition key is used to partition a table to support load balancing. It specifies which partition an entity belongs to.
Row Key: A row key uniquely identifies an entity (record) within a given partition.
To work with the entity identified by that composite key (PartitionKey plus RowKey), you first get a table reference, in the format of:
$inputCloudTable = $storageTable.CloudTable
Here is the MS doc that provides examples of working with entities as PowerShell objects.
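To make the data model concrete, here is an illustration with Python dicts standing in for the PowerShell objects the binding hands you (the property names mirror Azure Table conventions; the entity values are made up):

```python
# Illustration only: Azure Table entities modeled as plain dicts.
# Each entity is a set of properties; PartitionKey + RowKey form the
# composite primary key, and Timestamp is maintained by the service.
entities = [
    {"PartitionKey": "Test", "RowKey": "1",
     "Timestamp": "2020-01-01T00:00:00Z", "someproperty": "foo"},
    {"PartitionKey": "Test", "RowKey": "2",
     "Timestamp": "2020-01-02T00:00:00Z", "someproperty": "bar"},
]

def lookup(entities, partition_key, row_key):
    """Find the single entity identified by the composite primary key."""
    for e in entities:
        if e["PartitionKey"] == partition_key and e["RowKey"] == row_key:
            return e
    return None

entity = lookup(entities, "Test", "2")
print(entity["someproperty"])  # bar
```

The PowerShell binding exposes the same shape: a collection of objects, each with `PartitionKey`, `RowKey`, `Timestamp`, and whatever custom properties the entity has.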
I solved it with plain old PowerShell. For debugging purposes I used ConvertTo-Json to inspect the binding object.
I also added the partitionKey property to the binding in function.json, which restricts the binding to the respective partition up front by filtering the entities accordingly.
Then it is straightforward to work with the binding:
# Select the entity by property (here RowKey is used as the primary key) using Where-Object (alias "?")
$entity = $binding | ?{ $_.RowKey -eq $id }
# Get an arbitrary property from the Azure Table entity
$property = $entity.someproperty
So, as expected, no AzTable module or cmdlets needed!
You can add PartitionKey and RowKey to the binding input so that you can filter without having to return all results.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table-input?tabs=in-process%2Ctable-api&pivots=programming-language-powershell
{
  "name": "PersonEntity",
  "type": "table",
  "tableName": "Person",
  "partitionKey": "Test",
  "rowKey": "{queueTrigger}",
  "connection": "MyStorageConnectionAppSetting",
  "direction": "in"
}
There is also an OData filter option within the binding which isn't included in the documentation examples.
{
  "name": "PersonEntity",
  "type": "table",
  "tableName": "Person",
  "filter": "(age ge '{myNum}')",
  "connection": "MyStorageConnectionAppSetting",
  "direction": "in"
}
You also don't need to use the queue binding if you need HTTP input.
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "get"
      ],
      "route": "route/{FirstName}"
    },
    {
      "name": "PersonEntity",
      "type": "table",
      "tableName": "Person",
      "partitionKey": "{FirstName}",
      "connection": "MyStorageConnectionAppSetting",
      "direction": "in"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    }
  ]
}

Cosmos DB unique key

Is it possible to create a unique constraint on a property in a subcollection in JSON?
I have JSON like this:
{
  "id": 111,
  "DataStructs": [
    {
      "Name": "aaaaa",
      "DataStructCells": [
        {
          "RowName": "Default",
          "ColumnName": "Perfect / HT",
          "CellValue": "0.1"
        },
        {
          "RowName": "Default",
          "ColumnName": "100% / HT",
          "CellValue": "0.2"
        }
      ]
    }
  ]
}
I wanted to add a unique key to prevent adding two cells with the same RowName: Default.
When I created the collection I added the unique key /DataStructs/DataStructCells/RowName, but it doesn't work.
No, that is not possible. Unique keys only apply across different documents within a logical partition, not within the arrays of a single document. To achieve your requirement, you can split your array into separate documents in the same logical partition. You can refer to: How do I define unique keys involving properties in embedded arrays in Azure Cosmos DB?
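A sketch of that workaround, flattening each DataStructCell into its own document that shares a partition key with its siblings (the `partitionKey`, `structName`, and id scheme here are hypothetical; you would adapt them to your container):

```python
doc = {
    "id": "111",
    "DataStructs": [
        {
            "Name": "aaaaa",
            "DataStructCells": [
                {"RowName": "Default", "ColumnName": "Perfect / HT", "CellValue": "0.1"},
                {"RowName": "Default", "ColumnName": "100% / HT", "CellValue": "0.2"},
            ],
        }
    ],
}

def split_cells(doc):
    """One document per cell, all sharing a partition key, so that a
    unique key on /RowName is enforced within the logical partition."""
    out = []
    for struct in doc["DataStructs"]:
        for i, cell in enumerate(struct["DataStructCells"]):
            out.append({
                "id": f'{doc["id"]}-{struct["Name"]}-{i}',  # hypothetical id scheme
                "partitionKey": doc["id"],                   # same logical partition
                "structName": struct["Name"],
                **cell,
            })
    return out

docs = split_cells(doc)
```

With a unique key on /RowName, inserting the second document above would then be rejected, because both share the partition and both have RowName "Default", which is exactly the duplicate the question wants to prevent.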

Is it possible in Grafana using a source data table with a JSON field, to get an attribute from that field?

We configured Grafana to use a table input data source, and it works very well with the fields already defined (like time, status, values, etc.).
But now a new field has been added to the table that is a serialized JSON object, returned from a process we cannot modify.
We need to use a value (a timestamp) that is a property of this serialized object stored in that string field.
One serialized field value example is this:
{"timestamp":"2020-02-23T18:25:44.012Z","status":"fail","errors":[{"timestamp":"2020-02-23T18:25:43.511Z","message":"invalid key: key is shorter than minimum 16 bytes"},{"timestamp":"2020-02-23T18:25:43.851Z","message":"unauthorized: authorization not possible"}]}
The pretty print is:
{
  "timestamp": "2020-02-23T18:25:44.012Z",
  "status": "fail",
  "errors": [
    {
      "timestamp": "2020-02-23T18:25:43.511Z",
      "message": "invalid key: key is shorter than minimum 16 bytes"
    },
    {
      "timestamp": "2020-02-23T18:25:43.851Z",
      "message": "unauthorized: authorization not possible"
    }
  ]
}
Is there any way to use a value like field.timestamp or field.errors[0].timestamp?
Is there a plugin that allows it, or is it not possible at all?
Use PostgreSQL's JSON column operators in your Grafana query, e.g.:
SELECT
  field->'timestamp',
  ...
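Independent of which database operator you use, the paths asked about (field.timestamp, field.errors[0].timestamp) are plain JSON lookups once the string is parsed. Using the example value from the question with Python's standard json module:

```python
import json

# The serialized field value from the question.
serialized = ('{"timestamp":"2020-02-23T18:25:44.012Z","status":"fail",'
              '"errors":[{"timestamp":"2020-02-23T18:25:43.511Z",'
              '"message":"invalid key: key is shorter than minimum 16 bytes"},'
              '{"timestamp":"2020-02-23T18:25:43.851Z",'
              '"message":"unauthorized: authorization not possible"}]}')

field = json.loads(serialized)
print(field["timestamp"])               # field.timestamp
print(field["errors"][0]["timestamp"])  # field.errors[0].timestamp
```

The SQL answer does the same extraction inside the query so Grafana receives the value as an ordinary column.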

Is it possible to query by array content?

Using the FIWARE PointOfInterest data model, I would like to filter by POI category, which is an array. For instance:
http://130.206.118.244:1027/v2/entities?type=PointOfInterest&options=keyValues&attrs=name,category&limit=100&q=category=="311"
having as entity instances something like
{
  "id": "Museum-f85a8c66d617c23d33847f8110341a29",
  "type": "PointOfInterest",
  "name": "The Giant Squid Centre",
  "category": [
    "311"
  ]
},
{
  "id": "Museum-611f228f42c7fbfa4bd58bad94455055",
  "type": "PointOfInterest",
  "name": "Museo Extremeño e Iberoamericano de Arte Contemporáneo",
  "category": [
    "311"
  ]
},
Looking at the NGSIv2 specification, it seems this works the way you expect:
Single element, e.g. temperature==40. For an entity to match, it must contain the target property (temperature) and the target property value must be the query value (40) (or include the value, in case the target property value is an array).
I mean, in particular the part that says:
...or include the value, in case the target property value is an array.
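That matching rule, "equal, or contained in the target when the target value is an array", can be sketched as follows (a model of the spec's described behavior, not the broker's actual code):

```python
def q_matches(attr_value, query_value):
    """NGSIv2 single-element q match: equal, or membership test
    when the attribute value is an array."""
    if isinstance(attr_value, list):
        return query_value in attr_value
    return attr_value == query_value

entity = {"name": "The Giant Squid Centre", "category": ["311"]}
print(q_matches(entity["category"], "311"))  # True
```

So q=category=="311" matches both example entities above, because "311" is included in each category array.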

MongoDB - Document with different type of value

I'm very new to MongoDB, sorry for this question, but I have trouble understanding how to create a document that can contain a value with different types.
My document can contain data like this:
// Example ONE
{
  "customer": "aCustomer",
  "type": "TYPE_ONE",
  "value": "Value here"
}
// Example TWO
{
  "customer": "aCustomer",
  "type": "TYPE_TWO",
  "value": {
    "parameter1": "value for parameter one",
    "parameter2": "value for parameter two"
  }
}
// Example THREE
{
  "customer": "aCustomer",
  "type": "TYPE_THREE",
  "value": {
    "anotherParameter": "another value",
    "someParameter": "value for some parameter",
    ...
  }
}
The customer field will always be present, the type can differ (TYPE_ONE, TYPE_TWO and so on), and based on the type the value can be a string, an object, an array, etc.
Looking at this example, should I create three collections (one per type), or can the same collection (for example, one named "measurements") contain different kinds of values in the "value" field?
Trying some inserts in my DB instance I don't get any error (I'm able to insert an object, a string and an array in the value property), but I would like to know if this is the correct way.
I come from an RDBMS background, so I'm a bit confused right now. Thanks a lot for your support.
You can find the answer here: https://docs.mongodb.com/drivers/use-cases/product-catalog
MongoDB's dynamic schema means that each document need not conform to the same schema.
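One collection is fine; the usual pattern is to branch on the type discriminator when reading. A minimal sketch with plain dicts standing in for the documents (field names taken from the question, the dispatch logic is illustrative):

```python
def describe_value(doc):
    """Dispatch on the 'type' discriminator; 'value' may be any shape."""
    t = doc["type"]
    if t == "TYPE_ONE":   # value is a plain string
        return doc["value"]
    if t == "TYPE_TWO":   # value is an embedded object
        return ", ".join(f"{k}={v}" for k, v in doc["value"].items())
    raise ValueError(f"unknown type {t}")

doc1 = {"customer": "aCustomer", "type": "TYPE_ONE", "value": "Value here"}
doc2 = {"customer": "aCustomer", "type": "TYPE_TWO",
        "value": {"parameter1": "v1", "parameter2": "v2"}}

print(describe_value(doc1))  # Value here
print(describe_value(doc2))  # parameter1=v1, parameter2=v2
```

Both documents can live in the same "measurements" collection; queries like `{"type": "TYPE_TWO"}` still work because the discriminator field is shared even though the value shapes differ.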