How to find ID for existing Fi-Ware sensors - fiware-orion

I'm working with Fi-Ware and I would like to include existing information from smart cities in my project. By following the link below I could find information about the ID pattern and type of the different devices (for example OUTSMART.NODE.).
https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide#Sample_code
However, I don't know what comes after that pattern.
I've tried random numbers (OUTSMART.NODE.1 or OUTSMART.NODE.0001).
Is there some kind of list, or somewhere I can find that information?
Thank you!

In order to know the particular entity IDs for a given type, you can use a "discovery" query on the type associated with the sensor, using the .* global pattern. E.g., in order to get the IDs associated with the type "santander:traffic" you could use:
{
  "entities": [
    {
      "type": "santander:traffic",
      "isPattern": "true",
      "id": ".*"
    }
  ],
  "attributes": [
    "TimeInstant"
  ]
}
Using "TimeInstant" in the "attributes" field is not strictly needed. You can leave "attribute" empty, in order to get all the attributes from each sensor. However, if you are insterested only in the IDs, "TimeInstant" would suffice and you will save length in the JSON response (the respone of the above query is around 17KB, while if you use an empty "attributes" field, the response will be around 48KB).
EDIT: since the update to Orion 0.14.0 in orion.lab.fi-ware.org on July 2nd, 2014, the NGSI API implements pagination. The default limit is 20 entities, so if you want to get all of them you will need to implement pagination in your client, using the limit, offset and details URI parameters. Have a look at the pagination section in the user manual for details.
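As a rough illustration, the following Python sketch pages through the discovery query above using those URI parameters. It assumes the NGSIv1 /v1/queryContext endpoint on the standard port 1026 and a valid X-Auth-Token; the host and the token are placeholders you must adapt.

import requests

ORION = "http://orion.lab.fi-ware.org:1026"   # placeholder host/port
HEADERS = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Auth-Token": "YOUR_TOKEN_HERE",        # placeholder token
}
payload = {
    "entities": [{"type": "santander:traffic", "isPattern": "true", "id": ".*"}],
    "attributes": ["TimeInstant"],
}

ids, offset, limit = [], 0, 20                # 20 is the default page size
while True:
    # limit/offset/details are the pagination URI parameters mentioned above
    resp = requests.post(f"{ORION}/v1/queryContext",
                         params={"limit": limit, "offset": offset, "details": "on"},
                         json=payload, headers=HEADERS)
    resp.raise_for_status()
    elements = resp.json().get("contextResponses", [])
    if not elements:
        break
    ids += [e["contextElement"]["id"] for e in elements]
    offset += limit

print(ids)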

Related

list contains for structure data type in DMN decision table

I am planning to use Drools for executing the DMN models. However, I am having trouble writing a condition in a DMN decision table where the input is an array of objects of a structured data type and the condition is to check whether the array contains an object with specific field values. For example:
Input to decision table is as below:
[
  {"name": "abc", "lastname": "pqr"},
  {"name": "xyz", "lastname": "lmn"},
  {"name": "pqr", "lastname": "jkl"}
]
Expected output: true if the above list contains an element that matches {"name": "abc", "lastname": "pqr"}, i.e. both fields match on the same element of the list.
I see that FEEL has support for list contains, but I could not find the syntax for the case where the objects in the array are not primitive types like number or string but structures. So, I need help writing this condition in the decision table.
Thanks!
Edited description:
I am trying to achieve the following using the decision table, where details is a list of info structures. Unfortunately, as you can see, I am not getting the desired output even though my input list contains the specific element I am looking for.
Input: details = [{"name": "hello", "lastname": "world"}]
Expected Output = "Hello world" based on condition match in row 1 of the decision table.
Actual Output = null
NOTE: Also, in row 2 of the decision table, I check a condition where I am only interested in the name field.
Content for the DMN file can be found over here
In this question, the overall need and requirements for the decision table are not clear.
Regarding the part of the question about:
True if the above list contains an element that matches {"name": "abc", "lastname": "pqr"}
...
I see that FEEL has support for list contains, but I could not find the syntax for the case where the objects in the array are not primitive types like number or string but structures.
This can indeed be achieved with the list contains() function, described here.
Example expression
list contains(my list, {"name": "abc", "lastname": "pqr"})
where my list is the verbatim FEEL list from the original question statement.
Example run:
giving the expected output, true.
Naturally, two contexts (complex structures) are the same if all their properties and fields are equivalent.
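As a rough analogy (in Python, not FEEL, purely to illustrate the matching semantics), the check behaves like whole-structure equality against each element of the list:

users = [
    {"name": "abc", "lastname": "pqr"},
    {"name": "xyz", "lastname": "lmn"},
    {"name": "pqr", "lastname": "jkl"},
]
# An element matches only if every field is equal on that same element.
print(any(u == {"name": "abc", "lastname": "pqr"} for u in users))  # True
print(any(u == {"name": "abc", "lastname": "jkl"} for u in users))  # False: both values exist, but on different elements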
In DMN, there are multiple ways to achieve the same result.
If I understand the real goal of your use case correctly, I want to suggest a better approach, much easier to maintain from a design point of view.
First of all, you have a list of users as input, so these are the data types:
Then, you have to structure your decision a bit:
The decision node at least one user match will go through the user list and will check if there is at least one user that matches the conditions inside the matching BKM.
at least one user match can be implemented with the following FEEL expression:
some user in users satisfies matching(user)
The great benefit of this approach is that you can reason on a specific element of your list inside the matching BKM, which makes the matching decision table extremely straightforward:

What's the best practice to get user profile in RESTful API design

In my case, each user has a profile with lots of attributes, e.g. gender, age, name. What's the best practice for designing the RESTful API to get those attributes? The following are possible solutions:
Get all attributes in a single call
Get all attributes:
Request: GET http://api.domain.com/users/id/profile
Response: {"name" : "Jim", "gender" : "male", "age" : 12}
Get attribute one-by-one
Get attributes list:
Request: GET http://api.domain.com/users/id/profile
Response: { "attributes" : ["name", "gender", "age"] }
Get a specified attribute:
Request: GET http://api.domain.com/users/id/profile/name
Response: {"name" : "Jim"}
With the first solution, the client gets all attributes in a single call. However, the problem is that there are too many attributes, and we'll keep adding more attributes to the profile. I'm wondering which one is better?
If you have lots and lots of attributes, another approach would be to group them.
In REST, everything needs to be a resource (for example, but not limited to, something identifiable by a URL).
So you could have
GET http://api.domain.com/users/id/profile
and you get
{ "categories" : ["names", "address", "interests", "jobhistory", "publications", "blogs", "skills"] }
and then you query further. That does imply multiple round trips, but you would not have to query the many attributes one by one: instead of ending up with, say, 50 queries for 50 of 75 attributes, you might need only 3 queries to get the 50 attributes you want.
The first option definitely seems much better, primarily because it saves multiple calls - mind the clients as well: it will be much easier for them to fetch what they need in a single call instead of calling - more or less - the same resources multiple times.
It seems that what you are looking for is called resource expansion - you can read about it e.g. here.
In short, it assumes that the response you send is configurable with query params. If no params are included, some basic subset of attributes is returned. If params to be expanded are sent, the basic subset is returned along with the other attributes listed in the query param. You can also mix the two approaches: some attributes might be expanded via query params, others may be exposed as subresources - it depends on the size of the resource.
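As a rough sketch of what resource expansion can look like on the server side (assuming a Python/Flask service; load_profile(), the field names and the expand parameter are illustrative only, not a fixed convention):

from flask import Flask, jsonify, request

app = Flask(__name__)

BASIC_FIELDS = {"name", "gender", "age"}   # the basic subset, always returned

def load_profile(user_id):
    # hypothetical data access; replace with your own persistence layer
    return {"name": "Jim", "gender": "male", "age": 12,
            "address": "...", "interests": ["..."], "jobhistory": ["..."]}

@app.route("/users/<user_id>/profile")
def get_profile(user_id):
    profile = load_profile(user_id)
    # e.g. GET /users/id/profile?expand=address,interests
    expand = {f for f in request.args.get("expand", "").split(",") if f}
    wanted = BASIC_FIELDS | expand
    return jsonify({k: v for k, v in profile.items() if k in wanted})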
I would recommend dividing the user profile attributes into logical categories and making these categories available to your clients through query parameters, like ...
names (array of names (aliases)) : first, last, middle, prefix
addresses (array of addresses) : street, apt, city, state, country, county
jobs (array of jobs) : company, designation, start_date, end_date, city, state, country
Provide an API that returns the up-to-date list of categories available on the user profile, since documentation can get outdated.
GET http://api.domain.com/users/profiles/categories
Response:
{
  "categories": ["names", "address", "interests", "jobs"],
  "links": [
    {
      "rel": "users.profiles.categories",
      "href": "http://api.domain.com/users/profiles/categories"
    },
    {
      "rel": "users.profiles.category.names",
      "href": "http://api.domain.com/users/profiles?categories=names"
    },
    {
      "rel": "users.profiles.category.addresses",
      "href": "http://api.domain.com/users/profiles?categories=addresses"
    },
    {
      "rel": "users.profiles.category.interests",
      "href": "http://api.domain.com/users/profiles?categories=interests"
    },
    {
      "rel": "users.profiles.category.jobs",
      "href": "http://api.domain.com/users/profiles?categories=jobs"
    },
    {
      "rel": "users.profiles.category.all",
      "href": "http://api.domain.com/users/profiles?categories=all"
    }
  ]
}
With the above HATEOAS links, and depending on the categories mentioned in the query parameter, your service can query those entities from the database, form the response and return it to your client.
GET http://api.domain.com/users/id/profile?categories=names,address,interests,jobs
NOTE: if the comma (,) cannot be used directly in the URL, then you can use %2C (the URL-encoded value of ,) - see the sketch below.
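For example, a quick check with the Python standard library shows the encoded form of that query string (the host and path are just the illustrative ones from this answer); HTTP clients such as requests apply the same encoding automatically:

from urllib.parse import urlencode

# urlencode percent-encodes the commas separating the categories
query = urlencode({"categories": "names,address,interests,jobs"})
print(query)                                           # categories=names%2Caddress%2Cinterests%2Cjobs
print(f"http://api.domain.com/users/id/profile?{query}")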
Additionally, your actual GET API can also return HATEOAS links to the sub-categories, in case the client doesn't use the categories API to discover them all.
This is just one way to do it: an additional informative endpoint provides the available/supported categories (query parameters), and HATEOAS helps your client navigate through those available sub-categories.

Inserting multiple key-value pairs under a single _id in Cloudant DB at various times?

My requirement is to store JSON pairs from an MQTT subscriber, arriving at different times, under a single _id in Cloudant, but I'm facing an error while trying to insert a new JSON pair into an existing _id: it simply replaces the old one. I need at least 10 JSON pairs under one _id, injected at different times.
First, you should think carefully about the architectural decision to update a particular document multiple times. In general, this is discouraged, though it depends on your application. Instead, you could insert each new piece of information as a separate document and then use a map-reduce view to reflect the state of your application.
For example (I'm going to assume that you have multiple "devices", each with some kind of unique identifier, that need to add data to a cloudant DB)
PUT
{
  "info_a": "data a",
  "device_id": 123
}
{
  "info_b": "data b",
  "device_id": 123
}
{
  "info_a": "message a",
  "device_id": 1234
}
Then you'll need a map function like
_design/device/_view/state
function (doc) {
  emit(doc.device_id, 1);
}
Then you can GET the results of that view to see all of the "info_X" data that is associated with the particular device.
GET account.cloudant.com/databasename/_design/device/_view/state
{"total_rows":3,"offset":0,"rows":[
{"id":"28324b34907981ba972937f53113ac3f","key":123,"value":1},
{"id":"d50553d206d722b960fb176f11841974","key":123,"value":1},
{"id":"eaa710a5fa1ff4ba6156c997ddf6099b","key":1234,"value":1}
]}
Then you can use the query parameters to control the output, for example
GET account.cloudant.com/databasename/_design/device/_view/state?key=123&include_docs=true
{"total_rows":3,"offset":0,"rows":[
{"id":"28324b34907981ba972937f53113ac3f","key":123,"value":1,"doc":
{"_id":"28324b34907981ba972937f53113ac3f",
"_rev":"1-bac5dd92a502cb984ea4db65eb41feec",
"info_b":"data b",
"device_id":123}
},
{"id":"d50553d206d722b960fb176f11841974","key":123,"value":1,"doc":
{"_id":"d50553d206d722b960fb176f11841974",
"_rev":"1-a2a6fea8704dfc0a0d26c3a7500ccc10",
"info_a":"data a",
"device_id":123}}
]}
And now you have the complete state for device_id:123.
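If it helps, here is a minimal Python sketch of that same view query (the account, database name and credentials are placeholders; the view is the _design/device/_view/state map function shown above):

import requests

BASE = "https://account.cloudant.com/databasename"   # placeholder account/database
AUTH = ("username", "password")                       # placeholder credentials

resp = requests.get(f"{BASE}/_design/device/_view/state",
                    params={"key": 123, "include_docs": "true"},
                    auth=AUTH)
resp.raise_for_status()
for row in resp.json()["rows"]:
    print(row["doc"])   # each doc holds one "info_X" piece for device_id 123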
Timing
Another issue is the rate at which you're updating your documents.
Bottom line recommendation is that if you are only updating the document once per ~minute or less frequently, then it could be reasonable for your application to update a single document. That is, you'd add new key-value pairs to the same document with the same _id value. In order to do that, however, you'll need to GET the full doc, add the new key-value pair, and then PUT that document back to the database. You must make sure that you are providing the most recent _rev of that document, and you should also check for conflicts that could occur if the document is being updated by multiple devices.
If you are acquiring new data for a particular device at a high rate, you'll likely run into conflicts very frequently -- because cloudant is a distributed document store. In this case, you should follow something like the example I gave above.
Example flow for the second approach outlined by @gadamcox, for use cases where document updates are not required very frequently:
[...] you'd add new key-value pairs to the same document with the same _id value. In order to do that, however, you'll need to GET the full doc, add the new key-value pair, and then PUT that document back to the database.
Your application first fetches the existing document by id: (https://docs.cloudant.com/document.html#read)
GET /$DATABASE/100
{
"_id": "100",
"_rev": "1-2902191555...",
"No": ["1"]
}
Then your application updates the document in memory
{
"_id": "100",
"_rev": "1-2902191555...",
"No": ["1","2"]
}
and saves it in the database by specifying the _id and _rev (https://docs.cloudant.com/document.html#update)
PUT /$DATABASE/100
{
"_id": "100",
"_rev": "1-2902191555...",
"No":["1","2"]
}
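The same GET / modify / PUT cycle in Python might look roughly like this (the database URL and credentials are placeholders; retrying on HTTP 409 covers the conflict case mentioned above):

import requests

DOC_URL = "https://account.cloudant.com/databasename/100"   # placeholder database/doc id
AUTH = ("username", "password")                              # placeholder credentials

while True:
    doc = requests.get(DOC_URL, auth=AUTH).json()      # fetch the current doc, including _rev
    doc["No"].append("2")                               # update it in memory
    resp = requests.put(DOC_URL, json=doc, auth=AUTH)   # save it back with the fetched _rev
    if resp.status_code != 409:                         # 409 = conflict: someone else updated first
        resp.raise_for_status()
        break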

Is it possible to process objects in a Google Cloud Storage bucket in FIFO order?

In my web app, I need to pull objects from gcs one by one and process them.
So the question is,
"How do I send a request to gcs to get the next unprocessed object?"
What I’d like to do is to simply rely on the sort order provided by gcs and then just process the objects in this sorted list one by one.
That way, I only need to keep track of the last processed item in my app.
I’d like to rely on the sort order provided by the timeCreated timestamp on each individual object in the bucket.
When I query my bucket via the JSON API, I notice that the objects are returned sorted by timeCreated from oldest to newest.
For example, this query ...
returns this list ...
{
  "items": [
    {"name": "cars_train/00001.jpg", "timeCreated": "2016-03-23T19:19:47.506Z"},
    {"name": "cars_train/00002.jpg", "timeCreated": "2016-03-23T19:19:49.320Z"},
    {"name": "cars_train/00003.jpg", "timeCreated": "2016-03-23T19:19:50.228Z"},
    {"name": "cars_train/00004.jpg", "timeCreated": "2016-03-23T19:19:51.377Z"},
    {"name": "cars_train/00005.jpg", "timeCreated": "2016-03-23T19:19:51.778Z"},
    {"name": "cars_train/00006.jpg", "timeCreated": "2016-03-23T19:19:52.817Z"},
    {"name": "cars_train/00007.jpg", "timeCreated": "2016-03-23T19:19:53.868Z"},
    {"name": "cars_train/00008.jpg", "timeCreated": "2016-03-23T19:19:54.925Z"},
    {"name": "cars_train/00009.jpg", "timeCreated": "2016-03-23T19:19:58.426Z"},
    {"name": "cars_train/00010.jpg", "timeCreated": "2016-03-23T19:19:59.323Z"}
  ]
}
This sort order by timeCreated is exactly what I need, though I'm not certain whether I can rely on it always being true.
So, I could code my app to process this list by simply searching for the first timeCreated value greater than that of the last object I processed.
The problem is this list can be very large and searching through a huge list every single time the user presses the NEXT button is too computationally expensive.
I would like to be able to specify in my query to gcs to filter the list so that I return only the single item that I need.
The API does allow me to set the maxResults returned to a value of 1.
However, I do not see an option that would allow me to return only objects whose timeCreated value is greater than the value I specified.
I think what I am trying to do is probably fairly common, so I’m guessing that a solution may exist for this problem.
One workaround for this problem is to physically move each object that has been processed to another bucket.
That way the first item in the list would always be the oldest unprocessed one and I could simply send the request with maxResults=1.
But this adds complexity because it forces me to have 2 separate buckets for every project instead of 1.
Is there a way to filter this list of objects to only include ones whose timeCreated date is above a specified value?
In MySQL, it might be something like ...
SELECT name
FROM bucket
WHERE timeCreated > X
ORDER BY timeCreated
LIMIT 1
You can configure object change notifications on the bucket, and get a notification each time a new object arrives. That would allow you to process new objects without scanning a long listing each time. It also avoids the problem that listing a bucket is only eventually consistent (so, recently uploaded objects may not show up immediately when you list objects; I don't know if that's a problem for your app).
Details about object change notification are documented at https://cloud.google.com/storage/docs/object-change-notification.
Object listing in GCS is not sorted by timeCreated. Object listing results are always in alphabetical order. In your example, the two orders merely happen to coincide.
If you want to get a list of objects in the order they were uploaded, you must ensure that each object has a name alphabetically later than the name of any object uploaded before it. Even then, however, you must take care, as object listing is eventually consistent, which means that objects you upload may not immediately show up in a listing.
If some ordering of objects is critically important, it would be a good idea to maintain an index of the objects and their timestamps in a separate data structure, perhaps populated via object change notifications as Mike suggested.
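If the object count stays manageable, a simpler (though less robust) alternative is to list and sort by creation time on the client. A rough sketch with the google-cloud-storage Python client, assuming a placeholder bucket name and a persisted "last processed" timestamp, and subject to the eventual-consistency caveat above:

from datetime import datetime, timezone
from google.cloud import storage

client = storage.Client()
last_processed = datetime(2016, 3, 23, 19, 19, 51, tzinfo=timezone.utc)  # placeholder marker you persist yourself

blobs = client.list_blobs("my-bucket", prefix="cars_train/")   # the listing itself is alphabetical
newer = sorted((b for b in blobs if b.time_created > last_processed),
               key=lambda b: b.time_created)
if newer:
    next_blob = newer[0]          # oldest not-yet-processed object
    print(next_blob.name, next_blob.time_created)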

JIRA REST API -- How to Query On Issue Status Name

I'm using the JIRA REST API and I would like to query for all issues in "Resolved" status. The status field looks like this:
"status": {
"self": "https:\/\/jira.atlas.xx.com\/rest\/api\/2\/status\/5",
"description": "A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.",
"iconUrl": "https:\/\/jira.atlas.xx.com\/images\/icons\/statuses\/resolved.png",
"name": "Resolved",
"id": "5",
"statusCategory": {
"self": "https:\/\/jira.atlas.xx.com\/rest\/api\/2\/statuscategory\/3",
"id": 3,
"key": "done",
"colorName": "green",
"name": "Complete"
}
}
Currently the only way I know to do this is to query for status=5. It would be nice to make the query more intuitive and look for all issues using the status string "Resolved". Here is the query I'm using:
https://jira.atlas.xx.com/rest/api/2/search?jql=project=MYPROJECT and status=5 and fixVersion=15824&fields=id,key,description,status
Is it possible to query on status name?
Yes, you can query on status name as well.
I think you should use the official Jira documentation, especially this page, when using advanced searching options like JQL queries.
This documentation describes every part of your possible JQL queries. If you look at the Fields reference section, there you will find the fields as well as the possible attributes on which you can search. For example, in the case of the Status field:
You can search by Status name or Status ID (i.e. the number that JIRA automatically allocates to a Status).
According to this, your query can be modified easily
https://jira.atlas.xx.com/rest/api/2/search?jql=project=MYPROJECT and status=Resolved and fixVersion=15824&fields=id,key,description,status
If you can type the JQL in the browser, you can use it as a string for the REST API search resource. So you can indeed search by status name, suitably quoted.
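For instance, a small Python sketch of the same search through the REST API (the host, project and credentials are placeholders; the HTTP client takes care of URL-encoding the JQL string):

import requests

resp = requests.get(
    "https://jira.atlas.xx.com/rest/api/2/search",
    params={
        "jql": 'project = MYPROJECT AND status = "Resolved" AND fixVersion = 15824',
        "fields": "id,key,description,status",
    },
    auth=("user", "password"),    # placeholder credentials
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["status"]["name"])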