Orion Context Broker Using Match Pattern with number - fiware-orion

For example, I have a Person entity like the one below.
I want to query for persons whose phoneNumber contains "354".
I would use a query like this: GET /v2/entities?q=phoneNumber~=354.
So, is it possible to do a query like this in Orion Context Broker?
From what I have seen, the match pattern only supports target properties of type string.
Match pattern: ~=. The value matches a given pattern, expressed as a
regular expression, e.g. color~=ow. For an entity to match, it must
contain the target property (color) and the target property value must
match the string in the right-hand side, 'ow' in this example (brown
and yellow would match, black and white would not). This operation is
only valid for target properties of type string.
http://telefonicaid.github.io/fiware-orion/api/v2/stable/ Section: Simple Query Language
{
  "type": "Person",
  "isPattern": "false",
  "id": "1",
  "attributes": [
    {
      "name": "phoneNumber",
      "type": "string",
      "value": "0102354678"
    }
  ]
}
Many thanks.

It works as you describe.
For instance, using Orion 2.2.0 with an empty database at localhost:1026, create an entity like the one you propose (but using the NGSIv2 endpoint, as NGSIv1 is a deprecated API):
$ curl localhost:1026/v2/entities -H 'content-type: application/json' -d @- <<EOF
{
  "type": "Person",
  "id": "1",
  "phoneNumber": {
    "type": "string",
    "value": "0102354678"
  }
}
EOF
Then you can query with the "354" pattern and you will get the entity:
$ curl -s -S 'localhost:1026/v2/entities?q=phoneNumber~=354' | python -mjson.tool
[
  {
    "id": "1",
    "phoneNumber": {
      "metadata": {},
      "type": "string",
      "value": "0102354678"
    },
    "type": "Person"
  }
]
Conversely, if you query for a non-matching pattern (such as "999"), you will not get any entity:
$ curl -s -S 'localhost:1026/v2/entities?q=phoneNumber~=999' | python -mjson.tool
[]
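Note that this works because phoneNumber is stored as a string value ("0102354678"). As the documentation quoted above says, ~= is only valid for target properties of type string, so an attribute holding an actual number would be expected not to match. A quick sketch of that case (entity id "2" is just an illustration):
$ curl localhost:1026/v2/entities -H 'content-type: application/json' -d @- <<EOF
{
  "type": "Person",
  "id": "2",
  "phoneNumber": {
    "type": "Number",
    "value": 102354678
  }
}
EOF
$ curl -s -S 'localhost:1026/v2/entities?q=phoneNumber~=354&id=2' | python -mjson.tool
[]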

Related

Using REST API to create alerting rule in Kibana fails on 400 "Invalid action groups: default"

I have Elastic Cloud v7.13.2 and am trying to create an alert rule with a Slack action via the REST API. This is my curl invocation:
curl -u ****** -s -H 'kbn-xsrf: true' -H 'Content-Type: application/json' https://***********.westeurope.azure.elastic-cloud.com:9243/api/alerting/rule -X POST -d @src/rules/cpu_utilization.json
I expect the new rule to be created, but unfortunately I am getting the following error:
{
  "statusCode": 400,
  "error": "Bad Request",
  "message": "Invalid action groups: default"
}
The contents of src/rules/cpu_utilization.json are:
{
  "params": {
    "nodeType": "host",
    "criteria": [
      {
        "comparator": ">",
        "timeSize": 1,
        "metric": "cpu",
        "threshold": [
          80
        ],
        "timeUnit": "m"
      }
    ],
    "sourceId": "default"
  },
  "consumer": "alerts",
  "schedule": {
    "interval": "1m"
  },
  "tags": [],
  "name": "CPU2",
  "throttle": "1000d",
  "enabled": true,
  "rule_type_id": "metrics.alert.inventory.threshold",
  "notify_when": "onThrottleInterval",
  "actions": [
    {
      "group": "default",
      "id": "fce4c27f-d22a-4209-858c-253a06511c1b",
      "params": {
        "message": "{{alertName}} - {{context.group}} is in a state of {{context.alertState}}\n\nReason:\n{{context.reason}}"
      }
    }
  ]
}
The documentation clearly says:
Properties of the action objects:
group
(Required, string) Grouping actions is recommended for escalations for different types of alerts. If you don’t need this, set this value to default.
Is this a bug in ELK, or am I doing something wrong? I am able to use the API for other purposes, like listing and deleting rules. I am also able to create a rule without an action, but that doesn't seem very useful...
OK, I got an answer from Elastic support. Apparently, you can use another endpoint, GET /api/alerting/rule_types, to list all rule types. Then you need to find your rule type and look up the property default_action_group_id; it holds the correct value. E.g., in the above example it was:
"default_action_group_id": "metrics.inventory_threshold.fired"

How to use Schema registry for Kafka Connect AVRO

I have started exploring Kafka and Kafka Connect recently and did some initial setup.
Now I want to explore the Schema Registry part more.
My Schema Registry is started; what should I do now?
I have an AVRO schema stored in avro_schema.avsc.
Here is the schema:
{
  "name": "FSP-AUDIT-EVENT",
  "type": "record",
  "namespace": "com.acme.avro",
  "fields": [
    { "name": "ID", "type": "string" },
    { "name": "VERSION", "type": "int" },
    { "name": "ACTION_TYPE", "type": "string" },
    { "name": "EVENT_TYPE", "type": "string" },
    { "name": "CLIENT_ID", "type": "string" },
    { "name": "DETAILS", "type": "string" },
    { "name": "OBJECT_TYPE", "type": "string" },
    { "name": "UTC_DATE_TIME", "type": "long" },
    { "name": "POINT_IN_TIME_PRECISION", "type": "string" },
    { "name": "TIME_ZONE", "type": "string" },
    { "name": "TIMELINE_PRECISION", "type": "string" },
    { "name": "AUDIT_EVENT_TO_UTC_DT", "type": ["string", "null"] },
    { "name": "AUDIT_EVENT_TO_DATE_PITP", "type": "string" },
    { "name": "AUDIT_EVENT_TO_DATE_TZ", "type": "string" },
    { "name": "AUDIT_EVENT_TO_DATE_TP", "type": "string" },
    { "name": "GROUP_ID", "type": "string" },
    { "name": "OBJECT_DISPLAY_NAME", "type": "string" },
    { "name": "OBJECT_ID", "type": ["string", "null"] },
    { "name": "USER_DISPLAY_NAME", "type": ["string", "null"] },
    { "name": "USER_ID", "type": "string" },
    { "name": "PARENT_EVENT_ID", "type": ["string", "null"] },
    { "name": "NOTES", "type": ["string", "null"] },
    { "name": "SUMMARY", "type": ["string", "null"] }
  ]
}
Is my schema valid? I converted it online from JSON.
Where should I keep this schema file? I am not sure about the location.
Please guide me on the steps to follow.
I am sending records both from a Lambda function and from a JDBC source.
So basically, how can I enforce the AVRO schema and test it?
Do I have to change anything in the avro-consumer properties file?
Or is this the correct way to register a schema?
./bin/kafka-avro-console-producer \
--broker-list b-3.**:9092,b-**:9092,b-**:9092 --topic AVRO-AUDIT_EVENT \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema" : "{\"type\":\"struct\",\"fields\":[{\"type\":\"string\",\"optional\":false,\"field\":\"ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"VERSION\"},{\"type\":\"string\",\"optional\":true,\"field\":\"ACTION_TYPE\"},{\"type\":\"string\",\"optional\":true,\"field\":\"EVENT_TYPE\"},{\"type\":\"string\",\"optional\":true,\"field\":\"CLIENT_ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"DETAILS\"},{\"type\":\"string\",\"optional\":true,\"field\":\"OBJECT_TYPE\"},{\"type\":\"string\",\"optional\":true,\"field\":\"UTC_DATE_TIME\"},{\"type\":\"string\",\"optional\":true,\"field\":\"POINT_IN_TIME_PRECISION\"},{\"type\":\"string\",\"optional\":true,\"field\":\"TIME_ZONE\"},{\"type\":\"string\",\"optional\":true,\"field\":\"TIMELINE_PRECISION\"},{\"type\":\"string\",\"optional\":true,\"field\":\"GROUP_ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"OBJECT_DISPLAY_NAME\"},{\"type\":\"string\",\"optional\":true,\"field\":\"OBJECT_ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"USER_DISPLAY_NAME\"},{\"type\":\"string\",\"optional\":true,\"field\":\"USER_ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"PARENT_EVENT_ID\"},{\"type\":\"string\",\"optional\":true,\"field\":\"NOTES\"},{\"type\":\"string\",\"optional\":true,\"field\":\"SUMMARY\"},{\"type\":\"string\",\"optional\":true,\"field\":\"AUDIT_EVENT_TO_UTC_DT\"},{\"type\":\"string\",\"optional\":true,\"field\":\"AUDIT_EVENT_TO_DATE_PITP\"},{\"type\":\"string\",\"optional\":true,\"field\":\"AUDIT_EVENT_TO_DATE_TZ\"},{\"type\":\"string\",\"optional\":true,\"field\":\"AUDIT_EVENT_TO_DATE_TP\"}],\"optional\":false,\"name\":\"test\"}"}' http://localhost:8081/subjects/view/versions
What do I have to do next?
When I try to look at my schema, I get only the following:
curl --silent -X GET http://localhost:8081/subjects/AVRO-AUDIT-EVENT/versions/latest
This is the result:
{"subject":"AVRO-AUDIT-EVENT","version":1,"id":161,"schema":"{\"type\":\"string\",\"optional\":false}"}
Why do I not see my full registered schema?
Also, when I try to delete the schema, I get the error below:
{"error_code":405,"message":"HTTP 405 Method Not Allowed"}
I am not sure if my schema is registered correctly.
Please help me.
Thanks in advance.
Is my schema valid?
You can use the REST API of the Registry to try and submit it and see...
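For example, a sketch of submitting it (assuming jq >= 1.6 is available to wrap the schema file into the {"schema": "..."} envelope the Registry expects, a Registry on localhost:8081, and a subject name following the usual <topic>-value convention):
jq -n --rawfile s avro_schema.avsc '{schema: $s}' | \
  curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data @- http://localhost:8081/subjects/AVRO-AUDIT_EVENT-value/versions
If the file is not valid Avro, the Registry rejects the request with an error body instead of returning an id.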
Where should I keep this schema file? I am not sure about the location.
It's not clear how you're sending messages...
If you actually wrote Kafka producer code, you store it within your code (as a string) or as a resource file. If using Java, you can instead use the SchemaBuilder class to create the Schema object.
You need to rewrite your producer to use an Avro schema and the Avro serializers, if you haven't already.
If we create an AVRO schema, will it work for JSON as well?
Avro is a binary format, but there is a JSONDecoder for it.
What should be the URL for our AVRO schema properties file?
It needs to be the address of your Schema Registry, once you figure out how to start it (with schema-registry-start).
Do I have to change anything in the avro-consumer properties file?
You need to use the Avro Deserializer.
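For instance, a minimal sketch of consumer properties (the deserializer class is Confluent's KafkaAvroDeserializer; the Registry address is an assumption):
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
schema.registry.url=http://localhost:8081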
Is this the correct way to register a schema?
./bin/kafka-avro-console-producer ...
Not quite. That's how you produce a message with a schema (and you need to use the correct schema). You also must provide --property schema.registry.url.
You use the REST API of the Registry to register and verify schemas.
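Putting those two points together, a sketch of a corrected producer invocation (broker addresses masked as in the question; reading the schema from avro_schema.avsc and the Registry address are assumptions):
./bin/kafka-avro-console-producer \
  --broker-list b-3.**:9092,b-**:9092,b-**:9092 --topic AVRO-AUDIT_EVENT \
  --property schema.registry.url=http://localhost:8081 \
  --property value.schema="$(cat avro_schema.avsc)"
Afterwards you can verify what was registered for the topic:
curl --silent http://localhost:8081/subjects/AVRO-AUDIT_EVENT-value/versions/latest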

hbase fuzzy/filter list matching for REST from shell

I am trying to formulate some RESTful calls to return specific data from an HBase table using fuzzy logic OR multiple filters (a FilterList). My row key is made up of 'BatchId + UserId + Timestamp + ModelId'. As an example, I would like to be able to find all people where the row key contains a UserId of 'xyz' and a ModelId of 'yxz' (irrespective of the BatchId and Timestamp values).
I've had no luck replicating fuzzy filters from the shell, so as a last resort I'm attempting to use a filter list (multiple filters) to filter on each of the columns individually (this comes at a performance cost, which I can accept).
With regard to the filter list, when trying to filter on the row key itself, I am not sure what value to pass for the qualifier and column family, or what the syntax is for adding more than one filter. Any help is greatly appreciated.
Find my curl command and the contents of the args file for the filter list below.
CURL:
curl -vi -X PUT -H "Content-Type: text/xml" -d @args.xml "host-rest-machine-address/namespace:table/scanner"
ARGS.XML:
<Scanner batch="1024">
  <filter>
    {
      "type": "FilterList",
      "op": "MUST_PASS_ONE",
      "filters": [{
        "type": "FilterList",
        "op": "MUST_PASS_ALL",
        "filters": [{
          "type": "FamilyFilter",
          "op": "EQUAL",
          "comparator": {
            "type": "BinaryComparator",
            "value": "Y2Yx"
          }
        }, {
          "type": "QualifierFilter",
          "op": "EQUAL",
          "comparator": {
            "type": "BinaryComparator",
            "value": "cm93S2V5"
          }
        }, {
          "type": "RowFilter",
          "op": "EQUAL",
          "comparator": {
            "type": "BinaryComparator",
            "value": "MjAwMDAyMDE4OTM3Mw=="
          }
        }]
      }]
    }
  </filter>
</Scanner>
My column family is cf1 (not sure if this is applicable when searching against the row key?).
Qualifier: the column name in the HBase table (also unsure how to refer to the row key here; I have tried row, rowkey, and my SQL alias on import, with no luck).
Value: the value to filter on for the given column/table.
Note: all values being passed are base64 encoded.
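For what it's worth, my best guess so far is that a RowFilter needs no family or qualifier at all, since its comparator is applied to the row key directly, and that a single regular expression could cover both components. An untested sketch (this assumes the key components are stored as printable strings, and that RegexStringComparator values are passed as plain strings rather than base64):
<Scanner batch="1024">
  <filter>
    {
      "type": "RowFilter",
      "op": "EQUAL",
      "comparator": {
        "type": "RegexStringComparator",
        "value": "xyz.*yxz$"
      }
    }
  </filter>
</Scanner>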
Thanks in advance

How can I use the BigQuery REST API from the command line?

Attempting to make a plain GET request to one of the BigQuery REST APIs gives an error that looks like this:
curl https://www.googleapis.com/bigquery/v2/projects/$PROJECT_ID/jobs/$JOBID
Output:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "Login Required",
        "locationType": "header",
        "location": "Authorization",
        ...
What is the correct way to invoke one of the REST APIs from the command line, such as the query or insert APIs? The API reference has a "Try this API" feature, but the examples don't translate directly into something you can run from the command line.
As a disclaimer, when working from the command line, the bq tool will usually be sufficient, and for more complex use cases the BigQuery client libraries enable programming with BigQuery from multiple languages. It can still be useful, however, to make plain requests to the REST APIs to see how certain APIs work at a low level.
First, make sure that you have installed the Google Cloud SDK. This should include the gcloud and bq command-line tools. If you haven't already, authorize your account by running this command from your terminal:
gcloud auth login
This should prompt you to log in and then give you an access code that you can paste into your terminal. (The exact process may change over time).
Now let's try a query using the BigQuery REST API, calling the jobs.query method. Modify this script with your own project name, which you can find from the Google Cloud Console, then paste the script into your terminal:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"kind\":\"bigquery#queryRequest\",\"useLegacySql\":false,\"query\":$QUERY}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries
If it worked, you should see output that looks like this:
{
  "kind": "bigquery#queryResponse",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<your project ID>",
    "jobId": "<your job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": false
}
If you haven't set up the bq command-line tool, you can use bq init from your terminal to do so. Once you have, you can try running the same query using it:
bq query --use_legacy_sql=False "SELECT 1 AS x, 'foo' AS y;"
You can also see the REST API requests that the bq tool makes by passing the --apilog= option:
bq --apilog= query --use_legacy_sql=False "SELECT [1, 2, 3] AS x;"
Now let's try an example using the jobs.insert method instead of the query API. Run this script, replacing YOUR_PROJECT_NAME with your project name:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"configuration\":{\"query\":{\"useLegacySql\":false,\"query\":${QUERY}}}}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs
Unlike with the query API, which returned a response immediately, here you will see a result that looks similar to this:
{
  "kind": "bigquery#job",
  "etag": "\"<etag string>\"",
  "id": "<project name>:<job ID>",
  "selfLink": "https://www.googleapis.com/bigquery/v2/projects/<project name>/jobs/<job ID>",
  "jobReference": {
    "projectId": "<project name>",
    "jobId": "<job ID>"
  },
  "configuration": {
    "query": {
      "query": "SELECT 1 AS x, 'foo' AS y;",
      "destinationTable": {
        "projectId": "<project name>",
        "datasetId": "<anonymous dataset>",
        "tableId": "<anonymous table>"
      },
      "createDisposition": "CREATE_IF_NEEDED",
      "writeDisposition": "WRITE_TRUNCATE",
      "useLegacySql": false
    }
  },
  "status": {
    "state": "RUNNING"
  },
  "statistics": {
    "creationTime": "<timestamp millis>",
    "startTime": "<timestamp millis>"
  },
  "user_email": "<your email address>"
}
Notice the status:
"status": {
"state": "RUNNING"
},
If you want to check on the job now, you can use the jobs.get method. Similar to before, run this from your terminal, using the job ID from the output in the previous step:
PROJECT="YOUR_PROJECT_NAME"
JOB_ID="YOUR_JOB_ID"
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID
If the query is done, you'll get a response that indicates as much:
...
"status": {
"state": "DONE"
},
...
Finally, we can make a request to fetch the query results, also using the REST API.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries/$JOB_ID
The output will look similar to when we used the jobs.query method above:
{
  "kind": "bigquery#getQueryResultsResponse",
  "etag": "\"<etag string>\"",
  "schema": {
    "fields": [
      {
        "name": "x",
        "type": "INTEGER",
        "mode": "NULLABLE"
      },
      {
        "name": "y",
        "type": "STRING",
        "mode": "NULLABLE"
      }
    ]
  },
  "jobReference": {
    "projectId": "<project ID>",
    "jobId": "<job ID>"
  },
  "totalRows": "1",
  "rows": [
    {
      "f": [
        {
          "v": "1"
        },
        {
          "v": "foo"
        }
      ]
    }
  ],
  "totalBytesProcessed": "0",
  "jobComplete": true,
  "cacheHit": true
}

Jira Rest API: Requesting issue(s) of a specific user in one or more projects (Beginner)

For testing and practice purposes, I want to make a specific request to Jira using its REST API:
I want to list all issues from a specific user in one or more specific projects.
I tried it with SoapUI, but I was not able to create or get my results with simple HTTP GET requests (I don't know how to combine multiple values and parameters). The other way would be to use a scripting language, but I don't know which one to use.
The documentation is somewhat confusing for a beginner like me, and I would like to know how to combine different values and parameters and how to start in an easy way.
Try using Advanced REST Client for the Chrome browser to make your REST requests.
The examples below (from the official documentation) use curl, but it is simple to adapt them to Advanced REST Client. Don't forget the authentication.
Link to Advanced REST Client
Example of creating an issue:
Request
curl -D- -u fred:fred -X POST --data {see below} -H "Content-Type: application/json" http://localhost:8090/rest/api/2/issue/
Data
{
  "fields": {
    "project": {
      "key": "TEST"
    },
    "summary": "REST ye merry gentlemen.",
    "description": "Creating of an issue using project keys and issue type names using the REST API",
    "issuetype": {
      "name": "Bug"
    }
  }
}
Response
{
  "id": "39000",
  "key": "TEST-101",
  "self": "http://localhost:8090/rest/api/2/issue/39000"
}
Example of querying for issues:
Request:
curl -D- -u fred:fred -X GET -H "Content-Type: application/json" http://kelpie9:8081/rest/api/2/search?jql=assignee=fred
Response:
{
  "expand": "schema,names",
  "startAt": 0,
  "maxResults": 50,
  "total": 6,
  "issues": [
    {
      "expand": "html",
      "id": "10230",
      "self": "http://kelpie9:8081/rest/api/2/issue/BULK-62",
      "key": "BULK-62",
      "fields": {
        "summary": "testing",
        "timetracking": null,
        "issuetype": {
          "self": "http://kelpie9:8081/rest/api/2/issuetype/5",
          "id": "5",
          "description": "The sub-task of the issue",
          "iconUrl": "http://kelpie9:8081/images/icons/issue_subtask.gif",
          "name": "Sub-task",
          "subtask": true
        },
        "customfield_10071": null
      },
      "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-62/transitions"
    },
    {
      "expand": "html",
      "id": "10004",
      "self": "http://kelpie9:8081/rest/api/2/issue/BULK-47",
      "key": "BULK-47",
      "fields": {
        "summary": "Cheese v1 2.0 issue",
        "timetracking": null,
        "issuetype": {
          "self": "http://kelpie9:8081/rest/api/2/issuetype/3",
          "id": "3",
          "description": "A task that needs to be done.",
          "iconUrl": "http://kelpie9:8081/images/icons/task.gif",
          "name": "Task",
          "subtask": false
        }
      },
      "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-47/transitions"
    }
  ]
}
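To combine several values and parameters, as the question asks (a specific assignee in one or more specific projects), you can extend the JQL and let curl handle the URL encoding. A sketch, with TEST and BULK as placeholder project keys:
curl -D- -u fred:fred -G -H "Content-Type: application/json" \
  --data-urlencode "jql=assignee=fred AND project in (TEST, BULK)" \
  http://kelpie9:8081/rest/api/2/search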