Can I change a Cloud SQL instance's number of CPUs and memory via API or programmatically? - google-cloud-sql

I would like to change a PostgreSQL instance's number of CPUs and memory to custom values, such as 2 vCPUs and 5 GB of memory, via the API, but I haven't found a way to do so.
The instance settings page shows Cores and Memory as options, but when I try sending a simple JSON body with the curl example given here,
{
"settings": {
"cores": 2,
"memory": 5
}
}
nothing happens.
I found a way to get the existing settings:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
"https://sqladmin.googleapis.com/v1/projects/MYPROJECT/instances/MYINSTANCE"
The returned JSON has dataDiskSizeGb, but nothing related to CPUs or memory that is obvious to me.
{
"kind": "sql#instance",
"state": "RUNNABLE",
"databaseVersion": "POSTGRES_12",
"settings": {
"authorizedGaeApplications": [],
"tier": "db-custom-1-3840",
"kind": "sql#settings",
"availabilityType": "ZONAL",
"pricingPlan": "PER_USE",
"replicationType": "SYNCHRONOUS",
"activationPolicy": "ALWAYS",
"ipConfiguration": {
"privateNetwork": "projects/MYPROJECT/global/networks/default",
"authorizedNetworks": [],
"ipv4Enabled": true
},
"locationPreference": {
"zone": "southamerica-east1-c",
"kind": "sql#locationPreference"
},
"dataDiskType": "PD_SSD",
"maintenanceWindow": {
"kind": "sql#maintenanceWindow",
"hour": 0,
"day": 0
},
"backupConfiguration": {
"startTime": "08:00",
"kind": "sql#backupConfiguration",
"location": "us",
"backupRetentionSettings": {
"retentionUnit": "COUNT",
"retainedBackups": 7
},
"enabled": true,
"replicationLogArchivingEnabled": false,
"pointInTimeRecoveryEnabled": false,
"transactionLogRetentionDays": 7
},
"settingsVersion": "4",
"storageAutoResizeLimit": "0",
"storageAutoResize": false,
"dataDiskSizeGb": "10"
},
"etag": "079...039",
"ipAddresses": [
{
"type": "PRIMARY",
"ipAddress": "xx.xxx.x.xxx"
},
{
"type": "OUTGOING",
"ipAddress": "xx.xx.xxx.xx"
},
{
"type": "PRIVATE",
"ipAddress": "xx.xx.xxx.xx"
}
],
"serverCaCert": {
"kind": "sql#sslCert",
"certSerialNumber": "0",
"cert": "-----BEGIN CERTIFICATE-----\nMII......c=\n-----END CERTIFICATE-----",
"commonName": "C=US,O=Google\\, Inc,CN=Google Cloud SQL Server CA,dnQualifier=9f7...e0c",
"sha1Fingerprint": "fff...8fb",
"instance": "MYINSTANCE",
"createTime": "2021-10-05T17:59:18.971Z",
"expirationTime": "2031-10-03T18:00:18.971Z"
},
"instanceType": "CLOUD_SQL_INSTANCE",
"project": "MYPROJECT",
"serviceAccountEmailAddress": "abc...#gcp-sa-cloud-sql.iam.gserviceaccount.com",
"backendType": "SECOND_GEN",
"selfLink": "https://sqladmin.googleapis.com/v1/projects/MYPROJECT/instances/MYINSTANCE",
"connectionName": "MYPROJECT:southamerica-east1:MYINSTANCE",
"name": "MYINSTANCE",
"region": "southamerica-east1",
"gceZone": "southamerica-east1-c",
"createTime": "2021-10-05T17:57:47.539Z"
}

To update the number of CPUs and the memory: in the "settings" REST reference there is a field "tier" that represents the CPU and memory of the instance. In your example it is "db-custom-1-3840"; the value encodes the CPU and memory as db-custom-[CPU]-[MEMORY_MB], meaning the instance has 1 vCPU and 3840 MB of memory. To change the machine to 2 vCPUs and 5 GB of memory, "tier" should have the value "db-custom-2-5120".
For testing purposes I initially created an instance with 4 vCPUs and 26 GB of memory.
To change the CPU and memory, follow the steps below:
request.json:
{
"settings": {
"tier": "db-custom-2-5120"
}
}
NOTE: The memory value must be a multiple of 256 MB, hence the value 5120 (5 GB = 5120 MB).
Curl command:
curl -X PATCH \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/your-project-name/instances/your-instance-name"
This will return a long-running operation.
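If you want to check on that operation programmatically, you can poll the Cloud SQL Admin API's operations endpoint with the operation name returned by the PATCH (a minimal sketch; OPERATION_ID is a placeholder for that returned name):
OPERATION_ID="operation-id-from-the-patch-response"
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth print-access-token) \
"https://sqladmin.googleapis.com/v1/projects/your-project-name/operations/$OPERATION_ID"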
When I run a GET:
curl -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
"https://sqladmin.googleapis.com/v1/projects/your-project-name/instances/your-instance-name"
the change is reflected.
The new instance configuration is also visible in the Cloud Console under SQL > Edit.
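Alternatively, the same change can usually be made without hand-crafting the JSON by using the gcloud CLI, which takes the CPU and memory directly (a sketch, assuming gcloud is installed and authorized; note that changing the machine type restarts the instance, and flag spellings may vary slightly between SDK versions):
# patch the instance to 2 vCPUs and 5 GiB (5120 MiB) of memory
gcloud sql instances patch your-instance-name --cpu=2 --memory=5GiB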

Related

I cannot send southbound commands via the context broker (Orion-LD)

Service group provisioning:
curl -iX POST 'http://localhost:4041/iot/services' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /' \
-H 'Content-Type: application/json' \
--data-raw '{
"services": [
{
"apikey": "4jggokgpepnvsb2uv4s40d59ov",
"entity_type": "LightFixture",
"resource": ""
}
]
}'
Actuator provisioning:
curl -L -X POST 'http://localhost:4041/iot/devices' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /' \
-H 'Content-Type: application/json' \
--data-raw '{
"devices": [
{
"device_id": "LightFixture00",
"entity_name": "urn:ngsi-ld:LightFixture:00",
"entity_type": "LightFixture",
"protocol": "PDI-IoTA-JSON",
"transport": "MQTT",
"commands": [
{
"name": "on",
"type": "command"
},
{
"name": "off",
"type": "command"
}
],
"static_attributes": [
{
"name": "refPole",
"type": "Relationship",
"value": "urn:ngsi-ld:Pole:0"
}
]
}
]
}'
Sending the command through the IoT agent (works correctly):
curl -L -X PATCH 'http://localhost:4041/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00/attrs/on' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /' \
-H 'Content-Type: application/json' \
--data-raw '{
"type": "Property",
"value": ""
}'
Sending the command to the context broker (Orion-LD):
curl -L -X PATCH 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00/attrs/on' \
-H 'NGSILD-Tenant: openiot' \
-H 'Content-Type: application/json' \
--data-raw '{
"type": "Property",
"value": ""
}'
This does not work:
msg=***** ERROR Entity/Attribute not found: Entity 'urn:ngsi-ld:LightFixture:00', Attribute 'on'
When I make the following request:
curl -L -X GET 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00' \
-H 'NGSILD-Tenant: openiot' \
-H 'Accept: application/json'
Contrary to what appears in this tutorial, I don't have the "on" and "off" commands and I suspect this is the reason for the above error:
{
"id": "urn:ngsi-ld:LightFixture:00",
"type": "LightFixture",
"refPole": {
"object": "urn:ngsi-ld:Pole:0",
"type": "Relationship",
"observedAt": "2022-08-15T01:44:00.605Z"
},
"on_status": {
"value": {
"#type": "commandStatus",
"#value": "OK"
},
"type": "Property",
"observedAt": "2022-08-15T01:24:07.900Z"
},
"on_info": {
"value": {
"#type": "commandResult",
"#value": ""
},
"type": "Property",
"observedAt": "2022-08-15T01:24:07.900Z"
},
"off_status": {
"value": {
"#type": "commandStatus",
"#value": "OK"
},
"type": "Property",
"observedAt": "2022-08-15T01:44:00.605Z"
},
"off_info": {
"value": {
"#type": "commandResult",
"#value": ""
},
"type": "Property",
"observedAt": "2022-08-15T01:44:00.605Z"
}
}
Context sources (curl -L -X GET 'http://localhost:1026/ngsi-ld/v1/csourceRegistrations' -H 'NGSILD-Tenant: openiot'):
[
{
"id": "urn:ngsi-ld:ContextSourceRegistration:...",
"type": "ContextSourceRegistration",
"endpoint": "http://iot-agent:4041",
"information": [
{
"entities": [
{
"id": "urn:ngsi-ld:LightFixture:00",
"type": "LightFixture"
}
],
"properties": [
"on",
"off"
]
}
]
}
]
Edit
After updating the IoT agent to the latest version ("1.24.0"), the request curl -L -X GET 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00' -H 'NGSILD-Tenant: openiot' now returns the on and off commands:
{
"id": "urn:ngsi-ld:LightFixture:00",
"type": "LightFixture",
"refPole": {
"type": "Relationship",
"object": "urn:ngsi-ld:Pole:0",
},
"on_status": {
"type": "Property",
"value": {
"#type": "commandStatus",
"#value": "UNKNOWN"
}
},
"on_info": {
"type": "Property",
"value": {
"#type": "commandResult",
"#value": " "
}
},
"off_status": {
"type": "Property",
"value": {
"#type": "commandStatus",
"#value": "UNKNOWN"
}
},
"off_info": {
"type": "Property",
"value": {
"#type": "commandResult",
"#value": " "
}
},
"on": {
"type": "Property",
"value": {
"#type": "command",
"#value": ""
}
},
"off": {
"type": "Property",
"value": {
"#type": "command",
"#value": ""
}
}
}
But I still get the same error:
msg=***** ERROR Entity/Attribute not found: Entity 'urn:ngsi-ld:LightFixture:00', Attribute 'on' (https://uri=etsi=org/ngsi-ld/default-context/on) (status code: 404)
One thing that may be relevant - GET request output:
In this tutorial:
"on": {
"type": "command",
"value": ""
},
"off": {
"type": "command",
"value": ""
}
Mine:
"on": {
"type": "Property",
"value": {
"#type": "command",
"#value": ""
}
},
"off": {
"type": "Property",
"value": {
"#type": "command",
"#value": ""
}
}
This appears to be @context related. Internally, an NGSI-LD context broker holds all of its attributes as expanded URIs. With a GET these are usually reduced to short names using a compaction operation, so you see short attribute names; however, when you PATCH you must be careful to supply the correct user @context, as the payload is expanded prior to processing.
If you do curl -L -X GET 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00' and don't supply a user @context, then the returned entity will have all of its attributes expanded.
As a check, you probably want to GET with and without your user context:
curl -iX GET 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00' \
-H 'NGSILD-Tenant: openiot' \
-H 'Link: <http://context/ngsi-context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"'
This should return all attributes using short names.
curl -iX GET 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00' \
-H 'NGSILD-Tenant: openiot'
This should return all attributes using long URIs.
It could be the case that "on" has previously been defined in your user @context. If you do a PATCH and don't supply the user @context, only the core NGSI-LD @context is processed. This contains the final line:
"@vocab": "https://uri.etsi.org/ngsi-ld/default-context/"
which means that all unknown attributes are placed under default-context/. However, I assume that the term "on" was registered using a different URI, so that https://uri=etsi=org/ngsi-ld/default-context/on is not recognised as an attribute.
You can check the registrations using:
curl -G -iX GET 'http://localhost:1026/ngsi-ld/v1/csourceRegistrations/' \
-H 'Accept: application/ld+json' \
-H 'Link: <http://context/ngsi-context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"' \
-d 'type=LightFixture'
When running an IoT Agent in NGSI-LD mode, you must supply a user @context - this is usually a Docker environment variable:
- "IOTA_JSON_LD_CONTEXT=http://context/ngsi-context.jsonld"
That is the user @context used to expand the entity attribute URIs, and it is supplied with the registration of the command.
Obviously if you omit the Link header you can also check the expanded attributes:
curl -G -iX GET 'http://localhost:1026/ngsi-ld/v1/csourceRegistrations/' \
-H 'Accept: application/ld+json' \
-d 'type=http://whatever/my/uri-is/LightFixture'
If something doesn't expand (like Property), it is defined in the core context. If an attribute doesn't expand to the URI from your user @context, it has fallen into the default context.
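Given the above, the usual fix is to send the PATCH with the same user @context that the IoT Agent used when registering the command, so that "on" expands to the registered URI rather than into the default context. A sketch, assuming the context file is served at http://context/ngsi-context.jsonld as in the tutorial:
curl -L -X PATCH 'http://localhost:1026/ngsi-ld/v1/entities/urn:ngsi-ld:LightFixture:00/attrs/on' \
-H 'NGSILD-Tenant: openiot' \
-H 'Content-Type: application/json' \
-H 'Link: <http://context/ngsi-context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"' \
--data-raw '{
"type": "Property",
"value": ""
}'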

Getting the latest execution for a job via the Rundeck API

I'm using the latest version of Rundeck (3.3.10) and I'm having trouble getting the latest execution for a job via the Rest API.
If I call api/38/job//executions?max=1 it doesn't seem to bring back the latest execution if it is still running. Ideally, I'd also like to be able to get the latest execution's Start Time, End Time, User and result for each job in a single API call, but I'd resigned myself to calling the API once per job. There doesn't seem to be any way to sort the executions you get back from the API - they seem to be sorted by status first, so the running jobs appear at the end of the list.
Does anyone know a way around this? Thanks.
You can get that information using the executions?status=running&max=1 call.
Script example:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="38"
rdeck_token="YRVaZikt64Am85RyLo1nyq8U1Oe4Q8J7 "
# specific api call info
rdeck_job="03f28add-84f2-4013-b8f5-e48feaf5977c"
# api call
curl --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/executions?status=running&max=1" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" \
--header "Content-Type: application/json"
Output:
{
"paging": {
"count": 1,
"total": 1,
"offset": 0,
"max": 1
},
"executions": [
{
"id": 7,
"href": "http://localhost:4440/api/38/execution/7",
"permalink": "http://localhost:4440/project/ProjectEXAMPLE/execution/show/7",
"status": "running",
"project": "ProjectEXAMPLE",
"executionType": "user",
"user": "admin",
"date-started": {
"unixtime": 1617304896289,
"date": "2021-04-01T19:21:36Z"
},
"job": {
"id": "03f28add-84f2-4013-b8f5-e48feaf5977c",
"averageDuration": 13796,
"name": "HelloWorld",
"group": "",
"project": "ProjectEXAMPLE",
"description": "",
"href": "http://localhost:4440/api/38/job/03f28add-84f2-4013-b8f5-e48feaf5977c",
"permalink": "http://localhost:4440/project/ProjectEXAMPLE/job/show/03f28add-84f2-4013-b8f5-e48feaf5977c"
},
"description": "sleep 20; echo \"hi\"",
"argstring": null,
"serverUUID": "630be43c-e71f-4102-be96-d017dd22233e"
}
]
}
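To pull out just the fields asked about (for example start time, user and status), the response can also be piped through jq (a sketch that goes beyond the call above and assumes jq is installed; date-ended only appears once the execution has finished):
curl -s --location --request GET "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/job/$rdeck_job/executions?status=running&max=1" \
--header "Accept: application/json" \
--header "X-Rundeck-Auth-Token: $rdeck_token" \
| jq -r '.executions[0] | "\(.["date-started"].date) \(.user) \(.status)"'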

How can I use the BigQuery REST API from the command line?

Attempting to make a plain GET request to one of the BigQuery REST APIs gives an error that looks like this:
curl https://www.googleapis.com/bigquery/v2/projects/$PROJECT_ID/jobs/$JOBID
Output:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "required",
"message": "Login Required",
"locationType": "header",
"location": "Authorization",
...
What is the correct way to invoke one of the REST APIs from the command-line, such as the query or insert APIs? The API reference has a "Try this API", but the examples don't translate directly to something you can run from the command-line.
As a disclaimer, when working from the command-line, using the bq tool will usually be sufficient, or for more complex use cases, the BigQuery client libraries enable programming with BigQuery from multiple languages. It can still be useful sometimes to make plain requests to the REST APIs to see how certain APIs work at a low level, however.
First, make sure that you have installed the Google Cloud SDK. This should include the gcloud and bq command-line tools. If you haven't already, authorize your account by running this command from your terminal:
gcloud auth login
This should prompt you to log in and then give you an access code that you can paste into your terminal. (The exact process may change over time).
Now let's try a query using the BigQuery REST API, calling the jobs.query method. Modify this script with your own project name, which you can find from the Google Cloud Console, then paste the script into your terminal:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"kind\":\"bigquery#queryRequest\",\"useLegacySql\":false,\"query\":$QUERY}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries
If it worked, you should see output that looks like this:
{
"kind": "bigquery#queryResponse",
"schema": {
"fields": [
{
"name": "x",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "y",
"type": "STRING",
"mode": "NULLABLE"
}
]
},
"jobReference": {
"projectId": "<your project ID>",
"jobId": "<your job ID>"
},
"totalRows": "1",
"rows": [
{
"f": [
{
"v": "1"
},
{
"v": "foo"
}
]
}
],
"totalBytesProcessed": "0",
"jobComplete": true,
"cacheHit": false
}
If you haven't set up the bq command-line tool, you can use bq init from your terminal to do so. Once you have, you can try running the same query using it:
bq query --use_legacy_sql=False "SELECT 1 AS x, 'foo' AS y;"
You can also see the REST API requests that the bq tool makes by passing the --apilog= option:
bq --apilog= query --use_legacy_sql=False "SELECT [1, 2, 3] AS x;"
Now let's try an example using the jobs.insert method instead of the query API. Run this script, replacing YOUR_PROJECT_NAME with your project name:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"configuration\":{\"query\":{\"useLegacySql\":false,\"query\":${QUERY}}}}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs
Unlike the query API, which returned a response immediately, you will see a result that looks similar to this:
{
"kind": "bigquery#job",
"etag": "\"<etag string>\"",
"id": "<project name>:<job ID>",
"selfLink": "https://www.googleapis.com/bigquery/v2/projects/<project name>/jobs/<job ID>",
"jobReference": {
"projectId": "<project name>",
"jobId": "<job ID>"
},
"configuration": {
"query": {
"query": "SELECT 1 AS x, 'foo' AS y;",
"destinationTable": {
"projectId": "<project name>",
"datasetId": "<anonymous dataset>",
"tableId": "<anonymous table>"
},
"createDisposition": "CREATE_IF_NEEDED",
"writeDisposition": "WRITE_TRUNCATE",
"useLegacySql": false
}
},
"status": {
"state": "RUNNING"
},
"statistics": {
"creationTime": "<timestamp millis>",
"startTime": "<timestamp millis>"
},
"user_email": "<your email address>"
}
Notice the status:
"status": {
"state": "RUNNING"
},
If you want to check on the job now, you can use the jobs.get method. Similar to before, run this from your terminal, using the job ID from the output in the previous step:
PROJECT="YOUR_PROJECT_NAME"
JOB_ID="YOUR_JOB_ID"
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID
If the query is done, you'll get a response that indicates as much:
...
"status": {
"state": "DONE"
},
...
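If you would rather wait for the job to finish instead of checking by hand, you can poll jobs.get until the state becomes DONE (a minimal sketch, assuming jq is installed and PROJECT and JOB_ID are set as above):
# poll every 2 seconds until the job reports state DONE
while [ "$(curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID | jq -r '.status.state')" != "DONE" ]; do
sleep 2
done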
Finally, we can make a request to fetch the query results, also using the REST API.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries/$JOB_ID
The output will look similar to when we used the jobs.query method above:
{
"kind": "bigquery#getQueryResultsResponse",
"etag": "\"<etag string>\"",
"schema": {
"fields": [
{
"name": "x",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "y",
"type": "STRING",
"mode": "NULLABLE"
}
]
},
"jobReference": {
"projectId": "<project ID>",
"jobId": "<job ID>"
},
"totalRows": "1",
"rows": [
{
"f": [
{
"v": "1"
},
{
"v": "foo"
}
]
}
],
"totalBytesProcessed": "0",
"jobComplete": true,
"cacheHit": true
}

Orion notification complex payload

I'm trying to use Orion notification to send SMS with Plivo.
This is how I send an SMS directly with Plivo:
curl -X POST https://api.plivo.com/v1/Account/MAMDA5ZDJIMDM1/Message/ -L \
-u MAMDA5ZDJIM:YzhiNDJjODNhNDkxMjhiYTgxZD \
-H 'Content-Type: application/json' -d @- <<EOF
{
"src": "0039414141414",
"dst": "0039414747111",
"text": "test SMS"
}
EOF
How should I encode it in Orion? I tried:
curl localhost:1026/v2/subscriptions -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- <<EOF
{
"description": "A subscription to get info about WS_UPPA_Sensor2",
"subject": {
"entities": [
{
"id": "Sensor1",
"type": "SensingDevice"
}
],
"condition": {
"attrs": [
"temperature"
]
}
},
"notification": {
"httpCustom": {
"url": "https://api.plivo.com/v1/Account/MAMDA5ZDJIMDM1NZVMZD/Message/",
"headers": {
"Authorization": "Basic TUFNREE1WkRKSU1ETTFOWlZNWkQ6WXpoaU5ESmpPRE5oTkRreE1qaGlZVGd4WkRkaE5qYzNPV1ZsTnpZMA=="
},
"payload": "{%22src%22%3A%2200393806412092%22%2C%22dst%22%3A%2200393806412093%22%2C%22text%22%3A%22test%20SMS%20from%20Waziup%22}"
},
"attrs": [
"temperature"
]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
EOF
Is there another way than percent encoding?
URL encoding (which I understand is what you refer to as "percent encoding") is the only encoding that gets special treatment in custom notifications (the details are described as part of the Orion documentation).
In fact, taking into account that URL encoding is complete (I mean, any text can be expressed in URL-encoded terms), there is no need to add any other.
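If producing the percent-encoded payload by hand is the tedious part, it can be generated from the plain JSON on the command line, for instance with jq's @uri filter (a sketch that is not part of the answer above and assumes jq is available; the output is what goes into the subscription's "payload" field):
# percent-encode the Plivo message body for use in the httpCustom payload
printf '%s' '{"src":"0039414141414","dst":"0039414747111","text":"test SMS"}' | jq -sRr @uri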

Orion doesn't notify Cygnus

I followed the official documentation about Cygnus and Orion. All Generic Enablers are deployed correctly, without errors in their log files. But something strange happens: Orion never notifies Cygnus.
To test this mechanism I followed the example with Car entity provided in the official documentation.
My entity creation bash script:
(curl $1:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"contextElements": [
{
"type": "Car",
"isPattern": "false",
"id": "Car1",
"attributes": [
{
"name": "speed",
"type": "integer",
"value": "75"
},
{
"name": "fuel",
"type": "float",
"value": "12.5"
}
]
}
],
"updateAction": "APPEND"
}
EOF
My entity subscription bash script:
(curl $1:1026/v1/subscribeContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "Car",
"isPattern": "false",
"id": "Car1"
}
],
"attributes": [
"speed",
"oil_level"
],
"reference": "http://$2:5050/notify",
"duration": "P1M",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"speed"
]
}
],
"throttling": "PT1S"
}
EOF
My entity update bash script:
(curl $1:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"contextElements": [
{
"type": "Car",
"isPattern": "false",
"id": "Car1",
"attributes": [
{
"name": "speed",
"type": "integer",
"value": $2
}
]
}
],
"updateAction": "UPDATE"
}
EOF
Note: Orion responds to all requests.
After executing these scripts, Cygnus should receive the notified information from Orion and save it in the database, but nothing happens.
Neither the /var/log/cygnus/cygnus.log file nor the /var/log/contextBroker/contextBroker.log file reports any information about an Orion notification.
Note: If I use the notify.sh script provided in the official documentation, Cygnus works well and saves all data in the database.
Note: I have read about open-port problems in other questions, but those don't apply to my case.
EDIT 1
After I send the subscription to Orion, the response is:
{
"subscribeResponse": {
"duration": "P1M",
"subscriptionId": "563e12b4f4d8334d599753e0",
"throttling": "PT1S"
}
}
And when I update an entity, Orion returns:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "speed",
"type": "integer",
"value": ""
}
],
"id": "Car1",
"isPattern": "false",
"type": "Car"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
To GET entity from orion I used the following script:
(curl $1:1026/v1/queryContext -s -S --header 'Content-Type: application/json' \
--header 'Accept: application/json' -d @- | python -mjson.tool) <<EOF
{
"entities": [
{
"type": "Car",
"isPattern": "false",
"id": "Car1"
}
]
}
EOF
Response:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "fuel",
"type": "float",
"value": "12.5"
},
{
"name": "speed",
"type": "integer",
"value": "123"
}
],
"id": "Car1",
"isPattern": "false",
"type": "Car"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
Note: The speed value was updated successfully.
Taking into account the Fiware-Service and Fiware-ServicePath headers in the subscription request, the subscription has been made in the "/4wheels" service path of the "vehicles" service. However, the entity creation request doesn't use those headers, so the entity is created in the default service path ("/") of the default service. Thus, the subscription is not "covering" the entity, so updates to the entity do not trigger notifications.
One solution to the problem would be to create the entity in the same service and service path as the subscription, i.e. the "/4wheels" service path of the "vehicles" service.
Please check the Orion official documentation about the service and service path concepts.
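As a sketch of that solution (it is simply the creation script from the question with the two headers from the subscription added; the update script would need the same two headers so that updates hit the entity in "vehicles"/"/4wheels"):
(curl $1:1026/v1/updateContext -s -S --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'Fiware-Service: vehicles' --header 'Fiware-ServicePath: /4wheels' -d @- | python -mjson.tool) <<EOF
{
"contextElements": [
{
"type": "Car",
"isPattern": "false",
"id": "Car1",
"attributes": [
{
"name": "speed",
"type": "integer",
"value": "75"
},
{
"name": "fuel",
"type": "float",
"value": "12.5"
}
]
}
],
"updateAction": "APPEND"
}
EOF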