I'm having difficulties getting the accumulator server to work. I started it, but it produces no output when the OCB receives, for example, a new subscription. The process looks like this:
I start the accumulator server as described in the tutorial, from a freshly cloned OCB repository. In the console I get:
tmp#tmp-VirtualBox:~/fiware-orion/scripts$ ./accumulator-server.py --port 1028 --url /accumulate --host ::1 --pretty-print -v
verbose mode is on
port: 1028
host: ::1
server_url: /accumulate
pretty: True
https: False
Running on http://[::1]:1028/ (Press CTRL+C to quit)
After this, nothing at all happens. If I create a subscription (the most basic one from the tutorial), I get this response in the client I made the request from:
< HTTP/1.1 201 Created
< Connection: Keep-Alive
< Content-Length: 0
< Location: /v2/subscriptions/5ab5248e50bfc821d0a1b1e0
< Fiware-Correlator: 45df4ff6-2eb3-11e8-912c-0242ac110003
< Date: Fri, 23 Mar 2018 16:00:14 GMT
However, and this might be the culprit, the status of the subscription is set to failed (checked by listing all subscriptions and in Orion Context Explorer), and it cannot be changed, for instance to inactive. Everything else seems to be running as intended. The OCB is running as a Docker container on Lubuntu and is working really well. It might be my error, since I'm using Insomnia to communicate with the OCB and could have mixed something up, but the OCB responses say everything is all right. Any help will be appreciated.
EDIT:
The accumulator server is not responding. I get:
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 1028 failed: Connection refused
* Failed to connect to localhost port 1028: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 1028
after running the check command (curl -vvvv localhost:1028/accumulate).
Regarding the subscription, I POST this payload:
{
  "description": "A subscription to get info about Room1",
  "subject": {
    "entities": [
      {
        "id": "Room1",
        "type": "Room"
      }
    ],
    "condition": {
      "attrs": [
        "pressure"
      ]
    }
  },
  "notification": {
    "http": {
      "url": "http://localhost:1028/accumulate"
    },
    "attrs": [
      "temperature"
    ]
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 5
}
to localhost:1026/v2/subscriptions. Beforehand, the entities with their attributes and types are all right. After creating it, I GET all subscriptions and receive:
[
  {
    "id": "5ab7d819209f52528cc2faf7",
    "description": "A subscription to get info about Room1",
    "expires": "2040-01-01T14:00:00.00Z",
    "status": "failed",
    "subject": {
      "entities": [
        {
          "id": "Room1",
          "type": "Room"
        }
      ],
      "condition": {
        "attrs": [
          "pressure"
        ]
      }
    },
    "notification": {
      "timesSent": 1,
      "lastNotification": "2018-03-25T17:10:49.00Z",
      "attrs": [
        "temperature"
      ],
      "attrsFormat": "normalized",
      "http": {
        "url": "http://localhost:1028/accumulate"
      },
      "lastFailure": "2018-03-25T17:10:49.00Z"
    },
    "throttling": 5
  }
]
I guess it fails because it could not deliver the notification, but I'm not sure.
I see two problems here.
First, the accumulator is not working. It may be a weird networking problem that combines an IPv4 name lookup (i.e. curl localhost:1028/accumulate is resolved by the OS as curl 127.0.0.1:1028/accumulate) with an accumulator listening only on the IPv6 interface (i.e. only on ::1, not on 127.0.0.1). I understand you are running the curl command on the same host where the accumulator is listening, aren't you?
My recommendation is to play with the accumulator's --host parameter (e.g. --host 127.0.0.1) and use a direct IP in the curl command in order to make it work.
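For instance, something like this, reusing the tutorial values shown above:

# Start the accumulator bound to the IPv4 loopback address
./accumulator-server.py --port 1028 --url /accumulate --host 127.0.0.1 --pretty-print -v

# In another terminal, check it using an explicit IPv4 address
curl -v http://127.0.0.1:1028/accumulate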
The second problem is that you are using localhost as the notification endpoint:
"url": "http://localhost:1028/accumulate"
This means port 1028 inside the Docker container where Orion is running. However, as far as I understand, your accumulator server runs outside the container, on the container's host. Thus, you should use an IP that allows you to reach the host from the container (and ensure no network traffic blocker is in place, e.g. a firewall). So your question here translates to "how to reach the Docker container's host from a Docker container" (I'm not sure of the answer, but there should be plenty of literature about the topic out there :)
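For example (just a sketch; 172.17.0.1 is only the usual default docker0 gateway address on Linux, so check your own setup):

# On the host, find an address that is reachable from inside the container,
# for instance the docker0 bridge address:
ip addr show docker0

# Start the accumulator listening on that address:
./accumulator-server.py --port 1028 --url /accumulate --host 172.17.0.1 --pretty-print -v

# And use that host address, not localhost, in the subscription:
#   "url": "http://172.17.0.1:1028/accumulate"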
The accumulator server needs to run on an available physical interface. To put it simply, interactions over the loopback interface with an Orion Context Broker running as a Docker container are almost impossible, certainly once virtualization of the host comes into play (as in my situation).
Available interfaces can be checked on Linux using
ip addr
After choosing one that matches our requirements, we run the accumulator as described before, but with its --host set to the address we chose. Then we create the subscription in the OCB using the same address we used when launching the accumulator server, and we are good to go; communication works.
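For illustration (192.168.1.23 is just a placeholder; use whatever address ip addr reports for the interface you chose):

# 1. List interfaces and pick a non-loopback address, e.g. 192.168.1.23
ip addr

# 2. Start the accumulator on that address
./accumulator-server.py --port 1028 --url /accumulate --host 192.168.1.23 --pretty-print -v

# 3. Use the same address in the subscription's notification URL:
#    "url": "http://192.168.1.23:1028/accumulate"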
Related
I am new to Apache APISIX and I want to configure routing in the Apache APISIX gateway. I first followed the official APISIX documentation, which uses "httpbin.org:80" as the upstream server, and that works for me. However, if I point the upstream at a server running on my localhost (127.0.0.1), it does not work; it throws a bad gateway error (502).
If anyone knows how to fix this issue, please let me know.
{
  "methods": [
    "GET"
  ],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}
The above routing configuration works for me. Here the API gateway (http://127.0.0.1:9080/anything/*) routes the request to http://httpbin.org:80/anything/*.
{
  "methods": [
    "GET"
  ],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:3001": 1
    }
  }
}
In the above configuration, I have configured the route to point to a service that is running on my local machine, and that port is 30001. Now if I call the API (http://127.0.0.1:9080/anything/*), it does not route my request to the server (http://127.0.0.1:3001/anything/*); instead it throws a bad gateway error.
const http = require('http')

const hostname = '127.0.0.1'
const port = 3001

const server = http.createServer((req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain')
  res.end('Hello World\n')
})

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`)
})
The above code is my backend server, which is running as the upstream server.
If you know how to debug the bad gateway error, kindly let me know.
Can you confirm which port you're using first, because you use 3001 in the code but 30001 in your description? Then try to access the backend directly, instead of going through the gateway, to check whether it is reachable.
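For example, assuming the Node server from your snippet really is listening on 3001:

curl -i http://127.0.0.1:3001/anything/foo

If that request fails as well, the problem is in the backend rather than in APISIX.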
I have faced the exact same issue.
I'm assuming you are deploying APISIX with docker-compose or docker (which is recommended in their official documentation).
The Docker applications are running on a Docker bridge network named apisix. This is why your localhost application is not reachable by APISIX.
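One way around it (just a sketch, not verified against your exact setup) is to point the upstream at an address of the Docker host instead of 127.0.0.1. On Docker Desktop the special name host.docker.internal resolves to the host; on Linux you can add it yourself (e.g. with extra_hosts: ["host.docker.internal:host-gateway"] in docker-compose) or use the bridge gateway IP such as 172.17.0.1:

{
  "methods": [
    "GET"
  ],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "host.docker.internal:3001": 1
    }
  }
}

If your backend itself runs in Docker, another option is to attach it to the same apisix network and use its container name and port in nodes.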
I'm creating a demo for Actions on Google.
When running the following command:
./gactions --verbose test --action_package action.json --project chatbot-36b55
I'm getting the following error:
Checking for updates...
Successfully fetched update metadata
Finished checking for updates -- no updates available
Pushing the app for the Assistant for testing...
POST /v2/users/me/previews/chatbot-36b55:updateFromAgentDraft?updateMask=previewActionPackage.actionPackage.actions%2CpreviewActionPackage.actionPackage.conversations%2CpreviewActionPackage.actionPackage.types%2CpreviewActionPackage.startTimestamp%2CpreviewActionPackage.endTimestamp HTTP/1.1
Host: actions.googleapis.com
User-Agent: Gactions-CLI/2.0.7 (linux; amd64; stable/6f4c996f8ee63dc5760c7728f674abe37bfe5fc4)
Content-Length: 329
Content-Type: application/json
Accept-Encoding: gzip
{"name":"users/me/previews/chatbot-36b55","previewActionPackage":{"actionPackage":{"actions":[{"fulfillment":{"conversati
onName":"HelloWorld"},"intent":{"name":"actions.intent.MAIN"},"name":"MAIN"}],"conversations":{"HelloWorld":{"name":"Hell
oWorld","url":"http://35.189.xx.xx/"}}},"name":"users/me/previews/chatbot-36b55"}}
Reading credentials from: creds.data
ERROR: Failed to test the app for the Assistant
ERROR: Request contains an invalid argument.
Field Violations:
# Field Description
1 URL is invalid 'http://35.189.xx.xx/'
2017/07/20 14:42:50 Server did not return HTTP 200
I just followed the steps to create the actions package.
This is my actions.json file:
{
  "actions": [
    {
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "HelloWorld"
      },
      "intent": {
        "name": "actions.intent.MAIN"
      }
    }
  ],
  "conversations": {
    "HelloWorld": {
      "name": "HelloWorld",
      "url": "http://35.189.xx.xx/"
    }
  }
}
Do I need to have HTTPS set up to test this? Does anyone know how I can get around it, if that is the issue?
The documentation at https://developers.google.com/actions/reference/rest/Shared.Types/ConversationFulfillment states for the url parameter:
The HTTPS endpoint for the conversation (HTTP is not supported).
Additionally, the URL must be accessible from the public Internet (you don't show the full IP address, for good reason, so I can't tell if this is true or not).
Either way, you may be able to use something like ngrok to create an HTTPS endpoint and secure tunnel to your 35.189.x.x host. This will give you a public DNS entry and HTTPS endpoint. See also https://developers.google.com/actions/tools/ngrok for some details about using ngrok with Actions.
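For example (a sketch; this assumes your fulfillment listens on port 80 on that host, and the ngrok subdomain below is made up):

# On the machine running your fulfillment server:
ngrok http 80

# ngrok prints a forwarding line similar to:
#   Forwarding   https://abc123.ngrok.io -> localhost:80

Then put the generated HTTPS URL in the conversation url of your actions file, e.g. "url": "https://abc123.ngrok.io/", and push the package again.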
Based on this post (Fiware - Context broker: Issue with NGSIv2 subscriptions), a few months ago it was under discussion whether Cygnus supported NGSIv2 or not. It was commented that the feature was scheduled but not yet implemented.
Question: is it implemented already? How can we know?
My confusion remains because when I create a subscription using NGSIv2, a successfully-created message comes back (i.e. 201), but I still cannot see my subscription record in Orion.
I'm creating my subscription like this:
Content-Type:application/json
Accept: application/json
Fiware-Service: test
Fiware-ServicePath: /device
{
  "description": "One subscription to rule them all",
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "smarthphone"
      }
    ],
    "condition": {
      "attrs": [ "battery" ],
      "expression": { "q": "battery!=0" }
    }
  },
  "notification": {
    "http": {
      "url": "<MY COSMOS IP>:5050/notify"
    },
    "attrs": [ "battery" ]
  },
  "expires": "2120-04-05T14:00:00.00Z",
  "throttling": 1
}
And this is what I get:
Connection: Keep-Alive
Content-Length: 0
Location: /v2/subscriptions/587c62fcfebdbe5f74bad77b
Fiware-Correlator: f9a96bd0-dbb1-11e6-93ea-0242ac110004
Date: Mon, 16 Jan 2017 06:06:52 GMT
But when I retrieve that subscription, it doesn't show up:
.../v2/subscriptions/587c62fcfebdbe5f74bad77b
Any hint on what I am doing wrong?
"Cygnus does not support NGSIv2" means no NGSIv2 notifications are accepted in the service port (by default, TCP/5050). For the time being, only NGSIv1 notifications are accepted.
Nevertheless, what we have added to Cygnus API is a convenience operation about subscribing to Orion, either using NGSIv1 or NGSIv2 subscription format. I guess that's what you have tested (without success). Internally, such an operation implements just a forwarding (to the given Orion endpoint) of the given subscription. If Cygnus API says everything went OK, it is because Orion said everuything went OK.
Anyway, I'll edit this post once I perform a test by my side. In the meantime, you can ignore Cygnus API and use Orion API directly.
If you have entities created with the headers Fiware-Service: test and
Fiware-ServicePath: /device, you also need to use those headers in your other requests (GET, PUT, etc.).
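For instance, retrieving the subscription again could look like this (just a sketch; adjust the host and port to your Orion endpoint):

curl -X GET 'http://<orion-host>:1026/v2/subscriptions/587c62fcfebdbe5f74bad77b' \
  -H 'Fiware-Service: test' \
  -H 'Fiware-ServicePath: /device' \
  -H 'Accept: application/json'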
So I have a service like the following:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "monitoring-grafana",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/monitoring-grafana",
    "uid": "be0f72b2-c482-11e5-a22c-fa163ebc1085",
    "resourceVersion": "143360",
    "creationTimestamp": "2016-01-26T23:15:51Z",
    "labels": {
      "kubernetes.io/cluster-service": "true",
      "kubernetes.io/name": "monitoring-grafana"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 3000,
        "nodePort": 0
      }
    ],
    "selector": {
      "name": "influxGrafana"
    },
    "clusterIP": "192.168.182.76",
    "type": "ClusterIP",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}
However, whenever I try to access it through the proxy API, it always fails with this response.
http://10.32.10.44:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
Error: 'dial tcp 192.168.182.132:3000: getsockopt: no route to host'
Trying to reach: 'http://192.168.182.132:3000/'
This happens with all of my services, not just the one posted.
What could be going wrong? Is something not installed?
Looking at the error you posted, it seems that traffic cannot be routed from your master to the Docker subnet of your node. The easiest way to validate this is to open a shell on your master and perform a request against podIP:daemonPort: curl -I http://192.168.182.132:3000
Each node in your cluster should be able to communicate with every other node, and every Docker subnet should be routable. For most deployments you will need to set up an extra network fabric to make this happen, such as flannel or Weave.
Take a look at Getting started from Scratch >> Network
Something else is odd. The cluster IP used by your service (192.168.182.76) and the pod IP of the endpoint (192.168.182.132) seem to be in the same subnet. However, you need 3 different subnets (an illustrative split is sketched after this list):
one for the hosts
one for the Docker bridges (--bip flag of Docker)
one for the service (--service-cluster-ip-range= of the API server)
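An illustrative split could look like this (the concrete ranges and flags below are only examples, not taken from your deployment):

# hosts (node addresses):      10.32.10.0/24
# Docker bridges (per node):   dockerd --bip=172.17.1.1/24   (a different range on each node)
# services (cluster IPs):      kube-apiserver --service-cluster-ip-range=10.254.0.0/16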
In my case I didn't realize that I had an active firewall that was simply blocking access to the ports needed by Kubernetes. A quick and crude solution is to run systemctl stop firewalld on the master and all minion nodes; of course, you can just open the needed ports instead.
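For reference, it could look like this (just a sketch; the exact ports depend on your Kubernetes version and components):

# Crude: stop the firewall on the master and every minion node
systemctl stop firewalld

# Or open only the ports you need, for example:
firewall-cmd --permanent --add-port=8080/tcp    # API server (insecure port, as used above)
firewall-cmd --permanent --add-port=10250/tcp   # kubelet
firewall-cmd --reload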
I am very new to Kurento. I went through its JSON-RPC documentation at this link:
http://www.kurento.org/docs/5.0.3/mastering/kurento_protocol.html
1) I have installed a local Kurento server which runs on port 8888.
2) I used a tool called wscat to establish a connection to the Kurento WebSocket.
3) I tried to connect to the Kurento server with the command below:
wscat -c ws://localhost:8888/kurento
After that I got the connected prompt from the server.
Following the Kurento protocol documentation linked above, I used the request JSON below:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "create",
  "params": {
    "type": "PlayerEndPoint",
    "creationParams": {
      "pipeline": "6829986",
      "uri": "http://host/app/video.mp4"
    },
    "sessionId": "c93e5bf0-4fd0-4888-9411-765ff5d89b93"
  }
}
But according to the docs, the response I should get after sending this request is like this:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "value": "442352747",
    "sessionId": "c93e5bf0-4fd0-4888-9411-765ff5d89b93"
  }
}
But I am getting:
{
  "error": {
    "code": -32603,
    "message": "Unexpected error while processing method: Factory PlayerEndPoint not found"
  },
  "id": 1,
  "jsonrpc": "2.0"
}
If I am not wrong, the above request JSON is used to create a new media pipeline for a player endpoint, which is used to stream http://host/app/video.mp4.
Is there any problem in my request JSON object, or do I have to do something before sending this request?
Please help me.
You have several problems. The first is that PlayerEndpoint is not correctly spelled (note the lower-case "p": PlayerEnd-p-oint). The second is that you first need to create a MediaPipeline before you can create a PlayerEndpoint or any other media element.
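Roughly, the sequence would look like this (a sketch based on the request format above; the ids and sessionId are placeholders, and parameter names may differ slightly between protocol versions). First, create the pipeline:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "create",
  "params": {
    "type": "MediaPipeline",
    "creationParams": {}
  }
}

Then create the PlayerEndpoint (note the spelling), reusing the pipeline id and sessionId returned in the first response:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "create",
  "params": {
    "type": "PlayerEndpoint",
    "creationParams": {
      "pipeline": "<pipeline id from the first response>",
      "uri": "http://host/app/video.mp4"
    },
    "sessionId": "<sessionId from the first response>"
  }
}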
If you are new to Kurento, my recommendation is to use the official Kurento client implementations (currently available in Java and JavaScript). If you want to create your very own Kurento client, you'll need to read the documentation carefully, because there are a lot of details you'll need to manage (e.g. the distributed garbage collector, the WebSocket reconnection mechanisms, etc.).