I am new to Apache APISIX and I want to configure routing in the Apache APISIX gateway. I first followed the official APISIX documentation, which uses "httpbin.org:80" as the upstream server, and that works for me. However, if I set the upstream to a new upstream server running on my localhost (127.0.0.1), it does not work; it throws a Bad Gateway error (502).
If anyone knows how to fix this issue, please let me know.
{
  "methods": [
    "GET"
  ],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}
The above routing configuration works for me: the API gateway (http://127.0.0.1:9080/anything/*) routes the request to http://httpbin.org:80/anything/*.
{
  "methods": [
    "GET"
  ],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:3001": 1
    }
  }
}
In the above configuration, I have configured routing to a service that is running on my local machine, and that port is 30001. Now if I call the API (http://127.0.0.1:9080/anything/*), it does not route my request to the server (http://127.0.0.1:3001/anything/*); instead it throws a Bad Gateway error.
const http = require('http')

const hostname = '127.0.0.1'
const port = 3001

const server = http.createServer((req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'text/plain')
  res.end('Hello World\n')
})

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`)
})
The code above is my backend server, which runs as the upstream server.
If you know how to debug the Bad Gateway error, kindly let me know.
Can you first confirm which port you're using? You mention 3001 in the code but 30001 in your description. Then try to access the service directly, instead of going through the gateway, to check whether it's reachable.
I have faced the exact same issue.
I'm assuming you are deploying APISIX with docker-compose or docker (which is recommended in their official documentation).
The Docker containers run on a Docker bridge network named apisix. This is why your localhost application is not reachable by APISIX: inside the container, 127.0.0.1 refers to the container itself, not to your host machine.
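If that is your setup, a common workaround (a sketch, assuming Docker Desktop, or a recent Docker Engine where the `host-gateway` alias is available) is to point the upstream at `host.docker.internal`, which resolves to the host from inside the container, instead of `127.0.0.1`:

```json
{
  "methods": ["GET"],
  "host": "example.com",
  "uri": "/anything/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "host.docker.internal:3001": 1
    }
  }
}
```

On plain Linux you may need to add `extra_hosts: ["host.docker.internal:host-gateway"]` to the APISIX service in docker-compose.yml, or use your host's LAN IP directly. Also make sure the Node server listens on 0.0.0.0 rather than 127.0.0.1, since a loopback-only bind cannot be reached from another network namespace.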
I am trying to implement Socket.IO in an application. However, the NestJS WebSocketGateway doesn't seem to get triggered when the port isn't defined. When using a different port for WebSocketGateway, the frontend connects successfully.
This works:
@WebSocketGateway(8001, {
  path: '/socket-chat',
  cors: {
    origin: '*',
  },
})
This doesn't work:
@WebSocketGateway({
  path: '/socket-chat',
  cors: {
    origin: '*',
  },
})
When using the same port as the API, the middleware gets triggered, and it seems to be working and successfully passing the request. I would like to use the same port, but I'm not sure how. Thank you :)
I want to deploy an IDAS (FIWARE Backend Device Manager), i.e. an IOTA instance, that will communicate with and send data to an already existing Orion Context Broker instance running in a different virtual machine from the one hosting IDAS. Is that possible, or is it necessary for the two services to be in the same virtual machine?
I am using IoTAgent-JSON (I think it's version 1.6.2) for MQTT transport.
This is the config.js file (I have already replaced the contextBroker host with the host of my Orion Context Broker; as you can see, it was "localhost" before):
var config = {};

config.mqtt = {
    host: 'localhost',
    port: 1883,
    thinkingThingsPlugin: true
};

config.iota = {
    logLevel: 'DEBUG',
    timestamp: true,
    contextBroker: {
        host: '147.27.60.182',
        port: '1026'
    },
    server: {
        port: 4041
    },
    deviceRegistry: {
        type: 'mongodb'
    },
    mongodb: {
        host: 'localhost',
        port: '27017',
        db: 'iotagentjson'
    },
    types: {},
    service: 'howtoService',
    subservice: '/howto',
    // ... (rest of the file omitted)
};
IoTA endpoints:
http://147.27.60.202:5351/iot/services
(Fiware-Service: openiot, Fiware-ServicePath: /, X-Auth-Token: [TOKEN])
http://147.27.60.202:4041/iot/devices/
(Fiware-Service: tourguide, Fiware-ServicePath: /)
My Orion Context Broker (where I want to send data) endpoint:
http://147.27.60.182:1026/v2
P.S.: I have tried to change the mongodb host, too.
Image: how the service runs
Yes, you can have Orion and one or more IOTAs running in different virtual machines. The only requirement is mutual interconnection: IOTA needs access to the Orion endpoint, and Orion needs access to the IOTA endpoint.
Check the contextBroker.url (or, alternatively, contextBroker.host and contextBroker.port) and providerUrl IOTA configuration parameters in the IOTA documentation.
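As a sketch, using the IPs already shown in the question, the relevant part of config.js might look like this (providerUrl is an IOTA configuration parameter, but check your agent version's documentation for the exact shape):

```javascript
var config = {};

config.iota = {
  // Orion Context Broker endpoint, as reachable *from the IOTA host*
  contextBroker: {
    host: '147.27.60.182',
    port: '1026'
  },
  // The IOTA's own northbound endpoint, as reachable *from the Orion host*.
  // Orion uses this URL to call back to the agent, so it must not be localhost.
  providerUrl: 'http://147.27.60.202:4041',
  server: {
    port: 4041
  }
};

module.exports = config;
```

The key point is that neither side can use "localhost" to refer to the other machine; each URL must be resolvable and routable from the peer that consumes it.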
I'm having difficulties making the accumulator server work. I started it, but it doesn't show any results when the OCB receives, for example, a new subscription. The process looks like this:
I start the accumulator server as described in the tutorial, from a freshly cloned repo of OCB. As a result I get in the console:
tmp#tmp-VirtualBox:~/fiware-orion/scripts$ ./accumulator-server.py --port 1028 --url /accumulate --host ::1 --pretty-print -v
verbose mode is on
port: 1028
host: ::1
server_url: /accumulate
pretty: True
https: False
Running on http://[::1]:1028/ (Press CTRL+C to quit)
And after this, nothing at all happens. If I make a subscription (the most basic one from the tutorial), I get a response in the client from which I made the request:
< HTTP/1.1 201 Created
< Connection: Keep-Alive
< Content-Length: 0
< Location: /v2/subscriptions/5ab5248e50bfc821d0a1b1e0
< Fiware-Correlator: 45df4ff6-2eb3-11e8-912c-0242ac110003
< Date: Fri, 23 Mar 2018 16:00:14 GMT
However, and this might be the culprit, the status of the subscription is set to failed (checked by listing all subscriptions and in Orion Context Explorer), and it cannot be changed to inactive, for instance. Everything else is running as intended (I guess). OCB is running as a Docker container installed on Lubuntu and is working really well. It might be my error, because I'm using Insomnia to communicate with OCB and could have mixed something up, but the response from OCB says that everything is all right. Any help will be appreciated.
EDIT:
The accumulator server is not working. I got:
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 1028 failed: Connection refused
* Failed to connect to localhost port 1028: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 1028
after running the check command (curl -vvvv localhost:1028/accumulate).
Regarding making the subscription, I POST this payload:
{
  "description": "A subscription to get info about Room1",
  "subject": {
    "entities": [
      {
        "id": "Room1",
        "type": "Room"
      }
    ],
    "condition": {
      "attrs": [
        "pressure"
      ]
    }
  },
  "notification": {
    "http": {
      "url": "http://localhost:1028/accumulate"
    },
    "attrs": [
      "temperature"
    ]
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 5
}
to the localhost:1026/v2/subscriptions URL. Beforehand, the entities and their attributes and types are all right. After creating it, I request a GET on all subscriptions and get:
[
  {
    "id": "5ab7d819209f52528cc2faf7",
    "description": "A subscription to get info about Room1",
    "expires": "2040-01-01T14:00:00.00Z",
    "status": "failed",
    "subject": {
      "entities": [
        {
          "id": "Room1",
          "type": "Room"
        }
      ],
      "condition": {
        "attrs": [
          "pressure"
        ]
      }
    },
    "notification": {
      "timesSent": 1,
      "lastNotification": "2018-03-25T17:10:49.00Z",
      "attrs": [
        "temperature"
      ],
      "attrsFormat": "normalized",
      "http": {
        "url": "http://localhost:1028/accumulate"
      },
      "lastFailure": "2018-03-25T17:10:49.00Z"
    },
    "throttling": 5
  }
]
I guess it fails because it could not send the notification, but I'm not sure.
I see two problems here.
First, the accumulator is not working. Maybe it is a weird networking problem which combines an IPv4 name lookup (i.e. curl localhost:1028/accumulate is resolved as curl 127.0.0.1:1028/accumulate by the OS) with an accumulator listening only on the IPv6 interface (i.e. only on ::1 but not on 127.0.0.1). I understand you are running the curl command on the same host where the accumulator is listening, aren't you?
My recommendation is to play with the --host accumulator parameter (e.g. --host 127.0.0.1) and use a direct IP in the curl command in order to make it work.
The second problem is due to you using localhost as the notification endpoint:
"url": "http://localhost:1028/accumulate"
This means port 1028 inside the Docker container where Orion is running. However, as far as I understand, your accumulator server runs outside the container, on the container's host. Thus, you should use an IP which allows you to reach the host from the container (and ensure no network traffic blocker is in place, e.g. a firewall). So your question here translates to "How to reach the Docker container's host from a Docker container" (I'm not sure of the answer, but there should be plenty of literature about the topic out there :)
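For example, if the container's host were reachable from inside the container at the (hypothetical) address 192.168.1.10, the notification block of the subscription payload would become:

```json
"notification": {
  "http": {
    "url": "http://192.168.1.10:1028/accumulate"
  },
  "attrs": [
    "temperature"
  ]
}
```

The 192.168.1.10 address is just a placeholder; substitute whatever IP of your host is actually reachable from inside the Orion container (running `ip addr` on the host will list the candidates).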
The accumulator server needs to run on an available physical interface. To put it simply, interactions over the loopback interface with an Orion Context Broker run as a Docker container are almost impossible, certainly once virtualization of the host comes into play (as in my situation).
Available interfaces can be checked in linux using
ip addr
After choosing one that matches our requirements, we run the accumulator as described before, but with that interface's IP address as the host. Then we add the subscription to OCB using the address used while launching the accumulator server, and we are good to go; communication works.
I'm creating a demo for Actions on Google.
When running the following command:
./gactions --verbose test --action_package action.json --project chatbot-36b55
I'm getting the following error:
Checking for updates...
Successfully fetched update metadata
Finished checking for updates -- no updates available
Pushing the app for the Assistant for testing...
POST /v2/users/me/previews/chatbot-36b55:updateFromAgentDraft?updateMask=previewActionPackage.actionPackage.actions%2CpreviewActionPackage.actionPackage.conversations%2CpreviewActionPackage.actionPackage.types%2CpreviewActionPackage.startTimestamp%2CpreviewActionPackage.endTimestamp HTTP/1.1
Host: actions.googleapis.com
User-Agent: Gactions-CLI/2.0.7 (linux; amd64; stable/6f4c996f8ee63dc5760c7728f674abe37bfe5fc4)
Content-Length: 329
Content-Type: application/json
Accept-Encoding: gzip
{"name":"users/me/previews/chatbot-36b55","previewActionPackage":{"actionPackage":{"actions":[{"fulfillment":{"conversationName":"HelloWorld"},"intent":{"name":"actions.intent.MAIN"},"name":"MAIN"}],"conversations":{"HelloWorld":{"name":"HelloWorld","url":"http://35.189.xx.xx/"}}},"name":"users/me/previews/chatbot-36b55"}}
Reading credentials from: creds.data
ERROR: Failed to test the app for the Assistant
ERROR: Request contains an invalid argument.
Field Violations:
# Field Description
1 URL is invalid 'http://35.189.xx.xx/'
2017/07/20 14:42:50 Server did not return HTTP 200
I just followed the steps to create the actions package.
This is my actions.json file:
{
  "actions": [
    {
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "HelloWorld"
      },
      "intent": {
        "name": "actions.intent.MAIN"
      }
    }
  ],
  "conversations": {
    "HelloWorld": {
      "name": "HelloWorld",
      "url": "http://35.189.xx.xx/"
    }
  }
}
Do I need to have HTTPS set up to test this? Does anyone know how I can get around it, if that is the issue?
The documentation at https://developers.google.com/actions/reference/rest/Shared.Types/ConversationFulfillment states, for the url parameter:
The HTTPS endpoint for the conversation (HTTP is not supported).
Additionally, the URL must be accessible from the public Internet (you don't show the full IP address, for good reason, so I can't tell whether this is the case).
Either way, you may be able to use something like ngrok to create an HTTPS endpoint and secure tunnel to your 35.189.x.x host. This will give you a public DNS entry and an HTTPS endpoint. See also https://developers.google.com/actions/tools/ngrok for some details about using ngrok with Actions.
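For example, if ngrok gave you a (hypothetical) tunnel address of https://abc123.ngrok.io forwarding to your host, the conversations section of the action package would become:

```json
"conversations": {
  "HelloWorld": {
    "name": "HelloWorld",
    "url": "https://abc123.ngrok.io/"
  }
}
```

The hostname here is a placeholder; use whatever HTTPS URL ngrok actually prints when you start the tunnel.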
I would like to know how I can check whether my WildFly server is running and the web app has been deployed.
Currently I only check whether the server is running.
public static boolean checkServerAvailable() {
    try {
        String url = Constants.URL_APP + ":" + Constants.APPSERVER_PORT;
        HttpURLConnection.setFollowRedirects(false);
        // note: you may also need
        // HttpURLConnection.setInstanceFollowRedirects(false)
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("HEAD");
        return con.getResponseCode() == HttpURLConnection.HTTP_OK;
    } catch (Exception e) {
        return false;
    }
}
But besides knowing that the WildFly server is up, I also need to know whether my web app deployed successfully.
To start with, you could add your webapp's path to the URL you're building above. Instead of connecting to, for example, http://localhost:8080/ and looking for an HTTP 200 response, you could connect to http://localhost:8080/yourApp and do the same. That implies you have something at the root context to respond.
An arguably better solution would be to have a "heartbeat" or "status" service in your web application. This would be something like http://localhost:8080/yourApp/status. The method or service could just return a 200 implying that the service is up. But it could also really check that your application is healthy. For example, it could check available memory or make sure that the database is up or a multitude of other things. The code you show would just use the full URL of the status service.
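A minimal sketch of that idea, assuming a hypothetical /yourApp/status endpoint in your application that returns 200 when everything it checks is healthy:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class AppStatusChecker {

    // Returns true only if the given (hypothetical) status endpoint answers 200 OK.
    // Any connection failure, timeout, malformed URL, or non-200 response
    // counts as "not available".
    public static boolean checkAppDeployed(String statusUrl) {
        try {
            HttpURLConnection con = (HttpURLConnection) new URL(statusUrl).openConnection();
            con.setRequestMethod("HEAD");
            con.setConnectTimeout(2000);
            con.setReadTimeout(2000);
            return con.getResponseCode() == HttpURLConnection.HTTP_OK;
        } catch (Exception e) {
            return false;
        }
    }
}
```

You would call it with the full status URL, e.g. checkAppDeployed("http://localhost:8080/yourApp/status"); the endpoint name and context path are assumptions for illustration, not something WildFly provides out of the box.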
You can use the management API provided by WildFly. The API is described here for different versions of WildFly.
For WildFly9 - See https://wildscribe.github.io/Wildfly/9.0.0.Final/deployment/index.html
You can use the following URLs to check the status of a deployment. You do need a management user for authentication.
Standalone Mode:
http://localhost:9990/management/deployment/<deployment_name>
For domain mode:
http://localhost:9990/management/host/<host_name>/server/<server_name>/deployment/<deployment_name>
Sample JSON response (assuming you deployed an EAR file with some sub-deployments):
{
  "content": [{
    "hash": {
      "BYTES_VALUE": "2gH7ddtUxsbzBJEJ/z4T1jYERRU="
    }
  }],
  "enabled": true,
  "enabled-time": 1468861076770,
  "enabled-timestamp": "2016-07-18 18:57:56,770 CEST",
  "name": "myapplication.ear",
  "owner": null,
  "persistent": true,
  "runtime-name": "myapplication.app.ear",
  "subdeployment": {
    "myapplication.impl.jar": null,
    "myapplication.web.war": null
  },
  "subsystem": null
}
Sample request using curl:
curl --digest -D - http://localhost:9990/management --header "Content-Type: application/json" -d '{"operation":"read-resource", "include-runtime":"true", "address":["deployment","myapplication.app.ear"] }' -u user:password
I was looking for a way to check whether WildFly is running, and here is what I found:
systemctl status wildfly
These were also useful:
systemctl stop wildfly
systemctl restart wildfly
systemctl start wildfly