I have Jupyter running on a Debian server on port 8888.
I want to make it easier to connect to my server, so I have a Node.js
app running that forwards requests for jupyter.mydomain.com:80 to
port 8888, and other domains to other ports.
This way I don't have to remember the ports of the different apps, and instead
can refer to the server by different DNS names. All the different names
are set up as aliases in the DNS server.
Jupyter itself works this way, but the WebSockets that report the results of
calculations do not, due to a security error.
Is there a setting that will get this to work?
Regards
Andreas
node-http-proxy is a Node.js proxy that supports WebSockets. Your node app that proxies the HTTP requests must also proxy the WebSocket connections.
JupyterHub is a multi-user server for spawning and authenticating single-user notebook servers, and it uses configurable-http-proxy, a subclass of node-http-proxy that adds live configuration, to relay connections to notebooks. If you use NHP or CHP for your proxy app, the WebSockets should work.
From the node-http-proxy readme:
You can activate the websocket support for the proxy using ws:true
in the options.

var httpProxy = require('http-proxy');

//
// Create a proxy server for websockets
//
httpProxy.createServer({
  target: 'ws://localhost:9014',
  ws: true
}).listen(8014);
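For the setup described in the question (routing by hostname, with several domains mapping to different local ports), a minimal sketch might look like the following. This is an illustration, not your actual app; the hostnames and ports are assumptions taken from the question. The important part is the 'upgrade' handler, which carries the WebSocket traffic that is currently failing:

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// Hostname-to-backend map (example values from the question)
var hosts = {
  'jupyter.mydomain.com': 'http://localhost:8888'
  // 'otherapp.mydomain.com': 'http://localhost:9000', ...
};

var server = http.createServer(function (req, res) {
  var target = hosts[req.headers.host];
  if (target) {
    proxy.web(req, res, { target: target });
  } else {
    res.writeHead(502);
    res.end('Unknown host');
  }
});

// Without this handler, plain HTTP works but the kernel's
// WebSocket connections fail, as described in the question.
server.on('upgrade', function (req, socket, head) {
  var target = hosts[req.headers.host];
  if (target) {
    proxy.ws(req, socket, head, { target: target });
  } else {
    socket.destroy();
  }
});

server.listen(80);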
Related
I am using Apache Tomcat to host web pages that can be accessed by authenticated users, and a UDP socket has been opened on port 14550, to which devices send a stream of communication messages. The system works fine on the local network. I tried to host this on OpenShift and later found that OpenShift does not allow external UDP communication. Now I am considering an Amazon EC2 instance or a new VM in Azure or GCP. I would like to know whether there will be any issue in using the sockets from my application. Thank you in advance.
On AWS EC2 this is allowed; you just need to configure your Security Group to permit the specific web traffic, and UDP traffic can be allowed there as well.
I was also looking for a possible workaround for this issue, but it is quite easy regardless of the platform or language your socket program uses on AWS EC2. I am using Node.js behind nginx in my case, but this should work for all platforms.
Configure the Security Group
In the AWS console, open the EC2 tab.
Select the relevant region and click on Security Groups.
You should have a default security group if you have launched an Elastic Beanstalk instance in that region for your app.
Click the Actions button at the top and select Edit inbound rules.
In the Type column, select All UDP, or set a Custom UDP rule to listen on your socket's port.
Enter the port of your UDP server there, e.g. 2020.
And that's it!
Note: If something is not working, check the "Events" tab in the Beanstalk application's environment to find out what went wrong.
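For reference, a minimal sketch of the kind of listener such a rule exposes (Node.js; port 2020 is just the example value used above):

var dgram = require('dgram');

// Minimal UDP listener; bind to all interfaces so traffic
// allowed by the Security Group can actually reach it.
var server = dgram.createSocket('udp4');

server.on('message', function (msg, rinfo) {
  console.log('got "%s" from %s:%d', msg, rinfo.address, rinfo.port);
});

server.on('listening', function () {
  var address = server.address();
  console.log('UDP server listening on %s:%d', address.address, address.port);
});

server.bind(2020, '0.0.0.0');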
I am currently developing a service application that pulls data from Mongo and returns it to consumers. There is a layer of authentication involved, and I am using Heroku to host the service. Mongo was being hosted on MongoLabs, but there were significant performance concerns, so we have moved to hosting Mongo on one of our own cloud servers. We want to be able to secure access to Mongo using a firewall, white-listing the IP address of the service app on Heroku.
There are a couple of issues with this.
Issues
Well, at least these are the main ones...
Heroku, while providing some nice features like easily managing cluster settings, software upgrades, etc., draws IP addresses from a pool. While the DNS value of an application's URL may not change, the underlying IP address can and will change.
To be better secured, mongo-server01 is placed behind a firewall that requires rules with static IP addresses to be added in order to allow access.
Since Heroku can't provide static ip addresses, we need to consider options for how Heroku can access mongo-server01 while still protecting the data it hosts.
Static IP addresses for outbound requests
There seem to be a couple of options, specifically for Heroku. Fixie and QuotaGuard Static both seem to serve that function, but they appear to be geared toward HTTP and HTTPS communication only (perhaps not even HTTPS).
Mongo doesn't use HTTP; it uses its own network protocol over port 27017 by default:
https://groups.google.com/forum/embed/#!topic/mongodb-user/eX_RIv2cZVw
Does this mean these proxies won't work for calls to Mongo? In theory, there's no reason a proxy must be limited to HTTP or HTTPS requests. That said, there doesn't seem to be any way to get into these Heroku plugins and configure the proxy to use a different port or to handle Mongo's particular protocol.
If we could get into the proxy, perhaps we could put an additional set of SSH keys in place so the SSH tunnel chain could continue on to mongo-server01. But there doesn't seem to be any way to SSH to these proxies or access their configuration through the plugin dashboards.
The question (finally!)
How can one connect from Heroku to a firewalled host to get data from MongoDB? Are there proxies that can be used to achieve this?
The simple approach, white-listing the app's IP address, won't work because Heroku applications don't use static IP addresses.
Using a proxy: the Heroku proxy plugins don't know how to proxy the MongoDB protocol, and SSH keys can't be installed on the proxy for SSH tunneling.
What can be done to get a connection without opening up the Mongo server to the world?
I spoke with the folks at QuotaGuard and they do have something that does the trick.
we offer a SOCKS proxy which should do the trick as it proxies at the TCP layer
https://devcenter.heroku.com/articles/quotaguardstatic#socks-proxy-setup
I did need to make a simple change to bin/qgsocksify
#SOCKS_DIR="$(dirname $(dirname $(readlink -f ${BASH_SOURCE[0]})))/vendor/dante
SOCKS_DIR="${HOME}/vendor/dante"
After that, the proxy worked like a charm.
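Once that is in place, the usual pattern per the QuotaGuard article linked above is to wrap the dyno's start command in the wrapper script, so that all outbound TCP, including the MongoDB connection on port 27017, goes through the static IPs. A hypothetical Procfile line for a Node.js app (the server.js name is a placeholder):

web: bin/qgsocksify node server.js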
I noticed that with Postgres and other databases, the database itself runs as a local server.
For example, mine is running on localhost:5432.
Curiously, I went to my web browser and tried typing in that address to see what I'd get, but I got a response that "This Web Page is Not Available".
I also tried things like localhost:5432/mydata, but to no avail.
Shouldn't I be able to see something if I visit the database through my web browser? If yes, how do you do it? If not, why not?
Postgres is a service running on a port. A web server is also a service running on a port (80 and/or 443 usually). There are a lot of things running on various ports on any server, heck, on any single computer. That doesn't mean that everything is interchangeable. Ports 80 and 443 are commonly agreed to serve HTTP(S) connections. HTTP is a specific protocol which specifies how two things can communicate on a specific port. Postgres is not speaking HTTP; you need to speak Postgres' particular protocol if you want to talk to it. The browser does not speak that protocol, and Postgres doesn't by default offer communication in any protocol a browser understands.
A web browser expects to "talk" to servers using a protocol it supports. Web browsers obviously support HTTP; some also support other protocols, like FTP. But your Postgres does not speak HTTP, so you don't see anything. The port number just tells you over which channel the server is reachable. Any protocol can be carried over any port, but by convention HTTP is served on port 80 and Postgres on port 5432.
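To make this concrete, here is a small sketch (assuming Postgres is listening on localhost:5432) that opens a raw TCP connection to Postgres' port and speaks HTTP at it. Postgres won't recognize the bytes as its own startup protocol and will reject the connection, which is essentially what happens when the browser tries:

var net = require('net');

// Open a raw TCP connection to Postgres' port and send an HTTP request.
var socket = net.connect(5432, 'localhost', function () {
  socket.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n');
});

socket.on('data', function (chunk) {
  // Anything that comes back is Postgres' wire protocol, not an HTTP response.
  console.log('server sent %d bytes, not an HTTP response', chunk.length);
});

socket.on('close', function () {
  console.log('connection closed -- Postgres does not speak HTTP');
});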
How generic of a PaaS is OpenShift Origin? From looking at the architecture overview, it seems very web-centric. Can I use OpenShift Origin to build a private cloud where I can run arbitrary apps, not just web-based apps?
As the title of my post indicates, my pressing question is whether it is possible to create an OpenShift app that can open a socket and ingest UDP traffic -- I don't need (and don't want) HAProxy for this app, and I don't want all the UDP traffic to first go through the node host's proxy.
Essentially, I'd like to know if I can deploy an app to a node, and have that app be able to receive UDP packets from an external-facing port on that node. Is this possible?
The Red Hat docs on Configuring the Port Proxy make me think this isn't possible:
applications listen for connections on the loopback interface. The node runs a proxy that listens on external-facing ports and forwards incoming requests to the appropriate application
I'm hoping there is a way around this restriction. Would a custom cartridge work?
As far as I know, that's not possible at this time. However, I would suggest asking the OpenShift Origin developers, either on the mailing lists or in #openshift-dev on Freenode.
Our application uses the PayPal API; in order to test it, PayPal needs to be able to post data to a servlet on our servers. This is no problem in production, however when running in GWT dev mode I cannot seem to get GWT to work through my home router. GWT is running on port 8888 and I have added the needed firewall rules to get this to work.
Does GWT somehow stop requests coming from outside the local area network? I tried -bindAddress 192.167.x.x but it did not work.
For security reasons, the Jetty server used in GWT dev mode only binds to localhost.
If you want to bind it to all interfaces, use the parameter -bindAddress 0.0.0.0.
To make sure the servlets are reachable, try connecting from a different host on your network (e.g. with telnet).
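For example, a dev mode launch with the flag added, followed by a quick reachability check from another machine on the LAN (the classpath, module name, and LAN address here are placeholders, not values from the question):

java -cp <your-classpath> com.google.gwt.dev.DevMode -bindAddress 0.0.0.0 -port 8888 com.example.MyModule

telnet 192.168.1.10 8888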