I have my server hosted on Heroku. The data source for my app is external to the app. The data is fetched as follows:
Initialize a process that connects to a socket on the external party's server.
Save the data that comes through this socket connection.
My question is: is it possible on Heroku to launch such a process, one which needs to run constantly, forever, listening to a socket on an external server?
A process on Heroku can only listen for HTTP traffic on port 80. As andy mentioned, Node.js is your best bet for running a service like this on Heroku.
I think this might be a job for Node.js, which you can run on Heroku. The logic flow would be to connect to the external party's server with a Node.js app; when data is received, it triggers a callback, which can then make a web request back to a Rails server with the data (a minimal sketch follows the link below).
For examples of something like this, check out the pubnub Node.js sample app:
https://github.com/pubnub/pubnub-api/tree/master/nodejs
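Here is a minimal sketch of that flow (the host names, the port, and the /incoming_data endpoint are hypothetical placeholders, not taken from the pubnub sample):

var net = require('net');
var http = require('http');

// keep one long-lived socket open to the external party's server
var feed = net.connect(9999, 'external.example.com', function () {
  console.log('connected to external feed');
});

// each time data arrives the callback fires; forward the chunk to the
// Rails app with a plain HTTP POST
feed.on('data', function (chunk) {
  var req = http.request({
    host: 'my-rails-app.example.com',
    path: '/incoming_data',
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' }
  });
  req.end(chunk);
});

feed.on('error', function (err) {
  console.error('feed error:', err);
});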
If I understand you correctly, you need to launch a background process on Heroku that connects to an external server, and this process then saves data from the API locally?
Accessing an external service:
As far as I'm aware, Heroku does not restrict access to external hosts or ports. Indeed, I have an app that connects to my MongoDB database on MongoHQ.
Long running process: This is certainly possible using the new Celadon Cedar stack. The new Cedar stack uses a concept called a Procfile, which enables running any script (e.g. Ruby, bash, Node.js) as a process (a Procfile sketch follows below).
Saving the data: Heroku has a read-only filesystem (excepting /tmp), so you'll need to save the data coming from the API in a database (or somewhere similar).
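For example, a Procfile that runs a long-lived listener script next to the web process could look like this (the process names and file names are placeholders for whatever your app actually uses):

web: bundle exec rails server -p $PORT
worker: node listener.js

You would then scale the listener with heroku ps:scale worker=1.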
I'm trying to establish a data connection using REST from a QlikCloud account to a locally running application. I get an error:
Connection to local resources is not allowed
The application running on my laptop has a REST API enabled.
I was not able to use QlikSense Desktop, so I had to log in through the browser to QlikCloud.
I also tried giving the IP address of my laptop instead of localhost. It still throws an error:
Connection to http://<ip_address>:1000/v1/documents?uri=/csv/myFile.csv is not allowed
Should I be running my application only on a server? Any help is appreciated.
The data connections are "executed" in the context of the Qlik Engine, which means that when you specify localhost, the connection will try to load the data from the machine where the Engine is running. In Qlik Cloud's case, this will be some machine in Qlik's cloud.
You can:
use QS Desktop (you've mentioned that this is not working for you)
host your service somewhere on the internet where the Engine can reach it
use some service (like ngrok) that can tunnel the local server to a public URL, which Qlik can then access
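On the ngrok option: since the local REST service in the question listens on port 1000, exposing it is a single command; ngrok prints a public URL that you can then use in the Qlik data connection instead of localhost:

ngrok http 1000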
Today I'm hosting a Laravel v4 web application on a MacMini. Why a Mac? Because I created the application logic in Objective-C (leveraging my experience with iOS dev). Whether or not this was the right choice isn't the point of the question.
What I'm interested in knowing is how I can separate my web and application servers. For instance, if I put my web server on Linode (or wherever), how do I go about communicating back and forth between the web server and the application server? Is there some resource I can look at to understand how to do this?
Assumptions
Here are some assumptions I'm making:
I'm guessing Laravel and the Objective-C application are part of the same "system", so I'm just gonna treat this as if you need a web server to send requests to a PHP application.
The Linode server will be a web server which sends requests to the PHP application (Laravel).
Hosting PHP Applications
There are three moving parts:
The web server (Apache, Nginx)
The application gateway (PHP-FPM)
The application
The gateway and the code must live on the same computer/server. The web server can live on a separate computer/server.
This means you'll need your Macintosh to run PHP-FPM, which can then listen for remote connections and hand them to the PHP application.
Macintosh
Install php-fpm on your Mac. Make sure it can listen for remote network connections. This is usually done with the listen directive in the www.conf file; you can listen for connections on the remote network interface (whatever IP address the computer is assigned).
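For example, in www.conf (the pool file's path varies by install; the Linode address below is a placeholder):

; listen on all interfaces instead of a local socket; 9000 is
; PHP-FPM's conventional port
listen = 0.0.0.0:9000
; optionally restrict which clients may connect
listen.allowed_clients = 203.0.113.10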
Linode
Install Nginx or Apache and have it proxy FastCGI requests off to your Macintosh server at the Macintosh's IP address (the one you set it up to listen on in the step above).
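A sketch of the Nginx side, assuming 203.0.113.20 stands in for the Mac's IP; note that SCRIPT_FILENAME must be the script's path as it exists on the Mac, since that is where the code actually lives:

server {
    listen 80;
    server_name example.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /Users/you/app/public$fastcgi_script_name;
        fastcgi_pass 203.0.113.20:9000;
    }
}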
Firewalls
You may need to ensure the firewalls at both ends allow incoming/outgoing connections on the networks being used to communicate with each other.
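For instance, on the Mac end (which filters with pf) a rule along these lines would admit FastCGI traffic from the Linode only; the address is a placeholder:

# in /etc/pf.conf: let the Linode reach PHP-FPM on port 9000
pass in proto tcp from 203.0.113.10 to any port 9000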
I am having some issues with the MeteorJS app that I am working on. I am working for a client and we are using his dedicated server for our app's deployment. The server has PHP installed and is already running Apache (a PHP app is live on the server). The server itself runs a version of CentOS.
I bundled my Meteor app and uploaded it to the server using my cPanel access (it is not root-level access). I also created an SSH key and logged into the server using that SSH access.
I used the export command to set my MONGO_URL to mongodb://localhost:27017/<db-name> (version 2.6.3 of MongoDB is installed on the server) and PORT to 3000. From there I ran the app using the node package "pm2".
Now the issue is that when the app runs it accesses the database for data.
The request is made from client side.
The server receives the request (seen in the live log)
The server fetches data from db and logs it in the terminal.
But then it takes somewhere around 10-15 seconds to send that data back to the client.
There are no extra commands or computation between logging the data fetched from the database and returning it to the client.
But if I change the mongo URI to my instance of MongoLab, everything works fine and there are no delays. My client prefers that the mongo runs on his dedicated server.
As a programmer I know it would be difficult to answer this question with limited information and no hands-on debugging, but I was hoping someone else has experienced this issue and was able to resolve it. I just installed MongoDB on the server without any further configuration. Do I need to install any further packages or do any configuration?
You need to set MONGO_OPLOG_URL to enable the oplog tailing feature. When oplog tailing is disabled, Meteor falls back to polling the database, which is why it takes around 10-15 seconds to send that data to the client.
Export MONGO_OPLOG_URL like this:
export MONGO_OPLOG_URL=mongodb://localhost/local
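One caveat worth knowing: the oplog only exists when mongod runs as a replica set, so a plain standalone install has to be converted first. A minimal single-member setup looks roughly like this:

# start mongod with a replica set name
mongod --replSet rs0

# then run once from the mongo shell
rs.initiate()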
I am curious if it is possible (as well as how) to connect FROM a website hosted in Windows Azure TO an outside server (IP and port) using a TCP socket. I have no ability to change the outside infrastructure, but I could change the way I access it; for example, maybe rewriting the request to connect using socket.io or something else (which may or may not solve the problem). But since I already have this working outside of Azure, it would be nice if it were simply a configuration change that enables my web site to call this outside system, which only exposes its functionality through a socket connection. Thanks!
So I've tested this particular example on my local machine:
http://bjorngylling.com/2011-04-13/postgres-listen-notify-with-node-js.html
It worked! Now, when I update a specific table while my node.js file (from the tutorial) is running, I get an instant notification in my Terminal (Mac)!! Cool!
But how do I get this into a client's browser??
First of all, in the node.js script you'll notice that I have to connect to the database with my username and password:
pgConnectionString = "postgres://username:pswd@localhost/db";
I obviously can't have that floating around in the .js file the user's browser downloaded.
Plus I don't even know what scripts I'd have to include in the <head>. I can't find anything anywhere on how this is used in the real world... All I see are neat little examples you can use on the command line.
Any advice, or guidance in the right direction would be awesome! Thanks.
You can't.
Node.js runs directly on your server, speaking directly to the native libraries on that machine. I'm not sure exactly what the Postgres driver you are using does, but either it speaks to the Postgres libraries OR it speaks directly over sockets to a local or remote database server.
Neither of these methods can be used directly from a browser environment: it can't speak directly to native libraries, and it can't speak "raw" sockets.
What you can do is to have the web client speak to your own server process on a server (running node.js or similar), which would then speak to the database on behalf of the client.
Assuming you also need the database server to be able to initiate notifications to the client, you would need a bi-directional communication module like socket.io or similar.
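A rough sketch of that architecture, assuming the node-postgres (pg) and socket.io packages; the channel name table_update and the port are hypothetical, and the credentials stay on the server:

var pg = require('pg');
var io = require('socket.io').listen(8080);

// server-side only: the connection string never reaches the browser
var client = new pg.Client('postgres://username:pswd@localhost/db');
client.connect(function (err) {
  if (err) throw err;
  // subscribe to the channel your trigger NOTIFYs on
  client.query('LISTEN table_update');
});

// relay every Postgres notification to all connected browsers
client.on('notification', function (msg) {
  io.sockets.emit('table_update', msg.payload);
});

// in the browser, after loading /socket.io/socket.io.js:
//   var socket = io.connect('http://yourserver:8080');
//   socket.on('table_update', function (payload) { ... });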
What you can do: combine your JS running on node.js, which accesses Postgres and listens for events, with a node.js-based WebSocket server; implement PubSub and push out to HTML5 browsers (WebSocket-capable ones).
Another option: use a generic WebSocket-to-TCP bridge like https://github.com/kanaka/websockify and implement the Postgres client protocol in JS to run in the browser. That is doable, but probably not easy, and not for the faint-hearted.