Horizontal scaling of node.js server instances on a single machine - deployment

Running a web server on Node.js is a simple thing to do (as its excellent examples and documentation show), but I wonder how you can fully use the CPU resources of a dedicated server?
Since node.js is single-threaded the only way to take advantage of multiple processors is via multiple processes. Of course, only one process can bind to a port so it seems there would have to be a master/worker pattern wherein the master forks children, binds to the incoming port, and delegates incoming connections (and the actual processing work) to the children. (Perhaps via a hungry-consumer pattern?)
Is this the best way to scale a web server running node.js? If so, are there libraries to simplify the master/worker pattern? If not, what patterns or deployment setups are recommended to best use the entire resources of a dedicated machine?
(Is this a better question for ServerFault?)

Multi-node is a library that provides the master/worker pattern.
If the server processes don't need to be able to talk to each other, and you aren't using Socket.IO, a simple option would be to just start one process per core, bind each to a local port, and use something like nginx or HAProxy to load balance between them.
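As a minimal sketch of that setup (the port scheme and loopback binding are my own assumptions, not from the answer above): each worker is a plain HTTP server bound to its own local port, started once per core, and nginx or HAProxy lists those ports as upstreams.

```js
// worker.js - start one copy per core, e.g.:
//   PORT=3001 node worker.js &
//   PORT=3002 node worker.js &
//   ... (one per CPU core)
// nginx/HAProxy then round-robins across 127.0.0.1:3001, :3002, ...
const http = require('http');

const port = parseInt(process.env.PORT, 10) || 3001; // assumed port scheme

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`handled by pid ${process.pid}\n`);
}).listen(port, '127.0.0.1', () => {
  console.log(`worker ${process.pid} listening on ${port}`);
});
```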

If you're using express, I'd use tj's Cluster: http://learnboost.github.com/cluster/
It provides 'transparent' CPU-based load balancing, which is nice because you can use your existing express app and it scales across cores relatively painlessly.
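For reference, wiring an express app into Cluster looked roughly like the sketch below. This is from memory of the library's documentation (the project has long since been deprecated in favour of Node's built-in cluster module), so treat the exact plugin calls as assumptions.

```js
// app.js is assumed to export a regular express server.
var cluster = require('cluster')   // tj's Cluster package, not Node's built-in module
  , app = require('./app');

cluster(app)
  .use(cluster.logger('logs'))     // per-worker log files
  .use(cluster.stats())            // basic worker statistics
  .listen(3000);                   // forks one worker per CPU and shares the port
```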

Related

Is it possible to deploy a cluster with one service running locally?

Say you have 3 or more services that communicate with each other constantly. If they are deployed remotely to the same cluster, all is good because they can see each other.
However, I was wondering how I could deploy one of those locally, using minikube for instance, in a way that they are still able to talk to each other.
I am aware that I can port-forward the other two so that the one I have deployed locally can send calls to them, but I am not sure how to make it work the other way around, so that the other two are also able to send calls to the local one.
TL;DR Yes, it is possible but not recommended, it is difficult and comes with a security risk.
Charlie put it very well in the comment and is absolutely right:
Your local service will not be discoverable by a remote service unless you have a direct IP. One other way is to establish RTC or Web socket connection between your local and remote services using an external server.
As you can see, it is possible, but not recommended. Generally, both containerization and the use of Kubernetes tend to isolate environments. If you nevertheless want services running in completely different clusters on different machines to communicate with each other, you need to configure the appropriate network connections over the public internet, which also carries a security risk.
If you want to set up the environment locally, it will be a much better idea to run these 3 services together as an independent whole. Also take into account that Minikube is mainly designed for learning and testing certain solutions and is not really suitable for production.

Could I replace RabbitMQ with native kubernetes messaging queue

I couldn't find an answer to this: could we replace RabbitMQ/ActiveMQ/SQS with a native Kubernetes message queue?
Or are they totally different in terms of features?
It is a totally different mechanism.
Kubernetes' internal queues are not real queues you can use in external applications; they are part of its internal messaging system and manage only objects that are part of Kubernetes.
Moreover, Kubernetes doesn't provide any message queue as a service for external apps (except when your app itself operates on Kubernetes objects).
If you are not sure which service is better for your app, try queues.io.
It is a list of almost all available MQ engines with some highlights.
If you are referring to the Parallel Processing Using a Work Queue approach, you can technically use any queuing system, because the main logic lives in the code that pulls items from the queue; Kubernetes is used only to control the parallelism.
If the idea is to use the queue algorithm Kubernetes uses internally, it is not exposed as a service for external applications; you would have to copy the code and implement it in your own application.
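To make the work-queue point concrete, here is a minimal sketch of a consumer that pulls items from RabbitMQ using the amqplib package; the queue name, connection URL, and processItem function are assumptions. Kubernetes' only job is deciding how many replicas of this worker run in parallel (e.g. as a Job or Deployment).

```js
// worker.js - a queue consumer; Kubernetes only decides how many replicas run.
// Assumes a RabbitMQ instance reachable at RABBITMQ_URL and a queue named 'tasks'.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect(process.env.RABBITMQ_URL || 'amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.prefetch(1); // take one item at a time so idle workers pick up the slack

  await ch.consume('tasks', async (msg) => {
    if (msg === null) return;
    await processItem(msg.content.toString()); // your actual work goes here
    ch.ack(msg); // acknowledge only after the work succeeded
  });
}

async function processItem(item) {
  console.log('processing', item);
}

main().catch((err) => { console.error(err); process.exit(1); });
```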

What is the canonical way to deploy Scala/Akka microservices?

We are going to end up with dozens of these microservices (most are Akka-based), and I'm unsure how to best manage their deployment. Specifically, they are built to be independent of each other and as specialized and distributed as possible.
My question stems from the fact that all of them are too small for their own individual JVMs; even if we were to host them on AWS nano instances, we'll still end up with about 40 machines if you factor in redundancy, and such a high number is simply not needed. Three medium size instances could (and do) easily handle the entire workload.
Currently, I just group them into "container" applications, somewhat randomly, and then run these container applications on larger JVMs.
However, there has to be a better way. I am not aware of any application servers for Akka where you can just "deploy actors", so I wanted to get some insight on how others run Akka microservices in production (and specifically how to manage deployment).
This is probably not limited to Scala and Akka, but most other platforms have dedicated app servers where you deploy these things.
IMHO, the canonical way is to use a service orchestration tool, and that would indeed run them in individual processes, each with their own JVM.
That's the only way you get the decoupling, isolation, and resilience you want with microservices, and only this way will you be able to deploy, update, stop, and start them individually.
You're saying:
My question stems from the fact that all of them are too small for their own individual JVMs; even if we were to host them on AWS nano instances
You seem to treat JVM and Amazon VMs as equivalent, but that's not the case. You can have multiple JVM processes on a single virtual machine.
I suggest you have a look at service orchestration tools such as
Lightbend Production Suite / Service Orchestration
or Kubernetes
These are just examples, there are others. Note that this tool category will give you a lot of features you'll sooner or later need anyway, such as easy scaling, log consolidation, service lookup, health checks / service failure handling etc.

Apache Thrift Service Auto Discovery

I want to develop some local network services using apache thrift. There should be multiple services waiting for ONE master to connect to them and use them exclusively until the master releases them. The services are written in multiple languages.
I chose Thrift because I need a simple remote procedure call mechanism for communication between the services that is fast and supports multiple languages. While Thrift is good for RPC, I need some way to locate the services' TCP addresses and ports via an auto-discovery mechanism, so that the Thrift servers and clients can be connected to each other without hardwiring the addresses.
What possibilities do I have for auto-discovering services of this sort?
Thanks!
There is nothing you can just plug into your scheme of things. You can build something similar using Apache ZooKeeper. Netflix's Curator provides a good set of tools for building this on top of ZooKeeper. See https://github.com/Netflix/curator
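As a rough illustration of the ZooKeeper idea from a Node.js service (Curator itself is a Java library), each service can register an ephemeral znode holding its host and port, and the master lists the children to discover live endpoints. The node-zookeeper-client package, the path convention, and the data format below are assumptions, not something Thrift provides.

```js
// register.js - advertise this service's Thrift endpoint in ZooKeeper.
// Assumes the 'node-zookeeper-client' package and a /services/<name> path convention.
const zookeeper = require('node-zookeeper-client');

const client = zookeeper.createClient('localhost:2181');
const path = '/services/image-resizer';                 // hypothetical service name
const payload = Buffer.from(JSON.stringify({ host: '10.0.0.5', port: 9090 }));

client.once('connected', () => {
  client.mkdirp('/services', (err) => {
    if (err) throw err;
    // EPHEMERAL: the znode disappears automatically if this process dies,
    // so the master never discovers a dead endpoint.
    client.create(path, payload, zookeeper.CreateMode.EPHEMERAL, (err) => {
      if (err) throw err;
      console.log('registered at', path);
    });
  });
});

client.connect();

// The master would call client.getChildren('/services', ...) and read each
// child's data to find the Thrift host/port to connect to.
```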

How to deploy Node.js in cloud for high availability using multi-core, reverse-proxy, and SSL

I have posted this to ServerFault, but the Node.js community seems tiny there, so I'm hoping this brings more exposure.
I have a Node.js (0.4.9) application and am researching how to best deploy and maintain it. I want to run it in the cloud (EC2 or RackSpace) with high availability. The app should run on HTTPS. I'll worry about East/West/EU full-failover later.
I have done a lot of reading about keep-alive (Upstart, Forever), multi-core utilities (Fugue, multi-node, Cluster), and proxy/load balancers (node-http-proxy, nginx, Varnish, and Pound). However, I am unsure how to combine the various utilities available to me.
I have this setup in mind and need to iron out some questions and get feedback.
1. Cluster is the most actively developed and seemingly popular multi-core utility for Node.js, so use that to run 1 node "cluster" per app server on a non-privileged port (say 3000). Q1: Should Forever be used to keep the cluster alive, or is that just redundant?
2. Use 1 nginx per app server running on port 80, simply reverse proxying to node on port 3000. Q2: Would node-http-proxy be more suitable for this task even though it doesn't gzip or serve static files quickly?
3. Have a minimum of 2x servers as described above, with an independent server acting as a load balancer across these boxes. Use Pound listening on 443 to terminate HTTPS and pass HTTP to Varnish, which would round-robin load balance across the IPs of the servers above. Q3: Should nginx be used to do both instead? Q4: Should the AWS or RackSpace load balancer be considered instead (the latter doesn't terminate HTTPS)?
General Questions:
Do you see a need for (2) above at all?
Where is the best place to terminate HTTPS?
If WebSockets are needed in the future, what nginx substitutions would you make?
I'd really like to hear how people are setting up current production environments and which combination of tools they prefer. Much appreciated.
It's been several months since I asked this question and not many answers have flowed in. Both Samyak Bhuta and nponeccop had good suggestions, but I wanted to discuss the answers I've found to my own questions.
Here is what I've settled on at this point for a production system, but further improvements are always being made. I hope it helps anyone in a similar scenario.
1. Use Cluster to spawn as many child processes as you desire to handle incoming requests on multi-core virtual or physical machines. This binds to a single port and makes maintenance easier. My rule of thumb is n - 1 Cluster workers. You don't need Forever on this, as Cluster respawns worker processes that die. To have resiliency even at the Cluster parent level, ensure that you use an Upstart script (or equivalent) to daemonize the Node.js application, and use Monit (or equivalent) to watch the PID of the Cluster parent and respawn it if it dies. You can try using the respawn feature of Upstart, but I prefer having Monit watching things, so rather than split responsibilities, I find it's best to let Monit handle the respawn as well. (A minimal sketch of this setup follows at the end of this answer.)
2. Use 1 nginx per app server running on port 80, simply reverse proxying to your Cluster on whatever port you bound to in (1). node-http-proxy can be used, but nginx is more mature, more featureful, and faster at serving static files. Run nginx lean (don't log, don't gzip tiny files) to minimize its overhead.
3. Have a minimum of 2x servers as described above, in a minimum of 2 availability zones, and if in AWS, use an ELB that terminates HTTPS/SSL on port 443 and communicates over HTTP on port 80 to the node.js app servers. ELBs are simple and, if you desire, make it somewhat easier to auto-scale. You could instead run multiple nginx instances, either sharing an IP or round-robin balanced by your DNS provider, but I found this overkill for now. At that point, you'd remove the nginx instance on each app server.
I have not needed WebSockets so nginx continues to be suitable and I'll revisit this issue when WebSockets come into the picture.
Feedback is welcome.
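Here is a minimal sketch of point (1), written against the cluster module that now ships with Node core (the standalone Cluster package described above predates it). The worker count follows the n - 1 rule of thumb, and the master explicitly respawns dead workers; the port and log messages are illustrative.

```js
// server.js - master forks (numCPUs - 1) workers and respawns any that die.
// Monit/Upstart only needs to watch this master process.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

const numWorkers = Math.max(1, os.cpus().length - 1); // n - 1 rule of thumb

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) cluster.fork();

  // Respawn a worker whenever one exits, keeping the pool at full strength.
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died (${signal || code}), respawning`);
    cluster.fork();
  });
} else {
  // All workers share port 3000; the cluster module distributes connections.
  http.createServer((req, res) => {
    res.end(`hello from worker ${process.pid}\n`);
  }).listen(3000);
}
```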
You should not bother with serving static files quickly. If your load is small, Node static file servers will do. If your load is big, it's better to use a CDN (Akamai, Limelight, CoralCDN).
Instead of forever you can use monit.
Instead of nginx you can use HAProxy. It is known to work well with websockets. Consider also proxying flash sockets as they are a good workaround until websocket support is ubiquitous (see socket.io).
HAProxy has some support for HTTPS load balancing, but not termination. You can try to use stunnel for HTTPS termination, but I think it's too slow.
Round-robin (or other statistical) load balancing works pretty well in practice, so there's no need to know about the other servers' load in most cases.
Consider also using ZeroMQ or RabbitMQ for communications between nodes.
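If you go the ZeroMQ route, a push/pull socket pair is about the simplest way to fan work out between Node processes. This sketch uses the zeromq npm package (v6 API); the address and message shape are arbitrary assumptions.

```js
// Run "node pushpull.js producer" in one process and plain "node pushpull.js"
// in as many consumer processes as you like; ZeroMQ load-balances the pushes.
const zmq = require('zeromq');

async function producer() {
  const push = new zmq.Push();
  await push.bind('tcp://127.0.0.1:5558');       // arbitrary local address
  for (let i = 0; ; i++) {
    await push.send(JSON.stringify({ job: i })); // resolves once a consumer takes it
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}

async function consumer() {
  const pull = new zmq.Pull();
  pull.connect('tcp://127.0.0.1:5558');
  for await (const [msg] of pull) {              // each message is an array of frames
    console.log(`${process.pid} got`, msg.toString());
  }
}

(process.argv[2] === 'producer' ? producer() : consumer()).catch(console.error);
```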
This is an excellent thread! Thanks to everyone that contributed useful information.
I've been dealing with the same issues the past few months setting up the infrastructure for our startup.
As people mentioned previously, we wanted a Node environment with multi-core support + web sockets + vhosts
We ended up creating a hybrid between the native cluster module and http-proxy and called it Drone - of course it's open sourced:
https://github.com/makesites/drone
We also released it as an AMI with Monit and Nginx
https://aws.amazon.com/amis/drone-server
I found this thread while researching how to add SSL support to Drone - thanks for recommending ELB, but I wouldn't rely on a proprietary solution for something so crucial.
Instead I extended the default proxy to handle all the SSL requests. The configuration is minimal and the SSL requests are converted to plain HTTP - but I guess that's preferable when you're passing traffic between ports...
Feel free to look into it and let me know if it fits your needs. All feedback welcomed.
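For anyone who wants the same result without Drone, the idea looks roughly like the sketch below: terminate TLS in a small Node process built on the http-proxy package and forward plain HTTP to the app port. The certificate paths and ports are assumptions, and this is not Drone's actual implementation.

```js
// ssl-proxy.js - terminate HTTPS and pass plain HTTP to the app on port 3000.
// Uses the 'http-proxy' npm package; file paths and ports are illustrative.
const fs = require('fs');
const https = require('https');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({ target: 'http://127.0.0.1:3000' });

const tlsOptions = {
  key: fs.readFileSync('/etc/ssl/private/example.key'),   // hypothetical paths
  cert: fs.readFileSync('/etc/ssl/certs/example.crt'),
};

https.createServer(tlsOptions, (req, res) => {
  proxy.web(req, res); // decrypted request reaches the backend as plain HTTP
}).listen(443, () => console.log('TLS termination proxy on :443'));
```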
I have seen the AWS load balancer for load balancing and termination, plus node-http-proxy as a reverse proxy (if you want to run multiple services per box), plus cluster.js for multi-core support and process-level failover, doing extremely well.
forever.js on top of cluster.js could be a good option if you want to take extreme care in terms of failover, but that's hardly needed.