I have a blocking (synchronous) web framework running under uWSGI. The task is to provide a /health endpoint so Kubernetes can verify that the pod is alive. There is no issue with the endpoint itself; the issue is the synchronous nature. For example, if I define 8 processes in uWSGI and the application is busy with 8 heavy requests, a call to /health will be queued, and depending on the timeouts, Kubernetes may not receive a response in time and decide to kill/restart the pod. Of course I could run another web service on a different port, but that would require code changes and increase deployment complexity. Maybe I'm missing something and it's possible to define a dedicated worker in uWSGI to handle the /health endpoint in a non-blocking way? Thanks in advance!
WSGI is an inherently synchronous protocol. There has been some work to create a new async-friendly replacement called ASGI, but it's only implemented by the Django Channels project AFAIK. While you can mount a sync-mode WSGI app in an async server (generally using threads), you can't go the other way. So you could, for example, write some twisted.web code that grabs requests to /health, handles them natively via Twisted, and hands everything else off to the WSGI container. I don't know of anything off the shelf for this, though.
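For anyone who wants to try that route, here's a minimal sketch of the Twisted approach: /health is answered directly on the reactor thread, and everything else is delegated to the existing WSGI app, which runs in a thread pool. The `myapp.application` import is a placeholder for your actual WSGI callable.

```python
from twisted.internet import reactor
from twisted.web import server
from twisted.web.resource import Resource
from twisted.web.wsgi import WSGIResource

from myapp import application  # hypothetical: your existing WSGI callable


class Health(Resource):
    isLeaf = True

    def render_GET(self, request):
        # answered on the reactor thread, never queued behind WSGI workers
        return b"OK"


# the WSGI app runs in the reactor's thread pool
wsgi = WSGIResource(reactor, reactor.getThreadPool(), application)


class Root(Resource):
    def getChild(self, path, request):
        if path == b"health":
            return Health()
        # re-insert the consumed path segment and delegate to WSGI
        request.prepath.pop()
        request.postpath.insert(0, path)
        return wsgi


reactor.listenTCP(8080, server.Site(Root()))
reactor.run()
```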
I have an ASP.NET Core 3.1 Web API app. Can I write an endpoint that, when called, sends the entire application into a hung or deadlocked state? Or a background task with some code that makes this happen?
I am trying to test a Kubernetes liveness probe against a health check endpoint I wrote in an ASP.NET Core Web API application.
Thanks.
If it's for testing some kind of hung situation, you can do something like making the thread/process sleep. Make sure that the sleep period is greater than the timeoutSeconds setting of the livenessProbe.
Or, if you want to simulate the app being down, you can return an HTTP status >= 400 (e.g., 500 Internal Server Error) from the API response, so that the livenessProbe considers the app unhealthy.
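A minimal sketch of both options as a hypothetical ASP.NET Core controller (the route and sleep duration are placeholders):

```csharp
using System;
using System.Threading;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("health")]
public class HealthController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Option 1: hang longer than the probe's timeoutSeconds (uncomment to test)
        // Thread.Sleep(TimeSpan.FromSeconds(30));

        // Option 2: report unhealthy so the probe records a failure
        return StatusCode(500, "unhealthy");
    }
}
```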
livenessProbe also has a failureThreshold, which indicates how many failures to tolerate before Kubernetes terminates the pod; if you want the pod terminated after just one failure, set it to 1.
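For reference, a sketch of the probe fields discussed above (the path, port, and timings are placeholders for your deployment):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 80
  periodSeconds: 10
  timeoutSeconds: 5      # a handler sleeping longer than this counts as a failure
  failureThreshold: 1    # terminate the pod after a single failure
```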
I tried profiling a gRPC Java server, and I mainly see the following thread pools:
grpc-default-executor threads: one is created for each incoming request.
grpc-default-worker-ELG threads: these presumably listen for incoming gRPC requests and hand them off to the grpc-default-executor threads above.
Overall, is the gRPC Java server Netty-style or Jetty/Tomcat-style? Or can it be configured to run either way?
The gRPC Java server is exposed in something closer to the Jetty/Tomcat style, except that it is asynchronous. That is, in normal Servlets each request consumes a thread until it is complete. While newer Servlet versions let you detach from the dedicated thread and continue work asynchronously (freeing the thread for other use), that is less common. In gRPC you are free to work in either style. Note that gRPC uses a cachedThreadPool by default to reuse threads; on the server side it's a good idea to replace the default executor with your own, generally fixed-size, pool via ServerBuilder.executor().
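As a sketch of that suggestion (the port, pool size, and `MyServiceImpl` are placeholders):

```java
import java.util.concurrent.Executors;

import io.grpc.Server;
import io.grpc.ServerBuilder;

public class FixedPoolServer {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                // replace the default cachedThreadPool with a bounded pool
                .executor(Executors.newFixedThreadPool(16))
                .addService(new MyServiceImpl()) // hypothetical service implementation
                .build()
                .start();
        server.awaitTermination();
    }
}
```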
Internally, gRPC Java uses the Netty style, which means fully non-blocking. You may use ServerBuilder.directExecutor() to run on the Netty threads, although in that case you may also want to specify NettyServerBuilder.bossEventLoopGroup(), workerEventLoopGroup(), and, for compatibility, channelType().
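A sketch of that combination, assuming the grpc-netty transport is on the classpath (`MyServiceImpl` is again a placeholder):

```java
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class DirectExecutorServer {
    public static void main(String[] args) throws Exception {
        Server server = NettyServerBuilder.forPort(50051)
                .directExecutor() // run handlers on the Netty event-loop threads; never block here
                .bossEventLoopGroup(new NioEventLoopGroup(1))
                .workerEventLoopGroup(new NioEventLoopGroup())
                .channelType(NioServerSocketChannel.class)
                .addService(new MyServiceImpl()) // hypothetical service implementation
                .build()
                .start();
        server.awaitTermination();
    }
}
```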
As far as I know, you can specify directExecutor() when building the gRPC server/client, which ensures all work is done on the I/O thread and so threads are shared. This is not the default, for safety reasons: you have to be very careful about what you do on the I/O thread (for example, you should never block there).
We are transitioning from building applications on monolithic application servers to more microservices-oriented applications on Spring Boot. We will publish health information with Spring Boot Actuator over HTTP or JMX.
What are the options/best practices for monitoring these services, which will number around 30-50 in total? Thanks for your input!
Without knowing much detail about your architecture and services, here are some suggestions representing (a subset of) the strategies that have proven themselves in systems I've worked on in production. For these, I am assuming one container/VM per microservice:
If your services are stateless (as they should be :-) and you have redundancy (as you should have :-), then set up your load balancer to call /health on each instance and take an instance out of rotation when its health check fails. Depending on how tolerant your system is, you can define failure with rules more nuanced than a single failed check (e.g., 3 consecutive failures).
On each instance, run a Nagios agent that calls your health check (/health) on localhost. If it fails, generate an alert that identifies which instance failed.
You also want a higher-level alert to fire if none of the instances of a given service are healthy. You might be able to set this up in your load balancer, or you can run a monitor process outside the load balancer that calls the service periodically; if it gets no response at all (i.e., none of the instances are responding), it should sound all alarms. Hopefully this condition is never triggered in production because you dealt with the other alarms first.
Advanced: in a cloud environment, you can connect the alarms to automatic scaling features, so that unhealthy instances are torn down and healthy ones brought up automatically whenever the monitoring system deems an instance unhealthy.
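Since the question mentions publishing health via Spring Boot Actuator, here is a minimal sketch of exposing the /health endpoint these checks would call (Spring Boot 2.x property names; adjust for your version):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health,info
  endpoint:
    health:
      show-details: never   # keep the probe response small for load balancers
```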
We have a cluster of machines behind a load balancer (LB), and hardware load balancers generally keep persistent connections. When we need to deploy an update to all machines (rolling update), the way we do it is to bring one machine out of rotation, watch until the LB sends no more requests to that server, and once the app reaches that no-request state, update it manually.

With 70-80 servers in the picture, this becomes very painful.

Does anyone have a better way of doing it?
70-80 servers is a very horizontally scaled implementation... good job! "Better" is a very relative term; hopefully one of these suggestions counts as better.
Implement an intelligent health check for the application, with the ability to adjust it while the application is running. What we do is make the health check start failing while the application is still running fine, which lets the load balancer automatically take the system out of rotation. Our stop scripts query the load balancer to make sure the instance is out of rotation and then shut down normally, allowing the existing connections to drain. (A sketch of such a toggleable health check follows this list.)
Batch multiple groups of systems together. I am assuming that you have 70 servers to handle peak load, which means you should be able to restart several at a time. A standard way to do this is to implement a simple token-granting service with a maximum of, say, 10 tokens, and have your shutdown scripts check out a token before continuing.
Another way to do this is with blue/green deploys: keep an entire second server farm, and once the second farm is updated, switch the load balancer to point at it.
This is an alternative to option 3. Install both versions of the app on the same servers and have an internal proxy service (like haproxy) switch connections between the deployed versions. For example:
haproxy listening on 8080
app version 0.1 listening on 9001
app version 0.2 listening on 9002
Once you are happy with the deployment of app version 0.2, switch haproxy to send traffic to 9002; when you release version 0.3, switch back to 9001, and so on (a haproxy sketch follows).
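A trimmed haproxy sketch of that port-switching idea (only the relevant sections; a real config also needs global/defaults blocks):

```
frontend app_in
    bind *:8080
    default_backend app_live

backend app_live
    # point at the version that should take traffic, then reload haproxy
    server v1 127.0.0.1:9001        # app version 0.1
    # server v2 127.0.0.1:9002      # swap in when cutting over to 0.2
```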
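And the toggleable health check from suggestion 1, sketched here with Flask purely for illustration (the flag-file path is hypothetical; your stop/deploy tooling would create it to start draining):

```python
import os
from flask import Flask

app = Flask(__name__)
FLAG = "/tmp/out-of-rotation"  # hypothetical: created by the stop/deploy scripts


@app.route("/health")
def health():
    if os.path.exists(FLAG):
        # the app is still fine, but we report unhealthy so the load
        # balancer takes this instance out of rotation and drains it
        return "draining", 503
    return "ok", 200
```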
I have posted this to Server Fault, but the Node.js community seems tiny there, so I'm hoping this brings more exposure.
I have a Node.js (0.4.9) application and am researching how to best deploy and maintain it. I want to run it in the cloud (EC2 or RackSpace) with high availability. The app should run on HTTPS. I'll worry about East/West/EU full-failover later.
I have done a lot of reading about keep-alive (Upstart, Forever), multi-core utilities (Fugue, multi-node, Cluster), and proxy/load balancers (node-http-proxy, nginx, Varnish, and Pound). However, I am unsure how to combine the various utilities available to me.
I have this setup in mind and need to iron out some questions and get feedback.
Cluster is the most actively developed and seemingly popular multi-core utility for Node.js, so use that to run one Node "cluster" per app server on a non-privileged port (say, 3000). Q1: Should Forever be used to keep the cluster alive, or is that just redundant?
Use 1 nginx per app server running on port 80, simply reverse proxying to node on port 3000. Q2: Would node-http-proxy be more suitable for this task even though it doesn't gzip or serve static files quickly?
Have a minimum of 2x servers as described above, with an independent server acting as a load balancer across these boxes. Use Pound listening on 443 to terminate HTTPS and pass HTTP to Varnish, which would round-robin load balance across the IPs of the servers above. Q3: Should nginx be used to do both instead? Q4: Should the AWS or RackSpace load balancer be considered instead (the latter doesn't terminate HTTPS)?
General Questions:
Do you see a need for (2) above at all?
Where is the best place to terminate HTTPS?
If WebSockets are needed in the future, what nginx substitutions would you make?
I'd really like to hear how people are setting up current production environments and which combination of tools they prefer. Much appreciated.
It's been several months since I asked this question, and not a lot of answers have come in. Both Samyak Bhuta and nponeccop had good suggestions, but I wanted to discuss the answers I've found to my questions.
Here is what I've settled on at this point for a production system, but further improvements are always being made. I hope it helps anyone in a similar scenario.
Use Cluster to spawn as many child processes as you desire to handle incoming requests on multi-core virtual or physical machines. This binds to a single port and makes maintenance easier. My rule of thumb is n - 1 Cluster workers (see the sketch below this list). You don't need Forever on this, as Cluster respawns worker processes that die. To have resiliency even at the Cluster parent level, ensure that you use an Upstart script (or equivalent) to daemonize the Node.js application, and use Monit (or equivalent) to watch the PID of the Cluster parent and respawn it if it dies. You can try using the respawn feature of Upstart, but I prefer having Monit watching things, so rather than split responsibilities, I find it's best to let Monit handle the respawn as well.
Use 1 nginx per app server running on port 80, simply reverse proxying to your Cluster on whatever port you bound to in (1). node-http-proxy can be used, but nginx is more mature, more featureful, and faster at serving static files. Run nginx lean (don't log, don't gzip tiny files) to minimize its overhead (see the nginx sketch below this list).
Have a minimum of 2x servers as described above, in a minimum of 2 availability zones, and if in AWS, use an ELB that terminates HTTPS/SSL on port 443 and communicates over HTTP on port 80 with the Node.js app servers. ELBs are simple and, if you desire, make it somewhat easier to auto-scale. You could run multiple nginx instances, either sharing an IP or round-robin balanced by your DNS provider, but I found this overkill for now. At that point, you'd remove the nginx instance on each app server.
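For reference, a minimal sketch of the worker setup in (1). This uses today's built-in cluster module (the original answer predates it, but the shape is the same); the port and response are placeholders:

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // rule of thumb from (1): n - 1 workers
  for (let i = 0; i < os.cpus().length - 1; i++) cluster.fork();
  // Cluster-level respawn: replace any worker that dies
  cluster.on('exit', (worker) => {
    console.log('worker ' + worker.process.pid + ' died, respawning');
    cluster.fork();
  });
} else {
  // all workers share the single bound port
  http.createServer((req, res) => res.end('hello\n')).listen(3000);
}
```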
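And a lean nginx server block for (2), reverse proxying port 80 to the Cluster port (ports are placeholders; this goes inside the http block of your config):

```nginx
server {
    listen 80;
    access_log off;  # run lean: no logging

    location / {
        proxy_pass http://127.0.0.1:3000;  # the port Cluster bound in (1)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```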
I have not needed WebSockets so nginx continues to be suitable and I'll revisit this issue when WebSockets come into the picture.
Feedback is welcome.
You should not bother about serving static files quickly. If your load is small, Node static file servers will do. If your load is big, it's better to use a CDN (Akamai, Limelight, CoralCDN).
Instead of forever you can use monit.
Instead of nginx you can use HAProxy. It is known to work well with WebSockets. Consider also proxying Flash sockets, as they are a good workaround until WebSocket support is ubiquitous (see socket.io).
HAProxy has some support for HTTPS load balancing, but not termination. You can try to use stunnel for HTTPS termination, but I think it's too slow.
Round-robin (or other statistical) load balancing works pretty well in practice, so in most cases there's no need to know about the other servers' load.
Consider also using ZeroMQ or RabbitMQ for communications between nodes.
This is an excellent thread! Thanks to everyone that contributed useful information.
I've been dealing with the same issues the past few months setting up the infrastructure for our startup.
As people mentioned previously, we wanted a Node environment with multi-core support + WebSockets + vhosts.
We ended up creating a hybrid between the native cluster module and http-proxy and called it Drone - of course it's open source:
https://github.com/makesites/drone
We also released it as an AMI with Monit and Nginx:
https://aws.amazon.com/amis/drone-server
I found this thread while researching how to add SSL support to Drone - thanks for recommending ELB, but I wouldn't rely on a proprietary solution for something so crucial.
Instead, I extended the default proxy to handle all the SSL requests. The configuration is minimal, and the SSL requests are converted to plain HTTP - but I guess that's preferable when you're passing traffic between ports...
Feel free to look into it and let me know if it fits your needs. All feedback welcome.
I have seen an AWS load balancer for load balancing and termination, plus http-node-proxy as a reverse proxy (if you want to run multiple services per box), plus cluster.js for multi-core support and process-level failover, work extremely well.
forever.js on top of cluster.js could be a good option if you want to take extreme care with failover, but that's hardly needed.