How Gunicorn handles TCP connections for sync workers

Gunicorn is based on the pre-fork worker model: there is a master process and a set of sync workers, and each sync worker process handles a single request at a time. How does Gunicorn manage TCP connections?
Does each client create a socket connection directly with a worker listening on the configured port? Two processes cannot ordinarily listen on the same port.
Or does the client create a socket connection with the master process, which forwards the request to a worker via IPC?
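
Neither, exactly: in the classic pre-fork pattern, which Gunicorn implements, the master binds the listening socket once before forking, and every worker inherits that socket and blocks in accept() on it. The kernel hands each incoming connection to exactly one waiting worker, so the client's TCP connection is established directly with that worker; no request forwarding over IPC is involved. A minimal sketch of the pattern, using only Python's standard library (illustrative only, not Gunicorn's actual code):

import os
import socket

# The parent ("master") binds ONE listening socket before forking.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8000))
sock.listen(128)

for _ in range(4):  # pretend we run 4 sync workers
    if os.fork() == 0:
        # Worker: inherits the same listening socket and blocks in
        # accept(). The kernel hands each incoming connection to
        # exactly one of the waiting workers.
        while True:
            conn, _ = sock.accept()
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            conn.close()

# Master: supervises the workers (the real master also restarts
# crashed workers, handles signals, reloads configuration, ...).
for _ in range(4):
    os.wait()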

Related

In vert.x, does it make sense to create multiple HttpServer instances in a runtime?

I created a verticle named HttpServerVerticle and inside it created an HttpServer instance via vertx.createHttpServer(). Then, in my main verticle, I deployed this HTTP verticle with more than one instance: vertx.deployVerticle("xxx.xxx.HttpServerVerticle", deploymentOptionsOf(instances = 2)).
Does it make sense to have multiple HttpServer instances in a runtime? If it does, why don't I see an error like "8080 port is already in use"?
Vert.x will actually round-robin between your HttpServer instances listening on the same port:
When several HTTP servers listen on the same port, vert.x orchestrates the request handling using a round-robin strategy...
So, when [a] verticle is instantiated multiple times as with: vertx run io.vertx.examples.http.sharing.HttpServerVerticle -instances 2, what’s happening? If both verticles would bind to the same port, you would receive a socket exception. Fortunately, vert.x is handling this case for you. When you deploy another server on the same host and port as an existing server it doesn’t actually try and create a new server listening on the same host/port. It binds only once to the socket. When receiving a request it calls the server handlers following a round robin strategy...
Consequently the servers can scale over available cores while each Vert.x verticle instance remains strictly single threaded, and you don’t have to do any special tricks like writing load-balancers in order to scale your server on your multi-core machine.
So it is both safe and encouraged to create multiple HttpServer instances if you need to scale across cores.
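
To make the strategy concrete, here is a rough, language-agnostic sketch of the bind-once, round-robin idea in Python (purely illustrative; Vert.x performs this dispatch internally across its event loops):

import itertools
import socket

def make_handler(name):
    # Stand-in for one "HttpServer instance": handles a single
    # accepted connection.
    def handle(conn):
        body = "served by instance " + name
        conn.sendall(
            ("HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
             % (len(body), body)).encode()
        )
        conn.close()
    return handle

# The port is bound exactly once; the "instances" simply take
# turns receiving accepted connections, round-robin.
handlers = itertools.cycle([make_handler("A"), make_handler("B")])

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 8080))
sock.listen(16)
while True:
    conn, _ = sock.accept()
    next(handlers)(conn)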

Marathon how to health check a background worker

I have a standard Rails app with two process types, the web process and the worker process, both running as tasks on Marathon.
Is there a way to define a health check for the worker process? The process does not listen on any port, so what is recommended here?
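
One option is Marathon's command-based health checks, which run a shell command inside the task instead of probing a port. A hedged sketch of an app-definition fragment; the worker command and the pgrep pattern are placeholders to adapt to your app:

{
  "id": "/myapp/worker",
  "cmd": "bundle exec sidekiq",
  "healthChecks": [
    {
      "protocol": "COMMAND",
      "command": { "value": "pgrep -f sidekiq > /dev/null" },
      "gracePeriodSeconds": 60,
      "intervalSeconds": 30,
      "maxConsecutiveFailures": 3
    }
  ]
}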

How to create two Aerospike Clusters on same L2 network

I am using two Aerospike clusters (each with only one node/machine).
Since both machines are on the same LAN, they try to connect to each other and form a single cluster. Because of this I was getting an error while inserting a record:
Error: (11) AEROSPIKE_ERR_CLUSTER
So on my Ubuntu setup (one of the two machines) I blocked port 9918 with:
ufw deny 9918
After adding the block rule, the Aerospike clusters started working (I was able to insert records).
What is a better way to keep two Aerospike machines on the same LAN from communicating with each other?
Just make sure to change the multicast address and/or port in the heartbeat configuration so the two nodes don't send heartbeats to each other:
heartbeat {
    mode multicast      # send heartbeats using multicast
    address 239.1.99.2  # multicast address
    port 9918           # multicast port
    interval 150        # number of milliseconds between heartbeats
    timeout 10          # number of heartbeat intervals to wait
                        # before timing out a node
}
Alternatively, you can switch to mode mesh and list only the node itself in mesh-seed-address-port:
heartbeat {
    mode mesh                                  # send heartbeats using the mesh (unicast) protocol
    port 3002                                  # port this node listens on for heartbeats
    mesh-seed-address-port 192.168.1.100 3002  # IP address of the seed node in the cluster
                                               # (this IP happens to be the local node)
    interval 150                               # number of milliseconds between heartbeats
    timeout 10                                 # number of heartbeat intervals to wait
                                               # before timing out a node
}

Migrating Established TCP connection with docker containers

Is it possible to transparently migrate an established TCP connection along with a Docker container from one node to another?
My use case is scaling/re-scheduling a web app that relies on WebSockets, but I believe there would be more use cases for other application protocols and plain TCP.
What I'm looking for is a way to do it completely transparently to client applications. I'm aware it's possible to reconnect upon disconnection, but that is not what I need.
I've been looking at the SockMI agent, but it seems to still be in beta and is missing documentation.
If I understand this correctly, the migration would require the following at a high level:
Trigger the scaling action (when it all needs to start)
Launch a replacement container on the new node
Freeze the container's processes on the original node
Put TCP connections on hold
Transfer the processes and their state to the new node
Migrate the TCP connection
Is it possible to transparently migrate an established TCP connection ... from one node to another?
No.

Does the TCP connection created by requests persist when using Celery workers?

I plan to have Celery workers send notifications to mobile devices (e.g. through GCM) and wish to avoid opening needlessly many TCP connections.
I have a celery task defined like this:
@task()
def send_push_notification():
    requests.post(...)
Suppose this task is executed by a worker on a single machine. Will each subprocess open only one TCP connection? Or will a new connection be created each time the task is executed?
If I want to reuse the same TCP connection for each subprocess, how would I go about doing that?
Thanks!
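
A bare requests.post() call sets up and tears down its own connection on every invocation, so the task above opens a new TCP connection each time it runs. requests.Session, by contrast, keeps a pool of keep-alive connections, and since each Celery worker subprocess imports the task module once, a module-level Session gives one persistent pool per subprocess. A minimal sketch under those assumptions; the GCM URL and payload shape are placeholders:

import requests
from celery import shared_task

# Created once per worker subprocess at module import, then reused
# across task invocations, so keep-alive connections in its pool
# persist between tasks.
session = requests.Session()

@shared_task
def send_push_notification(device_token, payload):
    # Placeholder endpoint and payload shape, for illustration only.
    session.post(
        "https://gcm-http.googleapis.com/gcm/send",
        json={"to": device_token, "data": payload},
        timeout=10,
    )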