How do I make a Vert.x server serve requests in parallel? If, say, there are 50 users submitting HTTP requests to the Vert.x server, I want all of their requests to be served in parallel.
Asking in the context of the Vert.x 2 manual.
As far as I know, it is the same as in Vert.x 3: HTTP servers handle requests in parallel, unless you block the event loop.
For all 50 user requests to be served in parallel, run your verticle with an increased number of instances, which will, in short, scale your application.
Run 50 instances of the Java verticle:
vertx run MyVerticle.java -instances 50
From the Vert.x manual:
-instances : The number of instances of the verticle to instantiate in the Vert.x server. Each verticle instance is strictly single threaded so to scale your application across available cores you might want to deploy more than one instance.
Analogy: one user request handled per verticle instance at a time.
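If you deploy programmatically rather than from the command line, the Vert.x 3.x API offers the same knob via DeploymentOptions; a minimal sketch, assuming a verticle class named com.example.MyVerticle:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Deployer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy 50 single-threaded instances so requests can be spread
        // across the available event-loop threads.
        vertx.deployVerticle("com.example.MyVerticle",
                new DeploymentOptions().setInstances(50));
    }
}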
Related
I created a verticle named HttpServerVerticle and had it create an HttpServer instance via vertx.createHttpServer(); then, in my main verticle, I deployed this HTTP verticle with more than one instance using vertx.deployVerticle("xxx.xxx.HttpServerVerticle", deploymentOptionsOf(instances = 2)).
Does it make sense to have multiple HttpServer instances in one runtime? If it does, why did I not see an error like "port 8080 is already in use"?
Vert.x will actually round-robin between your HttpServer instances listening on the same port:
When several HTTP servers listen on the same port, vert.x orchestrates the request handling using a round-robin strategy...
So, when [a] verticle is instantiated multiple times as with: vertx run io.vertx.examples.http.sharing.HttpServerVerticle -instances 2, what’s happening? If both verticles would bind to the same port, you would receive a socket exception. Fortunately, vert.x is handling this case for you. When you deploy another server on the same host and port as an existing server it doesn’t actually try and create a new server listening on the same host/port. It binds only once to the socket. When receiving a request it calls the server handlers following a round robin strategy...
Consequently the servers can scale over available cores while each Vert.x verticle instance remains strictly single threaded, and you don’t have to do any special tricks like writing load-balancers in order to scale your server on your multi-core machine.
So it is both safe and encouraged to create multiple HttpServer instances, if required to scale across cores.
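As a rough Java equivalent of the Kotlin deployment above (class name, port and response are illustrative), each instance creates its own HttpServer on the same port; Vert.x binds the socket once and round-robins requests between the instances:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class HttpServerVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // Every instance "listens" on 8080, but Vert.x binds only once and
        // distributes incoming requests between the instances.
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("hello"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(HttpServerVerticle.class.getName(),
                new DeploymentOptions().setInstances(2));
    }
}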
I have a task that takes approximately 3 minutes to run. It pulls data from a remote server and performs CPU-intensive analysis on it. This task will be invoked by an API call. Upon the API call, I am planning to give the client a unique task id and assign the task to a Celery worker. The client will then poll the server with the given task id to see whether the task has been completed by the Celery worker and its result saved to a result backend. I am thinking of using nginx, Gunicorn and Flask, and dockerizing them for easy deployment in case I need to distribute this architecture across multiple machines.
The problem is that the client may poll different servers due to the load balancer, and if this is not handled well, the polled server's Celery result backend might not have the task's result while another server's result backend does.
Is it possible to use a single result backend across multiple Celery instances and have the different Celery instances query the same result backend? What might be other possible ways to solve this, other than using cloud storage like S3?
Would I have this problem only if I have multiple machines, or would it happen even if I have multiple Gunicorn instances on a single machine where nginx acts as a load balancer over them?
Not only is it possible to use a single result backend for all Celery workers, it is the only setup that makes sense! The same goes for the broker in most cases, unless you have a complicated Celery infrastructure with exchanges and complicated routes...
I have a blocking (synchronous) web framework running under uWSGI. The task is to provide a /health endpoint so Kubernetes can ensure that the pod is alive. There is no issue with the endpoint itself; the issue is the sync nature. For example, if I define 8 processes in uWSGI and the web application is processing 8 heavy requests, a call to /health will be queued and, depending on timeouts, Kubernetes may not receive a response within some period of time and decide to kill/restart the pod. Of course I could run another web service on a different port, but that would require code changes and increase the complexity of deployment. Maybe I'm missing something and it's possible to define an exclusive worker in uWSGI to process the /health endpoint in a non-blocking way? Thanks in advance!
WSGI is an inherently synchronous protocol. There has been some work to create a new async-friendly replacement called ASGI, but it's only implemented by the Django Channels project AFAIK. While you can mount a sync-mode WSGI app in an async server (generally using threads), you can't go the other way. So you could, for example, write some twisted.web code that grabs requests to /health and handles them natively via Twisted and hands off everything else to the WSGI container. I don't know of anything off the shelf for this though.
I tried profiling a gRPC Java server, and I mainly see the following thread pools:
grpc-default-executor threads: one created for each incoming request.
grpc-default-worker-ELG threads: presumably to listen for incoming gRPC requests and hand them off to the "grpc-default-executor" threads above.
Overall, is the gRPC Java server Netty-style or Jetty/Tomcat-style? Or can it be configured to run either way?
The gRPC Java server is exposed closer to the Jetty/Tomcat style, except that it is asynchronous. That is, in normal Servlets each request consumes a thread until it is complete. While newer Servlet versions let you detach from the dedicated thread and continue work asynchronously (freeing the thread for other use), that is less common. In gRPC you are free to work in either style. Note that gRPC uses a cachedThreadPool by default to reuse threads; on the server side it's a good idea to replace the default executor with your own, generally fixed-size, pool via ServerBuilder.executor().
Internally gRPC Java uses the Netty-style. That means fully non-blocking. You may use ServerBuilder.directExecutor() to run on the Netty threads. Although in that case you may want to specify the NettyServerBuilder.bossEventLoopGroup(), workerEventLoopGroup(), and for compatibility channelType().
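For illustration, a minimal sketch of swapping in a fixed-size application executor as suggested above; the service implementation, port and pool size are placeholders:

import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;

public class GrpcServerExample {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(8080)
                // Application callbacks run on this fixed-size pool instead of
                // the default unbounded cachedThreadPool.
                .executor(Executors.newFixedThreadPool(16))
                .addService(new MyServiceImpl()) // placeholder for your generated service impl
                .build()
                .start();
        server.awaitTermination();
    }
}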
As far as I know, you can specify directExecutor() when building the gRPC server/client, which ensures all work is done on the I/O thread and so threads are shared. The default is not to do this, for safety reasons, as you need to be very careful about what you do if you are on the I/O thread (for example, you should never block there).
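On the client side, directExecutor() looks roughly like this (host, port and plaintext setting are assumptions for the sketch); remember that callbacks then run directly on the Netty I/O threads and must never block:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class DirectExecutorClient {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8080)
                .directExecutor()  // callbacks run on the I/O threads, no separate pool
                .usePlaintext()
                .build();
        // ... create stubs from the channel and issue calls ...
        channel.shutdown();
    }
}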
Newbie alert.
I'm trying to write a simple module in Vert.x that polls the database (Postgres) every 10 seconds and pushes the results to clients. I'm thinking of confining the blocking code (querying the database via JDBC) to a worker verticle, with the rest of the layers above being completely non-blocking and async.
This module will be packaged as a jar and distributed to different apps (typically webapps), which can subscribe to the event bus via the JavaScript bridge.
My question is: in a clustered environment where I have 5 processes of the webapp running with the Vert.x modules, how can I ensure that only one verticle is querying the database? I don't want all the verticles querying the database and adding load. Or is there a different way to think about solving this problem? I'm using Vert.x version 3.4.1.
There are two ways your verticle can be multiplied:
If you instantiate multiple instances when you deploy your verticle
If you start to cluster your Vert.x instances in different JVMs or on different hosts
You could try to control the number of instances of the verticle which executes the query, i.e. ensure that the verticle exists in only one of your Vert.x instances and is deployed with only one instance.
But this has several drawbacks:
your deployment is not transparent, i.e. your cluster nodes differ in their deployment structure.
if the cluster node where the query verticle is running dies, you have no fallback.
So the best thing is to deploy the verticle on all instances and synchronize them.
I see three possibilities (a sketch of the first one follows this list):
1) Use Hazelcast (the cluster manager of Vert.x) to synchronize: http://vertx.io/docs/apidocs/io/vertx/spi/cluster/hazelcast/HazelcastClusterManager.html#getLockWithTimeout-java.lang.String-long-io.vertx.core.Handler- There are also data structures available which are synchronized over the cluster: http://vertx.io/docs/apidocs/io/vertx/spi/cluster/hazelcast/HazelcastClusterManager.html#getSyncMap-java.lang.String-
2) Use your database as the synchronization point. You could add a simple table which stores the last execution time in millis. The polling modules first check whether it is time to execute the next poll; the module that executes the poll also updates the time. This has to be done in one transaction, with an explicit lock on the time table.
3) Use Redis with the GETSET functionality (https://redis.io/commands/getset). You can store the time in millis in a key and use GETSET to ensure that the update of the time is atomic, so only the polling module that managed to set the key in Redis executes the poll.
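A minimal sketch of the first option, using the clustered SharedData lock that the Hazelcast cluster manager backs; the lock name, timeout and interval are assumptions:

// Deployed on every node; only the instance that wins the cluster-wide
// lock for this tick actually runs the poll.
vertx.setPeriodic(10000, id ->
    vertx.sharedData().getLockWithTimeout("db-poll-lock", 500, res -> {
        if (res.succeeded()) {
            try {
                // run the JDBC query and publish the result on the event bus
            } finally {
                res.result().release();
            }
        }
        // if the lock was not acquired, another node is already polling
    })
);

Note that the lock only prevents simultaneous polls; to guarantee a single poll per interval across the whole cluster, you would combine it with a last-execution timestamp as in the second option.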
I'm offering my naive solution here; I don't know whether it will completely solve your problem, but here is my thought process.
1) For the polling bit: yes, indeed you can have a worker verticle for the blocking calls [or you could go async here too, IMHO, since there is already an async Postgres JDBC client] for the every-10-seconds part. A code snippet like this can help you:
vertx.setPeriodic(10000, id -> {
    // This handler will get called every 10 seconds
    JsonObject jdbcObject = fetchFromJdbc();
    eventBus.publish("INTRESTED_PARTIES", jdbcObject);
});
2) For the listening part, all the other verticles can subscribe to the event bus, listen on that address, and they will get the message whenever something happens; see the sketch below.
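A minimal sketch of such a subscribing verticle (the address string simply mirrors the publisher snippet above):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class PollResultListener extends AbstractVerticle {
    @Override
    public void start() {
        // Subscribe to the address the polling verticle publishes on.
        vertx.eventBus().<JsonObject>consumer("INTRESTED_PARTIES", message -> {
            JsonObject latest = message.body();
            // react to the freshly polled data here
        });
    }
}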
3) This is for the "ensuring" part, so that not all running instances of your jar start polling the database. For this, I think the best way would be to not deploy the verticle inside any jar, but to run the verticle standalone at runtime using the vertx command, like
vertx run DatabasePoller.java -cluster
And if you really want to be fancy, you could throw in Service Discovery for the ensuring part, so that if the verticle's service is already registered, no other deployment triggers a registration.
But I want to give you a thumbs up for considering events for getting that information; it's a much better way of handling inter-system communication.