Is Micronaut HttpClient safe to use concurrently?

Is additional synchronization needed to work with the same HTTP client object in multiple threads?

No additional synchronization is needed to use the client across multiple threads.
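For illustration, a minimal sketch of a single injected HttpClient reused by a singleton service whose method may be called from many threads at once; the target URL, path, and class names are placeholders, not from the original question:

```java
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import jakarta.inject.Inject;   // javax.inject on older Micronaut versions
import jakarta.inject.Singleton;

// One HttpClient instance, shared by every caller of this singleton service.
@Singleton
public class StatusService {

    @Inject
    @Client("https://example.org")   // hypothetical downstream service
    HttpClient client;

    public String fetchStatus() {
        // Safe to call from multiple threads; no external synchronization needed.
        return client.toBlocking().retrieve("/status");
    }
}
```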

Related

Vertx WebClient shared vs single across multiple verticles?

I am using Vert.x as an API gateway to route calls to downstream services.
As of now, I am using a single web client instance which is shared across multiple verticles (injected through Guice).
Does it make sense for each verticle to have its own WebClient? Will it help in boosting performance? (Each of my gateway instances runs 64 verticles and handles approximately 1000 requests per second.)
What are the pros and cons of each approach?
Can someone help figure out the ideal strategy here?
Thanks
Vert.x is optimized for using a single WebClient per-Verticle. Sharing a single WebClient instance between threads might work, but it can negatively affect performance, and could lead to some code running on the "wrong" event-loop thread, as described by Julien Viet, the lead developer of Vert.x:
So if you share a web client between verticles, then your verticle might reuse a connection previously open (because of pooling) and you will get callbacks on the event loop you won't expect. In addition there is synchronization in the web client that might become contended when used intensively from different threads.
Additionally, the Vert.x documentation for HttpClient, which is the underlying object used by WebClient, explicitly states not to share it between Vert.x Contexts (each Verticle gets its own Context):
The HttpClient can be used in a Verticle or embedded.
When used in a Verticle, the Verticle should use its own client instance.
More generally a client should not be shared between different Vert.x contexts as it can lead to unexpected behavior.
For example a keep-alive connection will call the client handlers on the context of the request that opened the connection, subsequent requests will use the same context.
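To illustrate the recommended pattern, here is a minimal sketch where each verticle creates its own WebClient in start(), so callbacks stay on that verticle's event loop; the class name, downstream host, port, and path are made-up examples:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.client.WebClient;

public class GatewayVerticle extends AbstractVerticle {

    private WebClient client;

    @Override
    public void start() {
        // One WebClient per verticle instance, instead of a shared injected one.
        client = WebClient.create(vertx);
    }

    private void callDownstream() {
        // Placeholder downstream coordinates.
        client.get(8081, "downstream.local", "/api/items")
              .send(ar -> {
                  if (ar.succeeded()) {
                      System.out.println(ar.result().bodyAsString());
                  } else {
                      ar.cause().printStackTrace();
                  }
              });
    }
}
```

With 64 verticles this means 64 WebClient instances (and 64 smaller connection pools), which trades a little memory for avoiding the cross-event-loop callbacks and lock contention described above.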

Vertx multiple servers in one verticle

I noticed that we can use Vert.x to write multiple Verticles that communicate using the EventBus. Is this approach different from writing several servers in just one Verticle?
You can create different servers in the same verticle, but all user requests will be handled by the same event loop.
This might work just fine for your use case. However, it's usually best for clarity/performance to separate concerns.
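A minimal sketch of that setup, with arbitrary example ports: both servers are started from the same verticle and therefore share its single event loop.

```java
import io.vertx.core.AbstractVerticle;

public class TwoServersVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Both servers are created from the same verticle, so every request
        // to either port is handled on this verticle's one event loop.
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("public API"))
             .listen(8080);

        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("admin API"))
             .listen(9090);
    }
}
```

Deploying each server in its own verticle (or deploying multiple instances of a verticle) spreads the work over several event loops instead.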

Do Apache Ignite REST APIs provide implicit locks?

I want to perform get and put operations on an Ignite cache using the Ignite REST API. In my application, multiple systems will be performing these operations simultaneously.
https://apacheignite.readme.io/docs/rest-api
Yes, cache writes and reads are safe to execute simultaneously from multiple threads or clients.
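For illustration, a minimal sketch of issuing a put and a get concurrently against the Ignite REST endpoint using the JDK 11 HTTP client; the host, port, and cache name are assumptions for the example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class IgniteRestDemo {

    // Assumed Ignite node address and REST endpoint; adjust to your deployment.
    private static final String BASE = "http://localhost:8080/ignite";

    public static void main(String[] args) {
        HttpClient http = HttpClient.newHttpClient();

        // Both requests are in flight at the same time; the cache handles the
        // concurrent access, no explicit locking is required on the client side.
        CompletableFuture<HttpResponse<String>> put = http.sendAsync(
                HttpRequest.newBuilder(URI.create(
                        BASE + "?cmd=put&cacheName=myCache&key=k1&val=v1")).build(),
                HttpResponse.BodyHandlers.ofString());

        CompletableFuture<HttpResponse<String>> get = http.sendAsync(
                HttpRequest.newBuilder(URI.create(
                        BASE + "?cmd=get&cacheName=myCache&key=k1")).build(),
                HttpResponse.BodyHandlers.ofString());

        System.out.println(put.join().body());
        System.out.println(get.join().body());
    }
}
```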

How useful will JDBC be in an event-driven program?

I'm writing an event-driven architecture in Scala and I need to manage a database using it.
I was wondering if using JDBC, which only supports synchronous calls, would be a good solution to my problem?
I'd thought of writing an asynchronous wrapper for the calls to JDBC, but will it really tackle my concerns of the thread being blocked because of the database call?
This is a really good question, and actually there's no single good answer to it.
It really depends on your database, its protocol and driver implementation. First of all, some databases, e.g. Cassandra, have asynchronous capabilities built into the protocol level. That should make it easier to work in the event-driven model, right? Not exactly - if you get gigabytes of data over a slow connection you may still block at the network level.
Other databases have only a synchronous protocol and thus can block your resources, right? Not exactly - there are connection pools that prevent some of the issues with blocking.
So, depending on your application architecture and the data you're accessing, you may need to isolate a data-access layer that wraps the JDBC connections and provides asynchronous capabilities. This layer would scale up and down depending on the availability of open connections (e.g. actors holding the connections, and a supervisor that spawns new DB connection actors if there are no free connections, creating new threads using a PinnedDispatcher).
In other cases, with specific drivers, you may stick with just JDBC wrapped into a Future and hope that the driver does its magic for you.
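To make the "JDBC wrapped into a Future" idea concrete, here is a minimal sketch, shown in Java for brevity (the same pattern applies to Scala Futures with a dedicated ExecutionContext); the JDBC URL, query, and pool size are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncJdbc {

    // Dedicated pool for blocking DB work, sized roughly to the number of
    // pooled connections, so the event-driven part of the app never blocks.
    private static final ExecutorService jdbcPool = Executors.newFixedThreadPool(16);

    public static CompletableFuture<String> findName(long id) {
        return CompletableFuture.supplyAsync(() -> {
            // The blocking JDBC call runs on jdbcPool, not on the caller's thread.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, jdbcPool);
    }
}
```

The call itself still blocks a thread, but only one from the dedicated pool; the wrapper keeps that blocking away from your event-processing threads, which is exactly the concern raised in the question.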
If you build a large-scale application, you may even want to separate the persistence access logic completely behind, say, RabbitMQ, and use RPC for accessing the database.
As far as I know, JDBC drivers are synchronous. Maybe you can design your system so your "main" actors asynchronously dispatch requests to background "JDBC" actors that deal with the JDBC driver?
I am also trying to implement a fully async architecture, and DB calls are my bottleneck. I found that a good idea is to use a connection-pooled JDBC driver such as C3P0. Here is a usage example in the Scala Slick framework: link. Connection pooling is one of the solutions where you have ready-to-use connections to your DB. It's better than spawning a new connection for each request and removing that connection after completion. Here is the full website of the C3P0 project: link

Scala/Play/Akka: remote application to application communication

If I have two Scala/Play applications on different servers, what would be the best way for them to communicate for sending small bits of data both ways?
RESTful approach
Akka remote actors
Something else?
I was initially thinking about Akka remote actors, but there's one question that I can't find an answer for: how is authorisation between the two applications handled in such a case?
The “small bits of data” part would fit Akka remoting quite well, but as you note there is nothing at the transport level which could be used to perform authentication or authorization: Akka systems trust each other implicitly (the background is that remoting has been developed with clusters in mind). You can of course include the necessary security tokens (hashes, signatures, etc.) in your messages and perform the checking yourself in the receiving actors, and you can also limit which actor paths can be looked up from outside of the system, see the 2.3.0 documentation.
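As a rough sketch of the token-in-the-message idea, shown with Akka's classic Java actor API (the message and class names are made up, and a real system would use a proper signature or HMAC rather than a plain shared string; the Scala version is analogous):

```java
import akka.actor.AbstractActor;
import java.io.Serializable;

public class SecureReceiver extends AbstractActor {

    // Shared secret agreed upon out of band by the two applications.
    private final String expectedToken;

    public SecureReceiver(String expectedToken) {
        this.expectedToken = expectedToken;
    }

    // Hypothetical message type carrying a payload plus the security token.
    public static final class SignedMessage implements Serializable {
        public final String token;
        public final String payload;
        public SignedMessage(String token, String payload) {
            this.token = token;
            this.payload = payload;
        }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(SignedMessage.class, msg -> {
                if (expectedToken.equals(msg.token)) {
                    // Authorized: process the payload and reply.
                    getSender().tell("ack:" + msg.payload, getSelf());
                } else {
                    // Unauthorized: drop the message (or log/escalate).
                    unhandled(msg);
                }
            })
            .build();
    }
}
```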
If on the other hand you have an established infrastructure for authentication and authorization on the HTTP layer, then you might be better off using RESTful APIs with that instead.