When verticles get assigned to event loop threads, what is the work of the event loop after that, internally? - vert.x

When we create an HTTP server inside a verticle, does that mean the event loop thread creates this server? If not, what is the work of the event loop, and who creates this HTTP server?
When we create multiple instances of a verticle, what happens to the event loop?

Q1. When we create an HTTP server inside a verticle, does that mean the event loop thread creates this server?
A: No
Q2. If not, what is the work of the event loop?
In a standard reactor implementation there is a single event loop
thread which runs around in a loop delivering all events to all
handlers as they arrive. Vert.x works differently here. Instead of a
single event loop, each Vertx instance maintains several event loops.
By default we choose the number based on the number of available cores
on the machine, but this can be overridden.
The event loop thread is used to dispatch events to handlers.
Q3. Who creates this HTTP server?
A: Creating an HTTP server is done from the Vertx instance. The method used is createHttpServer()
HttpServer server = vertx.createHttpServer();
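As a rough sketch of how that looks inside a verticle (the class name and port below are made up for illustration), assuming Vert.x's Java API: the server is created from the Vertx instance in start(), and the event loop thread then only dispatches each incoming request to the handler.
import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // created from the Vertx instance; the event loop only dispatches events
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("hello"))  // runs on this verticle's event loop
             .listen(8080);                                       // example port
    }
}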
Q4. When we create multiple instances of a verticle, what happens to the event loop?
The function of the event loop threads remains the same. With more verticle instances, each instance is assigned to an event loop, so there are simply more event loop threads doing work.
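For example (the verticle name here is hypothetical), deploying several instances just spreads the instances over the event loop threads:
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Deployer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy 4 instances of the same verticle; Vert.x assigns each instance
        // to one of its event loop threads, so more of those threads do work.
        vertx.deployVerticle("com.example.MyVerticle",
            new DeploymentOptions().setInstances(4));
    }
}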

The event loop is an implementation of the Reactor design pattern.
Its goal is to continuously check for new events and, each time a new event comes in, to quickly dispatch it to someone who knows how to handle it.
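As a rough, non-Vert.x sketch of that idea in Java (just the pattern, not real Vert.x code): a reactor boils down to one thread draining a queue of events and handing each one to its handler.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyEventLoop {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

    // called from any thread to hand an event (already bound to its handler) to the loop
    public void post(Runnable event) {
        events.add(event);
    }

    // the single loop thread: take the next event and dispatch it; handlers must never block
    public void run() throws InterruptedException {
        while (true) {
            events.take().run();
        }
    }
}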
Check this article for more details:
https://alexey-soshin.medium.com/understanding-vert-x-event-loop-46373115fb3e

Related

LabVIEW: Correct way to control loop execution from an event structure

The program used has a few control buttons (start, pause, stop, close) that are supposed to control the execution of a subVI test program. This is mainly a loop doing a certain task.
The question is: what is the proper approach to control the program flow (while loop) from the event structure? E.g. a start button press is supposed to start the flow - can I just wire this to the while loop? Wouldn't that mean polling the value? Or would it be passed only after the event is fired?
So far I have one loop for the events (multiple button value changes) and another loop for the test program.
I assume something like the Queued Message Handler can be used for that? And how can I make the current loop run stop?
You need to use the event-driven producer-consumer pattern.
Long story short: the producer loop handles user events (like a button press) and sends commands to the consumer loop. Usually, communication between the loops is done via a queue. The queue carries a cluster consisting of a state (either a string or an enum data type, see LabVIEW queued state machine: enum or string?) and data (a variant data type).
The consumer loop dequeues messages from the queue and, based on the state, runs the corresponding logic. The consumer loop has a case structure whose selector is the state value from the queue.
There are a lot of different guides on the internet. A great overview is the presentation Introduction to LabVIEW Design Patterns; you could also check Event-driven producer-consumer state machine, or even the video LabVIEW code: Event-driven producer-consumer state machine (walk-through), etc.

What is the most efficient way to implement a complicated reactive program in Vert.x? Should I depend on the event bus?

I want to implement a complicated reactive program in Vert.x, which contains multiple blocking operation steps. There seem to be several ways to implement it AFAIK (there may be other ways as well). What is the most efficient way in terms of throughput and response time on a multi-core computer?
1. Separate each operation step into a different verticle, and use the event bus to communicate between these verticles.
2. Keep all operations in one verticle, and chain them with Future composition.
3. Keep all operations in one verticle, and chain them with RxJava 2.
According to the Vert.x core documentation, "There is a single event bus instance for every Vert.x instance and it is obtained using the method eventBus". The 1st way therefore seems less efficient than the others, because data transmission between verticles goes through the single event bus, while with the others, multiple instances of the verticle can be created so that more cores are used as event loop threads. Do I understand correctly?
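For reference, a minimal sketch of the 2nd way (all steps in one verticle, chained with Future composition), assuming Vert.x 4 futures; step1/step2/step3 are hypothetical blocking calls, offloaded with executeBlocking so they never block the event loop:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;

public class PipelineVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Future<String> result =
            vertx.<String>executeBlocking(promise -> promise.complete(step1()))
                 .compose(a -> vertx.<String>executeBlocking(promise -> promise.complete(step2(a))))
                 .compose(b -> vertx.<String>executeBlocking(promise -> promise.complete(step3(b))));

        result.onSuccess(r -> System.out.println("done: " + r))
              .onFailure(Throwable::printStackTrace);
    }

    // stand-ins for the real blocking operation steps
    private String step1()         { return "a"; }
    private String step2(String a) { return a + "b"; }
    private String step3(String b) { return b + "c"; }
}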

callback mechanism function with ChannelAwareMessageListener

I have a listener listening to a queue with some prefetch count, let's say 10. It passes these 10 elements to some processor.
The processor may process a task, may not, or may delay it. I want to dequeue it ( channel.basicAck(message.getMessageProperties().getDeliveryTag(), false); ) from the queue only after I receive that information.
What could be the best way to achieve that? One idea that came to me is to create another queue and push processed messages, with their delivery tag and a channel reference, into it from the processor. I would then listen to this new queue and ack based on that.
Channels are not thread-safe - see the rabbitmq documentation.
Channel instances must not be shared between threads. Applications should prefer using a Channel per thread instead of sharing the same Channel across multiple threads. While some operations on channels are safe to invoke concurrently, some are not and will result in incorrect frame interleaving on the wire. Sharing channels between threads will also interfere with Publisher Confirms.
You should only use the channel exposed by a ChannelAwareMessageListener on the listener thread itself.
If you are trying to achieve concurrency, it's generally better to use the container's concurrentConsumers property rather than handing messages off to other threads.
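A minimal sketch of that advice, assuming Spring AMQP (the queue name, host, and process() step are hypothetical): manual acks, a prefetch of 10, and concurrency handled by the container so that each channel is only ever touched by its own listener thread.
import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.api.ChannelAwareMessageListener;

public class AckOnListenerThread {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("work.queue");
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        container.setPrefetchCount(10);
        container.setConcurrentConsumers(10);   // concurrency via the container, not by handing off to other threads

        container.setMessageListener(new ChannelAwareMessageListener() {
            @Override
            public void onMessage(Message message, Channel channel) throws Exception {
                // do the processing here, on the listener thread, so the channel
                // is only used by the thread that owns it
                boolean ok = process(message);
                long tag = message.getMessageProperties().getDeliveryTag();
                if (ok) {
                    channel.basicAck(tag, false);
                } else {
                    channel.basicNack(tag, false, true);  // requeue for a later retry
                }
            }
        });

        container.start();
    }

    private static boolean process(Message message) {
        return true;  // placeholder for the real processor
    }
}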

Rx - several producers/one consumer

Have been trying to google this but getting a bit stuck.
Let's say we have a class that fires an event, and that event could be fired by several threads at the same time.
Using Observable.FromEventPattern, we create an Observable and subscribe to that event. How exactly does Rx manage those events being fired at once? Let's say we have 3 events fired in quick succession on different threads. Does it queue them internally, and then call the Subscribe delegate synchronously for each one? Let's say we were subscribing on a thread pool; can we still guarantee the subscriptions would be processed separately in time?
Following on from that, let's say for each event we want to perform an action, but it's a method that's potentially not thread-safe, so we only want one thread to be in this method at a time. Now I see we can use an EventLoopScheduler, and presumably we wouldn't need to implement any locking in the code?
Also, would observing on the current thread be an option? Is the current thread the thread the event was fired from, or the thread the subscription was set up on? I.e. is that current thread guaranteed to always be the same, or could we have 2 threads ending up in the method at the same time?
Thx
PS: I put an example together but I always seem to end up on the same thread in my Subscribe method, even when I ObserveOn the thread pool, which is confusing :S
PPS: From doing a few more experiments, it seems that if no schedulers are specified, then Rx will just execute on whatever thread the event was fired on, meaning it processes several concurrently. As soon as I introduce a scheduler, it always runs things consecutively, no matter what the type of the scheduler is. Strange :S
According to the Rx Design Guidelines, an observable should never call OnNext of an observer concurrently. It will always wait for the current call to complete before making the next call. All Rx methods honor this convention. And, more importantly, they assume you also honor this convention. When you violate this condition, you may encounter subtle bugs in the behavior of your Observable.
For those times when you have source data that does not honor this convention (ie it can produce data concurrently), they provide Synchronize.
Observable.FromEventPattern assumes you will not be firing concurrent events and so does nothing to prevent concurrent downstream notifications. If you plan on firing events from multiple threads, sometimes concurrently, then use Synchronize() as the first operation you do after FromEventPattern:
// this will get you in trouble if your event source might fire events concurrently.
var events = Observable.FromEventPattern(...).Select(...).GroupBy(...);
// this version will protect you in that case.
var events = Observable.FromEventPattern(...).Synchronize().Select(...).GroupBy(...);
Now all of the downstream operators (and eventually your observer) are protected from concurrent notifications, as promised by the Rx Design Guidelines. Synchronize works by using a simple mutex (aka the lock statement). There is no fancy queueing or anything. If one thread attempts to raise an event while another thread is already raising it, the 2nd thread will block until the first thread finishes.
In addition to the recommendation to use Synchronize, it's probably worth having a read of the Intro to Rx section on scheduling and threading. It covers the different schedulers and their relationship to threads, as well as the differences between ObserveOn and SubscribeOn, etc.
If you have several producers, then there are Rx methods for combining them in a thread-safe way.
For combining streams of the same type of event into a single stream:
Observable.Merge
For combining streams of different types of events into a single stream, using a selector to transform the latest value on each stream into a new value:
Observable.CombineLatest
For example combining stock prices from different sources
IObservable<StockPrice> source0;
IObservable<StockPrice> source1;
IObservable<StockPrice> combinedSources = source0.Merge(source1);
or create balloons at the current position every time there is a click
IObservable<ClickEvent> clicks;
IObservable<Position> positions;
IObservable<Balloon> balloons = clicks
    .CombineLatest
    ( positions
    , (click, position) => new Balloon(position.X, position.Y)
    );
To make this specifically relevant to your question: you say there is a class which combines events from different threads. I would then use Observable.Merge to combine the individual event sources and expose that as an Observable on your main class.
BTW, if your threads are actually tasks that are firing events to say they have completed, here is an interesting pattern:
IObservable<Job> jobSource;
IObservable<IObservable<JobResult>> resultTasks = jobSource
.Select(job => Observable.FromAsync(cancellationToken => DoJob(cancellationToken, job)));
IObservable<JobResult> results = resultTasks.Merge();
What is happening is that you are getting a stream of jobs in. From the jobs you create a stream of asynchronous tasks (not running yet). Merge then runs the tasks and collects the results. It is an example of a map-reduce style algorithm. The cancellation token can be used to cancel running async tasks if the observable is unsubscribed from (i.e. cancelled).

Implementing timer using select

I am planning to write a small timer library in C using timerfd_create.
The basic user of this library will have two threads:
an application thread
a timer thread
There will be a queue between these two threads, so that whenever the application wants to start a timer, it pushes a message into the queue; the timer thread then reads the message, creates an FD for the timer, and puts it in select.
The problem with the above approach is that the timer thread, being a single thread, would be blocked in the select system call and would not know that a message has been posted to its receive queue to start a timer.
One way around this is to let the select time out every "tick" and then check for messages in the queue. Is there a better way to do this?
I was also thinking of raising an interrupt every time the application puts a message in the queue, to interrupt the select. Does that work well with multi-threaded applications?
Platform : Unix
If you insist on having multiple threads post timers to a dedicated timer thread sitting in select(2), then why not use eventfd(2), or just the good old self-pipe trick, to signal that new timers are available. Include the event file descriptor in the pollable set and wait on all of them.
Which platform(s) are you wanting to target? Under Windows, for instance, there are much better ways to handle this without using select(), such as PostThreadMessage() and WaitMessage().
If you are using timerfds, then there is no need for a dedicated timer thread; just write the application around an event loop using select, poll, epoll, etc.