I am seeing the statement below in the Vert.x documentation, and I want to understand: is a verticle automatically undeployed when processing completes, or do we need to explicitly call the undeploy method?
Automatic clean-up in verticles
If you’re creating consumers and producers from inside verticles, those consumers and producers will be automatically closed when the verticle is undeployed.
Thanks.
This statement indicates that when a verticle is undeployed, Vert.x resources like event bus consumers, event bus producers or HTTP servers will be closed automatically.
As for when verticles are undeployed: that happens either when you undeploy them manually, or automatically on shutdown if you started them via the Vert.x command line or the Launcher class.
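As a minimal sketch (assuming the Vert.x 3 core API; the verticle and address names are illustrative), an explicit undeploy looks like this, and the consumer registered in start() is closed for you:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class UndeployExample {

  // Hypothetical verticle that registers an event bus consumer in start().
  public static class MyVerticle extends AbstractVerticle {
    @Override
    public void start() {
      // This consumer is tied to the verticle's lifecycle: Vert.x
      // unregisters it automatically when the verticle is undeployed.
      vertx.eventBus().consumer("my.address", msg -> msg.reply("ok"));
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new MyVerticle(), ar -> {
      String deploymentId = ar.result();
      // Explicit undeploy; the consumer registered above is closed for us.
      vertx.undeploy(deploymentId);
    });
  }
}
```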
We are pointing our Kafka consumer at a farm of partitions for load balancing.
When using a REST controller endpoint to pause the Kafka consumer, the service only pauses a few partitions, not all of them. We want all the partitions to be paused, but are unable to get them all even with repeated calls. How would you suggest we accomplish this? Hazelcast?
Thanks
consumer.pause() only pauses the instance on which it is called, not the entire consumer group.
A load balancer wouldn't let you target all of the REST endpoints that wrap the consumers, since requests are routed randomly between instances, so yes, you'd need some sort of external shared state. ZooKeeper would be a better option unless you already have a Hazelcast cluster. Apache Curator is a high-level ZooKeeper client that can be used for this: for example, a shared counter could be set to 0 for the paused state and non-zero for running.
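A sketch of that approach, assuming Curator and the Kafka client are on the classpath (the ZooKeeper address, topic, group, and znode path are all illustrative): every instance watches the same shared counter, and each pauses all of its own assigned partitions, so together the whole group ends up paused:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.shared.SharedCount;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseAllInstances {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zookeeper:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();

    // Shared counter: 0 = paused, non-zero = running (seeded as running).
    SharedCount pauseFlag = new SharedCount(zk, "/my-app/pause-flag", 1);
    pauseFlag.start();

    Properties props = new Properties();
    props.put("bootstrap.servers", "kafka:9092");
    props.put("group.id", "my-group");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("my-topic"));

    while (true) {
      if (pauseFlag.getCount() == 0) {
        consumer.pause(consumer.assignment());  // pause every partition assigned to THIS instance
      } else {
        consumer.resume(consumer.paused());     // resume anything we previously paused
      }
      consumer.poll(Duration.ofMillis(500));    // keep polling so group membership stays alive
    }
  }
}
```

The REST endpoint then only has to flip the shared counter (e.g. pauseFlag.setCount(0)) on whichever instance receives the request, and every instance reacts on its next poll loop.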
I am creating a Vert.x verticle in JS (Node.js) using the es4x or vertx3-full package. I am using npm because of an npm-specific package. Now I have another verticle written in Java. I want these verticles to communicate. How do I do this?
From the Vert.x documentation:
An application would typically be composed of many verticle instances
running in the same Vert.x instance at the same time. The different
verticle instances communicate with each other by sending messages on
the event bus.
So basically you can:
send a message from one verticle using eventBus.send(ADDRESS, message, sendMessageHandler), and
consume it from the other verticle using eventBus.consumer(ADDRESS, receivedMessageHandler).
There is a fully runnable open source example here
There is also a publish/subscribe messaging pattern that you can find in the documentation.
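As a minimal Java-side sketch (assuming the Vert.x 3 API; the address name is illustrative, and a JS/es4x verticle deployed in the same Vert.x instance can send or consume on the same address):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class EventBusExample {

  // Replies to anything sent on the shared address.
  static class Receiver extends AbstractVerticle {
    @Override
    public void start() {
      vertx.eventBus().consumer("app.greetings", msg -> msg.reply("hi from Java"));
    }
  }

  // Sends a message and prints the reply.
  static class Sender extends AbstractVerticle {
    @Override
    public void start() {
      vertx.eventBus().send("app.greetings", "hello",
          reply -> System.out.println(reply.result().body()));
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Deploy the consumer first so the sender's message has a destination.
    vertx.deployVerticle(new Receiver(), ar -> vertx.deployVerticle(new Sender()));
  }
}
```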
In a Kafka Connect SinkTask implementation, if there is an unavoidable exception and I invoke the stop() method from my code, does the connector stop altogether?
Only the task that encountered the exception will stop.
The Connect cluster can have multiple connectors, which won't stop, and a single connector can have other tasks, depending on your configuration, that might be assigned different, clean data and could continue processing.
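A sketch of the usual failure handling in a SinkTask (class, helper, and exception names are illustrative): rather than calling stop() yourself, throw and let the framework decide; stop() is a lifecycle callback the framework invokes, and an unhandled exception only moves this one task to the FAILED state:

```java
import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class ExampleSinkTask extends SinkTask {
  @Override public String version() { return "0.1"; }
  @Override public void start(Map<String, String> props) { /* open resources */ }

  @Override
  public void put(Collection<SinkRecord> records) {
    try {
      writeToSink(records);                    // hypothetical helper
    } catch (TransientBackendException e) {    // hypothetical exception type
      throw new RetriableException(e);         // framework retries this batch
    } catch (Exception e) {
      throw new ConnectException(e);           // this task goes to FAILED; others keep running
    }
  }

  // Called by the framework when the task shuts down, not by your own code.
  @Override public void stop() { /* close resources */ }

  private void writeToSink(Collection<SinkRecord> records) throws Exception { }
  static class TransientBackendException extends Exception { }
}
```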
I am using JBOSS messaging in the following way:
1) Two JBOSS instances using 'all' config (i.e. clustered config)
2) One replicated queue created on each JBoss instance with the same JNDI name (clustered = true)
3) One producer attached locally to the queue on each instance (i.e. both producers keep adding messages to this replicated queue)
4) One JBoss instance is marked as the "consumer node", and a queue message consumer is started only on this node (i.e. messages are consumed on only one node). There is logic that decides which JBoss instance is marked as the "consumer node"
5) PostOffice used is clustered
6) server peer configured to not enforce message sequencing.
7) produced messages are non-persistent (deliveryMode = NON_PERSISTENT)
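For reference, a hedged sketch of what the clustered queue deployment above might look like in a JBoss Messaging *-destinations-service.xml (the queue name and JNDI name are illustrative; the key attribute is Clustered):

```xml
<mbean code="org.jboss.jms.server.destination.QueueService"
       name="jboss.messaging.destination:service=Queue,name=MyReplicatedQueue"
       xmbean-dd="xmdesc/Queue-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <depends>jboss.messaging:service=PostOffice</depends>
  <attribute name="JNDIName">/queue/MyReplicatedQueue</attribute>
  <!-- Must be true on BOTH nodes for the post office to treat the two
       local queues as one clustered queue -->
  <attribute name="Clustered">true</attribute>
</mbean>
```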
But I am facing a problem with this: messages produced on the "non-consumer node" do not get replicated to the queue on the "consumer node" and hence are not available for consumption.
I enabled logging and checked that the post office finds two queues but only delivers to the local queue, as it discovers that the remote queue is recoverable.
Any idea how to set it working?
FYI: I believe a message can be delivered to only one queue (local or remote). So I want a single distributed queue, but I am currently getting two different distributed queues (albeit with the same JNDI name). Is this the problem? If so, how do I solve it? WebLogic offers the option of creating a queue on the admin server, so a shared queue is possible there. What is the similar mechanism in JBoss Messaging? Or should I approach this as two queues that need to be synchronized? If so, how do I achieve synchronization between them?
Thanks for taking out some time to help me!
Regards
We have a server app that is deployed across two server machines, each running JBoss 4.2.2. We use JBoss messaging with MDBs to communicate between the systems. Currently we need to start the servers in a very specific order so that JBoss can connect properly. If a server starts and doesn't see its resources, it never tries again. This is problematic and time-consuming in testing, when we're bouncing servers constantly. We believe that if we could specify a retry flag, JBoss could reattempt to get the connection.
Is there a flag/config option in JBoss that would make it reattempt to obtain JMS connections after a failure at startup?
I am quite new to JMS technology, so it is entirely possible that I have mixed up some terms here. Since this capability is to be used in-house, experimental or deprecated options are acceptable.
Edit: The problem is that a consumer starts up with no producer available and subsequently fails, never to try again. If both a consumer and a producer are up and the producer dies, the consumer will keep retrying until the producer comes back.
I'm 95% sure that JBoss MDBs do retry connections like that. If your MDBs are not receiving messages as you expect, I think something else is wrong. Do the MDBs depend on any other resources? Perhaps posting your EJB descriptors (META-INF/ejb-jar.xml and META-INF/jboss.xml) would help.
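If it turns out the container is not retrying, one knob worth checking (hedged: this is from memory of JBoss 4.x, so verify it against your own standardjboss.xml) is the reconnect interval in the MDB invoker binding:

```xml
<!-- In standardjboss.xml (or overridden per-bean via jboss.xml), inside the
     "message-driven-bean" invoker-proxy-binding's proxy-factory-config -->
<MDBConfig>
  <!-- How often (in seconds) the container retries the JMS connection
       when the provider is unavailable -->
  <ReconnectIntervalSec>10</ReconnectIntervalSec>
</MDBConfig>
```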