I am creating a Vert.x verticle in JS (Node.js) using the es4x or vertx3-full package. I am using npm because of an npm-specific package. Now I have another verticle written in Java. I want these verticles to communicate with each other. How can I do this?
From the Vert.x documentation:
An application would typically be composed of many verticle instances
running in the same Vert.x instance at the same time. The different
verticle instances communicate with each other by sending messages on
the event bus.
So basically you can
send a message from one verticle using eventBus.send(ADDRESS, message, sendMessageHandler), and
consume it from the other verticle using eventBus.consumer(ADDRESS, receivedMessageHandler).
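For example, a minimal sketch in Java (Vert.x 3 API; the address my.address is just a placeholder both sides agree on):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class EventBusExample {

  // Registers a handler on the shared address and replies to each message.
  static class Receiver extends AbstractVerticle {
    @Override
    public void start() {
      vertx.eventBus().consumer("my.address", msg -> {
        System.out.println("Received: " + msg.body());
        msg.reply("pong");
      });
    }
  }

  // Sends a message to the shared address and logs the reply.
  static class Sender extends AbstractVerticle {
    @Override
    public void start() {
      vertx.eventBus().send("my.address", "ping", reply -> {
        if (reply.succeeded()) {
          System.out.println("Reply: " + reply.result().body());
        }
      });
    }
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Deploy the receiver first so the consumer is registered before we send.
    vertx.deployVerticle(new Receiver(), id -> vertx.deployVerticle(new Sender()));
  }
}
```

An es4x/JavaScript verticle would register its consumer on the same address -- es4x mirrors this event bus API, so the call looks essentially the same in JS.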
There is a fully runnable open source example here
There is also a publish/subscribe messaging pattern that you can find in the documentation.
I'm pretty new to Kafka Streams. Right now I'm trying to understand the basic principles of this system.
This is a quote from the following article: https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
You just use the library in your app, and start as many instances of the app as you like, and Kafka will partition up and balance the work over these instances.
Right now it is not clear to me how this works. Where is the business logic (the computation tasks) of a Kafka Streams application executed? Is it executed inside my application, or is the application just a client that prepares tasks to be executed on the Kafka cluster? If it runs inside my application, how do I properly scale the computing power of my Kafka Streams application? Is it possible to execute it inside YARN or something similar? In that case, is it a good idea to implement the Kafka Streams application as an embedded component of the core application (say, a web application in my case), or should it be implemented as a separate service and deployed to YARN/Mesos (if that is possible) separately from the main web application? Also, how do I prepare a Kafka Streams application to be ready for deployment with YARN/Mesos application management frameworks?
Your stream processing code runs inside your application -- it does not run in the Kafka cluster.
You can deploy it any way you like: YARN, Mesos, Kubernetes, a WAR, Chef, whatever. The idea is to embed it directly into your application to avoid setting up a separate processing cluster.
You don't need to prepare Kafka Streams for a particular deployment method -- it's completely agnostic to how it gets deployed. For YARN/Mesos you would deploy it like any other Java application within the framework.
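To make that concrete, here is a minimal sketch (assuming a recent Kafka Streams version; the application id, broker address, and topic names are placeholders). The mapValues lambda -- the business logic -- executes inside this JVM, not on the brokers:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsApp {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Instances sharing this application.id divide the input partitions among themselves.
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, String> input = builder.stream("input-topic");
    // This lambda is the business logic -- it runs here, in your own JVM.
    input.mapValues(value -> value.toUpperCase())
         .to("output-topic");

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}
```

To scale out, you just start more copies of this process (under YARN, Mesos, or plain VMs); Kafka rebalances the input partitions across all running instances.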
I am seeing the statement below in the Vert.x documentation, and I want to understand whether a verticle is automatically undeployed when processing completes, or whether we need to explicitly call the undeploy method.
Automatic clean-up in verticles
If you’re creating consumers and producers from inside verticles, those consumers and producers will be automatically closed when the verticle is undeployed.
Thanks.
This statement indicates that when a verticle is undeployed, Vert.x resources like event bus consumers, event bus producers or HTTP servers will be closed automatically.
As for when verticles are undeployed: that happens when you undeploy them manually, or automatically on shutdown if you started them via the Vert.x command line or the Launcher class.
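As an illustration of the manual case, a small sketch (Vert.x 3 Java API; the address is a placeholder):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class UndeployExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new AbstractVerticle() {
      @Override
      public void start() {
        // This consumer is closed automatically when the verticle is undeployed.
        vertx.eventBus().consumer("some.address",
            msg -> System.out.println("got: " + msg.body()));
      }
    }, res -> {
      String deploymentId = res.result();
      // Explicit undeploy: triggers the automatic clean-up the docs describe.
      vertx.undeploy(deploymentId);
    });
  }
}
```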
I am learning Kafka and Vert.x and I came across the following statements:
1. The Kafka module allows receiving events published by other Vert.x verticles and sending those events to a Kafka broker.
2. The application sends messages to the Kafka module using the Vert.x event bus.
3. The Kafka module acts as a producer.
If anyone could let me know how these are programmed, that would be very helpful. Thanks.
I found the source code here, but I am looking for a simpler example: https://github.com/zanox/mod-kafka
That Kafka module only acts as a producer - not as a consumer. So its intent is to publish messages externally (outbound) from the system. The example on the referenced GitHub link is very easy to follow if you work through the following Kafka quick start at the same time: http://kafka.apache.org/081/documentation.html#quickstart
You don't have to use modules; you can use the Kafka Java client within your verticles. Modules are intended as a mechanism for re-use and for providing common functionality.
In Vert.x 3.0 (the next release), the module system is removed anyway.
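As a sketch of that direct approach -- a verticle that receives events on the Vert.x event bus and forwards them to Kafka with the plain Kafka Java producer (the broker address, event bus address, and topic name are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import io.vertx.core.AbstractVerticle;

public class KafkaBridgeVerticle extends AbstractVerticle {

  private KafkaProducer<String, String> producer;

  @Override
  public void start() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    producer = new KafkaProducer<>(props);

    // Statement 2: the application sends messages to this verticle over the event bus.
    // Statement 3: the verticle forwards them to Kafka, acting as the producer.
    // Note: send() is asynchronous, but the first call can block while fetching
    // metadata; in production you might wrap it in executeBlocking.
    vertx.eventBus().consumer("kafka.outbound", msg ->
        producer.send(new ProducerRecord<>("my-topic", msg.body().toString())));
  }

  @Override
  public void stop() {
    producer.close();
  }
}
```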
How do I configure ActiveMQ so that a JMS message published to a topic is passed on to two JMS queues?
Is this possible in ActiveMQ?
Or is it better to use a simple topic with two subscribers, each picking up its own copy of the message?
Instead of trying to configure the broker to do this, you are better off using Apache Camel to create the routing behaviour you are looking for. Camel routes can be embedded in your ActiveMQ instance.
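As a rough sketch in Camel's Java DSL (the topic and queue names are placeholders, and an "activemq" component is assumed to be configured in the Camel context, as it is when Camel is embedded in the broker via conf/camel.xml):

```java
import org.apache.camel.builder.RouteBuilder;

// Copies every message arriving on the topic into two queues.
public class TopicToQueuesRoute extends RouteBuilder {
  @Override
  public void configure() {
    from("activemq:topic:orders.incoming")       // placeholder topic name
        .multicast()                             // send a copy to each endpoint rather than pipelining
        .to("activemq:queue:orders.billing",     // placeholder queue names
            "activemq:queue:orders.shipping");
  }
}
```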
I'd like to host apps that use a queue to communicate with each other on OpenShift.
One kind of app - the producers - will put some data on the queue, and the other kind - the consumers - will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ, and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it works if there are more instances of the consumer on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use WildFly. You can just use the WildFly cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide some extra context, I can try to enrich the answer. I need to know what your problem is and what's not working.
If you just want to know how to configure WildFly with HornetQ, the WildFly cartridge I posted is the way to go.