How can I programmatically create JMS topics and queues with HornetQ?
The management chapter of the user manual covers this: http://docs.jboss.org/hornetq/2.2.14.Final/user-manual/en/html/management.html
It also helps to read the chapter about how core addresses are translated into JMS objects. Once you know the naming convention, you can create the underlying queue directly with ClientSession.createQueue: http://docs.jboss.org/hornetq/2.2.14.Final/api/org/hornetq/api/core/client/ClientSession.html#createQueue(org.hornetq.api.core.SimpleString,%20org.hornetq.api.core.SimpleString,%20boolean)
And you could also use just the core API if you like.
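For illustration, here is a minimal sketch that uses the core API to create a durable queue a JMS client can then use. The connector settings and the queue name are assumptions; the jms.queue. prefix follows the naming convention described in that chapter.

```java
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;

public class CreateJmsQueueSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a local broker over Netty (connector settings are assumptions)
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(
                        "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();

        // A JMS queue named "orders" lives at the core address "jms.queue.orders",
        // so creating a durable core queue with that address and name makes it
        // usable by JMS clients
        session.createQueue("jms.queue.orders", "jms.queue.orders", true);

        session.close();
        factory.close();
        locator.close();
    }
}
```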
What is the difference between queue and jms-queue declaration in the server configuration in the activemq-messaging subsystem?
Could the queue be used with a MessageDriven bean instead of a jms-queue?
I'm using WildFly 19 and Artemis 2.12.0.
ActiveMQ Artemis supports the JMS API, but it also supports industry-standard protocols like AMQP, STOMP, OpenWire, and MQTT. Therefore the broker's underlying address model is not JMS-specific but rather more generic and flexible in order to support numerous different use-cases.
The bare queue refers to the underlying "core" queue from ActiveMQ Artemis. I believe WildFly exposes this low-level component to support unforeseen use-cases. The queue configuration gives control over the address and routing-type, which are the other two main components of the ActiveMQ Artemis address model.
The jms-queue refers to the traditional JMS-based resource which MDBs and other JMS-based application components will use in a Java/Jakarta EE application server. It gives you control of JNDI entries which queue does not. It serves as a kind of familiar "wrapper" around the core queue. That's why there's so much overlap with the attributes and operations between the two.
There's really no reason to use queue in lieu of jms-queue unless you absolutely must. A jms-queue is more straightforward to configure and understand for almost all use-cases. The only reason to use a queue is if you need to control the address and routing-type in a way that jms-queue doesn't allow. This is highly unlikely for JMS applications.
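For comparison, a hedged sketch of both declarations using the WildFly CLI. The resource names, address, and JNDI entry are assumptions, and the availability of the routing-type attribute may vary by WildFly version:

```
# jms-queue: JNDI entries included, routing handled for you
/subsystem=messaging-activemq/server=default/jms-queue=myQueue:add(entries=["java:/jms/queue/myQueue"])

# bare core queue: you control address and routing-type, but there is no JNDI binding
/subsystem=messaging-activemq/server=default/queue=myCoreQueue:add(queue-address=myAddress,routing-type=ANYCAST)
```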
It is possible, for example, to send messages to or consume messages from a queue, but since queue lacks a way to configure JNDI bindings, the JMS client would have to instantiate the queue directly using javax.jms.Session.createQueue(String). However, this method is discouraged as it reduces the portability of the application.
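For example, a minimal sketch of that discouraged approach; the connection factory JNDI name and the queue name myCoreQueue are assumptions:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class CoreQueueClientSketch {
    public static void main(String[] args) throws Exception {
        // The connection factory still comes from JNDI; only the queue lacks a binding
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:/ConnectionFactory");

        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Instantiate the queue by name instead of looking it up in JNDI;
            // this ties the client to broker-specific naming and hurts portability
            Queue queue = session.createQueue("myCoreQueue");

            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello"));
        }
    }
}
```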
I am learning Apache Kafka (as a messaging system) and in that process came across the term StateStore, link here.
I am also aware of Apache Kafka Streams, the client API.
Is StateStore applicable to Apache Kafka in the context of messaging systems, or is it applicable to Apache Kafka Streams?
Does Kafka have its own implementation of StateStore, or does it use a third-party implementation (for example, RocksDB)?
Can anyone help me understand this?
Adding an overview to the good, concise explanation of StateStore in the context of Kafka Streams, and addressing your question.
Kafka Broker in a nutshell
In a messaging context, your work, simplified, would be:
1. Publishing state (producing messages)
2. Saving messages for a period of time for later consumption (retention time)
3. Consuming state (getting the messages)
And in a nutshell, #2 plus fault tolerance and keeping track of the positions of your consumer groups' reads (offsets) is what a Kafka broker does for you.
Kafka client API's
Apart from that, Kafka provides client libraries for common patterns of working with messages (see the producer sketch after this list):
Producer - Publish messages to Kafka topics
Consumer - Subscribe to Kafka topics
Connect - Create reliable integrations with external stores such as various DBMS.
Streams - DSL and utilities aimed to simplify development of common streaming application patterns.
Admin - Programmatically manage / monitor Kafka resources.
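For instance, a minimal sketch of the Producer client; the broker address and topic name are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker address (assumption)
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Publish one message (state) to a topic; the broker handles
        // retention and tracks offsets for the consumers
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "hello"));
        }
    }
}
```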
Kafka Streams State Stores
I'll quote the great explanation from the Streams Architecture docs (I highly recommend the Kafka docs, as they are very well written for any level of experience).
Kafka Streams provides so-called state stores, which can be used by stream processing applications to store and query data, which is an important capability when implementing stateful operations. The Kafka Streams DSL, for example, automatically creates and manages such state stores when you are calling stateful operators such as join() or aggregate(), or when you are windowing a stream.
As you can see, the StateStore is used as a helper that extends processing from a single-message context to multiple messages, enabling more complex functions over a group of messages (all the messages within a time window, aggregation functions over several messages, etc.).
I'll add that RocksDB is the default implementation used by Kafka Streams and can be changed, as mentioned in the previous answer.
Also, if you want to explore more, here is a link to the great intro videos from Apache Kafka's official docs:
Streams API intro videos
Have an awesome learning experience!
StateStore is applicable in the Kafka Streams context.
Some processors, like reduce or aggregate, are stateful operations.
Kafka Streams uses state stores to manage this. By default it uses RocksDB, but this is customizable.
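To make this concrete, here is a minimal sketch of a stateful count() backed by a state store; the topic names, store name and application id are assumptions:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

public class StateStoreSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "statestore-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("input-topic");

        // count() is stateful: Kafka Streams creates and manages a state store
        // (RocksDB by default) to hold the running count per key
        KTable<String, Long> counts = events
                .groupByKey()
                .count(Materialized.as("counts-store"));

        counts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```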
I am evaluating the Vert.x framework to see if I can replace the Kafka-based communications between my microservices developed using Spring Boot.
The question is:
Can I replace
Kafka with the Vert.x event bus, and
Spring Boot microservices with Vert.x-based verticles?
To answer quickly, I would say it depends on your needs.
Yes, the eventbus can be a good way to handle natively communication between microservices verticles using an asynchronous and non-blocking paradigm.
But in some cases you could need:
to handle some common enterprise patterns like replay mechanisms, persistence of messages, transactional reads
to be able to process some kinds of messages in chronological order
to handle communication between multiple kinds of microservices that aren't all written with the same framework/toolkit, or even the same programming language
to handle reliability, resilience and failure recovery when all your consumers/microservices/verticles have died
to handle dynamic horizontal scalability and monitoring of your consumers/microservices/verticles
to be able to work with a single cluster deployed across multiple datacenters and regions
In those cases I'd prefer to choose Apache Kafka over the native eventbus or an old-fashioned JMS-compliant system.
It's not forbidden to use both the eventbus and Kafka in the same microservices architecture, according to your real needs. For example, you could have one Kafka consumer group reading a Kafka topic to handle scaling, monitoring, failure recovery and replay, and then handle communication between your sub-verticles through the eventbus, as sketched below.
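For illustration, a hedged sketch of that combination using the vertx-kafka-client module; the topic, group id and event-bus address are assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import io.vertx.core.AbstractVerticle;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class KafkaRelayVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Map<String, String> config = new HashMap<>();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("group.id", "relay-group");
        config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("auto.offset.reset", "earliest");

        // One consumer group reads the Kafka topic (scaling, recovery, replay)...
        KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

        // ...and each record is relayed to sub-verticles over the eventbus
        consumer.handler(record -> vertx.eventBus().publish("internal.events", record.value()));
        consumer.subscribe("external-topic");
    }
}
```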
I'll clarify the scalability and monitoring part a little, and explain why I think it's simpler to handle with Kafka than with the native eventbus and Vert.x cluster mode. Kafka allows us to know in real time (through JMX metrics and the describe command):
the "lag" of a topic, which corresponds to the number of unread messages
the number of consumers in each group that are listening to a topic
the number of partitions of a topic assigned to each consumer
i/o metrics
So it's possible to use an Elastic Stack or Prometheus+Grafana solution to monitor those metrics and use them to drive dynamic scalability (for example, temporarily increasing the number of consumers when the lag metric, the number of partitions, and the CPU/RAM/swap metrics of your hosts indicate the need).
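As a hedged sketch of reading that lag programmatically (rather than via JMX or the describe command), Kafka's Admin client can compute it; the group id and broker address below are assumptions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class LagMonitorSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // committed offsets for the consumer group
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("my-group")
                         .partitionsToOffsetAndMetadata().get();

            // latest (end) offsets for the same partitions
            Map<TopicPartition, OffsetSpec> latestSpec = new HashMap<>();
            committed.keySet().forEach(tp -> latestSpec.put(tp, OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();

            // lag = latest - committed, per partition
            committed.forEach((tp, om) ->
                    System.out.printf("%s lag=%d%n", tp, latest.get(tp).offset() - om.offset()));
        }
    }
}
```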
To answer the second question, Vert.x or Spring Boot, my answer will not be very objective, but I'd vote for Vert.x for its performance on the JVM and especially for its simplicity. I'm a little tired of the Spring factory and its big layers of abstraction that hide a lot of issues under a mountain of annotations triggering a mountain of AOP.
Moreover, in the Java world of microservices there are other alternatives to Spring Boot, like the different implementations of MicroProfile (the Thorntail project, for example).
The event-bus is not persistent. You should use it for fast verticle-to-verticle communications, and more generally to dispatch events where you know that you can lose them if you have a crash.
Kafka streams are persistent, and you should send events there either because you want other (possibly non-Vert.x) applications to consume them, or because you want to ensure that these events are not lost in case of failure.
A reactive (read "scalable and fault-tolerant") Vert.x application typically uses a combination of both the event-bus and some replicable messaging systems like AMQP / Kafka / etc.
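A minimal sketch of the event-bus half of that combination, assuming a hypothetical address name; note that nothing here survives a crash:

```java
import io.vertx.core.Vertx;

public class EventBusSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // In-memory consumer: fast and non-blocking, but not persistent
        vertx.eventBus().consumer("greetings",
                msg -> System.out.println("received: " + msg.body()));

        // Fire-and-forget publish: if the process crashes, in-flight
        // messages are simply lost, unlike records in a Kafka topic
        vertx.eventBus().publish("greetings", "hello");
    }
}
```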
On the question:
Can I replace spring boot microservices with vert.x based verticles?
Yes, definitely, although the two have different programming models.
If you want a more progressive approach and use Spring for structuring your application while using Vert.x for resource efficiency over your I/O and event processing, then you can mix them; see https://github.com/vert-x3/vertx-examples/tree/master/spring-examples for examples.
Take a look at the Quarkus framework: in the workshop section you'll find Vert.x and Apache Kafka combined!
Server A publishes data to the topic source/topic, and two durable subscribers, sub-b and sub-c, are configured to listen to the topic. Subscribers sub-b and sub-c will receive identical data.
Is there a way in HornetQ to configure multiple core bridges to publish messages from the sub-b subscription to server B and from the sub-c subscription to server C?
As per the HornetQ documentation, they suggest using a core bridge instead of a JMS bridge if possible:
It's always preferable to use a core bridge if you can.
The bridgeType schema definition does not seem to support using a subscriber name the way a JMS bridge bean definition does.
The workaround I came up with is to use JMS, but I was wondering if anyone has come across this issue before; would you mind sharing your thoughts on this?
A JMS topic (i.e. source/topic) is represented in the broker simply as an address. A JMS subscription (i.e. sub-b and sub-c) is represented in the broker as a queue associated with the relevant address (source/topic in this case). The queue's internal name is a combination of details from the JMS subscriber (e.g. client ID, subscription name, etc.). When a message is sent to the JMS topic, the broker routes a reference to that message to each subscription so that every subscription gets every message (assuming their selectors match).
A core bridge listens for messages arriving in a queue and then forwards those messages to an address, either locally or on a remote broker.
In your case, you can create a bridge which listens on the queue of the JMS subscription and then forwards that message to a remote broker of your choosing.
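For illustration, a hedged sketch of one such bridge in hornetq-configuration.xml. The subscription queue name shown is hypothetical (the real one is generated from the client ID and subscription name, which you can look up on the broker), and the connector must be defined elsewhere in the configuration:

```xml
<bridges>
   <bridge name="sub-b-bridge">
      <!-- the core queue backing durable subscription sub-b; the actual
           name combines the subscriber's client ID and subscription name -->
      <queue-name>myClientId.sub-b</queue-name>
      <!-- the address on server B to forward each message to -->
      <forwarding-address>jms.topic.source/topic</forwarding-address>
      <static-connectors>
         <connector-ref>server-b-connector</connector-ref>
      </static-connectors>
   </bridge>
</bridges>
```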
I am developing a mobile messaging app. I was going through the technologies needed and found two: MQTT and Apache Kafka. To me, both seem to do the same thing in the same way (in terms of subscribing and publishing to a topic).
I heard that MQTT is a good fit for mobile as it is very lightweight? So basically, what is the difference between these two, and what are the advantages of each over the other?
The main motive behind Kafka is scalability.
MQTT is a protocol with public specification for lightweight client / message broker communications, allowing publish/subscribe exchanges. Multiple implementations of client libraries and brokers (Mosquitto, JoramMQ...) exist and are virtually compatible. MQTT just specifies the transport, and vaguely the application part (i.e. how data is handled and possibly stored, how clients are authorized...). The spec is not clear if data consumed on a topic is only real-time or possibly persistent. The spec doesn't state anything about how the message broker implementing MQTT could/should scale.
On the other hand, Apache Kafka is a message broker based on an internal "commit log": its focus is storing massive amounts of data on disk, and allowing consumption in real-time or later (as long as data is still available on disk). It's designed to be deployable as cluster of multiple nodes, with good scalability properties. Kafka uses its own network protocol.
So you are comparing two different things here: a standard pub/sub protocol (with multiple implementations), and a specific message storing/distributing piece of software, vaguely of the same family, with its own protocol.
I'd say that if you need to store massive amounts of messages and ensure batch processing, look more at Kafka. If you have lots of clients/apps exchanging messages in real time on many independent topics, look more at the MQTT (or even AMQP) message broker implementations.
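To give a feel for the MQTT side, a minimal sketch using the Eclipse Paho Java client; the broker URL, client id and topic are assumptions:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttPublishSketch {
    public static void main(String[] args) throws Exception {
        // Lightweight client: one TCP connection, tiny protocol overhead
        MqttClient client = new MqttClient("tcp://localhost:1883", "mobile-client-1");
        client.connect();

        MqttMessage message = new MqttMessage("hello".getBytes());
        message.setQos(1); // at-least-once delivery

        client.publish("chat/room1", message);
        client.disconnect();
    }
}
```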
MQTT is a standard protocol (with many implementations). Kafka (which is also a protocol) is normally used by downloading it from the Apache website or e.g. a Confluent Docker image.
It is like comparing apples and oranges, both exist for very different reasons.
Most use cases I see in IoT environments combine both MQTT and Apache Kafka. The edge devices speak the MQTT protocol (for the benefits it has in edge environments). These events are then forwarded to Apache Kafka to get them into the rest of the enterprise architecture.
You can do this either via a MQTT Broker like HiveMQ + Apache Kafka or via a MQTT Proxy (so that you don't need the MQTT Broker). Both options have trade-offs, of course.
See this example of how to combine MQTT with Apache Kafka. Or go directly to the GitHub code: "Deep Learning UDF for KSQL for Streaming Anomaly Detection of MQTT IoT Sensor Data".
I also created a live demo about how to integrate Apache Kafka and MQTT.