How to create an exclusive queue consumer in Mule?

In ActiveMQ you configure an exclusive consumer for a queue like:
Queue_Name_Here?consumer.exclusive=true
How do you configure an exclusive consumer like that in Mule?

You need to URL-encode the queue name; otherwise Mule may try to interpret the query parameters as Mule transport options, which they are not.
<jms:inbound-endpoint queue="Queue_Name_Here%3Fconsumer.exclusive%3Dtrue"
                      connector-ref="Active_MQ"
                      doc:name="JMS"/>
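For reference, the encoded value above can be produced with standard URL encoding; a quick Java sketch (the queue name is just an example, and any URL-encoding tool would do):

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeQueueName {
    public static void main(String[] args) {
        // "?" becomes %3F and "=" becomes %3D, so Mule passes the options
        // through to ActiveMQ instead of treating them as transport options
        String encoded = URLEncoder.encode("Queue_Name_Here?consumer.exclusive=true", StandardCharsets.UTF_8);
        System.out.println(encoded); // Queue_Name_Here%3Fconsumer.exclusive%3Dtrue
    }
}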

I was using a jms:activemq-xa-connector for distributed transactions and changed it to a jms:activemq-connector, which now works with your URL-encoding solution. Not sure why the distributed transaction connector does not work.
Thanks Petter.

Related

Kafka Streams - disable internal topic creation

I work in an organization where we must use the shared Kafka cluster.
Due to internal company policy, the account we use for authentication has only the read/write permissions assigned.
We are not able to request the topic-create permission.
To create the topic we need to follow the onboarding procedure and know the topic name upfront.
As we know, Kafka Streams creates internal topics to persist the stream's state.
Is there a way to disable the fault tolerance and keep the stream state in memory or persist in the file system?
Thank you in advance.
This entirely depends on how you write the topology. For example, stateless DSL operators such as map/filter/forEach don't create any internal topics.
If you actually need to do aggregations and build state stores, then you really shouldn't disable the topics. Yes, state stores are kept either in memory or in RocksDB on disk, but they are still backed by changelog topics so they can be redistributed or rebuilt in case of failure.
If you want to prevent them, I think you'll need an authorizer class defined on the broker that can restrict topic creation based, at least, on client-side application.id and client.id regex patterns; there's nothing you can do purely through client configuration.
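To illustrate the difference, a minimal sketch (the topic names are made up): the first part of the topology is purely stateless and creates no internal topics, while the commented stateful step would require them.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class StatelessTopologySketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("orders-in"); // hypothetical topic

        // Stateless: filter/mapValues/to create no internal topics
        input.filter((key, value) -> value != null)
             .mapValues(value -> value.toUpperCase())
             .to("orders-out"); // hypothetical topic

        // For contrast, a stateful step such as input.groupByKey().count()
        // would need a changelog (and possibly a repartition) topic,
        // which requires topic-create permission on the cluster.
        return builder.build();
    }
}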

Is it possible to have a DeadLetter Queue topic on Kafka Source Connector side?

We have a challenge with the events processed by the IBM MQ Source connector: it processes N messages but sends only N-100, where the missing 100 are poison messages.
However, from the blog post below by Robin Moffatt, I can see that a DLQ is not available on the source connector side.
https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/
The above article notes:
Note that there is no dead letter queue for source connectors.
Q1) Please confirm whether anyone has used a dead letter queue with the IBM MQ Source Connector (documentation below).
https://github.com/ibm-messaging/kafka-connect-mq-source
Q2) Has anyone used a DLQ with any other source connector?
Q3) Why is there this limitation of not having a DLQ on the source connector side?
Thanks.
errors.tolerance is available for source connectors too (refer to the docs).
However, compared to sinks, no, DLQ options are not available. You would instead need to parse the connector logs for the event details and pipe those to a topic on your own.
Overall, how would the source connectors decide what events are bad? A network connection exception means that no messages would be read at all, so there's nothing to produce. If messages fail to serialize to Kafka events, then they also would fail to be produced... Your options are either to fail-fast, or skip and log.
If you just want to send binary data through as-is, then nothing would be "poisonous"; that can be done with the ByteArrayConverter class. It's not really a good use case for Kafka Connect, since Connect is primarily designed around structured types with parsable schemas, but at least with that option the data gets into Kafka, and you can use Kafka Streams to branch/filter the good messages from the bad ones (see the sketch below).
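As a rough illustration of that last point, a sketch that routes raw records coming off the connector into "good" and "bad" topics (the topic names and the validity check are assumptions):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class DlqLikeFilterSketch {
    // Placeholder for whatever "poison" means for your payloads
    static boolean looksValid(byte[] payload) {
        return payload != null && payload.length > 0;
    }

    public static void addTo(StreamsBuilder builder) {
        // The connector writes raw bytes here via ByteArrayConverter (hypothetical topic name)
        KStream<byte[], byte[]> raw = builder.stream("mq-source-raw");

        // Route records that pass the check onward, everything else to a DLQ-like topic
        raw.filter((key, value) -> looksValid(value)).to("mq-source-clean");
        raw.filterNot((key, value) -> looksValid(value)).to("mq-source-bad");
    }
}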

Is it necessary to use transactions explicitly in Kafka Streams to get "effectively once" behaviour?

A Confluent article states:
Stream processing applications written in the Kafka Streams library can turn on exactly-once semantics by simply making a single config change, to set the config named “processing.guarantee” to “exactly_once” (default value is “at_least_once”), with no code change required.
But as transactions are said to be used, I would like to know: Are transactions used implicitly by Kafka Streams, or do I have to use them explicitly?
In other words, do I have to call something like .beginTransaction() and .commitTransaction(), or is all of this really being taken care of under the hood, and all that remains for me to be done is fine-tuning commit.interval.ms and cache.max.bytes.buffering?
Kafka Streams uses the transactions API implicitly to achieve exactly-once semantics, so you do not need to call beginTransaction()/commitTransaction() yourself or set any other configuration.
If you continue reading the blog, it says:
"More specifically, when processing.guarantee is configured to exactly_once, Kafka Streams sets the internal embedded producer client with a transaction id to enable the idempotence and transactional messaging features, and also sets its consumer client with the read-committed mode to only fetch messages from committed transactions from the upstream producers."
More details can be found in KIP-129: Streams Exactly-Once Semantics
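In other words, the only change is in the Streams configuration; a minimal sketch (application.id and broker address are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class EosConfigSketch {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-eos-app");        // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // This single setting makes Streams use a transactional producer and a
        // read_committed consumer internally; no beginTransaction()/commitTransaction() calls needed
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // Optional tuning mentioned in the question
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024);
        return props;
    }
}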

Is it possible in Spring Kafka to send messages that will expire on a per-message (not per-template or higher) basis?

I am trying to use Kafka as a request-response system between two clients, much like RabbitMQ, and I was wondering if it is possible to set an expiration on a message so that after it is posted it will automatically get deleted from the Kafka servers.
I'm trying to do it at the per-message level (though per-topic would also be okay, but I'd like to use the same template if possible).
I was checking ProducerRecord, but all it had was a timestamp. I also don't see any mention of it in KafkaHeaders.
Kafka records are deleted in segments (groups of messages) based on the overall topic retention.
Spring is just a client; it doesn't control the server-side logic of the log cleaner.
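If coarse-grained expiry is acceptable, you can lower the retention per topic (not per message); a sketch using the Kafka AdminClient, where the topic name and retention value are assumptions (records are only removed when the broker's log cleanup runs, not at an exact instant):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class TopicRetentionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Expire records on this topic after roughly 60s; this applies to the whole topic
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "replies-topic"); // placeholder
            AlterConfigOp setRetention =
                new AlterConfigOp(new ConfigEntry("retention.ms", "60000"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    }
}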

Parallel processing of JMS messages?

Is it possible to create a pool of message listeners or message-driven beans to process messages from a JMS queue or topic in parallel?
I am using JBoss and JBoss's JMS.
Yes, if the MDB pool size is greater than one, JBoss should create multiple MDBs to process the messages in parallel.
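As a rough sketch of an MDB whose parallelism comes from the container pool (the destination name and pool size are assumptions, and the exact property names can vary by JBoss version):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/OrdersQueue"), // placeholder
    // JBoss treats maxSession as the number of concurrent consumer sessions
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "10")
})
public class OrderProcessorMdb implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // Each pooled MDB instance handles one message at a time;
        // the container dispatches to up to 10 instances in parallel.
    }
}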
Absolutely. I've done it with JMS queues to create a multi-server pool of listeners in order to process large numbers of transactions. You can use the Competing Consumers pattern. I used a modified version, since we needed to process messages in order within accounts; we used a lease mechanism to allocate servers to account-number ranges, providing failover and scalability.
We were using Tibco's JMS provider, but this works with any JMS provider.