I am using JMS to fetch Yahoo stock quotes asynchronously. I create a JMSContext on the producer side and would like to reuse the producer's context in the consumer class as well. When I make it public static, the JMSContext ends up null. Can a JMSContext be public and static? Is there another way to create the JMSContext in the consumer? I am using NetBeans for this task.
A JMSContext is an ordinary Java object, so it can have whatever visibility your application architecture requires. However, read the JMS spec and you'll see that only one thread may use it at any one time. If you can enforce that in your application, you can share the context; if that doesn't make sense, don't. It's not the JMS provider's job to enforce this threading restriction.
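For the consumer side, a common alternative is to create its own JMSContext from the same ConnectionFactory instead of sharing the producer's. A minimal sketch (JMS 2.0) for a standalone client, where the JNDI names jms/ConnectionFactory and jms/stockQuotes are placeholders for whatever your server defines:

@Resource(lookup = "jms/ConnectionFactory")
private ConnectionFactory connectionFactory;

@Resource(lookup = "jms/stockQuotes")
private Queue quotesQueue;

public void startConsuming() {
    // Each side gets its own JMSContext, so nothing needs to be public static.
    JMSContext context = connectionFactory.createContext();
    JMSConsumer consumer = context.createConsumer(quotesQueue);
    // Asynchronous delivery: the listener runs on a provider thread.
    consumer.setMessageListener(message -> {
        // process the stock quote here
    });
}

(In a full Java EE container, asynchronous consumption would normally be done with an MDB rather than setMessageListener.)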
I use Spring Cloud Stream with Kafka. I have a topic X with partition Y and consumer group Z (Spring Boot starter parent 2.7.2, Spring Kafka 2.8.8):
@StreamListener("input-channel-name")
public void processMessage(final DomainObject domainObject) {
    // some processing
}
It works fine.
I would like an endpoint in the app that lets me re-read/re-process (seek, right?) all the messages in X.Y, not after a rebalance (ConsumerSeekAware#onPartitionsAssigned) or after an app restart (KafkaConsumerProperties#resetOffsets), but on demand, like this:
@RestController
@Slf4j
@RequiredArgsConstructor
public class SeekController {

    @GetMapping
    public void seekToBeginningForDomainObject() {
        /**
         * seekToBeginning for X, Y, input-channel-name
         */
    }
}
I just can't achieve that. Is it even possible? I understand that I have to do this at the consumer level, probably on the consumer that is created after the @StreamListener("input-channel-name") subscription, right? But I have no clue how to obtain that consumer. How can I execute a seek on demand to make Kafka send the messages to the consumer again? I just want to reset the offset for X.Y.Z to 0 so the app loads and processes all the messages again.
https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream-binder-kafka.html#rebalance-listener
KafkaBindingRebalanceListener.onPartitionsAssigned() provides a boolean to indicate whether this is an initial assignment vs. a rebalance assignment.
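A minimal sketch of such a listener, assuming the binder picks up a single KafkaBindingRebalanceListener bean (the names here are illustrative):

@Bean
public KafkaBindingRebalanceListener rebalanceListener() {
    return new KafkaBindingRebalanceListener() {

        @Override
        public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
                Collection<TopicPartition> partitions, boolean initial) {
            // Seek only on the first assignment after startup, not on rebalances.
            if (initial) {
                consumer.seekToBeginning(partitions);
            }
        }
    };
}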
Spring Cloud Stream does not currently support arbitrary seeks at runtime, even though the underlying KafkaMessageDrivenChannelAdapter does support access to a ConsumerSeekCallback (which allows arbitrary seeks between polls). It would need an enhancement to the binder to expose this capability.
It is possible, though, to consume idle container events in an event listener; the event contains the consumer, so you could do arbitrary seeks under those conditions.
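A rough sketch of that idle-event approach (assuming an idleEventInterval is configured on the container so the events are actually published; the seekRequested flag is a placeholder for whatever your REST endpoint toggles):

@Component
public class OnDemandSeeker {

    private final AtomicBoolean seekRequested = new AtomicBoolean();

    // Called from the REST endpoint to request a reset to the beginning.
    public void requestSeek() {
        seekRequested.set(true);
    }

    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        // Idle events are published on the consumer thread, so it is
        // safe to touch the consumer here.
        if (seekRequested.compareAndSet(true, false)) {
            event.getConsumer().seekToBeginning(event.getTopicPartitions());
        }
    }
}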
Here is my situation:
We have a Spring Cloud Stream 3 Kafka service connected to multiple topics on the same broker, but I want to control connecting to a specific topic based on properties.
Every topic has its own binder and binding, but the broker is the same for all.
I tried disabling the binding (the only solution I found so far) using the property below, and that stops the StreamListener from receiving messages, but the connection to the topic and the rebalancing still happen.
spring:
  cloud:
    stream:
      bindings:
        ...
        anotherBinding:
          consumer:
            ...
            autostartup: false
I wonder if there is any binder-level setting that prevents it from starting. One of the topic consumers should only be active in one of the environments.
Thanks
Disabling the bindings by setting autoStartup to false should work; I am not sure what the issue is.
It doesn't look like you are using the new functional model, but the StreamListener. If you move to the functional model, here is another thing you can try: you can disable bindings by not including the corresponding functions at runtime. For example, assume you have the following two consumers.
@Bean
public Consumer<String> one() { return msg -> { /* handle topic one */ }; }

@Bean
public Consumer<String> two() { return msg -> { /* handle topic two */ }; }
When running this app, you can provide the property spring.cloud.function.definition to include/exclude functions. For instance, when you run it with spring.cloud.function.definition=one, the consumer two will not be activated at all; when running with spring.cloud.function.definition=two, the consumer one will not be activated.
The downside to this approach is that if you decide to start the other function once the app has started (given autoStartup is false on it), it will not work, because it was not part of the original bindings created through spring.cloud.function.definition. However, based on your requirements, this is probably not an issue, since you know which environments are targeted by the corresponding topics. In other words, if you know that consumer one always needs to consume from topic one, you simply don't include consumer two in the definition.
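So in practice the selection can be driven per environment, e.g. via an environment-specific profile (the profile and function names below are just examples):

# application-env1.yml
spring:
  cloud:
    function:
      definition: one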
Do I need a separate KafkaTemplate for DeadLetterPublishingRecoverer?
I have a KafkaTemplate used to send messages to Kafka, and I have a KafkaListenerContainerFactory with a SeekToCurrentErrorHandler and a DeadLetterPublishingRecoverer, which in turn requires me to provide a KafkaTemplate. Do I really need another template just for DLQ handling, or could I use that KafkaTemplate for my normal Kafka operations as well? I suppose I could also use a non-generic KafkaTemplate for both, but I suspect that is far from best practice.
If the generic types are different, you can either configure 2 templates, or use <Object, Object> (as long as your serializer can handle both types).
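A minimal sketch of the shared-template approach, assuming a ProducerFactory<Object, Object> whose serializer (e.g. JsonSerializer) can handle both value types; the bean names and back-off values here are illustrative, not prescribed:

@Bean
public KafkaTemplate<Object, Object> kafkaTemplate(ProducerFactory<Object, Object> pf) {
    // Used both for normal sends and for dead-letter publishing.
    return new KafkaTemplate<>(pf);
}

@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    // Retry twice, one second apart, then publish to the dead-letter topic.
    return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2));
}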
Let me describe the rationale behind my question:
We have a Micronaut-based application consuming messages from a Kafka broker.
The consumed messages are processed and fed to another remote "downstream" application.
If this downstream application is restarted on purpose, it takes a while until it is ready to accept further messages from our Micronaut-based application.
So we had the idea of sending our Micronaut application a request to SUSPEND/PAUSE consumption of messages from Kafka (e.g. via HTTP to an appropriate endpoint).
The KafkaConsumer interface seems to have appropriate methods to achieve this goal, like
public void pause(java.util.Collection<TopicPartition> partitions)
public void resume(java.util.Collection<TopicPartition> partitions)
But how to get a reference to the appropriate KafkaConsumer instance fed in to our HTTP endpoint?
We've tried to get it injected to the constructor of the HTTP endpoint/controller class, but this yields
Error instantiating bean of type [HttpController]
Message: Missing bean arguments for type: org.apache.kafka.clients.consumer.KafkaConsumer. Requires arguments: AbstractKafkaConsumerConfiguration consumerConfiguration
It's possible to get a reference to the KafkaConsumer instance as a method parameter of @Topic-annotated receive methods, as described in the Micronaut Kafka documentation,
but this would mean storing that reference in an instance variable, having the HTTP endpoint access it, and so on, which does not sound very convincing:
you get a reference to the KafkaConsumer ONLY when the next message is received! That might be appropriate for SUSPENDING/PAUSING, but not for RESUMING!
By the way, calling KafkaConsumer.resume(...) on a reference saved in an instance variable yields
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2201)
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2185)
at org.apache.kafka.clients.consumer.KafkaConsumer.resume(KafkaConsumer.java:1842)
[...]
I think the same holds true when implementing the KafkaConsumerAware interface to store a reference to the freshly created KafkaConsumer instance.
So, are there any ideas on how to handle this appropriately?
Thanks
Christian
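One direction worth checking (an assumption to verify against your Micronaut Kafka version, not something confirmed here): Micronaut Kafka's ConsumerRegistry can pause and resume consumers by client id, and it applies the request from the polling thread, which would avoid the ConcurrentModificationException above. A hypothetical sketch, where "quote-listener" stands for a client id assigned via @KafkaListener(clientId = "quote-listener"):

@Controller("/consumption")
public class ConsumptionController {

    private final ConsumerRegistry consumerRegistry;

    public ConsumptionController(ConsumerRegistry consumerRegistry) {
        this.consumerRegistry = consumerRegistry;
    }

    @Post("/pause")
    public void pause() {
        // The registry defers the pause to the consumer's polling thread.
        consumerRegistry.pause("quote-listener");
    }

    @Post("/resume")
    public void resume() {
        consumerRegistry.resume("quote-listener");
    }
}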
I have a heavily loaded, performance-critical application.
Now I'm migrating the application to EJB. I'm very worried about using EJB to consume messages from queues, because transactionality can hurt performance.
At the moment I consume X messages in the same transaction, but I don't know how to do the same using MDBs.
Is it possible to consume a block of messages in an MDB using only one transaction?
It is not guaranteed that the same MDB instance will process the stream of messages.
I think you can achieve what you want by using a stateless bean with an @Asynchronous invocation, passing it your set of messages.
Something like this:
@Stateless
public class AsynchProcessor {

    @Asynchronous
    public void processMessages(Set<MyMessage> messages) { .... }
}
Decorate your method with a Future return type if necessary, and then in your client:
Set<MyMessage> messages = ...
asynchProcessor.processMessages(messages);
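If the caller needs to track completion, a sketch of the Future-returning variant (using javax.ejb.AsyncResult) could look roughly like this:

@Asynchronous
public Future<Void> processMessages(Set<MyMessage> messages) {
    // ... process the batch ...
    return new AsyncResult<>(null);
}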