Can someone please guide me on how to intercept MQTT messages on an ActiveMQ Artemis broker? I tried what the manual suggests, but the MQTT messages are not being intercepted. Publishing and subscribing work fine, however.
Interceptor class:
import org.apache.activemq.artemis.api.core.ActiveMQException;
import org.apache.activemq.artemis.api.core.Interceptor;
import org.apache.activemq.artemis.core.protocol.core.Packet;
import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection;

public class InterceptorExample implements Interceptor {

   @Override
   public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException {
      System.out.println("Packet intercepted");
      return true;
   }
}
I add the interceptor to the configuration in the addMQTTConnector method:
protected void addMQTTConnector() throws Exception {
   // ...
   List<String> incomingInterceptors = new ArrayList<>();
   incomingInterceptors.add("org.apache.activemq.artemis.core.protocol.mqtt.InterceptorExample");
   server.getConfiguration().setIncomingInterceptorClassNames(incomingInterceptors);
}
The full code for the broker class is at https://codeshare.io/snZsB
I filed a feature request for interceptor support in MQTT. It has since been implemented and was released in Artemis 1.4.0.
In Artemis 1.3.0, only messages sent over the core protocol (and possibly one other protocol, but not MQTT) could be intercepted.
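For anyone on 1.4.0 or later, here is a minimal sketch of what the MQTT flavour of an interceptor can look like. It assumes the MQTTInterceptor interface added for this feature, which hands you the raw Netty MqttMessage; verify the exact API against the Artemis version you run:
import io.netty.handler.codec.mqtt.MqttMessage;
import org.apache.activemq.artemis.api.core.ActiveMQException;
import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor;
import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection;

public class MQTTInterceptorExample implements MQTTInterceptor {

   @Override
   public boolean intercept(MqttMessage packet, RemotingConnection connection) throws ActiveMQException {
      // Log the MQTT packet type; returning true lets the packet continue through the broker.
      System.out.println("MQTT packet intercepted: " + packet.fixedHeader().messageType());
      return true;
   }
}
It is registered the same way as the interceptor in the question, via setIncomingInterceptorClassNames.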
I'm trying to run a Micronaut test that includes Kafka with Testcontainers.
For my test I need my code and the Kafka server to share the same port, but I cannot configure the port in Kafka:
@Container
static KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"));
It generates a random port and it is not possible to configure it.
Another possibility would be to change the application.yml property that the producer uses for the Kafka server, but I cannot find a solution for that either:
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configuration.kafkaUrl);
Your test class needs to implement TestPropertyProvider and override getProperties(); those properties are resolved before the application context starts, so the container's randomly mapped port can be handed to kafka.bootstrap.servers:
@MicronautTest
class MySpec extends Specification implements TestPropertyProvider {

    private static final Collection<Book> received = new ConcurrentLinkedDeque<>()

    static KafkaContainer kafka = new KafkaContainer(
            DockerImageName.parse('confluentinc/cp-kafka:latest'))

    @Override
    Map<String, String> getProperties() {
        kafka.start()
        ['kafka.bootstrap.servers': kafka.bootstrapServers]
    }

    // tests here
}
See this official Micronaut guide for a detailed tutorial:
https://guides.micronaut.io/latest/micronaut-kafka-gradle-groovy.html
I have followed the documentation below to handle deserialization exceptions: https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
It works fine: the bad message gets logged and the consumer moves on, but every time I restart the server the bad messages are logged again.
Is there a way to skip/acknowledge the bad message once it is logged so that it doesn't get picked up again when the server restarts?
Consumer YAML
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        # Delegate deserializers
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
Also, note: with the above config, the logs show a warning that spring.deserializer.value.delegate.class was supplied but is not a known config.
I have also configured this:
@Configuration
@EnableKafka
public class KafkaConfiguration {

    /**
     * Boot will autowire this into the container factory.
     */
    @Bean
    public LoggingErrorHandler errorHandler() {
        return new LoggingErrorHandler();
    }
}
Could someone please advise?
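For reference, a rough sketch of one way this is usually addressed: LoggingErrorHandler only logs, so the poison pill's offset is never committed and the record is re-read after a restart. An error handler that logs the failed record and then commits its offset skips it permanently. This assumes Spring Kafka 2.3+ (SeekToCurrentErrorHandler; in 2.8+ DefaultErrorHandler plays the same role), and the class and bean names below are only illustrative:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaErrorHandlingConfiguration {

    @Bean
    public SeekToCurrentErrorHandler errorHandler() {
        // The recoverer just logs the failed record; FixedBackOff(0L, 0L) means no retries.
        SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(
                (ConsumerRecord<?, ?> record, Exception ex) ->
                        System.err.println("Skipping bad record at offset " + record.offset()
                                + " on topic " + record.topic() + ": " + ex.getMessage()),
                new FixedBackOff(0L, 0L));
        // Commit the offset of the skipped (recovered) record so it is not replayed on restart.
        handler.setCommitRecovered(true);
        return handler;
    }
}
As with the LoggingErrorHandler bean above, Boot should pick this bean up for the listener container factory; a DeadLetterPublishingRecoverer (described in the linked Confluent post) can replace the logging lambda if the bad records should be preserved.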
I am using Spring Kafka with the embedded Kafka broker for JUnit tests, and on Windows every test gives this error:
Error deleting C:\Users\LXX691\AppData\Local\Temp\kafka-1103610162480947200/.lock: The process cannot access the file because it is being used by another process.
I just did the basic configuration, like below:
@SpringBootTest(webEnvironment = RANDOM_PORT)
@RunWith(SpringRunner.class)
public class KafkaTest {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Before
    public void setUp() throws Exception {
        // set up producer and consumers
    }

    @Test
    public void test() {
        producer.send(new ProducerRecord<>("topic", "content"));
    }
}
Any suggestion to resolve or any workaround is appreciated.
This is a known issue in Apache Kafka: https://issues.apache.org/jira/browse/KAFKA-8145.
Unfortunately there is nothing Spring Kafka can do about it.
See more info here: Kafka: unable to start Kafka - process can not access file 00000000000000000000.timeindex and here https://github.com/spring-projects/spring-kafka/issues/194
We are using Spring Kafka to consume records in batches. We sometimes face an issue where the application starts and does not consume any records even though there are enough unread messages. Instead, we continuously see info logs saying:
[INFO]-[FetchSessionHandler:handleError:440] - [Consumer clientId=consumer-2, groupId=groupId] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 1027: org.apache.kafka.common.errors.DisconnectException.
People facing this issue generally say to ignore it, since it is just an info log. We also see that after some time the application starts picking up records without us doing anything, but it is very unpredictable how long that might take :(
We didn't see this error when we were using Spring Cloud Stream. Not sure if we have missed any configuration in spring-kafka.
Has anyone faced this issue in the past? Please let us know if we are missing something. We have a huge load on our topics; could this happen if there is a lot of lag?
We are using Spring Kafka 2.2.2.RELEASE
Spring Boot 2.1.2.RELEASE
Kafka 0.10.0.1 (we understand it's very old; for unavoidable reasons we have to use this :()
Here is our code:
application.yml
li.topics: CUSTOM.TOPIC.JSON
spring:
  application:
    name: DataPublisher
  kafka:
    listener:
      type: batch
      ack-mode: manual_immediate
    consumer:
      enable-auto-commit: false
      max-poll-records: 500
      fetch-min-size: 1
      fetch-max-wait: 1000
      group-id: group-dev-02
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: CustomResourceDeserialiser
      auto-offset-reset: earliest
Consumer:
public class CustomKafkaBatchConsumer {

    @KafkaListener(topics = "#{'${li.topics}'.split(',')}", id = "${spring.kafka.consumer.group-id}")
    public void receiveData(@Payload List<CustomResource> customResources,
                            Acknowledgment acknowledgment,
                            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
                            @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    }
}
Deserialiser:
public class CustomResourceDeserialiser implements Deserializer<CustomResource> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public CustomResource deserialize(String topic, byte[] data) {
        if (data != null) {
            try {
                ObjectMapper objectMapper = ObjectMapperFactory.getInstance();
                return objectMapper.readValue(data, CustomResource.class);
            } catch (IOException e) {
                log.error("Failed to deserialise with {}", e.getMessage());
            }
        }
        return null;
    }

    @Override
    public void close() {
    }
}
This could be because of KAFKA-8052 (Intermittent INVALID_FETCH_SESSION_EPOCH error on FETCH request), which is fixed in Kafka 2.3.0.
Unfortunately, as of Aug 21, 2019, Spring Cloud Stream has not yet upgraded its dependencies to the 2.3.0 release of kafka-clients.
You can try adding these as explicit dependencies in your Gradle build:
compile ('org.apache.kafka:kafka-streams:2.3.0')
compile ('org.apache.kafka:kafka-clients:2.3.0')
compile ('org.apache.kafka:connect-json:2.3.0')
compile ('org.apache.kafka:connect-api:2.3.0')
Update
This could also be caused by a Kafka broker/client version incompatibility. If your cluster is behind the client version, you might see all kinds of odd problems such as this. For example, if your Kafka broker is on 1.x.x and your kafka-clients consumer is on 2.x.x, this could happen.
I have faced the same problem before; the solution was either to decrease the current partition count or to increase the number of consumers. In my case, we had ~100M records on 60 partitions and I hit the same error when a single pod was running. I scaled to 30 pods (30 consumers) and the problem was solved.
Using the Spring Integration Kafka DSL, why does the listener not receive messages? In the same application, if I replace the Spring Integration DSL with a method annotated with @KafkaListener, it consumes messages fine.
What am I missing with the DSL?
DSL code that does not consume:
@Configuration
@EnableKafka
class KafkaConfig {

    // consumer factory provided by Spring Boot
    @Bean
    IntegrationFlow inboundKafkaEventFlow(ConsumerFactory consumerFactory) {
        IntegrationFlows
                .from(Kafka
                        .messageDrivenChannelAdapter(consumerFactory, "kafkaTopic")
                        .configureListenerContainer({ c -> c.groupId('kafka-consumer-staging') })
                        .id("kafkaTopicListener")
                        .autoStartup(true))
                .channel("logChannel")
                .get()
    }
}
logChannel (or any other channel I use) does not reflect the inbound messages.
If I use a plain listener instead of the code above, it consumes messages fine:
@Component
class KafkaConsumer {

    @KafkaListener(topics = ['kafkaTopic'], groupId = 'kafka-consumer-staging')
    void inboundKafkaEvent(String message) {
        log.debug("message is {}", message)
    }
}
Both approaches use the same application.properties for the Kafka consumer.
You are missing the fact that you use Spring Integration, but you haven't enabled it in your application. You don't need @EnableKafka though, since you are not going to consume with a @KafkaListener. So, to enable the Spring Integration infrastructure, you need to add @EnableIntegration on your @Configuration class: https://docs.spring.io/spring-integration/docs/5.1.6.RELEASE/reference/html/#configuration-enable-integration
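A minimal sketch of that fix (shown in Java rather than the Groovy of the question; @EnableKafka is dropped since, per the answer, it is only needed for @KafkaListener):
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;

@Configuration
@EnableIntegration  // bootstraps the Spring Integration infrastructure so the flow is wired into a running endpoint
class KafkaConfig {

    // ... the inboundKafkaEventFlow bean from the question stays unchanged ...
}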