I am using RabbitMQ to send messages between two services in a microservice architecture.
I have a problem. Can I configure the queue to pause pushing messages to the consumer and resume when I want? Or can I make the consumer pause taking messages from the queue and resume later (but not by stopping/starting the consumer, because I can't do that in my system)?
If yes, can I do it through the RabbitMQ Management HTTP API?
No, you can't do that; these are consuming policies. Maybe you can stop the publishing instead (see the sketch below).
See also this thread: https://groups.google.com/forum/#!topic/rabbitmq-users/68-DPZN4b_Q
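If stopping the publisher is an option, here is a minimal sketch of that idea using the RabbitMQ Java client. This is purely application-level, not a broker feature; the queue name "work-queue" and how you flip the flag (e.g. from an admin endpoint) are assumptions:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

import com.rabbitmq.client.Channel;

// Gates basicPublish behind a flag: nothing here is a RabbitMQ feature,
// it simply stops this service from publishing until resume() is called.
public class PausablePublisher {

    private final Channel channel;
    private final AtomicBoolean paused = new AtomicBoolean(false);

    public PausablePublisher(Channel channel) {
        this.channel = channel;
    }

    public void pause()  { paused.set(true); }   // flip from wherever suits your system
    public void resume() { paused.set(false); }

    public void publish(byte[] body) throws IOException {
        if (paused.get()) {
            return; // dropped for brevity; buffer locally if you need the messages later
        }
        channel.basicPublish("", "work-queue", null, body); // "work-queue" is an assumed name
    }
}
```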
We have a Spring Boot application producing messages to an AWS MSK Kafka cluster. Every now and then our MSK cluster gets an automatic security update (or similar), and afterwards our KafkaTemplate producer loses its connection to the cluster, so all sends end up timing out. The producer doesn't recover from this automatically and keeps trying to send messages. The idempotent sends that follow throw an exception:
org.apache.kafka.common.errors.ClusterAuthorizationException: The producer is not authorized to do idempotent sends
Restarting the producer application fixes the issue. Our producer is a very simple application that uses KafkaTemplate to send messages, without any custom retry logic.
One suggestion was to add a producer reset call to the error handler, but testing that solution is very hard, as there seems to be no reliable way to reproduce the issue.
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/core/ProducerFactory.html#reset()
Any ideas why this happens and what is the best way to fix it?
We have an open issue to close the producer on any timeout...
https://github.com/spring-projects/spring-kafka/issues/2251
Contributions are welcome.
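Until that lands, here is a minimal sketch of the suggested workaround, assuming Spring Kafka 3.x (where send() returns a CompletableFuture) and that DefaultKafkaProducerFactory is the factory in use; resetting on any send failure is deliberately broad and is an assumption, not the fix from the issue:

```java
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class ResettingSender {

    private final KafkaTemplate<String, String> template;
    private final DefaultKafkaProducerFactory<String, String> producerFactory;

    public ResettingSender(KafkaTemplate<String, String> template,
                           DefaultKafkaProducerFactory<String, String> producerFactory) {
        this.template = template;
        this.producerFactory = producerFactory;
    }

    public void send(String topic, String payload) {
        template.send(topic, payload).whenComplete((result, ex) -> {
            if (ex != null) {
                // Close the cached (broken) producer; the factory lazily
                // creates a fresh one on the next send.
                producerFactory.reset();
            }
        });
    }
}
```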
We have containerized ActiveMQ Artemis 2.16.0 and deployed it as a K8s deployment, scaled with KEDA.
We use STOMP via the stomp.py Python module. The ack mode is set to client-individual and consumerWindowSize = 0 on the connection. We acknowledge each message promptly as soon as we read it.
The problem is that, sometimes, the message count in the web console does not drop to zero even after all the messages have actually been consumed and acknowledged. When I browse the queue, I don't see any messages in it. This is causing KEDA to spin up pods unnecessarily. Please refer to the screenshots I attached to the JIRA issue for this problem.
I fixed the issue in my application code. My requirement was that each queue listener should consume exactly one message and exit gracefully. So, immediately after sending the ACK for the consumed message, I disconnect the connection instead of waiting out the sleep duration before disconnecting.
Thanks, Justin, for spending time on this.
I am writing an HTTP sink plugin for Kafka Connect. Its purpose is to send an HTTP request for each message in the configured Kafka topic. I want to send the message to a dead letter queue in case the HTTP request fails. Can I make use of the dead letter queue configuration provided for sink connectors?
The reason for this question is that the Kafka Connect documentation and several blogs mention that only errors in the transformer and converter are sent to the dead letter queue, not those raised during put(), and sending the HTTP request happens in put(). So is there a way to send failed HTTP messages to the DLQ? If not, is it possible to send the message to some other Kafka topic for further processing?
According to @Neil, this might be informative:
KIP-610 (implemented in Kafka 2.6) added DLQ support for errors that occur while interacting with the end system. KIP-298 added the DLQ, but only for errors prior to the sink-system interaction.
Check the versions of your Connect cluster and your sink connector to see whether they support it.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors
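On the task side, KIP-610 exposes the DLQ through ErrantRecordReporter, which a sink task can call from put(). Here is a sketch, assuming a Kafka 2.6+ worker with errors.tolerance=all and errors.deadletterqueue.topic.name configured on the connector; sendHttpRequest is a hypothetical placeholder for your HTTP call:

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class HttpSinkTask extends SinkTask {

    private ErrantRecordReporter reporter;

    @Override
    public void start(Map<String, String> props) {
        try {
            reporter = context.errantRecordReporter(); // null when no DLQ is configured
        } catch (NoSuchMethodError | NoClassDefFoundError e) {
            reporter = null; // worker is older than Kafka 2.6
        }
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                sendHttpRequest(record); // hypothetical HTTP call
            } catch (Exception e) {
                if (reporter != null) {
                    reporter.report(record, e); // routes the record to the configured DLQ
                } else {
                    throw new ConnectException("HTTP request failed", e);
                }
            }
        }
    }

    private void sendHttpRequest(SinkRecord record) {
        // ... issue the HTTP request for this record ...
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "1.0"; }
}
```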
I have a REST-based application deployed on a server (Tomcat).
Every request that comes to the server takes one second to serve. Now I have an issue: sometimes the server receives more requests than it is capable of serving, which makes it unresponsive. I was thinking I could store the requests in a queue, so that the server can pull a request, serve it, and thus handle the peak-time issue.
Now I was thinking: could Kafka be helpful for this? If yes, any pointers on where to start?
You can use Kafka (or any other messaging system, e.g. ActiveMQ, RabbitMQ, etc.) for this:
When the web service receives a request, add the request (with all the details required to process it) to a Kafka topic using a Kafka producer (see the sketch after these steps).
A separate service (holding the Kafka consumer) will read from the topic (the queue) and process it.
If you need to send a message to the client once the request has been processed, the server can push the information to the client using a WebSocket (or the client can poll for the request status, though this requires a request-status endpoint and will put load on that endpoint).
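Here is a minimal sketch of the first step, assuming the plain Kafka Java client; the topic name incoming-requests and the JSON payload shape are assumptions:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RequestQueue {

    private final KafkaProducer<String, String> producer;

    public RequestQueue() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Called from the REST endpoint: hand the request off and return immediately,
    // so the servlet thread is freed while a consumer service works the backlog.
    public void enqueue(String requestId, String requestJson) {
        producer.send(new ProducerRecord<>("incoming-requests", requestId, requestJson));
    }
}
```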
Apache Kafka would be helpful in your case. A Kafka broker will allow you to handle a peak of requests: the requests will be stored in a queue, as you mentioned, and processed by your server at its own speed.
As you are using Tomcat, I guess you developed your server in Java. Apache Kafka provides a Java API which is quite easy to use.
I am reading messages from Kafka and processing them in Storm.
I can see in the Storm UI that some messages are failing.
I want to log these messages and figure out why they failed. There is nothing about them in the logs as it stands.
If they have failed, they are supposed to be replayed by the spout. The KafkaSpout has a fail method which you can use to identify the failed message IDs.
This might provide some direction.
The logs of every topology execution are stored in the logs folder (apache-storm-0.10.0/logs).
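If nothing useful shows up there, here is a minimal sketch of the fail() hook, assuming the storm-kafka KafkaSpout that ships alongside Storm 0.10.x; the class name LoggingKafkaSpout is made up for the example:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;

// Logs the ID of every failed tuple before delegating to the default
// behaviour, which schedules the message for replay.
public class LoggingKafkaSpout extends KafkaSpout {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingKafkaSpout.class);

    public LoggingKafkaSpout(SpoutConfig config) {
        super(config);
    }

    @Override
    public void fail(Object msgId) {
        LOG.warn("Tuple failed, msgId={}", msgId);
        super.fail(msgId); // keep the default replay behaviour
    }
}
```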