How to configure an ActiveMQ Artemis queue so it is not created again

I have an embedded ActiveMQ Artemis 2.17.0 broker with no pre-configured queues. I'd like the client to connect to the broker and have its queue created automatically if it doesn't exist. This works fine the first time I run the client, but the second time I connect to the broker the client throws an error:
errorType=QUEUE_EXISTS message=AMQ229019: Queue hornetq already exists on address router
Is it possible to configure the client or the server so that if a queue already exists it does not attempt to recreate it?
Below are the programs I am running:
Server
try {
    ActiveMQServer server = new ActiveMQServerImpl(new ConfigurationImpl()
            .setPersistenceEnabled(true)
            .setBindingsDirectory("./router/data/bindings")
            .setLargeMessagesDirectory("./router/data/large")
            .setPagingDirectory("./router/data/paging")
            .setJournalDirectory("./router/data/journal")
            .setSecurityEnabled(false)
            .addAcceptorConfiguration("tcp", "tcp://0.0.0.0:61617?protocols=CORE,AMQP"));
    server.start();
} catch (Exception ex) {
    System.err.println(ex);
}
Client
ServerLocator serverLocator = ActiveMQClient.createServerLocator("tcp://127.0.0.1:61617");
ClientSessionFactory factory = serverLocator.createSessionFactory();
ClientSession session = factory.createSession();
session.createQueue(new QueueConfiguration("router::hornetq")
        .setAutoCreateAddress(Boolean.FALSE)
        .setAutoCreated(Boolean.FALSE)
        .setRoutingType(RoutingType.ANYCAST));
ClientProducer producer = session.createProducer("router::hornetq");
ClientMessage message = session.createMessage(true);
message.getBodyBuffer().writeString("Core Queue Message");
producer.send(message);
session.start();
ClientConsumer consumer = session.createConsumer("router::hornetq");
ClientMessage msgReceived = consumer.receive();
System.out.println("message = " + msgReceived.getBodyBuffer().readString());
session.close();
I'm using the fully qualified queue name here (i.e. router::hornetq) because I have multiple queues on the router address.

Your client is using the core API, which is low-level and does not support auto-queue creation. Your client is creating (or attempting to create) the queue manually, e.g.:
session.createQueue(new QueueConfiguration("router::hornetq")
        .setAutoCreateAddress(Boolean.FALSE)
        .setAutoCreated(Boolean.FALSE)
        .setRoutingType(RoutingType.ANYCAST));
The exception you're seeing is no doubt being thrown here if the queue already exists. Creating a queue is not idempotent, so you have three options from what I can see (a sketch of the first two follows this list):
1. Simply catch the ActiveMQQueueExistsException that is being thrown and ignore it.
2. Use ClientSession.queueQuery to see if the queue exists before you attempt to create it. If it exists, don't try to create it again; if it doesn't, create it. That said, if you have lots of clients like this running concurrently it's still possible to get an ActiveMQQueueExistsException due to races between the clients.
3. Use a client/protocol that supports auto-creation, like the core JMS client or AMQP.
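A minimal sketch combining options 1 and 2, reusing the queue settings from the client code in the question (the imports are the ones I'd expect to need; adjust to your setup):
import org.apache.activemq.artemis.api.core.ActiveMQQueueExistsException;
import org.apache.activemq.artemis.api.core.SimpleString;

// Query first; tolerate the race where another client creates the
// queue between the query and the create.
if (!session.queueQuery(SimpleString.toSimpleString("hornetq")).isExists()) {
    try {
        session.createQueue(new QueueConfiguration("router::hornetq")
                .setAutoCreateAddress(Boolean.FALSE)
                .setAutoCreated(Boolean.FALSE)
                .setRoutingType(RoutingType.ANYCAST));
    } catch (ActiveMQQueueExistsException e) {
        // Another client beat us to it; safe to ignore.
    }
}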
A few other things are worth mentioning:
Since you're not explicitly calling ClientSession.createAddress() you probably want to use setAutoCreateAddress(Boolean.TRUE).
Your consumer doesn't have to use router::hornetq. It can just use hornetq, and it will receive any messages sent to the hornetq queue, because queue names are globally unique in the broker.
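For example, the consumer line from the question could be reduced to this (same session as in the client code above):
ClientConsumer consumer = session.createConsumer("hornetq");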

Related

Unable to exchange messages in a RabbitMQ-FastAPI application. Getting "Error: Unprocessable Entity"

I am trying to build a very simple RabbitMQ-FastAPI based application that can run on Kubernetes. I created two separate APIs for the producer and the consumer.
Producer API:
@app.get("/hello/{message}")
def post_message(message: str = Body(..., example="Hello World!")):
    try:
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="rabbitmq-0.rabbitmq-headless.keda.svc.cluster.local", port=5672,
                                      credentials=pika.PlainCredentials("user", "PASSWORD")))
        channel = connection.channel()
        # channel.exchange_declare(exchange='logs', exchange_type='direct')
        # severity = ['hello', 'message', 'error']
        # messages = ['Hafizur', 'message', 'error']
        channel.queue_declare(queue='hello')
        channel.basic_publish(exchange='', routing_key='hello', body=message)
        connection.close()
        return {"Successfully sended {} do queue".format(message)}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
Consumer API:
@app.post("/message")
def get_message():
    try:
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host="rabbitmq-0.rabbitmq-headless.keda.svc.cluster.local", port=5672,
                                      credentials=pika.PlainCredentials("user", "PASSWORD")))
        channel = connection.channel()
        channel.queue_declare(queue='hello')
        channel.basic_consume(on_message_callback=callback, queue="hello", auto_ack=True)
        print(" [*] Waiting for messages. To exit press CTRL+C")
        channel.start_consuming()
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
From here I am building two Docker images, one for the producer and another for the consumer. Using those, I created two deployments on Kubernetes, did port forwarding, accessed the producer API, and tried to send a message to the RabbitMQ queue. But I am getting "Error: Unprocessable Entity".
I am seeking your help to solve this issue.

ActiveMQ Artemis publish message loss during HA fail-over

I use ActiveMQ Artemis 2.17.0, and I'm looking to avoid message loss in a producer during fail-over.
I handle publish loss during the Artemis active-to-passive switch by catching ActiveMQUnBlockedException and sending the message again.
The brokers are configured as an active/passive HA shared-store pair: the active node on host1 and the passive node on host2.
The URL is:
(tcp://host1:61616,tcp://host2:61616)?ha=true&reconnectAttempts=-1&blockOnDurableSend=false
blockOnDurableSend is set to false for high throughput.
During the active-to-passive switch the publishing code throws ActiveMQUnBlockedException, but not during the passive-to-active switch.
We're using Spring 4.2.5 and CachingConnectionFactory as the connection factory.
I'm using the following code to send messages:
private void sendMessageInternal(final ConnectionFactory connectionFactory, final Destination queue, final String message)
        throws JMSException {
    try (final Connection connection = connectionFactory.createConnection()) {
        connection.start();
        try (final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
             final MessageProducer producer = session.createProducer(queue)) {
            final TextMessage textMessage = session.createTextMessage(message);
            producer.send(textMessage);
        }
    } catch (JMSException thr) {
        if (thr.getCause() instanceof ActiveMQUnBlockedException) {
            // consider as fail-over disconnection, send message again.
        } else {
            throw thr;
        }
    }
}
On the host1 machine, Artemis is deployed as the master (node1). On the host2 machine, Artemis is deployed as the slave (node2).
I followed these steps to simulate fail-over:
1. node1 and node2 started
2. node1 started as the live server and node2 started as the backup server
3. killed node1; node2 became the live server
4. the client publish code threw ActiveMQUnBlockedException, which was handled by sending the message again
5. started node1 again; node1 became the live server and node2 became the backup again
6. the client publish code did not throw ActiveMQUnBlockedException, and a message was lost
I get the following stack trace during step 3 (node1 killed, node2 becoming the live server):
javax.jms.JMSException: AMQ219016: Connection failure detected. Unblocking a blocking call that will never get a response
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:540)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:434)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.sessionStop(ActiveMQSessionContext.java:470)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.stop(ClientSessionImpl.java:1121)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.stop(ClientSessionImpl.java:1110)
at org.apache.activemq.artemis.jms.client.ActiveMQSession.stop(ActiveMQSession.java:1244)
at org.apache.activemq.artemis.jms.client.ActiveMQConnection.stop(ActiveMQConnection.java:339)
at org.springframework.jms.connection.SingleConnectionFactory$SharedConnectionInvocationHandler.localStop(SingleConnectionFactory.java:644)
at org.springframework.jms.connection.SingleConnectionFactory$SharedConnectionInvocationHandler.invoke(SingleConnectionFactory.java:577)
at com.sun.proxy.$Proxy5.close(Unknown Source)
at com.eu.amq.failover.test.ProducerNodeTest.sendMessageInternal(ProducerNodeTest.java:133)
at com.eu.amq.failover.test.ProducerNodeTest.sendMessage(ProducerNodeTest.java:110)
at com.eu.amq.failover.test.ProducerNodeTest.main(ProducerNodeTest.java:90)
The ActiveMQUnBlockedException you're getting is coming from Spring's invocation of javax.jms.Connection#stop. It's not related to sending a message. Re-sending a message when you get this specific exception could result in a duplicate message.
Ultimately your problem is directly related to setting blockOnDurableSend=false. This tells the client to "fire and forget." In other words the client won't wait for a response from the broker to ensure the message actually made it successfully. This lack of waiting increases throughput but decreases reliability.
If you really want to mitigate potential message loss you have two main options (a sketch of the second follows this list):
1. Set blockOnDurableSend=true. This will reduce message throughput, but it's the simplest way to guarantee the message arrived at the broker successfully.
2. Use a CompletionListener. This allows you to keep blockOnDurableSend=false, but the application will still be informed if there are problems sending the message, albeit asynchronously. This feature was added in JMS 2 specifically for this kind of scenario. See the JavaDoc for more details.
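A minimal sketch of option 2, adapted from the sendMessageInternal method in the question (JMS 2.0 classic API; the listener bodies are placeholders):
import javax.jms.CompletionListener;
import javax.jms.Message;

final TextMessage textMessage = session.createTextMessage(message);
producer.send(textMessage, new CompletionListener() {
    @Override
    public void onCompletion(Message msg) {
        // The broker confirmed the send; nothing more to do.
    }

    @Override
    public void onException(Message msg, Exception exception) {
        // The send failed asynchronously; log and/or re-send here.
    }
});
Note that Session.close() blocks until any incomplete asynchronous sends have completed, so the try-with-resources in the question remains safe.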

Vertx Java Client throwing "SMF AD bind response error" while connecting solace vmr Server

I am trying to connect to a Solace VMR server and deliver messages from a Java client called Vert.x AMQP Bridge.
I am able to connect to the Solace VMR server, but after connecting I am not able to send messages to it.
I am using the sender code below from the Vert.x client.
public class Sender extends AbstractVerticle {
    private int count = 1;

    // Convenience method so you can run it in your IDE
    public static void main(String[] args) {
        Runner.runExample(Sender.class);
    }

    @Override
    public void start() throws Exception {
        AmqpBridge bridge = AmqpBridge.create(vertx);
        // Start the bridge, then use the event loop thread to process things thereafter.
        bridge.start("13.229.207.85", 21196, "UserName", "Password", res -> {
            if (!res.succeeded()) {
                System.out.println("Bridge startup failed: " + res.cause());
                return;
            }
            // Set up a producer using the bridge, send a message with it.
            MessageProducer<JsonObject> producer =
                    bridge.createProducer("T/GettingStarted/pubsub");
            // Schedule sending of a message every second
            System.out.println("Producer created, scheduling sends.");
            vertx.setPeriodic(1000, v -> {
                JsonObject amqpMsgPayload = new JsonObject();
                amqpMsgPayload.put(AmqpConstants.BODY, "myStringContent" + count);
                producer.send(amqpMsgPayload);
                System.out.println("Sent message: " + count++);
            });
        });
    }
}
I am getting the error below:
Bridge startup failed: io.vertx.core.impl.NoStackTraceThrowable: Error{condition=amqp:not-found, description='SMF AD bind response error', info={solace.response_code=503, solace.response_text=Unknown Queue}}
Apr 27, 2018 3:07:29 PM io.vertx.proton.impl.ProtonSessionImpl
WARNING: Receiver closed with error
io.vertx.core.impl.NoStackTraceThrowable: Error{condition=amqp:not-found, description='SMF AD bind response error', info={solace.response_code=503, solace.response_text=Unknown Queue}}
I have created the queue and the topic correctly in the Solace VMR, but I am not able to send/receive messages. Am I missing any configuration on the Solace VMR server side? Is any code change required in the Vert.x sender code above? I get the error trace above when delivering a message. Can someone help with this?
Vert.x AMQP Bridge Java client: https://vertx.io/docs/vertx-amqp-bridge/java/
There are a few different reasons why you may be encountering this error.
It could be that the client is not authorized to publish guaranteed messages. To fix this, you need to enable "guaranteed endpoint create" in the client-profile on the Solace router side.
It may also be that the application is using reply handling, which is not currently supported by the Solace router. Support for this will be added in the 8.11 release of the Solace VMR. A workaround is to set ReplyHandlingSupport to false:
new AmqpBridgeOptions().setReplyHandlingSupport(false);
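For example, the option can be applied when the bridge is created, roughly like this (a sketch based on the Sender code above):
AmqpBridgeOptions options = new AmqpBridgeOptions().setReplyHandlingSupport(false);
AmqpBridge bridge = AmqpBridge.create(vertx, options);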
There is also a known issue in the Solace VMR which causes this error when unsubscribing from a durable topic endpoint. A fix for this issue will also be in the 8.11 release of the Solace VMR. A workaround for this is to disconnect the client without first unsubscribing.

Riak Java HTTPClientAdapter TCP CLOSE_WAIT

TL;DR:
Lots of TCP connections in CLOSE_WAIT status are shutting down the server.
Setup:
riak_1.2.0-1_amd64.deb installed on Ubuntu12
Spring MVC 3.2.5
riak-client-1.1.0.jar
Tomcat7.0.51 hosted on Windows Server 2008 R2
JRE6_45
Full Description:
How do I ensure that the Java RiakClient is properly cleaning up its connections so that I'm not left with an abundance of CLOSE_WAIT TCP connections?
I have a Spring MVC application which uses the Riak java client to connect to the remote instance/cluster.
We are seeing a lot of TCP Connections on the server hosting the Spring MVC application, which continue to build up until the server can no longer connect to anything because there are no ports available.
Restarting the Riak cluster does not clean the connections up.
Restarting the webapp does clean up the extra connections.
We are using the HTTPClientAdapter and REST api.
When connecting to a relational database, I would normally clean up connections by either explicitly calling close on the connection, or by registering the datasource with a pool and transaction manager and then annotating my services with @Transactional.
But since I'm using the HTTPClientAdapter, I would have expected this to behave more like an HttpClient.
With an HttpClient, I would consume the response entity, with EntityUtils.consume(...), to ensure that everything is properly cleaned up.
HTTPClientAdapter does have a shutdown method, and I see it being called in the online examples.
But when I traced the method call through to the actual RiakClient, the method was empty.
Also, when I dig through the source code, nowhere does it ever close the stream on the HttpResponse or consume any response entity (as the standard Apache EntityUtils example does).
Here is an example of how the calls are being made.
private RawClient getRiakClientFromUrl(String riakUrl) {
    return new HTTPClientAdapter(riakUrl);
}

public IRiakObject fetchRiakObject(String bucket, String key, boolean useCache) {
    try {
        MethodTimer timer = MethodTimer.start("Fetch Riak Object Operation");
        //logger.debug("Fetching Riak Object {}/{}", bucket, key);
        RiakResponse riakResponse;
        riakResponse = riak.fetch(bucket, key);
        if (!riakResponse.hasValue()) {
            //logger.debug("Object {}/{} not found in riak data store", bucket, key);
            return null;
        }
        IRiakObject[] riakObjects = riakResponse.getRiakObjects();
        if (riakObjects.length > 1) {
            String error = "Got multiple riak objects for " + bucket + "/" + key;
            logger.error(error);
            throw new RuntimeException(error);
        }
        //logger.debug("{}", timer);
        return riakObjects[0];
    } catch (Exception e) {
        logger.error("Error fetching " + bucket + "/" + key, e);
        throw new RuntimeException(e);
    }
}
The only option I can think of is to create the RiakClient separately from the adapter so I can access the HttpClient and then the ConnectionManager.
I am currently working on switching over to the PBClientAdapter to see if that might help, but for the purposes of this question (and because the rest of the team may not like me switching for whatever reason), let's assume that I must continue to connect over HTTP.
So it's been almost a year, and I thought I would go ahead and post how I solved this problem.
The solution was to keep using the HTTPClientAdapter provided by the Java client, but to construct it from a RiakClient configured with connection pooling and a maximum number of connections. Here's a code example of how to do it.
First, we are on an older version of Riak, so here's the Maven dependency:
<dependency>
    <groupId>com.basho.riak</groupId>
    <artifactId>riak-client</artifactId>
    <version>1.1.4</version>
</dependency>
And here's the example:
public RawClient riakClient() {
    RiakConfig config = new RiakConfig(riakUrl);
    // httpConnectionsTimeToLive is in seconds, but timeout is in milliseconds
    config.setTimeout(30000);
    config.setUrl("http://myriakurl/");
    config.setMaxConnections(100); // Or whatever value you need
    RiakClient client = new RiakClient(config);
    return new HTTPClientAdapter(client);
}
I actually broke that up a bit in my implementation and used Spring to inject values; I just wanted to show a simplified example for it.
By setting the timeout to something less than the standard five minutes, the system will not hang on to the connections for too long (so, 5 minutes plus whatever you set the timeout to), which causes the connections to enter the CLOSE_WAIT status sooner.
And of course setting the max connections in the pool prevents the application from opening up tens of thousands of connections.

Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (i.e. try to reconnect). Is it possible to configure apache-camel to automatically reconnect?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
1. start the queue (and some producer)
2. start my consumer (it was getting messages as expected)
3. stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
4. start the queue (no new messages were received)
I am using Camel in Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the endpoint URI, and Camel will reconnect if the connection is lost.
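A minimal sketch of a consumer route with that flag (Java DSL; the host, exchange, and queue names are placeholders, not from the question):
import org.apache.camel.builder.RouteBuilder;

public class RabbitRecoveryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // automaticRecoveryEnabled is passed through to the underlying
        // RabbitMQ Java client connection factory.
        from("rabbitmq://localhost:5672/my.exchange"
                + "?queue=my.queue&automaticRecoveryEnabled=true")
            .to("log:messages");
    }
}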
For automatic RabbitMQ resource recovery (connections/channels/consumers/queues/exchanges/bindings) when failures occur, check out Lyra (which I authored). Example usage:
Config config = new Config()
        .withRecoveryPolicy(new RecoveryPolicy()
                .withMaxAttempts(20)
                .withInterval(Duration.seconds(1))
                .withMaxDuration(Duration.minutes(5)));
ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
The current camel-rabbitmq just creates a connection and channel when the consumer or producer is started, so it doesn't get a chance to catch the connection exception :(.