I'm using ActiveMQ 5.3.1.
My configuration works fine for classic async messaging.
I'm trying to use a QueueRequestor.
The request message is effectively sent and received.
But when it's time to reply on the temporary queue, this exception is raised:
javax.jms.InvalidDestinationException: Cannot publish to a deleted Destination: temp-queue://ID:......
The destination doesn't exist.
I'm using the default ActiveMQ configuration.
Any ideas?
I just found my answer.
The QueueRequestor implementation sends and receives on the same JMS session.
That's why the requestor's receiver never saw any message, and why the temporary destination couldn't be used to publish the reply.
My solution is to create a requestor with two sessions, as sketched below.
The actual implementation will be very similar to the one in the blog post above.
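A minimal sketch of the two-session idea using the javax.jms API (hedged: connectionFactory and requestQueue are assumed to exist already; the names are illustrative, not the exact code from the blog post):

// One session sends the request; a second session owns the consumer
// on the temporary reply queue, so the reply can actually be consumed.
QueueConnection connection = connectionFactory.createQueueConnection();
connection.start();

QueueSession sendSession = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSession replySession = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

TemporaryQueue replyQueue = replySession.createTemporaryQueue();
MessageConsumer replyConsumer = replySession.createConsumer(replyQueue);

QueueSender sender = sendSession.createSender(requestQueue);
TextMessage request = sendSession.createTextMessage("ping");
request.setJMSReplyTo(replyQueue);
sender.send(request);

Message reply = replyConsumer.receive(5000); // wait up to 5 seconds for the reply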
I am using Mirth Connect 3.5.0.8232.
I have a Database Reader as source connector and a JavaScript writer as destination connector. I decided to put some fancy code in the destination, doing four separate things, which should follow one after the other. Basically I just wrote the code and it seemed to me that it was too long and too clumsy, so I decided to split it into 4 destinations that would be daisy-chained, via the "Wait for previous destination" option.
The question is: how do I interrupt this chain of execution if an error occurs on one of the destinations?
I found a JIRA issue from 2013 saying that errors occurring in the body of a Destination Connector do not prevent the message from going to all the other Destinations. It also states that the 2.X behavior is still current, i.e. an error occurring in the Destination Transformer will actually stop the message from propagating.
I tried throwing errors in both the Destination body and the Destination Response Transformer, and in both cases the message continued to the other Destinations. I also tried returning ResponseFactory.getErrorResponse from the Destination body, with no luck, and setting responseStatus to ERROR in the Destination Response Transformer, to no avail. Did they mean the normal Transformer/Filter?
Also, maybe splitting a task into 4 distinct destinations is not what destinations were created for in the first place? The documentation seems to state that destinations are basically what the actual word Destination stands for.
If that is the case, maybe there are better ways of organizing the code functionally in Mirth? I believe including external JS files is not allowed in a JavaScript Writer; even if it were, I would prefer everything to sit inside the Channel itself and be exportable/importable as a single file.
Thank you.
Yep, when an error is thrown from a filter/transformer, it's considered truly "exceptional" and so message flow is stopped (subsequent destinations in the same chain are not executed).
If an error is thrown from the actual destination dispatcher or from the response transformer, that destination is marked as ERROR, but subsequent destinations will still be executed.
You can still stop the message flow if you want though. Use filters on your subsequent destinations:
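For example, a hedged filter-script sketch (the destination name 'Destination 1' is an assumption; use the actual name of the upstream destination in your channel):

// Filter on a subsequent destination: only let the message through
// if the previous destination did not end in ERROR.
var previous = responseMap.get('Destination 1');
return previous == null || previous.getStatus() != Status.ERROR;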
I have the following test scenario for a Spring Cloud Stream based application.
My application has one topic with two queues. The binding key for the first queue is "cities"; the binding key for the second queue is "persons".
How do I set the routing key for the Spring Cloud Stream Rabbit producer, so as to determine which queue the message will be consumed from?
This is my binding configuration:
spring.config.name=streaming
spring.cloud.stream.bindings.citiesChannel.destination=streamInput
spring.cloud.stream.bindings.citiesChannel.group=cities
spring.cloud.stream.rabbit.bindings.citiesChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.citiesChannel.consumer.bindingRoutingKey=cities
spring.cloud.stream.bindings.personsChannel.destination=streamInput
spring.cloud.stream.bindings.personsChannel.group=persons
spring.cloud.stream.rabbit.bindings.personsChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.personsChannel.consumer.bindingRoutingKey=persons
spring.cloud.stream.bindings.producingChannel.destination=streamInput
The only way I found to distinguish where the message will be sent (the cities or persons queue) when publishing to producingChannel is the "spring.cloud.stream.bindings.producingChannel.producer.requiredGroups" property, but this is highly impractical, because I don't want to know anything about the queue where my message is going to land... that is an AMQP antipattern.
I want nothing simpler than functionality similar to the RabbitTemplate.setRoutingKey(String routingKey) method when publishing to producingChannel... :-(
Use routingKeyExpression on the producer side - see the documentation.
Since it's an expression, you'll need quotes: 'cities'; or, if the same producer sends to both queues, something like headers['whereToSendHeader'].
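In properties form, a sketch (whereToSendHeader is an illustrative header name, not something from your configuration):

spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='cities'
# or, to route per message based on a header:
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression=headers['whereToSendHeader']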
For those of us using YAML:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          somechannel:
            producer:
              bindingRoutingKey: routingKey
              routing-key-expression: '"routingKey"'
Note the bindingRoutingKey above; it is used if you expect your queue to be bound at producer startup when using spring.cloud.stream.bindings.someChannel.producer.requiredGroups.
Yes, thanks a lot.
Adding
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='persons'
causes messages to land in a streaming.persons queue and
spring.cloud.stream.rabbit.bindings.producingChannel.producer.routingKeyExpression='cities'
in the streaming.cities queue. Exactly what I wanted.
Thank you. It seems we're going to use Spring Cloud Stream in the project after all... :-)
Another (AMQP-agnostic) way of doing this is by using the dynamic destination support, i.e. http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#dynamicdestination
The main difference in the case of Rabbit is that you'll end up having 2 separate exchanges ('cities' and 'persons') - so it does not leverage the routing support there, but it is portable to other messaging systems such as Kafka for example.
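A hedged sketch of the dynamic-destination approach (bean wiring is assumed; BinderAwareChannelResolver is the resolver Spring Cloud Stream provides for this):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.messaging.support.MessageBuilder;

public class DynamicSender {

    @Autowired
    private BinderAwareChannelResolver resolver;

    // Sends the payload to a destination resolved (and created if
    // necessary) at runtime, e.g. "cities" or "persons".
    public void send(String destination, String payload) {
        resolver.resolveDestination(destination)
                .send(MessageBuilder.withPayload(payload).build());
    }
}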
We are using hornetq-core 2.2.21.Final standalone. After reading a non-transactional message, the message still remains in the queue although it was acknowledged.
The session is created using:
sessionFactory.createSession(true, true, 0)
Locator settings:
val transConf = new TransportConfiguration(classOf[NettyConnectorFactory].getName,map)
val locator = HornetQClient.createServerLocatorWithoutHA(transConf)
locator.setBlockOnDurableSend(false)
locator.setBlockOnNonDurableSend(false)
locator.setAckBatchSize(0) // also tried without this setting
locator.setConsumerWindowSize(0)// also tried without this setting
The message is acknowledged using message.acknowledge().
I think the problem might be two queues on the same address.
I also tried setting the message expiration, but it didn't help; messages are still piling up in the queue.
Please advise.
It seems you are using the core API. Are you explicitly calling acknowledge on the messages?
If you have two queues on the same address, an ack will only acknowledge the message on the queue you are consuming from. In that case the system is acting normally.
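For comparison, a minimal consume-and-acknowledge sketch with the core API, in Java (the queue name is an assumption; locator is the ServerLocator from the question):

ClientSessionFactory factory = locator.createSessionFactory();
ClientSession session = factory.createSession(true, true, 0); // auto-commit sends and acks, ack batch size 0
session.start(); // delivery does not begin until the session is started

ClientConsumer consumer = session.createConsumer("exampleQueue");
ClientMessage message = consumer.receive(1000);
if (message != null) {
    message.acknowledge(); // with ackBatchSize 0 the ack reaches the server immediately
}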
I am trying to configure a syslog-ng destination to use unix-stream sockets for inter-process communication. I have gone through this documentation: http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.3-guides/en/syslog-ng-ose-v3.3-guide-admin-en/html/configuring_destinations_unixstream.html .
The relevant part of my syslog.conf is as follows:
source s_dxtcp { tcp(ip(0.0.0.0) port(514)); };
filter f_request {program("dxall");};
destination d_dxall_unixstream {unix-stream("/var/run/logs/all.log");};
log {source(s_dxtcp); filter(f_request); destination(d_dxall_unixstream);};
When I restart my syslog-ng server, I get the following message:
Connection failed; fd='11', server='AF_UNIX(/var/run/logs/all.log)',
local='AF_UNIX(anonymous)', error='Connection refused (111)'
Initiating connection failed, reconnecting; time_reopen='60'
What does this error signify? How can I use unix sockets with syslog-ng? Could anyone help me out?
So far I have not been able to create a Unix domain socket for inter-process communication, but I found a way around it. All I want is one-way communication to send data produced by syslog-ng to a running Java program (a process, I could say). I achieved this by using named pipes in syslog-ng. The documentation for this is http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.4-guides/en/syslog-ng-ose-v3.4-guide-admin/html-single/index.html#configuring-destinations-pipe .
Reading from a named pipe is the same as reading from a normal file. One important point to note is that the reader process (here, the Java program) should be started before syslog-ng (the writer, which writes log messages to the named pipe).
The reason: the writer will block until there is a reader, and the absence of a reader will lead to the loss of messages that accumulated before the reader started. There should also be only one instance of the reader. If there are multiple readers, the second reader will get a null pointer exception, as the message it wants to read has already been read by the first reader. Kindly note that this is from my experience; let me know if I am wrong.
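For reference, a minimal pipe-destination sketch in the same style as the config above (the pipe path is illustrative, and the FIFO must be created beforehand, e.g. with mkfifo):

destination d_dxall_pipe { pipe("/var/run/logs/all.pipe"); };
log { source(s_dxtcp); filter(f_request); destination(d_dxall_pipe); };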
Is there any way to purge an outgoing queue? It doesn't appear that I can do it with the MMC snap-in, and when I try to purge it in code I get a "format name is invalid" error. The computer the messages are being sent to does not exist, so they will never be sent; however, the queue has filled up the maximum storage space for MSMQ, so every time my application tries to send another message I get the insufficient-resources exception.
I've tried the following formats and they all fail with the "format name is invalid" exception:
DIRECT=OS:COMPUTER\private$\queuename
OS:COMPUTER\private$\queuename
COMPUTER\private$\queuename
You should be able to purge it manually from the MMC snap-in. MSMQ gets very stingy when it reaches its storage limits, so a lot of operations will fail with "permission denied" and things like that.
The long-term solution obviously is to modify the configuration so there is enough storage space for your particular usage patterns.
Edit: You might be running into a limitation in the managed API related to admin capabilities and remote queues. Take a look at this article by Ingo Rammer. It even includes a p-invoke example.
It is possible to use managed code to purge an outgoing queue:
using (var msgQueue = new MessageQueue(GetPrivateMqPath(queueName, remoteIP), QueueAccessMode.ReceiveAndAdmin))
{
msgQueue.Purge();
}
in which GetPrivateMqPath is:
private static string GetPrivateMqPath(string queueName, string remoteIP)
{
    if (!string.IsNullOrEmpty(remoteIP))
        return String.Format("FORMATNAME:DIRECT=TCP:{0}\\private$\\{1}", remoteIP, queueName);
    else
        return @".\private$\" + queueName;
}
QueueAccessMode.ReceiveAndAdmin points to the outgoing queue.
You could try FORMATNAME:DIRECT=OS:computer\PRIVATE$\queuename.