ActiveMQ Artemis 2.6: Set queue dead letter address using JMS management API

I'm trying to set the dead letter address for a queue via the JMS management API. From reading the latest Artemis docs it appears that I should be able to do this using the QueueControl.setDeadLetterAddress(...) method. See https://activemq.apache.org/artemis/docs/latest/management.html and search for "setDeadLetterAddress".
It is my understanding that the parameters of these methods should be found in the Artemis QueueControl javadocs here:
https://activemq.apache.org/artemis/docs/javadocs/javadoc-latest/org/apache/activemq/artemis/api/core/management/QueueControl.html
However, that documentation does not have any mention of a setDeadLetterAddress method or what parameters it might accept.
Does the QueueControl.setDeadLetterAddress method still exist and can it be called from the JMSManagementHelper.putOperationInvocation(...) method?
Many thanks!

Looking at the QueueControlImpl class code it is clear that the setDeadLetterAddress operation is no longer present. An operation in the ActiveMQServerControlImpl class named addAddressSettings does provide the capability to set the DLA for a queue (as well as plenty of other settings).
For example:
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Message;
import javax.jms.Queue;
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.api.core.management.ResourceNames;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.management.JMSManagementHelper;
import org.apache.activemq.artemis.core.settings.impl.AddressFullMessagePolicy;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
import org.apache.activemq.artemis.core.settings.impl.SlowConsumerPolicy;

Queue managementQueue = ActiveMQJMSClient.createQueue("activemq.management");
Queue replyQueue = ActiveMQJMSClient.createQueue("management.reply");
JMSContext context = connectionFactory.createContext();
JMSConsumer consumer = context.createConsumer(replyQueue);
JMSProducer producer = context.createProducer();
producer.setJMSReplyTo(replyQueue);

// Using AddressSettings isn't required, but is provided for clarity.
AddressSettings settings = new AddressSettings()
        .setDeadLetterAddress(new SimpleString("my.messages.dla"))
        .setMaxDeliveryAttempts(5)
        .setExpiryAddress(new SimpleString("ExpiryAddress"))
        .setExpiryDelay(-1L)          // No expiry
        .setLastValueQueue(false)
        .setMaxSizeBytes(-1)          // No max
        .setPageSizeBytes(10485760)
        .setPageCacheMaxSize(5)
        .setRedeliveryDelay(500)
        .setRedeliveryMultiplier(1.5)
        .setMaxRedeliveryDelay(2000)
        .setRedistributionDelay(1000)
        .setSendToDLAOnNoRoute(true)
        .setAddressFullMessagePolicy(AddressFullMessagePolicy.PAGE)
        .setSlowConsumerThreshold(-1) // No slow consumer checking
        .setSlowConsumerCheckPeriod(1000)
        .setSlowConsumerPolicy(SlowConsumerPolicy.NOTIFY)
        .setAutoCreateJmsQueues(true)
        .setAutoDeleteJmsQueues(false)
        .setAutoCreateJmsTopics(true)
        .setAutoDeleteJmsTopics(false)
        .setAutoCreateQueues(true)
        .setAutoDeleteQueues(false)
        .setAutoCreateAddresses(true)
        .setAutoDeleteAddresses(false);

Message m = context.createMessage();
JMSManagementHelper.putOperationInvocation(m, ResourceNames.BROKER, "addAddressSettings",
        "my.messages",
        settings.getDeadLetterAddress().toString(),
        settings.getExpiryAddress().toString(),
        settings.getExpiryDelay(),
        settings.isLastValueQueue(),
        settings.getMaxDeliveryAttempts(),
        settings.getMaxSizeBytes(),
        settings.getPageSizeBytes(),
        settings.getPageCacheMaxSize(),
        settings.getRedeliveryDelay(),
        settings.getRedeliveryMultiplier(),
        settings.getMaxRedeliveryDelay(),
        settings.getRedistributionDelay(),
        settings.isSendToDLAOnNoRoute(),
        settings.getAddressFullMessagePolicy().toString(),
        settings.getSlowConsumerThreshold(),
        settings.getSlowConsumerCheckPeriod(),
        settings.getSlowConsumerPolicy().toString(),
        settings.isAutoCreateJmsQueues(),
        settings.isAutoDeleteJmsQueues(),
        settings.isAutoCreateJmsTopics(),
        settings.isAutoDeleteJmsTopics(),
        settings.isAutoCreateQueues(),
        settings.isAutoDeleteQueues(),
        settings.isAutoCreateAddresses(),
        settings.isAutoDeleteAddresses());
producer.send(managementQueue, m);
Message response = consumer.receive();
// addAddressSettings returns void, but the reply will also carry errors if the
// operation name or parameters are wrong.
log.info("addAddressSettings Reply: {}", JMSManagementHelper.getResult(response));

Related

Spring Integration's `ImapIdleAdapter` doesn't mark messages as read

I have declared a mail listener with Spring Integration like this:
@Bean
public IntegrationFlow mailListener() {
    return IntegrationFlows.from(
            Mail.imapIdleAdapter(getUrl())
                    .searchTermStrategy((s, f) -> new FlagTerm(new Flags(Flags.Flag.SEEN), false))
                    .shouldMarkMessagesAsRead(true)
                    .shouldDeleteMessages(false)
                    .get())
            .<Message>handle((payload, header) -> handle(payload))
            .get();
}
In my test mail account I have a few 'unread' and a few 'read' messages. When I start the application, I see in the logs that the listener fetches all of the 'unread' messages over and over again, without ever marking them as 'read'.
Given that I specified shouldMarkMessagesAsRead(true) I would expect the Adapter to mark a message as read after fetching it.
Am I understanding and/or doing something wrong?
Thanks to Artem Bilan's hint on activating debug output I found out that the mailbox was opened in read-only mode.
And thanks to Gary Russell's answer to another question I tried removing the .get() call on the ImapIdleChannelAdapterSpec:
@Bean
public IntegrationFlow mailListener() {
    return IntegrationFlows.from(
            Mail.imapIdleAdapter(getUrl())
                    .shouldMarkMessagesAsRead(true)
                    .shouldDeleteMessages(false))
            .<Message>handle((payload, header) -> handle(payload))
            .get();
}
Now the mailbox gets opened in read-write mode and marking the messages with the SEEN flag works fine.
I also don't actually need the custom SearchTermStrategy now, as Artem Bilan already suggested.
In this type of situation we recommend setting:
.javaMailProperties(p -> p.put("mail.debug", "true"))
on that Mail.imapIdleAdapter().
Your e-mail server probably does not really support the \Seen flag, so the message is marked as read via some other flag.
So, with that mail debugging option you should see some interesting info in your logs.
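For example, a sketch of the property applied to the adapter from the question (getUrl() and handle() are the question's own methods):
@Bean
public IntegrationFlow mailListener() {
    return IntegrationFlows.from(
            Mail.imapIdleAdapter(getUrl())
                    // Dumps the raw IMAP conversation to standard out, including
                    // whether folders are opened READ-ONLY or READ-WRITE.
                    .javaMailProperties(p -> p.put("mail.debug", "true"))
                    .shouldMarkMessagesAsRead(true)
                    .shouldDeleteMessages(false))
            .<Message>handle((payload, header) -> handle(payload))
            .get();
}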
The logic in our default DefaultSearchTermStrategy around Flags.Flag.SEEN looks like this:
if (supportedFlags.contains(Flags.Flag.SEEN)) {
    NotTerm notSeen = new NotTerm(new FlagTerm(new Flags(Flags.Flag.SEEN), true));
    if (searchTerm == null) {
        searchTerm = notSeen;
    }
    else {
        searchTerm = new AndTerm(searchTerm, notSeen);
    }
}
Consider whether you really need a custom strategy and why the default one is not enough for you: https://docs.spring.io/spring-integration/docs/current/reference/html/mail.html#search-term

using jmx monitor kafka topic

I am using JMX to monitor a Kafka topic.
val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi");
val jmxc = JMXConnectorFactory.connect(url, null);
val mbsc = jmxc.getMBeanServerConnection();
val messageCountObj = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
val messagesInPerSec = mbsc.getAttribute(messageCountObj,"MeanRate")
Using this code I can get the MeanRate of "mytopic" on broker1, but I have 10 brokers. How can I get the MeanRate of "mytopic" from all of my brokers?
I have tried "service:jmx:rmi:///jndi/rmi://broker1:9393,broker2:9393,broker3:9393/jmxrmi"
but got an error :(
It would be nice if it were that simple ;)
There's no way to do this as you outlined. You will need to make a separate connection to each broker.
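A minimal Java sketch of that per-broker approach (the broker host names are placeholders; the ObjectName is the one from the question):
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

String[] brokers = {"broker1", "broker2", "broker3"}; // ...through broker10
ObjectName on = new ObjectName(
        "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
Map<String, Double> meanRates = new HashMap<>();
for (String broker : brokers) {
    JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://" + broker + ":9393/jmxrmi");
    // One connection per broker; try-with-resources closes it when done.
    try (JMXConnector jmxc = JMXConnectorFactory.connect(url, null)) {
        MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
        meanRates.put(broker, (Double) mbsc.getAttribute(on, "MeanRate"));
    }
}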
One possible solution would be to use MBeanServer Federation, which would register proxies for each of your brokers in one MBeanServer. If you did this on broker1, you could connect to service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi and query the stats for all your brokers in one go, but you would need to query 10 different ObjectNames, read the value for each, and then compute the MeanRate yourself. [Java] Pseudo code:
ObjectName wildcard = new ObjectName("*:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
double totalRate = 0d;
int respondingBrokers = 0;
for (ObjectName on : mbsc.queryNames(wildcard, null)) {
    totalRate += (Double) mbsc.getAttribute(on, "MeanRate");
    respondingBrokers++;
}
// Average of the per-broker mean rates: totalRate / respondingBrokers
Note: no exception handling, and I am assuming the rate type is a Double.
You could also create and register a custom MBean that computed the aggregate mean on the federated broker.
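A hedged sketch of what such an aggregating MBean might look like (all names here are illustrative, not a Kafka or OpenDMK API):
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean naming convention: Foo implements FooMBean.
public interface TopicMeanRateAggregatorMBean {
    double getAggregateMeanRate() throws Exception;
}

public class TopicMeanRateAggregator implements TopicMeanRateAggregatorMBean {
    private final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    private final ObjectName wildcard;

    public TopicMeanRateAggregator(String topic) throws Exception {
        // Matches the federated per-broker proxies registered in this server.
        this.wildcard = new ObjectName(
                "*:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=" + topic);
    }

    @Override
    public double getAggregateMeanRate() throws Exception {
        double total = 0d;
        int count = 0;
        for (ObjectName on : server.queryNames(wildcard, null)) {
            total += (Double) server.getAttribute(on, "MeanRate");
            count++;
        }
        return count == 0 ? 0d : total / count;
    }

    public static void main(String[] args) throws Exception {
        // Register so remote clients can read the aggregate in one attribute call.
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                new TopicMeanRateAggregator("mytopic"),
                new ObjectName("custom:type=TopicMeanRateAggregator,topic=mytopic"));
    }
}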
If you are Maven-oriented, you can build the OpenDMK from here.

How to use Flink streaming to process Data stream of Complex Protocols

I'm using Flink streaming to handle data traffic logs in a 3G network (GPRS Tunnelling Protocol), and I'm having trouble synthesizing the information belonging to a single user session.
For example: how do I map the start and end of one session? Is Flink streaming suited to handling complex protocols like this?
P.S.:
We capture the data exchanged between the SGSN and GGSN in a 3G network (using the GTP protocol with GTP-C/U messages). A session is started when the SGSN sends a CreateReq (TEID, Seq, IMSI, TEID_dl, TEID_data_dl) message and the GGSN responds with a CreateRsp (TEID_dl, Seq, TEID_ul, TEID_data_ul) message.
After the session is established, other GTP-C messages (e.g. UpdateReq, DeleteReq) sent from the SGSN to the GGSN use TEID_ul and the response messages use TEID_dl; GTP-U messages use TEID_data_ul (SGSN -> GGSN) and TEID_data_dl (GGSN -> SGSN). GTP-U messages contain information such as the AppID (facebook, twitter, web), URL, ...
Finally, I want to process the continuous log data stream and map the GTP-C and GTP-U messages of the same user (IMSI) to produce a report.
I've tried this:
val sessions = createReqs.connect(createRsps).flatMap(new CoFlatMapFunction[CreateReq, CreateRsp, Session] {
  // holds CreateReqs indexed by (teid_dl, seq)
  private val createReqs = mutable.HashMap.empty[(String, String), CreateReq]
  // holds CreateRsps indexed by (teid, seq)
  private val createRsps = mutable.HashMap.empty[(String, String), CreateRsp]

  override def flatMap1(req: CreateReq, out: Collector[Session]): Unit = {
    val key = (req.teid_dl, req.header.seqNum)
    val oRsp = createRsps.get(key)
    if (!oRsp.isEmpty) {
      val rsp = oRsp.get
      println("OK")
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createRsps.remove(key)
    } else {
      createReqs.put(key, req)
    }
  }

  override def flatMap2(rsp: CreateRsp, out: Collector[Session]): Unit = {
    val key = (rsp.header.teid, rsp.header.seqNum)
    val oReq = createReqs.get(key)
    if (!oReq.isEmpty) {
      val req = oReq.get
      out.collect(new Session(rsp.header.time, req.imsi, req.teid_dl, req.teid_ddl, rsp.teid_upl, rsp.teid_dupl, req.rat, req.apn))
      createReqs.remove(key)
    } else {
      createRsps.put(key, rsp)
    }
  }
}).print()
This code always returns an empty result, even though the input stream contains CreateReq and CreateRsp messages of the same session, appearing very close together (within 1 second). When I debug, oReq.isEmpty == true every time.
What am I doing wrong?
To be honest it is a bit difficult to see through the telco specifics here, but if I understand correctly you have at least 3 streams, the first two being the CreateReq and the CreateRsp streams.
To detect the establishment of a session I would use the ConnectedDataStream abstraction to share state between the two aforementioned streams. Check out this example for usage or the related Flink docs.
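For illustration, a minimal self-contained Java sketch of that pattern (the Req/Rsp/Session types and the correlation key are hypothetical simplifications, not the actual GTP fields; in production you would use Flink's keyed state instead of plain maps):
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class SessionMatchSketch {

    public static class Req { public String key; public String imsi; }
    public static class Rsp { public String key; public String teidUl; }
    public static class Session { public String imsi; public String teidUl; }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Req> reqs = env.fromElements(new Req()); // stand-in sources
        DataStream<Rsp> rsps = env.fromElements(new Rsp());

        reqs.keyBy(r -> r.key)
            // Keying both streams identically makes matching elements arrive at
            // the same parallel instance, so the buffered state is visible.
            .connect(rsps.keyBy(r -> r.key))
            .flatMap(new CoFlatMapFunction<Req, Rsp, Session>() {
                private final Map<String, Req> pendingReqs = new HashMap<>();
                private final Map<String, Rsp> pendingRsps = new HashMap<>();

                @Override
                public void flatMap1(Req req, Collector<Session> out) {
                    Rsp rsp = pendingRsps.remove(req.key);
                    if (rsp != null) {
                        out.collect(session(req, rsp));
                    } else {
                        pendingReqs.put(req.key, req); // buffer until the Rsp arrives
                    }
                }

                @Override
                public void flatMap2(Rsp rsp, Collector<Session> out) {
                    Req req = pendingReqs.remove(rsp.key);
                    if (req != null) {
                        out.collect(session(req, rsp));
                    } else {
                        pendingRsps.put(rsp.key, rsp); // buffer until the Req arrives
                    }
                }

                private Session session(Req req, Rsp rsp) {
                    Session s = new Session();
                    s.imsi = req.imsi;
                    s.teidUl = rsp.teidUl;
                    return s;
                }
            })
            .print();

        env.execute("session-match-sketch");
    }
}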
Is this what you are trying to achieve?

Understanding Esper IO Http example

What is the trigger event here?
How do I plug this into the Esper engine to receive events?
What URI should be passed? What should engineURI look like?
Is it the remote location of the Esper engine?
ConfigurationHTTPAdapter adapterConfig = new ConfigurationHTTPAdapter();
// add additional configuration
Request request = new Request();
request.setStream("TriggerEvent");
request.setUri("http://localhost:8077/root");
adapterConfig.getRequests().add(request);
// start adapter
EsperIOHTTPAdapter httpAdapter = new EsperIOHTTPAdapter(adapterConfig, "engineURI");
httpAdapter.start();
// destroy the adapter when done
httpAdapter.destroy();
I changed the stream from TriggerEvent to HttpEvents and I get the exception given below:
ConfigurationException: Event type by name 'HttpEvents' not found
The "engineURI" is a name for the CEP engine instance and has nothing to do with the EsperIO http transport. Its a name for looking up what engines exists and finding the engine by name. So any text can be used here and the default CEP engine is named "default" when you allocate the default one.
You should define the event type of the event you expect to receive via HTTP. Sample code is at http://svn.codehaus.org/esper/esper/trunk/esperio-socket/src/test/java/com/espertech/esperio/socket/TestSocketAdapterCSV.java
You need to declare your event type(s) either in Java or through Esper's EPL statements.
You are getting the exception because your type is not defined.
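For instance, a sketch using the map-based configuration API (the property names here are illustrative assumptions):
import java.util.HashMap;
import java.util.Map;
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;

// Declare the event type up front so "stream=HttpEvents" in an incoming
// HTTP request can be resolved by the engine.
Map<String, Object> props = new HashMap<>();
props.put("date", String.class);
props.put("src", String.class);
props.put("dst", String.class);

Configuration config = new Configuration();
config.addEventType("HttpEvents", props);

// "engineURI" is just the engine instance name; pass the same name
// to the EsperIOHTTPAdapter constructor.
EPServiceProvider engine = EPServiceProviderManager.getProvider("engineURI", config);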
Then you can start sending events, specifying the type you are sending in the HTTP request. For example, here is a bit of code in Python:
import urllib
import datetime

cepurl = "http://localhost:8084"
param = urllib.urlencode({'stream': 'DataEvent',
                          'date': datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                          'src': data["ipsrc"],
                          'dst': data["ipdst"],
                          'type': data["type"]})
# sending event:
f = urllib.urlopen(cepurl + "/sendevent?" + param)
rez = f.read()
In Java this would probably be something like this:
SupportHTTPClient client = new SupportHTTPClient();
client.request(8084, "sendevent", "stream", "DataEvent", "date", "mydate");

How to resend a message from the JBoss 4.2.2 message queue after retry expired

Is there a way to resend expired messages in a JBoss 4.2.2 message queue? The issue is they exceeded their retry amounts, but now the problem is fixed, so is there a way to resend them?
In JBoss 3 they were just text files that you could move around. Now that it is stored in a database, how can you do it?
Have a look at Hermes JMS. It's an open source tool for browsing JMS queues and topics. It can replay messages that end up on the broker's undeliverable queue.
This is what I ended up doing:
import java.io.Serializable;
import java.util.Date;
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.jboss.mq.SpyObjectMessage; // JBossMQ client class

Hashtable t = new Hashtable();
t.put(Context.PROVIDER_URL, "localhost:1099");
t.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
Context ctx = new InitialContext(t);
Queue q = (Queue) ctx.lookup("/queue/DLQ");

ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ConnectionFactory");
Connection connection = cf.createConnection();
Session session = connection.createSession(true, 0);

MessageConsumer consumer = session.createConsumer(q);
connection.start();
SpyObjectMessage m;
Queue originalDestination = null;
// There can only be one in my case, but really you have to look it up every time.
MessageProducer producer = null;
while ((m = (SpyObjectMessage) consumer.receive(5000)) != null) {
    Object o = m.getObject();
    Date messageDate = new Date(m.getJMSTimestamp());
    String originalQueue = m.getStringProperty("JBOSS_ORIG_DESTINATION");
    if (originalDestination == null) {
        originalDestination = (Queue) ctx.lookup("/queue/"
                + originalQueue.substring(originalQueue.indexOf('.') + 1));
        producer = session.createProducer(originalDestination);
    }
    producer.send(session.createObjectMessage((Serializable) o));
    m.acknowledge();
}
// session.commit(); // Uncomment to make this real.
connection.close();
ctx.close();
Note: I work for CodeStreet
Our 'ReplayService for JMS' product is built exactly for this use case: searching for and retrieving previously published messages (n-times delivery); JMS itself is really designed for one-time delivery.
With ReplayService for JMS, you would configure a WebLogic recording to record all messages published to your topic or queue. Through a Web-based GUI, you can then search for individual messages (by substring, XPath or JMS Selector) and then replay them again to the original JMS destination.
See http://www.codestreet.com/marketdata/jms/jms_details.php for further details.