Sending multiple messages to JMS queue in Mule - queue

I am new to Mule and I am using RabbitMQ. I have configured the AMQP connector in Mule Studio.
I am able to run a flow that reads a message from an HTTP endpoint payload and puts it on a queue.
Now I need to send multiple messages, say 1000, to that queue at a time. One option is to hit the URL in the browser that many times, but that is very time consuming. I want to send 1000 messages in one go. How can I do that in Mule, or how should I proceed?

It sounds like you're trying to load test your Mule app. I would use something like Apache JMeter: it lets you enter the URL of your endpoint, set how many times to call it, and configure many other, more advanced options.
A good blog post on using JMeter with Mule is available here: http://blogs.mulesoft.org/measuring-the-performance-of-your-mule-esb-application/
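If you just want a quick one-off burst without setting up JMeter, a plain HTTP client loop also works. A minimal sketch using Java 11's built-in HttpClient; the endpoint URL and payload format below are placeholders for your own:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BurstSender {
    // Placeholder: point this at your Mule HTTP endpoint.
    static final String ENDPOINT = "http://localhost:8081/enqueue";

    // Builds a distinct payload per request so messages are distinguishable.
    static String payload(int i) {
        return "message-" + i;
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (int i = 0; i < 1000; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(ENDPOINT))
                    .POST(HttpRequest.BodyPublishers.ofString(payload(i)))
                    .build();
            // send() blocks per request; sendAsync() would give more throughput.
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
    }
}
```

JMeter is still the better choice if you need timings, ramp-up, or result reports rather than just filling the queue.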

Related

Is it possible to have Gatling JMS listen for messages only?

I've been exploring the use of Gatling for JMS testing as part of broader performance testing of our application under test (AUT). I've played with the example found at https://gatling.io/docs/current/jms/ and have successfully had Gatling create a queue on my test ActiveMQ server and read the message.
However, our actual testing needs dictate that services in our app will create the messages on our ActiveMQ server. All I want my Gatling code to do is make REST calls to the services that generate the messages; the Gatling JMS code should then pick up the messages, parse them as appropriate, and, when I find a certain message, move on to the next part of the test.
As per the Gatling link above, "Currently, requestReply and send (fire and forget) requests are supported." Does this mean what I am trying to do is impossible? Does this mean I have to create the messages with Gatling, but not necessarily look for a reply?
If it is possible, I assume I could split the example I've been playing with into two separate exec actions, one to send and one to receive? But how?
Thanks!
No, it's not possible at the moment (Gatling 3.3).

Best way to write to Kafka from web site?

I know how to get data into Kafka either with a file agent or programmatically using any of the clients, but I'm asking from an architectural point of view...
It can't just be collecting HTTP logs.
I'm assuming that when someone clicks a link or does something of interest, we can use some kind of Ajax/JavaScript call to a microservice to capture the extra info that we want? That's not always "reliable" per se, but do we care?
Or, while the given action posts back to the server, do we simultaneously write to Kafka and perform the other action?
It's not clear from your question whether you are trying to collect all the clickstream logs from a set of web servers, or to selectively publish some data to Kafka from your web app, so I will answer both.
The easiest way to collect every web click is to configure your web servers to log to syslog (see http://archive.oreilly.com/pub/a/sysadmin/2006/10/12/httpd-syslog.html) and configure your syslog server to send data to Kafka (see https://www.balabit.com/documents/syslog-ng-ose-latest-guides/en/syslog-ng-ose-guide-admin/html/configuring-destinations-kafka.html). Alternatively, there are some more advanced features available in this Kafka connector for syslog-ng (see https://github.com/jcustenborder/kafka-connect-syslog). You can also write httpd logs to a file and use a Kafka file connector to publish them to Kafka (see https://docs.confluent.io/current/connect/connect-filestream/filestream_connector.html).
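For the file-based route, the stock FileStream source connector only needs a small properties file. A sketch with example values; the file path and topic name are placeholders for your own setup:

```properties
# connect-file-source.properties (example values)
name=httpd-log-source
connector.class=FileStreamSource
tasks.max=1
# Path to the access log your web server writes
file=/var/log/httpd/access_log
# Kafka topic to publish each log line to
topic=httpd-logs
```

Run it with the standalone Connect worker and each new log line becomes a Kafka record on the topic.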
If you just want your apps to send certain log data to Kafka directly, you can use the Kafka REST Proxy and publish with a simple HTTP POST from either your client-side JavaScript or your server-side logic (see https://docs.confluent.io/current/kafka-rest/docs/index.html).
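To make the REST Proxy route concrete, here is a minimal server-side sketch using Java's built-in HttpClient. The proxy address, topic name, and record fields are assumptions for illustration; the v2 JSON envelope and content type are what the REST Proxy documents:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KafkaRestPublish {
    // Hypothetical REST Proxy address and topic name.
    static final String PROXY = "http://localhost:8082";
    static final String TOPIC = "clickstream";

    // Builds the v2 JSON envelope the REST Proxy expects:
    // {"records":[{"value":<json>}]}
    static String envelope(String valueJson) {
        return "{\"records\":[{\"value\":" + valueJson + "}]}";
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(PROXY + "/topics/" + TOPIC))
                .header("Content-Type", "application/vnd.kafka.json.v2+json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        envelope("{\"page\":\"/home\",\"event\":\"click\"}")))
                .build();
        client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

The same POST can be issued from browser JavaScript, with the reliability caveat you already noted: a click that navigates away may abort the request.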

Kafka-based solutions to analyze HTTP requests on a web server

Context:
A web server receives millions of HTTP requests every day. Of course, there must be a component (call it the handler) that is responsible for handling these requests and responding to them with some information.
On the server side, I would like to use Kafka to extract some information from these requests and analyze it in real time (or at some interval).
Question:
How can I use these requests as a Kafka producer?
How do I build a Kafka consumer? (All this data needs to be analyzed and then returned, but Kafka is "just" a messaging system.)
Some ideas:
A1.1 Maybe the "handler" project can use the Kafka client library; it could then trigger producer code to send messages to Kafka.
A1.2 Maybe I can create another project that listens for the HTTP requests at the server, but the server also receives other, unrelated HTTP requests.
I have thought about several solutions, but I am not sure about them. Do you already know of mature approaches, or do you have ideas on how to implement this?
You can use the ELK stack (Elasticsearch, Logstash, Kibana) for the analysis, with Kafka as the log broker.
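To flesh out idea A1.1: the handler can turn each request into a record value and hand it to a Kafka producer. A sketch of that boundary; the field names are made up for illustration, and the actual producer calls (which need the org.apache.kafka:kafka-clients dependency) appear only in comments:

```java
public class RequestRecordBuilder {
    // Formats one HTTP request's interesting fields as a JSON record value.
    // These field names are illustrative, not a fixed schema.
    static String toRecordValue(String method, String path, int status) {
        return "{\"method\":\"" + method + "\",\"path\":\"" + path
                + "\",\"status\":" + status + "}";
    }

    public static void main(String[] args) {
        String value = toRecordValue("GET", "/index.html", 200);
        // With kafka-clients on the classpath, the handler would then do:
        //   producer.send(new ProducerRecord<>("http-requests", value));
        // and an analysis consumer (answering the second question) would
        // poll() the "http-requests" topic and aggregate the records.
        System.out.println(value);
    }
}
```

This keeps the handler's request/response path fast: producing to Kafka is asynchronous, and all analysis happens in the consumer.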

JMS | IBM Websphere Application Server 8.5 | How to see messages and their formats lying on queues

I am working on a POC in which I am trying to achieve SOAP over JMS. Basically, I'll be submitting my SOAP messages directly to a JMS queue, and a consumer will read these SOAP messages and process them. The reason we want to stick to SOAP is that it is a standard format, so we won't have to do extra work to design a new message format of our own.
For this POC I am using the default messaging provider that comes with IBM WebSphere Application Server 8.5. I referred to the following and I am able to submit my messages to the queue. The problem is that I expected the SOAP to stay as XML/text on my queue; however, it is getting converted into a bytes message.
I want to check the message and its type on my queue using some kind of queue browser tool that works with IBM WAS 8.5. I googled and found that there are a lot of queue browser tools available for servers like GlassFish, but I couldn't find any tool or option for IBM WAS 8.5.
Can you please guide me on what I can do to ensure that my SOAP message stays as XML on the JMS queue, and on any GUI tool/option I could use to see the message and its type on the queue?
Regards
Aakash
You can use the SIB Explorer tool to view the messages on the queue in WAS. The link to the tool is here.
There is also the SIB Destination Handler tool that allows you to perform more actions on the messages that you might find useful for your issue (like printing out properties etc). The SIB Destination Handler tool can be found here.
In addition to the tools mentioned by whitfiea, you can use the web admin console and go to:
Buses > myBus > Destinations > myQueue > Queue points > myQueue#rad9vmNode02.server1-myBus > (switch to the Runtime tab) > Messages
then select the message. You should be able to see the message contents.
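As for keeping the payload as XML in the first place: whether it lands as a text or a bytes message depends on how the sender creates it. A JMS-style pseudocode sketch (not runnable standalone, since it assumes an already-configured JMS connection, session, and producer):

```
// Given an open JMS Session and a MessageProducer for the queue:
// createTextMessage keeps the SOAP payload as text/XML on the queue,
// whereas sending it through a BytesMessage stores it as raw bytes.
TextMessage msg = session.createTextMessage(soapXmlString);
producer.send(msg);
```

So check the client code (or binding configuration) that submits the message; if it writes the XML into a BytesMessage, that explains what you see in the browser tools.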

Sending Spring Integration messages over RMI seamlessly?

Is there a way to send Spring Integration Messages over an int-rmi:outbound-gateway to an int-rmi:inbound-gateway?
I've got two components that both use Spring Integration and I'd ideally like to send a Message between them so the receiving component can then seamlessly use a router or filters to decide where that message ends up.
The components are both written in Java but run in separate processes (probably on the same machine, but that's not guaranteed).
I've managed to use Spring Integration to have Component 1 call a method on Component 2, and Component 2 call a method on Component 1, over RMI. I set the parameter of the RMI method to a Message, which I can then send into a channel in Spring Integration's flow.
But I was wondering if there is a way to skip that last step and just have the Message flow between the two applications.
Sorry, it isn't clear what the issue is.
Actually, it works out of the box.
Here is a test case as a sample
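For reference, the two gateways can be wired with just the RMI namespace. A minimal sketch, assuming the channel names, host, and port below (all placeholders) and spring-integration-rmi on both classpaths:

```xml
<!-- Receiving component: exposes channel "fromRemote" over RMI -->
<int:channel id="fromRemote"/>
<int-rmi:inbound-gateway request-channel="fromRemote" registry-port="1099"/>

<!-- Sending component: Messages sent to "toRemote" are serialized,
     carried over RMI, and delivered to "fromRemote" in the other process -->
<int:channel id="toRemote"/>
<int-rmi:outbound-gateway request-channel="toRemote"
                          host="localhost" port="1099"
                          remote-channel="fromRemote"/>
```

The whole Message (headers and payload) travels across, so the receiving side can route or filter on it directly; the payload and any headers you rely on must be Serializable.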