Trying to read a HornetQ bodyBuffer and getting NegativeArraySizeException - hornetq

I am trying to use
val nullablestr = hornetQMessage.getBodyBuffer.readNullableSimpleString.toString
or
val strMessage = hornetQMessage.getBodyBuffer.readString
but I am getting:
java.lang.NegativeArraySizeException
at org.hornetq.core.buffers.impl.ChannelBufferWrapper.readSimpleStringInternal(ChannelBufferWrapper.java:83)
at org.hornetq.core.buffers.impl.ChannelBufferWrapper.readNullableSimpleString(ChannelBufferWrapper.java:58)
at com.gamescale.messaging.hornetQ.HornetQMessageConverter$.extractGSMessage(HornetQMessageConverter.scala:68)
at com.gamescale.messaging.hornetQ.MessageBusHornetQClientImpl$$anonfun$1$$anon$2$$anonfun$receive$1.apply(MessageBusHornetQClientImpl.scala:246)
at com.gamescale.messaging.hornetQ.MessageBusHornetQClientImpl$$anonfun$1$$anon$2$$anonfun$receive$1.apply(MessageBusHornetQClientImpl.scala:243)
at akka.actor.Actor$class.apply(Actor.scala:563)
at com.gamescale.messaging.hornetQ.MessageBusHornetQClientImpl$$anonfun$1$$anon$2.apply(MessageBusHornetQClientImpl.scala:242)
at akka.actor.LocalActorRef.invoke(ActorRef.scala:905)
at akka.dispatch.MessageInvocation.invoke(MessageHandling.scala:25)
at akka.dispatch.ExecutableMailbox$class.processMailbox(ExecutorBasedEventDrivenDispatcher.scala:216)
at akka.dispatch.ExecutorBasedEventDrivenDispatcher$$anon$4.processMailbox(ExecutorBasedEventDrivenDispatcher.scala:122)
at akka.dispatch.ExecutableMailbox$class.run(ExecutorBasedEventDrivenDispatcher.scala:188)
at akka.dispatch.ExecutorBasedEventDrivenDispatcher$$anon$4.run(ExecutorBasedEventDrivenDispatcher.scala:122)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
at akka.dispatch.MonitorableThread.run(ThreadPoolBuilder.scala:184)
I am using the matching method to write the message:
hornetQMessage.getBodyBuffer.writeString(message)
Any ideas?

For reference for any HornetQ users: the cause of this bug was determined on the HornetQ forum.
In short, it was caused by reading values from the buffer in a different order than they were written. Say you write a negative integer before writing the string; if, on the other side, you try to read the string without reading the integer first, the string's length prefix is read from the integer's bytes and comes out negative.
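For illustration, here is a minimal sketch (Scala, HornetQ core API) of keeping the write and read order symmetric; the session, outMessage and receivedMessage names are placeholders, not code from the original application:

val outMessage = session.createMessage(true)
outMessage.getBodyBuffer.writeInt(-1)            // an integer written before the string
outMessage.getBodyBuffer.writeString("payload")

// On the consuming side the reads must mirror the writes exactly:
val inBuffer = receivedMessage.getBodyBuffer
val flag    = inBuffer.readInt()     // read the int first...
val payload = inBuffer.readString()  // ...then the string; reading the string first makes
                                     // HornetQ interpret the int as the string length,
                                     // which is how a NegativeArraySizeException appears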

Related

Gatling - Long Polling in a loop - asynchronous response - loop doesn't work as intended

This is my first question ever on this website, so please bear with me.
I'm trying to build an HTTP long-polling program for a project using Gatling. By crawling through many questions on Stack Overflow I've been able to combine separate concepts into a piece of syntactically correct code, but sadly it doesn't do what it is intended to do.
When a status code of 200 is obtained after any request, the loop should break and the test should be considered passed. If the status code is different from 200, it should keep the connection alive and keep polling, without failing the test.
When the .tryMax value is reached and all responses gave a status different from 200, the loop should break and the test should be considered failed.
Using the inequality operator (!=) doesn't work either, so I decided to use .equals() instead and test the loop, to no avail.
Being new to both Gatling and Scala, I'm still trying to figure out what's wrong with this code, execution-wise:
def HttpPollingAsync() = {
  asLongAs(session => session("statuss").validate[String].equals("200")) {
    exec(
      polling
        .every(10 seconds)
        .exec(
          http("polling-async-response")
            .post("/" + BaseURL + "/resource-async-response")
            .headers(headers)
            .body(RawFileBody("requestdata.json"))
            .check(
              status.is(200),
              jsonPath("$.status").is("200"),
              jsonPath("$.status").saveAs("statuss")
            ))
    ).exec(polling.stop)
  }
}

val scn = scenario("asyncpolling")
  .tryMax(60) {
    exec(HttpPollingAsync())
  }

setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
The exception I get when running this piece of code (which is merely syntactically correct) is:
Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
at io.gatling.charts.report.ReportsGenerator.generateFor(ReportsGenerator.scala:49)
at io.gatling.app.RunResultProcessor.generateReports(RunResultProcessor.scala:59)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:38)
at io.gatling.app.Gatling$.start(Gatling.scala:81)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:46)
at io.gatling.app.Gatling$.main(Gatling.scala:38)
at io.gatling.app.Gatling.main(Gatling.scala)
So some part of it is never reached or used.
Any bit of help or pointing me in the right direction would be appreciated.
Thank you!
An asLongAs loop condition is evaluated at the start of the loop, so on your first execution the condition will fail because there is no session value for statuss yet.
The doWhile loop checks its condition at the end of the loop instead.
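As a sketch only (not a tested fix): one way to apply this is to seed the session attribute once and then use doWhile, so the condition is only evaluated after a poll has had a chance to save statuss. BaseURL, headers and the JSON body come from the question; the "init" seed value, the simplified checks and the plain 10-second pause (used here instead of the polling protocol, just to keep the example short) are assumptions.

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

def httpPollingAsync() =
  // seed the attribute so the loop condition can always be evaluated
  exec(session => session.set("statuss", "init"))
    // doWhile evaluates its condition *after* each iteration, so the first poll always runs
    .doWhile(session => session("statuss").validate[String].map(_ != "200")) {
      exec(
        http("polling-async-response")
          .post("/" + BaseURL + "/resource-async-response")
          .headers(headers)
          .body(RawFileBody("requestdata.json"))
          .check(
            status.is(200),
            jsonPath("$.status").saveAs("statuss")
          )
      ).pause(10.seconds)
    }

The scenario and setUp from the question can then call httpPollingAsync() unchanged.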

Issue with iText when doing document.Close... Unbalanced Save Restore state operators

We have an application which was working fine as a monolith.
Now we are in the process of splitting the application up.
In this process, I am getting the error shown below.
It happens only at the call to d.close():
Document d = new Document(PageSize.A4, 10, 10, 50, 50);
......
.....
finally {
    if (d.isOpen()) {
        d.close();
    }
    byteOutputStream.flush();
    byteOutputStream.close();
    pw.close();
    return byteOutputStream.toByteArray();
}
(As a monolith the whole application was working fine.)
(The iText 2.1.7 jar is used.)
at com.ibm.CORBA.iiop.UtilDelegateImpl.mapSystemException(UtilDelegateImpl.java:241)
at javax.rmi.CORBA.Util.mapSystemException(Util.java:84)
at <<stub path>>.retrieve(_fileName1Remote_Stub.java:1)
at <<filePath>>.retrieve(fileName2.java:778)
at <<filePath>>.onCustomAction1(fileName3.java:403)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.apache.el.parser.AstValue.invoke(AstValue.java:266)
at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:278)
at org.apache.myfaces.view.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:83)
at javax.faces.event.MethodExpressionActionListener.processAction(MethodExpressionActionListener.java:83)
... 43 more
Caused by: com.itextpdf.text.exceptions.IllegalPdfSyntaxException: Unbalanced save/restore state operators.
at com.itextpdf.text.pdf.PdfContentByte.sanityCheck(PdfContentByte.java:3171)
at com.itextpdf.text.pdf.PdfContentByte.toPdf(PdfContentByte.java:245)
at com.itextpdf.text.pdf.PdfFormXObject.<init>(PdfFormXObject.java:88)
at com.itextpdf.text.pdf.PdfTemplate.getFormXObject(PdfTemplate.java:241)
at com.itextpdf.text.pdf.PdfWriter.addSharedObjectsToBody(PdfWriter.java:1257)
at com.itextpdf.text.pdf.PdfWriter.close(PdfWriter.java:1169)
at com.itextpdf.text.pdf.PdfDocument.close(PdfDocument.java:780)
at com.itextpdf.text.Document.close(Document.java:409)
at <>.createPDF(<>.java:135)
at <>.getPdfData(fileName1Bean.java:339)
at <>.retrieve(fileName1Bean.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at com.ibm.ejs.container.EJSContainer.invokeProceed(EJSContainer.java:5730)
at com.ibm.ejs.container.interceptors.InvocationContextImpl.proceed(InvocationContextImpl.java:568)
at <>.retrieveIntercept(<>.java:43)
at sun.reflect.GeneratedMethodAccessor215.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at com.ibm.ejs.container.interceptors.InterceptorProxy.invokeInterceptor(InterceptorProxy.java:227)
at com.ibm.ejs.container.interceptors.InvocationContextImpl.proceed(InvocationContextImpl.java:548)
at com.ibm.ejs.container.interceptors.InvocationContextImpl.doAroundInvoke(InvocationContextImpl.java:229)
at com.ibm.ejs.container.EJSContainer.invoke(EJSContainer.java:5621)
at <>_c01dfd09.retrieve(EJSRemote0SL<>Bean_c01dfd09.java)
at <>Bean_c01dfd09_Tie.retrieve(_<>Bean_c01dfd09_Tie.java:1)
at <>.invoke(<>_c01dfd09_Tie.java)
at com.ibm.CORBA.iiop.ServerDelegate.dispatchInvokeHandler(ServerDelegate.java:669)
at com.ibm.CORBA.iiop.ServerDelegate.dispatch(ServerDelegate.java:523)
at com.ibm.rmi.iiop.ORB.process(ORB.java:523)
at com.ibm.CORBA.iiop.ORB.process(ORB.java:1575)
at com.ibm.rmi.iiop.Connection.doRequestWork(Connection.java:3039)
at com.ibm.rmi.iiop.Connection.doWork(Connection.java:2922)
at com.ibm.rmi.iiop.WorkUnitImpl.doWork(WorkUnitImpl.java:64)
at com.ibm.ws.giop.threadpool.WorkQueueElement.dispatch(WorkQueueElement.java:165)
at com.ibm.ws.giop.filter.GiopFilterChain.processMessage(GiopFilterChain.java:203)
at com.ibm.ws.giop.threadpool.PooledThread.handleRequest(PooledThread.java:81)
at com.ibm.ws.giop.threadpool.PooledThread.run(PooledThread.java:102)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1862)
So here it is...
First of all, I apologize if there was any ambiguity in my question.
As said, version 2.1.7 is also used (thanks to Amedee for pointing this out). There is also a 5.x version, which I found in the shared libraries of the server.
Since the directory structure is totally different between the two versions, both jars are maintained for a specific reason.
Also, I am new to this whole application myself and trying to build an understanding with the people on this project. Since the people who developed it may no longer be around, I thought a post might give me some ideas. So you are also partly correct, Mr. Lowagie - I am ignorant of iText and still learning - I have no reason to lie! :-)
mkl, there is no exception in the try block. What it eventually turned out to be was that the image was not getting generated, or was getting garbled, for some other reason, and that is being investigated. I have handed this activity to a person who understands the system better than I do.
Thanks for all your messages and support, my dear friends!
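For anyone else hitting this: the "Unbalanced save/restore state operators" check fires at close time when a content byte (direct content or a PdfTemplate) has had more saveState() calls than restoreState() calls. Below is a minimal sketch of the balanced pattern, using the com.itextpdf package names from the stack trace; it is written in Scala syntax to match the other examples here, and the drawing calls are purely illustrative:

import com.itextpdf.text.pdf.PdfWriter

def drawBox(writer: PdfWriter): Unit = {
  val cb = writer.getDirectContent
  cb.saveState()                 // every saveState() ...
  try {
    cb.rectangle(36f, 36f, 100f, 100f)
    cb.stroke()
  } finally {
    cb.restoreState()            // ... must be matched before Document.close()
  }
}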

Kafka 0.7.2, minimum number of brokers

I'm trying to create a Kafka producer that sends messages directly to Kafka brokers (and not via ZooKeeper).
I know that the better practice is to work with ZooKeeper, but for the moment I would like to send messages directly to a broker.
To do that, I'm setting the property "broker.list" as described in the documentation. The thing is, it appears that a minimum of 3 brokers is required for this to work (otherwise I get an exception).
In the source code of Kafka I can see:
if(brokerInfo.size < 3) throw new InvalidConfigException("broker.list has invalid value")
This is weird because in my data center I have only 2 Kafka nodes (and 3 ZooKeeper nodes). What can I do in this case?
Is there a way around this?
The brokerInfo is obtained by splitting the individual broker entry, NOT by counting the number of brokers. If you check the source code more carefully you will see something like
// check if each individual broker info is valid => (brokerId: brokerHost: brokerPort)
and then this info is split as below:
brokerInfoList.foreach { bInfo =>
  val brokerInfo = bInfo.split(":")
  if(brokerInfo.size < 3) throw new InvalidConfigException("broker.list has invalid value")
}
So every single broker entry is expected to have an id, a host name and a port, separated by the : delimiter.
Regarding the number of brokers, the code basically just does this:
val brokerInfoList = config.brokerList.split(",")
if(brokerInfoList.size == 0) throw new InvalidConfigException("broker.list is empty")
So you should be fine with that, I guess; just try to pass a single broker and it should work. Let us know how it goes.
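If it helps, here is a tiny sketch (Scala) of what a valid broker.list looks like for a two-node cluster in the id:host:port format described above; the ids, host names and ports are made up, and the serializer property is only shown as a typical companion setting:

import java.util.Properties

val props = new Properties()
// each entry is brokerId:brokerHost:brokerPort, entries separated by commas
props.put("broker.list", "0:kafka-node-1:9092,1:kafka-node-2:9092")
props.put("serializer.class", "kafka.serializer.StringEncoder")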
Apparently, when writing
props.put("broker.list", "0:" + <host:port>);
it works (I added the "0:" prefix to the original string).
I found this in section 9 of the quick start guide.
I'm not sure I fully get it; maybe this zero is the partition number(?), maybe something else (it would be nice if someone could shed some light on it).

Problem loading range_slices in Cassandra

I'm having just a little bit of trouble getting data out of Cassandra. The main problem is this exception:
ERROR 15:45:07,037 Internal error processing get_range_slices
java.lang.AssertionError: (162293240116362681726824838407749997815,35552186147124906726154103286687761342]
at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1251)
at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:428)
at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:513)
at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.process(Cassandra.java:2868)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2555)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:167)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
So what do I do? I use describe_ring to get the topology of the network, then I ask each of the nodes in the network for describe_splits, which gives me the tokens I should use to fetch the ranges, and then I just start asking for them, making sure that I set the start_token and end_token on the key ranges.
Any ideas?
That's a bug that was fixed in 0.6.9 and 0.7rc2.

Configuring an MDB in JBOSS

How does the maxMessages property affect the MDB?
For example:
@ActivationConfigProperty(propertyName = "maxMessages", propertyValue = "5")
How would this value interact with maxSessions set to 10?
The JBoss docs are a bit woolly on this; they define MaxMessages as:
"The number of messages to wait for before attempting delivery of the session, each message is still delivered in a separate transaction (default 1)"
I think you were wondering if it affects the number of threads or concurrent sessions that can pass through the MDB at one time, but it seems this parameter is not related to that behaviour, and so there's no conflict.
I think you're confused: maxSessions refers to the maximum number of JMS sessions that can concurrently deliver messages to the MDB.
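For illustration, here is a sketch of how the two properties can be declared together on an MDB (standard EJB 3 annotations, written in Scala syntax to match the other examples; the destination name and the values are made up):

import javax.ejb.{ActivationConfigProperty, MessageDriven}
import javax.jms.{Message, MessageListener}

@MessageDriven(activationConfig = Array(
  new ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/example"),
  // up to 10 sessions may deliver to this MDB concurrently...
  new ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "10"),
  // ...while each session waits for up to 5 messages before delivering them,
  // still one transaction per message
  new ActivationConfigProperty(propertyName = "maxMessages", propertyValue = "5")
))
class ExampleMdb extends MessageListener {
  override def onMessage(message: Message): Unit = {
    // process the message here
  }
}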
In the XML config file standardjboss.xml you'd set MaximumSize to set the number of concurrent messages. In this case I've set it to 150. This affects all MDBs, however.
<invoker-proxy-binding>
  <name>message-driven-bean</name>
  <invoker-mbean>default</invoker-mbean>
  <proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
  <proxy-factory-config>
    <JMSProviderAdapterJNDI>DefaultJMSProvider</JMSProviderAdapterJNDI>
    <ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
    <CreateJBossMQDestination>true</CreateJBossMQDestination>
    <!-- WARN: Don't set this to zero until a bug in the pooled executor is fixed -->
    <MinimumSize>1</MinimumSize>
    <MaximumSize>150</MaximumSize>
    <KeepAliveMillis>30000</KeepAliveMillis>
    <MaxMessages>1</MaxMessages>
    <MDBConfig>
      <ReconnectIntervalSec>10</ReconnectIntervalSec>
      <DLQConfig>
        <DestinationQueue>queue/DLQ</DestinationQueue>
        <MaxTimesRedelivered>200</MaxTimesRedelivered>
        <TimeToLive>0</TimeToLive>
      </DLQConfig>
    </MDBConfig>
  </proxy-factory-config>
</invoker-proxy-binding>