Last-Value Property in HornetQ 2.1.2.Final

How does the Last-Value Property work in HornetQ?
I'm sending 4 messages to MyQueue just to test this property: 2 with one Last-Value property and the other 2 with a different Last-Value.
I thought that only 2 messages would remain in the queue: one for each Last-Value. But that doesn't seem to happen. The values in the JBoss JMX Console are: MessageCount = -4, DeliveringCount = -4, MessagesAdded = 4.
So, how does it work?
I'm using JBoss 5.1.0.GA, and I set "last-value-queue" to true in the hornetq-configuration.xml file.
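For reference, the setup looks roughly like this (the address match and the property value below are placeholders from my test; _HQ_LVQ_NAME is HornetQ's last-value key):
<!-- hornetq-configuration.xml -->
<address-settings>
   <address-setting match="jms.queue.MyQueue">
      <last-value-queue>true</last-value-queue>
   </address-setting>
</address-settings>
Each message then carries the last-value key as a string property (session and producer come from the usual JMS boilerplate):
TextMessage msg = session.createTextMessage("test");
// messages with the same _HQ_LVQ_NAME value replace each other in the queue
msg.setStringProperty("_HQ_LVQ_NAME", "GROUP_A");
producer.send(msg);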

The messageCount being negative was an issue with last-value queues that will be fixed in the next version:
https://issues.jboss.org/browse/HORNETQ-466
with this commit:
https://github.com/clebertsuconic/hornetq/commit/a78836cdef4e28d76064500f57cb8e8a799da9bf
Other than the negative counter, everything works as expected.

Related

Tomcat default value for socket.soKeepAlive

I am trying to debug an issue related to keep-alives/connection resets, and found that the Tomcat documentation says:
socket.soKeepAlive:
(bool) Boolean value for the socket's keep alive setting (SO_KEEPALIVE). JVM default used if not set.
This is not set by the application I am debugging. Is there a way to figure out the default the JVM is using? (For instance, by inspecting a system property?)
I cannot check by observing the actual keep-alive behavior, since I don't have access to the VM.
Answering my own question based on more research and experimentation.
Default value of socket.soKeepAlive: From the JVM docs for SocketOptions.SO_KEEPALIVE:
The initial value of this socket option is FALSE. The socket option may be enabled or disabled at any time.
Also to note:
When the SO_KEEPALIVE option is enabled the operating system may use a keep-alive mechanism to periodically probe the other end of a connection
From what I understood, this means that by default Tomcat does not probe clients to check whether an established connection is still active.
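One simple way to confirm the JVM default yourself, independent of Tomcat (a standalone check, not something the application exposes):
import java.net.Socket;

public class KeepAliveDefault {
    public static void main(String[] args) throws Exception {
        // a freshly created socket reflects the JVM-wide default for SO_KEEPALIVE
        try (Socket s = new Socket()) {
            System.out.println("SO_KEEPALIVE default: " + s.getKeepAlive()); // prints false on standard JVMs
        }
    }
}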
Default value of keepAliveTimeout: the default is to use the value that has been set for the connectionTimeout attribute.
In my case this was not reflected: connectionTimeout was set to 10 seconds, but Tomcat's responses still had keep-alive headers set to only 5 seconds.
However, I found that the application authors had also set an attribute called socket.soTimeout to 5 seconds, which Tomcat describes as:
This is equivalent to standard attribute connectionTimeout.
I found that when both connectionTimeout and socket.soTimeout are set, socket.soTimeout takes precedence, since changing the socket.soTimeout value caused the values in the keep-alive headers to change accordingly.
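To make that concrete, a sketch of the kind of connector definition involved (the values are illustrative, and socket.* attributes apply only to connectors that expose them, e.g. NIO):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="10000"
           socket.soTimeout="5000" />
With both set as above, the keep-alive timeout advertised in the response headers tracked socket.soTimeout (5 seconds) rather than connectionTimeout (10 seconds).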

Variable Multithread Access - Corruption

In a nutshell:
I have one counter variable that is accessed from many threads. Although I've implemented multi-thread read/write protections, the variable still, inconsistently, appears to get written to simultaneously, leading to incorrect results from the counter.
Getting into the weeds:
I'm using a for loop that triggers roughly 100 URL requests in the background, each dispatched via DispatchQueue.global(qos: .userInitiated).async.
These requests are async; once each one finishes, it updates a "counter" variable. This variable is supposed to be multi-thread protected, meaning it's always accessed from one thread and accessed synchronously. However, something is wrong: from time to time the variable is accessed simultaneously by two threads, leading to the counter not updating correctly. Here's an example; let's imagine we have 5 URLs to fetch:
We start with the Counter variable at 5.
1 URL Request Finishes -> Counter = 4
2 URL Request Finishes -> Counter = 3
3 URL Request Finishes -> Counter = 2
4 URL Request Finishes (and for some reason, I assume the variable is accessed at the same time) -> Counter = 2
5 URL Request Finishes -> Counter = 1
As you can see, this leads to the counter being 1, instead of 0, which then affects other parts of the code. This error happens inconsistently.
Here is the multi-thread protection I use for the counter variable:
Dedicated Global Queue
// Background queue to synchronize data access
fileprivate let globalBackgroundSyncronizeDataQueue = DispatchQueue(label: "globalBackgroundSyncronizeSharedData")
The variable is always accessed via an accessor:
var numberOfFeedsToFetch_Value: Int = 0
var numberOfFeedsToFetch: Int {
    set(newValue) {
        globalBackgroundSyncronizeDataQueue.sync {
            self.numberOfFeedsToFetch_Value = newValue
        }
    }
    get {
        return globalBackgroundSyncronizeDataQueue.sync {
            numberOfFeedsToFetch_Value
        }
    }
}
I assume I must be missing something, but I've profiled the code and everything seems fine, and I've checked the documentation and seem to be doing what it recommends. I'd really appreciate your help.
Thanks!!
Answer from the Apple Forums (https://forums.developer.apple.com/message/322332#322332):
The individual accessors are thread safe, but an increment operation isn't atomic given how you've written the code. That is, while one thread is getting or setting the value, no other threads can also be getting or setting the value. However, there's nothing preventing thread A from reading the current value (say, 2), thread B reading the same current value (2), each thread adding one to this value in their private temporary, and then each thread writing their incremented value (3 for both threads) to the property. So, two threads incremented but the property did not go from 2 to 4; it only went from 2 to 3. You need to do the whole increment operation (get, increment the private value, set) in an atomic way such that no other thread can read or write the property while it's in progress.
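A minimal sketch of that idea, keeping the question's serial queue (the method name and returning the new value are illustrative choices, not the only way to do it):
import Foundation

fileprivate let globalBackgroundSyncronizeDataQueue = DispatchQueue(label: "globalBackgroundSyncronizeSharedData")
fileprivate var numberOfFeedsToFetch_Value: Int = 0

// Runs read + decrement + write as one unit on the serial queue, so no other
// thread can interleave between the read and the write.
func decrementNumberOfFeedsToFetch() -> Int {
    return globalBackgroundSyncronizeDataQueue.sync {
        numberOfFeedsToFetch_Value -= 1
        return numberOfFeedsToFetch_Value
    }
}

// In each completion handler:
// if decrementNumberOfFeedsToFetch() == 0 { /* all fetches finished */ }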

Simpy 3.0.4, setting resource priority

I am having trouble with resource priority in simpy. Consider the following code:
import simpy

env = simpy.Environment()
res = simpy.PriorityResource(env, capacity=1)

def go(id):
    with res.request(priority=id) as req:
        yield req
        print id, res

env.process(go(3))
env.process(go(2))
env.process(go(4))
env.process(go(5))
env.process(go(1))
env.run()
Lower number means higher priority, so I should get 1, 2, 3, 4, 5. But instead I am getting 3, 1, 2, 4, 5. So the first output is wrong; after that it's sorted!
Thanks in advance for your help.
This is correct. When "3" requests the resource, the resource is empty, so 3 gets the slot. The remaining processes have to queue and will get the resource in the order 1, 2, 4, 5.
If you use the PreemptiveResource instead (like request(priority=id, preempt=True)), 3 will still get the resource first but will be preempted by 2. 2 will then get preempted by 1. 2 and 3 would then have to request the resource again to gain access to it.
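A sketch of that preemptive variant (SimPy 3 API; the try/except around the critical section is what each process needs in order to observe the preemption):
import simpy

env = simpy.Environment()
res = simpy.PreemptiveResource(env, capacity=1)

def go(id):
    with res.request(priority=id, preempt=True) as req:
        try:
            yield req
            yield env.timeout(1)  # hold the resource for a while
            print('%s finished at %s' % (id, env.now))
        except simpy.Interrupt:
            print('%s was preempted at %s' % (id, env.now))

for id in [3, 2, 4, 5, 1]:
    env.process(go(id))
env.run()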
I had the same problem when I had to make a factory FIFO. At the time I assigned a reaction time to each part and made it follow the previous part: only once the previous part had entered service at the resource did I issue the next part's request. That solved the problem, but it seemed to slow the simulation down a little and added an artificial reaction time to each part; it was basically a revamp of the factory process. But I would love to see a feature where the part doesn't have to request again.
Can it be done in the present version?

Request time-out in QuickFIX?

Is there any way to set a request time-out when sending a message from the initiator?
We had an issue where a late reply from the acceptor left the application unresponsive. The cause may have been network delay or something similar, but I think it would be good if we could set a time-out option here.
Looking through the Application callbacks, I didn't find anything.
I want to set a time-out option with the SendToTarget API.
Any suggestions?
Did you add CheckLatency and MaxLatency to your config file and confirm the behavior?
CheckLatency: If set to Y, messages must be received from the counterparty within a defined number of seconds (see MaxLatency). It is useful to turn this off if a system uses local time for its timestamps instead of GMT.
MaxLatency: If CheckLatency is set to Y, this defines the number of seconds of latency allowed for a message to be processed. Default is 120 (positive integer).
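For reference, these are session-level settings in the QuickFIX configuration file, along the lines of (the values here are examples):
[DEFAULT]
ConnectionType=initiator
# reject messages that arrive more than 30 seconds late
CheckLatency=Y
MaxLatency=30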
I'm experiencing the same problem using QuickFIX/n.
Looking at the source code for version 1.4, the section that reads those settings from the configuration file is commented out and replaced with hard-coded default values:
// FIXME to get from config if available
session.MaxLatency = 120;
session.CheckLatency = true;

Configuring an MDB in JBOSS

How does the maxMessages property affect an MDB?
For example:
@ActivationConfigProperty(propertyName = "maxMessages", propertyValue = "5")
How would this value behave if maxSessions is 10?
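For concreteness, the two properties sit together in the MDB's activation config; a minimal sketch (the destination name and the values are examples, not from a real deployment):
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/MyQueue"),
    // up to 10 sessions may deliver to this MDB concurrently
    @ActivationConfigProperty(propertyName = "maxSessions", propertyValue = "10"),
    // each session waits for up to 5 messages before starting delivery
    @ActivationConfigProperty(propertyName = "maxMessages", propertyValue = "5")
})
public class MyMDB implements MessageListener {
    public void onMessage(Message message) {
        // process the message
    }
}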
The JBoss docs are a bit woolly on this; they say MaxMessages is defined as:
The number of messages to wait for before attempting delivery of the session; each message is still delivered in a separate transaction (default 1)
I think you were wondering whether it affects the number of threads or concurrent sessions that can pass through the MDB at one time, but it seems this parameter is not related to that behaviour, so there's no conflict.
I think you're confused: maxSessions refers to the maximum number of JMS sessions that can concurrently deliver messages to the MDB.
In the XML config file standardjboss.xml you'd set MaximumSize to set the number of concurrent messages. In this case I've set it to 150. This affects all MDBs, however.
<invoker-proxy-binding>
  <name>message-driven-bean</name>
  <invoker-mbean>default</invoker-mbean>
  <proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
  <proxy-factory-config>
    <JMSProviderAdapterJNDI>DefaultJMSProvider</JMSProviderAdapterJNDI>
    <ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
    <CreateJBossMQDestination>true</CreateJBossMQDestination>
    <!-- WARN: Don't set this to zero until a bug in the pooled executor is fixed -->
    <MinimumSize>1</MinimumSize>
    <MaximumSize>150</MaximumSize>
    <KeepAliveMillis>30000</KeepAliveMillis>
    <MaxMessages>1</MaxMessages>
    <MDBConfig>
      <ReconnectIntervalSec>10</ReconnectIntervalSec>
      <DLQConfig>
        <DestinationQueue>queue/DLQ</DestinationQueue>
        <MaxTimesRedelivered>200</MaxTimesRedelivered>
        <TimeToLive>0</TimeToLive>
      </DLQConfig>
    </MDBConfig>
  </proxy-factory-config>
</invoker-proxy-binding>