HikariCP: connection leaks and usage time

I was wondering whether connection leaks and usage time are correlated.
A connection leak is identified when a connection has been out of the pool for longer than the configured leakDetectionThreshold.
Is this amount of time the same as the connection usage time?
I am asking because I am seeing some connection leaks with a leakDetectionThreshold of 30s, but I cannot find any connections with a correspondingly high connection usage time.
Thanks,
Michail

Connection usage time, as reported by Dropwizard metrics, is recorded using a Histogram with an exponentially decaying reservoir. Quoting their doc:
A histogram with an exponentially decaying reservoir produces quantiles which are representative of (roughly) the last five minutes of data. It does so by using a forward-decaying priority reservoir with an exponential weighting towards newer data.
As such, I think it would be very difficult to correlate individual connection leaks with any particular quantile in the histogram.
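For reference, here is a minimal sketch (not part of the original exchange) of wiring HikariCP to a Dropwizard MetricRegistry with a 30s leakDetectionThreshold and reading the usage histogram afterwards. The JDBC URL and pool name are placeholders, and the metric name "myPool.pool.Usage" is an assumption about how HikariCP names the usage histogram for a pool called "myPool".

```java
import com.codahale.metrics.Histogram;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Snapshot;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolMetricsExample {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/test"); // placeholder URL
        config.setPoolName("myPool");
        config.setLeakDetectionThreshold(30_000);              // 30s, as in the question
        config.setMetricRegistry(registry);

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // ... run the workload that is suspected of leaking connections ...

            // Assumed metric name: "<poolName>.pool.Usage", a Histogram of connection usage times (ms).
            Histogram usage = registry.getHistograms().get("myPool.pool.Usage");
            if (usage != null) {
                Snapshot snapshot = usage.getSnapshot();
                System.out.printf("usage p95=%.0f ms, max=%d ms%n",
                        snapshot.get95thPercentile(), snapshot.getMax());
            }
        }
    }
}
```

Even with this in place, the histogram's decaying reservoir means a single leaked connection may not be visible in any reported quantile, which is the point made above.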

Related

STM32 ADC: leave it running at 'high' speed or switch it off as much as possible?

I am using a G0 with one ADC and 8 channels; it works fine. I use 4 channels. One is a temperature channel that is measured constantly, and I am interested in its value every 60s. Another is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10kHz.
I solved this by letting all 4 channels sample at 10kHz and having the four readings moved to memory by DMA (an array of length 4 with one measurement each). Every 60s I read the temperature, and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would have the temperature ADC perform one conversion every 60s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simpler to let all conversions run continuously at this high speed, which raised my question: is there any true downside to having 40,000 conversions per second, 24 hours per day? If not, the code is simple; I just have the most recent values in memory all the time. But maybe I ruin the chip? I use too much energy, I know, but there is plenty of it in this case.
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have enough of these, then the lesser problems are:
The wasted power will become heat, which may upset your temperature measurements (though this is a very small amount).
Having the DMA running will increase your interrupt latency and may also slow down the processor slightly if it encounters bus contention (this only matters if you are close to capacity in those respects).
Having it running all the time may also have the advantage of more stable readings, since they are not perturbed by turning things on and off.

Benefits of using aggressive timeouts with reactive programming

In the blocking world, it is highly recommended to set aggressive timeouts in order to fail fast and release the underlying resources (Section 5.1 of https://pragprog.com/book/mnee/release-it).
In the async/non-blocking world, requests are not blocking the main thread and the resources are available immediately for further processing. Timeouts are still necessary; however, does it still make sense to set aggressive values?
In real-time software, network requests or control operations on machinery take a large amount of time in comparison to day-to-day software operations. For instance, telling a step motor to advance to a particular position may take seconds, while normal operations might take milliseconds. Let's say that a typical step motor advance takes n milliseconds, and one that goes the maximum distance takes m milliseconds.
An aggressive timeout would compute n and add a small fudge factor, perhaps 10%, and fail quickly if the goal wasn't reached in that time. As you stated, the aggressive timeout will allow you to release resources. A non-aggressive timeout of m plus epsilon would fail much more slowly, and tie up resources unnecessarily.
In the asynchronous software world, there are a number of other choices between success and failure. An asynchronous operation might also calculate n plus 10%, put up a progress bar (if user feedback is desired), and then show progress towards the estimated completion. When the timeout is reached, the progress bar would be full, but you might cause it to pulse or change color to indicate it was taking longer than expected. If the step motor still had not reached its goal after m milliseconds, then you could announce a failure.
In other cases, when feedback is not important, you could certainly use m plus epsilon as your timeout.
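To make that layered approach concrete, here is a minimal sketch (mine, not from the original answer) using CompletableFuture and a hypothetical advanceMotorAsync operation: the soft deadline at n plus 10% only signals that the operation is late, while the hard deadline at m plus a small epsilon actually fails it and releases resources. The timing values are made up, and orTimeout requires Java 9+.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MotorTimeouts {
    private static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Hypothetical async operation: completes when the motor reports it reached its goal.
    static CompletableFuture<Void> advanceMotorAsync(int steps) {
        return CompletableFuture.runAsync(() -> { /* drive the motor */ });
    }

    public static void main(String[] args) {
        long typicalMs = 2_000;                        // n: typical advance time
        long maxMs = 10_000;                           // m: worst-case advance time
        long softDeadline = (long) (typicalMs * 1.1);  // n plus a 10% fudge factor

        CompletableFuture<Void> op = advanceMotorAsync(500);

        // Soft deadline: signal that the operation is late (e.g. pulse the progress bar), keep waiting.
        scheduler.schedule(() -> {
            if (!op.isDone()) {
                System.out.println("Taking longer than expected...");
            }
        }, softDeadline, TimeUnit.MILLISECONDS);

        // Hard deadline: fail and release resources at m plus a small epsilon.
        op.orTimeout(maxMs + 100, TimeUnit.MILLISECONDS)
          .whenComplete((result, error) -> {
              if (error instanceof TimeoutException) {
                  System.out.println("Motor advance failed: did not reach goal in time");
              }
              scheduler.shutdown();
          });
    }
}
```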

Higher concurrency vs lower concurrency

When doing a load test against servers (using Siege, for example), is a lower concurrency number better?
What does this number signify?
The Siege docs go into detail on concurrency here: https://www.joedog.org/2012/02/17/concurrency-single-siege/
From that page:
The calculation is simple: total transactions divided by elapsed time. If we did 100 transactions in 10 seconds, then our concurrency was 10.00.
A higher concurrency measure CAN mean that your server is handling more connections faster, but it can also mean that your server is falling behind on calculations and causing connections to be queued. So the concurrency measure is only valuable when taken in the context of elapsed time.
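For completeness, a tiny sketch that just applies the calculation as quoted above (the figures are the example ones from the quote, not real measurements):

```java
public class SiegeConcurrency {
    public static void main(String[] args) {
        // Example figures from the quoted page: 100 transactions over 10 seconds of elapsed time.
        double totalTransactions = 100;
        double elapsedSeconds = 10;
        double concurrency = totalTransactions / elapsedSeconds;
        System.out.printf("concurrency = %.2f%n", concurrency); // 10.00
    }
}
```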

Reporting Utilization in AnyLogic

I am looking for some insight into reporting utilization correctly. I am using a time plot that reports resourceName.utilization(); additionally, I am adding the utilization values to a Statistics object every hour and then plotting the mean value of this Statistics object as statisticName.mean(). Since the value returned by utilization() in AnyLogic is the mean over all individual unit utilizations, calculated from the most recent resetStats() call up to the current time, does reporting statisticName.mean() even make sense? That would be an average of time-averaged values.
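To illustrate the concern in that last sentence, here is a plain-Java sketch (not AnyLogic API, with made-up hourly figures) showing that the mean of hourly samples of an already cumulative time average can differ from the cumulative value itself:

```java
public class UtilizationAveraging {
    public static void main(String[] args) {
        // Hypothetical true utilization during each hour of a 4-hour run.
        double[] hourlyUtilization = {0.2, 0.8, 0.8, 0.8};

        double busyHours = 0, elapsedHours = 0, sumOfSamples = 0;
        for (double u : hourlyUtilization) {
            busyHours += u;                                 // busy time accumulated in this hour
            elapsedHours += 1;
            double runningMean = busyHours / elapsedHours;  // what utilization() reports since resetStats()
            sumOfSamples += runningMean;                    // what the hourly Statistics object collects
        }

        double cumulative = busyHours / elapsedHours;             // 0.65: overall time-averaged utilization
        double meanOfSamples = sumOfSamples / elapsedHours;       // ~0.49: early hours carry extra weight
        System.out.printf("cumulative=%.2f, mean of hourly samples=%.2f%n", cumulative, meanOfSamples);
    }
}
```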

Benchmarking Memcached Server

I am trying to benchmark a memcached server. The results produced for TCP traffic are in terms of number of requests, number of hits, number of misses, number of gets, number of sets, delay time, etc. I am confused about how to produce a throughput measure from these results.
I suggest doing a lot of experiments at different loads, and drawing a graph of response time vs. requests-per-second.
Typically you will get a graph that looks like the one at the top of this paper by Hart et al., which has an obvious "knee": if you apply too much load, the response time suddenly gets much worse.
You could consider the requests-per-second at this knee to be the throughput of your memcached system.
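As a rough sketch of that procedure (the measurements and the "response time doubles" heuristic are made up, not part of the original answer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KneeFinder {
    public static void main(String[] args) {
        // Hypothetical measurements: offered load (requests/s) -> mean response time (ms).
        Map<Integer, Double> runs = new LinkedHashMap<>();
        runs.put(1_000, 1.1);
        runs.put(2_000, 1.2);
        runs.put(4_000, 1.4);
        runs.put(8_000, 1.9);
        runs.put(16_000, 9.5); // response time suddenly gets much worse: the "knee"

        int knee = 0;
        double previous = Double.NaN;
        for (Map.Entry<Integer, Double> run : runs.entrySet()) {
            // Take the last load level before response time more than doubles.
            if (!Double.isNaN(previous) && run.getValue() > 2 * previous) {
                break;
            }
            knee = run.getKey();
            previous = run.getValue();
        }
        System.out.println("Approximate throughput (knee): " + knee + " requests/s");
    }
}
```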