Hazelcast QueueStore memory limit property is not working - queue

Using Hazelcast 3.4.1.
I want to limit the number of entries kept in queue memory, which is why I tried setting the memory limit property in the queue store config, but it is not working. It seems to make no difference whether the property is set or not: all entries end up stored both in queue memory and in the QueueStore.
The code is here: https://gist.github.com/hitendrapratap/f8d27777f264c0966a39

Is a bounded queue what you are looking for? http://docs.hazelcast.org/docs/3.4/manual/html-single/hazelcast-documentation.html#bounded-queue
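For reference, a minimal sketch of both knobs using the programmatic Config API (the queue name, limits and store class name are placeholders, not taken from the gist). As I read the 3.4 docs, max-size is what actually bounds the queue, while the queue store's memory-limit only controls how many of the stored items are also kept in memory once a store is enabled:

import com.hazelcast.config.Config;
import com.hazelcast.config.QueueConfig;
import com.hazelcast.config.QueueStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class BoundedQueueExample {
    public static void main(String[] args) {
        Config config = new Config();

        QueueConfig queueConfig = config.getQueueConfig("my-queue"); // placeholder name
        // Bounded queue: offer() starts rejecting once 1000 items are in the queue.
        queueConfig.setMaxSize(1000);

        // Queue store: persists items; "memory-limit" only controls how many of the
        // persisted items are also kept in memory, it does not bound the queue itself.
        QueueStoreConfig storeConfig = new QueueStoreConfig()
                .setEnabled(true)
                .setClassName("com.example.MyQueueStore")   // hypothetical store class
                .setProperty("memory-limit", "100");
        queueConfig.setQueueStoreConfig(storeConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getQueue("my-queue").offer("hello");
    }
}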

Related

How to free Redis Scala client allocated by RedisClientPool?

I am using debasishg/scala-redis as my Redis client.
I want it to support multi-threaded execution. Following its documentation (https://github.com/debasishg/scala-redis), I defined
val clients = new RedisClientPool("localhost", 6379)
and then use it on each access to Redis:
clients.withClient {
  client => {
    ...
  }
}
My question is: do I need to free each allocated client? And if so, what is the correct way to do it?
If you look at the constructor for RedisClientPool, there is a default value for maxIdle ("the maximum number of objects that can sit idle in the pool", as per this) and a default value for poolWaitTimeout. You can change those values, but basically if you wait poolWaitTimeout you are guaranteed to have your resources cleaned up, except for the maxIdle clients on stand-by.
Also, if you can't stand the idea of idle clients, you can shut down the whole pool with mypool.close and create it again when needed, but depending on your use case that might defeat the purpose of using a pool (if it's a cron job, I guess that's fine).

Empty Hazelcast IQueue size in memory

We create a simple iqueue in Hazelcast:
HazelcastInstance h = Hazelcast.newHazelcastInstance(config);
BlockingQueue<String> queue = h.getQueue("my-distributed-queue");
Let's assume that queue.size() == 0.
Does the distributed queue "my-distributed-queue" use any memory resources?
Background:
I want to use Hazelcast for creating a large number (>1k) of short-lived queues (for keeping time order within item groups). I'm wondering what happens when an IQueue object in Hazelcast is drained out (size == 0). Will it leave any artifacts in memory that won't be cleaned up by GC?
I've analyzed the heap dumps in VisualVM and found that queue items are stored as IQueueItem objects. When the queue size is 0, there are no IQueueItem instances. But are there any other non-removable artifacts? Thanks for the help.
There is some fixed cost for each structure even if it doesn't contain any data. The cost is rather low; you can see the structure backing each instance of a queue here: https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/queue/impl/QueueContainer.java
You can always destroy a queue once you don't need it - just call the destroy() method. Every distributed structure provided by Hazelcast implements the DistributedObject interface, which declares it.
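For illustration, a minimal sketch of that cleanup (the queue name is only a placeholder):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class QueueCleanupExample {
    public static void main(String[] args) {
        HazelcastInstance h = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = h.getQueue("my-distributed-queue");
        queue.offer("item");
        queue.poll();
        // Once the short-lived queue is no longer needed, release its backing
        // QueueContainer and other per-structure bookkeeping on the owning member.
        queue.destroy();
    }
}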

Perl caching under Apache - Mutex during long load

I have a large file I need to load into the cache (as a hash) which will be shared between Perl processes. Loading the data into the cache takes around 2 seconds, but we have over 10 calls per second.
Does using the compute method cause other processes to be locked out?
Otherwise, I would appreciate suggestions on how to manage the load process so that there's a guaranteed lock during the load and only one load process happening!
Thanks!
Not sure about the guaranteed lock, but you could use memcached with a known key as a mutex substitute (as of this writing, you haven't said what your "cache" is). If the value for the key is false, set it to true, start loading, and then return the result.
For the requests happening during that time, you could either busy-wait, or try using a 503 "Service Unavailable" status with a few seconds in the Retry-After field. I'm not sure of the browser support for this.
As a bonus, using a time-to-live of, say, 5 minutes on the mutex key will cause the entire file to be reloaded every 5 minutes. Otherwise, if you have a different reason for reloading the file, you will need to manually delete the key.
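To make the mutex-key idea concrete, here is a rough sketch - not the Perl stack from the question, but the same pattern using the Java spymemcached client, with an atomic add() standing in for the check-then-set described above (the key name, TTL and loader method are made up):

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class LoadMutexExample {
    public static void main(String[] args) throws Exception {
        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("localhost", 11211));

        // add() only succeeds if the key does not exist yet, so it acts as an
        // atomic "check and set" on the mutex key; the TTL (300 s) makes the
        // lock self-expire if the loader dies, and doubles as a reload interval.
        boolean iAmTheLoader = mc.add("big-file-load-lock", 300, "1").get();

        if (iAmTheLoader) {
            loadFileIntoCache(mc);   // the ~2 s load happens in exactly one process
        } else {
            // Another process is loading: busy-wait, or answer 503 with Retry-After.
        }
        mc.shutdown();
    }

    private static void loadFileIntoCache(MemcachedClient mc) {
        // hypothetical loader: parse the file and put each entry into memcached
    }
}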

Why do we need to set Max and Min threads in .NET?

I am struggling with a threading project. I came across SetMaxThreads, SetMinThreads, GetMaxThreads and GetAvailableThreads. I didn't find any good reason to use those ThreadPool methods.
Help me out here:
why do we need them, and when do we use them?
According to MSDN:
There is one thread pool per process. Beginning with the .NET Framework 4, the default size of the thread pool for a process depends on several factors, such as the size of the virtual address space. A process can call the GetMaxThreads method to determine the number of threads. The number of threads in the thread pool can be changed by using the SetMaxThreads method.
If you don't want to use the default values, use the setter methods to change your process's thread pool limits.
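This is not the .NET API, but the idea behind those knobs - a minimum number of threads kept warm plus a hard cap the pool never grows past - can be sketched with Java's ThreadPoolExecutor purely as an analogy (the numbers below are arbitrary):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizingExample {
    public static void main(String[] args) {
        // Analogue of SetMinThreads / SetMaxThreads: 4 core ("min") threads stay alive
        // even when idle; extra threads, up to the max of 16, are only created once
        // the bounded work queue (32 slots) is full.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 16, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(32),
                new ThreadPoolExecutor.CallerRunsPolicy()); // run on the caller if saturated

        for (int i = 0; i < 50; i++) {
            final int task = i;
            pool.execute(() -> System.out.println("task " + task));
        }
        pool.shutdown();
    }
}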

Memcache-based message queue?

I'm working on a multiplayer game and it needs a message queue (i.e., messages in, messages out, no duplicates or deleted messages assuming there are no unexpected cache evictions). Here are the memcache-based queues I'm aware of:
MemcacheQ: http://memcachedb.org/memcacheq/
Starling: http://rubyforge.org/projects/starling/
Depcached: http://www.marcworrell.com/article-2287-en.html
Sparrow: http://code.google.com/p/sparrow/
I learned the concept of the memcache queue from this blog post:
All messages are saved with an integer as key. There is one key that holds the next key and one that holds the key of the oldest message in the queue. To access these, the increment/decrement method is used as it is atomic, so there are two keys that act as locks. They get incremented, and if the return value is 1 the process has the lock; otherwise it keeps incrementing. Once the process is finished it sets the value back to 0. Simple but effective. One caveat is that the integer will overflow, so there is some logic in place that sets the used keys to 1 once we are close to that limit. As the increment operation is atomic, the lock is only needed if two or more memcaches are used (for redundancy), to keep those in sync.
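To make that scheme concrete, here is a rough sketch against a plain memcached using the Java spymemcached client (key names are invented; on a single memcached instance the atomic increments on the head/tail counters are enough, so the extra lock keys and the overflow handling described above are omitted):

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcacheQueueSketch {
    private final MemcachedClient mc;

    public MemcacheQueueSketch(MemcachedClient mc) {
        this.mc = mc;
    }

    // Reserve the next slot by atomically incrementing the tail counter,
    // then store the message under the integer-derived key.
    public void enqueue(String message) {
        long slot = mc.incr("q:tail", 1, 1);   // creates the counter at 1 if missing
        mc.set("q:msg:" + slot, 0, message);
    }

    // Claim the oldest slot by atomically incrementing the head counter,
    // then read and delete the message stored there. A real queue would compare
    // head and tail before claiming; this sketch skips that to keep the idea visible.
    public String dequeue() {
        long slot = mc.incr("q:head", 1, 1);
        Object msg = mc.get("q:msg:" + slot);
        mc.delete("q:msg:" + slot);
        return (String) msg;                   // null if the slot was empty or evicted
    }

    public static void main(String[] args) throws Exception {
        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        MemcacheQueueSketch queue = new MemcacheQueueSketch(mc);
        queue.enqueue("hello");
        System.out.println(queue.dequeue());
        mc.shutdown();
    }
}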
My question is, is there a memcache-based message queue service that can run on App Engine?
I would be very careful using the Google App Engine Memcache in this way. You are right to worry about "unexpected cache evictions".
Google expects you to use the memcache for caching data, not storing it. They don't guarantee to keep data in the cache. From the GAE documentation:
By default, items never expire, though items may be evicted due to memory pressure.
Edit: There's always Amazon's Simple Queue Service. However, this may not meet price/performance requirements either, as:
There would be the latency of calling from Google's servers to Amazon's.
You'd end up paying twice for all the data traffic - paying for it to leave Google and then paying again for it to go into Amazon.
I have started a Simple Python Memcached Queue, it might be useful:
http://bitbucket.org/epoz/python-memcache-queue/
If you're happy with the possibility of losing data, by all means go ahead. Bear in mind, though, that although memcache generally has lower latency than the datastore, like anything else, it will suffer if you have a high rate of atomic operations you want to execute on a single element. This isn't a datastore problem - it's simply a problem of having to serialize access.
Failing that, Amazon's SQS seems like a viable option.
Why not use Task Queue:
https://developers.google.com/appengine/docs/python/taskqueue/
https://developers.google.com/appengine/docs/java/taskqueue/
It seems to solve the issue without the likely loss of messages in a memcache-based queue.
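For example, a minimal sketch of enqueuing a message with the Java Task Queue API (the /worker URL and parameter name are placeholders; the worker servlet itself is not shown):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class EnqueueExample {
    public static void enqueueMessage(String message) {
        Queue queue = QueueFactory.getDefaultQueue();
        // The task is durably stored and POSTed to /worker; unlike memcache,
        // it is retried until the handler returns a success status.
        queue.add(TaskOptions.Builder.withUrl("/worker").param("msg", message));
    }
}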
Until Google implements a proper job queue, why not use the datastore? As others have said, memcache is just a cache and could lose queue items (which would be... bad).
The datastore should be more than fast enough for what you need - you would just have a simple Job model, which would be more flexible than memcache as you're not limited to key/value pairs.
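For what it's worth, a rough sketch of such a Job model using the low-level datastore API (kind and property names are invented; leasing and deleting processed jobs is left out):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;

public class DatastoreJobQueueSketch {
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

    // Enqueue: persist the message as a Job entity; it survives evictions and restarts.
    public void enqueue(String payload) {
        Entity job = new Entity("Job");
        job.setProperty("payload", payload);
        job.setProperty("createdAt", System.currentTimeMillis());
        datastore.put(job);
    }

    // Dequeue-ish: fetch the oldest pending jobs in insertion order.
    public Iterable<Entity> oldestJobs(int limit) {
        Query q = new Query("Job").addSort("createdAt", Query.SortDirection.ASCENDING);
        return datastore.prepare(q).asIterable(FetchOptions.Builder.withLimit(limit));
    }
}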