I'm using executeOnEntries over a map, and I would like to enqueue values into a queue from its process method.
When I try to do that, I get an IllegalThreadStateException (cannot make remote call: com.hazelcast.collection.impl.queue.operations.OfferOperation).
I guess this is the reason:
accessing IMap from EntryProcessor
Which brings me to ask: can I define some kind of 'local' queue on each of the HZ instances and have it feed another 'global' (shared) queue? Or is there some other way to solve this?
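For what it's worth, a common way around the restriction is to have process() return the value as the entry's result and do the offer from the calling side, where remote calls are allowed. A minimal sketch in Scala against the Hazelcast 3.x Java API (the map and queue names are made up):

    import java.util.Map.Entry
    import scala.collection.JavaConverters._
    import com.hazelcast.core.Hazelcast
    import com.hazelcast.map.AbstractEntryProcessor

    // Return the value from process() instead of offering it to the IQueue
    // there; the offer is the remote call that triggers the exception.
    class CollectingProcessor extends AbstractEntryProcessor[String, String] {
      override def process(entry: Entry[String, String]): AnyRef = entry.getValue
    }

    object EnqueueFromCaller {
      def main(args: Array[String]): Unit = {
        val hz    = Hazelcast.newHazelcastInstance()
        val map   = hz.getMap[String, String]("work")   // hypothetical map name
        val queue = hz.getQueue[String]("events")       // hypothetical queue name

        // executeOnEntries hands each entry's result back to the caller,
        // which is free to make remote calls such as offer().
        val results = map.executeOnEntries(new CollectingProcessor).asScala
        results.values.foreach(v => queue.offer(v.asInstanceOf[String]))
      }
    }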
I'm using hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained for the duration of the session, i.e., from one call from the same browser to the next, whereas what I'm really looking for amounts to thread-specific re-entrancy, so that all the values disappear between calls: I want to treat each click as a separate "from scratch" event, even if the clicks come from the same session. It's easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time "from scratch" on every request, every request would have to be dispatched in a brand-new thread. This is not guaranteed according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, a thread-specific variable bound in a request-handling thread can therefore be expected to be per-connection, not per-request. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the single request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app that change surprisingly when the threading model is reconfigured or when the request pattern from the browser changes; a browser can make multiple requests over the same connection, or close the connection between requests.
To extend the request objects with custom per-request data, I would look into subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have a custom method on the process-connection generic function which would create extended/subclassed request objects carrying the extra data I want to attach to a request.
Another way would be to have some global weak hash which maps request objects (as keys) to additional information.
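As a language-neutral illustration of that weak-hash approach, here is a minimal sketch on the JVM in Scala; the Request class is a hypothetical stand-in for Hunchentoot's request object:

    import java.util.Collections
    import java.util.WeakHashMap

    // Hypothetical stand-in for Hunchentoot's request object.
    final class Request

    object PerRequestData {
      // Weak keys: an entry disappears as soon as its request object is
      // garbage-collected, so values never survive past a single request.
      private val store =
        Collections.synchronizedMap(new WeakHashMap[Request, Map[String, Any]]())

      def set(req: Request, key: String, value: Any): Unit =
        store.put(req, lookup(req) + (key -> value))

      def get(req: Request, key: String): Option[Any] =
        lookup(req).get(key)

      private def lookup(req: Request): Map[String, Any] =
        Option(store.get(req)).getOrElse(Map.empty)
    }

Because nothing has to clean the table up explicitly, this gives the "from scratch" behavior per request without touching the session machinery.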
I'm trying to create socket-based communication with a server, from a Haxe client targeting CPP.
I'm looking at sys.net.Socket, which looks like what I want, but every method is synchronous! How can I wait for a server event?
I'm used to the Node syntax with .on() functions; is there any equivalent here?
Thanks
There are two possible solutions for non-blocking socket access in Haxe/cpp:
1) Set the socket to non-blocking
With the Socket.setBlocking method you set the blocking behavior of the socket. If set to true, which is the default, methods like socket.accept() (and likely socket.read(), though I haven't personally tested it) will block until they complete.
But if you set blocking to false, those functions will throw if no data is available (you'll need to catch the exception and move on). So in your main loop you could access your non-blocking socket with try/catch around the read() calls.
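Here is the shape of that loop sketched on the JVM in Scala (a non-blocking SocketChannel returns 0 from read() instead of throwing, but the structure is the same; host and port are made up):

    import java.net.InetSocketAddress
    import java.nio.ByteBuffer
    import java.nio.channels.SocketChannel

    object NonBlockingPoll {
      def main(args: Array[String]): Unit = {
        // Connect, then switch to non-blocking mode: the analogue of
        // Haxe's socket.setBlocking(false).
        val channel = SocketChannel.open(new InetSocketAddress("localhost", 8080))
        channel.configureBlocking(false)
        val buf = ByteBuffer.allocate(4096)

        while (true) {
          buf.clear()
          val n = channel.read(buf) // 0 = no data yet, -1 = connection closed
          if (n > 0) {
            buf.flip()
            // ... handle the n bytes just received ...
          }
          // ... the rest of the main loop keeps running either way ...
          Thread.sleep(16)
        }
      }
    }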
2) Put your socket in a separate thread from your main loop
You can easily create a separate Thread for your socket communications, so a blocking socket is fine. In this model, your socket thread sends data back to the main thread with Thread.sendMessage(), and your main loop checks via Thread.readMessage(block:Bool) whether there's new data from the socket.
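A sketch of that worker-thread pattern, again on the JVM in Scala, with a concurrent queue standing in for Haxe's Thread.sendMessage()/Thread.readMessage(false) (host, port, and timing are made up):

    import java.net.Socket
    import java.util.concurrent.ConcurrentLinkedQueue
    import scala.io.Source

    object SocketWorker {
      // The worker pushes lines here; the main loop polls it.
      val inbox = new ConcurrentLinkedQueue[String]()

      def main(args: Array[String]): Unit = {
        val socket = new Socket("localhost", 8080) // hypothetical server

        // Worker thread: blocking reads are fine here.
        new Thread(new Runnable {
          def run(): Unit =
            for (line <- Source.fromInputStream(socket.getInputStream).getLines())
              inbox.offer(line)
        }).start()

        while (true) {
          val msg = inbox.poll() // null when nothing has arrived yet
          if (msg != null) println("got: " + msg)
          // ... rest of the main loop ...
          Thread.sleep(16)
        }
      }
    }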
Historically, hxcpp and async have been an arduous combination, as there is no hxcpp main loop out of the box, so the task is virtually always deferred to a toolkit (OpenFL, NME, etc.).
AFAIK there is no out-of-the-box solution; binding http://zeromq.org/ might be a straightforward and easy task, though.
You can also defer to the HTTP implementations bundled with your favorite toolkit.
Good luck!
I have initialized a Cache::Memcached::Fast object in the mod_perl startup file for re-use by the scripts.
For example, in startup.pl:
$GLOBAL::memc = Cache::Memcached::Fast->new({servers => ['192.168.1.1:11211']});
I notice that when multiple calls to $GLOBAL::memc->get() happen simultaneously across the scripts, the data for one process sometimes shows up in the results of another.
How can I make sure the memc handles are multi-process-safe?
This link explains a different problem, namely that the memcached handle dies, but I guess it shares the same underlying cause:
What is the best way to create persistent memcached connections under mod_perl?
Are Memcached's get and put methods thread-safe?
Use the cas (compare-and-set) function to get proper synchronization between processes/threads.
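Cache::Memcached::Fast exposes this as its gets/cas pair. As a sketch of the pattern only (not the Perl API), here is a gets-then-cas update in Scala against the Java spymemcached client; the key name is made up and the retry loop is omitted:

    import java.net.InetSocketAddress
    import net.spy.memcached.{CASResponse, MemcachedClient}

    object CasUpdate {
      def main(args: Array[String]): Unit = {
        val client = new MemcachedClient(new InetSocketAddress("192.168.1.1", 11211))
        // gets() returns the value together with its CAS token.
        val snapshot = client.gets("counter") // hypothetical key
        if (snapshot != null) {
          val next = snapshot.getValue.toString.toInt + 1
          // cas() stores only if nobody has changed the key since our gets().
          client.cas("counter", snapshot.getCas, next.toString) match {
            case CASResponse.OK     => println("updated")
            case CASResponse.EXISTS => println("raced with another writer; retry")
            case other              => println("miss or error: " + other)
          }
        }
        client.shutdown()
      }
    }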
To notify components external to Akka when an error occurs within an Actor, we use an ErrorHandler listener, per one of the SO solutions.
Some errors require a complete process/JVM stop, in which case, unless we call:
EventHandler.shutdown()
the process stays up.
What would be a clean way to shut down the JVM process in this case? And if we do need to use EventHandler.shutdown(), what would be the most logical (Akka?) place to invoke it from?
If you're running the Akka Microkernel, it will be done for you. If you're running it using an AkkaLoader in a servlet container, it will also be done for you. Do you have a defined application lifecycle?
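As a sketch of one place to invoke it, assuming Akka 1.x and its EventHandler.Error event, the error listener itself can shut the handler down and then exit; the isFatal policy and all names here are illustrative:

    import akka.actor.Actor
    import akka.event.EventHandler

    class FatalErrorListener extends Actor {
      def receive = {
        case EventHandler.Error(cause, instance, message) if isFatal(cause) =>
          EventHandler.shutdown() // stop the handler so nothing keeps the JVM alive
          System.exit(1)          // run shutdown hooks, then terminate the process
        case _ => () // ignore non-fatal events
      }
      // Illustrative policy: treat JVM Errors (OOM etc.) as fatal.
      private def isFatal(t: Throwable) = t.isInstanceOf[Error]
    }

    // Registration, e.g. at application startup:
    //   EventHandler.addListener(Actor.actorOf[FatalErrorListener].start())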
While writing Scala RemoteActor code, I noticed some pitfalls:
RemoteActor.classLoader = getClass().getClassLoader() has to be set in order to avoid "java.lang.ClassNotFoundException"
link doesn't always work due to "a race condition for which a NetKernel (the facility responsible for forwarding messages remotely) that backs a remote actor can close before the remote actor's proxy (more specifically, proxy delegate) has had a chance to send a message remotely indicating the local exit." (Stephan Tu)
RemoteActor.select doesn't always return the same delegate (RemoteActor.select - result deterministic?)
Sending a delegate over the network prevents the application from quitting normally (RemoteActor unregister actor)
Remote Actors won't terminate if RemoteActor.alive() and RemoteActor.register() are used outside act. (See Magnus's answer below.)
Are there any other pitfalls a programmer should be aware of?
Here's another: you need to put your RemoteActor.alive() and RemoteActor.register() calls inside your act method when you define your actor, or the actor won't terminate when you call exit(); see How do I kill a RemoteActor?
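A minimal sketch of an actor that follows this rule (old scala.actors API; port, name, and messages are made up):

    import scala.actors.Actor
    import scala.actors.remote.RemoteActor
    import scala.actors.remote.RemoteActor.{alive, register}

    class EchoServer extends Actor {
      def act() {
        alive(9010)           // inside act(), not at construction time
        register('echo, this) // same for register()
        loop {
          react {
            case "stop" => exit() // now actually terminates
            case msg    => sender ! msg
          }
        }
      }
    }

    object Main {
      def main(args: Array[String]) {
        RemoteActor.classLoader = getClass.getClassLoader // pitfall 1 above
        new EchoServer().start()
      }
    }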