I am doing remote debugging using Eclipse. My requirement is to send 20 requests at the same time, stop them at one point with a debug breakpoint, and then release all the suspended threads simultaneously, to test how the code behaves when multiple threads access it at the same time. However, when I tried this I found that only one thread is serving all the requests:
Daemon Thread [http-0.0.0.0-8080-Processor60] (Suspended (breakpoint at line 440 in VcsInfoDAO))
Only when the first request completes does the second request reach the breakpoint, served by the same thread mentioned above. Is there any setting in Eclipse to make all the requests stop at a single point, and then some way to release the threads at the same time so that they all access the code concurrently thereafter?
Any help would be highly appreciated.
Sourabh
Eclipse has nothing to do with what you see. If you set a breakpoint at some place inside a method that is supposed to be called concurrently, and if your client code really launches 20 concurrent requests, and if you observe that the second request is only handled once the first one has finished, then what you thought was concurrent is not.
I see two possible explanations:
you have a single thread handling all the requests. If several are sent concurrently, the requests are queued and handled one by one;
you have several threads handling requests concurrently, but the client code sends the 20 requests sequentially rather than concurrently.
Anyway, using a breakpoint to test such a thing is not a good solution. You would have to hit the "Resume (F8)" button for each of the 20 threads, so they wouldn't restart at the same time. You'd better use a CountDownLatch initialized to 20 instead:
import java.util.concurrent.CountDownLatch;

private final CountDownLatch latch = new CountDownLatch(20);

public void run() {
    // some code

    // here we want to pause all 20 threads and restart them all at the same time
    latch.countDown(); // the 20th thread opens the barrier, and they all restart at the same time
    try {
        latch.await(); // blocks until the count reaches zero
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
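To make the pattern concrete, here is a minimal, self-contained sketch (the class name and thread count are illustrative): each of the 20 threads arrives at the "breakpoint", counts the latch down, and then blocks on await() until the last thread has arrived, at which point all 20 proceed together.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final int N = 20;
        final CountDownLatch latch = new CountDownLatch(N);
        final AtomicInteger arrived = new AtomicInteger();

        Thread[] workers = new Thread[N];
        for (int i = 0; i < N; i++) {
            workers[i] = new Thread(() -> {
                arrived.incrementAndGet();
                latch.countDown();      // announce arrival at the "breakpoint"
                try {
                    latch.await();      // block until all N threads have arrived
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                // from here on, all N threads run the contended code concurrently
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(arrived.get()); // prints 20
    }
}
```

Unlike stepping each suspended thread in the debugger, the release here is a single event, so the threads genuinely contend for the code under test.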
I've recently been taught the concept of monitors. My professor said "only one thread at a time can be in the monitor". I wasn't sure I understood this, so I did my research. Wikipedia states what I wrote as the title. My book states, though, that inside a monitor there are queues for threads that are waiting until some defined condition is met. What really confused me is a pseudocode solution we were given for the bounded-buffer problem with monitors.
My question is: if a process is not stopped by a wait() inside the monitor, does the monitor structure guarantee that it will be permitted to execute the whole method without being interrupted by a context switch, or just that, while it is executing the method, no other producer or consumer is executing their corresponding method?
Because in this slide:
It seems like we only wake up a consumer if the buffer was empty and we just produced an item.
Every producer that reaches that part of the code has produced an item, so why don't we signal every time? I supposed that we (may) consider that, if the buffer wasn't empty, there may be "active" consumers waiting to be resumed because they were interrupted by a context switch. But then I thought to myself: is that possible? Is it possible to be interrupted inside a method (not because you called wait()) but by a context switch?
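For reference, the monitor idea can be sketched in Java (this is a sketch, not the slide's pseudocode): synchronized provides the monitor's mutual exclusion, and wait()/notifyAll() play the role of its condition queues. Note that mutual exclusion does not prevent context switches; the OS may deschedule a thread mid-method, but no other thread can enter the monitor's methods until the lock is released.

```java
import java.util.ArrayDeque;
import java.util.Queue;

class BoundedBuffer<T> {
    private final Queue<T> items = new ArrayDeque<>();
    private final int capacity;

    BoundedBuffer(int capacity) { this.capacity = capacity; }

    // Only one thread at a time can be inside these synchronized methods,
    // but a thread that calls wait() releases the lock and parks on the
    // monitor's condition queue until it is signalled.
    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                 // buffer full: wait for a consumer
        }
        items.add(item);
        notifyAll();                // wake any waiting consumers
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                 // buffer empty: wait for a producer
        }
        T item = items.remove();
        notifyAll();                // wake any waiting producers
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer<Integer> buf = new BoundedBuffer<>(2);
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += buf.take();
        producer.join();
        System.out.println(sum); // prints 15
    }
}
```

So the guarantee is the second of the question's two options: no other producer or consumer can be inside the monitor concurrently, but the running thread can still be context-switched off the CPU while holding the lock.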
I am having trouble with the JVM immediately exiting using various new applications I wrote which spawn threads through the Scala 2.10 Futures + Promises framework.
It seems that at least with the default execution context, even if I'm using blocking, e.g.
future { blocking { /* work */ }}
no non-daemon thread is launched, and therefore the JVM thinks it can immediately quit.
A stupid workaround is to launch a dummy Thread instance which just waits, but then I also need to make sure that this thread stops when the processes are done.
So how do I force them to run on non-daemon threads?
Looking at the default ExecutionContext behind ExecutionContext.global, it's of the fork-join variety, and the ThreadFactory it uses marks its threads as daemon. If you want to work around this, you can use a different ExecutionContext, one you set up yourself. If you still want the fork-join variety (and you probably do, as it scales best), you should be able to look at what they are doing in ExecutionContextImpl via this link and create something similar. Or just use a cached thread pool via Executors.newCachedThreadPool, as that won't shut down immediately before your futures complete.
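The daemon-flag distinction can be illustrated in plain Java (a sketch; wrapping such a pool in a Scala ExecutionContext is assumed to follow the same rules): the JVM exits as soon as only daemon threads remain, so an executor whose ThreadFactory calls setDaemon(false) keeps the JVM alive until it is shut down.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class NonDaemonPool {
    public static void main(String[] args) throws Exception {
        // A ThreadFactory that explicitly creates non-daemon threads.
        ThreadFactory nonDaemon = runnable -> {
            Thread t = new Thread(runnable);
            t.setDaemon(false); // the JVM will not exit while this thread runs
            return t;
        };
        ExecutorService pool = Executors.newCachedThreadPool(nonDaemon);

        pool.submit(() -> System.out.println("task ran on a non-daemon thread"));

        // The flip side: idle non-daemon threads would keep the JVM alive,
        // so shut the pool down once all work has been submitted.
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```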
spawn processes
If this means processes and not just tasks, then scala.sys.process spawns non-daemon threads to run OS processes.
Otherwise, if you're creating a bunch of tasks, this is what Future.sequence helps with. Then just Await.ready(Future.sequence(futures), Duration.Inf) on the main thread.
So I'm writing a mini timeout library in Scala; it looks very similar to the code here: How do I get hold of exceptions thrown in a Scala Future?
The function I execute is either going to complete successfully, or block forever, so I need to make sure that on a timeout the executing thread is cancelled.
Thus my question is: On a timeout, does awaitAll terminate the underlying actor, or just let it keep running forever?
One alternative I'm considering is to use the Java Future library to do this, as there is an explicit cancel() method one can call.
[Disclaimer - I'm new to Scala actors myself]
As I read it, scala.actors.Futures.awaitAll waits until all the futures in the list are resolved OR until the timeout. It will not Future.cancel, Thread.interrupt, or otherwise attempt to terminate a future; you get to come back later and wait some more.
Future.cancel may be suitable; however, be aware that your code may need to participate in effecting the cancel operation - it doesn't necessarily come for free. Future.cancel cancels a task that is scheduled but not yet started. For a running task it interrupts the thread [setting a flag that can be checked]... which may or may not acknowledge the interrupt. Review Thread.interrupt and Thread.isInterrupted(). A long-running task would normally check whether it has been interrupted (your code) and self-terminate. Various methods (e.g. Thread.sleep, Object.wait and others) respond to the interrupt by throwing InterruptedException. You need to review and understand that mechanism to ensure your code meets your needs within those constraints. See this.
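A minimal sketch of that cooperative cancellation, using java.util.concurrent (the timings are illustrative): the task blocks in Thread.sleep, which responds to cancel(true) by throwing InterruptedException, letting the task clean up and exit.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CancelDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // A task that would block "forever", but cooperates with interruption:
        Future<?> task = pool.submit(() -> {
            try {
                Thread.sleep(60_000);        // stands in for the blocking work
            } catch (InterruptedException e) {
                // cancel(true) interrupted us; clean up and exit
                System.out.println("interrupted");
            }
        });

        TimeUnit.MILLISECONDS.sleep(200);    // crude stand-in for a timeout
        task.cancel(true);                   // true => interrupt if running
        System.out.println("cancelled=" + task.isCancelled());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

If the task instead blocked in a call that ignores interrupts (e.g. uninterruptible native I/O), cancel(true) would set the flag but the thread would keep running - which is exactly the caveat above.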
I'm building an application where I have to use memcached.
I found quite a nice client:
net.spy.memcached.MemcachedClient
With this client everything works great except for one thing - I have a problem closing the connection, and after a while I'm starting to fight a memory leak.
I was looking for a way to close the connection, and I found the "shutdown" method. But if I use this method like this:
MemcachedClient c = new MemcachedClient(
        new InetSocketAddress(memcachedIp, memcachedPort));
c.set(something, sessionLifeTime, memcache.toJSONString());
c.shutdown();
then I have a problem adding anything to memcached - in the logs I see that this code opens the connection, and before it adds anything to memcached, it closes the connection.
Do you have any idea what to do?
Additionally, I found the method c.shutdown(2, TimeUnit.SECONDS), which should close the connection after 2 seconds. But I have a JMX monitor connected to my Tomcat, and I can see that the memcached thread isn't finished after 2 seconds - in fact it never finishes at all...
The reason you are having an issue adding things to memcached like this is that the set(...) method is asynchronous: all it does is put the operation into a queue to be sent to memcached. Since you call shutdown right after, the operation doesn't actually have time to make it out onto the wire. You need to call set(...).get() to make your application thread actually wait for the operation to complete before calling shutdown.
Also, I haven't experienced IO threads not dying after calling shutdown with a timeout. One way you can confirm whether this is an actual bug is by running a standalone program with spymemcached. If the process doesn't terminate when it's done, you've found an issue.
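The shape of the problem is easy to reproduce with plain java.util.concurrent (a stand-in sketch, since it doesn't require a running memcached): an asynchronous call only queues work and returns a Future, so you must call get() on that Future before shutting the client down. In spymemcached itself this is c.set(key, exp, value).get() followed by c.shutdown().

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncSetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService ioThread = Executors.newSingleThreadExecutor();

        // Stand-in for MemcachedClient.set(...): it only queues the work
        // on the client's IO thread and returns a Future immediately.
        Future<Boolean> setResult = ioThread.submit(() -> {
            Thread.sleep(100);              // pretend network round trip
            return Boolean.TRUE;            // "stored"
        });

        // Like set(...).get(): block until the queued operation has really
        // completed, and only then shut the client down.
        System.out.println("stored=" + setResult.get());
        ioThread.shutdown();
    }
}
```

Shutting down before get() would be the race described above: the queued operation is discarded before it ever reaches the wire.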
Earlier this month I asked the question 'What is a runloop?'. After reading the answers and doing some experiments I got it to work, but I still don't understand it completely. If a run loop is just a loop that is associated with a thread, and it doesn't spawn another thread behind the scenes, how can any of the other code in my thread (the main thread, to keep it simple) execute without getting blocked, given that somewhere there is an infinite loop?
That was question number one. Then over to my second.
If I've got something right after having worked with this (though not completely understood it): a run loop is a loop to which you attach 'flags' that notify the run loop that, when it comes to the point where the flag is, it "stops" and executes whatever handler is attached at that point? Afterwards it keeps running to the next one in the queue.
So in this case no events are placed in the queue in connections, but when it comes to events it takes whatever action is associated with tap 1 and executes it before it runs on to connections again, and so on. Or am I as far as I can be from understanding the concept?
"Sort of."
Have you read this particular documentation?
It goes into considerable depth -- quite thorough depth -- into the architecture and operation of run loops.
A run loop will get blocked if it dispatches a method that takes too long or that loops forever.
That's the reason an iPhone app will want to take everything that won't fit into one "tick" of the UI run loop (say, at some animation frame rate or UI response rate), with room to spare for any other event handlers that need to run in that same "tick", and either break it up asynchronously or dispatch it to another thread for execution.
Otherwise stuff will get blocked until control is returned to the run loop.
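The single-threaded dispatch model being described can be sketched as a toy event loop in Java (this is a sketch of the concept, not Apple's CFRunLoop): one thread drains a queue of handlers in order, so a handler that runs too long blocks everything queued behind it.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyRunLoop {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

        // Enqueue some "event handlers", plus a poison pill to stop the loop.
        events.add(() -> System.out.println("handler 1"));
        events.add(() -> System.out.println("handler 2"));
        events.add(() -> { throw new RuntimeException("stop"); });

        // The run loop itself: one thread, one queue, handlers run in order.
        // take() blocks when the queue is empty, which is why an idle run
        // loop doesn't burn CPU; a handler that loops forever would block
        // every event behind it - hence "dispatch long work elsewhere".
        try {
            while (true) {
                events.take().run();
            }
        } catch (RuntimeException stop) {
            // the poison pill exited the loop
        }
    }
}
```

Nothing else on the thread runs "in parallel" with the loop; other code gets CPU time only as handlers dispatched by the loop, which is why the loop doesn't block the thread's other work - that work is the loop's work.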