My question might seem basic, but I am asking out of curiosity. When we write a standalone Java program, it executes and then terminates. But in socket programming, we create a while(true) loop under which the server continuously listens for requests. Is while(true) the only way a server can be created, and if so, won't it cause an out-of-memory exception?
With a while(true) loop there is no out-of-memory exception, as long as you don't have memory leaks inside the loop. If you tried to run endlessly with recursion instead, you would eventually fail due to the growing call stack (in Java, a StackOverflowError).
If a program has to run endlessly (or until some condition, rather than for a fixed number of operations), it needs an endless loop. Sometimes it is called an event loop to sound fancier, but most programs have one somewhere.
Technically you can also write for(;;), which can in theory be better in unoptimized environments because there is no condition to check, but most compilers will optimize while(true) to the same thing anyway.
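For illustration, here is a minimal sketch of such a server: a blocking echo server built around exactly this kind of loop (the port number is made up for the example):

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4000)) { // hypothetical port
            while (true) { // the "endless" loop: one client per iteration
                try (Socket client = server.accept(); // blocks until a client connects
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println(line); // echo each request line back
                    }
                } // per-client resources are closed here
            }
        }
    }
}
```

Note that everything allocated in one iteration becomes unreachable when the iteration ends, so the garbage collector can reclaim it; that is why the loop itself does not exhaust memory.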
I am using the Parallel Computing Toolbox. When my MATLAB code terminates due to an exception/error while a parallel worker is running (e.g. inside an infinite loop), I have to restart the pool by calling delete(gcp) and then rerunning gcp. Otherwise the following runs always freeze. But restarting parallel pools takes too much time.
This does not happen when I manually cancel the future object with cancel(futureObject), but I do not want to do this every time the program terminates unexpectedly.
I can cancel futureObject at the beginning of the MATLAB script, but the object gets removed when I accidentally do clear all, and then there is no way around delete(gcp). Is there a way to cancel all future objects automatically (even when they are no longer in the base workspace) every time I restart the MATLAB script?
I just decided to use a try/catch block with cancel(futureObject) at the top level of the script.
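For reference, a minimal sketch of that top-level guard (futureObject is the parfeval future from the question; runWork stands in for the actual worker function):

```matlab
% Top-level guard: cancel the future on any error instead of
% leaving the pool in a state that forces delete(gcp).
futureObject = parfeval(@runWork, 0);   % runWork is a placeholder
try
    wait(futureObject);                 % ...rest of the script...
catch err
    cancel(futureObject);               % stop the worker cleanly
    rethrow(err);                       % re-raise the original error
end
```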
I know that an interrupt causes the OS to switch a CPU from its current task and to run a kernel routine. In this case, the system has to save the current context of the process running on the CPU.
However, I would like to know whether or not a context switch occurs when any random process makes a system call.
"I would like to know whether or not a context switch occurs when any random process makes a system call."
Not precisely. Recall that a process can only make a system call if it's currently running -- there's no need to make a context switch to a process that's already running.
If a process makes a blocking system call (e.g., sleep()), there will be a context switch to the next runnable process, since the current process is now sleeping. But that's another matter.
There are generally two ways to cause a context switch: (1) a timer interrupt invokes the scheduler, which forcibly makes a context switch, or (2) the process yields. Most operating systems have a number of system services that will cause the process to yield the CPU.
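To make the distinction concrete, here is a small Linux/POSIX sketch (illustrative only): getpid() is a syscall that runs in kernel mode on behalf of the current process and returns without a switch, while nanosleep() blocks, so the scheduler switches to another runnable process:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    /* Non-blocking syscall: we enter the kernel and come straight back;
       the same process keeps the CPU. */
    printf("running as pid %d\n", (int) getpid());

    /* Blocking syscall: the process sleeps, so the kernel context-switches
       to the next runnable process until the timer expires. */
    struct timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
    nanosleep(&ts, NULL);

    puts("back after sleeping");
    return 0;
}
```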
Well, I get your point, so first let me clear up a very basic idea about system calls.
When a process/program makes a syscall, it traps into the kernel to invoke the syscall handler. The CPU loads the kernel stack from the TSS and jumps into the syscall function table.
It's actually the same as running a different part of that program itself; the only major change is that the kernel plays a role here, and that piece of code is executed in ring 0.
Now to your question: what will happen if a context switch happens while a random process is making a syscall?
Well, nothing will happen. Things will work the same way they did before. Instead of a normal address, that random process's TSS will hold an address pointing to the kernel stack, and control goes through the syscall function table.
Ubuntu Linux, 2.6.32-45 kernel, 64-bit, Perl 5.10.1
I connect many new IO::Socket::UNIX stream sockets to a server, and mostly they work fine. But sometimes, in a heavily threaded environment on a faster processor, they return "Resource temporarily unavailable" (EAGAIN/EWOULDBLOCK). I use a timeout on the connect, which causes the sockets to be put into non-blocking mode during the connect. But my timeout period never elapses: it doesn't wait any noticeable time, it returns quickly.
I see that inside IO::Socket, it tries the connect, and if that fails with EINPROGRESS or EAGAIN/EWOULDBLOCK, it does a select to wait for the write bit to be set. This seems normal so far. In my case the select quickly succeeds, implying that the write bit is set, and the code then tries a re-connect. (I guess this is an attempt to pick up any pending error?) Anyway, the re-connect fails again with EAGAIN/EWOULDBLOCK.
In my code this is easy to fix with a retry loop. But I don't understand why, when the socket becomes writeable, the socket is not re-connectable. I thought the select guard was always sufficient for a non-blocking connect. Apparently not; so my questions are:
1. What conditions cause the connect to fail even though the select works (the write bit gets set)?
2. Is there a better way than spinning and retrying to wait for the connect to succeed? The spinning wastes cycles. Instead I'd like to block on something like a select/poll, but I still need a timeout.
"But I don't understand why, when the socket becomes writeable, that the socket is not re-connectable."
I imagine it's because whatever resource you were waiting on became free but was snatched up again before you were able to connect. Replacing the select with a spin loop would not help that.
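If it helps, a retry loop does not have to spin hot; you can back off briefly between attempts and enforce an overall deadline yourself. A minimal sketch (the socket path, back-off, and deadline are invented for the example):

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Errno qw(EAGAIN EWOULDBLOCK);
use Time::HiRes qw(sleep time);

my $path     = '/tmp/example.sock';   # hypothetical server socket
my $deadline = time() + 5;            # overall timeout: 5 seconds

my $sock;
until ($sock = IO::Socket::UNIX->new(Peer => $path, Timeout => 1)) {
    # Keep retrying only for the transient EAGAIN/EWOULDBLOCK case.
    die "connect($path) failed: $!\n"
        unless ($! == EAGAIN || $! == EWOULDBLOCK) && time() < $deadline;
    sleep 0.05;                       # back off instead of spinning
}
# $sock is connected here.
```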
Earlier this month I asked the question 'What is a runloop?'. After reading the answers and doing some experiments, I got it to work, but I still do not understand it completely. If a runloop is just a loop that is associated with a thread, and it doesn't spawn another thread behind the scenes, how can any of the other code in my thread (the main thread, to keep it simple) execute without getting blocked, given that there is an infinite loop somewhere?
That was question number one. Now on to my second.
If I have understood anything from working with this (though not completely): a runloop is a loop to which you attach 'flags' that notify the runloop that, when it comes to the point where the flag is, it "stops" and executes whatever handler is attached at that point? Afterwards it keeps running to the next one in the queue.
So in this case no events are placed in the queue under connections, but when it comes to events it takes whatever action is associated with tap 1 and executes it before it runs on to connections again, and so on. Or am I as far as I can be from understanding the concept?
"Sort of."
Have you read this particular documentation?
It goes into considerable depth -- quite thorough depth -- on the architecture and operation of run loops.
A run loop will get blocked if it dispatches a method that takes too long or that loops forever.
That's the reason an iPhone app will want to take any work that won't fit into one "tick" of the UI run loop (say, at some animation frame rate or UI response rate), leaving room to spare for any other event handlers that need to run in that same "tick", and either break it up asynchronously or dispatch it to another thread for execution.
Otherwise stuff will get blocked until control is returned to the run loop.
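The essence can be sketched in a few lines of C (a conceptual toy, not Apple's implementation; the event source here is a stub that just sleeps):

```c
#include <stdio.h>
#include <unistd.h>

typedef int event_t;

/* Stand-in for blocking in the kernel until an event arrives;
   the thread sleeps here, so the loop does not busy-spin. */
static event_t wait_for_next_event(void) {
    static int n = 0;
    sleep(1);
    return n++;
}

/* If a handler loops forever or takes too long, the whole loop --
   and therefore the UI it drives -- is blocked until it returns. */
static void dispatch(event_t ev) {
    printf("handling event %d\n", ev);
}

int main(void) {
    for (;;) {                        /* the run loop itself */
        event_t ev = wait_for_next_event();
        dispatch(ev);
        if (ev >= 2) break;           /* exit so the sketch terminates */
    }
    return 0;
}
```

One thread alternates between waiting and dispatching; nothing runs concurrently behind the scenes, which is why a long handler blocks everything else.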
I have an application using POE which has about 10 sessions doing various tasks. Over time, the app starts consuming more and more RAM and this usage doesn't go down even though the app is idle 80% of the time. My only solution at present is to restart the process often.
I'm not allowed to post my code here, so I realize it is difficult to get help, but maybe someone can tell me what I can do to find out myself?
Don't expect the process size to decrease. Memory isn't released back to the OS until the process terminates.
That said, might you have reference loops in data structures somewhere? AFAIK, the Perl garbage collector can't sort out reference loops.
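For example, a structure like the following (invented for illustration) is never reclaimed by Perl's reference counting unless one link in the ring is weakened:

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

# A reference loop: each hash keeps the other's refcount above zero,
# so neither is ever freed by reference counting alone.
my $parent = { name => 'parent' };
my $child  = { name => 'child', parent => $parent };
$parent->{child} = $child;

# One fix: make the back-reference weak so it no longer
# contributes to the reference count.
weaken($child->{parent});
```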
Are you using any XS modules anywhere? There could be leaks hidden inside those.
A guess: your program executes a loop for as long as it is running, and in this loop it may be allocating memory for a buffer (or more) each time some condition occurs. Since the scope is never exited, the memory remains and will never be cleaned up. I suggest you check for something like this. If it is the case, place the allocating code in a sub that you call from the loop; the data will go out of scope, and get cleaned up, on return to the loop.
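A minimal sketch of that restructuring (the event source and buffer size are invented):

```perl
use strict;
use warnings;

my @queue = (1 .. 3);                  # stand-in for a real event source
sub next_event { shift @queue }

sub handle_event {
    my ($event) = @_;
    my @buffer = (0) x 1_000_000;      # large allocation, confined to this sub
    print "handled event $event\n";
}                                      # @buffer goes out of scope here

while (defined(my $event = next_event())) {
    handle_event($event);              # memory is reclaimable after each call
}
```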
It looks like Test::Valgrind is a tool for finding memory leaks. I've never used it myself, though (but I have used plain valgrind with C source).
One technique is to periodically dump the contents of $POE::Kernel::poe_kernel to a time- or sequence-named file. $poe_kernel is the root of a tree spanning all known sessions and the contents of their heaps. The snapshots should monotonically grow if the leaked memory is referenced. You'll be able to find out what's leaking by diff'ing an early snapshot with a later one.
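A sketch of one way to take those snapshots, assuming Data::Dumper output is good enough for diffing (the file naming is invented):

```perl
use strict;
use warnings;
use Data::Dumper;
use POE;

my $seq = 0;
sub snapshot_kernel {
    my $file = sprintf 'kernel-%03d.dump', $seq++;
    open my $fh, '>', $file or die "open $file: $!";
    local $Data::Dumper::Sortkeys = 1;   # stable key order, diff-friendly
    print {$fh} Dumper($POE::Kernel::poe_kernel);
    close $fh;
}
```

Call it from a periodic timer in one of your sessions, then diff consecutive dumps to see which structures keep growing.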
You can export POE_ASSERT_DATA=1 to enable POE's internal data consistency checks. I don't expect it to surface problems, but if it does I'd be very happy to receive a bug report.
Perl cannot resolve reference rings. Either you have zombies (which you can detect via ps axl) or you have a memory leak (reference rings/cycles).
There are a ton of programs to detect memory leaks.
strace, mtrace, Devel::LeakTrace::Fast, Devel::Cycle
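As an example of the last one, Devel::Cycle's find_cycle reports reference rings in a structure you hand it (the ring below is constructed deliberately):

```perl
use strict;
use warnings;
use Devel::Cycle;   # CPAN module

# Build a deliberate reference ring, then ask find_cycle to report it.
my $x = {};
my $y = { peer => $x };
$x->{peer} = $y;

find_cycle($x);     # prints each cycle it finds to STDOUT
```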