Fortran and Eclipse: Displaying text in console

I'm having a small difficulty with Fortran 90 and Eclipse. I installed the "Photran" plugin for Eclipse and have managed to compile everything perfectly; overall the program does what it has to do. The problem comes when displaying text in the Eclipse console. The code itself is not that important, since it does what it has to do; the trouble is with the output generation.
The piece of the code I'm having trouble with is the following:
subroutine main_program
write(*,*) "Program begins!"
<Program that takes ~5mins to run>
write(*,*) "Program ends!"
end subroutine main_program
Specifically, the problem is that the first message, "Program begins!", should be shown in the console immediately, and after ~5 minutes it should show "Program ends!". Instead, both messages get displayed only after the program is done running, not while the program is executing.
I have used:
subroutine main_program
print*, "Program begins!"
<Program that takes ~5mins to run>
print*, "Program ends!"
end subroutine main_program
but it keeps on doing the same thing. I saw a "similar" post earlier (can't find the link though, sorry about that) but it was not really what I was looking for.

OK, here's the answer. Insert the statement
flush 6
after the first write statement to have its output sent immediately to the console. Insert it anywhere else you wish once you understand what it is doing.
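Applied to OP's outline, it would look something like this (a sketch; the placeholder line is OP's, and it assumes unit 6 is connected to the console, as explained below):
subroutine main_program
    write(*,*) "Program begins!"
    flush 6    ! push the buffered line out to the console right away
    ! <Program that takes ~5mins to run>
    write(*,*) "Program ends!"
end subroutine main_program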
It is obvious (to me) from the situation OP describes that the output is being buffered: the program issues a write statement and passes the output off to the operating system, which does as it damn well pleases -- here it waits until the program ends before writing anything to the console. I guess that its buffering capabilities have some limits, and if the program exceeded them the o/s would empty its buffers prior to program end.
Fortran now (since 2003 I think) provides a standard way of telling the o/s to actually flush the buffer to the output device -- the flush statement. In its simplest form flush takes only one argument, the unit number of the output channel to be flushed. I guessed that OP had unit 6 connected to stdout (aka *), since this is a near-universal default configuration, though not one guaranteed by the Fortran language standard.
I don't think that flush * is correct.
If you have a pre-2003 compiler then (a) for Backus' sake update and (b) it is likely that it supports a non-standard way to flush buffers; if memory serves gfortran used to provide a subroutine which would be called something like call flush(6).
There are other ways, outside Fortran, to tell the o/s to write to disk (or console or what have you) immediately. Look at the documentation for your o/s if you are interested in them.
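On Linux, for instance, the GNU coreutils tool stdbuf can turn off output buffering for a program without modifying its source (a sketch; ./myprogram stands in for your executable, and whether this helps depends on whether the Fortran runtime buffers through C stdio, which is not guaranteed):
stdbuf -o0 ./myprogram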

Related

How to pinpoint where in the program a crash happened using .dmp and WinDbg?

I have a huge application (made in PowerBuilder) that crashes every once in a while, so it is hard to reproduce this error. We have it set up so that when a crash like this occurs, we receive a .dmp file.
I used WinDbg to analyze my .dmp file with the command !analyze -v. From this I can deduce that the error that occurred was an Access Violation C0000005. Based on the [0] and [1] parameters, it attempted to dereference a null pointer.
WinDbg also showed me STACK_TEXT consisting of around 30 lines, but I am not sure how to read it. From what I have seen I need to use some sort of symbols.
First line of my STACK_TEXT is this:
00000000`00efca10 00000000`75d7fa46 : 00000000`10df1ae0 00000000`0dd62828 00000000`04970000 00000000`10e00388 : pbvm!ob_get_runtime_class+0xad
From this, my goal is to analyze this file to figure out where exactly in the program this error happened or which function it was in. Is this something I will be able to find after further analyzing the stack trace?
How can I pinpoint where in the program a crash happened using .dmp and WinDbg so I can fix my code?
If you analyze a crash dump with !analyze -v, the lines after STACK_TEXT are the stack trace. The output is equivalent to kb, provided you have set the correct thread and context.
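For a crash dump that usually means switching to the saved exception context first, for example (a sketch of a WinDbg session, with output elided):
0:000> .ecxr
0:000> kb
.ecxr restores the register context from the exception record stored in the dump, so that the subsequent kb walks the stack of the thread that crashed.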
The output of kb is
Child EBP
Return address
First 4 values on the stack
Symbol
The backticks ` tell you that you are running in 64 bit and they split the 64 bit values in the middle.
On 32 bit, the first 4 parameters on the stack were often equivalent to the first 4 parameters to the function, depending on the calling convention.
On 64 bit, the stack is not so relevant any more, because with the 64 bit calling convention, parameters are passed via registers. Therefore you can probably ignore those values.
The interesting part is the symbol like pbvm!ob_get_runtime_class+0xad.
In front of ! is the module name, typically a DLL or EXE name. Look for something that you built. After the ! and before the + is a method name. After the + is the offset in bytes from the beginning of the function.
As long as you don't have functions with thousands of lines of code, that number should be small, like < 0x200. If the number is larger than that, it typically means that you don't have correct symbols. In that case the method name is no longer reliable: it's probably just the last known (the last exported) method name, and the crash may be a faaaar way from there, so don't trust it.
In case of pbvm!ob_get_runtime_class+0xad, pbvm is the DLL name, ob_get_runtime_class is the method name and +0xad is the offset within the method where the instruction pointer is.
To me (not knowing anything about PowerBuilder) PBVM sounds like the PowerBuilder Virtual Machine. So that's not your code, it's the code compiled by Sybase. You'd need to look further down the call stack to find the culprit code in your DLL.
After reading Wikipedia, it seems that PowerBuilder does not necessarily compile to native code, but to intermediate P-Code instead. In this case you're probably out of luck, since your code is never really on the call stack and you need a special debugger or a WinDbg extension (which might not exist, like for Java). Run it with the -pbdebug command line switch or compile it to native code and let it crash again.

Matlab MCR program errors corrupt terminal

I have compiled a Matlab routine using the MCR and deployed it to other computers without having Matlab installed on them. So far, so good. But of course, the routine is not completely error-free, particularly the GUI part. The problem is that when the MCR tries to write the error message to the terminal, it seems to corrupt the terminal so that nothing is legible any more -- not even the prompt. Sometimes I also get an extra window, vaguely resembling the Matlab editor window, full of illegible ASCII characters. Does anyone know what is causing this, or how to avoid it?
My first attempt was a big try-catch block around everything, but whatever it is still seems to get through. The catch block just tries to divert the error to an errordlg rather than the command prompt:
catch e
errordlg({e.message;['in: ',e.stack.name]})
end
MATLAB Compiler does not support command window functions.
Peter Webb explains on Loren's blog:
Certain MATLAB functions cannot be deployed because they act on
objects that are not present in a deployed application. For example,
since deployed applications have no command window, functions that
modify the command window can't be deployed.
So, you probably need to get rid of any function that prints to the command window.
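One way to do that without losing the output entirely is to guard it with isdeployed, which is true when running as a compiled application (a sketch; msg stands in for whatever your routine prints):
if ~isdeployed
    disp(msg)    % only write to the command window when running inside MATLAB itself
end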
Also, you can check out the mccExcludedFiles.log file.

Is there a standard way for perl to behave when it runs out of memory?

Is there a standard(ish) way for a Perl interpreter (aka "perl") to behave when it runs out of memory? Is it documented/specced-out in any way? Coded in some uniform way?
I'm especially interested in any standards which are expressed as a covenant to the Perl code being run -- e.g., will die be called? Will END blocks be executed? Etc.
I'm fine with both a "theoretical" answer (e.g. some sort of generic "this is what Perl code ought to do in general on out-of-memory" mission-statement document from Larry/P5P/etc., even if not 100% of malloc() calls follow this rule) and a "practical" statement (e.g. all malloc() calls in Perl are wrapped into a generic "allocate_memory" function which uniformly handles all failures).
It may be possible that the answer depends on what specifically causes out of memory (e.g. a request for more memory for Perl code's data structure vs. memory allocated by internal Perl code unrelated to explicit "need to store more data" logic in Perl program).
If the answer is extremely implementation-dependent, assume perl for Solaris/Linux, and narrowing down to any recent stable version (5.8 to 5.16) is acceptable.
The question is limited to the standard Perl interpreter, however you wish to define that as far as pre-compile configuration goes (e.g. the perl that comes with a major Linux distribution, or one compiled with all defaults left alone, etc.).
NOTE: This question came out of Gilles's comment to another Q
Taking a look at the perldiag manual page, which documents the diagnostics Perl issues (and whose longer explanations are printed when the "use diagnostics" pragma is enabled), you can see several different types of "out of memory" errors and what they mean.
So you can infer the "standard" behavior from these messages; the one with an exclamation point ("Out of memory!") sounds like the one you're asking about:
Out of memory!
(X) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. Perl has no option but to exit immediately.
An "X" level error is labeled as "A very fatal error (nontrappable)."
However, if it's a "large request" (for more than 64K) it is trappable (I guess Perl assumes it'll still have enough memory to shut down cleanly).
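You can see that distinction in the trappable case: a single oversized request dies with a normal, catchable fatal error, so eval can intercept it (a sketch; whether the allocation actually fails depends on how much memory and overcommit the machine has):
#!/usr/bin/perl
use strict;
use warnings;

my $big;
eval {
    $big = 'x' x (50 * 1024 * 1024 * 1024);   # ask for a ~50 GB string
    1;
} or print "caught: $@";    # e.g. "Out of memory during ... request"
print "still alive after the failed allocation\n";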

How did DOS execute multiple processes simultaneously?

DOS is always given as an example of a single-tasking operating system. However, when a command is issued through the command line, control switches from the shell to the command and then switches back to the shell when the command completes. Thus there are two processes executing simultaneously. Is there something wrong with my understanding?
No, they weren't executing simultaneously.
COMMAND.COM had a resident portion that was in memory all the time and a transient portion that could be tossed out at will.
When you ran a program, it typically got loaded in place of the transient portion and then run. When the program exited, it did so by calling code in the resident portion which would then reload the transient portion if necessary and continue.
The fact that some of the code remained resident in no way means that it was "running". In a similar way, vast tracts of MS-DOS (the kernel) stayed continuously in memory yet they weren't "running", unless called explicitly by a non-kernel program.
Now, there were things that could be said to run concurrently: DOS had plenty of TSR (terminate and stay resident) programs that would run, hook into an interrupt or into DOS in some way, then exit while leaving some memory allocated (where their code was).
Then, in response to certain events, that code would be run. Perhaps the most famous one was Borland SideKick, a personal information manager that would pop up instantly with a keypress.
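In outline, a TSR hooks an interrupt vector and then exits while asking DOS to keep its memory (a sketch using the old Borland/Turbo C dos.h helpers, which are compiler-specific, not standard C):
#include <dos.h>

void interrupt (*old_timer)(void);   /* previous INT 8 (timer tick) handler */

void interrupt my_timer(void)
{
    /* ...do a tiny bit of work on every timer tick... */
    old_timer();                     /* chain to the original handler */
}

int main(void)
{
    old_timer = getvect(0x08);       /* save the timer interrupt vector */
    setvect(0x08, my_timer);         /* hook it */
    keep(0, 256);                    /* terminate, staying resident (256 paragraphs) */
    return 0;
}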
While the other process is running, the command line processor is not running: it is suspended. The only "multitasking" facility that was available in DOS was "Terminate and Stay Resident".
It doesn't matter whether you are running DOS or Windows or Linux or BSD or whatever on that processor: it is all the same. In that period you had, for purposes of this discussion, a single execution unit, a single core executing the instructions, mostly in order. It makes no difference whether those instructions bear the name DOS or Linux or Windows. They are just instructions.
Just like now, when a Windows program decides to terminate it tries to do so nicely with some flavor of exit call; a Linux program terminates with some flavor of exit call to the system; and a DOS program does the same. In a shell or command prompt -- Linux, Windows, or DOS -- the shell, which is a program itself, loads and branches to the program you have launched; your program runs for a while and, as mentioned, tries to return to the prior program nicely with some flavor of exit. Just like the shell itself, which, when it is done running, tries to return nicely too.
As with Linux or Windows -- it was just easier to see back then -- you don't run anything "at the same time" or "in parallel": you run one instruction stream at a time. (Today we have multiple execution units and/or cores, each designed to be doing something in parallel with something managing them, so today you actually can say "in parallel".) To switch "tasks" or "threads" or "processes" you needed an interrupt, which switched you to different code, an interrupt handler, and that handler could return to the same program that was interrupted or switch to another. You can put whatever name on it you want; that is how you make things look like they are running at the same time. DOS, Linux, Windows, etc.: this is typically how you switch from one "program" or bit of code to another.
Linux and Windows have their kernels and operating systems behind them, called during the interrupts, and DOS had that as well. (DOS has that -- DOS is still alive; you touch a DOS machine every few days most likely: gas pumps, ATMs, etc. DOS is also still used in the development and testing of x86 motherboards/computers; nothing can compete with it as an embedded x86 platform, nothing has the freedom that DOS has to do what you want, and this is why BIOS upgrades are still distributed as DOS programs.) The interrupt handlers would give time slices to the various BIOS handlers and DOS handlers. Task/process/thread switching was not as designed or planned as in an operating system like Linux or Windows, but it was there; for each version of DOS there were rules you followed, and you could switch tasks (TSRs are the popular term). Just talking to a floppy, a hard disk, etc. involved code in the whole process; it wasn't buried in the hardware, and lots of things happened in parallel -- no different from a hard-disk-controller driver in something more complicated like Linux or Windows. At least one, maybe some, non-Microsoft DOS clones could multitask.
The short answer: when you have a function bob() that calls a function ted(),
int bob ( int something )
{
    /* ...some code... */
    ted();            /* bob is suspended here until ted returns */
    /* ...more code... */
    return something;
}
is bob() still running? Are they running in parallel? No. The bob() code is still there, somewhere, waiting for the ted() code to finish what it was doing and return. So long as ted() doesn't crash, it will return and bob() can continue to execute; bob() is suspended while ted() executes. It is not much different with a shell or command line in a more complicated operating system: there is some function somewhere that has loaded your program into memory and called it. It might be a fork or clone of the command line you were running, so that the command line (or the clone) can continue "in parallel", but the concept is the same.
The difference from a trivial C program like the one above is that the code above can be thought of as being resolved at compile time, whereas loading and running a program is definitely runtime -- basically self-modifying code: the program modifies memory, then jumps to it. When the loaded program returns, that code cleans up, unwinds, and exits itself or waits for another command, depending on the design. DOS was just very, very simple: a bunch of system calls, combined with a bunch of BIOS calls, and a very simple command line that could load programs and do a small number of other commands. It didn't have any rules you couldn't get around (Windows was a DOS program). And if the program you launched didn't want to return -- you could, at least at the time, launch Linux from DOS through an intermediate DOS program -- well, that rather messes up your question of what happens when the program completes: Linux didn't return, it took over the system.

How can I debug a Perl program that suddenly exits?

I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually exit from IO::Async::Loop->loop_forever -- a print I put there just to make sure of that never gets triggered.
Now one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to get information about what was going on in the program that made it exit or silently crash?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
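For example, on Linux (a sketch; substitute your program's actual process id for <pid>):
strace -f -tt -o /tmp/trace.out -p <pid>
Here -f follows forked children, -tt adds timestamps to each call, and -o sends the trace to a file; when the program dies, the tail of that file shows the last system calls it made, and often the fatal signal that killed it.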
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on handfuls among hundreds of systems, is to direct the output from my tracer to a FIFO and have a custom program keep only the last 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space; we usually only need a few hundred, or at most a couple thousand, lines of the trace in such matters.
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n, you might even be lucky enough to get the file/line number of the syswrite, etc. that caused the SIGPIPE.
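That suggestion as code (a sketch; Carp is a core module, so nothing extra needs installing):
use Carp;
$SIG{PIPE} = sub { Carp::croak "Aborting on SIGPIPE" };
croak reports the error from the perspective of the caller, which is what can give you the location of the failing write rather than the location of the handler itself.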