DOS is always given as an example of a single-tasking operating system. However, when a command is issued at the command line, control switches from the shell to the command and then switches back to the shell when the command completes. Thus there are two processes executing simultaneously. Is there something wrong in my understanding?
No, they weren't executing simultaneously.
COMMAND.COM had a resident portion that was in memory all the time and a transient portion that could be tossed out at will.
When you ran a program, it typically got loaded in place of the transient portion and then run. When the program exited, it did so by calling code in the resident portion which would then reload the transient portion if necessary and continue.
The fact that some of the code remained resident in no way means that it was "running". In a similar way, vast tracts of MS-DOS (the kernel) stayed continuously in memory yet they weren't "running", unless called explicitly by a non-kernel program.
Now, there were things that could be said to run concurrently. DOS had plenty of TSR (terminate and stay resident) programs that would run, hook into an interrupt or into DOS in some way, then exit but leave some memory allocated (where their code was).
Then, in response to certain events, that code would be run. Perhaps the most famous of these was Borland Sidekick, a personal information manager that would pop up instantly with a keypress.
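As a rough sketch, hooking an interrupt and staying resident looked something like the following with a 16-bit DOS compiler such as Borland Turbo C (the interrupt keyword and the getvect/setvect/keep helpers are Borland-specific, and a real TSR computes its exact resident size instead of guessing):

#include <dos.h>

void interrupt (*old_tick)(void);   /* previously installed INT 1Ch handler */
volatile unsigned long ticks = 0;

/* our handler: do a tiny bit of work, then chain to the old handler */
void interrupt new_tick(void)
{
    ticks++;
    old_tick();
}

int main(void)
{
    old_tick = getvect(0x1C);   /* remember the old timer-tick vector      */
    setvect(0x1C, new_tick);    /* hook the timer tick                     */
    keep(0, 4096);              /* exit but stay resident; 4096 paragraphs */
                                /* (64 KB) is only a crude over-estimate   */
    return 0;                   /* never reached                           */
}

After keep() terminates the program, the code stays in memory and new_tick keeps being called on every timer tick, even though no "program" is running in the foreground.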
While the other process is running, the command line processor is not running: it is suspended. The only "multitasking" facility that was available in DOS was "Terminate and Stay Resident".
It doesn't matter whether you are running DOS or Windows or Linux or BSD or whatever on that processor; it is all the same. At that period of time you had, for the purposes of this discussion, a single execution unit, a single core, executing the instructions, mostly in order. It makes no difference whether those instructions wear the name DOS or Linux or Windows. They are just instructions.
Just like now, as then: when a Windows program decides to terminate, it tries to do it nicely with some flavor of exit call. When a Linux program terminates, it tries to do so nicely with some flavor of exit call to the system. And when a DOS program terminates, it tries to do so nicely with some flavor of exit call to the system. In a shell or command prompt (Linux, Windows, DOS), the shell, which is a program itself, loads and branches to the program you have loaded; your program runs for a while and, as mentioned, tries to return to the prior program nicely with some flavor of exit. Just as the shell you were running tries to return nicely when it is done running.
As with Linux or Windows, though it was easier to see back then, you don't run anything "at the same time" or "in parallel"; it is one instruction stream at a time. (Today we have multiple execution units and/or cores designed to each be doing something in parallel, with something managing them, so today you actually can say "in parallel".) To switch "tasks" or "threads" or "processes" you needed an interrupt, which switched you to different code, an interrupt handler, and that handler could return to the same program that was interrupted or switch to another. You can put whatever name on it you want; that is how you make things look like they are running at the same time. DOS, Linux, Windows, etc.: this is typically how you switch from one "program" or bit of code to another. Linux and Windows have their kernels and operating systems behind them that get called during those interrupts, and DOS had that as well. (DOS has that, in fact: DOS is still alive, you most likely touch a DOS machine every few days (gas pump, ATM, etc.), and DOS is also still used in the development and testing of x86 motherboards/computers. Nothing can compete with it as an embedded x86 platform, nothing has the freedom DOS has to do what you want, which is why BIOS upgrades are still distributed as DOS programs.)

The interrupt handlers would give time slices to the various BIOS handlers and DOS handlers. Task/process/thread switching was not as designed or planned as in an operating system like Linux or Windows, but it was there; for each version of DOS there were rules you followed and you could switch tasks (TSRs being the popular term). Even just talking to a floppy, a hard disk, etc. there was code involved in the whole process, it wasn't buried in the hardware, and lots of things happened in parallel, no different than a hard disk controller driver in something more complicated like Linux or Windows. At least one, maybe some, non-Microsoft DOS clones could multitask.
The short answer: when you have a function bob() that calls a function ted(),
void ted(void);                 /* defined elsewhere */

int bob ( int something )
{
    /* ...some code */
    /* ...more code */
    ted();                      /* bob is suspended here until ted() returns */
    /* ...some code */
    /* ...more code */
    return something;
}
is bob() still running? Are they running in parallel? No. The bob() code is still there, somewhere, waiting for the ted() code to finish what it was doing and return. As long as ted() doesn't crash, it will return and bob() can continue to execute; bob is suspended while ted executes. It is not much different with a shell or command line in a more complicated operating system. There is some function somewhere that has loaded your program into memory and called it; it might be a fork or clone of the command line you were running, so that that command line can continue "in parallel", or the clone can continue in parallel. But the concept is the same.
The difference from a trivial C program like the one above is that the code above can be thought of as being resolved at compile time, whereas loading and running a program is definitely runtime: basically self-modifying code, the program modifies memory and then jumps to it. When it returns, that code cleans up, unwinds, and exits itself or waits for another command, depending on the design. DOS was just very, very simple: a bunch of system calls, combined with a bunch of BIOS calls, and a very simple command line that could load programs and do a small number of other commands. It didn't have any rules you couldn't get around (Windows is a DOS program), and if the program you launched didn't want to return (you could, at least at the time, launch Linux from DOS through an intermediate DOS program), well, that kind of messes up your question of what happens when the program completes: Linux didn't return, it took over the system.
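On a Unix-like system the load-run-return dance the shell does looks roughly like this minimal sketch (the "ls -l" command is just an example); DOS did the equivalent in a single EXEC system call (INT 21h, function 4Bh):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* clone the "shell"            */
    if (pid == 0) {
        /* child: replace ourselves with the program to run */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                     /* only reached if exec failed  */
    }
    /* parent ("shell"): suspended in waitpid until the child exits,
       exactly like bob() waiting for ted() to return */
    int status;
    waitpid(pid, &status, 0);
    printf("child finished, back at the prompt\n");
    return 0;
}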
Related
I have this problem:
I run some large calculations before going to sleep (or work).
When I return, sometimes RAM is already filled and the program starts writing to disk, which is a problem since the computer then becomes almost non-responsive; also, the button "Interrupt the current operation" doesn't stop mserver.exe from executing a task.
This is what I saw 10 mins after I pressed the button "Interrupt the current operation":
Not to mention that calculations are probably 100 or even 1000 times slower once it starts using the disk instead of RAM (so it's pointless anyway).
Another problem is that I was unable to save some variables to a file, since in Maple I couldn't type anything while mserver.exe was executing a task, and after I killed the mserver.exe process I was still unable to save those variables, since Maple commands don't work once the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (I mean from Maple alone, not by disabling the page file in Windows) and just stops execution automatically when RAM is full (just like Classic Maple does when it hits the 2 GB limit)?
Also, it would be nice to be able to limit how much of the processor Maple uses, for example to 75% or so, so that I could work on that computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for some more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in a call to the maple script that launches Maple. On MS-Windows you could add that to the command string/Property in the launcher (icon).
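For instance, on Linux/OSX the launch might end up looking something like the line below, with memorysize replaced by an amount in whatever form that help page describes (I haven't checked the exact syntax here):

maple --init-reserve-mem=memorysize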
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
As far as restricting the CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or 3rd party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start
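For example, from a Command Prompt something like the following would launch a program at reduced priority (the path is only a placeholder for whatever your Maple shortcut actually points at):

rem the quoted path below is just a placeholder
start "" /belownormal "C:\path\to\maple\launcher.exe"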
I often call computationally intensive command-line programs from within MATLAB using the system command:
[status, result] = system(cmd_line_for_my_low_level_exe, '-echo');
where the -echo option (supposedly) echoes console output (stdout) generated by low_level_exe in the MATLAB command window.
On Linux machines this works great, with MATLAB echoing the console output in (seemingly) real-time. Users get a nice continuous update on low_level_exe's progress.
On Windows machines this is not the case. It can often be many minutes in-between echoes, and users sometimes get impatient and assume the code has crashed...
Is there a way to increase/control the frequency of MATLAB's -echo, or possibly another, better option entirely? (I'd prefer to stay away from mex files to maintain compatibility with Octave).
Is this actually a MATLAB issue, or just a Linux/Windows incompatibility?
I'm having a small difficulty with Fortran 90 and Eclipse. I installed the "Photran" plugin for Eclipse, and have managed to compile everything perfectly; overall the program does what it has to do. The problem comes when displaying text in the Eclipse console. The code itself is not that important, since it does what it has to do; the issue is the output generation.
The piece of the code I'm having trouble with is the following:
subroutine main_program
write(*,*) "Program begins!"
<Program that takes ~5mins to run>
write(*,*) "Program ends!"
end subroutine main_program
Specifically, the problem is that the first message, "Program begins!", should be shown in the console immediately, and after ~5 minutes it should show "Program ends!". Instead, both of these messages get displayed only after the program is done running, not while the program is executing.
I have used:
subroutine main_program
print*, "Program begins!"
<Program that takes ~5mins to run>
print*, "Program ends!"
end subroutine main_program
but it keeps on doing the same thing. I saw a "similar" post earlier (can't find the link though, sorry about that) but it was not really what I was looking for.
OK, here's the answer. Insert the statement
flush 6
after the first write statement to have its output sent immediately to the console. Insert it anywhere else you wish once you understand what it is doing.
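Applied to the snippet from the question, that looks like this (assuming, as discussed below, that unit 6 is connected to *):

subroutine main_program
    write(*,*) "Program begins!"
    flush 6    ! push the buffered line out to the console right away
    <Program that takes ~5mins to run>
    write(*,*) "Program ends!"
end subroutine main_program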
It is obvious (to me) from the situation the OP describes that the output is being buffered; that is, the program issues a write statement and passes the output off to the operating system, which does as it damn well pleases -- here it waits until the program ends before writing anything to the console. I guess that its buffering capabilities have some limits, and if the program exceeded them the o/s would empty its buffers prior to program end.
Fortran now (since 2003 I think) provides a standard way of telling the o/s to actually flush the buffer to the output device -- the flush statement. In its simplest form flush takes only one argument, the unit number of the output channel to be flushed. I guessed that OP had unit 6 connected to stdout (aka *), since this is a near-universal default configuration, though not one guaranteed by the Fortran language standard.
I don't think that flush * is correct.
If you have a pre-2003 compiler then (a) for Backus' sake update and (b) it is likely that it supports a non-standard way to flush buffers; if memory serves gfortran used to provide a subroutine which would be called something like call flush(6).
There are other ways, outside Fortran, to tell the o/s to write to disk (or console or what have you) immediately. Look at the documentation for your o/s if you are interested in them.
I'm using a few system() commands in my Perl script that runs on Linux.
The commands I run with the system() function output their data to a log which I then parse to decide what to do next.
I noticed that sometimes it looks like the code that parses the log file (which comes after the system() call) doesn't see the final log.
For example, I search for a "test pass" phrase in the log file - and the script doesn't find it even though when I open the log it is there.
Another example - I try to delete the folder where the log was placed but it doesn't let me because it's "Not empty". When I try to delete it manually it is deleted with errors.
(These examples happen every now and then, but most of the time they don't)
It looks like some kind of "timing" problem to me. How can I solve it?
If you want to be safe and are on Linux, call system "sync"; after every command that writes to the disk and before reading from the disk. That will force the OS to write everything still buffered out to the filesystem and only return afterwards. Thus you can be sure that, once it has finished, everything you wrote to files has actually arrived there.
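In the script that pattern would look something like this sketch (run_test and test.log are just made-up names):

use strict;
use warnings;

# hypothetical external command that writes its results to test.log
system("./run_test > test.log 2>&1") == 0
    or die "run_test failed: $?";

# ask the OS to flush everything still buffered out to the filesystem
system("sync") == 0
    or warn "sync failed: $?";

# only now parse the (final) log
open my $fh, '<', 'test.log' or die "cannot open test.log: $!";
while (my $line = <$fh>) {
    if ($line =~ /test pass/) {
        print "found it\n";
        last;
    }
}
close $fh;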
But be aware, that may be overkill in many situations. There are reasons for those buffers and manually calling sync constantly is most likely not the fastest way of achieving things. See also http://linux.die.net/man/8/sync
If, for example, you have something else you could do between the writing and the reading, like some calculations or whatever, that would likely be enough, and you would not waste time telling the OS that you know better how and when it has to do its jobs. ^^
But if perfect efficiency is not your main concern (and you should not be using Perl if it were), system "sync"; after everything that modifies files and before accessing those files is probably okay and safe.
I have a Perl program based on IO::Async, and it sometimes just exits after a few hours/days without printing any error message whatsoever. There's nothing in dmesg or /var/log either. STDOUT/STDERR both have autoflush(1) set, so data shouldn't be lost in buffers. It doesn't actually exit from IO::Async::Loop->loop_forever -- a print I put there just to make sure of that never gets triggered.
Now, one way would be to keep peppering the program with more and more prints and hope one of them gives me some clue. Is there a better way to get information about what was going on in a program that made it exit/silently crash?
One trick I've used is to run the program under strace or ltrace (or attach to the process using strace). Naturally that was under Linux. Under other operating systems you'd use ktrace or dtrace or whatever is appropriate.
A trick I've used for programs which only exhibit sparse issues over days or weeks, and then only on a handful among hundreds of systems, is to direct the output from my tracer to a FIFO, and have a custom program keep only 10K lines in a ring buffer, with a handler on SIGPIPE and SIGHUP to dump the current buffer contents into a file. (It's a simple program, but I don't have a copy handy and I'm not going to re-write it tonight; my copy was written for internal use and is owned by a former employer.)
The ring buffer allows the program to run indefinitely without fear of running systems out of disk space ... we usually only need a few hundred, or at most a couple thousand, lines of the trace in such matters.
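The shape of such a ring-buffer reader is roughly this (just a sketch, not the original program; the FIFO path, the line count and the dump file name are all made up):

#!/usr/bin/perl
use strict;
use warnings;

my $fifo = '/tmp/trace.fifo';    # hypothetical: the tracer's output is redirected here
my $max  = 10_000;               # keep roughly the last 10K lines
my @ring;

sub dump_ring {
    open my $out, '>', "/tmp/trace.dump.$$" or return;
    print {$out} @ring;
    close $out;
}

$SIG{HUP}  = \&dump_ring;                  # kill -HUP <pid> to snapshot the buffer
$SIG{PIPE} = sub { dump_ring(); exit 0 };  # dump and quit if the pipe breaks

open my $in, '<', $fifo or die "cannot open $fifo: $!";
while (my $line = <$in>) {
    push @ring, $line;
    shift @ring while @ring > $max;
}
dump_ring();    # tracer went away; keep whatever we still have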
If you are capturing STDERR, you could start the program as perl -MCarp::Always foo_prog. Carp::Always forces a stack trace on all errors.
A sudden exit without any error message is possibly a SIGPIPE. Traditionally SIGPIPE is used to stop things like the cat command in the following pipeline:
cat file | head -10
It doesn't usually result in anything being printed either by libc or perl to indicate what happened.
Since in an IO::Async-based program you'd not want to silently exit on SIGPIPE, my suggestion would be to put somewhere in the main file of the program a line something like
$SIG{PIPE} = sub { die "Aborting on SIGPIPE\n" };
which will at least alert you to this fact. If instead you use Carp::croak without the \n you might even be lucky enough to get the file/line number of the syswrite, etc... that caused the SIGPIPE.
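That variant would be something along the lines of:

use Carp;
$SIG{PIPE} = sub { Carp::croak "Aborting on SIGPIPE" };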