In both GW-BASIC and QuickBASIC, statements are passed arguments, some of which are optional and can be omitted depending on the statement:
REM Move the text cursor to the specified row and column.
LOCATE row%, column%
REM Move the text cursor to the specified column without changing the row.
LOCATE , column%
In GW-BASIC, the CLEAR statement is rather unusual in that its first "argument" is always omitted:
CLEAR , basicMem
CLEAR , basicMem, basicStack
CLEAR , , basicStack
In QuickBASIC, the basicMem parameter became optional due to the interpreter/runtime managing its own memory:
CLEAR , , basicStack
What I'm wondering is whether that first "argument" was ever used for anything prior to GW-BASIC, i.e. whether something like this was actually useful:
CLEAR missingArg, basicMem, basicStack
REM ^^^^^^^^^^
REM here
That is, was there ever a purposeful non-empty argument before the first comma?
If anybody has any idea, I'd love to know!
Yes, there was a first argument, but there was never a 3-argument form that actually made use of it.
Microsoft (originally Micro-Soft) created Altair BASIC. It featured a CLEAR command with no arguments that set all program variables to zero. The 4K version had no strings, so it had no need for managing string space. However, the 8K, Extended, and Disk versions had a CLEAR command that also accepted a single argument of the form CLEAR x. The value x specified the maximum amount of string space available in bytes, with the default at load time of BASIC being 50 bytes in the 8K version and 200 bytes in the Extended and Disk versions until it was changed [source]. That's where the missing first argument came from and what it was used for originally. At the time, however, only that one argument was valid.
Microsoft went on to develop a derivative called "BASIC-80" for several systems, notably the Intel ISIS-II, CP/M, and TEKDOS operating systems. A "Standalone Disk BASIC" version of BASIC-80 was also created that could run on "almost any 8080 or Z80 based disk hardware without an operating system." There was no 4K version of BASIC-80, so it's reasonable to assume all versions of BASIC-80 had strings available, as the 8K version of Altair BASIC did. As a result, that string space needed to be managed. However, it was in BASIC-80 that a second argument was added:
CLEAR [expression![,address]]
expression! was an expression that specified the amount of string space, like in 8K (Altair) BASIC, and address was the maximum address available to BASIC, i.e. the amount of memory available to BASIC, like the argument immediately after the first comma in GW-BASIC.
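For example, under that pre-5.0 syntax a program could reserve string space and cap BASIC's memory in a single statement; the specific values below are hypothetical:
REM Reserve 500 bytes of string space and keep BASIC below address 32767.
REM (A sketch of the pre-5.0 BASIC-80 form; the values are made up.)
CLEAR 500, 32767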
Eventually, BASIC-80, Release 5.0, shipped into the world, and it featured this odd syntax instead:
CLEAR [,[expression1][,expression2]]
expression1 was the maximum memory available to BASIC, and expression2 was the amount of stack space. Appendix A: New Features in BASIC-80, Release 5.0 explains why the first argument was dropped:
String space is allocated dynamically, and the first argument in a two-argument CLEAR statement will be ignored.
In other words, CLEAR strSpace!,maxMem would ignore the strSpace! argument in BASIC-80, Release 5.0, so the syntax became CLEAR [,[maxMem][,maxStack]].
QuickBASIC eventually changed the syntax further to just CLEAR [,,stack].
Confusingly, the on-line help system of QuickBASIC 4.5 states the following:
Note: Two commas are used before stack to keep QuickBASIC compatible
with BASICA. BASICA included an additional argument that set the
size of the data segment. Because QuickBASIC automatically manages
the data segment, the first parameter is no longer required.
"The first parameter" mentioned is maxMem as BASICA (and GW-BASIC) used the syntax available with BASIC-80, Release 5.0, rather than the equally missing strSpace! parameter used by pre-5.0 releases of BASIC-80.
Related
I have a huge application (made in PowerBuilder) that crashes every once in a while, so it is hard to reproduce this error. We have it set up so that when a crash like this occurs, we receive a .dmp file.
I used WinDbg to analyze my .dmp file with the command !analyze -v. From this I can deduce that the error that occurred was an Access Violation C0000005. Based on the [0] and [1] parameters, it attempted to dereference a null pointer.
WinDbg also showed me STACK_TEXT consisting of around 30 lines, but I am not sure how to read it. From what I have seen I need to use some sort of symbols.
First line of my STACK_TEXT is this:
00000000`00efca10 00000000`75d7fa46 : 00000000`10df1ae0 00000000`0dd62828 00000000`04970000 00000000`10e00388 : pbvm!ob_get_runtime_class+0xad
From this, my goal is to analyze this file to figure out where exactly in the program this error happened or which function it was in. Is this something I will be able to find after further analyzing the stack trace?
How can I pinpoint where in the program a crash happened using .dmp and WinDbg so I can fix my code?
If you analyze a crash dump with !analyze -v, the lines after STACK_TEXT are the stack trace. The output is equivalent to kb, given you set the correct thread and context (see the example after the list below).
The output of kb is
Child EBP
Return address
First 4 values on the stack
Symbol
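For example, a typical session might set up symbols and the exception context like this (a sketch; the extra symbol path is a hypothetical example of where your own PDB files might live):
$$ Use Microsoft's public symbol server, then add your own PDB folder.
.symfix
.sympath+ C:\MyApp\Symbols
.reload
$$ Switch to the context of the exception, then dump the stack.
.ecxr
kb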
The backticks ` tell you that you are running in 64 bit and they split the 64 bit values in the middle.
On 32 bit, the first 4 parameters on the stack were often equivalent to the first 4 parameters to the function, depending on the calling convention.
On 64 bit, the stack is not so relevant any more, because with the 64 bit calling convention, parameters are passed via registers. Therefore you can probably ignore those values.
The interesting part is the symbol like pbvm!ob_get_runtime_class+0xad.
In front of ! is the module name, typically a DLL or EXE name. Look for something that you built. After the ! and before the + is a method name. After the + is the offset in bytes from the beginning of the function.
As long as you don't have functions with thousands of lines of code, that number should be small, like < 0x200. If the number is larger than that, it typically means that you don't have correct symbols. In that case, the method name is no longer reliable, since it's probably just the last known (the last exported) method name and a long way from there, so don't trust it.
In case of pbvm!ob_get_runtime_class+0xad, pbvm is the DLL name, ob_get_runtime_class is the method name and +0xad is the offset within the method where the instruction pointer is.
To me (not knowing anything about PowerBuilder) PBVM sounds like the PowerBuilder Virtual Machine DLL. So that's not your code, it's the code compiled by Sybase. You'd need to look further down the call stack to find the culprit code in your DLL.
After reading Wikipedia, it seems that PowerBuilder does not necessarily compile to native code, but to intermediate P-Code instead. In this case you're probably out of luck, since your code is never really on the call stack and you need a special debugger or a WinDbg extension (which might not exist, like for Java). Run it with the -pbdebug command line switch or compile it to native code and let it crash again.
The ld man page says:
-n
--nmagic
Turn off page alignment of sections, and mark the output as "NMAGIC" if possible.
-N
--omagic
Set the text and data sections to be readable and writable. Also, do not page-align the data segment, and disable linking against shared libraries. If the output format supports Unix style magic numbers, mark the output as "OMAGIC". Note: Although a writable text section is allowed for PE-COFF targets, it does not conform to the format specification published by Microsoft.
--no-omagic
This option negates most of the effects of the -N option. It sets the text section to be read-only, and forces the data segment to be page-aligned. Note - this option does not enable linking against shared libraries. Use -Bdynamic for this.
I do understand that these options are used to make the code (.text) section writable or not, but I don't get the point of page-aligning the sections or not, and what is an "NMAGIC" section?
On historic (PDP-11) Unix, an executable file's header began with a branch instruction that would jump past the header, to the actual start of the code. When Unix was ported to other processors, that initial PDP-11 branch instruction became fossilized as the "magic number" for the a.out(5) file format. When "pure text" was introduced, initially to allow processes to share their code segments, a new magic number was introduced so that the kernel could tell the difference (there were some important Unix programs that relied on self-modifying code and thus needed to be loaded with writable code segments). The old magic number (0407) was given the name "OMAGIC" -- "old magic" -- and the new magic number (0410) was given the name "NMAGIC", "new magic". The data segment immediately follows the code segment in memory, so when the code segment is made read-only, it must be padded to a page boundary.
Various operating systems and file formats since then introduced other magic numbers; in the last FreeBSD releases to use a.out format, the normal formats were ZMAGIC and QMAGIC, which were introduced to allow page zero in the address space to be unmapped for safety (so that a null-pointer dereference would fault) while still allowing executables to be demand paged (i.e., mmap()ed into the process's address space).
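To make those magic numbers concrete, here is a minimal C sketch (not the real <a.out.h>, and it ignores byte order) that classifies a file by its first 16-bit word:
#include <stdio.h>

/* Historical a.out magic numbers, in octal as on the PDP-11. */
#define OMAGIC 0407  /* "old magic": writable, contiguous text and data */
#define NMAGIC 0410  /* "new magic": read-only, page-padded text        */
#define ZMAGIC 0413  /* demand-paged                                    */
#define QMAGIC 0314  /* demand-paged, page zero left unmapped           */

int main(int argc, char **argv)
{
    FILE *f;
    unsigned short magic;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;
    if (fread(&magic, sizeof magic, 1, f) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);

    switch (magic) {
    case OMAGIC: puts("OMAGIC (0407)"); break;
    case NMAGIC: puts("NMAGIC (0410)"); break;
    case ZMAGIC: puts("ZMAGIC (0413)"); break;
    case QMAGIC: puts("QMAGIC (0314)"); break;
    default:     puts("not a recognized a.out magic"); break;
    }
    return 0;
}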
So to answer your question more directly: NMAGIC and OMAGIC are different formats of executable files, not of individual sections. They indicate the desired correspondence between the in-memory and on-disk layouts of the executable. (The reason these numbers are traditionally written in octal rather than hex or decimal is that octal is a natural representation for the instruction format on the PDP-11.) GNU ld uses these names (only) as references to executable formats that have analogous features, even when you are not generating traditional a.out format -- which of course is quite rare today. One particular benefit to using OMAGIC format is that it is more compact than any other format, which may matter in cases like boot loaders where space is limited, there is no demand paging, and there is also no room for any sort of padding.
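As a concrete (hypothetical) illustration of that boot-loader case, such a flat, unpaged image might be linked along these lines; the file names and load address here are made-up examples:
# Link without page alignment at the traditional PC boot address,
# then strip the container down to a raw binary image.
ld --omagic -Ttext 0x7c00 -o boot.out boot.o
objcopy -O binary boot.out boot.bin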
I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This includes not only all accesses in the current function, but also possible accesses in sub-functions that take this variable as an input argument. That way, I can quickly see where a change to this variable has any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice but a command line output might be even more practical.
You may always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
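If you would rather have a command-line output, a small script along these lines prints every line of every .m file in the current folder that mentions the variable (a sketch; varName is a placeholder, and the \< \> word-boundary anchors only approximate "this identifier is used here"):
varName = 'myVar';                 % placeholder: variable to look for
files = dir('*.m');                % .m files in the current folder
for k = 1:numel(files)
    lines = regexp(fileread(files(k).name), '\r?\n', 'split');
    for n = 1:numel(lines)
        if ~isempty(regexp(lines{n}, ['\<' varName '\>'], 'once'))
            fprintf('%s:%d: %s\n', files(k).name, n, lines{n});
        end
    end
end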
Later edit: thanks to @zinjaai, I noticed that @tc88 asked that this tool also track the effect of the variable inside the functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, hard. I think it is halting-problem-hard.
2. In 90% of the cases, the assumption that the output of a function is influenced by its input is true. And the input and the output are part of the same statement (assigning the result of a function), so looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions will alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard, and it is better to assume that all of them are modified.
4. Some other cases are inherently undecidable, because they don't depend on the state of the computer, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char type when the user selects a file, and a double type when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), Find Files (for quickly searching where a variable is used), and depfun (for quickly identifying function dependencies), is way cheaper. But I would like to be wrong. :-)
I recently got some Fortran code which successfully ran on Mac OS. The code, along with input files, was later sent to me to be compiled. I used precisely the same code and the same input files, but an "array bounds exceeded" error appeared. I am using CVF 6.6 on Windows XP.
I wanted to know the following things:
Is this a compiler or OS problem?
Shall I arrange a Mac OS to get them compiled?
After surfing so much on the internet, I think the wise thing to do is to get my data "format free". But I don't know how to do that when my data is a time series with time in one column and voltage in the second.
The error message array bounds exceeded always (I think) indicates that your code has tried to access an array element outside the bounds of an array, for example element 25 in an array with 24 elements. This can only occur at run time, and your compiler/run-time will only spot it if, when compiling, you turn on the compiler option(s) for array bounds checking; your compiler documentation will tell you what those options are.
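For instance, a tiny program like the one below only reports the error when such a flag is enabled; I believe the relevant option is /check:bounds in CVF (with gfortran it would be -fcheck=bounds):
! Element 25 of a 24-element array: caught only with bounds checking on
! (e.g. /check:bounds in CVF, -fcheck=bounds in gfortran); without it,
! the program may silently read or corrupt adjacent memory.
program bounds_demo
  implicit none
  integer :: a(24), i
  i = 25
  a(i) = 1
  print *, a(1)
end program bounds_demo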
The error message should have been accompanied by some more information telling you where in the program the error occurred and the index of the out-of-bounds array access.
Given that your source code and your input data are identical, how could this have occurred? Since you have compiled the program on 2 different platforms, your compilations cannot have been identical; it is entirely possible that array bounds checking is switched off on your Mac and switched on on your Windows PC.
Fortran programs may execute apparently successfully despite making accesses to out-of-bounds array elements. If the memory address of element 25 of a 24-element array holds a value which is meaningful, and the address is within your program's space, the computation is likely to continue. It is also likely to be useless, but you can go for many years before finding that out.
I suggest that you go back to the Mac, recompile with array bounds checking, and run again, see what happens.
It's also possible that the routines which read your file find a different number of values on XP and Mac; I suspect that can be caused by different line ending characters, even by whether or not the input file has a newline at the end. Check this too.
Is there a way to know beforehand how much memory space a file will take?
For example, let's say I have a file with a size of 1 GB. How would this file size translate to memory size?
I take your example from the comment and elaborate on what might happen to a text file when loaded into memory: Some time ago, "text" usually meant ASCII (as the least common denominator at least). And lots of software, written in a language like C, would represent such ASCII strings as an char* type. This resulted in a more-or-less exact match in memory requirements: Every byte in the input file would occupy one byte when loaded into RAM.
But that has changed in recent years with the rise of Unicode. The same text file, loaded by a simple Java program (very likely using Java's String type), would take up twice the amount of RAM. This is because the Java String type represents each character internally using UTF-16 (16 bits per character minimum), whereas ASCII used only one byte per character.
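As a rough back-of-the-envelope sketch of that doubling (ignoring object headers, and noting that newer JVMs with compact strings can store Latin-1 text in one byte per character again):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TextMemoryEstimate {
    public static void main(String[] args) throws Exception {
        byte[] raw = Files.readAllBytes(Paths.get(args[0]));
        // One input byte per character when decoded as Latin-1/ASCII.
        String text = new String(raw, StandardCharsets.ISO_8859_1);
        // A classic UTF-16 String needs about 2 bytes per character.
        System.out.println("File size on disk: " + raw.length + " bytes");
        System.out.println("Rough String size: " + (2L * text.length()) + " bytes");
    }
}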
What I'm trying to say here is: There is no easy answer to your question, it always depends on who reads the data and what he's about to do with it.
One thing is true quite often: the data does not become smaller by being loaded.
If you read the whole file into memory at once, you'll need at least as much free memory as the size of the file. Much of the time people don't actually need to do so, they just don't know another way. For an explanation of the problem and alternatives see:
http://www.effectiveperlprogramming.com/2010/01/memory-map-files-instead-of-slurping-them/
You can check yourself by writing a little test script with Memory::Usage.
From its documentation's synopsis:
use Memory::Usage;
my $mu = Memory::Usage->new();
# Record amount of memory used by current process
$mu->record('starting work');
# Do the thing you want to measure
$object->something_memory_intensive();
# Record amount in use afterwards
$mu->record('after something_memory_intensive()');
# Spit out a report
$mu->dump();
Then you'll know how much your build of Perl, given whatever character encoding you intend to use, and whatever method of dealing with the file you intend to implement, will consume in memory.
If you can avoid loading the whole file at once, and instead just iterate over it line by line or record by record, the memory concern goes away (see the sketch below). So it would help to know what you actually are trying to accomplish. You may have an XY problem.
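A minimal sketch of the line-by-line approach, which keeps memory use roughly constant regardless of the file's size (the file name is a placeholder):
use strict;
use warnings;

my $filename = 'data.txt';    # placeholder
open my $fh, '<', $filename or die "Can't open $filename: $!";
while (my $line = <$fh>) {
    chomp $line;
    # process one record at a time; only the current line is in memory
}
close $fh;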
perldoc -f stat
stat Returns a 13-element list giving the status info for a file,
either the file opened via FILEHANDLE or DIRHANDLE, or named by
EXPR. If EXPR is omitted, it stats $_. Returns the empty list
if "stat" fails. Typically used as follows:
($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
$atime,$mtime,$ctime,$blksize,$blocks)
= stat($filename);
Note the $size return value. It is the size of the file in bytes. If you are going to slurp the entire file into memory you will need at least $size bytes. Then again, you might need a whole lot more (or even a whole lot less), depending on what you do to the contents of the file.
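If the size is all you want, Perl's -s file test operator returns the same value with less ceremony:
my $size = -s $filename;      # file size in bytes, same as stat's $size
print "Slurping $filename needs at least $size bytes of memory\n";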