What is a "base" program on a Casio calculator? - calculator

When I make a program, I have the option to make it either "run" or "base".
https://src.mrhitech.repl.co/calcmanual.pdf (backup; the original one seems to have been taken down) mentions a "base" program several times on pages 213-214, but never says what it entails (other than having a big "B" next to the filename).
Whatever I do to the program, it seems completely indistinguishable from a "run" program, i.e. a program that runs when you select it and hit F1 or EXE.
I'm using a CASIO fx-9860GII if that matters.
EDIT: NEW INFO:
It seems that a "base" calculator program has different commands available than a "run" program–when you edit the program you are much more restricted–you can't use if, while, or for loops, as well as advanced calculator tools–but you also have the ability to convert numbers between different bases.
That still doesn't answer what you are supposed to use it for, but I hope it helps people answer.

Related

Can I give "static" input on VSCode?

My college has a proprietary IDE for C programming. It is very simple: Half the screen for the code, and the other half has the input and output (as you can see here).
The input works in such a way that you only have to type it once, and the program will read one line of it at each scanf. I find this a convenient way of testing, instead of typing the input in every time I compile the program, and I'm wondering if there's any way to do something similar in VSCode.

How can I run an algorithm written for Digitool 4.3 (2003)?

I work on computational music. I have found the ps13 pitch spelling algorithm implemented in Lisp in 2003, precisely "Digitool MCL 4.3". I would like to run this code, preferably on a Linux x86 machine, to compare its results with other similar codes.
I am new to Lisp, but so far my research has led me to think that Digitool MCL is no longer available. I can think of two approaches that might help:
a virtual environment (Docker or similar) which would emulate a machine from 2003…
a code translation tool which would transform the 2003 source code into something executable today
I have not succeeded in finding either of these options, nor in running the code directly with SBCL (though, as a newbie, I may have missed a small modification that would make it run easily).
Could someone help me?
Summary
This code is very close to being portable CL: you won't need something emulating an antique Mac to run it. I ran it on three implementations (SBCL, LispWorks, CCL) within a few minutes. However if you're not a Lisp person (and don't want to become one) it will be somewhat more fiddly to do that.
However, I can't just send you a fixed version, both because this isn't the right forum for that and because we'd need the author's permission to do so. I have asked him whether he would be interested in a portabilised version; if he is, I will send him one in due course. You could also get in touch and ask to be notified.
(Meta-summary: while I think the question is fine, any reasonable answer probably doesn't fit on SO.)
Details
One initial problem with this code is that the file uses old Mac line-ending conventions (CR rather than LF; not Unix, anyway). Unless the Lisp you're using is smart enough to detect this (some are; SBCL seems not to be, although I am sure there are options to tell it), you'll need to convert the file.
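On a Unix box, tr '\r' '\n' < old.lisp > new.lisp does the conversion; if you'd rather stay in Lisp, a minimal sketch could be (the function name mac-to-unix is mine, not from the original code):

;; Rewrite a file, turning old-Mac CR line endings into Unix LF.
;; Reads bytes rather than characters, so the odd line endings never
;; confuse the stream layer.
(defun mac-to-unix (in-path out-path)
  (with-open-file (in in-path :element-type '(unsigned-byte 8))
    (with-open-file (out out-path :direction :output
                         :element-type '(unsigned-byte 8)
                         :if-exists :supersede)
      (loop for byte = (read-byte in nil nil)
            while byte
            do (write-byte (if (= byte 13) 10 byte) out)))))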
Given that, the code that implements this algorithm is very, very close to being portable Common Lisp. It has four dependencies on non-standard things:
two global variables, *save-local-symbols* and *verbose-eval-selection*;
two functions: choose-file-dialog and choose-directory-dialog.
The global variables can probably be safely commented out: I think they are just controls for the compiler. The functions have fairly obvious specifications: they're clearly meant to pop up file / directory choosers.
However you can just not use the bits of the code that use these functions, so you can compile it, get a few compiler warnings about undefined functions, and then it's fine.
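If you'd rather compile without the warnings, here is a hedged sketch of stubs (the &rest signatures are a guess at the MCL originals, and the REPL prompting is my own invention):

;; Stubs for the two MCL-only functions: prompt on the REPL instead
;; of popping up a GUI chooser.
(defvar *save-local-symbols* t)       ; MCL compiler control; inert here
(defvar *verbose-eval-selection* nil) ; likewise

(defun choose-file-dialog (&rest args)
  (declare (ignore args))
  (format *query-io* "~&File: ")
  (finish-output *query-io*)
  (pathname (read-line *query-io*)))

(defun choose-directory-dialog (&rest args)
  (declare (ignore args))
  (format *query-io* "~&Directory: ")
  (finish-output *query-io*)
  (pathname (read-line *query-io*)))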
But it gets better than that, in fact: the latter-day descendant of MCL is Clozure CL. CCL is free and open source, it has both choose-file-dialog and choose-directory-dialog already, and both of the globals exist, although one is no longer exported.
Unfortunately there are then some hidden portability problems to do with assumptions about what pathnames look like as strings: it's making some assumptions about what things looked like on pre-OSX Macs, I think. This kind of problem is easy but often a bit fiddly to fix (I think in this case it would be easy). So, again, the answer is just not to call the things that do a lot of pathname munging:
> (ps13-test-from-file-list (directory "~/Downloads/d/*.opnd"))
[... much output ...]
Total number of errors = 81.
Total number of notes = 41544.
Percentage correct = 99.81%
nil
Note that the above output came from LispWorks, not CCL: CCL works just as well though, as will any CL probably.
SBCL has one additional problem: the CL-USER package in SBCL already uses a package which exports a symbol int, which this code also defines. So you need to compile it in some other package. But given that, it's fine in SBCL as well.
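For example (a sketch; the package name ps13 is my choice, not the author's):

;; Compile the code in a fresh package so its INT doesn't clash with
;; the symbol CL-USER inherits in SBCL.
(defpackage :ps13
  (:use :common-lisp))
(in-package :ps13)
;; ... now COMPILE-FILE / LOAD the ps13 source here ...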

Racket Interactive vs Compiled Performance

Whether or not I compile a Racket program seems to make no difference to the runtime performance.
Is it just the loading of the file initially that is improved by compilation? In other words, does running racket src.rkt do a jit compilation on the fly, which is why I see no difference in compiling vs interactive?
Even for tight loops of integer arithmetic, where I thought some difference would occur, the profile times are equivalent whether or not I previously did a raco make.
Am I missing something simple?
PS, I notice that I can run racket against the source file (.rkt) or .zo file. Does racket automatically use the .zo if one is found that corresponds to the .rkt file, or does the .zo file need to be used explicitly? Either way, it makes no difference to the performance numbers I'm seeing.
Yes, you're right.
Racket compiles code in two stages: first, the code is compiled into bytecode form, and then when it runs it gets jitted into machine code. When you compile a file, you're basically creating the bytecode, which saves on re-compiling it later. Since that's usually not something that takes a lot of time for small pieces of code, you won't see any noticeable difference in runtimes.

For an extreme example, you can delete all *.zo files in the collection tree and start DrRacket: it will take a lot of time to start since there's a ton of code, but once it does start, it would run almost as usual. (It would also be slow to click "run", since that will reload and recompile some files.) Another concern for bigger pieces of code is that the compilation process can make memory consumption higher, but that's also not an issue with smaller pieces of code.
See also the Performance chapter in the guide for hints on how to improve performance.
Racket will always compile your code, regardless of whether it is run interactively at the REPL or run from the command line. Here is the section in the guide that explains it. In interactive mode, the compiler turns every expression/definition into bytecode in memory and executes that. Otherwise, the compiler outputs the bytecode to .zo files.
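As for the PS: yes, racket picks up a compiled file automatically when it is newer than the source. A sketch of what that looks like on disk (file names follow the default compiled/ layout of this era and may differ in newer versions):

$ raco make src.rkt
$ ls compiled
src_rkt.dep  src_rkt.zo
$ racket src.rkt    # uses compiled/src_rkt.zo, since it is up to date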
Note: Eli replied at the same time I did. See his response for more details.

"All programs are interpreted". How?

A computer scientist will correctly explain that all programs are
interpreted and that the only question is at what level. --perlfaq
How are all programs interpreted?
A Perl program is a text file read by the perl program which causes the perl program to follow a sequence of actions.
A Java program is a text file which has been converted into a series of byte codes which are then interpreted by the java program to follow a sequence of actions.
A C program is a text file which is converted via the C compiler into an assembly program which is converted into machine code by the assembler. The machine code is loaded into memory which causes the CPU to follow a sequence of actions.
The CPU is a jumble of transistors, resistors, and other electrical bits which is laid out by hardware engineers so that when electrical impulses are applied, it will follow a sequence of actions as governed by the laws of physics.
Physicists are currently working out what makes those rules and how they are interpreted.
Essentially, every computer program is interpreted by something else which converts it into something else which eventually gets translated into how the electrons in your local neighborhood fly around.
EDIT/ADDED: I know the above is a bit tongue-in-cheek, so let me add a slightly less goofy addition:
Interpreted languages are where you can go from a text file to something running on your computer in one simple step.
Compiled languages are where you have to take an extra step in the middle to convert the language text into machine- or byte-code.
The latter can easily be converted into the former by a simple transformation:
Make a program called interpreted-c, which takes one or more C files and runs the resulting program (which must not take any arguments):
#!/bin/sh
# Compile the given C files to a throwaway binary, run it, then clean up.
MYEXEC=/tmp/myexec.$$
gcc -o $MYEXEC ${1+"$@"} && $MYEXEC
rm -f $MYEXEC
Now which definition does your C program fall into? Compare & contrast:
$ perl foo.pl
$ interpreted-c foo.c
Machine code is interpreted by the processor at runtime: the same machine code supplied to any processor of a given arch (x86, PowerPC, etc.) should, in theory, work the same regardless of the specific model's 'internal wiring'.
EDIT:
I forgot to mention that an arch may add new instructions for things like accessing new registers, in which case code written to use them won't run on older processors in the range, much as a program that relies on capabilities found only in a newer version of a library won't work with an older one.
Example: many Linux distros are released as 686 only, despite the fact it's in the 'x86 family'. This is due to the use of new instructions.
My first thought was to look inside the CPU (see below), but that's not right. The answer is much, much simpler than that.
A high-level description of a CPU is:
1. execute the current op
2. grab the next op
3. goto 1
Compare it to Perl's interpreter:
while ((PL_op = op = op->op_ppaddr(aTHX))) {
}
(Yeah, that's the whole thing.)
There can be no doubt that the CPU is an interpreter.
It just goes to show how useless it is to classify something as interpreted or not.
Original answer:
Even at the CPU level, programs get rewritten into simpler instructions to allow the CPU to execute them more quickly. This is done by changing the order in which they are executed and executing them in parallel. For example, Intel's Hyper-Threading.
Even deeper, each instruction is considered a program of its own, one that routes electronic signals. See microcode.
The levels of interpretation are really easy to explain:
2: runtime languages (CLR, Java runtime, ...) and scripting languages (Python, Ruby, ...)
1: assembly
0: binary code
Edit: I moved scripting languages up to the same level as runtime languages. Thanks for the hint. :-)
I can write a Game Boy interpreter that works similarly to how the Java Virtual Machine works, treating the z80 machine instructions as byte code. Assuming the original was written in C[1], does that mean C suddenly became an interpreted language just because I used it like one?
From another angle, gcc can compile C into machine code for a number of different processors. There's no reason the target machine has to be the same as the machine you're compiling on. In fact, this is a common way to compile C code for AVRs and other microcontrollers.
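For instance (a sketch; the MCU flag and file names are illustrative):

$ avr-gcc -mmcu=atmega328p -Os -o blink.elf blink.c   # compiled on x86, runs on an AVR
$ avr-objcopy -O ihex blink.elf blink.hex             # convert for the flash programmer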
As a matter of abstraction, the compiler's job is to translate flat text into a structure, then translate that structure into something that can be executed somewhere. Whatever is doing the execution may have its own levels of breaking out the structure before really executing it.
A lot of power becomes available once you start thinking along these lines.
A good book on this is Structure and Interpretation of Computer Programs. Even if you only get through the first chapter (or half of the first chapter), I think you'll learn a lot.
[1] I think most Game Boy stuff was hand-coded ASM, but the principle remains.

How can I run an elisp function asynchronously?

For those who don't know, imenu is a facility in Emacs that lets a mode insert one or more menu items into the menu bar. The most common usage is to make a "table of contents" accessible from a drop-down menu, so the user can quickly jump to declarations of functions or classes or sections in a document, etc.
imenu has a couple of different ways of working. In the first and more commonly used way, a major mode provides regexps to imenu, and imenu uses those regexps to scan the buffer and build the index. A major mode sets this up by putting the list of regexps into imenu-generic-expression (see the sketch just below). The second way is for the major mode to perform its own scan: it can do this by instead setting the variable imenu-create-index-function to the name of a function defined by the mode, which returns a list containing the table of contents.
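A sketch of the first route (the regexps here are simplistic, for illustration only):

;; Each entry is (MENU-TITLE REGEXP SUBEXP); SUBEXP is the regexp
;; group whose match text becomes the menu item's name.
(setq imenu-generic-expression
      '(("Functions" "^(defun \\([^ ()]+\\)" 1)
        ("Variables" "^(defvar \\([^ ()]+\\)" 1)))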
I'm doing the latter (imenu-create-index-function), but sometimes the function takes a long time to run, say 3 or 4 seconds or more, which freezes the UI. If I made the operation asynchronous, that would solve the problem.
I know about asynch processes. The scan logic is implemented in elisp. Is it possible to run elisp in an asynch process? If so, how?
Or, is there a way to run regular elisp asynchronously in emacs, without resorting to an asynch process?
I think the way font-lock does it is to fontify on idle. It keeps state and fontifies a little at a time, always remembering where it left off, what else needs to be fontified, what has changed since the last fontification run, etc. Is my understanding correct? Maybe I could use this incremental approach.
Recommendations?
To run elisp asynchronously you can use either run-with-idle-timer or run-with-timer. I imagine you'll want the idle version. Check the documentation links for more details.
Note: if the code takes 3 or 4 seconds to run, it'll still take that long (and freeze your Emacs while it runs), so if you can break the work up into small enough chunks that each takes only 0.5 seconds or so, that might work well (see the sketch below).
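A minimal sketch of that chunked approach (all names here are mine, not part of imenu):

;; Scan the current buffer a little at a time from an idle timer,
;; remembering where the previous chunk stopped.
(defvar my-scan-position nil
  "Where the previous scan chunk left off, or nil to start over.")

(defun my-scan-chunk ()
  "Do at most half a second of scanning, then yield back to Emacs."
  (let ((deadline (time-add (current-time) (seconds-to-time 0.5))))
    (save-excursion
      (goto-char (or my-scan-position (point-min)))
      (while (and (not (eobp))
                  (time-less-p (current-time) deadline))
        ;; ... do one unit of index-building work here ...
        (forward-line 1))
      (setq my-scan-position (unless (eobp) (point))))))

;; Run a chunk whenever Emacs has been idle for a second.
(run-with-idle-timer 1 t #'my-scan-chunk)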
One package that I use all the time, pabbrev.el, uses idle timers really well - I never notice it running. That might be a good package to examine to see how it breaks up the work (it is scanning all open buffers and building up a word frequency list).
The answers posted by TreyJackson and jeremiahd were valid back in 2011. Now, in 2018, here is a link to the Emacs documentation for asynchronous processes.
You can run elisp in an asynch process by spawning Emacs in batch mode as the process; see http://www.emacswiki.org/emacs/BatchMode. Other than that, there's basically nothing, as far as I know. A sketch of the idea:
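;; A hedged sketch (the function name my-async-eval is made up;
;; error handling omitted): evaluate FORM in a child "emacs --batch"
;; and hand its printed output to CALLBACK when the child exits.
(defun my-async-eval (form callback)
  (let ((proc (start-process
               "async-eval" (generate-new-buffer " *async-eval*")
               (expand-file-name invocation-name invocation-directory)
               "-Q" "--batch" "--eval" (format "(prin1 %S)" form))))
    (set-process-sentinel
     proc
     (lambda (p _event)
       (when (eq (process-status p) 'exit)
         (with-current-buffer (process-buffer p)
           (funcall callback (buffer-string))))))))

;; Example: compute (+ 1 2) in another Emacs without blocking this one.
(my-async-eval '(+ 1 2) (lambda (out) (message "Got: %s" out)))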
It looks like http://nschum.de/src/emacs/async-eval/ basically wraps the boilerplate necessary to do this. No clue if it's actively maintained or anything though.