Little Man Computer - Branch on Positive

In the Little Man Computer (LMC), the Branch on Positive (BRP) condition includes zero as a positive number (I thought a number had to be > 0 to count as positive). I know the LMC is an imaginary concept, but I was wondering: does any processor (outdated or current) have a branch-on-positive that treats zero as positive?

BRZ branches specifically when the accumulator is zero, but BRP does count zero as positive, so the only way around this is to pair the BRP instruction with a BRZ instruction that intercepts the zero case first.
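A minimal sketch of that pairing in C (with acc standing in for the accumulator; the function and names are mine, not LMC notation):

#include <stdio.h>

/* Test for zero first (BRZ), so that the subsequent non-negative
 * test (BRP) only fires for strictly positive values. */
static const char *classify(int acc) {
    if (acc == 0)  return "zero";               /* BRZ intercepts zero    */
    if (acc >= 0)  return "strictly positive";  /* BRP: zero already gone */
    return "negative";
}

int main(void) {
    printf("%s, %s, %s\n", classify(7), classify(0), classify(-7));
    return 0;
}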

Your question asked about specific processors, and the closest I can come is the PDP-8's SPA (Skip on AC ≥ 0). I can describe the rationale for including zero as a positive number. Virtually all modern computers use two's-complement format for integers, which makes the leftmost bit the sign bit: negative numbers have a one in the sign bit, and positive numbers have a zero there. The number zero is represented as all zeros, including the sign bit. So, if branch-on-positive were implemented on a two's-complement machine by testing the sign bit, the number zero would count as positive.
Another possible rationale: calculators, both when Dr. Madnick designed the LMC and today, do not display a minus sign with the number zero.
That said, I wish Madnick had called it BNN: branch if not negative.
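For concreteness, here is that sign-bit test sketched in C (my names, not any particular instruction set):

#include <stdint.h>
#include <stdio.h>

/* Branch-on-positive as a bare sign-bit test: zero has a clear sign
 * bit, so it is taken as "positive", exactly like LMC's BRP. */
static int brp_taken(int32_t acc) {
    return ((uint32_t)acc >> 31) == 0;  /* sign bit clear => not negative */
}

int main(void) {
    printf("%d %d %d\n", brp_taken(5), brp_taken(0), brp_taken(-5));  /* 1 1 0 */
    return 0;
}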

PDP-11 has BPL:
BPL Branch if plus (N=0)
...where N is the negative flag (0 or 1), and so it applies when the tested value is not negative.
ARM has BPL:
bpl - branch if pl (positive or zero)
The 6502 also has BPL:
BPL - Branch if Positive
If the negative flag is clear then add the relative displacement to the program counter to cause a branch to a new location.

Related

How do I get the first bit of a 16-bit number using a logic gate simulator?

I am currently building an ALU component in the Logical Circuit simulator. I want to check whether the result I get from the ALU (a 16-bit number) is positive or negative; I know I need to use two's complement (in which the most significant bit tells us whether the number is positive or negative).
I tried to take the output from the mux16, run it through a 16-bit splitter and then through a NOT gate (since I want my ng output [1 bit] to equal 1 if the number is negative), taking the upper connection of the splitter since I assumed it would be the first bit, but it does not seem to work this way.
Is the problem in the implementation?
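For what it's worth, the check being attempted reduces to reading bit 15 directly; under two's complement the most significant bit is already 1 for negative values, so no NOT gate should be needed. A C sketch of the same logic (variable names are mine, not the simulator's):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t result = 0xFFF6;     /* e.g. -10 as a 16-bit two's-complement value */
    int ng = (result >> 15) & 1;  /* the MSB is already 1 for negative numbers   */
    printf("ng=%d\n", ng);        /* prints ng=1 */
    return 0;
}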

Numerical convergence and minimum number size

I have a program which calculates probability values (p-values), but it is entering a very large negative number into the exp function: exp(-626294.830) evaluates to zero instead of the very small positive number that it should be. How can I get this to evaluate as a very small floating point number? I have tried Math::BigFloat, bignum, and bigrat, but all have failed.
Wolfram Alpha says that exp(-626294.830) is 4.08589×10^-271997... zero is a pretty close approximation to that ;-) Although you've edited and removed the context from your question, do you really need to work with such tiny numbers, or perhaps there is some way you could optimize your algorithm or scale your numbers?
Anyway, you are correct that code like Math::BigFloat->new("-626294.830")->bexp seems to take quite some time, even with the support of use Math::BigFloat lib => 'GMP';.
The only alternative I can offer at the moment is Math::Prime::Util::GMP's expreal, although you need to specify a precision to it.
use Math::Prime::Util::GMP qw/expreal/;
use Math::BigFloat;
# expreal needs an explicit precision in digits; 272000 is enough to
# reach past the ~271997 leading zeros of exp(-626294.830)
my $e = Math::BigFloat->new(expreal(-626294.830,272000));
print $e->bnstr,"\n";
__END__
4.086e-271997
But on my machine, even that still takes ~20s to run, which brings us back to the question of potential optimization in other places.
Floating point numbers do not have infinite precision. Assuming the number is represented as an IEEE 754 double, we have 52 bits for the fraction, 11 bits for the exponent, and one bit for the sign. Due to the way exponents are encoded, the smallest positive normal number that can be represented is 2^-1022 (subnormals extend the range down to about 2^-1074, still nowhere near small enough here).
If we look at your number e^-626294.830, we can do a change of base and see that it equals 2^(log_2 e · -626294.830) = 2^-903552.445, which is significantly smaller than 2^-1022. Approximating your number as zero is therefore correct.
Instead of calculating this value using arbitrary-precision numerics, you are likely better off solving the necessary equations by hand, then coding this in a way that does not require extreme precision. For example, it is unlikely that you need the exact value of e^-626294.830, but perhaps just the magnitude. Then, you can calculate the logarithm instead of using exp().
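As a sketch of that log-domain idea (written in C here for concreteness; the same arithmetic works in Perl): compute the base-10 logarithm of the result instead of the result itself, then split it into a decimal exponent and mantissa.

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = -626294.830;
    double lg = x / log(10.0);           /* log10(e^x) = x / ln 10 */
    double expo = floor(lg);             /* decimal exponent       */
    double mant = pow(10.0, lg - expo);  /* mantissa in [1, 10)    */
    printf("e^%.3f ~= %.5fe%.0f\n", x, mant, expo);  /* ~4.08589e-271997 */
    return 0;
}

Plain doubles handle this fine, because only the logarithm of the value is ever stored.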

Why is the "sign bit" included in the calculation of negative numbers?

I'm new to Swift and am trying to learn the concept of "shifting behavior for signed integers". I saw this example in "The Swift Programming Language" (Swift 2.1).
My question is: why is the sign bit included in the calculation as well?
I experimented with several number combinations and they all work, but I don't understand the reason for including the sign bit in the calculation.
To add -1 to -4, simply perform a standard binary addition of all eight bits (including the sign bit), and discard anything that doesn't fit in the eight bits once you are done:
This is not unique to Swift. Historically, computers didn't subtract, they just added: to calculate A - B, take the two's complement of B and add it to A. The two's complement (negate all the bits and add 1) means that the highest-order bit will be 0 for positive numbers (and zero), or 1 for negative numbers.
This is called two's-complement math, and it allows both addition and subtraction to be done with the same simple adder. This is the way many hardware platforms implement arithmetic. You should look more into two's-complement arithmetic if you want a deeper understanding.
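A quick C sketch of the quoted example, using 8-bit types so the discarded ninth bit is explicit:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t a = -4;   /* 0b11111100 */
    int8_t b = -1;   /* 0b11111111 */
    /* Add all eight bits, sign bit included; the ninth (carry) bit
     * falls off the end, exactly as the quoted passage describes. */
    uint8_t sum = (uint8_t)((uint8_t)a + (uint8_t)b);  /* 0xFC + 0xFF -> 0xFB */
    printf("%d\n", (int8_t)sum);                       /* prints -5 */
    return 0;
}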

Carry bit of addition in IJVM

The IADD instruction in IJVM adds two 1-word numbers. When I add EEEEEEEE to itself I get DDDDDDDC. What happens to the carry 1? How can I get it? Is it saved in a register?
It appears that the carry-out bit is lost.
No version of the IJVM Assembly Language Specification that I've come across says anything about a carry-out bit, or carry flag.
IADD Pop two words from stack; push their sum
downeyt adds:
The MIC1 that interprets IJVM only has two condition codes, N and Z. The carry out from the ALU is not stored. The microarchitecture could be modified to store the carry out, like it stores the N and Z bits.
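IJVM itself gives you no way to read that bit, but a C sketch that keeps the full-width sum shows exactly what is being discarded:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0xEEEEEEEEu, b = 0xEEEEEEEEu;
    uint64_t wide = (uint64_t)a + b;    /* full result: 0x1DDDDDDDC     */
    uint32_t sum  = (uint32_t)wide;     /* what IADD leaves: 0xDDDDDDDC */
    int carry     = (int)(wide >> 32);  /* the discarded carry bit: 1   */
    printf("sum=%08" PRIX32 " carry=%d\n", sum, carry);
    return 0;
}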

I have a two-word integer

I'm working in assembly on a homework problem, and each word is 16 bits, so I have a two-word, 32-bit integer. The high-order bits are in R1 (register 1) and the low-order bits are in R0 (register 0), so the number is just "R1R0", and I'm supposed to treat it as one continuous number. I want to work with R1R0 as a positive number, so if it is negative I want to take the NOT of R1R0 and add 1, because it is a two's-complement number and doing so turns it into a positive number. My problem is this: if I take the NOT of R1R0, and R0 was positive before but becomes negative after the NOT, what should I do about the add? Should I carry over into R1, or subtract 1 from R1?
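For what it's worth, here is a C sketch of the negation with the two halves as 16-bit variables (register names kept from the question): after the NOT, add 1 to R0 only, and propagate a carry into R1 only if R0 wraps around to zero; nothing is subtracted.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* R1:R0 = 0xFFFFFFFB, i.e. -5 as a 32-bit two's-complement value */
    uint16_t r1 = 0xFFFF, r0 = 0xFFFB;

    /* Negate: NOT both words, then add 1 to the low word only. */
    r0 = (uint16_t)~r0;
    r1 = (uint16_t)~r1;
    r0 = (uint16_t)(r0 + 1);
    if (r0 == 0)              /* low word wrapped: propagate carry upward */
        r1 = (uint16_t)(r1 + 1);

    printf("%04X%04X\n", r1, r0);  /* prints 00000005 */
    return 0;
}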