Negating a two-word integer

I'm working in assembly on a homework problem, and each word is 16 bits, so I have a two-word, 32-bit integer. The high-order bits are in R1 (register 1) and the low-order bits are in R0 (register 0), so the number is just "R1R0", and I'm supposed to treat it as one continuous number. I want to work with R1R0 as a positive number, so if it is negative I want to take the NOT of R1R0 and add 1, because it is a 2's complement number and doing so turns it into a positive number. My problem is: if R0 was positive before the NOT and becomes negative after it, what should I do? Should I carry over from R1 and subtract 1 from R1?
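As an illustration (not from the thread), here's a minimal sketch of that negation in Python, using the question's 16-bit words and R1/R0 names. The key point is the carry: after NOTing both words you add 1 to R0, and only if R0 wraps around to 0 do you propagate a carry into R1.

def negate_two_word(r1, r0):
    # Negate the 32-bit two's complement value held as R1 (high), R0 (low)
    r1 = ~r1 & 0xFFFF          # NOT the high word, keep 16 bits
    r0 = ~r0 & 0xFFFF          # NOT the low word, keep 16 bits
    r0 = (r0 + 1) & 0xFFFF     # add 1 to the low word
    if r0 == 0:                # low word wrapped around: propagate a carry
        r1 = (r1 + 1) & 0xFFFF
    return r1, r0

# -1 is 0xFFFF,0xFFFF; negating it should give +1 = 0x0000,0x0001
print(negate_two_word(0xFFFF, 0xFFFF))  # (0, 1)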

Related

How to calculate the log of big numbers whose factors are not known, like prime numbers

I want to calculate the log of a big number. When I looked up how to evaluate the log of a big number, I only found the factorization method, but in my case I don't know the factors of the number and I don't want to calculate them first.
Is there any way to evaluate the log of a big number of 20 digits or more?
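As an illustration (not an answer from the thread): no factorization is needed, because the log only requires a floating-point approximation of the number. A sketch in Python, with a made-up 26-digit example value:

import math

n = 12345678901234567890123456   # a 26-digit integer, factors unknown

# Python's math.log10 accepts arbitrarily large ints directly:
print(math.log10(n))             # ~25.09

# Portable alternative: write n = m * 2^k with m in (0.5, 1], so the
# conversion to float can never overflow, then log(n) = log(m) + k*log(2)
k = n.bit_length()
m = n / (1 << k)
print(math.log10(m) + k * math.log10(2))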

How can I calculate the impact on collision probability when truncating a hash?

I'd like to reduce an MD5 digest from 32 characters down to something closer to 16, ideally. I'll be using this as a database key to retrieve a set of (public) user-defined parameters. I'm expecting the number of unique "IDs" to eventually exceed 10,000. Collisions are undesirable but not the end of the world.
I'd like to understand the viability of a naive truncation of the MD5 digest to achieve a shorter key. But I'm having trouble digging up a formula that I can understand (given I have a limited Math background), let alone use to determine the impact on collision probability that truncating the hash would have.
The shorter the better, within reason. I feel there must be a simple formula, but I'd rather have a definitive answer than do my own guesswork cobbled together from bits and pieces I have read around the web.
You can calculate the chance of collisions with this formula:
chance of collision = 1 - e^(-n^2 / (2 * d))
Where n is the number of messages, d is the number of possibilities, and e is the constant e (2.718281828...).
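As an illustration of that formula in Python (the 10,000-ID figure comes from the question; the 16-hex-digit truncation is just an example):

import math

def collision_chance(n, hex_digits):
    d = 16 ** hex_digits                   # number of possible truncated digests
    return 1 - math.exp(-n**2 / (2 * d))   # 1 - e^(-n^2 / (2d))

print(collision_chance(10_000, 16))        # ~2.7e-12: negligible at 16 hex chars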
@mypetition's answer is great.
I found a few other equations that are more-or-less accurate and/or simplified here, along with a great explanation and a handy comparison of real-world probabilities:
1 - e^(-k(k-1)/(2N)) - sample plot here
k(k-1)/(2N) - sample plot here
k^2/(2N) - sample plot here
...where k is the number of IDs you'll be generating (the "messages") and N is the largest number that the hash digest, or your truncated hexadecimal number, could represent (technically + 1, to account for 0).
A bit more about "N"
If your original hash is, for example, "38BF05A71DDFB28A504AFB083C29D037" (32 hex chars), and you truncate it down to, say, 12 hex chars (e.g. "38BF05A71DDF"), the largest number you could produce in hexadecimal is 0xFFFFFFFFFFFF (281474976710655, which is 16^12 - 1, or 256^6 - 1 if you prefer to think in terms of bytes). But since "0" itself counts as one of the numbers you could theoretically produce, you add back that 1, which leaves you simply with 16^12.
So you can think of N as 16 ^ (numberOfHexDigits).
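Putting the pieces together in Python (hashlib is the standard library; the 12-character truncation and 10,000-ID count are the examples used above):

import hashlib, math

digest = hashlib.md5(b"some user-defined parameters").hexdigest()  # 32 hex chars
key = digest[:12]                 # naive truncation to 12 hex chars
N = 16 ** len(key)                # 281474976710656 possible keys

k = 10_000                        # expected number of IDs
print(1 - math.exp(-k * (k - 1) / (2 * N)))   # ~1.8e-7
print(k * (k - 1) / (2 * N))                  # ~1.8e-7 (simplified form)
print(k**2 / (2 * N))                         # ~1.8e-7 (simplest form)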

How many 32-bit numbers have five 1's? Combinatorics

For each bit (binary digit) that you have, there are two possibilities: Either it can be a zero, or it can be a one.
Therefore, if you have one bit, you have two possible numbers. If you have two bits, each of them can be either a zero or a one, and since there are two possibilities for the first, and two possibilities for the second, there are 2^2=4 total possibilities.
Similarly, if you have some number n of bits, each of them can be a zero or a one, and there will therefore be 2^n possibilities.
I understand this. Because of this fundamental counting principle, I know that there are 2^32 total 32-bit numbers, but how many have just five 1's?
How do I go about solving this? Count everything that doesn't include five 1's?
You have 32 bits total. Pick 5 to be "1". Order doesn't matter.
32C5 = 32!/(5!27!) = 201376
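You can check both the reasoning and the arithmetic in Python (math.comb is the standard library's binomial coefficient):

from math import comb

print(comb(32, 5))   # 201376 = 32!/(5!27!)

# Brute-force sanity check at a smaller width: 8-bit numbers with exactly two 1's
assert sum(1 for x in range(2**8) if bin(x).count("1") == 2) == comb(8, 2)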

little man computer- Branch on positive

In the Little Man Computer (LMC), the Branch on Positive (BRP) condition includes zero as a positive number (I thought only numbers > 0 were positive). I know the LMC is an imaginary teaching machine, but I was wondering if any processor (outdated or current) has a branch-on-positive that counts zero as positive?
BRZ branches specifically when the value is zero, but BRP also counts zero as positive, so the only way around this is to pair the BRP with a preceding BRZ that picks off the zero case.
Your question asked about specific processors, and the closest I can come is the PDP-8 SPA – Skip on AC ≥ 0. I can describe the rationale for including zero as a positive number. Virtually all modern computers use two's complement format for integers. That makes the leftmost bit the sign bit. Negative numbers have a one in the sign bit, and positive numbers have a zero in the sign bit. The number zero is represented as all zeros, including the sign bit. So, if branch on positive were implemented on a two's complement computer that tested the sign bit, the number zero would be positive.
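A small Python sketch of that sign-bit test (the 16-bit width is arbitrary; the point is that zero's sign bit is 0, so it takes the branch):

def brp_taken(word, bits=16):
    # Branch-on-positive as a sign-bit test on a two's complement word
    return ((word >> (bits - 1)) & 1) == 0

print(brp_taken(0x0000))   # True: zero counts as "positive"
print(brp_taken(0x1234))   # True
print(brp_taken(0xFFFF))   # False: 0xFFFF is -1 in 16-bit two's complement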
Alternatively: calculators, both when Dr. Madnick designed the LMC and now, do not display a minus sign with the number zero.
That said, I wish Madnick had called it BNN: branch if not negative.
PDP-11 has BPL:
BPL Branch if plus (N=0)
...where N is the negative flag (0 or 1), and so it applies when the tested value is not negative.
ARM has BPL:
bpl - branch if pl (positive or zero)
...as does the 6502:
BPL - Branch if Positive
If the negative flag is clear then add the relative displacement to the program counter to cause a branch to a new location.

Fixed point vs floating point numbers

I just can't understand fixed point and floating point numbers; the definitions I find all over Google are hard to read, and none of them give a simple explanation of what these numbers really are. Can I get a plain definition with an example?
A fixed point number has a specific number of bits (or digits) reserved for the integer part (the part to the left of the decimal point) and a specific number of bits reserved for the fractional part (the part to the right of the decimal point). No matter how large or small your number is, it will always use the same number of bits for each portion. For example, if your fixed point format was in decimal IIIII.FFFFF then the largest number you could represent would be 99999.99999 and the smallest non-zero number would be 00000.00001. Every bit of code that processes such numbers has to have built-in knowledge of where the decimal point is.
A floating point number does not reserve a specific number of bits for the integer part or the fractional part. Instead it reserves a certain number of bits for the number (called the mantissa or significand) and a certain number of bits to say where within that number the decimal place sits (called the exponent). So a floating point number that took up 10 digits with 2 digits reserved for the exponent might represent a largest value of 9.9999999e+50 and a smallest non-zero value of 0.0000001e-49.
A fixed point number just means that there are a fixed number of digits after the decimal point. A floating point number allows for a varying number of digits after the decimal point.
For example, if you have a way of storing numbers that requires exactly four digits after the decimal point, then it is fixed point. Without that restriction it is floating point.
Often, when fixed point is used, the programmer actually uses an integer and then makes the assumption that some of the digits are beyond the decimal point. For example, I might want to keep two digits of precision, so a value of 100 actually means 1.00, 101 means 1.01, 12345 means 123.45, etc.
Floating point numbers are more general purpose because they can represent very small or very large numbers in the same way, but there is a small penalty in having to have extra storage for where the decimal place goes.
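A small sketch of that convention in Python (storing currency as integer hundredths; the rescale in the multiply is the "built-in knowledge" of where the point sits):

a = 123      # represents 1.23
b = 1001     # represents 10.01

total = a + b            # 1124, i.e. 11.24: addition needs no adjustment
product = a * b // 100   # 1231, i.e. 12.31: a multiply must divide out one scale factor
print(total, product)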
From my understanding, fixed-point arithmetic is done using integers, where the decimal part is stored in a fixed number of bits, or the number is multiplied by a power of ten matching how many digits of decimal precision are needed.
For example, if the number 12.34 needs to be stored and we only need two digits of precision after the decimal point, the number is multiplied by 100 to get 1234. When performing math on this number, we use this rule set: adding 5620 (i.e. 56.20) to it yields 6854 in storage, or 68.54.
If we want to recover the decimal part of a fixed-point number, we use the modulo (%) operator.
12.34 (Python):
v1 = 1234 // 100           # get the whole number: 12 (integer division)
v2 = 1234 % 100            # get the decimal part (100ths of a whole): 34
print(f"{v1}.{v2:02d}")    # "12.34" (zero-padded, so 1205 prints "12.05", not "12.5")
Floating point numbers are a completely different story in programming. The current standard for floating point numbers (IEEE 754, single precision) uses 23 bits for the significand data, 8 bits for the exponent, and 1 bit for the sign. See this Wikipedia link for more information on this.
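You can see those three fields in Python with the standard struct module (the sample value 12.34 is arbitrary):

import struct

bits = struct.unpack(">I", struct.pack(">f", 12.34))[0]  # raw 32 bits of the float
sign = bits >> 31                # 1 sign bit
exponent = (bits >> 23) & 0xFF   # 8 exponent bits (biased by 127)
fraction = bits & 0x7FFFFF       # 23 significand bits
print(sign, exponent - 127, fraction)  # 0 3 ... (12.34 ~= 1.5425 * 2^3)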
The term ‘fixed point’ refers to the manner in which the numbers are represented, with a fixed number of digits after, and sometimes before, the decimal point.
With floating-point representation, the placement of the decimal point can ‘float’ relative to the significant digits of the number.
For example, a fixed-point representation with a uniform decimal point placement convention can represent the numbers 123.45, 1234.56, 12345.67, etc, whereas a floating-point representation could in addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, etc.
There's been plenty here about what a fixed-point number is and how it's represented, but very little mention of what I consider the defining feature. The key difference is that floating-point numbers have a constant relative (percent) error caused by rounding or truncating. Fixed-point numbers have constant absolute error.
With 64-bit floats, you can be sure that the answer to x+y will never be off by more than 1 bit, but how big is a bit? Well, it depends on x and y -- if the exponent is equal to 10, then rounding off the last bit represents an error of 2^10=1024, but if the exponent is 0, then rounding off a bit is an error of 2^0=1.
With fixed point numbers, a bit always represents the same amount. For example, if we have 32 bits before the decimal point and 32 after, that means truncation errors will always change the answer by 2^-32 at most. This is great if you're working with numbers that are all about equal to 1, which gain a lot of precision, but bad if you're working with numbers that have different units--who cares if you calculate a distance of a googol meters, then end up with an error of 2^-32 meters?
In general, floating-point lets you represent much larger numbers, but the cost is higher (absolute) error for medium-sized numbers. Fixed points get better accuracy if you know how big of a number you'll have to represent ahead of time, so that you can put the decimal exactly where you want it for maximum accuracy. But if you don't know what units you're working with, floats are a better choice, because they represent a wide range with an accuracy that's good enough.
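Python can show that scaling directly: math.ulp (Python 3.9+) gives the absolute size of the last bit of a 64-bit float at a given magnitude, while the 32.32 fixed-point layout described above has the same step everywhere:

import math

print(math.ulp(1.0))       # ~2.2e-16: the rounding step near 1
print(math.ulp(2.0**40))   # ~2.4e-4: same relative error, far larger absolute error

# The 32.32 fixed-point format has a constant absolute step:
step = 2.0**-32
print(step)                # ~2.3e-10, whether the value is near 1 or near 2^31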
Note that fixed-point numbers don't only have some fixed number of digits after the point; mathematically, those digits represent negative powers of the base. That made them very good for mechanical calculators:
e.g., the price of something is USD 23.37 (Q = 2 digits after the point). The machine knows where the point is supposed to be!
Take the number 123.456789:
As an integer, this number would be 123.
As a fixed-point number with 2 decimal places, this number would be 123.46 (assuming you rounded it up).
As a floating point number, this number would be 123.456789.
Floating point lets you represent almost every number with a great deal of precision. Fixed point is less precise, but simpler for the computer.