How is the Jump instruction executed based on the value of out (the ALU output)? - cpu-architecture

Figure from The Elements of Computing Systems (Nand2Tetris)
Have a look at the scenario where
j1 = 1 (out < 0)
j2 = 0 (out = 0)
j3 = 1 (out > 0)
How is this scenario possible? It requires out < 0 to be true as well as out > 0, while out = 0 is false. How can out be both positive and negative at the same time?
In other words, when is the JNE instruction going to execute? It seems theoretically possible to me, but practically it is not.

If out < 0, the jump is executed if j1 = 1.
If out = 0, the jump is executed if j2 = 1.
If out > 0, the jump is executed if j3 = 1.
Hopefully now you can understand the table better. In particular, JNE is executed if out is non-zero, and is skipped if out is zero.
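If it helps, the jump decision can be written out as ordinary code. A sketch in Python (out is the ALU output; j1, j2, j3 are the three jump bits of the instruction):
def should_jump(out, j1, j2, j3):
    # each jump bit enables the jump for one sign of out; the three conditions are OR'ed together
    return bool((j1 and out < 0) or (j2 and out == 0) or (j3 and out > 0))
# JNE corresponds to j1 = 1, j2 = 0, j3 = 1: jump for any non-zero out
print(should_jump(-7, 1, 0, 1), should_jump(0, 1, 0, 1), should_jump(7, 1, 0, 1))  # True False True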

The mnemonic makes sense if those are match-any conditions, not match-all. i.e. jump if the difference is greater or less than zero, but not if it is zero.
Specifically, sub x, y / jne target works the usual way: it jumps if x and y were not equal before the subtraction (so the subtraction result is non-zero). This is what the if(out!=0) jump in the Effect column is talking about.
IDK the syntax for Nand2Tetris, but hopefully the idea is clear.
BTW, on x86 JNZ is a synonym for JNE, so you can use whichever one is semantically relevant. JNE only really makes sense after something that works as a compare, even though most operations set ZF based on whether the result is zero or not.
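To make the flag behaviour concrete, here is a rough model of sub followed by jne (a sketch in Python, not real assembler semantics):
def jne_taken(x, y):
    result = x - y                # sub x, y
    zf = (result == 0)            # ZF is set when the result is zero
    return not zf                 # jne/jnz jumps when ZF is clear, i.e. when x != y
print(jne_taken(5, 5))            # False: operands equal, fall through
print(jne_taken(5, 3))            # True: operands differ, jump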

How to do bitwise operation decently?

I'm doing analysis on binary data. Suppose I have two uint8 data values:
a = uint8(0xAB);
b = uint8(0xCD);
I want to take the lower two bits from a and the whole content of b to make a 10-bit value. In C-style, it should be like:
(a[2:1] << 8) | b
I tried bitget:
bitget(a,2:-1:1)
But this just gave me two separate logical values [1, 1], which is not a scalar and cannot be used in the bitshift operation later.
My current solution is:
Make a|b (a or b):
temp1 = bitor(bitshift(uint16(a), 8), uint16(b));
Left shift six bits to get rid of the higher six bits from a:
temp2 = bitshift(temp1, 6);
Right shift six bits to get rid of lower zeros from the previous result:
temp3 = bitshift(temp2, -6);
Putting all these on one line:
result = bitshift(bitshift(bitor(bitshift(uint16(a), 8), uint16(b)), 6), -6);
This doesn't seem efficient, right? I only want to get (a[2:1] << 8) | b, and it takes a long expression to get the value.
Please let me know if there's a well-known solution for this problem.
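For reference, the target value is easy to check in another language; here is a quick sanity check in Python of the C-style expression above:
a, b = 0xAB, 0xCD
expected = ((a & 0b11) << 8) | b  # keep the low two bits of a, put them above the 8 bits of b
print(expected)                   # 973, i.e. 0x3CD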
Since you are using Octave, you can make use of bitpack and bitunpack:
octave> A = bitunpack (uint8 (0xAB))
A =
1 1 0 1 0 1 0 1
octave> B = bitunpack (uint8 (0xCD))
B =
1 0 1 1 0 0 1 1
Once you have them in this form, it's dead easy to do what you want:
octave> [B A(1:2)]
ans =
1 0 1 1 0 0 1 1 1 1
Then simply pad with zeros accordingly and pack it back into an integer:
octave> postpad ([B A(1:2)], 16, false)
ans =
1 0 1 1 0 0 1 1 1 1 0 0 0 0 0 0
octave> bitpack (ans, "uint16")
ans = 973
That OR is equivalent to an addition here, because the shifted bits from a and the bits from b never overlap:
result = bitshift(bi2de(bitget(a,1:2)),8) + b;
e.g
a = 01010111
b = 10010010
result = 00000011 10010010
= a[2]*2^9 + a[1]*2^8 + b
an alternative method could be
result = mod(a,2^x)*2^y + b;
where x is the number of bits you want to extract from a and y is the number of bits in b; in your case:
result = mod(a,4)*256 + b;
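A quick way to convince yourself that the OR and the addition agree here (a sketch in Python, using the example values from the question):
a, b = 0xAB, 0xCD                 # the example values from the question
low2 = a % 4                      # mod(a, 2^x) with x = 2 keeps the two low bits of a
via_add = low2 * 256 + b          # mod(a,4)*2^y + b with y = 8
via_or = (low2 << 8) | b          # the C-style (a[2:1] << 8) | b
print(via_add, via_or)            # 973 973 -- identical, since the shifted bits and b never overlap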
an extra alternative solution close to the C solution (the operands are cast to uint16 so the shift by 8 does not overflow a uint8):
result = bitor(bitshift(bitand(uint16(a),3), 8), uint16(b));
I think it is important to explain exactly what "(a[2:1] << 8) | b" is doing.
In assembly, referencing individual bits is a single operation. Assume all operations take the exact same time and "efficient" a[2:1] starts looking extremely inefficient.
The convenience statement actually does (a & 0x03).
If your compiler actually converts a uint8 to a uint16 based on how much it was shifted, this is not a 'free' operation, per se. Effectively, what your compiler will do is first clear the "memory" to the size of uint16 and then copy "a" into the location. This requires an extra step (clearing the "memory" (register)) that wouldn't normally be needed.
This means your statement actually is (uint16(a & 0x03) << 8) | uint16(b)
Now yes, because you're doing a power-of-two shift, you could just move a into AH, move b into AL, AND AH with 0x03, and move it all out, but that's a compiler optimization and not what your C code said to do.
The point is that directly translating that statement into matlab yields
bitor(bitshift(uint16(bitand(a,3)),8),uint16(b))
But, it should be noted that while it is not as TERSE as (a[2:1] << 8) | b, the number of "high level operations" is the same.
Note that all scripting languages are going to be very slow upon initiating each instruction, but will complete said instruction rapidly. The terse nature of Python isn't because "terse is better" but to create simple structures that the language can recognize so it can easily go into vectorized operations mode and start executing code very quickly.
The point here is that you have an "overhead" cost for calling bitand; but when operating on an array it will use SSE and that "overhead" is only paid once. The JIT (just in time) compiler, which optimizes script languages by reducing overhead calls and creating temporary machine code for currently executing sections of code MAY be able to recognize that the type checks for a chain of bitwise operations need only occur on the initial inputs, hence further reducing runtime.
Very high level languages are quite different (and frustrating) from high level languages such as C. You are giving up a large amount of control over code execution for ease of code production; whether matlab actually has implemented uint8 or if it is actually using a double and truncating it, you do not know. A bitwise operation on a native uint8 is extremely fast, but to convert from float to uint8, perform bitwise operation, and convert back is slow. (Historically, Matlab used doubles for everything and only rounded according to what 'type' you specified)
Even now, Octave 4.0.3 has a compiled bitshift function that, for bitshift(ones('uint32'),-32), results in it wrapping back to 1. BRILLIANT! VHLLs place you at the mercy of the language; it isn't about how terse or how verbose you write the code, it's how the blasted language decides to interpret it and execute machine-level code. So instead of shifting, uint32(floor(ones / (2^32))) is actually FASTER and more accurate.

How can I extract a specific bit from a 16-bit register using math ONLY?

I have a 16-bit WORD and I want to read the status of a specific bit or several bits.
I've tried a method that divides the word by the bit that I want, converts the result to two values - an integer and a real - and compares the two. If they are not equal, then it equates to false. This appears to only work if I am looking for a bit that is the last 'TRUE' bit in the word. If there are any successive TRUE bits, it fails. Perhaps I just haven't done it right. I don't have the ability to use code, just basic math, boolean operations, and type conversion. Any ideas? I hope this isn't a dumb question, but I have a feeling it is.
eg:
WORD 0010000100100100 = 9348 (bits written with bit 0 first)
I want to know the value of bit 2. How can I determine it from 9348?
There are many ways, depending on what operations you can use. It appears you don't have much to choose from. But this should work, using just integer division and multiplication, and a test for equality.
(pseudocode):
x = 9348 (binary 0010000100100100, bit 0 = 0, bit 1 = 0, bit 2 = 1, ...)
x = x / 4 (now x is 1000010010010000)
y = (x / 2) * 2 (y is 0000010010010000)
if (x == y) {
(bit 2 must have been 0)
} else {
(bit 2 must have been 1)
}
Every time you divide by 2, you move the bits to the left one position (in your representation, with bit 0 written first). Every time you multiply by 2, you move the bits to the right one position. Odd numbers will have 1 in the least significant position. Even numbers will have 0 in the least significant position. If you divide an odd number by 2 in integer math, and then multiply by 2, you lose the odd bit if there was one. So the idea above is to first move the bit you want to know about into the least significant position. Then, divide by 2 and then multiply by 2. If the result is the same as what you had before, then there must have been a 0 in the bit you care about. If the result is not the same as what you had before, then there must have been a 1 in the bit you care about.
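The same idea written out as code (a sketch in Python, where // is integer division):
def bit2_is_set(x):
    x = x // 4                    # integer division by 4 moves bit 2 into the least significant position
    return (x // 2) * 2 != x      # dividing then multiplying by 2 drops an odd bit, so inequality means the bit was 1
print(bit2_is_set(9348))          # True: bit 2 of 9348 is set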
Having explained the idea, we can simplify to
((x / 8) * 2) <> (x / 4)
which will resolve to true if the bit was set, and false if the bit was not set.
AND the word with a mask [1].
In your example, you're interested in the second bit, so the mask (in binary) is
00000010. (Which is 2 in decimal.)
In binary, your word 9348 is 0010010010000100 [2]
0010010010000100 (your word)
AND 0000000000000010 (mask)
----------------
0000000000000000 (result of ANDing your word and the mask)
Because the value is equal to zero, the bit is not set. If it were different to zero, the bit was set.
This technique works for extracting one bit at a time. You can however use it repeatedly with different masks if you're interested in extracting multiple bits.
[1] For more information on masking techniques see http://en.wikipedia.org/wiki/Mask_(computing)
[2] See http://www.binaryhexconverter.com/decimal-to-binary-converter
The nth bit is equal to (the word divided by 2^n) mod 2, using integer division.
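In code form (a sketch in Python, with 0-based bit numbering):
def nth_bit(word, n):
    return (word // 2 ** n) % 2   # integer-divide by 2^n, then take the remainder mod 2
print(nth_bit(9348, 2))           # 1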
I think you'll have to test each bit, 0 through 15 inclusive.
You could try 9348 AND 4 (the equivalent of 1 << 2, where 2 is the index of the bit you wanted)
9348 AND 4
should give 4 if bit is set, 0 if not.
So here is what I have come up with: 3 solutions. One is Hatchet's as proposed above, and his answer helped me immensely with actually understanding HOW this works, which is of utmost importance to me! The proposed AND masking solutions could have worked if my system supported bitwise operators, but it apparently does not.
Original technique:
( ( ( INT ( TAG / BIT ) ) / 2 ) - ( INT ( ( INT ( TAG / BIT ) ) / 2 ) ) <> 0 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT, then REAL division by 2. In the second part, integer division is performed on TAG/BIT, then integer division again by 2. The difference between these two results is compared to 0. If the difference is not 0, then the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5-1168 <> 0, so the result is TRUE.
My modified technique:
( INT ( TAG / BIT ) / 2 ) <> ( INT ( INT ( TAG / BIT ) / 2 ) )
Explanation:
effectively the same as above, but instead of subtracting the two results and comparing them to 0, I am just comparing the two results themselves. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5 <> 1168, so the result is TRUE.
Hatchet's technique as it applies to my system:
( INT ( TAG / BIT )) <> ( INT ( INT ( TAG / BIT ) / 2 ) * 2 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT. In the second part, integer division is performed on TAG/BIT, then integer division again by 2, then multiplication by 2. The two results are compared. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337. Then 2337/2 = 1168 w/ integer division. Then 1168x2=2336. 2337 <> 2336 so the result is TRUE. As Hatchet stated, this method 'drops the odd bit'.
Note - 9348/4 = 2337 w/ both REAL and integer division, but it is important that these parts of the formula use integer division and not REAL division (12164/32 = 380 w/ integer division and 380.125 w/ REAL division)
I feel it important to note for any future readers that the BIT value in the equations above is not the bit number, but the actual value of the resulting decimal if the bit in the desired position was the only TRUE bit in the binary string (bit 2 = 4 (2^2), bit 6 = 64 (2^6))
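If it helps to sanity-check the formulas above, here is Hatchet's technique spelled out (a sketch in Python; // is integer division, and BIT is the bit's decimal value as described above):
def bit_true(tag, bit_value):
    # INT(TAG/BIT) <> INT(INT(TAG/BIT)/2) * 2
    q = tag // bit_value
    return q != (q // 2) * 2
print(bit_true(9348, 4))          # True: bit 2 (BIT = 2^2 = 4) is set
print(bit_true(9348, 2))          # False: bit 1 (BIT = 2^1 = 2) is clear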
This explanation may be a bit too verbose for some, but may be perfect for others :)
Please feel free to comment/critique/correct me if necessary!
I just needed to resolve an integer status code to a bit state in order to interface with some hardware. Here's a method that works for me:
private bool resolveBitState(int value, int bitNumber)
{
return (value & (1 << (bitNumber - 1))) != 0;
}
I like it, because it's non-iterative, requires no cast operations and essentially translates directly to machine code operations like Shift, And and Comparison, which probably means it's really optimal.
To explain in a little more detail, I'm comparing the bitwise value to a mask for the bit I am interested in (value & mask) using an AND operation. If the bitwise AND operation result is zero, then the bit is not set (return false). If the AND operation result is not zero, then the bit is set (return true). The result of the AND operation is either zero or the value of the bit (1, 2, 4, 8, 16, 32...). Hence the boolean evaluation comparing the AND operation result and 0. The mask is created by taking the number 1 and shifting it left (bit wise), by the appropriate number of binary places (1 << n). The number of places is the number of the bit targeted minus 1. If it's bit #1, I want to shift the 1 left by 0 and if it's #2, I want to shift it left 1 place, etc.
I'm surprised no one rates my solution. I think it's the most logical and succinct... and it works.

In the count-change recursive algorithm, why do we return 1 if the amount = 0?

I am taking a Coursera course and, for an assignment, I have written code to count the ways to make change for an amount given a list of denominations. After doing a lot of research, I found explanations of various algorithms. In the recursive implementation, one of the base cases is: if the amount is 0, then the count is 1. I don't understand why, but this is the only way the code works. I feel that if the amount is 0 then there is no way to make change for it and I should throw an exception. The code looks like:
def countChange(amount: Int, denoms: List[Int]): Int = {
  if (amount == 0) return 1 ....
Any explanation is much appreciated.
To avoid speaking specifically about the Coursera problem, I'll refer to a simpler but similar problem.
How many outcomes are there for 2 coin flips? 4.
(H,H),(H,T),(T,H),(T,T)
How many outcomes are there for 1 coin flip? 2.
(H),(T)
How many outcomes are there for 0 coin flips? 1.
()
Expressing this recursively, how many outcomes are there for N coin flips? Let's call it f(N) where
f(N) = 2 * f(N - 1), for N > 0
f(0) = 1
The N = 0 trivial (base) case is chosen so that the non-trivial cases, defined recursively, work out correctly. Since we're doing multiplication in this example and the identity element for multiplication is 1, it makes sense to choose that as the base case.
Alternatively, you could argue from combinatorics: n choose 0 = 1, 0! = 1, etc.
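Concretely, the same choice of base case is what makes the usual counting recursion come out right. A sketch in Python (the function mirrors the countChange from the question; splitting on the first denomination is the standard approach, not necessarily the course's exact code):
def count_change(amount, denoms):
    if amount == 0:
        return 1                  # exactly one way to make 0: use no coins at all
    if amount < 0 or not denoms:
        return 0                  # a negative amount, or no denominations left, cannot be made
    # ways that use the first denomination at least once, plus ways that never use it
    return count_change(amount - denoms[0], denoms) + count_change(amount, denoms[1:])
print(count_change(4, [1, 2]))    # 3: 1+1+1+1, 1+1+2, 2+2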

Bound constraints ignored Matlab

I have the following code working out the efficient frontier for a portfolio of assets:
lb=Bounds(:,1);
ub=Bounds(:,2);
P = Portfolio('AssetList', AssetList,'LowerBound', lb, 'UpperBound', ub, 'Budget', 1);
P = P.estimateAssetMoments(AssetReturns)
[Passetmean, Passetcovar] = P.getAssetMoments
pwgt = P.estimateFrontier(20);
[prsk, pret] = P.estimatePortMoments(pwgt);
It works fine apart from the fact that it ignores the constraints to some extent (results below). How do I set the constraints to be hard constraints, i.e. prevent it from ignoring an upper bound of zero? For example, when I set an upper and lower bound to zero (i.e. I do not want a particular asset to be included in the portfolio), I still get values in the calculated portfolio weights for that asset, albeit very small ones, coming out as part of the optimised portfolio.
Lower bounds (lb), upper bounds (ub), and one of the portfolio weights (pwgt) are set out below:
lb ub pwgt(:,1)
0 0 1.06685493772574e-16
0 0 4.17200995972422e-16
0 0 0
0 0 2.76688394418301e-16
0 0 3.39138439553466e-16
0.192222222222222 0.252222222222222 0.192222222222222
0.0811111111111111 0.141111111111111 0.105624956477606
0.0912121212121212 0.151212121212121 0.0912121212121212
0.0912121212121212 0.151212121212121 0.0912121212121212
0.0306060606060606 0.0906060606060606 0.0306060606060606
0.0306060606060606 0.0906060606060606 0.0306060606060606
0.121515151515152 0.181515151515152 0.181515151515152
0.0508080808080808 0.110808080808081 0.110808080808081
0.00367003367003366 0.0636700336700337 0.0388531580005063
0.00367003367003366 0.0636700336700337 0.0636700336700338
0.00367003367003366 0.0636700336700337 0.0636700336700337
0 0 0
0 0 0
0 0 1.29236898960272e-16
I could use something like: pwgt=floor(pwgt*1000)/1000;, but is there not a more elegant solution than this?
The point is that your bound has not been ignored.
You are calculating with floating point numbers, and hence 0 and 4.17200995972422e-16 are both close enough to 0 to let your program allow them.
My recommendation would indeed be to round your results (or simply display less decimals with format short), however I would do the rounding like this:
pwgt=round(pwgt*100000)/100000;
Note that the other results may also be slightly 'above' the upper bound; however, this will not be visible because the excess is insignificant.
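If you prefer a tolerance to fixed decimal rounding, the same clean-up can be expressed as "treat anything at noise level as zero". A sketch of the idea in Python, with the weights copied from the table above (in MATLAB the equivalent would be pwgt(abs(pwgt) < tol) = 0):
weights = [1.06685493772574e-16, 0.192222222222222, 4.17200995972422e-16, 0.0306060606060606]
tol = 1e-12                                   # anything this small is floating-point noise, not a real allocation
cleaned = [0.0 if abs(w) < tol else w for w in weights]
print(cleaned)                                # [0.0, 0.192222222222222, 0.0, 0.0306060606060606]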
I had issues like this with a laminate/engineering properties code, which was propagating errors all over everything. I fixed it by taking all of the values I had and systematically converting them from double to sym, and suddenly my 1e-16 values became real zeros that I could also eval(val) and still see as zeros! This may help, but you may have to go inside the .m files you're running and have the numbers convert to sym with val = sym(val).
I can't remember for certain, but I think Matlab functions MIGHT change sym to double once they receive the data for their own internal processing.

Brainfuck compare 2 numbers as greater than or less than

How can I compare two numbers with an inequality? (greater than or less than)
I want to compare single digits
For example
1 2
5 3
9 2
etc.
This is the best way to compare two numbers. Why? Because, if you are intelligent enough, you can use the same code in bigger programs. It's highly portable.
Assume we have two numbers a, b.
We have two blocks: if (a >= b) and else.
Hope it's enough.
0 1 0 a b 0
Make the array like this, and point to cell (4), i.e. point to a.
+>+< This is for managing if a=0 and b=0
[->-[>]<<] This is a magic loop. If a is the one which
reaches 0 first (a<b), then the pointer will be at (4).
Else it will be at (3).
<[-
// BLOCK (a>=b)
//You are at (2) and do whatever you want and come back to (2).
//Its a must
]
<[-<
// BLOCK(a<b)
//You are at (1) and do whatever you want and come back to (1).
//Its a must
]
It will not affect the following program code, as both code blocks end up at (1). You can do further coding assuming that the pointer will reach (1).
Please remove the documentation if you copy the code, because the comments contain valid Brainfuck symbols such as < . , etc.
Since you don't know the distance between the two numbers, you should decrement both of them in the same loop iteration and then check both for being zero: you will then see which one is the smaller.
Eg:
+++++ > +++ < [->-< check if first is zero, then second]
(this is just to give you a hint; you will have to take care of equal numbers and similar issues.)
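The control flow the Brainfuck loop has to implement is essentially the following (a sketch in Python, ignoring cell wrap-around):
def compare(a, b):
    # decrement both numbers in lock-step; whichever reaches 0 first is the smaller one
    while a != 0 and b != 0:
        a -= 1
        b -= 1
    if a == 0 and b == 0:
        return "equal"
    return "first is smaller" if a == 0 else "second is smaller"
print(compare(5, 3))              # second is smaller
print(compare(2, 2))              # equal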
I was thinking about this too, and while I'm sure this isn't the best solution, at least it can answer the question of which number is larger =)
The program asks for two characters, outputs '<' if the first is smaller, '>' if it is larger, and '=' if they are equal. After outputting one char, the program halts by asking for additional input.
+>,>,<<[>-[>>>]<[>>-[>++++++++++[->++++++<]>.,]++++++++++[->++++++<]>+.,]<-[>>>]<<[>>>++++++++++[->++++++<]>++.,]<<<]
Hopefully somewhat clearer:
+ init (0) to 1
>, read (1)
>, read (2)
<<[ loop forever
>-[>>>] decrement (1) going to (4) if (1) != 0
<[ goto (0) == 1 if (1) reached 0 (otherwise goto (3))
>>-[>++++++++++[->++++++<]>.,] decrement (2) printing lessthan if larger than 0
++++++++++[->++++++<]>+., if (2) == 0 print '='
]
<-[>>>] decrement (2) going to (5) if (2) != 0
<<[ goto (0) == 1 if (2) reached 0 (otherwise goto (3))
>>>++++++++++[->++++++<]>++., print largerthan since (2) reached 0 first
]
<<< goto(0)
]
I made a solution that gives you back a boolean and leaves the pointer at the same position.
This is how it looks like at the beginning:
0 0 0 a b 0 0
p
And these are the two possible outputs:
0 0 0 0 0 1 0 #true
p
0 0 0 0 0 0 0 #false
p
The code:
>>>>
[ # while cell != 0
- # decrement a
[ # if a != 0
>- # decrement b
[ # if b != 0
< # go left
<-< # undo the finally-block;
] # finally-block
<[-]> # clear a
>+> # res = 1; move to end-position
<<< # undo the finally-block
] # finally-block
>[-]>> # clear b; res = 0; move to end-position
] #
minified version:
>>>>[-[>-[<<-<]<[-]>>+><<<]>[-]>>]
Given two numbers A and B, the following code will print A if A is greater than B, B if B is greater than A and C if both are equal.
>>>>>>>>>++++++[>+++++++++++<-]>[>+>+>+<<<-]>+>->
<<<<<<<<<<<,>,<
[->-<[>]<<]>>>[>>]>>>>>>>>.
No such thing exists in BF. The > and < in BF move the pointer to the right and to the left, respectively.