When does the borrow (i.e. the carry flag being cleared) happen during a subtraction (the SBC instruction) on the 6502 used by the NES? Is it every time the result is negative (-1 to -128)?
Many thanks!
STeN
On a 6502, SBC n is exactly equivalent to ADC (n EOR $FF) — it's one's complement. So carry is clear when A + (operand ^ 0xff) + existing carry is less than 256.
EDIT: so, if carry is set then the subtraction occurs without borrow. If carry is clear then subtraction occurs with borrow. Therefore if carry is set after the subtraction then there was no borrow. If carry is clear then there was borrow.
If you want to test whether a result is negative, check the sign bit implicitly via a BMI or BPL.
It's a bit more complicated than that if in decimal mode on a generic 6502 but the NES variant doesn't have decimal mode so ignore anything you read about that.
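If it helps to see that rule as code, here's a minimal C sketch (not 6502 code, just an illustration of the rule above) that models SBC as ADC of the one's complement and shows when the carry ends up clear:

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of 6502 SBC in binary mode: A + (operand ^ 0xFF) + C.
   Carry out is set when the 9-bit sum reaches 256, i.e. when no borrow occurred. */
static uint8_t sbc(uint8_t a, uint8_t operand, int *carry) {
    unsigned sum = a + (operand ^ 0xFF) + (*carry ? 1 : 0);
    *carry = (sum >= 0x100);          /* carry clear afterwards => a borrow happened */
    return (uint8_t)sum;
}

int main(void) {
    int carry = 1;                               /* SEC before a single-byte subtraction */
    uint8_t r = sbc(0x10, 0xF0, &carry);         /* $10 - $F0 */
    printf("result=$%02X carry=%d\n", r, carry); /* result=$20 carry=0 */
    return 0;
}

Notice from the example that the borrow (carry clear) has nothing to do with the sign of the 8-bit result; $20 is positive.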
To clarify re: the comments below; if you're treating numbers as signed then 127 is +127, 128 is -128, etc. Normal two's complement. Nothing special. E.g.
LDA #-63 ; i.e. 1100 0001
SEC
SBC #65 ; i.e. 0100 0001
; result in accumulator is now -128, i.e. 1000 0000,
; and carry remains set because there was no borrow
BPL somewhere ; wouldn't jump, because -128 is negative
BMI somewhereElse ; would jump, because -128 is negative
The following is exactly equivalent in terms of inner workings:
LDA #-63 ; i.e. 1100 0001
SEC ; ... everything the same up until here ...
ADC #190 ; i.e. 1011 1110 (the complement of 0100 0001)
; result = 1100 0001 + 1011 1110 + 1 = [1] 0111 1111 + 1 = [1] 1000 0000
;                                                           ^
;                                                           |
;                                                           carry
; = -128
So, as above, defining "the result" as the 6502 manual and ordinary programming usage do (the thing sitting in the accumulator), you can test whether it is positive or negative as shown above, e.g.
SBC $23
BMI resultWasNegative
resultWasPositive: ...
If you're interested in whether the complete result would have been negative (i.e. had it fitted into the accumulator) then you can also check the overflow flag. If overflow is set then that means that whatever is in the accumulator has the wrong sign because of the 8-bit limit. So you can do the equivalent of an exclusive OR between overflow and sign:
SBC $23
BVC signIsTheOpposite
BMI resultWasNegative
JMP resultWasPositive
signIsTheOpposite:
BPL resultWasNegative
JMP resultWasPositive
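If you want to convince yourself of that N-xor-V rule outside the machine, here's a small C sketch (just an illustration, not 6502 code) that mirrors what the BVC/BMI/BPL ladder above computes:

#include <stdint.h>
#include <stdio.h>

/* Compute A - M the way SBC with carry set would, then combine the sign (N)
   and overflow (V) information the same way the branch ladder does. */
static int complete_result_is_negative(int8_t a, int8_t m) {
    uint8_t result = (uint8_t)((uint8_t)a - (uint8_t)m);  /* what lands in the accumulator */
    int n = (result & 0x80) != 0;                         /* sign of the 8-bit result */
    int wide = (int)a - (int)m;                           /* the complete result */
    int v = (wide < -128) || (wide > 127);                /* overflow: the 8-bit sign is wrong */
    return n ^ v;                                         /* sign XOR overflow */
}

int main(void) {
    printf("%d\n", complete_result_is_negative(-63, 65));   /* 1: -128 is negative */
    printf("%d\n", complete_result_is_negative(-100, 100)); /* 1: -200 is negative even though
                                                                the accumulator holds $38 */
    return 0;
}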
Tommy's answer is correct, but I have a simpler way of looking at it.
Operations in the 6502's ALU are all 8 bit so you can think of a subtraction like this (for $65 and $64):
 01100101
-01100100
=========
 00000001
What I do is imagine the subtraction is a 9 bit (unsigned) operation with the 9th bit of the accumulator set to 1, so $65 - $64 would look like this:
1 01100101
- 01100100
==========
1 00000001
Whereas $64 - $65 would look like this
1 01100100
- 01100101
==========
0 11111111
The new carry bit is the imaginary 9th bit of the result.
Essentially, the carry ends up clear (i.e. a borrow happened) when the operand, interpreted as an unsigned number, is greater than the accumulator interpreted as an unsigned number. Or, to be pedantic, the carry is clear when
A < operand + 1 - oldcarry
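As a sanity check, that 9-bit mental model is easy to write down in C; this is only a sketch of the idea, not emulator code:

#include <stdint.h>
#include <stdio.h>

/* The "imaginary 9th bit" view of SBC with the carry set going in:
   prepend a 1 to the accumulator, do an ordinary unsigned subtraction,
   and the 9th bit of the result is the new carry. */
static void nine_bit_sub(uint8_t a, uint8_t operand) {
    unsigned wide = 0x100u + a;            /* 1 aaaaaaaa */
    unsigned result = wide - operand;      /* 9-bit unsigned subtraction */
    printf("result=$%02X carry=%u\n", (uint8_t)result, (result >> 8) & 1u);
}

int main(void) {
    nine_bit_sub(0x65, 0x64);   /* result=$01 carry=1 (no borrow) */
    nine_bit_sub(0x64, 0x65);   /* result=$FF carry=0 (borrow)    */
    return 0;
}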
Nope, the result may well be positive.
Example:
lda #$10
sec
sbc #$f0
Carry will be clear after that and Accumulator will be $20.
To test for positive/negative values after subtraction, use the N(egative) flag of the status register and the branches evaluating it (BMI/BPL).
I want to vectorize the multiplication of two memory aligned arrays.
I didn't find any way to do a 64x64-bit multiply in AVX/AVX2, so I just did a loop unroll with AVX2 loads/stores. Is there a faster way to do this?
Note: I don't want to save the high-half result of each multiplication.
void multiply_vex(long *Gi_vec, long q, long *Gj_vec){
    int i;
    __m256i data_j, data_i;
    __uint64_t *ptr_J = (__uint64_t*)&data_j;
    __uint64_t *ptr_I = (__uint64_t*)&data_i;

    for (i=0; i<BASE_VEX_STOP; i+=4) {
        data_i = _mm256_load_si256((__m256i*)&Gi_vec[i]);
        data_j = _mm256_load_si256((__m256i*)&Gj_vec[i]);

        ptr_I[0] -= ptr_J[0] * q;
        ptr_I[1] -= ptr_J[1] * q;
        ptr_I[2] -= ptr_J[2] * q;
        ptr_I[3] -= ptr_J[3] * q;

        _mm256_store_si256((__m256i*)&Gi_vec[i], data_i);
    }

    for (; i<BASE_DIMENSION; i++)
        Gi_vec[i] -= Gj_vec[i] * q;
}
UPDATE:
I'm using the Haswell microarchitecture with both the ICC and GCC compilers, so both AVX and AVX2 are fine.
I substituted the -= with the C intrinsic _mm256_sub_epi64 after the multiplication loop-unroll, which gave some speedup. Currently, it is ptr_J[0] *= q; ...
I used __uint64_t, but that was an error. The right data type is __int64_t.
You seem to be assuming long is 64bits in your code, but then using __uint64_t as well. In 32bit, the x32 ABI, and on Windows, long is a 32bit type. Your title mentions long long, but then your code ignores it. I was wondering for a while if your code was assuming that long was 32bit.
You're completely shooting yourself in the foot by using AVX256 loads but then aliasing a pointer onto the __m256i to do scalar operations. gcc just gives up and gives you the terrible code you asked for: vector load and then a bunch of extract and insert instructions. Your way of writing it means that both vectors have to be unpacked to do the sub in scalar as well, instead of using vpsubq.
Modern x86 CPUs have very fast L1 cache that can handle two operations per clock. (Haswell and later: two loads and one store per clock). Doing multiple scalar loads from the same cache line is better than a vector load and unpacking. (Imperfect uop scheduling reduces the throughput to about 84% of that, though: see below)
gcc 5.3 -O3 -march=haswell (Godbolt compiler explorer) auto-vectorizes a simple scalar implementation pretty well. When AVX2 isn't available, gcc foolishly still auto-vectorizes with 128b vectors: On Haswell, this will actually be about 1/2 the speed of ideal scalar 64bit code. (See the perf analysis below, but substitute 2 elements per vector instead of 4).
#include <stdint.h> // why not use this like a normal person?
#define BASE_VEX_STOP 1024
#define BASE_DIMENSION 1028
// restrict lets the compiler know the arrays don't overlap,
// so it doesn't have to generate a scalar fallback case
void multiply_simple(uint64_t *restrict Gi_vec, uint64_t q, const uint64_t *restrict Gj_vec){
    for (intptr_t i=0; i<BASE_DIMENSION; i++) // gcc doesn't manage to optimize away the sign-extension from 32bit to pointer-size in the scalar epilogue to handle the last less-than-a-vector elements
        Gi_vec[i] -= Gj_vec[i] * q;
}
inner loop:
.L4:
vmovdqu ymm1, YMMWORD PTR [r9+rax] # MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B], MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B]
add rcx, 1 # ivtmp.30,
vpsrlq ymm0, ymm1, 32 # tmp174, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B],
vpmuludq ymm2, ymm1, ymm3 # tmp173, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B], vect_cst_.25
vpmuludq ymm0, ymm0, ymm3 # tmp176, tmp174, vect_cst_.25
vpmuludq ymm1, ymm4, ymm1 # tmp177, tmp185, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B]
vpaddq ymm0, ymm0, ymm1 # tmp176, tmp176, tmp177
vmovdqa ymm1, YMMWORD PTR [r8+rax] # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B]
vpsllq ymm0, ymm0, 32 # tmp176, tmp176,
vpaddq ymm0, ymm2, ymm0 # vect__13.24, tmp173, tmp176
vpsubq ymm0, ymm1, ymm0 # vect__14.26, MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__13.24
vmovdqa YMMWORD PTR [r8+rax], ymm0 # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__14.26
add rax, 32 # ivtmp.32,
cmp rcx, r10 # ivtmp.30, bnd.14
jb .L4 #,
Translate that back to intrinsics if you want, but it's going to be a lot easier to just let the compiler autovectorize. I didn't try to analyse it to see if it's optimal.
If you don't usually compile with -O3, you could use #pragma omp simd before the loop (and -fopenmp).
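That is, something along these lines (a sketch only; the pragma is just a hint, and this mirrors the multiply_simple loop above):

#include <stdint.h>

#define BASE_DIMENSION 1028

// Same loop as multiply_simple, but with an explicit vectorization hint so it
// gets vectorized even without -O3 (compile with -fopenmp).
void multiply_simd_hint(uint64_t *restrict Gi_vec, uint64_t q,
                        const uint64_t *restrict Gj_vec)
{
    #pragma omp simd
    for (intptr_t i = 0; i < BASE_DIMENSION; i++)
        Gi_vec[i] -= Gj_vec[i] * q;
}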
Of course, instead of a scalar epilogue, it would probably be faster to do an unaligned load of the last 32B of Gj_vec, and store into the last 32B of Gi_vec, potentially overlapping with the last store from the loop. (A scalar fallback is still needed if the arrays are smaller than 32B.)
Improved vector intrinsic version for Haswell
From my comments on Z Boson's answer. Based on Agner Fog's vector class library code.
Agner Fog's version saves an instruction but bottlenecks on the shuffle port by using phadd + pshufd where I use psrlq / paddq / pand.
Since one of your operands is constant, make sure to pass set1(q) as b, not a, so the "bswap" shuffle can be hoisted.
// replace hadd -> shuffle (4 uops) with shift/add/and (3 uops)
// The constant takes 2 insns to generate outside a loop.
__m256i mul64_haswell (__m256i a, __m256i b) {
    // instruction does not exist. Split into 32-bit multiplies
    __m256i bswap   = _mm256_shuffle_epi32(b,0xB1);       // swap H<->L
    __m256i prodlh  = _mm256_mullo_epi32(a,bswap);        // 32 bit L*H products
    // or use pshufb instead of psrlq to reduce port0 pressure on Haswell
    __m256i prodlh2 = _mm256_srli_epi64(prodlh, 32);      // 0 , a0Hb0L, 0, a1Hb1L
    __m256i prodlh3 = _mm256_add_epi32(prodlh2, prodlh);  // xxx, a0Lb0H+a0Hb0L, xxx, a1Lb1H+a1Hb1L
    __m256i prodlh4 = _mm256_and_si256(prodlh3, _mm256_set1_epi64x(0x00000000FFFFFFFF)); // zero high halves
    __m256i prodll  = _mm256_mul_epu32(a,b);              // a0Lb0L,a1Lb1L, 64 bit unsigned products
    __m256i prod    = _mm256_add_epi64(prodll,prodlh4);   // a0Lb0L+(a0Lb0H+a0Hb0L)<<32, a1Lb1L+(a1Lb1H+a1Hb1L)<<32
    return prod;
}
See it on Godbolt.
Note that this doesn't include the final subtract, only the multiply.
This version should perform a bit better on Haswell than gcc's autovectorized version. (like maybe one vector per 4 cycles instead of one vector per 5 cycles, bottlenecked on port0 throughput. I didn't consider other bottlenecks for the full problem, since this was a late addition to the answer.)
An AVX1 version (two elements per vector) would suck, and probably still be worse than 64bit scalar. Don't do it unless you already have your data in vectors, and want the result in a vector (extracting to scalar and back might not be worth it).
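Putting the pieces together, here's a hedged sketch of a caller (my own function name, not from the question): it uses mul64_haswell from above, assumes at least 4 elements, and handles the tail with the overlapping unaligned load/store trick mentioned earlier instead of a scalar epilogue:

#include <immintrin.h>
#include <stdint.h>

// Sketch only: Gi[i] -= Gj[i] * q for n >= 4 elements, AVX2.
// The final (possibly partial) vector is computed from the original data *before*
// the main loop and stored *after* it, so any overlapped elements are simply
// overwritten with the same correct values.
static void multiply_sub_haswell(uint64_t *Gi, uint64_t q, const uint64_t *Gj, size_t n)
{
    __m256i vq = _mm256_set1_epi64x((long long)q);  // passed as b so its shuffle can be hoisted

    __m256i tail_gi = _mm256_loadu_si256((const __m256i*)(Gi + n - 4));
    __m256i tail_gj = _mm256_loadu_si256((const __m256i*)(Gj + n - 4));
    __m256i tail    = _mm256_sub_epi64(tail_gi, mul64_haswell(tail_gj, vq));

    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m256i gi = _mm256_loadu_si256((const __m256i*)(Gi + i));
        __m256i gj = _mm256_loadu_si256((const __m256i*)(Gj + i));
        _mm256_storeu_si256((__m256i*)(Gi + i),
                            _mm256_sub_epi64(gi, mul64_haswell(gj, vq)));
    }

    _mm256_storeu_si256((__m256i*)(Gi + n - 4), tail);   // overlapping tail store
}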
Perf analysis of GCC's autovectorized code (not the intrinsic version)
Background: see Agner Fog's insn tables and microarch guide, and other links in the x86 tag wiki.
Until AVX512 (see below), this is probably only barely faster than scalar 64bit code: imul r64, m64 has a throughput of one per clock on Intel CPUs (but one per 4 clocks on AMD Bulldozer-family). load/imul/sub-with-memory-dest is 4 fused-domain uops on Intel CPUs (with an addressing mode that can micro-fuse, which gcc fails to use). The pipeline width is 4 fused-domain uops per clock, so even a large unroll can't get this to issue at one-per-clock. With enough unrolling, we'll bottleneck on load/store throughput. 2 loads and one store per clock is possible on Haswell, but store-address uops stealing load ports will lower the throughput to about 81/96 = 84% of that, according to Intel's manual.
So perhaps the best way for Haswell would load and multiply with scalar, (2 uops), then vmovq / pinsrq / vinserti128 so you can do the subtract with a vpsubq. That's 8 uops to load&multiply all 4 scalars, 7 shuffle uops to get the data into a __m256i (2 (movq) + 4 (pinsrq is 2 uops) + 1 vinserti128), and 3 more uops to do a vector load / vpsubq / vector store. So that's 18 fused-domain uops per 4 multiplies (4.5 cycles to issue), but 7 shuffle uops (7 cycles to execute). So nvm, this is no good compared to pure scalar.
The autovectorized code is using 8 vector ALU instructions for each vector of four values. On Haswell, 5 of those uops (multiplies and shifts) can only run on port 0, so no matter how you unroll this algorithm it will achieve at best one vector per 5 cycles (i.e. one multiply per 5/4 cycles.)
The shifts could be replaced with pshufb (port 5) to move the data and shift in zeros. (Other shuffles don't support zeroing instead of copying a byte from the input, and there aren't any known-zeros in the input that we could copy.)
paddq / psubq can run on ports 1/5 on Haswell, or p015 on Skylake.
Skylake runs pmuludq and immediate-count vector shifts on p01, so it could in theory manage a throughput of one vector per max(5/2, 8/3, 11/4) = 11/4 = 2.75 cycles. So it bottlenecks on total fused-domain uop throughput (including the 2 vector loads and 1 vector store). So a bit of loop unrolling will help. Probably resource conflicts from imperfect scheduling will bottleneck it to slightly less than 4 fused-domain uops per clock. The loop overhead can hopefully run on port 6, which can only handle some scalar ops, including add and compare-and-branch, leaving ports 0/1/5 for vector ALU ops, since they're close to saturating (8/3 = 2.666 clocks). The load/store ports are nowhere near saturating, though.
So, Skylake can theoretically manage one vector per 2.75 cycles (plus loop overhead), or one multiply per ~0.7 cycles, vs. Haswell's best option (one per ~1.2 cycles in theory with scalar, or one per 1.25 cycles in theory with vectors). The scalar one per ~1.2 cycles would probably require a hand-tuned asm loop, though, because compilers don't know how to use a one-register addressing mode for stores, and a two-register addressing mode for loads (dst + (src-dst) and increment dst).
Also, if your data isn't hot in L1 cache, getting the job done with fewer instructions lets the frontend get ahead of the execution units, and get started on the loads before the data is needed. Hardware prefetch doesn't cross page boundaries, so a vector loop will probably beat scalar in practice for large arrays, and maybe even for smaller arrays.
AVX-512DQ introduces a 64bx64b->64b vector multiply
gcc can auto-vectorize using it, if you add -mavx512dq.
.L4:
vmovdqu64 zmm0, ZMMWORD PTR [r8+rax] # vect__11.23, MEM[base: vectp_Gj_vec.22_86, index: ivtmp.32_76, offset: 0B]
add rcx, 1 # ivtmp.30,
vpmullq zmm1, zmm0, zmm2 # vect__13.24, vect__11.23, vect_cst_.25
vmovdqa64 zmm0, ZMMWORD PTR [r9+rax] # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B]
vpsubq zmm0, zmm0, zmm1 # vect__14.26, MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__13.24
vmovdqa64 ZMMWORD PTR [r9+rax], zmm0 # MEM[base: vectp_Gi_vec.19_81, index: ivtmp.32_76, offset: 0B], vect__14.26
add rax, 64 # ivtmp.32,
cmp rcx, r10 # ivtmp.30, bnd.14
jb .L4 #,
So AVX512DQ (expected to be part of Skylake multi-socket Xeon (Purley) in ~2017) will give a much larger than 2x speedup (from wider vectors) if these instructions are pipelined at one per clock.
Update: Skylake-AVX512 (aka SKL-X or SKL-SP) runs VPMULLQ at one per 1.5 cycles for xmm, ymm, or zmm vectors. It's 3 uops with 15c latency. (With maybe an extra 1c of latency for the zmm version, if that's not a measurement glitch in the AIDA results.)
vpmullq is much faster than anything you can build out of 32-bit chunks, so it's very much worth having an instruction for this even if current CPUs don't have 64-bit-element vector-multiply hardware. (Presumably they use the mantissa multipliers in the FMA units.)
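If you'd rather write the AVX-512DQ version with intrinsics instead of relying on the auto-vectorizer, here's a minimal sketch (my own function name; assumes -mavx512dq and that n is a multiple of 8, leaving any tail handling out):

#include <immintrin.h>
#include <stdint.h>

// Sketch: Gi[i] -= Gj[i] * q using vpmullq (AVX-512DQ), 8 elements per iteration.
static void multiply_sub_avx512(int64_t *Gi, int64_t q, const int64_t *Gj, size_t n)
{
    __m512i vq = _mm512_set1_epi64(q);
    for (size_t i = 0; i < n; i += 8) {
        __m512i gj   = _mm512_loadu_si512(Gj + i);
        __m512i gi   = _mm512_loadu_si512(Gi + i);
        __m512i prod = _mm512_mullo_epi64(gj, vq);    // vpmullq: 64b x 64b -> low 64b
        _mm512_storeu_si512(Gi + i, _mm512_sub_epi64(gi, prod));
    }
}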
If you're interested in SIMD 64bx64b to 64b (lower) operations, here are the AVX and AVX2 solutions from Agner Fog's Vector Class Library. I would test these with arrays and see how they compare to what GCC does with a generic loop such as the one in Peter Cordes' answer.
AVX (it uses SSE intrinsics; you can still compile with -mavx to get VEX encoding).
// vector operator * : multiply element by element
static inline Vec2q operator * (Vec2q const & a, Vec2q const & b) {
#if INSTRSET >= 5   // SSE4.1 supported
    // instruction does not exist. Split into 32-bit multiplies
    __m128i bswap   = _mm_shuffle_epi32(b,0xB1);        // b0H,b0L,b1H,b1L (swap H<->L)
    __m128i prodlh  = _mm_mullo_epi32(a,bswap);         // a0Lb0H,a0Hb0L,a1Lb1H,a1Hb1L, 32 bit L*H products
    __m128i zero    = _mm_setzero_si128();              // 0
    __m128i prodlh2 = _mm_hadd_epi32(prodlh,zero);      // a0Lb0H+a0Hb0L,a1Lb1H+a1Hb1L,0,0
    __m128i prodlh3 = _mm_shuffle_epi32(prodlh2,0x73);  // 0, a0Lb0H+a0Hb0L, 0, a1Lb1H+a1Hb1L
    __m128i prodll  = _mm_mul_epu32(a,b);               // a0Lb0L,a1Lb1L, 64 bit unsigned products
    __m128i prod    = _mm_add_epi64(prodll,prodlh3);    // a0Lb0L+(a0Lb0H+a0Hb0L)<<32, a1Lb1L+(a1Lb1H+a1Hb1L)<<32
    return prod;
#else               // SSE2
    int64_t aa[2], bb[2];
    a.store(aa);    // split into elements
    b.store(bb);
    return Vec2q(aa[0]*bb[0], aa[1]*bb[1]);             // multiply elements separately
#endif
}
AVX2
// vector operator * : multiply element by element
static inline Vec4q operator * (Vec4q const & a, Vec4q const & b) {
    // instruction does not exist. Split into 32-bit multiplies
    __m256i bswap   = _mm256_shuffle_epi32(b,0xB1);        // swap H<->L
    __m256i prodlh  = _mm256_mullo_epi32(a,bswap);         // 32 bit L*H products
    __m256i zero    = _mm256_setzero_si256();              // 0
    __m256i prodlh2 = _mm256_hadd_epi32(prodlh,zero);      // a0Lb0H+a0Hb0L,a1Lb1H+a1Hb1L,0,0
    __m256i prodlh3 = _mm256_shuffle_epi32(prodlh2,0x73);  // 0, a0Lb0H+a0Hb0L, 0, a1Lb1H+a1Hb1L
    __m256i prodll  = _mm256_mul_epu32(a,b);               // a0Lb0L,a1Lb1L, 64 bit unsigned products
    __m256i prod    = _mm256_add_epi64(prodll,prodlh3);    // a0Lb0L+(a0Lb0H+a0Hb0L)<<32, a1Lb1L+(a1Lb1H+a1Hb1L)<<32
    return prod;
}
These functions work for signed and unsigned 64-bit integers. In your case, since q is constant within the loop, you don't need to recalculate some things every iteration, but your compiler will probably figure that out anyway.
I'm doing analysis on binary data. Suppose I have two uint8 data values:
a = uint8(0xAB);
b = uint8(0xCD);
I want to take the lower two bits from a, and the whole content of b, to make a 10-bit value. In C-style, it should be like:
(a[2:1] << 8) | b
I tried bitget:
bitget(a,2:-1:1)
But this just gave me separate [1, 1] logical-type values, which are not a scalar and cannot be used in the bitshift operation later.
My current solution is:
Make a|b (a or b):
temp1 = bitor(bitshift(uint16(a), 8), uint16(b));
Left shift six bits to get rid of the higher six bits from a:
temp2 = bitshift(temp1, 6);
Right shift six bits to get rid of lower zeros from the previous result:
temp3 = bitshift(temp2, -6);
Putting all these on one line:
result = bitshift(bitshift(bitor(bitshift(uint16(a), 8), uint16(b)), 6), -6);
This doesn't seem efficient, right? I only want to get (a[2:1] << 8) | b, and it takes a long expression to get the value.
Please let me know if there's a well-known solution for this problem.
Since you are using Octave, you can make use of bitpack and bitunpack:
octave> a = bitunpack (uint8 (0xAB))
a =
1 1 0 1 0 1 0 1
octave> B = bitunpack (uint8 (0xCD))
B =
1 0 1 1 0 0 1 1
Once you have them in this form, it's dead easy to do what you want:
octave> [B a(1:2)]
ans =
1 0 1 1 0 0 1 1 1 1
Then simply pad with zeros accordingly and pack it back into an integer:
octave> postpad ([B a(1:2)], 16, false)
ans =
1 0 1 1 0 0 1 1 1 1 0 0 0 0 0 0
octave> bitpack (ans, "uint16")
ans = 973
That OR is equivalent to an addition when dealing with these integers, because the shifted part and b have no overlapping bits:
result = bitshift(bi2de(bitget(a,1:2)),8) + b;
e.g
a = 01010111
b = 10010010
result = 00000011 10010010
= a[2]*2^9 + a[1]*2^8 + b
an alternative method could be
result = mod(a,2^x)*2^y + b;
where x is the number of bits you want to extract from a and y is the number of bits in b; in your case:
result = mod(a,4)*256 + b;
an extra alternative solution close to the C solution:
result = bitor(bitshift(bitand(a,3), 8), b);
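For reference, the C-level operation all of these Octave expressions are emulating is just a mask, a shift, and an OR; here is a small C sketch (illustration only) showing that the arithmetic form gives the same answer:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0xAB, b = 0xCD;

    unsigned bitwise    = ((a & 0x03) << 8) | b;   /* (a[2:1] << 8) | b */
    unsigned arithmetic = (a % 4) * 256 + b;       /* mod(a,4)*256 + b  */

    printf("%u %u\n", bitwise, arithmetic);        /* both print 973 (0x3CD) */
    return 0;
}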
I think it is important to explain exactly what "(a[2:1] << 8) | b" is doing.
In assembly, referencing individual bits is not a single operation. Assume all operations take the exact same time, and the "efficient" a[2:1] starts looking extremely inefficient.
The convenience statement actually does (a & 0x03).
If your compiler actually converts a uint8 to a uint16 based on how much it was shifted, this is not a 'free' operation, per se. Effectively, what your compiler will do is first clear the "memory" to the size of uint16 and then copy "a" into the location. This requires an extra step (clearing the "memory" (register)) that wouldn't normally be needed.
This means your statement actually is (uint16(a & 0x03) << 8) | uint16(b)
Now yes, because you're doing a power-of-two shift, you could just move a into AH, move b into AL, AND AH with 0x03, and move it all out, but that's a compiler optimization and not what your C code said to do.
The point is that directly translating that statement into matlab yields
bitor(bitshift(uint16(bitand(a,3)),8),uint16(b))
But, it should be noted that while it is not as TERSE as (a[2:1] << 8) | b, the number of "high level operations" is the same.
Note that all scripting languages are going to be very slow upon initiating each instruction, but will complete said instruction rapidly. The terse nature of Python isn't because "terse is better" but to create simple structures that the language can recognize so it can easily go into vectorized operations mode and start executing code very quickly.
The point here is that you have an "overhead" cost for calling bitand; but when operating on an array it will use SSE and that "overhead" is only paid once. The JIT (just in time) compiler, which optimizes script languages by reducing overhead calls and creating temporary machine code for currently executing sections of code MAY be able to recognize that the type checks for a chain of bitwise operations need only occur on the initial inputs, hence further reducing runtime.
Very high level languages are quite different (and frustrating) from high level languages such as C. You are giving up a large amount of control over code execution for ease of code production; whether matlab actually has implemented uint8 or if it is actually using a double and truncating it, you do not know. A bitwise operation on a native uint8 is extremely fast, but to convert from float to uint8, perform bitwise operation, and convert back is slow. (Historically, Matlab used doubles for everything and only rounded according to what 'type' you specified)
Even now, Octave 4.0.3 has a compiled bitshift function that, for bitshift(ones('uint32'),-32), results in it wrapping back to 1. BRILLIANT! VHLLs place you at the mercy of the language; it isn't about how terse or how verbose you write the code, it's how the blasted language decides to interpret it and execute machine-level code. So instead of shifting, uint32(floor(ones / (2^32))) is actually FASTER and more accurate.
I have a 16-bit WORD and I want to read the status of a specific bit or several bits.
I've tried a method that divides the word by the bit that I want, converts the result to two values, an integer and a real, and compares the two. If they are not equal, then it equates to false. This appears to only work if I am looking for a bit that is the last 'TRUE' bit in the word. If there are any successive TRUE bits, it fails. Perhaps I just haven't done it right. I don't have the ability to use code, just basic math, boolean operations, and type conversion. Any ideas? I hope this isn't a dumb question, but I have a feeling it is.
eg:
WORD 0010000100100100 = 9348
I want to know the value of bit 2. How can I determine it from 9348?
There are many ways, depending on what operations you can use. It appears you don't have much to choose from. But this should work, using just integer division and multiplication, and a test for equality.
(pseudocode):
x = 9348 (binary 0010000100100100, bit 0 = 0, bit 1 = 0, bit 2 = 1, ...)
x = x / 4 (now x is 1000010010010000)
y = (x / 2) * 2 (y is 0000010010010000)
if (x == y) {
(bit 2 must have been 0)
} else {
(bit 2 must have been 1)
}
Every time you divide by 2, you move the bits to the left one position (in your representation, which writes bit 0 first). Every time you multiply by 2, you move the bits to the right one position. Odd numbers will have 1 in the least significant position. Even numbers will have 0 in the least significant position. If you divide an odd number by 2 in integer math, and then multiply by 2, you lose the odd bit if there was one. So the idea above is to first move the bit you want to know about into the least significant position. Then, divide by 2 and then multiply by two. If the result is the same as what you had before, then there must have been a 0 in the bit you care about. If the result is not the same as what you had before, then there must have been a 1 in the bit you care about.
Having explained the idea, we can simplify to
((x / 8) * 2) <> (x / 4)
which will resolve to true if the bit was set, and false if the bit was not set.
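In C terms the same divide/multiply trick (no bitwise operators at all) looks like this; a sketch only (your system apparently can't run code, but it shows the arithmetic), numbering the bits from 0 at the least significant end as in the question:

#include <stdio.h>

/* Test bit n of x using only integer division, multiplication and comparison:
   (x / 2^(n+1)) * 2 differs from x / 2^n exactly when bit n is set. */
static int bit_is_set(unsigned x, unsigned n)
{
    unsigned pow_n = 1;
    for (unsigned i = 0; i < n; i++)    /* 2^n without using shifts */
        pow_n *= 2;
    return (x / (pow_n * 2)) * 2 != (x / pow_n);
}

int main(void) {
    printf("%d\n", bit_is_set(9348, 2));   /* 1: bit 2 of 9348 is set   */
    printf("%d\n", bit_is_set(9348, 3));   /* 0: bit 3 of 9348 is clear */
    return 0;
}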
AND the word with a mask [1].
In your example, you're interested in the second bit, so the mask (in binary) is
00000010. (Which is 2 in decimal.)
In binary, your word 9348 is 0010010010000100 [2]
0010010010000100 (your word)
AND 0000000000000010 (mask)
----------------
0000000000000000 (result of ANDing your word and the mask)
Because the value is equal to zero, the bit is not set. If it were different to zero, the bit was set.
This technique works for extracting one bit at a time. You can however use it repeatedly with different masks if you're interested in extracting multiple bits.
[1] For more information on masking techniques see http://en.wikipedia.org/wiki/Mask_(computing)
[2] See http://www.binaryhexconverter.com/decimal-to-binary-converter
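If it helps, here is what "using it repeatedly" can collapse to when you want a whole field at once: a C sketch (purely illustrative) that extracts several adjacent bits with one mask and one shift:

#include <stdio.h>

/* Extract `width` bits of `word` starting at bit `lo` (bit 0 = least significant). */
static unsigned extract_bits(unsigned word, unsigned lo, unsigned width)
{
    unsigned mask = (1u << width) - 1u;    /* e.g. width 3 -> binary 111 */
    return (word >> lo) & mask;
}

int main(void) {
    printf("%u\n", extract_bits(9348, 1, 1));   /* 0: the single bit of weight 2  */
    printf("%u\n", extract_bits(9348, 2, 6));   /* 33: bits 2..7 as one value     */
    return 0;
}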
The nth bit is equal to (the word divided by 2^n, using integer division) mod 2.
I think you'll have to test each bit, 0 through 15 inclusive.
You could try 9348 AND 4 (the equivalent of 1<<2, where 2 is the index of the bit you wanted)
9348 AND 4
should give 4 if bit is set, 0 if not.
So here is what I have come up with: 3 solutions. One is Hatchet's as proposed above, and his answer helped me immensely with actually understanding HOW this works, which is of utmost importance to me! The proposed AND masking solutions could have worked if my system supported bitwise operators, but it apparently does not.
Original technique:
( ( ( INT ( TAG / BIT ) ) / 2 ) - ( INT ( ( INT ( TAG / BIT ) ) / 2 ) ) <> 0 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT, then REAL division by 2. In the second part, integer division is performed on TAG/BIT, then integer division again by 2. The difference between these two results is compared to 0. If the difference is not 0, then the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5-1168 <> 0, so the result is TRUE.
My modified technique:
( INT ( TAG / BIT ) / 2 ) <> ( INT ( INT ( TAG / BIT ) / 2 ) )
Explanation:
effectively the same as above, but instead of subtracting the two results and comparing them to 0, I am just comparing the two results themselves. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5 <> 1168, so the result is TRUE.
Hatchet's technique as it applies to my system:
( INT ( TAG / BIT )) <> ( INT ( INT ( TAG / BIT ) / 2 ) * 2 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT. In the second part, integer division is performed on TAG/BIT, then integer division again by 2, then multiplication by 2. The two results are compared. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
eg: 9348/4 = 2337. Then 2337/2 = 1168 w/ integer division. Then 1168x2=2336. 2337 <> 2336 so the result is TRUE. As Hatchet stated, this method 'drops the odd bit'.
Note - 9348/4 = 2337 w/ both REAL and integer division, but it is important that these parts of the formula use integer division and not REAL division (12164/32 = 380 w/ integer division and 380.125 w/ REAL division)
I feel it important to note for any future readers that the BIT value in the equations above is not the bit number, but the actual value of the resulting decimal if the bit in the desired position was the only TRUE bit in the binary string (bit 2 = 4 (2^2), bit 6 = 64 (2^6))
This explanation may be a bit too verbose for some, but may be perfect for others :)
Please feel free to comment/critique/correct me if necessary!
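To double-check that the three variants agree, here's a throwaway C sketch (with INT() modelled as floor() on doubles, since the formulas above mix real and integer division):

#include <math.h>
#include <stdio.h>

int main(void) {
    double TAG = 9348, BIT = 4;   /* BIT is the value of the bit position: bit 2 -> 2^2 = 4 */

    int original = ( floor(TAG / BIT) / 2.0 - floor(floor(TAG / BIT) / 2.0) ) != 0;
    int modified = ( floor(TAG / BIT) / 2.0 ) != floor(floor(TAG / BIT) / 2.0);
    int hatchet  =   floor(TAG / BIT)         != floor(floor(TAG / BIT) / 2.0) * 2.0;

    printf("%d %d %d\n", original, modified, hatchet);   /* 1 1 1: bit 2 of 9348 is TRUE */
    return 0;
}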
I just needed to resolve an integer status code to a bit state in order to interface with some hardware. Here's a method that works for me:
private bool resolveBitState(int value, int bitNumber)
{
    return (value & (1 << (bitNumber - 1))) != 0;
}
I like it, because it's non-iterative, requires no cast operations and essentially translates directly to machine code operations like Shift, And and Comparison, which probably means it's really optimal.
To explain in a little more detail, I'm comparing the bitwise value to a mask for the bit I am interested in (value & mask) using an AND operation. If the bitwise AND operation result is zero, then the bit is not set (return false). If the AND operation result is not zero, then the bit is set (return true). The result of the AND operation is either zero or the value of the bit (1, 2, 4, 8, 16, 32...). Hence the boolean evaluation comparing the AND operation result and 0. The mask is created by taking the number 1 and shifting it left (bitwise), by the appropriate number of binary places (1 << n). The number of places is the number of the bit targeted minus 1. If it's bit #1, I want to shift the 1 left by 0 and if it's #2, I want to shift it left 1 place, etc.
I'm surprised no one has rated my solution. I think it's the most logical and succinct... and it works.