I noticed that zlib's adler32 function does not always return the original seed when passed an empty string. For example:
adler32(0xFFFFFFFF,          // seed
        (const Bytef *) "",  // buffer
        0)                   // length
returns 0xE000E. I would think it should return 0xFFFFFFFF instead.
For most other values, it does return the original seed. However, for 1965855 of the values between 0 and 0xFFFFFFFF, adler32 changes the seed when the input string is empty.
Is this a bug or quirky behavior of the implementation, or is the range of the Adler-32 function in fact a proper subset of [0, 0xFFFFFFFF]?
Note that when the pointer is NULL, adler32 always returns 1 (the initial seed). This is documented behavior.
The first argument is not a "seed". It is the previous adler32 value that is being appended to.
Yes, the range of adler32 is not all 32-bit values. 0xffffffff is not a valid adler32 value. The only valid Adler-32 values are those in which the upper and lower 16-bit halves of the 32-bit value, interpreted as integers, are both less than 65521. When you call adler32() with a zero length, it returns the first argument with each 16-bit half reduced modulo 65521.
1965855 is 65521*15 + 15*65521 + 15*15: the number of 32-bit values with a valid upper half and an invalid lower half, plus the number with an invalid upper half and a valid lower half, plus the number with both halves invalid. (Each 16-bit half has 65536 - 65521 = 15 invalid values.)
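As a sanity check, here is a small Python sketch; Python's zlib.adler32 wraps the same C function, so it should show the same behavior on a stock zlib:

import zlib

BASE = 65521            # largest prime below 2**16
INVALID = 65536 - BASE  # the 15 half-values 0xFFF1..0xFFFF are out of range

# Count of seeds that are not fixed points of adler32 on empty input.
print(BASE * INVALID + INVALID * BASE + INVALID * INVALID)  # 1965855

print(hex(zlib.adler32(b"", 0xFFFFFFFF)))  # 0xe000e: both halves reduce to 14
print(hex(zlib.adler32(b"", 0x00020001)))  # 0x20001: already valid, unchanged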
As the title states, lldb reports the value of UInt.max to be a UInt of -1, which seems highly illogical. Considering that let uint: UInt = -1 doesn't even compile, how is this even possible? I don't see any way to have a negative value of UInt at runtime because the initializer will crash if given a negative value. I want to know the actual maximum value of UInt.
The Int value of -1 and the UInt value UInt.max have the same bit representation in memory.
You can see that if you do:
let i = Int(bitPattern: UInt.max) // i == -1
and in the opposite direction:
if UInt(bitPattern: Int(-1)) == UInt.max {
    print("same")
}
Output:
same
The debugger is incorrectly displaying UInt.max as a signed Int. They have the same bit representation in memory (0xffffffffffffffff on a 64-bit system such as iPhone 6 and 0xffffffff on a 32-bit system such as iPhone 5), and the debugger apparently chooses to show that value as an Int.
You can see the same issue if you do:
print(String(format: "%d", UInt.max)) // prints "-1"
It doesn't mean UInt.max is -1, just that both have the same representation in memory.
To see the maximum value of UInt, do the following in an app or on a Swift Playground:
print(UInt.max)
This will print 18446744073709551615 on a 64-bit system (such as a Macintosh or iPhone 6) and 4294967295 on a 32-bit system (such as an iPhone 5).
In lldb:
(lldb) p String(UInt.max)
(String) $R0 = "18446744073709551615"
(lldb)
This sounds like an instance of the same bit pattern being interpreted under two different representations: two's complement (signed) and unsigned.
In the unsigned world, a binary number is just a binary number, so the more digits that are 1, the bigger the number, since there is no need to encode the sign somehow. To cover negative values as well, the two's-complement encoding scheme stores non-negative values as normal, provided the most significant bit is not a 1. If the most significant bit is a 1, the bits are reinterpreted as a negative number, as described at https://en.wikipedia.org/wiki/Two%27s_complement.
As shown on Wikipedia, the Two's Complement representation of -1 has all bits set to 1, or the maximal unsigned value.
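A quick Python illustration of that reinterpretation (Python integers are unbounded, so the fixed width is applied by masking):

BITS = 64                  # width of UInt on a 64-bit system
mask = (1 << BITS) - 1     # all ones

print((-1) & mask)         # 18446744073709551615, i.e. UInt.max
print(hex((-1) & mask))    # 0xffffffffffffffff
print((-1) & mask == 2 ** BITS - 1)  # True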
I have a 16-bit WORD and I want to read the status of a specific bit or several bits.
I've tried a method that divides the word by the value of the bit that I want, converts the result to two values, an integer and a real, and compares the two. If they are not equal, it equates to false. This appears to only work if the bit I am looking for is the last 'TRUE' bit in the word; if there are any further TRUE bits above it, it fails. Perhaps I just haven't done it right. I don't have the ability to use code, just basic math, boolean operations, and type conversion. Any ideas? I hope this isn't a dumb question, but I have a feeling it is.
e.g.:
WORD 0010000100100100 = 9348 (bits written least significant first, so bit 0 is the leftmost digit)
I want to know the value of bit 2. How can I determine it from 9348?
There are many ways, depending on what operations you can use. It appears you don't have much to choose from. But this should work, using just integer division and multiplication, and a test for equality.
(pseudocode):
x = 9348 (binary 0010000100100100, bit 0 = 0, bit 1 = 0, bit 2 = 1, ...)
x = x / 4 (now x is 1000010010010000)
y = (x / 2) * 2 (y is 0000010010010000)
if (x == y) {
    (bit 2 must have been 0)
} else {
    (bit 2 must have been 1)
}
Every time you divide by 2, you move the bits one position to the left (in your notation, which lists bit 0 first). Every time you multiply by 2, you move the bits one position to the right. Odd numbers will have 1 in the least significant position; even numbers will have 0 there. If you divide an odd number by 2 in integer math and then multiply by 2, you lose the odd bit if there was one. So the idea above is to first move the bit you want to know about into the least significant position. Then, divide by 2 and multiply by 2. If the result is the same as what you had before, then there must have been a 0 in the bit you care about. If the result is not the same as what you had before, then there must have been a 1 in the bit you care about.
Having explained the idea, we can simplify to
((x / 8) * 2) <> (x / 4)
which will resolve to true if the bit was set, and false if the bit was not set.
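Here is that check sketched in Python, using only integer division, multiplication, and an inequality test (<> in the pseudocode means 'not equal'):

x = 9348

# Shift bit 2 down next to the ones position, then drop it with a
# divide/multiply round trip; if the two values differ, the bit was 1.
bit2_was_set = (x // 8) * 2 != x // 4
print(bit2_was_set)  # True: bit 2 of 9348 is 1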
AND the word with a mask [1].
In your example, you're interested in bit 2 (counting the least significant bit as bit 0), so the mask (in binary) is
00000100. (Which is 4 in decimal.)
In binary, your word 9348 is 0010010010000100 [2]

    0010010010000100 (your word)
AND 0000000000000100 (mask)
    ----------------
    0000000000000100 (result of ANDing your word and the mask)

Because the result is different from zero, the bit is set. If it were equal to zero, the bit would not be set.
This technique works for extracting one bit at a time. You can however use it repeatedly with different masks if you're interested in extracting multiple bits.
[1] For more information on masking techniques see http://en.wikipedia.org/wiki/Mask_(computing)
[2] See http://www.binaryhexconverter.com/decimal-to-binary-converter
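For readers whose systems do support bitwise operators, the same masking technique is a one-liner; a Python sketch:

word = 9348  # 0b0010010010000100

def bit_is_set(value, n):
    # Test bit n (zero-indexed, least significant bit = bit 0) with a mask.
    return (value & (1 << n)) != 0

print(bit_is_set(word, 2))  # True:  9348 & 0b100 == 4
print(bit_is_set(word, 1))  # False: 9348 & 0b10  == 0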
The nth bit is equal to (word / 2^n) mod 2, using integer division. For example, bit 2 of 9348 is (9348 / 4) mod 2 = 2337 mod 2 = 1.
I think you'll have to test each bit, 0 through 15 inclusive.
You could try 9348 AND 4 (equivalent to 1 << 2, where 2 is the index of the bit you want):
9348 AND 4
should give 4 if the bit is set, 0 if not.
So here is what I have come up with: 3 solutions. One is Hatchet's, as proposed above, and his answer helped me immensely with actually understanding HOW this works, which is of utmost importance to me! The proposed AND-masking solutions would have worked if my system supported bitwise operators, but it apparently does not.
Original technique:
( ( ( INT ( TAG / BIT ) ) / 2 ) - ( INT ( ( INT ( TAG / BIT ) ) / 2 ) ) <> 0 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT, then REAL division by 2. In the second part, integer division is performed on TAG/BIT, then integer division again by 2. The difference between these two results is compared to 0. If the difference is not 0, then the formula resolves to TRUE, which means the specified bit is also TRUE.
e.g.: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5 - 1168 <> 0, so the result is TRUE.
My modified technique:
( INT ( TAG / BIT ) / 2 ) <> ( INT ( INT ( TAG / BIT ) / 2 ) )
Explanation:
Effectively the same as above, but instead of subtracting the two results and comparing the difference to 0, I am just comparing the two results themselves. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
e.g.: 9348/4 = 2337 w/ integer division. Then 2337/2 = 1168.5 w/ REAL division but 1168 w/ integer division. 1168.5 <> 1168, so the result is TRUE.
Hatchet's technique as it applies to my system:
( INT ( TAG / BIT )) <> ( INT ( INT ( TAG / BIT ) / 2 ) * 2 )
Explanation:
In the first part of the equation, integer division is performed on TAG/BIT. In the second part, integer division is performed on TAG/BIT, then integer division again by 2, then multiplication by 2. The two results are compared. If they are not equal, the formula resolves to TRUE, which means the specified bit is also TRUE.
e.g.: 9348/4 = 2337. Then 2337/2 = 1168 w/ integer division. Then 1168 x 2 = 2336. 2337 <> 2336, so the result is TRUE. As Hatchet stated, this method 'drops the odd bit'.
Note - 9348/4 = 2337 w/ both REAL and integer division, but it is important that these parts of the formula use integer division and not REAL division (12164/32 = 380 w/ integer division and 380.125 w/ REAL division)
I feel it important to note for any future readers that the BIT value in the equations above is not the bit number, but the decimal value the word would have if the bit in the desired position were the only TRUE bit in it (bit 2 = 4 (2^2), bit 6 = 64 (2^6)).
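For reference, the modified technique can be modeled in Python (a sketch only; int() stands in for the INT() truncation, and TAG/BIT keep the meanings above):

def bit_true(tag, bit):
    # ( INT ( TAG / BIT ) / 2 ) <> ( INT ( INT ( TAG / BIT ) / 2 ) )
    return (tag // bit) / 2 != int((tag // bit) / 2)

print(bit_true(9348, 4))  # True:  bit 2 (BIT = 2**2) is set
print(bit_true(9348, 2))  # False: bit 1 (BIT = 2**1) is clear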
This explanation may be a bit too verbose for some, but may be perfect for others :)
Please feel free to comment/critique/correct me if necessary!
I just needed to resolve an integer status code to a bit state in order to interface with some hardware. Here's a method that works for me:
private bool resolveBitState(int value, int bitNumber)
{
    // bitNumber is 1-based: bit #1 is the least significant bit.
    return (value & (1 << (bitNumber - 1))) != 0;
}
I like it, because it's non-iterative, requires no cast operations and essentially translates directly to machine code operations like Shift, And and Comparison, which probably means it's really optimal.
To explain in a little more detail, I'm comparing the value to a mask for the bit I am interested in (value & mask) using an AND operation. If the bitwise AND result is zero, then the bit is not set (return false). If the AND result is not zero, then the bit is set (return true). The result of the AND operation is either zero or the value of the bit (1, 2, 4, 8, 16, 32, ...); hence the boolean comparison of the AND result against 0. The mask is created by taking the number 1 and shifting it left (bitwise) by the appropriate number of binary places (1 << n). The number of places is the number of the targeted bit minus 1: if it's bit #1, I want to shift the 1 left by 0 places, and if it's bit #2, I want to shift it left 1 place, etc.
I'm surprised no one rates my solution. I think it's the most logical and succinct... and it works.
Do I need to null-terminate a basic float array in objective C?
I have a basic float array:
float data[] = {0.5, 0.1, 1};
and when I do a sizeof(data) I get "12".
You don't need to null-terminate it to create one, no. And in general a method taking a float[] would also take a size parameter to indicate how many elements there are.
You get sizeof(data) = 12 because a float is 4 bytes on your architecture and there are 3 of them.
sizeof returns the amount of memory (in bytes) occupied by the parameter. In your case, every float occupies 4 bytes, thus 3*4 = 12.
As Hot Licks said in the comment of mattjgalloway's answer, there is not a standard way to retrieve the number of elements in a C array.
Using size = sizeof(data) / sizeof(float) works, but you must be careful with this approach: if you pass the array to a function as a parameter, it decays to a pointer, and sizeof then reports the size of the pointer rather than the array.
A common approach is to store the size in a variable and use it as upper bound in your for loop (often functions that expect an array have an additional parameter to get the size of the array).
Using a null-terminated array is useful because you can iterate through the array and stop when the i-th element is null (that's the approach of C string functions like strcmp).
Values of type float can never be null, so it's not possible to terminate an array of type float with null. For one thing, variables of any primitive type always have a numeric value, and the various null constants (in Objective-C: nil, Nil, NULL, and '\0') have the literal value 0, which is obviously a valid value in the range of a float.
So even if you can compile the following line without a warning,
float x = NULL;
...it would have the same consequence as this:
float x = 0;
Inserting a null constant in an array of type float would be indistinguishable from inserting 0.0 or any other constant zero value.
I have a very long integer. The integer is represented by an array of unsigned chars.
Example: the integer 1234 in base 10 is represented in the array as [4,3,2,1] (least significant digit first); in base 8 it is [2,2,3,2], and in base 16 it is [2,13,4].
Now I want to convert my integer with base n to another integer with base m. In my pursuit of an answer I came across Wallar's algorithm, originally from here.
from math import *

def baseExpansion(n,c,b):
    j = 0
    base10 = sum([pow(c,len(n)-k-1)*n[k] for k in range(0,len(n))])
    while floor(base10/pow(b,j)) != 0: j = j+1
    return [floor(base10/pow(b,j-p)) % b for p in range(1,j+1)]
At first I thought this was my answer, but unfortunately it is not. The problem I have is that the algorithm computes the whole value as a single sum. In my case this is a problem because the variable base10 is an unsigned 32-bit integer. Therefore, when my integer, represented as an array, has more than 10 digits, it cannot convert the number anymore. Does anyone have a solution?
Here's the school-book algorithm for doing what you're trying. You start with a representation for zero and call it a running total. Then, for each digit of the number to be converted, starting with the most significant and going to the least significant: 1) multiply the running total by the base of the source number, and 2) add the digit to the running total. Now all you need is algorithms to do the multiplication and addition (and you can actually do both at once), keeping the running total as an array of digits in the output base. Here's how: 1) set the current input digit to a variable, call it "carry"; 2) for each digit in your new number, starting with the least significant and going to the most significant: 2a) set carry to the current digit in the new number times the input base plus carry, 2b) set the current digit to carry mod the output base, 2c) set carry to carry divided by the output base; 3) if carry is still nonzero after the last digit, append new digits (repeating 2b and 2c) until it is zero. And that should do it. There is an implementation of what you are trying to do somewhere here: http://www.cis.ksu.edu/~howell/calculator/comparison.html
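Here is a minimal Python sketch of that school-book method. It keeps the running total as a digit array throughout, so no fixed-width intermediate like base10 is ever formed; digit arrays are least-significant first, matching the question's examples:

def convert_base(digits, src_base, dst_base):
    # digits are least significant first, as in the question: 1234 -> [4, 3, 2, 1]
    result = [0]                      # running total, least significant first
    for d in reversed(digits):        # most significant input digit first
        carry = d
        for i in range(len(result)):  # result = result * src_base + d
            carry += result[i] * src_base
            result[i] = carry % dst_base
            carry //= dst_base
        while carry:                  # append leftover high-order digits
            result.append(carry % dst_base)
            carry //= dst_base
    return result

print(convert_base([4, 3, 2, 1], 10, 16))  # [2, 13, 4] -> 0x4D2 == 1234
print(convert_base([4, 3, 2, 1], 10, 8))   # [2, 2, 3, 2] -> 0o2322 == 1234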
rand(n) returns a random floating-point number greater than or equal to 0 and less than n. Will rand work as expected, with regard to "randomness", for all arguments up to the integer limit on my platform?
This is going to depend on your randbits value:
rand calls your system's random number generator (or whichever one was compiled into your copy of Perl). For this discussion, I'll call that generator RAND to distinguish it from rand, Perl's function. RAND produces an integer from 0 to 2**randbits - 1, inclusive, where randbits is a small integer. To see what it is in your perl, use the command perl -V:randbits. Common values are 15, 16, or 31.
When you call rand with an argument arg, perl takes that value as an integer and calculates this value:
rand(arg) = (arg * RAND) / 2**randbits
This value will always fall in the range required.
0 <= rand(arg) < arg
But as arg becomes large in comparison to 2**randbits, things become problematic. Let's imagine a machine where randbits = 15, so RAND ranges over 0..32767. That is, whenever we call RAND, we get one of 32768 possible values. Therefore, when we call rand(arg), we get one of 32768 possible values.
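To make that concrete, here is a small Python sketch simulating a randbits = 15 machine and applying the quoted formula to every possible RAND value:

RANDBITS = 15
arg = 1_000_000

# rand(arg) = arg * RAND / 2**randbits yields one output per RAND value.
distinct = {arg * r / 2 ** RANDBITS for r in range(2 ** RANDBITS)}
print(len(distinct))  # 32768 -- far fewer than the 1000000 integers in [0, arg)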
It depends on the number of bits used by your system's (pseudo)random number generator. You can find this value via
perl -V:randbits
or within a program via
use Config;
my $randbits = $Config{randbits};
rand can generate 2^randbits distinct random numbers. While you can generate numbers larger than 2^randbits, you can't generate all of the integer values in the range [0, N) when N > 2^randbits.
Values of N which aren't a power of two can also be problematic, as the distribution of (integer truncated) random values won't quite be flat. Some values will be slightly over-represented, others slightly under-represented.
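That bias is easy to see with a small Python simulation, using a deliberately tiny N:

from collections import Counter

RANDBITS = 15
N = 3  # not a power of two

# int(rand(N)) maps each of the 2**15 possible RAND values to 0, 1, or 2.
counts = Counter(r * N // 2 ** RANDBITS for r in range(2 ** RANDBITS))
print(counts)  # Counter({0: 10923, 1: 10923, 2: 10922}) -- not quite flat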
It's worth noting that randbits is a paltry 15 on Windows. This means you can only get 32768 (2**15) distinct values. You can improve the situation by making multiple calls to rand and combining the values:
use Config;
use constant RANDBITS => $Config{randbits};
use constant RAND_MAX => 2**RANDBITS;

sub double_rand {
    my $max = shift || 1;
    my $iv  = int rand(RAND_MAX) << RANDBITS
            | int rand(RAND_MAX);
    return $max * ($iv / 2**(2*RANDBITS));
}
Assuming randbits = 15, double_rand mimics randbits = 30, providing 1073741824 (2**30) possible distinct values. This alleviates (but can never eliminate) both of the problems mentioned above.
We are talking about big random integers and whether it is possible to get them. It should be noted that the concatenation of two random integers is also a random integer. So if your system, for any reason, cannot go beyond 999999999999, then just write
$bigrand = int(rand(999999999999)).int(rand(999999999999));
and you'll get a random integer of (maximally) twice the length.
(Admittedly, this is not a numeric answer to the question "how big can a rand number be", but rather the answer "as big as you want: just concatenate small numbers".)