I am having a hard time getting a valid value out of the HR characteristics. I am clearly not handling the values properly in Dart.
Example Data:
List<int> value = [22, 56, 55, 4, 7, 3];
Flags Field:
I convert the first item in the main byte array to binary to get the flags
22 = 10110 (as binary)
this leads me to believe that it is U16 (bit[0] is == 1)
HR Value:
Because it is 16 bit, I am trying to get the bytes at indexes 1 and 2. I then buffer them into a ByteData and convert them to a Uint16 with the endianness set to little. This gives me a value of 14136. Clearly I am missing something fundamental about how this is supposed to work.
Any help in clearing up what I am not understanding about how to process the 16 bit BLE values would be much appreciated.
Thank you.
import 'dart:typed_data';

/*
Constructor - constructs the heart rate value from a BLE message
*/
HeartRate(List<int> values) {
  var flags = values[0];
  var s = flags.toRadixString(2);
  List<String> flagsArray = s.split("");
  int offset = 0;
  // Determine whether it is UINT16 or not
  if (flagsArray[0] == "0") {
    // Since it is Uint8 I will only get the first value
    var hr = values[1];
    print(hr);
  } else {
    // Since UINT16 is two bytes I need to combine them.
    // Create a buffer with the first two bytes after the flags.
    var buffer = Uint8List.fromList(values.sublist(1, 3)).buffer;
    var hrBuffer = ByteData.view(buffer);
    var hr = hrBuffer.getUint16(0, Endian.little);
    print(hr);
  }
}
Your updated data looks much better. Here's how to decode it, and the process you'd use to figure this out yourself from scratch.
Determine the format
The Bluetooth site has been reorganized recently (~2020), and in particular they got rid of some of the document viewers, which makes things much harder to find and read IMO. All the documentation is in the Heart Rate Service (HRS) document, linked from the main GATT page, but for just parsing the format, the best source I know of is the XML for org.bluetooth.characteristic.heart_rate_measurement. (Since the reorganization, I don't know how you can find this page without searching for it. It doesn't seem to be linked anymore.)
Byte 0 - Flags: 22 (0001 0110)
Bits are numbered from LSB (0) to MSB (7).
Bit 0 - Heart Rate Value Format: 0 => UINT8 beats per minute
Bits 1-2 - Sensor Contact Status: 11 => Supported and detected
Bit 3 - Energy Expended Status: 0 => Not present
Bit 4 - RR-Interval: 1 => One or more values are present
The meaning of RR-intervals is explained in the HRS document, linked above. It sounds like you just want the heart rate value, so I won't go into them here.
Byte 1 - UINT8 BPM: 56
Since Bit 0 of flags was 0, this is the beats per minute. 56.
Bytes 2-5 - UINT16 RR Intervals: 55, 4, 7, 3
You probably don't care about these, but there are two UINT16 values here (there can be an arbitrary number of RR-Interval values). BLE is always little-endian, so [55, 4] is 1,079 (55 + (4<<8)), and [7, 3] is 775 (7 + (3<<8)).
I believe the docs are a little confusing on this one. The XML suggests that these values are in seconds, but the comments say "Resolution of 1/1024 second." The normal way to express this would be <BinaryExponent>-10</BinaryExponent>, and I'm certain that's what they meant. So these would be:
RR0: 1.05s (1079/1024)
RR1: 0.76s (775/1024)
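Putting all of that together, here is a minimal Dart sketch of the whole decode for this layout (the function name is my own; the flag masks follow the HRS spec above):

import 'dart:typed_data';

void decodeHeartRate(List<int> value) {
  final data = ByteData.view(Uint8List.fromList(value).buffer);
  final flags = data.getUint8(0);
  var offset = 1;
  // Bit 0: 0 => UINT8 BPM, 1 => UINT16 BPM (little-endian).
  int bpm;
  if ((flags & 0x01) == 0) {
    bpm = data.getUint8(offset);
    offset += 1;
  } else {
    bpm = data.getUint16(offset, Endian.little);
    offset += 2;
  }
  // Bit 3: Energy Expended present (UINT16); skip it if so.
  if ((flags & 0x08) != 0) offset += 2;
  // Bit 4: RR-Intervals present; each one is a UINT16 in units of 1/1024 s.
  final rrs = <double>[];
  if ((flags & 0x10) != 0) {
    while (offset + 1 < value.length) {
      rrs.add(data.getUint16(offset, Endian.little) / 1024.0);
      offset += 2;
    }
  }
  print('BPM: $bpm, RR intervals: $rrs');
}

// decodeHeartRate([22, 56, 55, 4, 7, 3]); // BPM: 56, RR intervals: [1.05..., 0.75...]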
Related
I have to read and write some values to a Bike Smart trainer with BLE (Bluetooth Low Energy) used with Flutter. When I try to read the values from the GATT characteristic org.bluetooth.characteristic.supported_power_range (found on the bluetooth.com site https://www.bluetooth.com/specifications/gatt/characteristics/ ) I get the return value of an int list [0,0,200,0,1,0].
The GATT characteristic says that there are 3 sint16 fields for min., max. and step size watts (power).
The byte transmission order also says that the least significant octet is transmitted first.
My guess is that the 3 parameters are returned in an int array with an 8-bit value each. But I can't interpret the 200 as, say, the maximum power setting, because the smart trainer should provide a resistance of max. 2300 W (ELITE Drivo https://www.elite-it.com/de/produkte/home-trainer/rollentrainer-interaktive/drivo)
The Output results from this code snippet:
device.readCharacteristic(savedCharacteristics[Characteristics.SUPPORTED_POWER_RANGE]).then((List<int> result) {
  result.forEach((i) {
    print(i.toString());
  });
});
// result: [0,0,200,0,1,0]
Maybe someone of you knows how to interpret the binary/hex/dec values of the flutter_blue characteristic output. Or some hints would be great.
Edit
For future readers, I got the solution. I'm a bit ashamed, because I read the wrong characteristic.
The return value [0,0,200,0,1,0] was for the supported resistance level (which is 20%; the 200 represents the 20% with a resolution of 0.1, as described in the GATT spec).
I also got a return value for the supported power level, which was [0,0,160,15,1,0]. Now the solution for how to read the two bytes of max power level: you get the 160,15. The spec says LSO (least significant octet) first; don't confuse it with LSB (least significant bit) first. Because of that you have to read it as 15,160. Now do the math with the first byte: 15*256 + 160 = 4000, and that's the correct maximum supported power of the trainer, as in the datasheet.
I hope I help someone with that. Thanks for the two replies; they are also correct and helped me find my mistake.
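For reference, here is a minimal Dart sketch of that decode (reading the six bytes as three little-endian sint16 fields, per the characteristic definition; the variable names are my own):

import 'dart:typed_data';

void main() {
  final result = [0, 0, 160, 15, 1, 0]; // Supported Power Range
  final data = ByteData.view(Uint8List.fromList(result).buffer);
  final min = data.getInt16(0, Endian.little);  // 0 W
  final max = data.getInt16(2, Endian.little);  // 160 + 15*256 = 4000 W
  final step = data.getInt16(4, Endian.little); // 1 W
  print('min: $min W, max: $max W, step: $step W');
}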
I had the same problem connecting to a Polar H10 to recover HR and RR intervals. It might not be 100% the same, but I think my case can guide you to solve yours.
I am receiving the same kind of list as you; here are two examples:
[0,60]
[16,61,524,2]
Looking at the specs of the GATT Bluetooth Heart Rate Service, I figured out that each element of the retrieved list matches 1 byte of the data transmitted by the characteristic you are subscribed to. For this service, the first byte, i.e., the first element of the list, has some flags to point out whether there is an RR value after the HR value (16) or not (0). These are just two cases among the many different ones that can occur depending on the flag values, but I think it shows how important this first byte can be.
After that, the HR value is coded as an unsigned integer with 8 bits (UINT8); that is, the HR values match the second element of the lists shown before. However, the RR interval is coded as an unsigned integer with 16 bits (UINT16), which complicates the translation of those two last elements of list #2 [16,61,524,2], because we should use 16 bits to get this value and the bytes are not in the correct order.
This is when we import the library dart:typed_data
import 'dart:typed_data';
...
_parseHr(List<int> value) {
  // Re-order the bytes so that the UINT16 values can be read big-endian
  // (ByteData.getUint16 defaults to Endian.big): keep the flags and HR
  // bytes in place, then swap each following pair of bytes.
  List<int> valueSorted = [];
  valueSorted.insert(0, value[0]);
  valueSorted.insert(1, value[1]);
  for (var i = 2; i + 1 < value.length; i += 2) {
    valueSorted.add(value[i + 1]);
    valueSorted.add(value[i]);
  }
  // Get flags directly from the list
  var flags = valueSorted[0];
  // Get the ByteBuffer view of the data to decode it later
  var buffer = Uint8List.fromList(valueSorted).buffer; // Buffer bytes from list
  if (flags == 0) {
    // HR only
    var hrBuffer = ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Decode as UINT8
    print(hr);
  }
  if (flags == 16) {
    // HR
    var hrBuffer = ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Decode as UINT8
    print(hr);
    // RR (more than one can be retrieved in the list)
    var nRr = (valueSorted.length - 2) ~/ 2; // Remove flags and HR from the byte count; each RR is a UINT16, i.e. two bytes
    List<int> rrs = [];
    for (var i = 0; i < nRr; i++) {
      var rrBuffer = ByteData.view(buffer, 2 + (i * 2), 2); // Get pairs of bytes, starting at the 3rd byte
      var rr = rrBuffer.getUint16(0); // Decode as UINT16 (big-endian after the swap)
      rrs.insert(i, rr);
    }
    print(rrs);
  }
}
Hope it helps, the key is to get the buffer view of the sorted list, get the bytes that you need, and recode them as the standard points out.
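As a quick sanity check (the input bytes here are made up for illustration):

_parseHr([0, 60]);        // prints: 60
_parseHr([16, 61, 2, 3]); // prints: 61, then [770] (2 + 3*256)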
I used print(new String.fromCharCodes(value)); and that worked for me.
value is your return from List<int> value = await characteristic.read();
I thank ukBaz for his answer to this question. Write data to BLE device and read its response flutter?
You can use my package byte_data_wrapper to transform this data to a decimal value which you can understand:
Get the buffer:
import 'dart:typed_data';
final buffer = Uint8List.fromList(result).buffer; // Each list element is one byte
Create the byteDataCreator:
// Don't forget to add it to your pubspec.yaml
//dependencies:
// byte_data_wrapper:
// git: git://github.com/Taym95/byte_data_wrapper.git
import 'package:byte_data_wrapper/byte_data_wrapper.dart';
final byteDataCreator = ByteDataCreator.view(buffer);
Get your data:
// You can use getUint8() if the value is a Uint8
final min = byteDataCreator.getUint16();
final max = byteDataCreator.getUint16();
final stepSize = byteDataCreator.getUint16();
I know it's too late to answer this, but if anyone is still having trouble, just convert the value to an integer manually, because I think you are receiving a kind of byte array (correct me if I'm wrong).
import 'dart:math';
import 'dart:typed_data';

num bytesToInteger(List<int> bytes) {
  /// Given:
  ///   232 3 0 0 (Endian.little representation)
  /// To binary:
  ///   00000000 00000000 00000011 11101000
  /// Combined:
  ///   00000000000000000000001111101000
  /// Equivalent: 1000
  num value = 0;
  // Forcing Endian.little (I think most devices nowadays use this type)
  if (Endian.host == Endian.big) {
    bytes = List.from(bytes.reversed);
  }
  for (var i = 0, length = bytes.length; i < length; i++) {
    value += bytes[i] * pow(256, i);
  }
  return value;
}
And vice versa, when you need to write a value over 255:
Uint8List integerToBytes(int value) {
  const arrayLength = 4;
  return Uint8List(arrayLength)
    ..buffer.asByteData().setInt32(0, value, Endian.little);
}
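A quick round-trip check (the example values are my own):

print(bytesToInteger([232, 3, 0, 0])); // 1000
print(integerToBytes(1000));           // [232, 3, 0, 0]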
Hope this helps.
P.S. I also posted about a similar problem here.
I would like to know how to read a binary file into memory (like an "ArrayBuffer" in JavaScript), and how to read and write 8-bit, 16-bit, 32-bit, etc. values, even 5-bit or 10-bit values, at different offsets in that memory.
extension Binary {
    static func readFileToMemory(_ file) -> ArrayBuffer {
        let data = NSData(contentsOfFile: "/path/to/file/7CHands.dat")!
        var dataRange = NSRange(location: 0, length: ?)
        var ? = [Int32](count: ?, repeatedValue: ?)
        data.getBytes(&?, range: dataRange)
    }

    static func writeToMemory(_ buffer, location, value) {
        buffer[location] = value
    }

    static func readFromMemory(_ buffer, location) {
        return buffer[location]
    }
}
I have looked at a bunch of places but haven't found a standard reference.
https://github.com/nst/BinUtils/blob/master/Sources/BinUtils.swift
https://github.com/apple/swift/blob/master/stdlib/public/core/ArrayBuffer.swift
https://github.com/uraimo/Bitter/blob/master/Sources/Bitter/Bitter.swift
In Swift, how do I read an existing binary file into an array?
Swift - writing a byte stream to file
https://apple.github.io/swift-nio/docs/current/NIO/Structs/ByteBuffer.html
https://github.com/Cosmo/BinaryKit/blob/master/Sources/BinaryKit.swift
https://github.com/vapor-community/bits/blob/master/Sources/Bits/Data%2BBytesConvertible.swift
https://academy.realm.io/posts/nate-cook-tryswift-tokyo-unsafe-swift-and-pointer-types/
https://medium.com/@gorjanshukov/working-with-bytes-in-ios-swift-4-de316a389a0c
I would like for this to be as low-level as possible. So perhaps using UnsafeMutablePointer, UnsafePointer, or UnsafeMutableRawPointer.
Saw this as well:
let data = NSMutableData()
var goesIn: Int32 = 42
data.appendBytes(&goesIn, length: sizeof(Int32))
println(data) // <2a000000>
var comesOut: Int32 = 0
data.getBytes(&comesOut, range: NSMakeRange(0, sizeof(Int32)))
println(comesOut) // 42
I would basically like to allocate a chunk of memory and be able to read from and write to it. Not sure how to do that. Perhaps using C is the best way, not sure.
Just saw this too:
let rawData = UnsafeMutablePointer<UInt8>.allocate(capacity: width * height * 4)
If you're looking for low-level code you'll need to use UnsafeMutableRawPointer. This is a pointer to untyped data. Memory is accessed in bytes, i.e. in chunks of at least 8 bits. I'll cover multiples of 8 bits first.
Reading a File
To read a file this way, you need to manage file handles and pointers yourself. Try the following code:
// Open the file in read mode
let file = fopen("/Users/joannisorlandos/Desktop/ownership", "r")
// Files need to be closed manually
defer { fclose(file) }
// Find the end
fseek(file, 0, SEEK_END)
// Count the bytes from the start to the end
let fileByteSize = ftell(file)
// Return to the start
fseek(file, 0, SEEK_SET)
// Buffer of 1 byte entities
let pointer = UnsafeMutableRawPointer.allocate(byteCount: fileByteSize, alignment: 1)
// Buffer needs to be cleaned up manually
defer { pointer.deallocate() }
// Size is 1 byte
let readBytes = fread(pointer, 1, fileByteSize, file)
let errorOccurred = readBytes != fileByteSize
First you need to open the file. This can be done using Swift strings, since the compiler converts them into a CString itself.
Because cleanup is all on us at this low level, a defer is put in place to close the file at the end.
Next, the file is set to seek the end of the file. Then the distance between the start of the file and the end is calculated. This is used later, so the value is kept.
Then the program is set to return to the start of the file, so the application starts reading from the start.
To store the file, a pointer is allocated with the number of bytes that the file has in the file system. Note: this can change in between the steps if you're extremely unlucky or the file is accessed quite often, but I think for you this is unlikely.
The number of bytes is set, and aligned to one byte. (You can learn more about memory alignment on Wikipedia.)
Then another defer is added to make sure no memory leaks at the end of this code. The pointer needs to be deallocated manually.
The file's bytes are read and stored in the pointer. Do note that this entire process reads the file in a blocking manner. It may be preferable to read files asynchronously; if you plan on doing that, I recommend looking into a library like SwiftNIO instead.
errorOccurred can be used to throw an error or handle issues in another manner.
From here, your buffer is ready for manipulation. You can print the file if it's text using the following code:
print(String(cString: pointer.bindMemory(to: Int8.self, capacity: fileByteSize)))
From here, it's time to learn how to read and manipulate the memory.
Manipulating Memory
The below demonstrates reading byte 20..<24 as an Int32.
let int32 = pointer.load(fromByteOffset: 20, as: Int32.self)
I'll leave the other integers up to you. Next, you can also put data at a position in memory.
pointer.storeBytes(of: 40, toByteOffset: 30, as: Int64.self)
This will replace bytes 30..<38 with the number 40. Note that big-endian systems, although uncommon, will store information in a different order from the usual little-endian systems. More about that here.
Modifying Bits
As you noted, you're also interested in modifying five or ten bits at a time. To do so, you'll need to mix the previous information with the new information.
var data32bits = pointer.load(fromByteOffset: 20, as: Int32.self)
var newData = 0b11111000
In this case, you'll be interested in the first 5 bits and want to write them over bit 2 through 7. To do so, first you'll need to shift the bits to a position that matches the new position.
newData = newData >> 2
This shifts the bits 2 places to the right. The two leftmost bits that are now empty become 0. The 2 bits on the right that were shifted out no longer exist.
Next, you'll want to get the old data from the buffer and overwrite the new bits.
To do so, first move the new byte into a 32-bit buffer.
var newBits = numericCast(newData) as Int32
The 32 bits will be aligned all the way to the right. If you want to replace the second of the four bytes, run the following:
newBits = newBits << 16
This moves the bits 16 places (2 bytes) to the left, so they now sit in byte position 1, counting from 0 at the most significant byte.
Then, the two bytes need to be added on top of each other. One common method is the following:
let oldBits = data32bits & 0b11111111_11000001_11111111_11111111
let result = oldBits | newBits
What happens here is that we remove the 5 bits with new data from the old dataset. We do so by doing a bitwise and on the old 32 bits and a bitmap.
The bitmap has all 1's except for the new locations which are being replaced. Because those are empty in the bitmap, the and operator will exclude those bits since one of the two (old data vs. bitmap) is empty.
AND operators will only be 1 if both sides of the operator are 1.
Finally, the oldBits and the newBits are merged with an OR operator. This takes each bit on both sides and sets the result to 1 if the bit at that position in either operand is 1.
The merge works cleanly because the bits that were masked out of oldBits are exactly the positions that newBits sets.
I am receiving EEG data from a 24 bit ADC over serial. The ADC data is transmitting in 3 bytes from MSB to LSB. The full packet is 21 bytes:
The first byte is the start byte - 0xFF (255 in decimal)
Then packet number byte.
Then the next 3 bytes are the 24 bit ADC value broken into MSB LSB2 LSB1
I can parse the data fine, but re-constructing a 2's complement signed int32 number is causing issues. The values I am getting out certainly don't reflect what the ADC should be giving out.
Below are the lines to read and parse the 504 bytes (which gives me 24 ADC values: 504 bytes / 21 bytes per packet = 24 values). I have tried uint8 instead of uchar with similar results (when I try int8 I get an invalid specified precision error).
comEEGSMT = serial(com,'BaudRate',3000000);
fopen(comEEGSMT);
rawData(1:504) = fread(comEEGSMT, 504, 'uchar');
fclose(comEEGSMT);
startPackets = find(rawData == 255);
bytes = rawData([startPackets+2 startPackets+3 startPackets+4]);
I have tried the following method to reconstruct the value:
ADC_value = bytes(:,1)*256^2 + bytes(:,2)*256 + bytes(:,3);
and the following line is the formula to convert the above number to volts:
ADC_value_volts = ADC_value*(5/3)*(1/(2^32));
The values are in the range of 4000 - 8000 microvolts with large jumps in value. The values SHOULD be in the range of 200 - 600 microvolts with small changes.
I have found other questions relating to similar issues, but have had no success trying the proposed solutions such as in the link below:
https://uk.mathworks.com/matlabcentral/answers/137965-concatenate-3-bytes-array-of-real-time-serial-data-into-single-precision
Any help would be very much appreciated as I've been stuck on this for quite long.
Thanks Mark
Starting with ADC_value as int32 with value 0, then:
ADC_value |= MSB << 16;
ADC_value |= LSB2 << 8;
ADC_value |= LSB1;
And then, to find out the corresponding volts value, supposing your ADC has a reference voltage VREF, in volts (e.g. 5.0V):
ADC_value_volts = (ADC_value * VREF)/2^24
since your converter is 24 bits, not 32.
Note the above expressions are in C language equivalent, not Matlab.
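One caveat: the OR expressions above yield a plain unsigned 24-bit value, and since the ADC output is two's complement, bit 23 has to be sign-extended before scaling to volts. A minimal sketch of that reconstruction (written in Dart for concreteness, though the logic is language-neutral; msb, lsb2, lsb1 are the three payload bytes):

int adcToSigned(int msb, int lsb2, int lsb1) {
  // Combine the three bytes, MSB first (as transmitted).
  var value = (msb << 16) | (lsb2 << 8) | lsb1;
  // Sign-extend: if bit 23 is set, the 24-bit value is negative.
  if ((value & 0x800000) != 0) value -= 0x1000000;
  return value;
}

// adcToSigned(0xFF, 0xFF, 0xFF) == -1
// adcToSigned(0x7F, 0xFF, 0xFF) == 8388607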
EDIT:
The ADC data sheet tells us the PGA gain can be set to the following values: 1, 2, 4, 6, 8, 12, 24, one value at a time for each channel.
The FSR (full-scale range) of measurement is (2*VREF)/Gain = 5/3 for Gain = 6 (eq. (5), page 23), so this must be accounted for in the expression computing the volts values. (These can be verified if you have access to the hardware and can make some measurements.)
The data coming out of the ADC is already in two's complement binary form, 24 bits.
The odd thing is that the data sheet counts bits starting with 1, not 0, which is why it talks about shifting by "17" instead of 16 - in code this is in fact 16 (revealed in fig. 47, page 42).
So the formula for computing ADC_value_volts should be:
ADC_value_volts = (ADC_value * FSR/(2^23))/3 (1 LSB = FSR/(2^23), pg. 37)
If there are other calculations/modifications from the original, these must be explained by the provider.
If the provider is not friendly, it is worth changing providers...
To reduce the download size of an iPhone application I'm compressing some audio files. Specifically I'm using afconvert on the command line to change .wav format to .caf format w/ ima4 compression.
I've read this (wooji-juice.com) awesome post about this exact topic. I'm having trouble w/ the "decoding ima4 packets" step. I've looked at their sample code and I'm stuck. Please help w/ some pseudo code or sample code that can guide me in the right direction.
Thanks!
Additional info:
Here is what I've completed and where I'm having trouble...
I can play .wav files in both the simulator and on the phone.
I can compress .wav files to .caf w/ ima4 compression using afconvert on the command line. I'm using the SoundEngine that came w/ CrashLanding (I fixed one memory leak).
I modified the SoundEngine code to look for the mFormatID 'ima4'.
I don't understand the blog post linked above starting w/ "Calculating the size of the unpacked data". Why do I need to do this? Also, what does the term "packet" refer to? I'm very new to any sort of audio programming.
After gathering all the data from Wooji-Juice, the Multimedia Wiki and Apple, here is my proposal (it may need some experimentation):
File structure
Apple IMA4 files are made of packets of 34 bytes. This is the packet unit used to build the file.
Each 34-byte packet has two parts:
the first 2 bytes contain the preamble: an initial predictor and a step index
the 32 remaining bytes contain the sound nibbles (a nibble of 4 bits is used to retrieve a 16-bit sample)
Each packet has 32 bytes of compressed data, which represent 64 samples of 16 bits.
If the sound file is stereo, the packets are interleaved (one for the left, one for the right); there must be an even number of packets.
Decoding
Each packet of 34 bytes will lead to the decompression of 64 samples of 16 bits. So the size of the uncompressed data is 128 bytes per packet.
The decoding pseudo code looks like:
int[] ima_index_table = ... // Index table from [Multimedia Wiki][2]
int[] ima_step_table = ... // Step table from [Multimedia Wiki][2]
byte[] packet = ... // A packet of 34 compressed bytes
short[] output = ... // The output buffer of 128 bytes (64 samples)

int preamble = (packet[0] << 8) | packet[1]; // Big-endian 16-bit preamble
int predictor = (short)(preamble & 0xFF80); // Upper 9 bits, sign-extended; see [Multimedia Wiki][2]
int step_index = preamble & 0x007F; // Lower 7 bits; see [Multimedia Wiki][2]
int step = ima_step_table[step_index];
int diff;
int i;
int j = 0;

for (i = 2; i < 34; i++) {
    byte data = packet[i];
    int lower_nibble = data & 0x0F;
    int upper_nibble = (data & 0xF0) >> 4;

    // Decode the lower nibble: bit 3 is the sign, bits 0-2 the magnitude
    diff = (int)(((lower_nibble & 0x07) + 0.5f) * step / 4);
    if (lower_nibble & 0x08)
        predictor -= diff;
    else
        predictor += diff;
    // Clamp the predictor to the signed 16-bit sample range
    if (predictor > 32767)
        predictor = 32767;
    else if (predictor < -32768)
        predictor = -32768;
    output[j++] = (short) predictor;
    // Update the step for the next nibble
    step_index += ima_index_table[lower_nibble];
    if (step_index < 0) step_index = 0;
    else if (step_index > 88) step_index = 88;
    step = ima_step_table[step_index];

    // Decode the upper nibble the same way
    diff = (int)(((upper_nibble & 0x07) + 0.5f) * step / 4);
    if (upper_nibble & 0x08)
        predictor -= diff;
    else
        predictor += diff;
    if (predictor > 32767)
        predictor = 32767;
    else if (predictor < -32768)
        predictor = -32768;
    output[j++] = (short) predictor;
    step_index += ima_index_table[upper_nibble];
    if (step_index < 0) step_index = 0;
    else if (step_index > 88) step_index = 88;
    step = ima_step_table[step_index];
}
The term "packet" refers to a group of compressed audio samples with a header. You need the header to decode the data immediately following. If you consider your ima4 file to be a book, then each packet is a page. At the top are the values needed to decode that page, followed by the compressed audio.
That's why you need to calculate the size of the unpacked data (and then make space for it) -- since it's compressed, you need to convert data from compressed audio to uncompressed audio before you can output it. In order to allocate an output buffer, you need to know how big it has to be (note: you may need to output in chunks that are larger than a single packet at a time).
It looks like the typical structure, per the earlier "Overview" section, is that sets of 64 samples, each 16 bits (so 128 bytes) are translated to a 2-byte header and a 32-byte set of compressed samples (34 bytes in all). So, in the typical case, you can produce your expected output datasize by taking the input data size, dividing by 34 to get the number of packets, then multiplying by 128 bytes for the uncompressed audio per packet.
You shouldn't do that, though. It looks like you should instead query kAudioFilePropertyDataFormat to get the mBytesPerPacket -- this is the "34" value above -- and mFramesPerPacket -- this is the 64 above, which gets multiplied by 2 (for 16-bit samples) to make 128 bytes of output.
Then, for each packet, you will need to run through the decoding described in the post. In somewhat longer pseudo C-code, assuming you are getting arrays of bytes, to handle the header:
packet = GetPacket();
Header = (packet[0] << 8) | packet[1]; // Big-endian 16-bit value
step_index = Header & 0x007f; // Lower seven bits
predictor = Header & 0xff80; // Upper nine bits

for (i = 2; i < mBytesPerPacket; i++)
{
    nibble = packet[i] & 0x0f; // Low nibble
    // process that nibble, per the blog post -- be careful on sign-extension!

    nibble = (packet[i] & 0xf0) >> 4; // High nibble
    // process that nibble, per the blog post -- be careful on sign-extension!
}
The sign-extension above refers to the fact that the post involves handling each nibble both in an unsigned and a signed way. If the high bit of a nibble (bit 3) is 1, then it is negative; additionally, a bit-shift may sign-extend. This is not handled in the above pseudocode.
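For concreteness, here is one way to handle that sign step (a sketch in Dart; step and predictor are the decoder state described above):

// Decode one 4-bit nibble: bit 3 is the sign, bits 0-2 the magnitude.
int applyNibble(int nibble, int step, int predictor) {
  final diff = (((nibble & 0x07) + 0.5) * step / 4).truncate();
  predictor += (nibble & 0x08) != 0 ? -diff : diff;
  // Clamp to the signed 16-bit sample range.
  return predictor.clamp(-32768, 32767).toInt();
}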
I want to count the number of set bits in a uint in Specman:
var x: uint;
gen x;
var x_set_bits: uint;
x_set_bits = ?;
What's the best way to do this?
One way I've seen is:
x_set_bits = pack(NULL, x).count(it == 1);
pack(NULL, x) converts x to a list of bits.
count acts on the list and counts all the elements for which the condition holds. In this case the condition is that the element equals 1, which comes out to the number of set bits.
I don't know Specman, but another way I've seen this done looks a bit cheesy, yet tends to be efficient: keep a 256-element array where each element holds the number of set bits in its index value. For example (pseudocode):
bit_count = [0, 1, 1, 2, 1, ...]
Thus, bit_count[2] == 1, because the value 2, in binary, has a single "1" bit. Similarly, bit_count[255] == 8.
Then, break the uint into bytes, use the byte values to index into the bit_count array, and add the results. Pseudocode:
total = 0
for byte in list_of_bytes
total = total + bit_count[byte]
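For illustration, the same idea in Dart (the names are my own; the table is filled once using the recurrence bitCount[v] = bitCount[v >> 1] + (v & 1)):

// 256-entry lookup table: bitCount[v] = number of set bits in byte v.
final bitCount = List<int>.filled(256, 0);

int countSetBits(int x) {
  // Split the (assumed 32-bit) value into bytes and sum the table entries.
  var total = 0;
  for (var i = 0; i < 4; i++) {
    total += bitCount[(x >> (8 * i)) & 0xFF];
  }
  return total;
}

void main() {
  for (var v = 1; v < 256; v++) {
    bitCount[v] = bitCount[v >> 1] + (v & 1);
  }
  print(countSetBits(0xF0F0)); // 8
}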
EDIT
This issue shows up in the book Beautiful Code, in the chapter by Henry S. Warren. Also, Matt Howells shows a C-language implementation that efficiently calculates a bit count. See this answer.