How to convert ASCII array (image) to a single string - import

My metadata is stored in an 8-bit unsigned dataset in an HDF5 file. After importing it to DM, it becomes a 2D image of dimension 1×length, where each "pixel" stores the ASCII code of the corresponding character.
For further processing I have to convert this ASCII array to a single string, and then to a TagGroup. Here is the stupid (pixel-by-pixel) method I currently use:
String Img2Str(image img) {
    number dim1, dim2
    img.GetSize(dim1, dim2)
    string out = ""
    for (number i = 0; i < dim1 * dim2; i++)
        out += img.GetPixel(0, i).chr()
    return out
}
This pixel-wise operation is really slow! Is there a faster way to do this?

Yes, there is a better way. You really want to look into the chapter on raw-data streaming:
If you hold raw data in a "stream" object, you can read and write it in any form you like. So the solution to your problem is to:
Create a stream
Add the "image" to the stream (writing binary data)
Reset the stream position to the start
Read out the binary data as a string
This is the code:
{
    number sx = 10
    number sy = 10
    image textImg := IntegerImage("Text", 1, 0, sx, sy)
    textImg = 97 + random() * 26
    textImg.ShowImage()

    object stream = NewStreamFromBuffer(0)
    ImageWriteImageDataToStream(textImg, stream, 0)
    stream.StreamSetPos(0, 0)

    string asString = StreamReadAsText(stream, 0, sx * sy)
    Result("\n as string:\n\t" + asString)
}
Note that you could also create a stream linked to a file on disk and, provided you know the starting position in bytes, read from the file directly as well.
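For illustration, here is a minimal C analogue of that idea (path, offset, and length are placeholders): read a known byte range from a file and hand it back as a string.

#include <stdio.h>
#include <stdlib.h>

/* Read `length` raw bytes starting at `offset` and return them as a
   NUL-terminated string. The caller frees the result. */
char *read_string_at(const char *path, long offset, size_t length) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    char *buf = malloc(length + 1);
    if (buf && fseek(f, offset, SEEK_SET) == 0 &&
        fread(buf, 1, length, f) == length) {
        buf[length] = '\0';   /* the raw bytes become a C string */
    } else {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    return buf;
}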

Related

How to read and convert Bluetooth characteristic from byte data to proper values (Bluetooth for Flutter)

I have to read and write some values to a Bike Smart trainer over BLE (Bluetooth Low Energy) with Flutter. When I try to read the values from the GATT characteristic org.bluetooth.characteristic.supported_power_range (found on the bluetooth.org site https://www.bluetooth.com/specifications/gatt/characteristics/ ), I get an int list [0,0,200,0,1,0] back.
The GATT characteristic says there are three sint16 fields for the minimum, maximum, and step size in watts (power).
The byte transmission order also says that the least significant octet is transmitted first.
My guess is that the three parameters are returned in an int array with an 8-bit value in each element. But I can't see how the 200 could be, say, the maximum power setting, because the smart trainer should provide up to 2300 W of resistance (ELITE Drivo https://www.elite-it.com/de/produkte/home-trainer/rollentrainer-interaktive/drivo).
The output results from this code snippet:
device.readCharacteristic(savedCharacteristics[Characteristics.SUPPORTED_POWER_RANGE]).then((List<int> result) {
  result.forEach((i) {
    print(i.toString());
  });
});
// result: [0,0,200,0,1,0]
Maybe someone knows how to interpret the binary/hex/decimal values of the flutter_blue characteristic output. Any hints would be great.
Edit
For future readers, I found the solution. I'm a bit ashamed, because I had read the wrong characteristic.
The return value [0,0,200,0,1,0] was for the supported resistance level (which is 20%: the 200 encodes 20% with a resolution of 0.1, as described in the GATT spec).
I also got a return value for the supported power level, which was [0,0,160,15,1,0]. Here is how to read the two bytes of the maximum power level: you get 160,15, and the spec says LSO first (least significant octet first; don't confuse it with LSB, least significant bit first). Because of that you have to read it as 15,160. Now do the math with the first byte: 15 * 256 + 160 = 4000, and that is the correct maximum supported power of the trainer, as in the datasheet.
I hope this helps someone. Thanks for the two replies; they are also correct and helped me find my mistake.
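For illustration, the same arithmetic in a minimal C sketch, using the power-range payload from above:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Bytes as they arrive over BLE, least significant octet first */
    uint8_t payload[] = {0, 0, 160, 15, 1, 0};

    /* The maximum-power field occupies bytes 2..3; LSO first means
       byte 3 is the high octet: 15 * 256 + 160 = 4000 */
    uint16_t max_power = (uint16_t)(payload[2] | (payload[3] << 8));
    printf("%u\n", max_power); /* prints 4000 */
    return 0;
}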
I had the same problem connecting to a Polar H10 to recover HR and RR intervals. It might not be 100% the same, but I think my case can guide you to solve yours.
I am receiving the same kind of list as you; here are two examples:
[0,60]
[16,61,524,2]
Looking at the specs of the GATT Bluetooth Heart Rate Service, I figured out that each element of the retrieved list matches one byte of the data transmitted by the characteristic you are subscribed to. For this service, the first byte, i.e., the first element of the list, has some flags that indicate whether there is an RR value after the HR value (16) or not (0). These are just two cases among the many that can occur depending on the flag values, but I think it shows how important this first byte can be.
After that, the HR value is coded as an unsigned 8-bit integer (UINT8), so the HR value matches the second element of the lists shown above. The RR interval, however, is coded as an unsigned 16-bit integer (UINT16), which complicates the translation of the last two elements of list #2 [16,61,524,2]: we need 16 bits to get this value and the bytes are not in the correct order.
This is where the library dart:typed_data comes in:
import 'dart:typed_data';
...
_parseHr(List<int> value) {
  // First sort the values in the list to interpret the bytes correctly
  List<int> valueSorted = [];
  valueSorted.insert(0, value[0]);
  valueSorted.insert(1, value[1]);
  for (var i = 0; i < (value.length - 3); i++) {
    valueSorted.insert(i + 2, value[i + 3]);
    valueSorted.insert(i + 3, value[i + 2]);
  }

  // Get flags directly from the list
  var flags = valueSorted[0];

  // Get the ByteBuffer view of the data to recode it later
  var buffer = new Uint8List.fromList(valueSorted).buffer; // Buffer bytes from list

  if (flags == 0) {
    // HR
    var hrBuffer = new ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Recode as UINT8
    print(hr);
  }
  if (flags == 16) {
    // HR
    var hrBuffer = new ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Recode as UINT8
    // RR (more than one can be retrieved in the list)
    var nRr = (valueSorted.length - 2) / 2; // Remove flags and hr from byte count; then split in two since RR is coded as UINT16
    List<int> rrs = [];
    for (var i = 0; i < nRr; i++) {
      var rrBuffer = new ByteData.view(buffer, 2 + (i * 2), 2); // Get pairs of bytes starting at the 3rd byte
      var rr = rrBuffer.getUint16(0); // Recode as UINT16
      rrs.insert(i, rr);
    }
    print(rrs);
  }
}
Hope it helps; the key is to get the buffer view of the sorted list, get the bytes that you need, and recode them as the standard specifies.
I used print(new String.fromCharCodes(value)); and that worked for me.
value is your return from List<int> value = await characteristic.read();
I thank ukBaz for his answer to this question. Write data to BLE device and read its response flutter?
You can use my package byte_data_wrapper to transform this data to a decimal value which you can understand:
Get the buffer:
import 'dart:typed_data';
final buffer = Uint8List.fromList(result).buffer; // one byte per list element
Create the byteDataCreator:
// Don't forget to add it to your pubspec.yaml
//dependencies:
// byte_data_wrapper:
// git: git://github.com/Taym95/byte_data_wrapper.git
import 'byte_data_wrapper/byte_data_wrapper.dart';
final byteDataCreator = ByteDataCreator.view(buffer);
Get your data :
// You can use getUint8() if the value is a Uint8
final min = byteDataCreator.getUint16();
final max = byteDataCreator.getUint16();
final stepSize = byteDataCreator.getUint16();
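The same sequential-read pattern in a minimal C sketch (the ByteReader type and its helper are hypothetical, just to mirror the getUint16() cursor above):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cursor-based reader over a byte array */
typedef struct { const uint8_t *data; size_t pos; } ByteReader;

static uint16_t read_u16_le(ByteReader *r) {
    uint16_t v = (uint16_t)(r->data[r->pos] | (r->data[r->pos + 1] << 8));
    r->pos += 2; /* advance the cursor past the field just read */
    return v;
}

int main(void) {
    const uint8_t payload[] = {0, 0, 200, 0, 1, 0};
    ByteReader r = {payload, 0};
    uint16_t min = read_u16_le(&r);
    uint16_t max = read_u16_le(&r);
    uint16_t step = read_u16_le(&r);
    printf("min=%u max=%u step=%u\n", min, max, step); /* 0, 200, 1 */
    return 0;
}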
I know it's too late to answer this, but if anyone is still having trouble: just convert the value to an integer manually, because I think you are receiving something like a byte array (correct me if I'm wrong).
import 'dart:math';
import 'dart:typed_data';

num bytesToInteger(List<int> bytes) {
  /// Given
  ///   232 3 0 0
  /// Endian.little representation:
  /// To binary
  ///   00000000 00000000 00000011 11101000
  /// Combine
  ///   00000000000000000000001111101000
  /// Equivalent: 1000
  num value = 0;
  // Forcing Endian.little (I think most devices nowadays use this)
  if (Endian.host == Endian.big) {
    bytes = List.from(bytes.reversed);
  }
  for (var i = 0, length = bytes.length; i < length; i++) {
    value += bytes[i] * pow(256, i);
  }
  return value;
}
And vice versa, for when you need to write a value over 255:
Uint8List integerToBytes(int value) {
  const arrayLength = 4;
  return Uint8List(arrayLength)
    ..buffer.asByteData().setInt32(0, value, Endian.little);
}
Hope this helps.
P.S. I also posted about a similar problem here.

How to read and write bits in a chunk of memory in Swift

I would like to know how to read a binary file into memory (like an "ArrayBuffer" in JavaScript), and write 8-bit, 16-bit, 32-bit, etc. values to different parts of that memory, even 5-bit or 10-bit values.
extension Binary {
    static func readFileToMemory(_ file) -> ArrayBuffer {
        let data = NSData(contentsOfFile: "/path/to/file/7CHands.dat")!
        var dataRange = NSRange(location: 0, length: ?)
        var ? = [Int32](count: ?, repeatedValue: ?)
        data.getBytes(&?, range: dataRange)
    }
    static func writeToMemory(_ buffer, location, value) {
        buffer[location] = value
    }
    static func readFromMemory(_ buffer, location) {
        return buffer[location]
    }
}
I have looked at a bunch of places but haven't found a standard reference.
https://github.com/nst/BinUtils/blob/master/Sources/BinUtils.swift
https://github.com/apple/swift/blob/master/stdlib/public/core/ArrayBuffer.swift
https://github.com/uraimo/Bitter/blob/master/Sources/Bitter/Bitter.swift
In Swift, how do I read an existing binary file into an array?
Swift - writing a byte stream to file
https://apple.github.io/swift-nio/docs/current/NIO/Structs/ByteBuffer.html
https://github.com/Cosmo/BinaryKit/blob/master/Sources/BinaryKit.swift
https://github.com/vapor-community/bits/blob/master/Sources/Bits/Data%2BBytesConvertible.swift
https://academy.realm.io/posts/nate-cook-tryswift-tokyo-unsafe-swift-and-pointer-types/
https://medium.com/@gorjanshukov/working-with-bytes-in-ios-swift-4-de316a389a0c
I would like for this to be as low-level as possible. So perhaps using UnsafeMutablePointer, UnsafePointer, or UnsafeMutableRawPointer.
Saw this as well:
let data = NSMutableData()
var goesIn: Int32 = 42
data.appendBytes(&goesIn, length: sizeof(Int32))
println(data) // <2a000000>
var comesOut: Int32 = 0
data.getBytes(&comesOut, range: NSMakeRange(0, sizeof(Int32)))
println(comesOut) // 42
I would basically like to allocate a chunk of memory and be able to read and write from it. Not sure how to do that. Perhaps using C is the best way, not sure.
Just saw this too:
let rawData = UnsafeMutablePointer<UInt8>.allocate(capacity: width * height * 4)
If you're looking for low-level code you'll need to use UnsafeMutableRawPointer. This is a pointer to untyped data. Memory is accessed in bytes, i.e., chunks of 8 bits. I'll cover multiples of 8 bits first.
Reading a File
To read a file this way, you need to manage file handles and pointers yourself. Try the following code:
// Open the file in read mode
let file = fopen("/Users/joannisorlandos/Desktop/ownership", "r")
// Files need to be closed manually
defer { fclose(file) }
// Find the end
fseek(file, 0, SEEK_END)
// Count the bytes from the start to the end
let fileByteSize = ftell(file)
// Return to the start
fseek(file, 0, SEEK_SET)
// Buffer of 1 byte entities
let pointer = UnsafeMutableRawPointer.allocate(byteCount: fileByteSize, alignment: 1)
// Buffer needs to be cleaned up manually
defer { pointer.deallocate() }
// Size is 1 byte
let readBytes = fread(pointer, 1, fileByteSize, file)
let errorOccurred = readBytes != fileByteSize
First you need to open the file. This can be done using a Swift string, since the compiler converts it into a C string itself.
Because cleanup is all on us at this low level, a defer is put in place to close the file at the end.
Next, the file is set to seek the end of the file. Then the distance between the start of the file and the end is calculated. This is used later, so the value is kept.
Then the program is set to return to the start of the file, so the application starts reading from the start.
To store the file, a pointer is allocated with the number of bytes that the file has in the file system. Note: this can change in between the steps if you're extremely unlucky or the file is accessed quite often, but for you this is unlikely.
The byte count is set, and aligned to one byte. (You can learn more about memory alignment on Wikipedia.)
Then another defer is added to make sure no memory leaks at the end of this code. The pointer needs to be deallocated manually.
The file's bytes are read and stored in the pointer. Do note that this entire process reads the file in a blocking manner. If you prefer to read files asynchronously, I recommend looking into a library like SwiftNIO instead.
errorOccurred can be used to throw an error or handle issues in another manner.
From here, your buffer is ready for manipulation. You can print the file if it's text using the following code:
print(String(cString: pointer.bindMemory(to: Int8.self, capacity: fileByteSize)))
From here, it's time to learn how to read and manipulate the memory.
Manipulating Memory
The below demonstrates reading byte 20..<24 as an Int32.
let int32 = pointer.load(fromByteOffset: 20, as: Int32.self)
I'll leave the other integers up to you. Next, you can also put data at a position in memory.
pointer.storeBytes(of: 40, toByteOffset: 30, as: Int64.self)
This will replace bytes 30..<38 with the number 40. Note that big-endian systems, although uncommon, store information in a different byte order than the usual little-endian systems. More about that here.
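To see the difference concretely, here is a small C check of the byte order a value has in memory:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t value = 0x0A0B0C0D;
    uint8_t bytes[4];
    memcpy(bytes, &value, sizeof value);
    /* A little-endian system prints 0d 0c 0b 0a;
       a big-endian system prints 0a 0b 0c 0d. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}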
Modifying Bits
As you noted, you're also interested in modifying five or ten bits at a time. To do so, you'll need to mix the previous information with the new information.
var data32bits = pointer.load(fromByteOffset: 20, as: Int32.self)
var newData = 0b11111000
In this case, you'll be interested in the first 5 bits and want to write them over bit 2 through 7. To do so, first you'll need to shift the bits to a position that matches the new position.
newData = newData >> 2
This shifts the bits two places to the right. The two bits on the left that are now empty become 0; the two bits on the right that were shifted off are gone.
Next, you'll want to get the old data from the buffer and overwrite the new bits.
To do so, first move the new byte into a 32-bits buffer.
var newBits = numericCast(newData) as Int32
The 32 bits will be aligned all the way to the right. If you want to replace the second of the four bytes, run the following:
newBits = newBits << 16
This moves the bits 16 places, or 2 bytes, to the left, so they now sit in byte 1 (counting from 0).
Then, the two bytes need to be added on top of each other. One common method is the following:
let oldBits = data32bits & 0b11111111_11000001_11111111_11111111
let result = oldBits | newBits
What happens here is that we remove the 5 bits with new data from the old dataset. We do so by doing a bitwise and on the old 32 bits and a bitmap.
The bitmap has all 1's except for the new locations which are being replaced. Because those are empty in the bitmap, the and operator will exclude those bits since one of the two (old data vs. bitmap) is empty.
AND operators will only be 1 if both sides of the operator are 1.
Finally, the oldBits and the newBits are merged with an OR operator. This takes each bit on both sides and sets the result to 1 if either of the two bits is 1.
This merges cleanly because the two values never set the same bit: the bits cleared in oldBits are exactly the ones newBits may set.
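The two steps (AND with a mask to clear the old bits, OR to merge the new ones) generalize into a small helper; a sketch in C:

#include <stdint.h>

/* Replace `width` bits of `word`, starting at bit `pos`
   (0 = least significant), with the low `width` bits of `value`. */
uint32_t replace_bits(uint32_t word, unsigned pos, unsigned width,
                      uint32_t value) {
    uint32_t mask = ((width < 32 ? (1u << width) : 0) - 1u) << pos;
    return (word & ~mask) | ((value << pos) & mask);
}

/* The walkthrough above corresponds to
   replace_bits(data32bits, 17, 5, 0x1F /* 0b11111 *\/). */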

CAPL - Converting 4 raw bytes into floating point

CAPL - Vector.
I receive message ID 0x110, which holds current information:
0x3E6978D5 -> 0.228
Currently I can read the data and save it into an environment variable to show in a panel using:
putValue(slow_current, this.long(4));
But I don't know how to convert the 4 hex bytes into a float variable, since I cannot use addresses or casts (float* x = (float *)&vBuffer;).
How can I do this conversion in a CAPL script? Thanks.
Typically your dbc file should contain the conversion info from the raw value (in your case 4 bytes long) to the physical value, in the form of a factor and offset definition:
So your physical value of current is calculated as follows:
phys_val = (raw_value * factor) + offset
Note: if you define a negative offset then you are actually subtracting in the equation above.
But it seems you don't have a dbc file, so you need to figure out factor and offset yourself (if you have two example raw values and know their physical equivalents, it is as easy as finding the parameters of a linear equation -> y = ax + b).
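For instance, a minimal C sketch of that fit (the sample points are made up for illustration):

#include <stdio.h>

int main(void) {
    /* Two example raw readings and their known physical values */
    double raw1 = 1000.0, phys1 = 0.0;
    double raw2 = 3000.0, phys2 = 2.0;

    /* phys = raw * factor + offset -> solve the linear system */
    double factor = (phys2 - phys1) / (raw2 - raw1);
    double offset = phys1 - raw1 * factor;
    printf("factor=%g offset=%g\n", factor, offset); /* 0.001, -1 */
    return 0;
}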
The CAPL looks like this:
variables
{
    float current_phys;
    /* adjust the values below to your needs */
    float factor = 0.001;
    long offset = -1000;
}

on message 0x110
{
    current_phys = (this.long(4) * factor) + offset;
    write("current: %f", current_phys);
}
An alternative solution, if you don't want to transform the value by force:
You define a sysvar of type float (double) and use that sysvar in the panel (link to it) instead of the envVar, or you change the type of the envVar to float (double).
The translation into float will then be done automatically.
Caveat: this trick usually requires the input number to span 8 bytes as well, since the CAPL float type is 8 bytes; but that is given here by the message payload length constraint of 8 bytes.
It does not look good, but it works:
received msg: 0x3E6978D5
putValue(float4byte, interpretAsFloat(this.long(4)));
float4byte = 0.23
I just reused Vinícius Oliveira's solution to avoid creating an environment variable. It worked:
float floatvalue;
floatvalue = interpretAsFloat(HexValue);
input (HexValue) = 0x3fe20e3a
output (floatvalue) = 1.76606
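For reference, interpretAsFloat just reinterprets the 32-bit pattern as an IEEE 754 single; a minimal C equivalent:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint32_t raw = 0x3E6978D5; /* the 4 bytes received on the bus */
    float value;
    /* memcpy reinterprets the bit pattern without the pointer-cast
       aliasing problems mentioned in the question */
    memcpy(&value, &raw, sizeof value);
    printf("%f\n", value); /* prints 0.228000 */
    return 0;
}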

Interpreting inputBuffer's Value in a Callback

I am basing my code on PortAudio's paex_record_file.c example. One of the parameters in the callback is inputBuffer, and I wanted to use its data to calculate other numbers of double/float type. I changed the file extension from .raw to .txt, but Notepad still cannot read it, leading me to believe its data is not actually encoded as text. How is the data stored in inputBuffer, and how can I do arithmetic with it (add, multiply, divide, etc.)?
This is how I initialized inputParameters:
inputParameters.device = Pa_GetDefaultInputDevice(); /* default input device */
if (inputParameters.device == paNoDevice) {
    fprintf(stderr, "Error: No default input device.\n");
    goto error;
}
inputParameters.channelCount = 2;         /* stereo input */
inputParameters.sampleFormat = paFloat32;
inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputParameters.device )->defaultLowInputLatency;
inputParameters.hostApiSpecificStreamInfo = NULL;
This question is somewhat related to print floats from audio input callback function (unanswered).
The inputBuffer parameter of the callback is a void*. The actual type of the underlying buffer depends on the parameters and flags that you passed to Pa_OpenStream.
If you specified paFloat32 then there will be a float* in there somewhere. However, there are two possibilities:
Interleaved: inputParameters.sampleFormat = paFloat32;
Non-Interleaved: inputParameters.sampleFormat = paFloat32|paNonInterleaved;
You specified the interleaved option. In this case, inputBuffer points to a single buffer of interleaved floats. So you can write:
float *samples = (float*)inputBuffer;
In a two channel stream samples will contain interleaved left and right samples, e.g.:
samples[0]; // first left sample
samples[1]; // first right sample
samples[2]; // second left sample
samples[3]; // second right sample
// etc.
For completeness: If it had been a non-interleaved stream then inputBuffer points to an array of pointers to single-channel buffers. To extract the buffer pointers you would write something like:
float *left = ((float **) inputBuffer)[0];
float *right = ((float **) inputBuffer)[1];
Note that in all cases framesPerBuffer counts frames not samples. A frame includes one sample from each channel. For example, in a stereo stream, a frame includes both the left and right channel samples.
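For example, here is a minimal callback sketch (assuming the interleaved, stereo paFloat32 setup from the question) that does arithmetic on the samples, computing the RMS level of each buffer; passing a double through userData is an assumption of this sketch:

#include <math.h>
#include "portaudio.h"

static int recordCallback(const void *inputBuffer, void *outputBuffer,
                          unsigned long framesPerBuffer,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    const float *samples = (const float *)inputBuffer;
    double sumSquares = 0.0;
    unsigned long i;
    (void)outputBuffer; (void)timeInfo; (void)statusFlags;

    if (samples != NULL) {
        /* 2 channels -> framesPerBuffer * 2 interleaved samples */
        for (i = 0; i < framesPerBuffer * 2; i++)
            sumSquares += (double)samples[i] * samples[i];
        /* Store the RMS where the main code can read it */
        *(double *)userData = sqrt(sumSquares / (framesPerBuffer * 2));
    }
    return paContinue;
}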

Decoding ima4 audio format

To reduce the download size of an iPhone application I'm compressing some audio files. Specifically I'm using afconvert on the command line to change .wav format to .caf format w/ ima4 compression.
I've read this (wooji-juice.com) awesome post about this exact topic. I'm having trouble w/ the "decoding ima4 packets" step. I've looked at their sample code and I'm stuck. Please help w/ some pseudo code or sample code that can guide me in the right direction.
Thanks!
Additional info:
Here is what I've completed and where I'm having trouble...
I can play .wav files in both the simulator and on the phone.
I can compress .wav files to .caf w/ ima4 compression using afconvert on the command line. I'm using the SoundEngine that came w/ CrashLanding (I fixed one memory leak).
I modified the SoundEngine code to look for the mFormatID 'ima4'.
I don't understand the blog post linked above starting w/ "Calculating the size of the unpacked data". Why do I need to do this? Also, what does the term "packet" refer to? I'm very new to any sort of audio programming.
After gathering all the data from Wooji-Juice, the Multimedia Wiki and Apple, here is my proposal (it may need some experimentation):
File structure
Apple IMA4 files are made of packets of 34 bytes. This is the packet unit used to build the file.
Each 34-byte packet has two parts:
the first 2 bytes contain the preamble: an initial predictor and a step index
the remaining 32 bytes contain the sound nibbles (a 4-bit nibble is used to retrieve a 16-bit sample)
Each packet has 32 bytes of compressed data, which represent 64 samples of 16 bits.
If the sound file is stereo, the packets are interleaved (one for the left, one for the right); there must be an even number of packets.
Decoding
Each packet of 34 bytes will lead to the decompression of 64 samples of 16 bits, so the size of the uncompressed data is 128 bytes per packet.
The decoding pseudo-code looks like this:
int[] ima_index_table = ... // Index table from the Multimedia Wiki
int[] ima_step_table = ...  // Step table from the Multimedia Wiki
byte[] packet = ...         // A packet of 34 compressed bytes
short[] output = ...        // The output buffer of 64 samples (128 bytes)

int preamble = (packet[0] << 8) | packet[1];
int predictor = (short)(preamble & 0xFF80); // Upper 9 bits (see the Multimedia Wiki)
int step_index = preamble & 0x007F;         // Lower 7 bits (see the Multimedia Wiki)
int step = ima_step_table[step_index];
float diff;
int i;
int j = 0;

for (i = 2; i < 34; i++) {
    byte data = packet[i];
    int lower_nibble = data & 0x0F;
    int upper_nibble = (data & 0xF0) >> 4;

    // Decode the lower nibble: bit 3 is the sign, bits 0-2 the magnitude
    diff = ((lower_nibble & 7) + 0.5f) * step / 4;
    if (lower_nibble & 8)
        diff = -diff;
    predictor += diff;
    step_index += ima_index_table[lower_nibble];
    step_index = clamp(step_index, 0, 88); // Keep the index inside the tables
    step = ima_step_table[step_index];
    // Clamp the predictor to the 16-bit output range
    if (predictor > 32767)
        output[j++] = 32767;
    else if (predictor < -32768)
        output[j++] = -32768;
    else
        output[j++] = (short) predictor;

    // Decode the upper nibble the same way
    diff = ((upper_nibble & 7) + 0.5f) * step / 4;
    if (upper_nibble & 8)
        diff = -diff;
    predictor += diff;
    step_index += ima_index_table[upper_nibble];
    step_index = clamp(step_index, 0, 88); // Keep the index inside the tables
    step = ima_step_table[step_index];
    // Clamp the predictor to the 16-bit output range
    if (predictor > 32767)
        output[j++] = 32767;
    else if (predictor < -32768)
        output[j++] = -32768;
    else
        output[j++] = (short) predictor;
}
The term "packet" refers to a group of compressed audio samples with a header. You need the header to decode the data immediately following. If you consider your ima4 file to be a book, then each packet is a page. At the top are the values needed to decode that page, followed by the compressed audio.
That's why you need to calculate the size of the unpacked data (and then make space for it) -- since it's compressed, you need to convert data from compressed audio to uncompressed audio before you can output it. In order to allocate an output buffer, you need to know how big it has to be (note: you may need to output in chunks that are larger than a single packet at a time).
It looks like the typical structure, per the earlier "Overview" section, is that sets of 64 samples, each 16 bits (so 128 bytes), are translated into a 2-byte header and a 32-byte set of compressed samples (34 bytes in all). So, in the typical case, you can compute the expected output size by taking the input data size, dividing by 34 to get the number of packets, and then multiplying by 128 bytes of uncompressed audio per packet.
You shouldn't do that, though. It looks like you should instead query kAudioFilePropertyDataFormat to get mBytesPerPacket -- this is the "34" value above -- and mFramesPerPacket -- this is the 64 above, which gets multiplied by 2 (for 16-bit samples) to make 128 bytes of output.
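In a minimal C sketch (taking the two queried values as plain parameters; the parameter names just mirror the AudioStreamBasicDescription fields):

#include <stddef.h>
#include <stdint.h>

/* Derive the decode-buffer size from the values queried via
   kAudioFilePropertyDataFormat. */
size_t decodedSizeInBytes(size_t compressedSize,
                          uint32_t mBytesPerPacket,  /* 34 for ima4 */
                          uint32_t mFramesPerPacket) /* 64 for ima4 */
{
    size_t packetCount = compressedSize / mBytesPerPacket;
    return packetCount * mFramesPerPacket * sizeof(int16_t); /* x 128 */
}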
Then, for each packet, you will need to run through the decoding described in the post. In somewhat longer pseudo C-code, assuming you are getting arrays of bytes, to handle the header:
packet = GetPacket();
Header = (packet[0] << 8) | packet[1]; // Big-endian 16-bit value
step_index = Header & 0x007f;          // Lower seven bits
predictor = Header & 0xff80;           // Upper nine bits
for (i = 2; i < mBytesPerPacket; i++)
{
    nibble = packet[i] & 0x0f;         // Low nibble
    process that nibble, per the blog post -- be careful with sign-extension!
    nibble = (packet[i] & 0xf0) >> 4;  // High nibble
    process that nibble, per the blog post -- be careful with sign-extension!
}
The sign-extension above refers to the fact that the post handles each nibble both in an unsigned and a signed way. If the high bit of a nibble (bit 3) is 1, then it is negative; additionally, a bit-shift may perform sign-extension. This is not handled in the above pseudocode.
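To make that explicit, here is a small C helper matching the pseudocode above (bit 3 is a sign flag and the low three bits are the magnitude, i.e. sign-magnitude rather than two's complement):

#include <stdint.h>

/* Turn one nibble into the signed sample delta, given the current step. */
static inline float nibble_diff(uint8_t nibble, int step)
{
    float diff = ((nibble & 7) + 0.5f) * step / 4; /* magnitude: bits 0-2 */
    return (nibble & 8) ? -diff : diff;            /* sign: bit 3 */
}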