I can't write data at an index above 128 in a byte array. The code is given below.
private void Write1(APDU apdu) throws ISOException
{
    apdu.setIncomingAndReceive();
    byte[] apduBuffer = apdu.getBuffer();
    byte j = (byte)apduBuffer[4]; // the number of incoming bytes, let's take 160
    Buffer1 = new byte[j]; // initialize an array with size 160
    for (byte i = 0; i < j; i++)
        Buffer1[(byte)i] = (byte)apduBuffer[5 + i];
}
It gives me error 6F 00 (it means: reached end of file).
I am using:
smart card type = contact card
Java Card 2.2.2 with JCOP, using APDUs
Your code contains several problems:
As already pointed out by 'pst', you are using a signed byte value, which only works up to 127; use a short instead.
You are creating a new buffer Buffer1 on every call of your Write1 method. On Java Card there is usually no automatic garbage collection; therefore memory allocation should only be done once, when the applet is installed. If you only want to process the data in the APDU buffer, just use it from there. And if you want to copy data from one byte array into another, better use javacard.framework.Util.arrayCopy(..).
You are calling apdu.setIncomingAndReceive(); but ignore the return value. The return value gives you the number of bytes of data you can read.
The following code is from the API docs and shows the common way:
short bytesLeft = (short) (buffer[ISO7816.OFFSET_LC] & 0x00FF);
if (bytesLeft < (short)55) ISOException.throwIt( ISO7816.SW_WRONG_LENGTH );
short readCount = apdu.setIncomingAndReceive();
while (bytesLeft > 0) {
    // process bytes in buffer[5] to buffer[readCount+4];
    bytesLeft -= readCount;
    readCount = apdu.receiveBytes(ISO7816.OFFSET_CDATA);
}
short j = (short)(apduBuffer[ISO7816.OFFSET_LC] & 0xFF);
Elaborating on pst's answer: a byte has 2^8 = 256 possible values, but if you are working with signed numbers they wrap around instead. So (byte)128 will actually be -128, (byte)129 will be -127, and so on.
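To see the wrap-around concretely, here is a minimal C sketch (C's signed char behaves like Java's byte here, at least on the usual two's-complement targets):

#include <stdio.h>

int main(void)
{
    signed char b = (signed char)160; // same 8-bit pattern as (byte)160 in Java
    printf("%d\n", b);        // prints -96: the value wrapped around
    printf("%d\n", b & 0xFF); // masking recovers the unsigned value: 160
    return 0;
}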
Update: While the following answer is "valid" for normal Java, please refer to Robert's answer for Java Card-specific information, as well as additional concerns/approaches.
In Java a byte has values in the range [-128, 127] so, when you say "160", that's not what the code is really giving you :)
Perhaps you'd like to use:
int j = apduBuffer[4] & 0xFF;
That "upcasts" the value apduBuffer[4] to an int while treating the original byte data as an unsigned value.
Likewise, i should also be an int (to avoid a nasty overflow-and-loop-forever bug), and the System.arraycopy method could be handy as well...
(I have no idea if that is the only/real problem -- or if the above is a viable solution on a Java Card -- but it sure is a problem and aligns with the "128 limit" mentioned.)
Happy coding.
#include <stdio.h>

typedef unsigned char *byte_pointer;

void show_bytes(byte_pointer start, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        printf(" %.2x", start[i]);
    printf("\n");
}

void show_int(int x)
{
    show_bytes((byte_pointer) &x, sizeof(int));
}

void show_float(float x)
{
    show_bytes((byte_pointer) &x, sizeof(float));
}

void show_pointer(void *x)
{
    show_bytes((byte_pointer) &x, sizeof(void *));
}

int main()
{
    int a = 0x12345678;
    byte_pointer ap = (byte_pointer) &a;
    show_bytes(ap, 3);
    return 0;
}
(Solutions according to the CS:APP book)
Big endian: 12 34 56
Little endian: 78 56 34
I know systems have different conventions for storage allocation, but if two systems use the same convention but different endianness, why are the hex values different?
Endianness is an issue that arises when we use more than one storage location for a value/type, which we do because some things won't fit in a single storage location.
As soon as we use multiple storage locations for a single value, that raises the question: which part of the value do we store in each storage location?
The first byte of a two-byte item will have a lower address than the second byte, and in particular, the address of the second byte will be at +1 from the address of the lower byte.
Storing a two-byte item in two bytes of storage, do we store the most significant byte first and the least significant byte second, or vice versa?
We choose to use directly consecutive bytes for the two bytes of the two-byte item, so no matter which (endian) way we choose to store such an item, we refer to the whole two-byte item by the lower address (the address of its first byte).
We can express these storage choices with formulas, where item[0] refers to the first byte and item[1] refers to the second byte.
item[0] = value >> 8;             // also value / 256
item[1] = value & 0xFF;           // also value % 256
value = (item[0] << 8) | item[1]; // also item[0]*256 + item[1]
--vs--
item[0] = value & 0xFF;           // also value % 256
item[1] = value >> 8;             // also value / 256
value = item[0] | (item[1] << 8); // also item[0] + item[1]*256
The first set of formulas is for big endian, and the second for little endian.
By these formulas, it doesn't matter what order we access memory as to whether item[0] first, then item[1], or vice versa, or both at the same time (common in hardware), as long as the formulas for one endian are consistently used.
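To make the two choices concrete, here is a small self-contained C sketch of the formulas above (the value 0x1234 is just an example):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t value = 0x1234;
    uint8_t big[2], little[2];

    big[0] = value >> 8;      // most significant byte first
    big[1] = value & 0xFF;
    little[0] = value & 0xFF; // least significant byte first
    little[1] = value >> 8;

    printf("big-endian bytes:    %02x %02x\n", big[0], big[1]);       // 12 34
    printf("little-endian bytes: %02x %02x\n", little[0], little[1]); // 34 12

    // Reassembly recovers the value from either layout,
    // as long as the matching formula is used:
    uint16_t from_big = (big[0] << 8) | big[1];
    uint16_t from_little = little[0] | (little[1] << 8);
    printf("%04x %04x\n", from_big, from_little); // 1234 1234
    return 0;
}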
If the item in question is a four-byte value, then there are 4 possible orderings(!) — though only two of them are truly sensible.
For efficiency, the hardware offers us multibyte memory access in one instruction (and with one reference, namely to the lowest address of the multibyte item), and therefore, the hardware itself needs to define and consistently use one of the two possible/reasonable orderings.
If the hardware did not offer multibyte memory access, then the ordering would be entirely up to the software program itself to define (accessing memory one byte at a time), and the program could choose big or little endian, even differently for each variable, as long as it consistently accesses the multiple bytes of memory in the same manner to reassemble the values stored there.
In a similar manner, when we define a structure of multiple items (e.g. struct point { int x; int y; }), software chooses whether x comes first or y comes first in memory ordering. However, since programmers (and compilers) will still choose to use hardware instructions to access individual fields such as x in one go, the hardware's endian configuration remains relevant. A small illustration of both choices at once follows below.
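This C sketch shows both layers: C guarantees that struct members are laid out in declaration order, while the byte order within each int is the hardware's:

#include <stdio.h>

struct point { int x; int y; };

int main(void)
{
    struct point p = { 0x11223344, 0x55667788 };
    unsigned char *bytes = (unsigned char *) &p;
    size_t i;
    for (i = 0; i < sizeof p; i++)
        printf(" %.2x", bytes[i]);
    printf("\n"); // on a little-endian machine: 44 33 22 11 88 77 66 55
    return 0;
}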
I have to read and write some values to a bike smart trainer over BLE (Bluetooth Low Energy), used with Flutter. When I try to read the values from the GATT characteristic org.bluetooth.characteristic.supported_power_range (found on the bluetooth.org site https://www.bluetooth.com/specifications/gatt/characteristics/), I get the return value of an int list [0,0,200,0,1,0].
The GATT characteristic says that there are 3 sint16 fields, for the min., max. and step size in watts (power).
The byte transmission order also says that the least significant octet is transmitted first.
My guess is that the 3 parameters are returned in an int array with an 8-bit value each. But I can't interpret the 200, which should perhaps be the maximum power setting, because the smart trainer should provide up to 2300 W of resistance (ELITE Drivo https://www.elite-it.com/de/produkte/home-trainer/rollentrainer-interaktive/drivo).
The output results from this code snippet:
device.readCharacteristic(savedCharacteristics[Characteristics.SUPPORTED_POWER_RANGE]).then((List<int> result) {
  result.forEach((i) {
    print(i.toString());
  });
});
// result: [0,0,200,0,1,0]
Maybe someone knows how to interpret the binary/hex/dec values of the flutter_blue characteristic output.
Any hints would be great.
Edit
For future readers: I got the solution. I'm a bit ashamed, because I read the wrong characteristic.
The return value [0,0,200,0,1,0] was for the supported resistance level (which is 20%; the 200 shows the 20% with a resolution of 0.1, as described in the GATT spec).
I also got a return value for the supported power level, which was [0,0,160,15,1,0]. Now the solution for how to read the 2 bytes of max power level: you get the 160,15. The spec says LSO first (least significant octet first; don't confuse it with LSB, least significant bit first). Because of that you have to read it as 15,160. Now do the math with the first byte: 15*256 + 160 = 4000, and that's the correct maximum supported power of the trainer, as in the datasheet.
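A minimal sketch of that byte-order math in C (the same arithmetic applies to the Dart list; the array below is just the example response from above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t data[6] = {0, 0, 160, 15, 1, 0}; // supported power range response

    // Each field is a 16-bit value, least significant octet first
    uint16_t min_power = data[0] | (data[1] << 8); // 0
    uint16_t max_power = data[2] | (data[3] << 8); // 160 + 15*256 = 4000
    uint16_t step_size = data[4] | (data[5] << 8); // 1

    printf("min %u, max %u, step %u\n", min_power, max_power, step_size);
    return 0;
}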
I hope this helps someone. Thanks for the two replies; they are also correct and helped me find my mistake.
I had the same problem connecting to a Polar H10 to recover HR and RR intervals. It might not be 100% the same, but I think my case can guide you to solve yours.
I am receiving the same kind of list as you, like these two examples:
[0,60]
[16,61,524,2]
Looking at the specs of the GATT Bluetooth Heart Rate Service, I figured that each element of the retrieved list matches 1 byte of the data transmitted by the characteristic you are subscribed to. For this service, the first byte, i.e., the first element of the list, has some flags to point out whether there is an RR value after the HR value (16) or not (0). These are just two cases among the many different ones that can occur depending on the flag values, but I think it shows how important this first byte can be.
After that, the HR value is coded as an unsigned integer with 8 bits (UINT8); that is, the HR value matches the second element of the lists shown before. However, the RR interval is coded as an unsigned integer with 16 bits (UINT16), which complicates the translation of the two last elements of list #2 [16,61,524,2], because we should use 16 bits to get this value and the bytes are not in the correct order.
This is when we import the library dart:typed_data
import 'dart:typed_data';
...
_parseHr(List<int> value) {
  // First reorder the list so the bytes can be interpreted correctly:
  // keep the flags and HR bytes as they are, then swap each RR byte pair
  // so that getUint16 (big-endian by default) decodes them properly
  List<int> valueSorted = [];
  valueSorted.add(value[0]);
  valueSorted.add(value[1]);
  for (var i = 2; i + 1 < value.length; i += 2) {
    valueSorted.add(value[i + 1]);
    valueSorted.add(value[i]);
  }
  // Get flags directly from the list
  var flags = valueSorted[0];
  // Get the ByteBuffer view of the data to recode it later
  var buffer = new Uint8List.fromList(valueSorted).buffer; // Buffer bytes from list
  if (flags == 0) {
    // HR only
    var hrBuffer = new ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Recode as UINT8
    print(hr);
  }
  if (flags == 16) {
    // HR
    var hrBuffer = new ByteData.view(buffer, 1, 1); // Get second byte
    var hr = hrBuffer.getUint8(0); // Recode as UINT8
    // RR (more than one can be retrieved in the list)
    var nRr = (valueSorted.length - 2) ~/ 2; // Remove flags and hr from byte count; each RR is a UINT16, i.e. two bytes
    List<int> rrs = [];
    for (var i = 0; i < nRr; i++) {
      var rrBuffer = new ByteData.view(buffer, 2 + (i * 2), 2); // Get pairs of bytes starting at the 3rd byte
      var rr = rrBuffer.getUint16(0); // Recode as UINT16
      rrs.insert(i, rr);
    }
    print(rrs);
  }
}
Hope it helps, the key is to get the buffer view of the sorted list, get the bytes that you need, and recode them as the standard points out.
I used print(new String.fromCharCodes(value)); and that worked for me.
value is your return from List<int> value = await characteristic.read();
I thank ukBaz for his answer to this question. Write data to BLE device and read its response flutter?
You can use my package byte_data_wrapper to transform this data to a decimal value which you can understand:
Get the buffer:
import 'dart:typed_data';
final buffer = Uint8List.fromList(result).buffer; // one list element per byte
Create the byteDataCreator:
// Don't forget to add it to your pubspec.yaml
//dependencies:
// byte_data_wrapper:
// git: git://github.com/Taym95/byte_data_wrapper.git
import 'byte_data_wrapper/byte_data_wrapper.dart';
final byteDataCreator = ByteDataCreator.view(buffer);
Get your data:
// You can use getUint8() if the value is a Uint8
final min = byteDataCreator.getUint16();
final max = byteDataCreator.getUint16();
final stepSize = byteDataCreator.getUint16();
I know it's too late to answer this, but if anyone is still having trouble, just convert it manually to an integer, because I think you are receiving a kind of byte array (correct me if I'm wrong).
import 'dart:math';
import 'dart:typed_data';

num bytesToInteger(List<int> bytes) {
  /// Given: 232 3 0 0 (Endian.little representation)
  /// To binary: 00000000 00000000 00000011 11101000
  /// Combined:  00000000000000000000001111101000
  /// Equivalent: 1000
  num value = 0;
  // Force Endian.little (most devices nowadays use this)
  if (Endian.host == Endian.big) {
    bytes = List.from(bytes.reversed);
  }
  for (var i = 0, length = bytes.length; i < length; i++) {
    value += bytes[i] * pow(256, i);
  }
  return value;
}
and vice versa when you try to write over 255
Uint8List integerToBytes(int value) {
const arrayLength = 4;
return Uint8List(arrayLength)..buffer.asByteData().setInt32(0, value, Endian.little);
}
Hope this helps.
P.S. I also posted about a similar problem here.
I need to load values from a uint8 array into a 128-bit NEON register. There is a similar question, but there were no good answers.
My solution is:
uint8_t arr[8] = {1, 2, 3, 4}; // vld1_u8 reads a full 8 bytes, so give the array 8
// load the 8-bit vals into a 64-bit reg
uint8x8_t _vld1_u8 = vld1_u8(arr);
// widen to 16-bit and move to a 128-bit reg
uint16x8_t _vmovl_u8 = vmovl_u8(_vld1_u8);
// get the low 64 bits and move them to a 64-bit reg
uint16x4_t _vget_low_u16 = vget_low_u16(_vmovl_u8);
// widen to 32-bit and move to a 128-bit reg
uint32x4_t ld32x4 = vmovl_u16(_vget_low_u16);
This works fine, but it seems to me that this approach is not the fastest. Maybe there is a better and faster way to load 8-bit data into a 128-bit reg as 32-bit values?
Edit:
Thanks to @FrankH. I've come up with a second version using some hack:
uint8x16x2_t z = vzipq_u8(vld1q_u8(arr), q_zero);
uint8x16_t rr = *(uint8x16_t*)&z;
z = vzipq_u8(rr, q_zero);
ld32x4 = *(uint32x4_t*)&z;
It boils down to this assembly (when compiler optimisations are on):
vld1.8 {d16, d17}, [r5]
vzip.8 q8, q9
vorr q9, q4, q4
vzip.8 q8, q9
So there are no redundant stores and it's pretty fast. But it is still about 1.5x slower than the first solution.
You can do a "double zip" with zeroes:
uint8x8_t zero = vdup_n_u8(0);
// first zip: interleave with zeroes, widening u8 lanes to u16 lanes
uint8x8x2_t zip8 = vzip_u8(vld1_u8(arr), zero);
// second zip: interleave again, widening u16 lanes to u32 lanes
uint16x4x2_t zip16 = vzip_u16(vreinterpret_u16_u8(zip8.val[0]),
                              vreinterpret_u16_u8(zero));
uint32x4_t ld32x4 = vreinterpretq_u32_u16(
    vcombine_u16(zip16.val[0], zip16.val[1]));
Since the vreinterpretq_*() are no-ops, this boils down to three instructions. Don't have a crosscompiler around at the moment, can't validate that :(
Edit:
Don't get me wrong there... while vreinterpretq_*() doesn't result in a Neon instruction, it's not a no-op; that's because it stops the compiler from doing the kind of funky things you'd see if you'd instead use widerVal.val[0]. All it tells the compiler is:
"you've got a uint8x16x2_t but I want to use only half of that as a uint8x16_t, give me half the registers."
Or:
"you have a uint8x16x2_t but I want to use those regs as a uint32x4_t instead."
I.e. it tells the compiler to alias sets of NEON registers, preventing the stores/loads to/from the stack that you'd get if you did the explicit subset access through the .val[...] syntax.
In a way, the .val[...] syntax "is a hack" but the better method, the use of vreinterpretq_*(), "looks like a hack". Not using it results in more instructions and slower/inferior code.
I am trying to modify pixel values (8 bits per channel RGBA) by numerically increasing/decreasing the values by a certain amount. How can I do this in Objective-C or C? The following code generates "Error: EXC_BAD_ACCESS" every time.
// Try to increase RED by 50
for (int i = 0; i < myLength; i += 4) {
    // NSLog prints the values FINE as integers
    NSLog(@"(%i/%i/%i)", rawData[i], rawData[i+1], rawData[i+2]);
    // But for some reason I cannot do this
    rawData[i] += 50;
}
and even
// Try to set RED to 50
for (int i = 0; i < myLength; i += 4) {
    // I cannot even do this...
    unsigned char newVal = 50;
    rawData[i] = newVal;
}
Sidenote: rawData is a data buffer of type unsigned char *.
It's possible that you're overrunning the end of your allocated buffer, and that's why you're getting the access violation. That most likely means that your math is wrong in the allocation, or your rawData pointer is of the wrong type.
If you are accessing the raw data of a loaded UIImage, it might be mapped into memory read-only. You'd need to copy the data into a buffer that you allocated, most likely.
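If that turns out to be the case, a minimal C sketch of the copy-first approach (reusing the rawData and myLength from the question) could look like this:

#include <stdlib.h>
#include <string.h>

unsigned char *mutableData = malloc(myLength);
if (mutableData != NULL) {
    memcpy(mutableData, rawData, myLength); // copy out of the (possibly read-only) mapping
    for (int i = 0; i < myLength; i += 4) {
        mutableData[i] += 50; // modify the red channel in the writable copy
    }
    // ... use mutableData, then free(mutableData);
}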
Hmm... what's rawData? Maybe it's a const type which you cannot modify?
To reduce the download size of an iPhone application I'm compressing some audio files. Specifically, I'm using afconvert on the command line to change .wav format to .caf format with ima4 compression.
I've read this (wooji-juice.com) awesome post about this exact topic. I'm having trouble with the "decoding ima4 packets" step. I've looked at their sample code and I'm stuck. Please help with some pseudo code or sample code that can guide me in the right direction.
Thanks!
Additional info:
Here is what I've completed and where I'm having trouble...
I can play .wav files in both the simulator and on the phone.
I can compress .wav files to .caf with ima4 compression using afconvert on the command line. I'm using the SoundEngine that came with CrashLanding (I fixed one memory leak).
I modified the SoundEngine code to look for the mFormatID 'ima4'.
I don't understand the blog post linked above starting with "Calculating the size of the unpacked data". Why do I need to do this? Also, what does the term "packet" refer to? I'm very new to any sort of audio programming.
After gathering all the data from Wooji-Juice, the Multimedia Wiki and Apple, here is my proposal (it may need some experimentation):
File structure
Apple IMA4 files are made of packets of 34 bytes. This is the packet unit used to build the file.
Each 34-byte packet has two parts:
the first 2 bytes contain the preamble: an initial predictor and a step index
the remaining 32 bytes contain the sound nibbles (each 4-bit nibble is used to retrieve a 16-bit sample)
Each packet has 32 bytes of compressed data, which represent 64 samples of 16 bits.
If the sound file is stereo, the packets are interleaved (one for the left channel, one for the right); there must be an even number of packets.
Decoding
Each packet of 34 bytes will lead to the decompression of 64 samples of 16 bits. So the size of the uncompressed data is 128 bytes per packet.
The decoding pseudo code looks like:
int[] ima_index_table = ... // Index table from [Multimedia Wiki][2]
int[] ima_step_table = ... // Step table from [Multimedia Wiki][2]
byte[] packet = ... // A packet of 34 compressed bytes
short[] output = ... // The output buffer of 128 bytes (64 samples)
int preamble = (packet[0] << 8) | packet[1]; // Big-endian 16-bit preamble
int predictor = (short)(preamble & 0xFF80); // Upper 9 bits, sign-extended; see [Multimedia Wiki][2]
int step_index = preamble & 0x007F; // Lower 7 bits; see [Multimedia Wiki][2]
int step = ima_step_table[step_index];
int diff;
int i;
int j = 0;
for (i = 2; i < 34; i++) {
    byte data = packet[i];
    int lower_nibble = data & 0x0F;
    int upper_nibble = (data & 0xF0) >> 4;
    // Decode the lower nibble: bits 0-2 are the magnitude, bit 3 is the sign
    diff = ((lower_nibble & 7) + 0.5f) * step / 4;
    if ((lower_nibble & 8) != 0) diff = -diff;
    predictor += diff;
    step_index += ima_index_table[lower_nibble];
    if (step_index < 0) step_index = 0; // Keep the index inside the step table
    if (step_index > 88) step_index = 88;
    step = ima_step_table[step_index];
    // Clamp the predictor to the signed 16-bit sample range
    if (predictor > 32767) predictor = 32767;
    else if (predictor < -32768) predictor = -32768;
    output[j++] = (short) predictor;
    // Decode the upper nibble the same way
    diff = ((upper_nibble & 7) + 0.5f) * step / 4;
    if ((upper_nibble & 8) != 0) diff = -diff;
    predictor += diff;
    step_index += ima_index_table[upper_nibble];
    if (step_index < 0) step_index = 0;
    if (step_index > 88) step_index = 88;
    step = ima_step_table[step_index];
    if (predictor > 32767) predictor = 32767;
    else if (predictor < -32768) predictor = -32768;
    output[j++] = (short) predictor;
}
The term "packet" refers to a group of compressed audio samples with a header. You need the header to decode the data immediately following. If you consider your ima4 file to be a book, then each packet is a page. At the top are the values needed to decode that page, followed by the compressed audio.
That's why you need to calculate the size of the unpacked data (and then make space for it) -- since it's compressed, you need to convert data from compressed audio to uncompressed audio before you can output it. In order to allocate an output buffer, you need to know how big it has to be (note: you may need to output in chunks that are larger than a single packet at a time).
It looks like the typical structure, per the earlier "Overview" section, is that sets of 64 samples, each 16 bits (so 128 bytes) are translated to a 2-byte header and a 32-byte set of compressed samples (34 bytes in all). So, in the typical case, you can produce your expected output datasize by taking the input data size, dividing by 34 to get the number of packets, then multiplying by 128 bytes for the uncompressed audio per packet.
You shouldn't do that, though. It looks like you should instead query kAudioFilePropertyDataFormat to get the mBytesPerPacket (this is the "34" value above) and mFramesPerPacket (this is the 64, above, that gets multiplied by 2, for 2-byte samples, to make 128 bytes of output).
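A short sketch of that size math, assuming dataFormat is an AudioStreamBasicDescription filled in from kAudioFilePropertyDataFormat and totalBytes holds the size of the file's audio data (both variable names are mine, not from the post):

UInt32 packetCount  = totalBytes / dataFormat.mBytesPerPacket;   // e.g. size / 34
UInt32 totalSamples = packetCount * dataFormat.mFramesPerPacket; // e.g. packets * 64
UInt32 outputBytes  = totalSamples * sizeof(SInt16);             // 2 bytes per sample, 128 per packet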
Then, for each packet, you will need to run through the decoding described in the post. In somewhat longer pseudo C-code, assuming you are getting arrays of bytes, to handle the header:
packet = GetPacket();
Header = (packet[0] << 8) | packet[1]; //Big-endian 16-bit value
step_index = Header & 0x007f; //Lower seven bits
predictor = Header & 0xff80; //Upper nine bits
for (i = 2; i < mBytesPerPacket; i++)
{
nibble = packet[i] & 0x0f; //Low Nibble
process that nibble, per the blogpost -- be careful on sign-extension!
nibble = (packet[i] & 0xf0) >> 4; //High Nibble
process that nibble, per the blogpost -- be careful on sign-extension!
}
The sign-extension above refers to the fact that the post involves handling each nibble both in an unsigned and a signed way. If the high bit of a nibble (bit 3) is 1, then it is negative; additionally, a bit-shift may do sign-extension. This is not handled in the above pseudocode.
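As a rough sketch of one way to handle that, using the nibble, step and predictor variables from the pseudocode above (bit 3 is the sign, bits 0-2 the magnitude):

int magnitude = nibble & 0x07;
int diff = (int)((magnitude + 0.5f) * step / 4);
if (nibble & 0x08)
    diff = -diff; // sign bit set: subtract the difference instead of adding it
predictor += diff;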