Java BlackBerry sending custom-made packet - sockets

I need to recreate this in Java for a BlackBerry device:
char cPacketData[1024];
int thisPacketLength=( X_PACKET_SPACE*12 ) + ( 20*X_PACKET_SPACE );
(*(int *) (cPacketData)) =X_PACKET_START;
(*(int *) (cPacketData+X_PACKET_SPACE)) =thisPacketLength;
(*(int *) (cPacketData+X_PACKET_SPACE*2)) =X_PACKET_POSITION_DATA;
(*(int *) (cPacketData+X_PACKET_SPACE*3)) =positionX;
(*(int *) (cPacketData+X_PACKET_SPACE*4)) =positionY;
send(mSocket,(const char *)&cPacketData,thisPacketLength,0);
I already know that I should use OutputStreamWriter, but I don't know how to recreate that packet in Java. Can you please help?
UPDATE
OK, I think I've got it right:
char[] payload = new char[100];
int start=9999;
payload[3] = (char)((start >> 24) & 0XFF);
payload[2] = (char)((start >> 16) & 0XFF);
payload[1] = (char)((start >> 8) & 0XFF);
payload[0] = (char)((start >> 0) & 0XFF);
int len=100;
payload[X_PACKET_SPACE+3] = (char)((len >> 24) & 0XFF);
payload[X_PACKET_SPACE+2] = (char)((len >> 16) & 0XFF);
payload[X_PACKET_SPACE+1] = (char)((len >> 8) & 0XFF);
payload[X_PACKET_SPACE] = (char)((len >> 0) & 0XFF);
_out.write(payload);
Seems to work fine, kinda 'oldSkewl' way of doing it - so I would appreciate it if you have a better option.
Just to confirm: it does work this way.

Resolved
Here is how I do it, so that my server side can receive the packets from the BlackBerry correctly.
OutputStream _out = conn.openOutputStream();
byte[] packet = new byte[60]; // 60-byte buffer (see the write() below); fields sit at 5-byte offsets
// 9999 at offset 0: start marker, little-endian
packet[3]= (byte)(9999 >>> 24);
packet[2]= (byte)(9999 >>> 16);
packet[1]= (byte)(9999 >>> 8);
packet[0]= (byte)(9999 >>> 0);
// 60 at offset 5: total packet length
packet[8]= (byte)(60 >>> 24);
packet[7]= (byte)(60 >>> 16);
packet[6]= (byte)(60 >>> 8);
packet[5]= (byte)(60 >>> 0);
// 4 at offset 10: packet type (cf. X_PACKET_POSITION_DATA in the C code)
packet[13]= (byte)(4 >>> 24);
packet[12]= (byte)(4 >>> 16);
packet[11]= (byte)(4 >>> 8);
packet[10]= (byte)(4 >>> 0);
// the BlackBerry PIN at offset 15
packet[18]= (byte)(_PIN >>> 24);
packet[17]= (byte)(_PIN >>> 16);
packet[16]= (byte)(_PIN >>> 8);
packet[15]= (byte)(_PIN >>> 0);
// 1 at offset 20
packet[23]= (byte)(1 >>> 24);
packet[22]= (byte)(1 >>> 16);
packet[21]= (byte)(1 >>> 8);
packet[20]= (byte)(1 >>> 0);
_out.write(packet, 0, 60);
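
Since the question asks for a better option: the repetition above can be folded into a small helper. A minimal sketch in the same spirit (the name writeIntLE and the field comments are mine; the layout - little-endian ints spaced 5 bytes apart, 60 bytes total - is taken from the code above):

// Illustrative helper: writes value into buf at offset as a little-endian 32-bit int.
static void writeIntLE(byte[] buf, int offset, int value) {
    buf[offset]     = (byte) value;
    buf[offset + 1] = (byte) (value >>> 8);
    buf[offset + 2] = (byte) (value >>> 16);
    buf[offset + 3] = (byte) (value >>> 24);
}

// The same packet as above, one line per field.
byte[] packet = new byte[60];
writeIntLE(packet, 0, 9999);  // start marker (X_PACKET_START in the C code)
writeIntLE(packet, 5, 60);    // total packet length
writeIntLE(packet, 10, 4);    // packet type
writeIntLE(packet, 15, _PIN); // device PIN
writeIntLE(packet, 20, 1);
_out.write(packet, 0, 60);

Note that java.io.DataOutputStream is not a drop-in replacement here: its writeInt writes big-endian, while the C receiver reads native little-endian ints.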

Related

Fatal error: Not enough bits to represent the passed value

I'm trying to use the Mikrotik API library written in Swift:
https://wiki.mikrotik.com/wiki/API_in_Swift
It works well when I'm sending small commands.
However, if I try to send a large script string, I get this error:
Fatal error: Not enough bits to represent the passed value
The code that crashes:
private func writeLen(_ command : String) -> Data {
    let data = command.data(using: String.Encoding.utf8)
    var len = data?.count ?? 0
    var dat = Data()
    if len < 0x80 {
        dat.append([UInt8(len)], count: 1)
    } else if len < 0x4000 {
        len = len | 0x8000;
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else if len < 0x20000 {
        len = len | 0xC00000;
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else if len < 0x10000000 {
        len = len | 0xE0000000;
        dat.append(Data(bytes: [UInt8(len >> 24)]))
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    } else {
        dat.append(Data(bytes: [0xF0]))
        dat.append(Data(bytes: [UInt8(len >> 24)]))
        dat.append(Data(bytes: [UInt8(len >> 16)]))
        dat.append(Data(bytes: [UInt8(len >> 8)]))
        dat.append(Data(bytes: [UInt8(len)]))
    }
    return dat
}
The fatal error appears in this part:
else if len < 0x4000 {
    len = len | 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len)]))
}
at line:
dat.append(Data(bytes: [UInt8(len)]))
The Data size at this moment is 1072 bytes and len equals 33840; UInt8 cannot be initialized with that value.
How can I edit the code to avoid the error?
I'm using Swift 4.2
EDIT:
Here is an example of the same logic but written in JavaScript
module.exports.encodeString = function encodeString(s) {
    var data = null;
    var len = Buffer.byteLength(s);
    var offset = 0;
    if (len < 0x80) {
        data = new Buffer(len + 1);
        data[offset++] = len;
    } else if (len < 0x4000) {
        data = new Buffer(len + 2);
        len |= 0x8000;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else if (len < 0x200000) {
        data = new Buffer(len + 3);
        len |= 0xC00000;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else if (len < 0x10000000) {
        data = new Buffer(len + 4);
        len |= 0xE0000000;
        data[offset++] = (len >> 24) & 0xff;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    } else {
        data = new Buffer(len + 5);
        data[offset++] = 0xF0;
        data[offset++] = (len >> 24) & 0xff;
        data[offset++] = (len >> 16) & 0xff;
        data[offset++] = (len >> 8) & 0xff;
        data[offset++] = len & 0xff;
    }
    data.utf8Write(s, offset);
    return data;
};
Maybe someone can spot the difference.
Thanks for the JavaScript translation. It clearly shows the problem, since the Swift version does not resemble it.
Let's take this stretch of the JavaScript, as it is the part you are stumbling over in Swift:
} else if (len < 0x4000) {
    data = new Buffer(len + 2);
    len |= 0x8000;
    data[offset++] = (len >> 8) & 0xff;
    data[offset++] = len & 0xff;
}
That is "translated" in Swift like this:
} else if len < 0x4000 {
    len = len | 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len)]))
}
Well, you can see at once that they are not at all the same. In the last line, the Swift version has forgotten the & 0xff.
If you put that in, everything starts working. And we can make it look a lot more like the JavaScript original too:
} else if len < 0x4000 {
    len |= 0x8000;
    dat.append(Data(bytes: [UInt8(len >> 8)]))
    dat.append(Data(bytes: [UInt8(len & 0xff)]))
}
So I'd say, yes, use the JavaScript as a guide and you'll be fine. If that last line doesn't feel "swifty" enough to you, then write it like this:
dat.append(Data(bytes: [UInt8(truncatingIfNeeded: len)]))
It's exactly the same result.
I don't guarantee that everything will work perfectly after you make those changes (the Swift code you showed still does not look to me like it does the same thing as the JavaScript), but at least the part where we write the length bytes into the start of the Data will work correctly.
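
For what it's worth, the same length-prefix scheme is easy to mirror in Java, where the (byte) cast truncates silently - the truncation Swift's plain UInt8(_:) refuses to do. A sketch following the JavaScript version above (thresholds and mask bytes taken from it; encodeLength is a made-up name):

// Sketch of the MikroTik API word-length prefix, per the JavaScript reference
// above (method name is illustrative). Returns only the prefix bytes; the
// UTF-8 command bytes follow it on the wire.
static byte[] encodeLength(int len) {
    if (len < 0x80) {
        return new byte[] { (byte) len };
    } else if (len < 0x4000) {
        len |= 0x8000;
        return new byte[] { (byte) (len >>> 8), (byte) len };
    } else if (len < 0x200000) {
        len |= 0xC00000;
        return new byte[] { (byte) (len >>> 16), (byte) (len >>> 8), (byte) len };
    } else if (len < 0x10000000) {
        len |= 0xE0000000;
        return new byte[] { (byte) (len >>> 24), (byte) (len >>> 16), (byte) (len >>> 8), (byte) len };
    } else {
        return new byte[] { (byte) 0xF0, (byte) (len >>> 24), (byte) (len >>> 16), (byte) (len >>> 8), (byte) len };
    }
}

Note it also uses 0x200000 as the three-byte threshold, where the Swift port has 0x20000 - another place where the two diverge.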

Load a PNG file from compilable C source code

I converted a PNG to C with png2c, and the source code compiles into my program without a problem. My question is: how can I access that image data for my gtk_image_new_from_pixbuf call? Does anyone have experience with this?
Edit: Here is the header code:
/* GIMP header image file format (RGB): /icon.h */
static unsigned int width = 255;
static unsigned int height = 255;
/* Call this macro repeatedly. After each use, the pixel data can be extracted */
#define HEADER_PIXEL(data,pixel) {\
pixel[0] = (((data[0] - 33) << 2) | ((data[1] - 33) >> 4)); \
pixel[1] = ((((data[1] - 33) & 0xF) << 4) | ((data[2] - 33) >> 2)); \
pixel[2] = ((((data[2] - 33) & 0x3) << 6) | ((data[3] - 33))); \
data += 4; \
}
static char *header_data =
"````````````````````````````````````````````````````````````````"
"````````````````````````````````````````````````````````````````"
"````````````````````````````````````````````````````````````````"
// more code
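
One route, given the header above: decode header_data yourself and hand the raw pixels to GdkPixbuf. HEADER_PIXEL unpacks four ASCII characters into one RGB pixel and advances its data pointer, so you can loop it width*height times (over a copy of the header_data pointer) into a guchar buffer of width*height*3 bytes, then wrap that buffer with gdk_pixbuf_new_from_data (GDK_COLORSPACE_RGB, no alpha, 8 bits per sample, rowstride = width*3) and pass the result to gtk_image_new_from_pixbuf. Keep the pixel buffer alive as long as the pixbuf, or supply a destroy callback that frees it.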

OpenCV equivalent for Matlab's rdivide?

For example, we have an expression using rdivide in Matlab:
B = bsxfun(@rdivide, A, A(4,:));
How can we write the equivalent expression in OpenCV?
OpenCV has a divide function, but it seems it can't be used for matrices with different dimensions:
Mat t1= Mat::ones(2,3,CV_64FC1);
Mat t2= Mat::ones(1,3,CV_64FC1);
Mat dst;
divide(t1,t2,dst);
This doesn't work, so we need to replicate one row into a matrix that matches the dimensions of t1, or call divide once per row in a loop.
My solution for OpenCV (A is modified in place):
for (int i = 0; i < A.rows; ++i)
{
    divide(A.row(i), A.row(3), A.row(i));
}
Is there any simpler way?
You can use the repeat function of OpenCV to replicate a matrix.
The equivalent OpenCV code for the above-mentioned MATLAB command is the following:
cv::Mat B = A/cv::repeat(A.row(3),4,1);
In addition to @sgarizvi's solution, you may find this wrapper mimicking Matlab's rdivide helpful:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace std;
using namespace cv;

Mat rdivide(const Mat& A, const Mat& B)
{
    int nx = A.cols / B.cols;
    int ny = A.rows / B.rows;
    return A / repeat(B, ny, nx);
}

Mat rdivide(const Mat& A, double d)
{
    return A / d;
}

int main()
{
    Mat1f A = (Mat1f(3, 5) << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
    Mat B = rdivide(A, A.row(2)); // Divide by matrix; works also for cols, e.g. A.col(2)
    Mat C = rdivide(A, 2);        // Divide by scalar
    cout << "A: " << endl << A << endl << endl;
    cout << "B: " << endl << B << endl << endl;
    cout << "C: " << endl << C << endl << endl;
    return 0;
}

How to generate a unique 36-char GUID in SQLite on iPhone

I am trying to create a table which is unique and has a primary key. I know that in SQLite we can create a unique AUTOINCREMENT ID (SQL AUTOINCREMENT), but is it possible to generate a unique GUID that is 36 chars long? The only reason to do that is to make it more unique.
This is the bit of code I use for UUIDs (I may have even found it here on Stack Overflow)...
+ (NSString *)GetUUID
{
    CFUUIDRef theUUID = CFUUIDCreate(NULL);
    CFStringRef string = CFUUIDCreateString(NULL, theUUID);
    CFRelease(theUUID);
    return [(NSString *)string autorelease];
}
I don't know how long the generated UUIDs are, because for my uses I don't care; you could check by passing the result into an NSLog call.
HTH, Pedro :)
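(For reference, CFUUIDCreateString produces the canonical 36-character form - 32 hex digits plus four hyphens - so it is exactly the length the question asks for.)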
I use this code to generate GUIDs on the iPhone - a category on NSString. Can't remember where I found it, but it works great.
#import "NSString_UniqueID.h"
static unichar x (unsigned int);
#implementation NSString (TWUUID)
+ (NSString*) stringWithUniqueId
{
CFUUIDRef uuid = CFUUIDCreate(NULL);
CFUUIDBytes b = CFUUIDGetUUIDBytes(uuid);
unichar unichars[22];
unichar* c = unichars;
*c++ = x(b.byte0 >> 2);
*c++ = x((b.byte0 & 3 << 4) + (b.byte1 >> 4));
*c++ = x((b.byte1 & 15 << 2) + (b.byte2 >> 6));
*c++ = x(b.byte2 & 63);
*c++ = x(b.byte3 >> 2);
*c++ = x((b.byte3 & 3 << 4) + (b.byte4 >> 4));
*c++ = x((b.byte4 & 15 << 2) + (b.byte5 >> 6));
*c++ = x(b.byte5 & 63);
*c++ = x(b.byte6 >> 2);
*c++ = x((b.byte6 & 3 << 4) + (b.byte7 >> 4));
*c++ = x((b.byte7 & 15 << 2) + (b.byte8 >> 6));
*c++ = x(b.byte8 & 63);
*c++ = x(b.byte9 >> 2);
*c++ = x((b.byte9 & 3 << 4) + (b.byte10 >> 4));
*c++ = x((b.byte10 & 15 << 2) + (b.byte11 >> 6));
*c++ = x(b.byte11 & 63);
*c++ = x(b.byte12 >> 2);
*c++ = x((b.byte12 & 3 << 4) + (b.byte13 >> 4));
*c++ = x((b.byte13 & 15 << 2) + (b.byte14 >> 6));
*c++ = x(b.byte14 & 63);
*c++ = x(b.byte15 >> 2);
*c = x(b.byte15 & 3);
CFRelease(uuid);
return [NSString stringWithCharacters: unichars length: 22];
}
#end
// Convert six-bit values into letters, numbers or _ or $ (64 characters in that set).
//------------------------------------------------------------------------------------
unichar x (unsigned int c)
{
    if (c < 26) return 'a' + c;
    if (c < 52) return 'A' + c - 26;
    if (c < 62) return '0' + c - 52;
    if (c == 62) return '$';
    return '_';
}

(iPhone) base64 encoding

I get a KERN_PROTECTION_FAILURE somewhere. (The stack trace shows it happening in the main loop, but it won't give me more details, because it seems memory got corrupted in a previous iteration of the loop. I have all the settings needed to see debug output correctly.)
When I remove the call to the following code, the symptom goes away.
(It verifies the receipt for an in-app purchase.)
- (NSString *)encode:(const uint8_t *)input length:(NSInteger)length {
    static char table[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
    NSMutableData *data = [NSMutableData dataWithLength:((length + 2) / 3) * 4];
    uint8_t *output = (uint8_t *)data.mutableBytes;
    for (NSInteger i = 0; i < length; i += 3) {
        NSInteger value = 0;
        for (NSInteger j = i; j < (i + 3); j++) {
            value <<= 8;
            if (j < length) {
                value |= (0xFF & input[j]);
            }
        }
        NSInteger index = (i / 3) * 4;
        output[index + 0] = table[(value >> 18) & 0x3F];
        output[index + 1] = table[(value >> 12) & 0x3F];
        output[index + 2] = (i + 1) < length ? table[(value >> 6) & 0x3F] : '=';
        output[index + 3] = (i + 2) < length ? table[(value >> 0) & 0x3F] : '=';
    }
    return [[[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding] autorelease];
}
I see there are other ways of doing base64 encoding (e.g. "How do I do base64 encoding on iphone-sdk?").
What I find weird is that the 'length' is computed differently:
((length + 2) / 3) * 4 above, and lentext*4/3+4 below.
Can anyone tell what is going on?
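(For the record: ((length + 2) / 3) * 4 is integer arithmetic for ceil(length/3) * 4, the exact size of padded Base64 output - for length 5 it gives (5 + 2)/3 * 4 = 8, i.e. two four-character groups. lentext*4/3 + 4 merely over-allocates a scratch buffer - for the same 5 bytes, 5*4/3 + 4 = 10 - and relies on outp to track how much was actually written. The first is exact; the second is a safe upper bound.)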
Besides, using the code below, I get a 'receipt data-malformed' error when I pass the encoded data to the Apple server.
+ (NSString *) base64StringFromData: (NSData *)data length: (int)length {
    int lentext = [data length];
    if (lentext < 1) return @"";
    char *outbuf = malloc(lentext*4/3+4); // add 4 to be sure
    if ( !outbuf ) return nil;
    const unsigned char *raw = [data bytes];
    int inp = 0;
    int outp = 0;
    int do_now = lentext - (lentext % 3);
    for ( outp = 0, inp = 0; inp < do_now; inp += 3 )
    {
        outbuf[outp++] = base64EncodingTable[(raw[inp] & 0xFC) >> 2];
        outbuf[outp++] = base64EncodingTable[((raw[inp] & 0x03) << 4) | ((raw[inp+1] & 0xF0) >> 4)];
        outbuf[outp++] = base64EncodingTable[((raw[inp+1] & 0x0F) << 2) | ((raw[inp+2] & 0xC0) >> 6)];
        outbuf[outp++] = base64EncodingTable[raw[inp+2] & 0x3F];
    }
    if ( do_now < lentext )
    {
        // Suspect block: tmpbuf holds at most 2 bytes, yet raw[inp+2] below can
        // index a third; and inp still equals do_now here, so once raw is
        // repointed at tmpbuf, raw[inp] reads far past it. inp would need
        // resetting to 0 (and tmpbuf sizing to 3) for this tail to be safe.
        char tmpbuf[2] = {0,0};
        int left = lentext % 3;
        for ( int i = 0; i < left; i++ )
        {
            tmpbuf[i] = raw[do_now+i];
        }
        raw = tmpbuf;
        outbuf[outp++] = base64EncodingTable[(raw[inp] & 0xFC) >> 2];
        outbuf[outp++] = base64EncodingTable[((raw[inp] & 0x03) << 4) | ((raw[inp+1] & 0xF0) >> 4)];
        if ( left == 2 ) outbuf[outp++] = base64EncodingTable[((raw[inp+1] & 0x0F) << 2) | ((raw[inp+2] & 0xC0) >> 6)];
        // Note: no '=' padding is ever appended, so the output is not standard
        // Base64 when lentext % 3 != 0 - a likely cause of the 'data-malformed' error.
    }
    NSString *ret = [[[NSString alloc] initWithBytes:outbuf length:outp encoding:NSASCIIStringEncoding] autorelease];
    free(outbuf);
    return ret;
}
I'm using this very small library for encoding/decoding Base64: http://imthi.com/blog/programming/iphone-sdk-base64-encode-decode.php
It does the job, and I assume you use a similar version of it. Did you try to call +initialize before the first usage?