I have a variable of type Data, called data, in Swift code using Xcode 10.1. I can see it in the debugger, but I don't know where the actual values are stored. It should contain a letter (one byte) and three UInt8 values, all 0-255, so it should be 4 bytes. The first _length shown is 6, so I don't know what else could have been added in (one reason I want to see what is actually in there). But I do not understand where the binary value is. The _rawValue does not seem to be it, because it contains 4.5 bytes; perhaps it is a pointer, as it says "RawPointer"?
Where are the actual bytes stored?
Edit:
By setting a new variable equal to data[i], I did figure out that the number of bytes is correct (I found the code was putting in things I didn't know about). My string is, for example, "!C 0 21 255 17", so 6 bytes.
However, I would still love an answer to my question: is there a way, during debugging, to view the elements without creating new variables to inspect?
Create an extension of Data as follows:
extension Data {
    public var bytes: [UInt8] {
        return [UInt8](self)
    }
}
You can view the bytes of data during debugging as:
po data.bytes
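If you prefer not to define an extension, the same thing should work as a one-off expression in the debugger (LLDB evaluates ordinary Swift expressions, so this is just the extension's body inlined):

po [UInt8](data)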
Just type po data as NSData in the debug console. You will see the hex bytes like <066465666768>
I'm using this code to listen to a port:
import std.socket;
import std.stdio;

TcpSocket listener;

int start() {
    ushort port = 61888;
    listener = new TcpSocket();
    assert(listener.isAlive);
    listener.blocking = false;
    listener.bind(new InternetAddress(port));
    listener.listen(10);
    writefln("Listening on port %d.", port);

    enum MAX_CONNECTIONS = 60;
    auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
    Socket[] reads;

    while (true)
    {
        socketSet.add(listener);
        foreach (sock; reads)
            socketSet.add(sock);
        Socket.select(socketSet, null, null);
    }
    return 0;
}
As far as I know, sockets give you the bytes as they are. I want to find a way to convert these bytes (which are essentially SQL requests) to strings. How can I do so, given that the input is UTF-8, a variable-width encoding?
You seem to have a few questions here.
How do I get chars from bytes?
Cast them with cast(char[]) st. This aliases the bytes, giving you a slice of the exact same data without a new allocation. You are not yet assuming that the bytes are valid UTF-8, but autodecoding or other parts of your program might complain if they aren't. You can run them through std.utf.validate if you want.
Or do basically the same thing with std.string.assumeUTF(st), which additionally asserts that the bytes are valid UTF, though only in debug builds.
How do I get a string from char[]?
You can unsafely alias the char[] with std.exception.assumeUnique(st), or you can allocate an immutable copy with st.idup or std.utf.toUTF8(st).
What if my fixed buffer of bytes contains invalid UTF-8 -- because it got cut off?
If that's a risk, you can use low-level std.utf tools (decodeFront and catching UTFException is one way) to peel off the valid UTF-8 and then check whether you have remaining bytes, or to check that the end of the input is valid UTF-8.
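To make that concrete, here is a minimal sketch of the buffer-boundary handling (in Swift, since this page mixes languages anyway; splitValidUTF8 is a made-up name). A UTF-8 scalar occupies at most 4 bytes, so only the last 3 bytes of a read can belong to a truncated scalar: trim 0 to 3 bytes off the end until the prefix decodes cleanly, and carry the tail into the next read.

import Foundation

// Split a fixed read buffer into its longest valid-UTF-8 prefix and the
// (possibly truncated) tail bytes to prepend to the next read.
func splitValidUTF8(_ buffer: [UInt8]) -> (decoded: String, tail: [UInt8])? {
    for cut in stride(from: buffer.count, through: max(buffer.count - 3, 0), by: -1) {
        if let s = String(bytes: buffer[..<cut], encoding: .utf8) {
            return (s, Array(buffer[cut...]))
        }
    }
    return nil // invalid UTF-8 even after allowing for a truncated tail
}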
How do I know if I've gotten a complete SQL statement with my fixed buffer socket I/O?
Instead of just passing the raw SQL statement over the wire, you can define a network protocol that includes information like the statement size, or that has end-of-statement markers you can scan for.
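For illustration, one hypothetical shape for such a protocol (sketched in Swift; frame is an invented helper): prefix every statement with a 4-byte big-endian length, so the receiver knows exactly how many bytes to collect before decoding them as UTF-8.

import Foundation

// Frame a statement as [4-byte big-endian length][UTF-8 payload].
func frame(_ statement: String) -> Data {
    let payload = Data(statement.utf8)
    var message = withUnsafeBytes(of: UInt32(payload.count).bigEndian) { Data($0) }
    message.append(payload)
    return message
}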
I have a cheat sheet for string type conversions, which links to a file of more elaborate unit tests.
Try this:
import std.exception: assumeUnique;
string s = assumeUnique(cast(char[]) ubyteArray);
I'm reading in a stream of data, 64 bytes to be exact. I want to read the 16 bits starting at the 480th bit of the incoming data. Unfortunately, I do not know what the incoming data type is; it's a bunch of random characters/boxes. Reading it as an unsigned short (v), I get the number I am looking for, which in this example is 13.
my $satt_id = unpack("x60v1", $msgdata); # $satt_id == 13
This results in $satt_id == 13, which is 00000000 00001101.
If I pull the data as 16 bits (b or B), the string does not reflect the value of 13, but rather is byte-swapped or reversed.
my $satt_idb = unpack("x60b16", $msgdata); # $satt_idb == "10110000 00000000"
my $satt_idB = unpack("x60B16", $msgdata); # $satt_idB == "00001101 00000000"
Why is this occurring? I want to alter the data and resend the message. That would be relatively easy if all of the message elements were the same size (16 bits: just pack it back the way it was unpacked), but some are 6, 4, 2, and 1 bits. Should I just use little-endian b and then reverse? After altering the data, reverse it back to the original order and pack it back as b?
Completely separate and not Perl-related, but this haunted me in a different utility. I just conceded by swapping the values in the enum designation. It worked, but it wasn't very viable once the number of bits got higher than 4 (16 different values).
Thanks!
EDIT: I'm guessing this is just related to binary notation? It apparently starts from the right? So $satt_idb is correct if you read right to left. So, to make it more user-friendly, just reverse, alter, then reverse again and repack?
EDIT2: Basically, I'm trying to make a user-friendly method of editing messages coming through a data stream. As I mentioned in the comments, if I want to edit a single bit from 0 to 1 (which in the message represents something as true/false), I don't want the user to have to worry about editing the octet of data received, just to select true/false from a dropdown (a sketch of the kind of helper this implies follows below).
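To illustrate the kind of edit meant here, a minimal sketch (Swift used purely for illustration; setBit is a hypothetical helper):

// Set or clear bit `index` of a byte without touching its neighbours.
func setBit(_ byte: UInt8, at index: Int, to value: Bool) -> UInt8 {
    let mask: UInt8 = 1 << index
    return value ? byte | mask : byte & ~mask
}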
If it works with v, it means the data is in little-endian byte order, which means
0b0000000000001101
is stored as
0b00001101 0b00000000
which is what you got.
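You can see the same layout in any language; for example, a quick Swift check (illustration only):

// The little-endian layout of 13 as a 16-bit value: low byte first,
// which is exactly the order unpack's "v" template reads.
let value: UInt16 = 13
let bytes = withUnsafeBytes(of: value.littleEndian) { Array($0) }
print(bytes) // [13, 0]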
Should I just use little-endian b and then reverse?
No. You are likely doing something incorrect if you are converting the numbers to a text representation (binary).
If you did somehow want the binary representation of the number, you could use
sprintf("%16b", $num)
I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that, according to the standard, when invoking C_Decrypt with a NULL output buffer, the function may return an output length somewhat longer than the actual required length. When padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario above, does it make sense that I still get a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To be clear: I pass 256 as the output buffer length in the appropriate parameter; see the parameters of C_Decrypt to observe what I mean.)
I am encountering this behavior with a SafeNet Luna device and am not sure what to make of it. Is it my code's fault for not querying the length first by passing NULL as the output buffer, or is this a bug on the HSM/PKCS#11 library side?
One more thing I should mention: when I provide a 272-byte (256+16) output buffer, the call succeeds and I get back my expected plaintext, plus the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is correctly updated to 256, not 272. This also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)
I have used the CKM.AES_CBC_PAD mechanism with C_Decrypt in the past. You have to make two calls to C_Decrypt (the first to get the size of the plain text, the second for the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plain text.
Below is the step-by-step code to show the behavior of decryption:
// Define the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);

// Variable to hold the size of the plain text, initialized to zero
LongRef lRefDec = new LongRef();

// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);

// 1st call to C_Decrypt: get the size of the plain text
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);

// Allocate a buffer to store the plain text
byte[] clearText = new byte[(int)lRefDec.value];

// 2nd call to C_Decrypt: the actual decryption
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails because the input given to encryption misled the decryption algorithm (the encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme protects it from being misinterpreted as network control bytes.
I've been stuck on this problem for the past 2 hours and I'm about to give up. I've Googled a lot and just can't find anything that works. I am using the newest version of Xcode.
I want to send a PNG image over Bluetooth Low Energy; the receiver in this case is Bleno. Things I've tried include converting the image to a Base64 string, and converting the image to a UInt8 array and sending each entry in the array one by one.
Nothing I do works, so the only "working" code I have is for converting an image to bytes, which is this:
let testImage = UIImage(named: "smallImage")
let imageData = UIImagePNGRepresentation(testImage!)
I already have all the connection code for BLE and am able to send a simple, short string to Bleno successfully. I also know through peripheral.maximumWriteValueLength that the maximum number of bytes I can send at once is 512, although I can imagine that using Low Energy lowers this maximum. I'm trying to send the data with peripheral.writeValue, which looks like this at the moment (array is the UInt8 array I tried):
peripheral.writeValue(Data(bytes:array), for: char, type: CBCharacteristicWriteType.withResponse)
The error I most often get is Error Domain=CBATTErrorDomain Code=13 "The value's length is invalid.", which I assume is because the data I am trying to send is more than 512 bytes. I tried sending the data in packets smaller than 512 bytes, but like I said, I just can't get it to work.
In short my question is this: How do I send a PNG image (in multiple parts) through BLE?
Edit: I got something to work, although it is pretty slow because it's not utilising the full 20 bytes per packet:
let buffer: [UInt8] = Array(UIImagePNGRepresentation(testImage!)!)
let start = "I:" + String(buffer.count)
peripheral.writeValue(start.data(using: .utf8)!, for: char, type: CBCharacteristicWriteType.withResponse)
buffer.forEach { b in
    let data = NSData(bytes: [UInt8(b)], length: MemoryLayout<UInt8>.size)
    peripheral.writeValue(data as Data, for: char, type: CBCharacteristicWriteType.withResponse)
}
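For reference, a chunked version might look something like this (a sketch, not tested against Bleno; it reuses testImage, peripheral, and char from above, assumes a 20-byte usable payload, and in production you would wait for the didWriteValueFor delegate callback before sending the next chunk):

let imageData = UIImagePNGRepresentation(testImage!)!
let chunkSize = 20 // query peripheral.maximumWriteValueLength(for: .withResponse) for the real limit
var offset = 0
while offset < imageData.count {
    let end = min(offset + chunkSize, imageData.count)
    // Send one chunk of up to chunkSize bytes.
    peripheral.writeValue(imageData.subdata(in: offset..<end), for: char, type: .withResponse)
    offset = end
}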
Thanks in advance!
I finally got some sort of PDF scanner to work. It reads into the callback functions without a problem, but when I try to NSLog the result from CGPDFScannerPopString I get a result like this:
ˆ ˛˝ # ˜˜˜ #˜' ˜˜˜ "˜ '˜˜ " ' ˜˜
No string to be found here...
Any ideas of what it can be?
This is my callback function:
static void op_Tj(CGPDFScannerRef s, void *info)
{
    CGPDFStringRef string;
    if (!CGPDFScannerPopString(s, &string))
        return;

    // CGPDFStringCopyTextString returns an owned CFStringRef, so hand it
    // to ARC with __bridge_transfer to avoid leaking it.
    NSLog(@"string: %@", (__bridge_transfer NSString *)CGPDFStringCopyTextString(string));
}
Thanks already!
Edit: Example PDF
You should be aware that a CGPDFStringRef is not an ASCII string or anything similar at all. Cf. http://developer.apple.com/library/mac/documentation/graphicsimaging/Reference/CGPDFString/Reference/reference.html --- it is a "series of bytes—unsigned integer values in the range 0 to 255" which have to be interpreted according to the latest PDF reference.
The PDF reference, in turn, will tell you that the interpretation of the bytes depends on the font used. While ASCII-like interpretations are common for European languages, they are not mandatory; and in the case of Asian languages, where font subset embedding is very common, the interpretation may look random.
CGPDFStringCopyTextString tries to interpret those bytes accordingly, but there does not have to be a sensible interpretation as a regular string.
EDIT: Inspection of the sample PDF Ron supplied shows that the encoding of the font in object 3 0 (which is dominant on most pages of the document) is indeed not a standard encoding but instead:
<</Type/Encoding
/Differences[0/.notdef/C/O/V/E/R/space/slash/H/L/F/underscore/W/B/five/eight/four
/zero/two/six/D/one/period/three/Z/I/N/G/U/S/T/colon/seven/A/M/P/Y
/plus/nine/X/hyphen/i/s/p/a/t/c/h/n/f/o/K/greater/equal/l/m/y/J/Q
/parenleft/parenright/comma/dollar/ampersand/d/r/v/b/e/u/w/k/g/x/bar
/quotesingle/asterisk/q/question/percent]
>>
Looking at the top of the first document page
COVER / HLF_CWEB_58408485 / 58408485 / 26DEC12 10.30.22Z
BRIEFING INCLUDES FOLLOWING FLIGHTS:
26DEC12 OR0337 EHAM0630 MUVR1710 PHOYE VSM+2/8 179
NEXT FLIGHTS OF AIRCRAFT:
26DEC12 OR0338 MUVR1830 MMUN1940 PHOYE VSM+2/8 213
26DEC12 OR0338 MMUN2105 EHAM0655 PHOYE GPT+2/7 263
27DEC12 OR0365 EHAM0900 TNCB1930 PHOYE BAH+1/8 272
27DEC12 OR0366 TNCB2030 TNCC2110 PHOYE BAH+1/8 250
27DEC12 OR0366 TNCC2250 EHAM0835 PHOYE ASD+1/8 199
that encoding seems to have been created by dealing out the next number, starting from one, for each newly required glyph. This obviously results in a highly individualistic encoding...
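To make concrete what decoding against such a table involves, here is a toy sketch (Swift; the map is a hard-coded excerpt of the /Differences array above, whereas real code would build it from the font's /Encoding and /ToUnicode entries):

// Codes 1-7 map to C, O, V, E, R, space, slash per the table above.
let differences: [UInt8: Character] = [1: "C", 2: "O", 3: "V", 4: "E", 5: "R", 6: " ", 7: "/"]
let codes: [UInt8] = [1, 2, 3, 4, 5, 6, 7]
let decoded = String(codes.compactMap { differences[$0] })
print(decoded) // "COVER /"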
That being said, the font object does include both an /Encoding entry and a /ToUnicode entry. Thus, if CGPDFStringCopyTextString were given a reference to the font here and really tried, it would easily be able to translate those bytes into the corresponding text correctly. That it doesn't achieve anything decent seems to indicate that it simply does not have the information about which font to interpret the bytes for --- I don't assume it doesn't try...
For accurate text extraction, therefore, you have to interpret the bytes in the CGPDFStringRef yourself, using the information about the font in the content stream. If you don't want to do that from scratch, you might be interested in PDFKitten, a framework for extracting data from PDFs in iOS. While it is not yet perfect (some font structures can baffle it), it is a good starting point.