How to convert a 64bit hex number to binary in Scala?

I want to convert the following hash "d41d8cd98f00b204e9800998ecf8427e" to a binary string. However, I can't seem to find a way to do that. Can someone show me how I can do it in Scala? Thanks

Use BigInt:
scala> BigInt("d41d8cd98f00b204e9800998ecf8427e", 16).toString(2)
res0: String = 11010100000111011000110011011001100011110000000010110010000001001110100110000000000010011001100011101100111110000100001001111110
The 16 above means the string should be parsed in hex (base 16), and the 2 means the output string should be in binary (base 2).
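One caveat worth noting: toString(2) drops leading zero bits, so a hash that happens to begin with a zero nibble yields fewer than 128 characters. A minimal left-padding sketch, assuming you want a fixed 128-character string:
val bits = BigInt("d41d8cd98f00b204e9800998ecf8427e", 16).toString(2)
val padded = ("0" * (128 - bits.length)) + bits // pad to a fixed 128-bit width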
If you want to dump the raw binary out to a file, you can convert the BigInt to a byte array and dump that:
scala> BigInt("d41d8cd98f00b204e9800998ecf8427e", 16).toByteArray
res1: Array[Byte] = Array(0, -44, 29, -116, -39, -113, 0, -78, 4, -23, -128, 9, -104, -20, -8, 66, 126)
Note that this gives you back 17 bytes instead of the 16 you'd expect for a 128-bit hash. That's because BigInt is a signed value, so it pads the byte array with an extra 0 in the most-significant-byte place to keep the value from being interpreted as negative. You could use res1.takeRight(16) to grab only the 16 bytes you're probably interested in.
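For completeness, here's a minimal sketch of dumping those 16 bytes to a file with java.nio (the output path is just an example):
import java.nio.file.{Files, Paths}

// Keep only the low 16 bytes, dropping the possible sign byte noted above
// (a hash with leading zero bytes would instead need left-padding)
val bytes = BigInt("d41d8cd98f00b204e9800998ecf8427e", 16).toByteArray.takeRight(16)
Files.write(Paths.get("/tmp/hash.bin"), bytes) // writes the raw 128-bit value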

Related

PDF417 decode and generate the same barcode using Swift

I have the following example of a PDF417 barcode (image not reproduced here).
It can be decoded with an online tool like zxing, with the following result:
5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e�¼ô���������¬‚C`Ìe%�æ‹�ÀsõbÿG)=‡x‚�qÀ1ß–[FzùŽûVû�É�üæ±RNI�Y[.H»Eàó¼åñüì²�tØ¿ªWp…Ã�{�Õ*
or with online-qrcode-generator as:
5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e~|~~~~~~~~~~d~C`~e%~~~~;To~B~{~dj9v~~Z[Xm~~"HP3~~LH~~~O~"S~~,~~~~~~~k1~~~u~Iw}SQ~fqX4~mbc_
(I don't know which encoding is used to encode this.)
The first part of the encoded key that the barcode contains is always known, and it is 5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e
The second part can be decoded from a Base64 string and always contains 88 bytes. In my case it is:
Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg==
I'm using Swift on an iOS device to generate this PDF417 barcode by decoding the provided Base64 string like this:
let base64Str = "Frz0DAAAAAAAAAAArIJDYMxlJQDmiwHAc/Vi/0cpPYd4ghlxwDHflltGevmO+1b7GckT/OZ/sVJOSRpZWy5Iu0Xg87zl8fzssg502L+qV3CFwxZ/ewjVKg=="
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let decodedData = Data(base64Encoded: base64Str.replacingOccurrences(of: "-", with: "+")
    .replacingOccurrences(of: "_", with: "/"))!
var codeData = knownKey.data(using: String.Encoding.ascii)
codeData?.append(decodedData)
let image = generatePDF417Barcode(from: codeData!)
let imageView = UIImageView(image: image!)
// The function to generate a PDF417 UIImage from the parsed Data
func generatePDF417Barcode(from codeData: Data) -> UIImage? {
    if let filter = CIFilter(name: "CIPDF417BarcodeGenerator") {
        filter.setValue(codeData, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 3, y: 3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
But I always get the wrong barcodes generated, and the difference can be seen visually.
Please help me correct the code to get the same result as the first barcode image.
I also have another example of a barcode (image not reproduced here):
The first part of the key is the same, but its second part is known as an int8 byte array, and I also have no idea how to generate the PDF417 barcode from it (with the key prepended) correctly.
Here's how I try:
let knownKey = "5wwwwwxwww0app5p3pewi0edpeapifxe0ixiwwdfxxi0xf5e"
let secretArray: [Int8] = [22, 124, 24, 12, 0, 0, 0, 0, 0, 0, 0, 0, 100, 127, 67, 96, -52, 101, 37, 0, -85, -123, 1, -64, 111, -28, 66, -27, 123, -25, 100, 106, 57, 118, -4, 16, 90, 91, 88, 109, -105, 126, 34, 72, 80, 51, -116, 28, 76, 72, -37, -24, -93, 79, -115, 34, 83, 18, -61, 44, -12, -13, -8, -59, -107, -9, -128, 107, 49, -50, 126, 13, -59, 50, -24, -43, 127, 81, -85, 102, 113, 88, 52, -60, 109, 98, 99, 95]
let secretUInt8 = secretArray.map { UInt8(bitPattern: $0) }
let secretData = Data(secretUInt8)
let keyArray: [UInt8] = Array(knownKey.utf8)
var keyData = Data(keyArray)
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData)
let imageView = UIImageView(image: image!)
There are a lot of things going on here. Gereon is correct that there are a lot of parameters. Choosing different parameters can lead to very different bar codes that decode identically. Your current barcode is "correct" (though a bit messy due to an Apple bug). It's just different.
I'll start with the short answer of how to make your data match the barcode you have. Then I'll walk through what you should probably actually do, and finally I'll get to the details of why.
First, here's the code you're looking for (but probably not the code you want, unless you have to match this barcode):
filter.setValue(codeData, forKey: "inputMessage")
filter.setValue(3, forKey: "inputCompactionMode") // This is good (and the big difference)
filter.setValue(5, forKey: "inputDataColumns") // This is fine, but probably unneeded
filter.setValue(0, forKey: "inputCorrectionLevel") // This is bad
PDF 417 defines several "compaction modes" to let it pack a truly impressive amount of information into a very small space while still offering excellent error detection and correction, and handling a lot of real-world scanning concerns. The default compaction mode only supports Latin text and basic punctuation. (It compacts even more if you only use uppercase Latin letters and space.) The first part of your string can be stored with text compaction, but the rest can't, so it has to switch to byte compaction.
Core Image actually does this switch shockingly badly by default (I opened FB9032718 to track). Rather than encoding in text and then switching to bytes, or just doing it all in bytes, it switches to bytes over and over again unnecessarily.
There's no way for you to configure multiple compaction methods, but you can just set it to byte, which is what value 3 is. And that's also how your source is doing it.
The second difference is the number of data columns, which drives how wide the output is. Your source is using 5, but Core Image is choosing 6 based on its default rules (which aren't fully documented).
Finally, your source has set the error correction level to 0, which is not recommended. For a message of this size, the minimum recommended error correction level is 3, which is what Core Image chooses by default.
If you just want a good barcode, and don't have to match this input, my recommendation would be to set inputCompactionMode to 3, and leave the rest as defaults. If you want a different aspect ratio, I'd use inputPreferredAspectRatio rather than modifying the number of data columns directly.
You may want to stop reading now. This was a very enjoyable puzzle to spend the morning on, so I'm going to dump a lot of details here.
If you want a deep dive into how this format works, I don't know anything currently available other than the ISO 15438 Spec, which will cost you around US$200. But there used to be some pages at GeoCities that explained a lot of this, and they're still available through the Wayback Machine.
There also aren't a lot of tools for decoding this stuff on the command line, but pdf417decode does a reasonable job. I'll use output from it to explain how I knew all the values.
The last tool you need is a way to turn jpeg output into black-and-white pbm files so that pdf417decode can read them. For that, I use the following (after installing netpbm):
cat /tmp/barcode.jpeg | jpegtopnm | ppmtopgm | pamthreshold | pamtopnm > new.pbm && ./pdf417decode -c -e new.pbm
With that, let's decode the first three rows of your existing barcode (with my commentary to the side). Everywhere you see "function output," that means this value is the output of some function that takes the other thing as the input:
0 7f54 0x02030000 (0) // Left marker
0 6a38 0x00000007 (7) // Number of rows function output
0 218c 0x00000076 (118) // Total number of non-error correcting codewords
0 0211 0x00000385 (901) // Latch to Byte Compaction mode
0 68cf 0x00000059 (89) // Data
0 18ec 0x0000021c (540)
0 02e7 0x00000330 (816)
0 753c 0x00000004 (4) // Number of columns function output
0 7e8a 0x00030001 (1) // Right marker
1 7f54 0x02030000 (0) // Left marker
1 7520 0x00010002 (2) // Security Level function output
1 704a 0x00010334 (820) // Data
1 31f2 0x000101a7 (423)
1 507b 0x000100c9 (201)
1 5e5f 0x00010319 (793)
1 6cf3 0x00010176 (374)
1 7d47 0x00010007 (7) // Number of rows function output
1 7e8a 0x00030001 (1) // Right marker
2 7f54 0x02030000 (0) // Left marker
2 6a7e 0x00020004 (4) // Number of columns function output
2 0fb2 0x0002037a (890) // Data
2 6dfa 0x000200d9 (217)
2 5b3e 0x000200bc (188)
2 3bbc 0x00020180 (384)
2 5e0b 0x00020268 (616)
2 29e0 0x00020002 (2) // Security Level function output
2 7e8a 0x00030001 (1) // Right marker
The next 3 lines will continue this pattern of function outputs. Note that the same information is encoded on the left and right, but in a different order. The system has a lot of redundancy, and can detect that it's seeing a mirror image of the barcode.
We don't care about the number of rows for this purpose, but given a current row of n and a total number of rows of N, the function is:
30 * (n/3) + ((N-1)/3)
Where / always means "integer, truncating division." Given there are 24 rows, on row 0, this is 0 + (24-1)/3 = 7.
The security level function's output is 2. Given a security level of e, the function is:
30 * (n/3) + 3*e + (N-1) % 3
=> 0 + 3*e + (23%3) = 2
=> 3*e + 2 = 2
=> 3*e = 0
=> e = 0
Finally, the number of columns can just be counted off in the output. For completeness, given a number of columns c, the function is:
30 * (n/3) + (c - 1)
=> 0 + c - 1 = 4
=> c = 5
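For illustration, here are the three indicator formulas as a small runnable sketch (Scala used purely for illustration; the function names are mine), reproducing the values decoded above:
// PDF417 row-indicator values, per the formulas above.
// n = row index (0-based), total = total rows, e = error correction (security)
// level, c = number of data columns; / is truncating integer division.
def rowsIndicator(n: Int, total: Int): Int = 30 * (n / 3) + (total - 1) / 3
def levelIndicator(n: Int, total: Int, e: Int): Int = 30 * (n / 3) + 3 * e + (total - 1) % 3
def colsIndicator(n: Int, c: Int): Int = 30 * (n / 3) + (c - 1)

rowsIndicator(0, 24)     // 7, as decoded on row 0
levelIndicator(1, 24, 0) // 2, as decoded on row 1
colsIndicator(2, 5)      // 4, as decoded on row 2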
If you look at the Data lines, you'll notice that they don't match your input data at all. That's because they have a complex encoding that I won't detail here. But for Byte compaction, you can think of it as similar to Base64 encoding, but instead of 64, it's Base900. Where Base64 encodes 3 bytes of data into 4 characters, Base900 encodes 6 bytes of data into 5 codewords.
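Here's a sketch of just that packing step (my own illustration in Scala; the real codeword stream also carries the mode latch and error-correction codewords shown above):
// Byte compaction sketch: 6 bytes -> 5 base-900 codewords.
// This works because 256^6 < 900^5. A trailing group of fewer than
// 6 bytes is handled differently (emitted one codeword per byte).
def packSix(bytes: Seq[Byte]): List[Int] = {
  require(bytes.length == 6)
  // Interpret the 6 bytes as one big-endian 48-bit number...
  val n = bytes.foldLeft(BigInt(0))((acc, b) => acc * 256 + (b & 0xff))
  // ...then peel off base-900 digits, least significant first,
  // prepending so the result ends up most significant first.
  (1 to 5).foldLeft((n, List.empty[Int])) { case ((rest, out), _) =>
    (rest / 900, (rest % 900).toInt :: out)
  }._2
}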
In the end, all these codewords get converted to symbols (actual lines and spaces). Which symbol is used depends on the line. Lines divisible by 3 use one symbol set, the lines after use a second, and the lines after that use a third. So the same codewords will look completely different on line 7 than on line 8.
Taken together, all these things make it very difficult to look at a barcode and decide how "different" it is from another barcode in terms of content. You just have to decode them and see what's going on.
CIPDF417BarcodeGenerator has a few more input parameters besides inputMessage that can influence how the generated barcode looks - see the documentation. Visual inspection/comparison of two codes only makes sense when you know that all these parameters, most importantly inputCorrectionLevel, were equal for both generators.
So, instead of a visual comparison, simply try decoding the barcodes using one of the many scanner apps out there, and compare the decoded bytes.
For your second example, try this:
// ...
var keyData = knownKey.data(using: .isoLatin1)!
keyData.append(secretData)
let image = generatePDF417Barcode(from: keyData)

Dart Convert IEEE-11073 32-bit FLOAT to a simple double

I don't have much experience working with these low-level bytes and numbers, so I've come here for help. I'm connecting to a Bluetooth thermometer in my Flutter app, and according to the device's documentation I get an array of numbers in IEEE-11073 FLOAT format (the documentation table and the company's worked example are not reproduced here). I'm attempting to convert these numbers to a plain temperature double, but can't figure out how. When I get a reading of 98.5 on the thermometer, the response is the array [113, 14, 0, 254].
Thanks for any help!
IEEE-11073 is a commonly used format in medical devices. The table you quoted has everything in it for you to decode the numbers, though it might be hard to decipher at first.
Let's take the first example you have: 0xFF00016C. This is a 32-bit number; the first byte is the exponent, and the last three bytes are the mantissa. Both are encoded in 2's complement representation:
Exponent, 0xFF, in 2's complement this is the number -1
Mantissa, 0x00016C, in 2's complement this is the number 364
(If you're not quite sure how numbers are encoded in 2's complement, please ask that as a separate question.)
The next thing we do is to make sure it's not a "special" value, as dictated in your table. Since the exponent you have is not 0 (it is -1), we know that you're OK. So, no special processing is needed.
Since the value is not special, its numeric value is simply: mantissa * 10^exponent. So, we have: 364*10^-1 = 36.4, as your example shows.
Your second example is similar. The exponent is 0xFE, and that's the number -2 in 2's complement. The mantissa is 0x000D97, which is 3479 in decimal. Again, the exponent isn't 0, so no special processing is needed. So you have: 3479*10^-2 = 34.79.
You say for the 98.5 value, you get the byte-array [113, 14, 0, 254]. Let's see if we can make sense of that. Your byte array, written in hex is: [0x71, 0x0E, 0x00, 0xFE]. I'm guessing you receive these bytes in the "reverse" order, so as a 32-bit hexadecimal this is actually 0xFE000E71.
We proceed similarly: Exponent is again -2, since 0xFE is how you write -2 in 2's complement using 8-bits. (See above.) Mantissa is 0xE71 which equals 3697. So, the number is 3697*10^-2 = 36.97.
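The question is about Dart/Flutter, but the arithmetic is language-agnostic; here is a minimal sketch of the whole decode (Scala, names mine, assuming the special values have already been ruled out as described above):
// IEEE-11073 32-bit FLOAT: top byte = exponent, low 3 bytes = mantissa,
// both two's complement; value = mantissa * 10^exponent.
def float32ToDouble(raw: Long): Double = {
  val exponent = (raw >> 24).toByte.toInt               // sign-extend the top byte
  var mantissa = (raw & 0xffffff).toInt
  if ((mantissa & 0x800000) != 0) mantissa -= 0x1000000 // sign-extend 24 bits
  mantissa * math.pow(10, exponent)
}

// The bytes arrive least-significant first: [113, 14, 0, 254] -> 0xFE000E71
val raw = Seq(113, 14, 0, 254).reverse.foldLeft(0L)((acc, b) => (acc << 8) | b)
float32ToDouble(raw) // 36.97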
You are claiming that this is actually 98.5. My best guess is that you are reading it in Fahrenheit, and your device is reporting in Celsius. If you do the math, you'll find that 36.97C = 98.55F, which is close enough. I'm not sure how you got the 98.5 number, but with devices like this, this outcome seems to be within the precision you can reasonably expect.
Hope this helps!
Here is something that I used to convert an sfloat16 to a double in Dart for our Flutter app.
import 'dart:math';

// Converts an IEEE-11073 16-bit SFLOAT (4-bit exponent, 12-bit mantissa,
// both two's complement) to a double.
double sfloat2double(int ieee11073) {
  const reservedValues = {
    0x07FE: 'PositiveInfinity',
    0x07FF: 'NaN',
    0x0800: 'NaN',
    0x0801: 'NaN',
    0x0802: 'NegativeInfinity'
  };
  var mantissa = ieee11073 & 0x0FFF;
  if (reservedValues.containsKey(mantissa)) {
    return 0.0; // special value; treated as an error here
  }
  if ((ieee11073 & 0x0800) != 0) {
    // Sign-extend the 12-bit mantissa
    mantissa = -((~ieee11073 & 0x0FFF) + 1);
  } else {
    mantissa = ieee11073 & 0x0FFF;
  }
  var exponent = ieee11073 >> 12;
  if (((ieee11073 >> 12) & 0x8) != 0) {
    // Sign-extend the 4-bit exponent
    exponent = -((~(ieee11073 >> 12) & 0x0F) + 1);
  } else {
    exponent = (ieee11073 >> 12) & 0x0F;
  }
  final magnitude = pow(10, exponent);
  return (mantissa * magnitude).toDouble();
}

Lempel-Ziv encoding - split string

I am given the string 1100011110000011111000000 and I have to encode it with the basic Lempel-Ziv algorithm. I know how to do it; the problem is that the first two symbols are both 1. I parse the string into an ordered dictionary of substrings, so the result is:
1, 10, 0, 01, 11, 100, 00, 011, 111, 000
I want to number each new substring, but then 1 = No. 1, 10 = No. 2 and 0 = No. 3. That means I can't encode the substring 10, because there is no value for 0 yet.
Another problem is that at the end there are three more 0s, but I already have the substring 000. What should I do with the last 000?
So, is there a different way to parse the string into substrings?
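For reference, here's a minimal sketch (Scala, purely illustrative, names mine) of the greedy dictionary-building parse described above; it reproduces the phrase list for this input, with the trailing 000 simply repeating an existing phrase:
// LZ78-style parse: repeatedly take the shortest prefix of the remaining
// input that is not yet in the dictionary, and add it as a new phrase.
def lz78Parse(s: String): Vector[String] = {
  val phrases = scala.collection.mutable.LinkedHashSet.empty[String]
  var i = 0
  while (i < s.length) {
    var j = i + 1
    while (j < s.length && phrases.contains(s.substring(i, j))) j += 1
    phrases += s.substring(i, j) // a repeat can only happen at the very end
    i = j
  }
  phrases.toVector
}

lz78Parse("1100011110000011111000000")
// Vector(1, 10, 0, 01, 11, 100, 00, 011, 111, 000)
Note that in standard LZ78 each phrase is emitted as (index of its longest proper prefix, final symbol), so 10 becomes (1, 0) and never needs 0 to be in the dictionary first; a final fragment that exactly matches an existing phrase is typically emitted as just that phrase's index.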

Binary in Swift

I'm trying to convert binary values and the like in Swift, and this is my code:
let hexa = String(Int(a, radix: 2)!, radix: 16) // converting binary to hexadecimal
I am getting the error:
Cannot convert value of type 'Int' to expected argument type 'String'
You're misunderstanding how integers are stored.
There is no notion of a "decimal" Int, a "hexadecimal" Int, etc. When you have an Int in memory, it's always binary (radix 2). It's stored as a series of 64 or 32 bits.
When you try to assign to the Int a value like 10 (decimal), 0xA (hex), 0b1010 (binary), the compiler does the necessary parsing to convert your source code's string representation of that Int, into a series of bits that can be stored in the Int's 64 or 32 bits of memory.
When you try to use the Int, for example with print(a), there is conversion behind the scenes to take that Int's binary representation in memory, and convert it into a String whose symbols represent an Int in base 10, using the symbols we're used to (0-9).
On a more fundamental level, it helps to understand that the notion of a radix is a construct devised purely for our convenience when working with numbers. Abstractly, a number has a magnitude that is a distinct entity, uncoupled from any radix. A magnitude can be represented concretely using a textual representation and a radix.
This part, Int(a, radix: 2), doesn't make sense. Even supposing such an initializer (Int.init?(Int, radix: Int)) existed, it wouldn't do anything! If a = 5, then a is stored as binary 0b101. This would then be parsed from binary into an Int, giving you... 0b101, or the same 5 you started with.
On the other hand, Strings can have a notion of a radix, because they can be a textual representation of a decimal Int, a hex Int, etc. To convert from a String that contains a number, you use Int.init?(String, radix: Int). The key here is that it takes a String parameter.
let a = 10 // decimal 10 is stored in memory as binary 1010
let hexa = String(a, radix: 16) // the Int is converted to a String, "a"
let b = Int("1010", radix: 2)! // and a binary String can be parsed into an Int
let hexb = String(b, radix: 16) // "a" again: binary text to hex text

How to convert Mac address as String format to Long in scala?

My question is about converting a String to a Long in Scala, where the string is formatted as a MAC address, for example:
"fe:1a:90:20:00:00" and "a0:b4:ac:c0:00:01"
How can I convert these to a Long using Scala?
This may help:
mac.split(":").map(Integer.parseInt(_,16)).foldLeft(0L) {case (acc,item) => acc*256+item}
First operation
mac.split(":")
gives an Array of strings, like Array("fe","1a","90","20","00","00"). Each item of this array is a base-16 encoded integer, so we can convert it to an array of integers with:
val arrayOfInts = mac.split(":").map(Integer.parseInt(_,16))
which for the first example gives Array(254, 26, 144, 32, 0, 0).
The last thing to do is to convert the array of integers to a Long. Each item of the array is in the range [0, 255], so multiplying the accumulator by 256 at each step makes exactly enough room for the next byte; the last operation
arrayOfInts.foldLeft(0L) {case (acc,item) => acc*256+item}
does the conversion. It starts from 0L and, for each item, multiplies the accumulator by 256 and adds the item:
(((((0L*256 + 254)*256 + 26)*256 + 144)*256 + 32)*256 + 0)*256 + 0
I found the answer on GitHub:
https://github.com/ppurang/flocke/blob/master/core/src/main/scala/org/purang/net/flocke/MacAddress.scala
You could use the Java standard library:
def hex2Long(hex: String, removeDelimiter: String = ""): Long = {
  java.lang.Long.valueOf(hex.replaceAll(removeDelimiter, ""), 16)
}
scala> hex2Long("00:00:00:00:2f:59", ":")
res2: Long = 12121