Scala hex string to bytes - scala

Is there a neat way in Scala to convert a hexadecimally encoded String to a protobuf ByteString (and back again)?

You can use DatatypeConverter without additional dependencies on Java 8 (note: javax.xml.bind was removed from the JDK in Java 11, so on newer JDKs this needs the JAXB API dependency):
import com.google.protobuf.ByteString
import javax.xml.bind.DatatypeConverter
val hexString: String = "87C2D268483583714CD5"
val byteString: ByteString = ByteString.copyFrom(
  DatatypeConverter.parseHexBinary(hexString)
)
val originalString = DatatypeConverter.printHexBinary(byteString.toByteArray)

You can use java.math.BigInteger to parse the String, get an Array[Byte], and from there turn it into a ByteString. Here is the first step:
import java.math.BigInteger
val s = "f263575e7b00a977a8e9a37e08b9c215feb9bfb2f992b2b8f11e"
val bs = new BigInteger(s, 16).toByteArray
The content of bs is now:
Array(0, -14, 99, 87, 94, 123, 0, -87, 119, -88, -23, -93, 126, 8, -71, -62, 21, -2, -71, -65, -78, -7, -110, -78, -72, -15, 30)
You can then use (for example) the copyFrom method (JavaDoc here) to turn it into a ByteString.
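One caveat worth adding here (my observation, not part of the answer): BigInteger is sign-aware, so the round trip isn't byte-exact. When the first hex digit is 8-f it prepends a 0x00 sign byte (the leading 0 visible in the array above), and it silently drops leading zero bytes. A sketch of stripping the sign byte, assuming an even-length hex string:

```scala
import java.math.BigInteger

val s = "f263575e7b00a977a8e9a37e08b9c215feb9bfb2f992b2b8f11e"
val raw = new BigInteger(s, 16).toByteArray
// BigInteger added a 0x00 sign byte because the top bit of 0xf2 is set;
// drop it so the array length matches s.length / 2
val bytes = if (raw.length > s.length / 2) raw.drop(1) else raw
```

Leading "00" pairs in the input are still lost with this approach, so for arbitrary hex data the DatatypeConverter/HexFormat answers are safer.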

Starting with Java 17, you can use a standard API for parsing hex strings into a byte array.
import java.util.HexFormat
HexFormat.of.parseHex("d719af")
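HexFormat also provides the inverse, formatHex, so a full round trip needs no other helpers:

```scala
import java.util.HexFormat

val hex = HexFormat.of()
val bytes = hex.parseHex("d719af")  // Array(-41, 25, -81)
val back = hex.formatHex(bytes)     // "d719af"
```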

Since the title of the question doesn't mention protobuf: if anyone is looking for a solution that doesn't require any dependencies for converting a hex String to Seq[Byte], for an array of any size (don't forget to add input validation as necessary):
val zeroChar: Byte = '0'.toByte
val aChar: Byte = 'a'.toByte
def toHex(bytes: Seq[Byte]): String = bytes.map(b => f"$b%02x").mkString
def toBytes(hex: String): Seq[Byte] = {
  val lowerHex = hex.toLowerCase
  val (result: Array[Byte], startOffset: Int) =
    if (lowerHex.length % 2 == 1) {
      // Odd length: the first hex digit maps to a byte on its own
      val r = new Array[Byte]((lowerHex.length >> 1) + 1)
      r(0) = toNum(lowerHex(0))
      (r, 1)
    } else {
      // Even length
      (new Array[Byte](lowerHex.length >> 1), 0)
    }
  var inputIndex = startOffset
  var outputIndex = startOffset
  while (outputIndex < result.length) {
    val byteValue = (toNum(lowerHex(inputIndex)) * 16) +
      toNum(lowerHex(inputIndex + 1))
    result(outputIndex) = byteValue.toByte
    inputIndex += 2
    outputIndex += 1
  }
  result
}
def toNum(lowerHexChar: Char): Byte =
  (if (lowerHexChar < 'a') lowerHexChar.toByte - zeroChar
   else 10 + lowerHexChar.toByte - aChar).toByte
https://scalafiddle.io/sf/PZPHBlT/2

A simple solution without any dependency or intermediate object could be:
def toBytes(hex: String): Seq[Byte] = {
  assert(hex.length % 2 == 0) // only handle the canonical, even-length case
  hex.sliding(2, 2).map(Integer.parseInt(_, 16).toByte).toSeq
}
assert(toBytes("1234") == Seq[Byte](18, 52))

Related

for loop into map method with Spark using Scala

Hi, I want to use a "for" inside a map method in Scala. How can I do it?
For example, here for each line read I want to generate a random word:
val rdd = file.map(line => (line, {
  val chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  val word = new String;
  val res = new String;
  val rnd = new Random;
  val len = 4 + rnd.nextInt((6-4)+1);
  for (i <- 1 to len) {
    val char = chars(rnd.nextInt(51));
    word.concat(char.toString);
  }
  word;
}))
My current output is :
Array[(String, String)] = Array((1,""), (2,""), (3,""), (4,""), (5,""), (6,""), (7,""), (8,""), (9,""), (10,""), (11,""), (12,""), (13,""), (14,""), (15,""), (16,""), (17,""), (18,""), (19,""), (20,""), (21,""), (22,""), (23,""), (24,""), (25,""), (26,""), (27,""), (28,""), (29,""), (30,""), (31,""), (32,""), (33,""), (34,""), (35,""), (36,""), (37,""), (38,""), (39,""), (40,""), (41,""), (42,""), (43,""), (44,""), (45,""), (46,""), (47,""), (48,""), (49,""), (50,""), (51,""), (52,""), (53,""), (54,""), (55,""), (56,""), (57,""), (58,""), (59,""), (60,""), (61,""), (62,""), (63,""), (64,""), (65,""), (66,""), (67,""), (68,""), (69,""), (70,""), (71,""), (72,""), (73,""), (74,""), (75,""), (76,""), (77,""), (78,""), (79,""), (80,""), (81,""), (82,""), (83,""), (84,""), (85,""), (86...
I don't know why the right side is empty.
There's no need for a var here. It's a one-liner:
Seq.fill(len)(chars(rnd.nextInt(chars.length))).mkString
This creates a sequence of Char of length len by repeatedly calling chars(rnd.nextInt(chars.length)), then turns it into a String. (Note that nextInt's bound is exclusive, so use chars.length rather than 51, or the last character can never be chosen.)
Thus you'll get something like this :
import org.apache.spark.rdd.RDD
import scala.util.Random
val chars = ('a' to 'z') ++ ('A' to 'Z')
val rdd = file.map(line => {
  val randomWord = {
    val rnd = new Random
    val len = 4 + rnd.nextInt((6 - 4) + 1)
    Seq.fill(len)(chars(rnd.nextInt(chars.length))).mkString
  }
  (line, randomWord)
})
word.concat doesn't modify word but returns a new String. You could make word a var and append to it:
var word = new String
....
for {
  ...
  word += char
  ...
}
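As a side note (not part of the answers above), the standard library's Random.alphanumeric can do the sampling too; filtering to letters gives the same alphabet as the question's chars string. A sketch:

```scala
import scala.util.Random

val rnd = new Random
val len = 4 + rnd.nextInt(3) // random length in 4..6, as in the question
// alphanumeric is an infinite lazy sequence of random [a-zA-Z0-9] chars
val word = Random.alphanumeric.filter(_.isLetter).take(len).mkString
```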

How to evaluate binary key-value?

I am writing an external merge sort for big binary input files in Scala.
I generate input using gensort and evaluate the output using valsort from this website: http://www.ordinal.com/gensort.html
I read 100 bytes at a time: the first 10 bytes for the Key (List[Byte]) and the remaining 90 bytes for the Value (List[Byte]).
After sorting, my output is evaluated by valsort, and it's wrong.
But when I use ASCII input, my output is right.
So I wonder how to sort binary inputs the right way?
valsort said that my first unordered record is 56; here is what I printed out:
50 --> Key(List(-128, -16, 5, -10, -83, 23, -107, -109, 42, -11))
51 --> Key(List(-128, -16, 5, -10, -83, 23, -107, -109, 42, -11))
52 --> Key(List(-128, -10, -10, 68, -94, 37, -103, 30, 90, 16))
53 --> Key(List(-128, -10, -10, 68, -94, 37, -103, 30, 90, 16))
54 --> Key(List(-128, -10, -10, 68, -94, 37, -103, 30, 90, 16))
55 --> Key(List(-128, -10, -10, 68, -94, 37, -103, 30, 90, 16))
56 --> Key(List(-128, 0, -27, -4, -82, -82, 121, -125, -22, 99))
57 --> Key(List(-128, 0, -27, -4, -82, -82, 121, -125, -22, 99))
58 --> Key(List(-128, 0, -27, -4, -82, -82, 121, -125, -22, 99))
59 --> Key(List(-128, 0, -27, -4, -82, -82, 121, -125, -22, 99))
60 --> Key(List(-128, 7, -65, 118, 121, -12, 48, 50, 59, -8))
61 --> Key(List(-128, 7, -65, 118, 121, -12, 48, 50, 59, -8))
62 --> Key(List(-128, 7, -65, 118, 121, -12, 48, 50, 59, -8))
This is my external sorting code:
package externalsorting
import java.io.{BufferedOutputStream, File, FileOutputStream}
import java.nio.channels.FileChannel
import java.util.Calendar
import scala.collection.mutable
import readInput._
import scala.collection.mutable.ListBuffer
/**
* Created by hminle on 12/5/2016.
*/
object ExternalSortingExample extends App {
  val dir: String = "C:\\ShareUbuntu\\testMerge"
  val listFile: List[File] = Utils.getListOfFiles(dir)
  listFile foreach (x => println(x.getName))
  var fileChannelsInput: List[(FileChannel, Boolean)] =
    listFile.map { input => (Utils.getFileChannelFromInput(input), false) }
  val tempDir: String = dir + "/tmp/"
  val tempDirFile: File = new File(tempDir)
  val isSuccessful: Boolean = tempDirFile.mkdir()
  if (isSuccessful) println("Created temp dir successfully")
  else println("Creating temp dir failed")
  var fileNameCounter: Int = 0
  val chunkSize = 100000

  // Split big input files into small sorted chunks
  while (!fileChannelsInput.isEmpty) {
    if (Utils.estimateAvailableMemory() > 400000) {
      val fileChannel = fileChannelsInput(0)._1
      val (chunks, isEndOfFileChannel) = Utils.getChunkKeyAndValueBySize(chunkSize, fileChannel)
      if (isEndOfFileChannel) {
        fileChannel.close()
        fileChannelsInput = fileChannelsInput.drop(1)
      } else {
        val sortedChunk: List[(Key, Value)] = Utils.getSortedChunk(chunks)
        val fileName: String = tempDir + "partition-" + fileNameCounter
        Utils.writePartition(fileName, sortedChunk)
        fileNameCounter += 1
      }
    } else {
      println(Thread.currentThread().getName + ": not enough free memory to continue processing: " + Utils.estimateAvailableMemory())
    }
  }

  val listTempFile: List[File] = Utils.getListOfFiles(tempDir)
  val start = Calendar.getInstance().getTime
  val tempFileChannels: List[FileChannel] = listTempFile.map(Utils.getFileChannelFromInput(_))
  val binaryFileBuffers: List[BinaryFileBuffer] = tempFileChannels.map(BinaryFileBuffer(_))
  binaryFileBuffers foreach (x => println(x.toString))
  val pq1: ListBuffer[BinaryFileBuffer] = ListBuffer.empty
  binaryFileBuffers.filter(!_.isEmpty()).foreach(pq1.append(_))
  val outputDir: String = dir + "/mergedOutput"
  val bos = new BufferedOutputStream(new FileOutputStream(outputDir))
  var count = 0

  // Start merging temporary files
  while (pq1.length > 0) {
    val pq2 = pq1.toList.sortWith(_.head()._1 < _.head()._1)
    val buffer: BinaryFileBuffer = pq2.head
    val keyVal: (Key, Value) = buffer.pop()
    val byteArray: Array[Byte] = Utils.flattenKeyValue(keyVal).toArray[Byte]
    bos.write(byteArray)
    if (buffer.isEmpty()) {
      buffer.close()
      pq1 -= buffer
    }
    count += 1
  }
  bos.close()
}
This is BinaryFileBuffer.scala, which is just a wrapper:
package externalsorting
import java.nio.channels.FileChannel
import readInput._
/**
* Created by hminle on 12/5/2016.
*/
object BinaryFileBuffer {
  def apply(fileChannel: FileChannel): BinaryFileBuffer = {
    val buffer: BinaryFileBuffer = new BinaryFileBuffer(fileChannel)
    buffer.reload()
    buffer
  }
}

class BinaryFileBuffer(fileChannel: FileChannel) extends Ordered[BinaryFileBuffer] {
  private var cache: Option[(Key, Value)] = _

  def isEmpty(): Boolean = cache == None

  def head(): (Key, Value) = cache.get

  def pop(): (Key, Value) = {
    val answer = head()
    reload()
    answer
  }

  def reload(): Unit = {
    this.cache = Utils.get100BytesKeyAndValue(fileChannel)
  }

  def close(): Unit = fileChannel.close()

  def compare(that: BinaryFileBuffer): Int = {
    this.head()._1.compare(that.head()._1)
  }
}
This is my Utils.scala:
package externalsorting
import java.io.{BufferedOutputStream, File, FileOutputStream}
import java.nio.ByteBuffer
import java.nio.channels.FileChannel
import java.nio.file.Paths
import readInput._
import scala.annotation.tailrec
import scala.collection.mutable.ListBuffer
/**
* Created by hminle on 12/5/2016.
*/
object Utils {
  def getListOfFiles(dir: String): List[File] = {
    val d = new File(dir)
    if (d.exists() && d.isDirectory) {
      d.listFiles.filter(_.isFile).toList
    } else List[File]()
  }

  def get100BytesKeyAndValue(fileChannel: FileChannel): Option[(Key, Value)] = {
    val size = 100
    val buffer = ByteBuffer.allocate(size)
    buffer.clear()
    val numOfByteRead = fileChannel.read(buffer)
    buffer.flip()
    if (numOfByteRead != -1) {
      val data: Array[Byte] = new Array[Byte](numOfByteRead)
      buffer.get(data, 0, numOfByteRead)
      val (key, value) = data.splitAt(10)
      Some(Key(key.toList), Value(value.toList))
    } else {
      None
    }
  }

  def getFileChannelFromInput(file: File): FileChannel = {
    val fileChannel: FileChannel = FileChannel.open(Paths.get(file.getPath))
    fileChannel
  }

  def estimateAvailableMemory(): Long = {
    System.gc()
    val runtime: Runtime = Runtime.getRuntime
    val allocatedMemory: Long = runtime.totalMemory() - runtime.freeMemory()
    val presFreeMemory: Long = runtime.maxMemory() - allocatedMemory
    presFreeMemory
  }

  def writePartition(dir: String, keyValue: List[(Key, Value)]): Unit = {
    val byteArray: Array[Byte] = flattenKeyValueList(keyValue).toArray[Byte]
    val bos = new BufferedOutputStream(new FileOutputStream(dir))
    bos.write(byteArray)
    bos.close()
  }

  def flattenKeyValueList(keyValue: List[(Key, Value)]): List[Byte] = {
    keyValue flatten {
      case (Key(keys), Value(values)) => keys ::: values
    }
  }

  def flattenKeyValue(keyVal: (Key, Value)): List[Byte] = {
    keyVal._1.keys ::: keyVal._2.values
  }

  def getChunkKeyAndValueBySize(size: Int, fileChannel: FileChannel): (List[(Key, Value)], Boolean) = {
    val oneKeyValueSize = 100
    val countMax = size / oneKeyValueSize
    var isEndOfFileChannel: Boolean = false
    var count = 0
    val chunks: ListBuffer[(Key, Value)] = ListBuffer.empty
    do {
      val keyValue = get100BytesKeyAndValue(fileChannel)
      if (keyValue.isDefined) chunks.append(keyValue.get)
      isEndOfFileChannel = !keyValue.isDefined
      count += 1
    } while (!isEndOfFileChannel && count < countMax)
    (chunks.toList, isEndOfFileChannel)
  }
}
How I define Key and Value:
case class Key(keys: List[Byte]) extends Ordered[Key] {
  def isEmpty(): Boolean = keys.isEmpty

  def compare(that: Key): Int = compare_aux(this.keys, that.keys)

  private def compare_aux(keys1: List[Byte], keys2: List[Byte]): Int = {
    (keys1, keys2) match {
      case (Nil, Nil) => 0
      case (list, Nil) => 1
      case (Nil, list) => -1
      case (hd1 :: tl1, hd2 :: tl2) =>
        if (hd1 > hd2) 1
        else if (hd1 < hd2) -1
        else compare_aux(tl1, tl2)
    }
  }
}
case class Value(values: List[Byte])
I've found the answer: reading binary and ASCII input is different.
In what order should the sorted file be?
"For binary records (GraySort or MinuteSort), the 10-byte keys should be ordered as arrays of unsigned bytes. The memcmp() library routine can be used for this purpose."
For sorting binary data, I need to compare the signed bytes as if they were unsigned.
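To make that concrete, here is a minimal sketch (mine, not from the original answer) of a memcmp-style comparison that treats each byte as unsigned, which is the ordering gensort/valsort expect; it would take the place of the signed compare_aux in Key above:

```scala
// Compare two keys byte-by-byte as unsigned values (memcmp-style)
def compareUnsigned(a: List[Byte], b: List[Byte]): Int =
  a.zip(b).collectFirst {
    case (x, y) if x != y => (x & 0xff) - (y & 0xff)
  }.getOrElse(a.length - b.length)

// -10 is 0xF6 = 246 unsigned, so it sorts AFTER 0 -- which is exactly
// why valsort flagged record 56 in the signed ordering printed above
val cmp = compareUnsigned(List[Byte](-10), List[Byte](0))
```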

How to count the number of occurrences of an element with scala/spark?

I had a file that contained a list of elements like this
00|905000|20160125204123|79644809999||HGMTC|1||22|7905000|56321647569|||34110|I||||||250995210056537|354805064211510||56191|||38704||A|||11|V|81079681404134|5||||SE|||G|144|||||||||||||||Y|b00534589.huawei_anadyr.20151231184912||1|||||79681404134|0|||+##+1{79098509982}2{2}3{2}5{79644809999}6{0000002A7A5AC635}7{79681404134}|20160125|
Through a series of steps, I managed to convert it to a list of elements like this
(902996760100000,CompactBuffer(6, 5, 2, 2, 8, 6, 5, 3))
Where 905000 and 902996760100000 are keys and 6, 5, 2, 2, 8, 6, 5, 3 are values. Values can be numbers from 1 to 8. Is there any way to count the number of occurrences of these values using Spark, so the result looks like this?
(902996760100000, 0_1, 2_2, 1_3, 0_4, 2_5, 2_6, 0_7, 1_8)
I could do it with if-else blocks and stuff, but that won't be pretty, so I wondered if there are any instruments I could use in Scala/Spark.
This is my code.
class ScalaJob(sc: SparkContext) {
  def run(cdrPath: String): RDD[(String, Iterable[String])] = {
    // read the file
    val fileCdr = sc.textFile(cdrPath)
    // find values in every raw CDR
    val valuesCdr = fileCdr.map { dataRaw =>
      val p = dataRaw.split("[|]", -1)
      (p(1), ScalaJob.processType(ScalaJob.processTime(p(2)) + "_" + p(32)))
    }
    valuesCdr.groupByKey()
  }
}
Any advice on optimizing it would be appreciated. I'm really new to scala/spark.
First, Scala is a type-safe language, and so is Spark's RDD API, so it's highly recommended to use the type system instead of going around it by "encoding" everything into Strings.
So I'll suggest a solution that creates an RDD[(String, Seq[(Int, Int)])] (with the second item in the tuple being a sequence of (ID, count) tuples) rather than an RDD[(String, Iterable[String])], which seems less useful.
Here's a simple function that counts the occurrences of 1 to 8 in a given Iterable[Int]:
def countValues(l: Iterable[Int]): Seq[(Int, Int)] = {
  (1 to 8).map(i => (i, l.count(_ == i)))
}
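Applied to the CompactBuffer from the question (rewritten here as a plain Seq so the snippet is self-contained), it produces exactly the counts in the desired output 0_1, 2_2, 1_3, 0_4, 2_5, 2_6, 0_7, 1_8:

```scala
def countValues(l: Iterable[Int]): Seq[(Int, Int)] =
  (1 to 8).map(i => (i, l.count(_ == i)))

val counts = countValues(Seq(6, 5, 2, 2, 8, 6, 5, 3))
// Vector((1,0), (2,2), (3,1), (4,0), (5,2), (6,2), (7,0), (8,1))
```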
You can use mapValues with this function (place the function in the object for serializability, like you did with the rest) on an RDD[(String, Iterable[Int])] to get the result:
valuesCdr.groupByKey().mapValues(ScalaJob.countValues)
The entire solution can then be simplified a bit:
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat

class ScalaJob(sc: SparkContext) {
  import ScalaJob._
  def run(cdrPath: String): RDD[(String, Seq[(Int, Int)])] = {
    val valuesCdr = sc.textFile(cdrPath)
      .map(_.split("\\|"))
      .map(p => (p(1), processType(processTime(p(2)), p(32))))
    valuesCdr.groupByKey().mapValues(countValues)
  }
}

object ScalaJob {
  val dayParts = Map((6 to 11) -> 1, (12 to 18) -> 2, (19 to 23) -> 3, (0 to 5) -> 4)

  def processTime(s: String): Int = {
    val hour = DateTime.parse(s, DateTimeFormat.forPattern("yyyyMMddHHmmss")).getHourOfDay
    dayParts.filterKeys(_.contains(hour)).values.head
  }

  def processType(dayPart: Int, s: String): Int = s match {
    case "S" => 2 * dayPart - 1
    case "V" => 2 * dayPart
  }

  def countValues(l: Iterable[Int]): Seq[(Int, Int)] = {
    (1 to 8).map(i => (i, l.count(_ == i)))
  }
}

Reading lines and raw bytes from the same source in scala

I need to write code that does the following:
Connect to a tcp socket
Read a line ending in "\r\n" that contains a number N
Read N bytes
Use those N bytes
I am currently using the following code:
val socket = new Socket(InetAddress.getByName(host), port)
val in = socket.getInputStream
val out = new PrintStream(socket.getOutputStream)
val reader = new DataInputStream(in)
val baos = new ByteArrayOutputStream
val buffer = new Array[Byte](1024)

out.print(cmd + "\r\n")
out.flush
val firstLine = reader.readLine.split("\\s")
if (firstLine(0) == "OK") {
  def read(written: Int, max: Int, baos: ByteArrayOutputStream): Array[Byte] = {
    if (written >= max) baos.toByteArray
    else {
      val count = reader.read(buffer, 0, buffer.length)
      baos.write(buffer, 0, count)
      read(written + count, max, baos)
    }
  }
  read(0, firstLine(1).toInt, baos)
} else {
  // RAISE something
}
baos.toByteArray()
The problem with this code is that the use of DataInputStream#readLine raises a deprecation warning, but I can't find a class that implements both read(...) and readLine(...). BufferedReader, for example, implements read, but it reads Chars, not Bytes. I could cast those chars to bytes, but I don't think it's safe.
Any other ways to write something like this in scala?
Thank you
Be aware that on the JVM a char takes 2 bytes in memory, so "\r\n" is 4 bytes as chars. This is generally not true for strings stored outside of the JVM.
I think the safest way would be to read your input as raw bytes until you reach the binary representation of "\r\n". Then you can create a Reader (which turns bytes into JVM-compatible chars) over the first bytes, where you can be sure there is text only, parse it, and continue safely with the rest of the binary data.
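A minimal sketch of that idea: read the header line byte-by-byte straight off the InputStream (assuming the header is ASCII), then read exactly N raw bytes. The helper names here are mine, not from any standard API:

```scala
import java.io.{ByteArrayInputStream, EOFException, InputStream}

// Read an ASCII line terminated by "\r\n" without wrapping the stream
// in a Reader, so no payload bytes are swallowed by char decoding
def readAsciiLine(in: InputStream): String = {
  val sb = new StringBuilder
  var b = in.read()
  while (b != -1 && b != '\n') {
    if (b != '\r') sb.append(b.toChar)
    b = in.read()
  }
  sb.toString
}

// Read exactly n raw bytes (InputStream.read may return short counts)
def readExactly(in: InputStream, n: Int): Array[Byte] = {
  val buf = new Array[Byte](n)
  var off = 0
  while (off < n) {
    val count = in.read(buf, off, n - off)
    if (count == -1) throw new EOFException(s"wanted $n bytes, got $off")
    off += count
  }
  buf
}

// Simulated protocol exchange over an in-memory stream:
val in = new ByteArrayInputStream("OK 3\r\nxyz".getBytes("ISO-8859-1"))
val header = readAsciiLine(in)  // "OK 3"
val payload = readExactly(in, header.split("\\s")(1).toInt)
```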
You can achieve the goal of using read(...) and readLine(...) in one class. The idea is to use BufferedReader.read(): Int. The BufferedReader class buffers the content, so you can read one character at a time without a performance penalty.
The change can be (without Scala-style optimization):
import java.io.BufferedInputStream
import java.io.BufferedReader
import java.io.ByteArrayOutputStream
import java.io.PrintStream
import java.net.InetAddress
import java.net.Socket
import java.io.InputStreamReader
object ReadLines extends App {
  val host = "127.0.0.1"
  val port = 9090
  val socket = new Socket(InetAddress.getByName(host), port)
  val in = socket.getInputStream
  val out = new PrintStream(socket.getOutputStream)
  // val reader = new DataInputStream(in)
  val bufIns = new BufferedInputStream(in)
  val reader = new BufferedReader(new InputStreamReader(bufIns, "utf8"))
  val baos = new ByteArrayOutputStream
  val buffer = new Array[Byte](1024)

  val cmd = "get:"
  out.print(cmd + "\r\n")
  out.flush
  val firstLine = reader.readLine.split("\\s")
  if (firstLine(0) == "OK") {
    def read(written: Int, max: Int, baos: ByteArrayOutputStream): Array[Byte] = {
      if (written >= max) {
        println("get: " + new String(baos.toByteArray))
        baos.toByteArray()
      } else {
        // val count = reader.read(buffer, 0, buffer.length)
        var count = 0
        var b = reader.read()
        while (b != -1) {
          buffer(count) = b.toByte
          count += 1
          if (count < max) {
            b = reader.read()
          } else {
            b = -1
          }
        }
        baos.write(buffer, 0, count)
        read(written + count, max, baos)
      }
    }
    read(0, firstLine(1).toInt, baos)
  } else {
    // RAISE something
  }
  baos.toByteArray()
}
For testing, below is a simple server:
import java.net.ServerSocket

object ReadLinesServer extends App {
  val serverSocket = new ServerSocket(9090)
  while (true) {
    println("accepted a connection.")
    val socket = serverSocket.accept()
    val ops = socket.getOutputStream()
    val printStream = new PrintStream(ops, true, "utf8")
    printStream.print("OK 2\r\n") // 1 byte per alphanumeric char
    printStream.print("ab")
  }
}
Seems this is the best solution I can find:
val reader = new BufferedReader(new InputStreamReader(in))
val buffer = new Array[Char](1024)
out.print(cmd + "\r\n")
out.flush
val firstLine = reader.readLine.split("\\s")
if (firstLine(0) == "OK") {
  def read(readCount: Int, acc: List[Byte]): Array[Byte] = {
    if (readCount <= 0) acc.toArray
    else {
      val count = reader.read(buffer, 0, buffer.length)
      val asBytes = buffer.slice(0, count).map(_.toByte)
      read(readCount - count, acc ++ asBytes)
    }
  }
  read(firstLine(1).toInt, List[Byte]())
} else {
  // RAISE
}
That is, use buffer.map(_.toByte).toArray to transform a char Array into a Byte Array without caring about the encoding.

Scala library to convert numbers (Int, Long, Double) to/from Array[Byte]

As the title says, is there any Scala library that exports functions to convert, preferably fluently, a byte array to an Int, to a Long or to a Double?
I need something compatible with 2.9.1 and FOSS.
If you happen to know exactly what I need and where to find it, a line for SBT and a line for an example will be enough! :)
If there's no such thing as what I'm looking for, the closest thing in Java will also work...
You can use Java NIO's ByteBuffer:
import java.nio.ByteBuffer
ByteBuffer.wrap(Array[Byte](1, 2, 3, 4)).getInt
ByteBuffer.wrap(Array[Byte](1, 2, 3, 4, 5, 6, 7, 8)).getDouble
ByteBuffer.wrap(Array[Byte](1, 2, 3, 4, 5, 6, 7, 8)).getLong
No extra dependencies required.
You can also use BigInt from the scala standard library.
import scala.math.BigInt
val bytearray = BigInt(1337).toByteArray
val int = BigInt(bytearray)
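One thing to keep in mind (my observation, not part of the answer): BigInt(...).toByteArray is minimal-length, so 1337 comes back as two bytes, whereas ByteBuffer always gives you the fixed 4 or 8 bytes; the two representations are not interchangeable:

```scala
import scala.math.BigInt

val bytearray = BigInt(1337).toByteArray  // Array(5, 57): two bytes, not four
val roundTrip = BigInt(bytearray).toInt   // 1337
```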
Java's nio.ByteBuffer is the way to go for now:
val bb = java.nio.ByteBuffer.allocate(4)
val i = 5
bb.putInt(i)
bb.flip // now can read instead of writing
val j = bb.getInt
bb.clear // ready to go again
You can also put arrays of bytes, etc.
Keep in mind the little/big-endian thing. bb.order(java.nio.ByteOrder.nativeOrder) is probably what you want.
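To see the endianness point concretely (ByteBuffer defaults to big-endian):

```scala
import java.nio.{ByteBuffer, ByteOrder}

val big = ByteBuffer.allocate(4).putInt(1).array()
// Array(0, 0, 0, 1) -- big-endian is the ByteBuffer default
val little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(1).array()
// Array(1, 0, 0, 0)
```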
For Double <-> ByteArray, you can use java.lang.Double.doubleToLongBits and java.lang.Double.longBitsToDouble.
import java.lang.Double

def doubleToByteArray(x: Double) = {
  val l = java.lang.Double.doubleToLongBits(x)
  val a = Array.fill(8)(0.toByte)
  for (i <- 0 to 7) a(i) = ((l >> ((7 - i) * 8)) & 0xff).toByte
  a
}

def byteArrayToDouble(x: Array[scala.Byte]) = {
  var res = 0.toLong
  for (i <- 0 to 7) {
    res += ((x(i) & 0xff).toLong << ((7 - i) * 8))
  }
  java.lang.Double.longBitsToDouble(res)
}
scala> val x = doubleToByteArray(12.34)
x: Array[Byte] = Array(64, 40, -82, 20, 122, -31, 71, -82)
scala> val y = byteArrayToDouble(x)
y: Double = 12.34
Or ByteBuffer can be used:
import java.nio.ByteBuffer

def doubleToByteArray(x: Double) = {
  val l = java.lang.Double.doubleToLongBits(x)
  ByteBuffer.allocate(8).putLong(l).array()
}

def byteArrayToDouble(x: Array[Byte]) = ByteBuffer.wrap(x).getDouble
The following worked for me in Scala (note that this requires the Apache Kudu client library on the classpath):
import org.apache.kudu.client.Bytes
Bytes.getFloat(valueToConvert)
You can also use:
Bytes.toInt(byteArray)
Worked like a charm!