Why does Int overflow occur in this code?
[code]
val randomList = List(scala.util.Random.nextInt(), scala.util.Random.nextInt(), scala.util.Random.nextInt(), scala.util.Random.nextInt())
var listSum = 0
for (value <- randomList) {
  listSum += value
  println("value = " + value)
  println("listSum = " + listSum)
  println("\n")
}
println("sum is " + listSum)
[Result]
value = 2078728151
listSum = 2078728151
value = -1367097617
listSum = 711630534
value = 1543963641
listSum = -2039373121
value = -1351834340
listSum = 903759835
sum is 903759835
randomList: List[Int] = List(2078728151, -1367097617, 1543963641, -1351834340)
listSum: Int = 903759835
In Scala an Int is 4 bytes (32 bits) long, so the greatest value is 2^(32-1) - 1, which happens to be
Int.MaxValue = 2147483647
Adding beyond that wraps around, starting again from Int.MinValue = -2147483648 and counting upwards.
It is the same in any other language that uses fixed-width 32-bit integers.
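If you need the exact sum, one option (a minimal sketch, not part of the original answer) is to widen the values to Long before summing, since a 64-bit Long cannot overflow from adding a handful of Ints:

import scala.util.Random

val randomList = List.fill(4)(Random.nextInt())
// convert to Long first so the additions are done in 64 bits
val listSum: Long = randomList.map(_.toLong).sum
println("sum is " + listSum)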
I'm new to the Scala programming language, so in this bubble sort I need to generate 10 random integers instead of writing them down as in the code below.
Any suggestions?
object BubbleSort {
  def bubbleSort(array: Array[Int]) = {
    def bubbleSortRecursive(array: Array[Int], current: Int, to: Int): Array[Int] = {
      println(array.mkString(",") + " current -> " + current + ", to -> " + to)
      to match {
        case 0 => array
        case _ if (to == current) => bubbleSortRecursive(array, 0, to - 1)
        case _ =>
          if (array(current) > array(current + 1)) {
            var temp = array(current + 1)
            array(current + 1) = array(current)
            array(current) = temp
          }
          bubbleSortRecursive(array, current + 1, to)
      }
    }
    bubbleSortRecursive(array, 0, array.size - 1)
  }
  def main(args: Array[String]) {
    val sortedArray = bubbleSort(Array(10, 9, 11, 5, 2))
    println("Sorted Array -> " + sortedArray.mkString(","))
  }
}
Try this:
import scala.util.Random
val sortedArray = (1 to 10).map(_ => Random.nextInt).toArray
You can use scala.util.Random for generation. The nextInt method takes an upper-bound argument, so the code sample below generates 10 Int values from 0 (inclusive) to 100 (exclusive).
val r = scala.util.Random
for (i <- 1 to 10) yield r.nextInt(100)
You can also do it in either of these ways:
val solv1 = Random.shuffle((1 to 100).toList).take(10)
val solv2 = Array.fill(10)(Random.nextInt)
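For completeness, a minimal sketch that replaces the main method of the BubbleSort object from the question so it sorts 10 random integers instead of the hard-coded array (the bound of 100 is just an example):

import scala.util.Random

def main(args: Array[String]): Unit = {
  // 10 random Ints in [0, 100) instead of the hard-coded Array(10, 9, 11, 5, 2)
  val randomArray = Array.fill(10)(Random.nextInt(100))
  val sortedArray = bubbleSort(randomArray)
  println("Sorted Array -> " + sortedArray.mkString(","))
}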
Hi, I am trying out a UDAF with Spark Scala and I am getting the following exception.
type mismatch; found: scala.collection.immutable.IndexedSeq[Any], required: String (SumCalc.scala in /filter, line 63, Scala Problem)
This is my code.
override def evaluate(buffer: Row): Any = {
  val in_array = buffer.getAs[WrappedArray[String]](0)
  var finalArray = Array.empty[Array[String]]
  import scala.util.control.Breaks._
  breakable {
    for (outerArray <- in_array) {
      val currentTimeStamp = outerArray(1).toLong
      var sum = 0.0
      var count = 0
      var check = false
      var list = outerArray
      for (array <- in_array) {
        val toCheckTimeStamp = array(1).toLong
        if (((currentTimeStamp - 10L) <= toCheckTimeStamp) && (currentTimeStamp >= toCheckTimeStamp)) {
          sum += array(5).toDouble
          count += 1
        }
        if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
          check = true
          break
        }
      }
      if (sum != 0.0 && check) list = list :+ (sum).toString // getting error on this line.
      else list = list :+ list(5).toDouble.toString
      finalArray ++= Array(list)
    }
    finalArray
  }
}
Any help will be appreciated.
There are a couple of mistakes in the evaluate function of your UDAF.
The list variable is a String, but you are treating it as an array.
finalArray is initialized as Array.empty[Array[String]], but later on you add Array(list), which is an Array[String], to it.
finalArray is never actually returned from evaluate, because it sits inside the breakable block instead of being the last expression of the method.
So the corrected version should look like this:
override def evaluate(buffer: Row): Any = {
  val in_array = buffer.getAs[WrappedArray[String]](0)
  var finalArray = Array.empty[String]
  import scala.util.control.Breaks._
  breakable {
    for (outerArray <- in_array) {
      val currentTimeStamp = outerArray(1).toLong // timestamp values
      var sum = 0.0
      var count = 0
      var check = false
      var list = outerArray
      for (array <- in_array) {
        val toCheckTimeStamp = array(1).toLong
        if (((currentTimeStamp - 10L) <= toCheckTimeStamp) && (currentTimeStamp >= toCheckTimeStamp)) {
          sum += array(5).toDouble // RSSI weightage values
          count += 1
        }
        if ((currentTimeStamp - 10L) > toCheckTimeStamp) {
          check = true
          break
        }
      }
      if (sum != 0.0 && check) list = list + (sum).toString // calculate sum for the 10 secs difference
      else list = list + (sum).toString // if there is no 10 secs difference, take the RSSI weightage value
      finalArray ++= Array(list)
    }
  }
  finalArray // final result of this function
}
Hope the answer is helpful
I am trying to implement the Haar Wavelet Transform in Scala. I am using this Python code for reference: GitHub link to a Python implementation of HWT.
I am also including my Scala version here. I am new to Scala, so forgive the not-so-good code.
/**
 * Created by vipul vaibhaw on 1/11/2017.
 */
import scala.collection.mutable.{ListBuffer, MutableList, ArrayBuffer}

object HaarWavelet {
  def main(args: Array[String]): Unit = {
    var samples = ListBuffer(
      ListBuffer(1, 4),
      ListBuffer(6, 1),
      ListBuffer(0, 2, 4, 6, 7, 7, 7, 7),
      ListBuffer(1, 2, 3, 4),
      ListBuffer(7, 5, 1, 6, 3, 0, 2, 4),
      ListBuffer(3, 2, 3, 7, 5, 5, 1, 1, 0, 2, 5, 1, 2, 0, 1, 2, 0, 2, 1, 0, 0, 2, 1, 2, 0, 2, 1, 0, 0, 2, 1, 2)
    )
    for (i <- 0 to samples.length) {
      var ubound = samples(i).max + 1
      var length = samples(i).length
      var deltas1 = encode(samples(i), ubound)
      var deltas = deltas1._1
      var avg = deltas1._2
      println("Input: %s, boundary = %s, length = %s" format(samples(i), ubound, length))
      println("Haar output:%s, average = %s" format(deltas, avg))
      println("Decoded: %s" format(decode(deltas, avg, ubound)))
    }
  }
  def wrap(value: Int, ubound: Int): Int = {
    (value + ubound) % ubound
  }
  def encode(lst1: ListBuffer[Int], ubound: Int): (ListBuffer[Int], Int) = {
    //var lst = ListBuffer[Int]()
    //lst1.foreach(x=>lst+=x)
    var lst = lst1
    var deltas = new ListBuffer[Int]()
    var avg = 0
    while (lst.length >= 2) {
      var avgs = new ListBuffer[Int]()
      while (lst.nonEmpty) {
        // getting the first two elements from the list and removing them
        val a = lst.head
        lst -= 1 // removing index 0 element from the list
        val b = lst.head
        lst -= 1 // removing index 0 element from the list
        if (a <= b) {
          avg = (a + b) / 2
        } else {
          avg = (a + b + ubound) / 2
        }
        var delta = wrap(b - a, ubound)
        avgs += avg
        deltas += delta
      }
      lst = avgs
    }
    (deltas, avg % ubound)
  }
  def decode(deltas: ListBuffer[Int], avg: Int, ubound: Int): ListBuffer[Int] = {
    var avgs = ListBuffer[Int](avg)
    var l = 1
    while (deltas.nonEmpty) {
      for (i <- 0 to l) {
        val delta = deltas.last
        deltas -= -1
        val avg = avgs.last
        avgs -= -1
        val a = wrap(math.ceil(avg - delta / 2.0).toInt, ubound)
        val b = wrap(math.ceil(avg + delta / 2.0).toInt, ubound)
      }
      l *= 2
    }
    avgs
  }
  def is_pow2(n: Int): Boolean = {
    (n & -n) == n
  }
}
But the code gets stuck at var deltas1 = encode(samples(i), ubound) and doesn't produce any output. How can I improve my implementation? Thanks in advance!
Your error is on this line:
lst -= 1 // removing index 0 element from the list.
This doesn't remove the element at index 0 from the list; it removes the element whose value is 1 (if one exists). This means that the list never becomes empty, so the while-loop while (lst.nonEmpty) never terminates.
To remove the first element of the list, simply use lst.remove(0).
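For reference, a minimal sketch of the inner loop of encode with that fix applied (it keeps the mutable ListBuffer approach and the avg, avgs, deltas, ubound and wrap names from the question):

while (lst.nonEmpty) {
  val a = lst.remove(0) // removes and returns the element at index 0
  val b = lst.remove(0)
  avg = if (a <= b) (a + b) / 2 else (a + b + ubound) / 2
  avgs += avg
  deltas += wrap(b - a, ubound)
}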
I am converting my Scala code to PySpark as shown below, but I got different counts for the final RDD.
My Scala code:
val scalaRDD = rowRDD.map { row: Row =>
  var rowList: ListBuffer[Row] = ListBuffer()
  rowList.add(row)
  (row.getString(1) + "_" + row.getString(6), rowList)
}.reduceByKey { (list1, list2) =>
  var rowList: ListBuffer[Row] = ListBuffer()
  for (i <- 0 to list1.length - 1) {
    val row1 = list1.get(i)
    var foundMatch = false
    breakable {
      for (j <- 0 to list2.length - 1) {
        var row2 = list2.get(j)
        val result = mergeRow(row1, row2)
        if (result._1) {
          list2.set(j, result._2)
          foundMatch = true
          break
        }
      } // for j loop
    } // breakable for j
    if (!foundMatch) {
      rowList.add(row1)
    }
  }
  list2.addAll(rowList)
  list2
}.flatMap { t => t._2 }
where
def mergeRow(row1: Row, row2: Row): (Boolean, Row) = {
  var z: Array[String] = new Array[String](row1.length)
  var hasDiff = false
  for (k <- 1 to row1.length - 2) {
    // k = 0 : ID, always different
    // k = 43 : last field, which is not important
    if (row1.getString(0) < row2.getString(0)) {
      z(0) = row2.getString(0)
      z(43) = row2.getString(43)
    } else {
      z(0) = row1.getString(0)
      z(43) = row1.getString(43)
    }
    if (Option(row2.getString(k)).getOrElse("").isEmpty && !Option(row1.getString(k)).getOrElse("").isEmpty) {
      z(k) = row1.getString(k)
      hasDiff = true
    } else if (!Option(row1.getString(k)).getOrElse("").isEmpty && !Option(row2.getString(k)).getOrElse("").isEmpty && row1.getString(k) != row2.getString(k)) {
      return (false, null)
    } else {
      z(k) = row2.getString(k)
    }
  } // for k loop
  if (hasDiff) {
    (true, Row.fromSeq(z))
  } else {
    (true, row2)
  }
}
I then tried to convert it to PySpark code as below:
pySparkRDD = rowRDD.map(
    lambda row: singleRowList(row)
).reduceByKey(lambda list1, list2: mergeList(list1, list2)).flatMap(lambda x: x[1])
where I have:
def mergeRow(row1, row2):
    z = []
    hasDiff = False
    # for (k <- 1 to row1.length -2){
    for k in xrange(1, len(row1) - 2):
        # k = 0 : ID, always different
        # k = 43 : last field, which is not important
        if (row1[0] < row2[0]):
            z[0] = row2[0]
            z[43] = row2[43]
        else:
            z[0] = row1[0]
            z[43] = row1[43]
        if not(row2[k]) and row1[k]:
            z[k] = row1[k].strip()
            hasDiff = True
        elif row1[k] and row2[k] and row1[k].strip() != row2[k].strip():
            return (False, None)
        else:
            z[k] = row2[k].strip()
    if hasDiff:
        return (True, Row.fromSeq(z))
    else:
        return (True, row2)
and
def singleRowList(row):
    myList = []
    myList.append(row)
    return (row[1] + "_" + row[6], myList)
and
def mergeList(list1, list2):
    rowList = []
    for i in xrange(0, len(list1)-1):
        row1 = list1[i]
        foundMatch = False
        for j in xrange(0, len(list2)-1):
            row2 = list2[j]
            resultBool, resultRow = mergeRow(row1, row2)
            if resultBool:
                list2[j] = resultRow
                foundMatch = True
                break
        if foundMatch == False:
            rowList.append(row1)
    list2.extend(rowList)
    return list2
BTW, rowRDD is converted from a DataFrame, i.e. rowRDD = myDF.rdd.
However, I got different counts for scalaRDD and pySparkRDD. I checked the code many times but couldn't figure out what I missed. Does anyone have any ideas? Thanks!
Consider this:
scala> (1 to 5).length
res1: Int = 5
and this:
>>> len(xrange(1, 5))
4
Scala's to is inclusive of its upper bound, while Python's xrange excludes it, so the loops for i in xrange(0, len(list1) - 1) and for j in xrange(0, len(list2) - 1) skip the last element of each list. Use xrange(0, len(list1)) (or simply xrange(len(list1))) to match the Scala 0 to length - 1.
I have this code:
val rdd = sc.textFile("sample.log")
val splitRDD = rdd.map(r => StringUtils.splitPreserveAllTokens(r, "\\|"))
val rdd2 = splitRDD.filter(...).map(row => createRow(row, fieldsMap))
sqlContext.createDataFrame(rdd2, structType).save(
  "org.apache.phoenix.spark", SaveMode.Overwrite, Map("table" -> table, "zkUrl" -> zkUrl))
def createRow(row: Array[String], fieldsMap: ListMap[Int, FieldConfig]): Row = {
  // add an additional index for invalidValues
  val arrSize = fieldsMap.size + 1
  val arr = new Array[Any](arrSize)
  var invalidValues = ""
  for ((k, v) <- fieldsMap) {
    val valid = ...
    var value: Any = null
    if (valid) {
      value = row(k)
      // if (v.code == "SOURCE_NAME") --> 5th column in the row
      //   sourceNameCount = row(k).split(",").size
    } else {
      invalidValues += v.code + " : " + row(k) + " | "
    }
    arr(k) = value
  }
  arr(arrSize - 1) = invalidValues
  Row.fromSeq(arr.toSeq)
}
fieldsMap contains the mapping of the input columns as (index, FieldConfig), where the FieldConfig class contains the "code" and "dataType" values. For example:
TOPIC -> (0, v.code = "TOPIC", v.dataType = "String")
GROUP -> (1, v.code = "GROUP")
SOURCE_NAME1,SOURCE_NAME2,SOURCE_NAME3 -> (4, v.code = "SOURCE_NAME")
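For concreteness, a hypothetical sketch of what that configuration could look like (FieldConfig itself is not shown in the question, so its exact shape here is an assumption inferred from the description above):

import scala.collection.immutable.ListMap

// assumed shape, inferred only from the (index, code, dataType) description
case class FieldConfig(code: String, dataType: String = "String")

val fieldsMap: ListMap[Int, FieldConfig] = ListMap(
  0 -> FieldConfig("TOPIC", "String"),
  1 -> FieldConfig("GROUP"),
  4 -> FieldConfig("SOURCE_NAME")
  // ... remaining indices omitted
)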
This is the sample.log:
TOPIC|GROUP|TIMESTAMP|STATUS|SOURCE_NAME1,SOURCE_NAME2,SOURCE_NAME3|SOURCE_TYPE1,SOURCE_TYPE2,SOURCE_TYPE3|SOURCE_COUNT1,SOURCE_COUNT2,SOURCE_COUNT3|DEST_NAME1,DEST_NAME2,DEST_NAME3|DEST_TYPE1,DEST_TYPE2,DEST_TYPE3|DEST_COUNT1,DEST_COUNT2,DEST_COUNT3|
The goal is to split the input (sample.log) based on the number of SOURCE_NAME values. In the example above, the output will have 3 rows:
TOPIC|GROUP|TIMESTAMP|STATUS|SOURCE_NAME1|SOURCE_TYPE1|SOURCE_COUNT1|DEST_NAME1|DEST_TYPE1|DEST_COUNT1|
TOPIC|GROUP|TIMESTAMP|STATUS|SOURCE_NAME2|SOURCE_TYPE2|SOURCE_COUNT2|DEST_NAME2|DEST_TYPE2|DEST_COUNT2|
TOPIC|GROUP|TIMESTAMP|STATUS|SOURCE_NAME3|SOURCE_TYPE3|SOURCE_COUNT3|DEST_NAME3|DEST_TYPE3|DEST_COUNT3|
This is the new code I am working on (still using createRow defined above):
val rdd2 = splitRDD.filter(...).flatMap(row => {
  val srcName = row(4).split(",")
  val srcType = row(5).split(",")
  val srcCount = row(6).split(",")
  val destName = row(7).split(",")
  val destType = row(8).split(",")
  val destCount = row(9).split(",")
  var newRDD: ArrayBuffer[Row] = new ArrayBuffer[Row]()
  //if (srcName != null) {
  println("\n\nsrcName.size: " + srcName.size + "\n\n")
  for (i <- 0 to srcName.size - 1) {
    // missing column: destType can sometimes be null
    val splittedRow: Array[String] = Row.fromSeq(Seq((row(0), row(1), row(2), row(3),
      srcName(i), srcType(i), srcCount(i), destName(i), "", destCount(i)))).toSeq.toArray[String]
    newRDD = newRDD ++ Seq(createRow(splittedRow, fieldsMap))
  }
  //}
  Seq(Row.fromSeq(Seq(newRDD)))
})
Since I am getting an error converting my splittedRow to Array[String] via ".toSeq.toArray[String]":
error: type arguments [String] do not conform to method toArray's type parameter bounds [B >: Any]
I decided to update my splittedRow to:
val rowArr: Array[String] = new Array[String](10)
for (j <- 0 to 3) {
  rowArr(j) = row(j)
}
rowArr(4) = srcName(i)
rowArr(5) = row(5).split(",")(i)
rowArr(6) = row(6).split(",")(i)
rowArr(7) = row(7).split(",")(i)
rowArr(8) = row(8).split(",")(i)
rowArr(9) = row(9).split(",")(i)
val splittedRow = rowArr
You could use a flatMap operation instead of a map operation to return multiple rows. Consequently, your createRow would be refactored to createRows(row: Array[String], fieldsMap: ListMap[Int, FieldConfig]): Seq[Row].
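A minimal sketch of that refactoring, under a couple of assumptions: it reuses createRow, FieldConfig and fieldsMap from the question, and it assumes columns 4 to 9 each hold the same number of comma-separated values (missing values, e.g. destType, fall back to an empty string):

import scala.collection.immutable.ListMap
import org.apache.spark.sql.Row

def createRows(row: Array[String], fieldsMap: ListMap[Int, FieldConfig]): Seq[Row] = {
  // columns 4..9 are comma-separated lists; emit one Row per SOURCE_NAME entry
  val srcNames = row(4).split(",")
  (0 until srcNames.length).map { i =>
    val splitRow: Array[String] = row.zipWithIndex.map { case (value, idx) =>
      if (idx >= 4 && idx <= 9) {
        val parts = value.split(",")
        if (i < parts.length) parts(i) else "" // guard against missing values
      } else value
    }
    createRow(splitRow, fieldsMap)
  }
}

// usage, mirroring the original pipeline (the filter step stays as before):
val rdd2 = splitRDD.flatMap(row => createRows(row, fieldsMap))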