Converting an RDD to vectors with fixed-length file data - Scala

I'm new to Spark + Scala and still developing my intuition. I have a file containing many samples of data. Every 2048 lines represents a new sample. I'm attempting to convert each sample into a vector and then run the samples through a k-means clustering algorithm. The data file looks like this:
123.34 800.18
456.123 23.16
...
When I'm playing with a very small subset of the data, I create an RDD from the file like this:
val fileData = sc.textFile("hdfs://path/to/file.txt")
and then create the vector using this code:
val freqLineCount = 2048
val numSamples = 200
val freqPowers = fileData.map( _.split(" ")(1).toDouble )
val allFreqs = freqPowers.take(numSamples*freqLineCount).grouped(freqLineCount)
val lotsOfVecs = allFreqs.map(spec => Vectors.dense(spec) ).toArray
val lotsOfVecsRDD = sc.parallelize( lotsOfVecs ).cache()
val numClusters = 2
val numIterations = 2
val clusters = KMeans.train(lotsOfVecsRDD, numClusters, numIterations)
The key here is that I can call .grouped on the array of doubles returned by take, and it gives back an array of arrays holding the sequential 2048 values. Those are then trivial to convert to vectors and feed into the KMeans training algorithm.
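For intuition, this is roughly what that driver-side grouping does on a toy input (plain Scala, with the group size shrunk from 2048 to 3):
val taken = Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
// grouped(3) yields an Iterator over consecutive fixed-size chunks:
// Array(1.0, 2.0, 3.0), Array(4.0, 5.0, 6.0)
val groups = taken.grouped(3).toArray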
I'm attempting to run this code on a much larger data set and I'm running into java.lang.OutOfMemoryError: Java heap space errors, presumably because calling take on my freqPowers variable pulls all of that data onto the driver before I perform operations on it.
How would I go about achieving my goal of running KMeans on this data set, keeping in mind that:
each data sample occurs every 2048 lines in the file (so the file should be parsed somewhat sequentially)
this code needs to run on a distributed cluster
I need to not run out of memory :)
thanks in advance

You can do something like:
val freqLineCount = 2048
val freqPowers = fileData.map(_.split(" ")(1).toDouble)
// Replacement of your current code.
val groupedRDD = freqPowers.zipWithIndex().groupBy(_._2 / freqLineCount) // group key = line index / 2048
val vectorRDD = groupedRDD.map(grouped => Vectors.dense(grouped._2.map(_._1).toArray)) // drop the indices, one vector per group
val numClusters = 2
val numIterations = 2
val clusters = KMeans.train(vectorRDD, numClusters, numIterations)
The replacement code uses zipWithIndex() and integer division of the resulting long indices to group the RDD elements into chunks of freqLineCount. After the grouping, the values in each chunk are extracted into their actual vectors.
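One caveat worth hedging: groupBy does not guarantee that the values inside each group come back in file order, so if the vector layout matters you can sort by the zipWithIndex index before building each vector. A sketch of that variant, using the same names as above:
import org.apache.spark.mllib.linalg.Vectors
val vectorRDD = freqPowers
  .zipWithIndex()                                    // (power value, global line index)
  .groupBy { case (_, idx) => idx / freqLineCount }  // one group per 2048-line sample
  .map { case (_, group) =>
    // Restore file order inside the group, then drop the indices.
    Vectors.dense(group.toSeq.sortBy(_._2).map(_._1).toArray)
  }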

Related

Apply function to subset of Spark Datasets (Iteratively)

I have a Dataset of geospatial data that I need to sample in a grid-like fashion. I want to divide the experiment area into a grid and, on each square of the grid, use a sampling function called "sample()" that takes three inputs, then merge the sampled datasets back together. My current method uses a map function, but I've learned that you can't have an RDD of RDDs/Datasets/DataFrames. So how can I apply the sampling function to subsets of my dataset? Here is the code I tried to write in a map-reduce fashion:
val sampleDataRDD = boundaryValuesDS.rdd.map(row => {
val latMin = row._1
val latMax = latMin + 0.0001
val lonMin = row._2
val lonMax = lonMin + 0.0001
val filterDF = featuresDS.filter($"Latitude" > latMin).filter($"Latitude" < latMax).filter($"Longitude" > lonMin).filter($"Longitude" < lonMax)
val sampleDS = filterDF.sample(false, 0.05, 1234)
(sampleDS)
})
val output = sampleDataRDD.reduce(_ union _)
I've tried various ways of dealing with this, such as converting sampleDS to an RDD or to a List, but I still get a NullPointerException when calling "collect" on output.
I'm thinking I need to find a different solution, but I don't see it.
I've referenced these questions thus far:
Caused by: java.lang.NullPointerException at org.apache.spark.sql.Dataset
Creating a Spark DataFrame from an RDD of lists
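For what it's worth, one way around the nested-Dataset restriction is to keep the per-cell bookkeeping on the driver: collect the (presumably small) set of grid corners, build one filtered and sampled Dataset per cell, and union the results. A hedged sketch, assuming boundaryValuesDS holds (latMin, lonMin) pairs as the row._1/row._2 access above suggests:
import org.apache.spark.sql.functions.col
// Grid corners are collected to the driver; each iteration only builds a lazy plan.
val cells = boundaryValuesDS.collect()
val output = cells.map { case (latMin, lonMin) =>
  val latMax = latMin + 0.0001
  val lonMax = lonMin + 0.0001
  featuresDS
    .filter(col("Latitude") >= latMin && col("Latitude") < latMax)
    .filter(col("Longitude") >= lonMin && col("Longitude") < lonMax)
    .sample(withReplacement = false, fraction = 0.05, seed = 1234)
}.reduce(_ union _)
With many grid cells the chained union produces a large query plan, so it may also be worth tagging each row with a cell id and doing a single stratified sample (e.g. via df.stat.sampleBy) in one pass.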

Scala: two sliding more efficiently

I am working through the Quick Start guide of Apache Spark. I was wondering about the efficiency of transformations on collections. I would like to know how to improve the following code:
// Variable initialisation
val N = 300.0
val input = (0.0 to N-1 by 1.0).toArray
val firstBigDivi = 100
val windowDuration = 6
val windowStep = 3
// Process
val windowedInput = input.
sliding(firstBigDivi,firstBigDivi).toArray. //First, a big division
map(arr=>arr.sliding(windowDuration,windowStep).toArray)//Second, divide the division
Is there another way to do the same thing more efficiently? I think this code iterates twice over the input array (which could be an issue for big collections). Is that right?
sliding creates an Iterator, so mapping over it is "cheap". You do have a superfluous .toArray between sliding and map, though. It suffices to write:
val windowedInputIt = input.
sliding(firstBigDivi,firstBigDivi) //First, a big division
.map(arr=>arr.sliding(windowDuration,windowStep).toArray)
Then you can evaluate that iterator into an Array by writing
val windowedInput = windowedInputIt.toArray
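Since sliding and map on an Iterator are both lazy, nothing is materialized until that final toArray. As a quick sanity check of the resulting shape (reusing the names above; the outer count assumes N = 300 and firstBigDivi = 100 as in the question):
// 300 values split into big divisions of 100 -> 3 outer chunks.
println(windowedInput.length)   // 3
// Each chunk holds the windows produced by sliding(windowDuration, windowStep).
println(windowedInput.head.length)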

Performance of BucketedRandomProjectionLSH (org.apache.spark.ml.feature.BucketedRandomProjectionLSH)

Hi, I am using the BucketedRandomProjectionLSH algorithm (bucket length 2, 3 hash tables) to find similar people in a dataset of ~300,000 records. I am creating a sparse vector of bigrams for each record (1296 dimensions per vector) and doing an approximate similarity self-join on the dataset, which as I mentioned is not too large.
On a 3-node Spark cluster (Master: m3.xlarge, Core: 2 x m4.4xlarge), it takes ~7 hours to complete.
The performance is too slow and I am looking for benchmarks that someone may have created for this algorithm. Additionally, any guidance on how to tune this algorithm would be really helpful.
Here is the code snippet for your reference:
val rdd=sc.loadFromMongoDB(ReadConfig(Map("uri" -> "mongodb://localhost:27017/Single.master","readPreference.name" -> "secondaryPreferred")))
val aggregatedRdd = rdd.withPipeline(Seq(Document.parse("{$unwind:'$sources'}"),Document.parse("{$project:{_id:0,id:'$sources._id',val:{$toLower:{$concat:['$sources.first_name','$sources.middle_name','$sources.last_name',{$substr:['$sources.gender',0,1]},'$sources.dob','$sources.address.street','$sources.address.city','$sources.address.state','$sources.address.zip','$sources.phone','$sources.email']}}}}")))
val fDF=aggregatedRdd.map(line=>line.values()).map(ll=>bigramMap(ll.toArray)).toDF("id","idx","keys")
val columnNames = Seq("idx","keys")
val result = fDF.select(columnNames.head, columnNames.tail: _*)
val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2)
  .setNumHashTables(3)
  .setInputCol("keys")
  .setOutputCol("values")
val model = brp.fit(result)
val outDD = model.approxSimilarityJoin(result, result, 100)
  .filter("datasetA.idx < datasetB.idx")
  .select(col("datasetA.idx").alias("idA"), col("datasetB.idx").alias("idB"), col("distCol"))
I tried BucketedRandomProjectionLSH on 10,000,000 records.
It took 3 hours.
The only thing I did beforehand was cache the DataFrame:
df.persist()
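If it helps, here is a hedged sketch of how that caching idea plugs into the code above; the comments name the usual tuning knobs, and none of the values are benchmarked recommendations:
// Cache the LSH input, since both fit() and the self-join re-read it.
result.persist()
// Typical knobs to experiment with (they trade speed against recall):
// - setNumHashTables: fewer tables -> less work, lower recall
// - setBucketLength: smaller value -> fewer candidate pairs to verify, lower recall
// - the distance threshold passed to approxSimilarityJoin: tighter -> smaller output
val model = brp.fit(result)  // same calls as above, now against the cached input
val similarPairs = model
  .approxSimilarityJoin(result, result, 100)
  .filter("datasetA.idx < datasetB.idx")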

Read multiple *.tar.gz files as input for Spark in Scala [duplicate]

This question already has answers here:
How to read gz files in Spark using wholeTextFiles
I intend to apply linear regression to a dataset. It works fine when I use a subset of the data in *.txt format, as below:
// how could I read 26 *.tar.gz compressed files into a DataFrame?
val inputpath = "/Users/jasonzhu/Downloads/a.txt"
val rawDF = sc.textFile(inputpath).toDF()
val df = se.kth.spark.lab1.task2.Main.body(sqlContext, rawDF)
val splitDf = df.randomSplit(Array(0.95, 0.05), seed = 42L)
val (obsDF, testDF) =(splitDf(0).cache(), splitDf(1))
val maxIter = 6
val regParam = 0.07
val elasticNetParam = 0.1
println(s"maxIter=${maxIter}, regParam=${regParam}, elasticNetParam=${elasticNetParam}")
val myLR = new LinearRegression()
.setMaxIter(maxIter)
.setRegParam(regParam)
.setElasticNetParam(elasticNetParam)
val lrStage = 0
val pipeline = new Pipeline().setStages(Array(myLR))
val pipelineModel: PipelineModel = pipeline.fit(obsDF)
val lrModel = pipelineModel.stages(lrStage).asInstanceOf[LinearRegressionModel]
val trainingSummary = lrModel.summary
//print rmse of our model
println(s"RMSE: ${trainingSummary.rootMeanSquaredError}")
println(s"r2: ${trainingSummary.r2}")
//do prediction - print first k
val predictedDF = pipelineModel.transform(testDF)
predictedDF.show(5, false)
After this spike, I intend to apply the linear regression model to the whole dataset, which resides in 26 *.tar.gz files. I'd like to know how I should read these compressed files into a Spark DataFrame and consume them efficiently by taking advantage of Spark's parallelism. Thanks!
The textFile() method can take wildcards as well. From the documentation:
All of Spark’s file-based input methods, including textFile, support running on directories, compressed files, and wildcards as well. For example, you can use textFile("/my/directory"), textFile("/my/directory/*.txt"), and textFile("/my/directory/*.gz").
Alternatively, start with an empty RDD and run a loop that reads each file as an RDD, adding it to the accumulated RDD with a union in each iteration, as in the sketch below.
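A minimal sketch of both suggestions, assuming the 26 archives sit under the Downloads directory used above (the part_N file names are illustrative). Note that textFile transparently decompresses the gzip layer, but the inner tar structure may still leave archive headers and padding in the resulting lines, so some cleanup may be needed:
// Wildcard read: one RDD over all matching files.
val allLines = sc.textFile("/Users/jasonzhu/Downloads/*.tar.gz")
// Loop-and-union variant.
val paths = (1 to 26).map(i => s"/Users/jasonzhu/Downloads/part_$i.tar.gz")
var combined = sc.emptyRDD[String]
for (p <- paths) {
  combined = combined.union(sc.textFile(p))
}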

Compute average of numbers in a text file in spark scala

Let's say I have a file with each line representing a number. How do I find the average of all the numbers in the file in Scala / Spark?
val data = sc.textFile("../../numbers.txt")
val sum = data.reduce( (x,y) => x+y )
val avg = sum/data.count()
The problem here is that x and y are strings. How do I convert them to Long within the reduce function?
You need to apply an RDD.map that parses the strings before reducing them:
val sum = data.map(_.toInt).reduce(_+_)
val avg = sum.toDouble / data.count()
But I think you're better off using DoubleRDDFunctions.mean instead of calculating it yourself:
val mean = data.map(_.toInt).mean()
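If the file can contain non-integer values, the same idea works with Double (a small sketch; mean() is still available because an RDD[Double] picks up DoubleRDDFunctions implicitly):
// Parse as Double so fractional values are handled as well.
val meanD = data.map(_.toDouble).mean()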