Store Spark distributed matrix in MongoDB

After calculating the distance matrix for a set of points stored in a file on HDFS, I need to store the calculated distance matrix, which is in a distributed form (CoordinateMatrix/RowMatrix), in MongoDB through the MongoDB Connector for Apache Spark. Is there a recommended way to do this, or even a better connector for such an operation?
Here is the part of my code:
val data = sc.textFile("hdfs://localhost:54310/usrp/copy_sample_data.txt")
val points = data.map(s => Vectors.dense(s.split(',').map(_.toDouble)))
val indexed = points.zipWithIndex()
val indexedData = indexed.map{case (value, index) => (index, value)}
val pairedSamples = indexedData.cartesian(indexedData)
val dist = pairedSamples
  .map { case (x, y) => ((x, y), distance(x._2, y._2)) }
  .map { case ((x, y), z) => ((x, y), z, covariance(z)) }
val entries: RDD[MatrixEntry] = dist.map { case ((x, y), z, cov) => MatrixEntry(x._1, y._1, cov) }
val coomat: CoordinateMatrix = new CoordinateMatrix(entries)
To further note, I have created this matrix in Spark from an RDD. So maybe it is even better/possible to save the data from the RDD to MongoDB?

CoordinateMatrix and RowMatrix are basically wrappers around RDD[MatrixEntry] and RDD[Vector] respectively, and both can be saved to MongoDB relatively easily. For a CoordinateMatrix:
val spark: SparkSession = ???
import spark.implicits._
// For 1.x
// val sqlContext: SQLContext = ???
// import sqlContext.implicits._
val options = Map(
  "uri" -> ???,
  "database" -> ???
)

val coordMat = new CoordinateMatrix(sc.parallelize(Seq(
  MatrixEntry(1, 3, 1.4), MatrixEntry(3, 6, 2.8)
)))
coordMat.entries.toDF().write
  .options(options)
  .option("collection", "coordinates")
  .format("com.mongodb.spark.sql")
  .save()
You'll get documents of the shape:
{'_id': ObjectId('...'), 'i': 3, 'j': 6, 'value': 2.8}
which can easily be cast back to the original form:
val entries = spark.read
  .options(options)
  .option("collection", "coordinates")
  .format("com.mongodb.spark.sql")
  .schema(...) // provide the expected schema (i: long, j: long, value: double) before load()
  .load()
  .drop("_id")
  .as[MatrixEntry]
new CoordinateMatrix(entries.rdd)
Pretty much the same thing can be done for a RowMatrix, but it takes a little more work (represent the vectors either as dense arrays or as sparse (size, indices, values) tuples), as sketched below.
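For the dense-array variant, a minimal sketch (the "rows" collection name and the "values" column are illustrative assumptions, and it reuses the options map and the spark.implicits._ import from above):
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

val rowMat: RowMatrix = ???

// Save: one document per row, with the row vector stored as a dense array of doubles
rowMat.rows
  .map(_.toArray)
  .toDF("values")
  .write
  .options(options)
  .option("collection", "rows")
  .format("com.mongodb.spark.sql")
  .save()

// Load: rebuild the RowMatrix (note that row order is not guaranteed to round-trip)
val restoredRows = spark.read
  .options(options)
  .option("collection", "rows")
  .format("com.mongodb.spark.sql")
  .load()
  .select("values")
  .rdd
  .map(r => Vectors.dense(r.getSeq[Double](0).toArray))

val restored = new RowMatrix(restoredRows)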
Unfortunately, in both cases (CoordinateMatrix, RowMatrix) you'll lose the information about the matrix shape.
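If the shape matters (for example when the last rows or columns are empty), one workaround is to persist the dimensions separately and pass them back to the constructor when loading. A hedged sketch, where the "shapes" collection name is an assumption:
// Persist the logical dimensions alongside the entries (collection name assumed)
Seq((coordMat.numRows(), coordMat.numCols()))
  .toDF("numRows", "numCols")
  .write
  .options(options)
  .option("collection", "shapes")
  .format("com.mongodb.spark.sql")
  .mode("append")
  .save()

// When loading, feed them back in:
// new CoordinateMatrix(entries.rdd, numRows, numCols)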

Related

How to apply kmeans for parquet file?

I want to apply k-means to my parquet file, but I get this error:
java.lang.ArrayIndexOutOfBoundsException: 2
Code:
val Data = sqlContext.read.parquet("/usr/local/spark/dataset/norm")
val parsedData = Data.rdd.map(s => Vectors.dense(s.getDouble(1),s.getDouble(2))).cache()
import org.apache.spark.mllib.clustering.KMeans
val numClusters = 30
val numIteration = 1
val userClusterModel = KMeans.train(parsedData, numClusters, numIteration)
val userfeature1 = parsedData.first
val userCost = userClusterModel.computeCost(parsedData)
println("WSSSE for users: " + userCost)
How to solve this error?
I believe you are using https://spark.apache.org/docs/latest/mllib-clustering.html#k-means as a reference to build your K-Means model.
In the example
val data = sc.textFile("data/mllib/kmeans_data.txt")
val parsedData = data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache()
data is of type org.apache.spark.rdd.RDD. In your case, sqlContext.read.parquet returns a DataFrame, so you would have to convert the DataFrame to an RDD to perform that kind of row-level parsing.
To convert from a DataFrame to an RDD you can use the sample below as a reference:
val rows: RDD[Row] = df.rdd
// Adjust the ordinals and getters (getInt/getDouble/...) to match the actual parquet schema
val parsedData = Data.rdd.map(s => Vectors.dense(s.getInt(0), s.getDouble(1))).cache()
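If the ArrayIndexOutOfBoundsException persists after the conversion, it usually means the row has no field at that ordinal. Selecting the feature columns by name first avoids guessing the positions; a hedged sketch, where col1 and col2 are placeholder column names (check the real ones with Data.printSchema()):
// Placeholder column names; replace them with the actual ones from Data.printSchema()
val parsedData = Data
  .select("col1", "col2")
  .rdd
  .map(r => Vectors.dense(r.getDouble(0), r.getDouble(1)))
  .cache()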

How to calculate euclidean distance of each row in a dataframe to a constant reference array

I have a dataframe which is created from parquet files and has 512 columns (all float values).
I am trying to calculate euclidean distance of each row in my dataframe to a constant reference array.
My development environment is Zeppelin 0.7.3 with Spark 2.1 and Scala. Here are the Zeppelin paragraphs I run:
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors
//Create dataframe from parquet file
val filePath = "/tmp/vector.parquet/*.parquet"
val df = spark.read.parquet(filePath)
//Create assembler and vectorize df
val assembler = new VectorAssembler()
.setInputCols(df.columns)
.setOutputCol("features")
val training = assembler.transform(df)
//Create udf
val eucDisUdf = udf((features: Vector, myvec: Vector) => Vectors.sqdist(features, myvec))
//Create ref vector
val myScalaVec = Vectors.dense(Array.fill(512)(25.44859))
val distDF = training2.withColumn("euc", eucDisUdf($"features", myScalaVec))
This code gives the following error for the eucDisUdf call:
error: type mismatch; found : org.apache.spark.ml.linalg.Vector
required: org.apache.spark.sql.Column
I would appreciate any idea of how to eliminate this error and compute the distances properly in Scala.
I think you can use currying to achieve that:
def eucDisUdf(myvec:Vector) = udf((features: Vector) => Vectors.sqdist(features, myvec))
val myScalaVec = Vectors.dense(Array.fill(512)(25.44859))
val distDF = training2.withColumn("euc", eucDisUdf(myScalaVec)($"features"))
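For completeness, a self-contained version of that idea; the imports are the usual Spark 2.x ones, and note that Vectors.sqdist returns the squared distance, so wrap it in math.sqrt if you want the true Euclidean distance:
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.functions.{col, udf}

// The reference vector is captured in the closure, so only the column is passed at call time
def eucDisUdf(myvec: Vector) =
  udf((features: Vector) => math.sqrt(Vectors.sqdist(features, myvec)))

val myScalaVec = Vectors.dense(Array.fill(512)(25.44859))

// training2 is the assembled DataFrame from the question
val distDF = training2.withColumn("euc", eucDisUdf(myScalaVec)(col("features")))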

Spark MLib - Create LabeledPoint from RDD[Vector] features and RDD[Vector] label

I am building a training set using two text files representing documents and labels.
Documents.txt
hello world
hello mars
Labels.txt
0
1
I have read in these files and converted my document data to a tf-idf weighted term-document matrix, represented as an RDD[Vector]. I have also read in and created an RDD[Vector] for my labels:
val docs: RDD[Seq[String]] = sc.textFile("Documents.txt").map(_.split(" ").toSeq)
val labs: RDD[Vector] = sc.textFile("Labels.txt")
.map(s => Vectors.dense(s.split(',').map(_.toDouble)))
val hashingTF = new HashingTF()
val tf: RDD[Vector] = hashingTF.transform(docs)
tf.cache()
val idf = new IDF(minDocFreq = 3).fit(tf)
val tfidf: RDD[Vector] = idf.transform(tf)
I would like to use tfidf and labs to create an RDD[LabeledPoint], but I am not sure how to apply a mapping with two different RDDs. Is this even possible/efficient, or do I need to rethink my approach?
One way to handle this is to join based on indices:
import org.apache.spark.RangePartitioner
// Add indices
val tfidfIndexed = tfidf.zipWithIndex.map(_.swap)
val labelsIndexed = labs.zipWithIndex.map(_.swap)

// Create a range partitioner on the larger RDD
val partitioner = new RangePartitioner(tfidfIndexed.partitions.size, tfidfIndexed)

// Join with the custom partitioner
labelsIndexed.join(tfidfIndexed, partitioner).values
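From there, a hedged sketch of turning the joined pairs into LabeledPoints, assuming each label vector holds a single value, as in Labels.txt:
import org.apache.spark.mllib.regression.LabeledPoint

// .values is an RDD[(Vector, Vector)]: (label vector, tf-idf vector)
val labeledPoints = labelsIndexed
  .join(tfidfIndexed, partitioner)
  .values
  .map { case (label, features) => LabeledPoint(label(0), features) }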

How to convert a map to Spark's RDD

I have a data set which is in the form of some nested maps, and its Scala type is:
Map[String, (LabelType,Map[Int, Double])]
The first String key is a unique identifier for each sample, and the value is a tuple that contains the label (which is -1 or 1), and a nested map which is the sparse representation of the non-zero elements which are associated with the sample.
I would like to load this data into Spark (using MLUtils) and train and test some machine learning algorithms.
It's easy to write this data into a file with LibSVM's sparse encoding, and then load it in Spark:
writeMapToLibSVMFile(data_map,"libsvm_data.txt") // Implemented somewhere else
val conf = new SparkConf().setAppName("DecisionTree").setMaster("local[4]")
val sc = new SparkContext(conf)
// Load and parse the data file.
val data = MLUtils.loadLibSVMFile(sc, "libsvm_data.txt")
// Split the data into training and test sets
val splits = data.randomSplit(Array(0.7, 0.3))
val (trainingData, testData) = (splits(0), splits(1))
// Train a DecisionTree model.
I know it should be as easy to directly load the data variable from data_map, but I don't know how.
Any help is appreciated!
I guess you want something like this:
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
// If you know this upfront, otherwise it can be computed
// using flatMap
// trainMap.values.flatMap(_._2.keys).max + 1
val nFeatures: Int = ???
val trainMap = Map(
  "x001" -> (-1, Map(0 -> 1.0, 3 -> 5.0)),
  "x002" -> (1, Map(2 -> 5.0, 3 -> 6.0)))

val trainRdd: RDD[(String, LabeledPoint)] = sc
  // Convert the Map to a Seq so it can be passed to parallelize
  .parallelize(trainMap.toSeq)
  .map { case (id, (labelInt, values)) =>
    // Convert the nested map to a Seq so it can be passed to Vectors.sparse
    val features = Vectors.sparse(nFeatures, values.toSeq)
    // Convert the label to a Double so it can be used for LabeledPoint
    val label = labelInt.toDouble
    (id, LabeledPoint(label, features))
  }
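If the String identifiers are not needed for training, MLlib's trainers just want the RDD[LabeledPoint] part, which is a one-liner from here:
// Drop the ids when feeding an MLlib algorithm such as DecisionTree
val labeledPoints: RDD[LabeledPoint] = trainRdd.values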
It can be done in two ways:
1. Load the file you already wrote and map each line to an object: sc.textFile("libsvm_data.txt").map(s => createObject())
2. Convert the map into a collection of objects and use sc.parallelize()
The first one is preferable.

How to change RowMatrix into Array in Spark or export it as a CSV?

I've got this code in Scala:
val mat: CoordinateMatrix = new CoordinateMatrix(data)
val rowMatrix: RowMatrix = mat.toRowMatrix()
val svd: SingularValueDecomposition[RowMatrix, Matrix] = rowMatrix.computeSVD(100, computeU = true)
val U: RowMatrix = svd.U // The U factor is a RowMatrix.
val S: Vector = svd.s // The singular values are stored in a local dense vector.
val V: Matrix = svd.V // The V factor is a local dense matrix.
val uArray: Array[Double] = U.toArray // doesn't work, because there is no toArray method on the RowMatrix type
val sArray: Array[Double] = S.toArray // works fine
val vArray: Array[Double] = V.toArray // works fine
How can I change U into uArray or a similar type that could be written out to a CSV file?
That's a basic operation. Here is what you have to do, considering that U is a RowMatrix:
val U = svd.U
rows is a RowMatrix method that gives you the content of the matrix as an RDD[Vector], one Vector per row.
You just need to call rows on your RowMatrix and map each Vector to an Array that you concatenate into a String, which gives you an RDD[String].
val rdd = U.rows.map( x => x.toArray.mkString(","))
All you'll have to do now is to save that RDD:
rdd.saveAsTextFile(path)
It works:
import java.io.PrintWriter

def exportRowMatrix(matrix: RDD[String], fileName: String) = {
  val pw = new PrintWriter(fileName)
  // collect() brings every row to the driver, so this only suits matrices that fit in driver memory
  matrix.collect().foreach(line => pw.println(line))
  pw.flush()
  pw.close()
}
val rdd = U.rows.map( x => x.toArray.mkString(","))
exportRowMatrix(rdd, "U.csv")
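If a single CSV file is needed without collecting everything to the driver, another option is to coalesce to one partition before saving (the output path here is just an example):
// Produces a single part file under the given directory
U.rows
  .map(_.toArray.mkString(","))
  .coalesce(1)
  .saveAsTextFile("/tmp/U_csv")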