Convert bipartite graph to adjacency matrix in Spark Scala

I'm trying to convert an edge list which is in the following format:
data = [('a', 'developer'),
('b', 'tester'),
('b', 'developer'),
('c','developer'),
('c', 'architect')]
where the adjacency matrix will be in the form of

   developer  tester  architect
a      1        0        0
b      1        1        0
c      1        0        1
I want to store the matrix in the following format
1 0 0
1 1 0
1 0 1
I've tried it using GraphX:
def pageHash(title: String) = title.toLowerCase.replace(" ", "").hashCode.toLong

val edges: RDD[Edge[String]] = sc.textFile("/user/query.csv").map { line =>
  val row = line.split(",")
  Edge(pageHash(row(0)), pageHash(row(1)), "1")
}
val graph: Graph[Int, String] = Graph.fromEdges(edges, defaultValue = 1)
I'm able to create the graph but not able to convert it to an adjacency matrix representation.

One possible way to approach this is the following:
Convert RDD to DataFrame:
val rdd = sc.parallelize(Seq(
  ("a", "developer"), ("b", "tester"), ("b", "developer"),
  ("c", "developer"), ("c", "architect")))

val df = rdd.toDF("row", "col")
Index columns:
import org.apache.spark.ml.feature.StringIndexer
val indexers = Seq("row", "col").map(x =>
  new StringIndexer().setInputCol(x).setOutputCol(s"${x}_idx").fit(df)
)
Transform data and create RDD[MatrixEntry]:
import org.apache.spark.sql.functions.lit
import org.apache.spark.mllib.linalg.distributed.{MatrixEntry, CoordinateMatrix}
val entries = indexers.foldLeft(df)((df, idx) => idx.transform(df))
  .select($"row_idx", $"col_idx", lit(1.0))
  .as[MatrixEntry] // Spark 1.6+. For < 1.6 map manually
  .rdd
Create the matrix:
val mat = new CoordinateMatrix(entries)
This matrix can be further converted to any other type of distributed matrix including RowMatrix and IndexedRowMatrix.
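For instance, to get the plain rows asked for in the question, here is a minimal sketch (assuming the mat created above; note the column order follows StringIndexer's frequency-based indices, not necessarily developer/tester/architect):
// Convert to an IndexedRowMatrix and print each row as space-separated 0/1 values.
// Entries missing from the CoordinateMatrix become zeros in the dense arrays.
val indexed = mat.toIndexedRowMatrix()

indexed.rows
  .sortBy(_.index)                                   // keep the row order stable
  .map(_.vector.toArray.map(_.toInt).mkString(" "))  // e.g. "1 0 0"
  .collect()
  .foreach(println)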

Related

How to get RDD values that exist in an array?

I have an RDD[(Int, Double)]
and an Array[Int], and I want to get a new RDD[(Int, Double)] containing only those Ints that also exist in the array.
E.g. if my array is [0, 1, 2] and my RDD is (1, 4.2), (5, 4.3), I want to get only (1, 4.2) as the output RDD.
I am thinking about using filter with a function that iterates over the array, does the comparison and returns true/false, but I am not sure if that is the Spark way of doing it.
Something like:
val newrdd = rdd.filter(x => f(x._1, array))
where
def f(x: Int, y: Array[Int]): Boolean = {
  var z = false
  for (a <- 0 to y.length - 1) {
    if (x == y(a)) {
      z = true
    }
  }
  z
}
//Input rdd
val rdd = sc.parallelize(Seq((1,4.2),(5,4.3)))
//array, convert to rdd
val arrRdd = sc.parallelize(Array(0,1,2))
//convert rdd and arrRdd to dataframe
val arrDF = arrRdd.toDF()
val df = rdd.toDF()
//do join and again convert it to rdd
df.join(arrDF,df.col("_1") === arrDF.col("value"),"leftsemi").rdd.collect
//output Array([1,4.2])
Try this:
rdd.filter(x => Array(0,1,2).contains(x._1)).collect.foreach(println)
Output:
(1,4.2)
val acceptableValues = array.toSet
rdd.filter { case (x, _) => acceptableValues(x) }
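The last snippet builds a Set for constant-time lookups. If the array is large, broadcasting that set is a small extra step; a sketch, assuming the rdd and array from the question:
// Broadcast the lookup set once instead of shipping it inside every task closure.
val acceptable = sc.broadcast(array.toSet)
val newRdd = rdd.filter { case (k, _) => acceptable.value(k) }

newRdd.collect().foreach(println) // (1,4.2)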

Compute the maximum length assigned to each element using scala

For example, this is the content in a file:
20,1,helloworld,alaaa
2,3,world,neww
1,223,ala,12341234
Desired output"
0-> 2
1-> 3
2-> 10
3-> 8
I want to find the maximum length assigned to each element (i.e. to each column).
It's possible to extend this to any number of columns. First read the file as a dataframe:
val df = spark.read.csv("path")
Then create an SQL expression for each column and evaluate it with expr:
import org.apache.spark.sql.functions.{array, expr}

val cols = df.columns.map(c => s"max(length(cast($c as String)))").map(expr(_))
Select the new columns as an array and convert to a Map:
df.select(array(cols: _*)).as[Seq[Int]].collect()
  .head
  .zipWithIndex.map(_.swap)
  .toMap
This should give you the desired Map.
Map(0 -> 2, 1 -> 3, 2 -> 10, 3 -> 8)
Update:
The OP's example suggests that all rows have the same number of columns.
Using Spark SQL with max(length()) on the DataFrame columns is the idea suggested in this answer.
You can do:
val xx = Seq(
  ("20", "1", "helloworld", "alaaa"),
  ("2", "3", "world", "neww"),
  ("1", "223", "ala", "12341234")
).toDF("a", "b", "c", "d")
xx.registerTempTable("yy")
spark.sql("select max(length(a)), max(length(b)), max(length(c)), max(length(d)) from yy")
I would recommend using RDD's aggregate method:
val rdd = sc.textFile("/path/to/textfile").
  map(_.split(","))
// rdd.collect
// res1: Array[Array[String]] = Array(
//   Array(20, 1, helloworld, alaaa), Array(2, 3, world, neww), Array(1, 223, ala, 12341234)
// )

val seqOp = (m: Array[Int], r: Array[String]) =>
  (r zip m).map( t => Seq(t._1.length, t._2).max )

val combOp = (m1: Array[Int], m2: Array[Int]) =>
  (m1 zip m2).map( t => Seq(t._1, t._2).max )

val size = rdd.first.size

rdd.
  aggregate( Array.fill[Int](size)(0) )( seqOp, combOp ).
  zipWithIndex.map(_.swap).
  toMap
// res2: scala.collection.immutable.Map[Int,Int] = Map(0 -> 2, 1 -> 3, 2 -> 10, 3 -> 8)
Note that aggregate takes:
an array of 0's (of size equal to rdd's row size) as the initial value,
a function seqOp for calculating maximum string lengths within a partition, and,
another function combOp to combine results across partitions for the final maximum values.
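Under the same assumption of equal-length rows, the maxima can also be sketched with a plain map/reduce, which some may find easier to read than aggregate:
// Per-column maximum string lengths via map + reduce (element-wise max).
val maxLens = rdd
  .map(_.map(_.length))
  .reduce((a, b) => (a zip b).map { case (x, y) => math.max(x, y) })

maxLens.zipWithIndex.map(_.swap).toMap
// Map(0 -> 2, 1 -> 3, 2 -> 10, 3 -> 8)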

How to transform a dense matrix to rdd in Scala Spark?

I have dense matrix:
-0.1503191229976037 -0.17794560268465542 0.3372516173766848
-0.6265768782935162 -0.6986084179343495 -1.6553741696973772
How do I convert it to RDD of format:
0, 0, -0.1503191229976037
0, 1, -0.17794560268465542
0, 2, 0.3372516173766848
1, 0, -0.6265768782935162
1, 1, -0.6986084179343495
1, 2, -1.6553741696973772
The first two values are indices.
The type of my input matrix is:
org.apache.spark.mllib.linalg.DenseMatrix
The expected output type is: org.apache.spark.rdd.RDD[scala.Tuple2[scala.Tuple2[Int, Int], Double]]
How do I do it on Spark using Scala?
Assuming that your vectors are represented by the actual data structure Vector:
val vectors: List[Vector[Double]] = ???

val vecsWithIndices = for {
  (vIdx, vec) <- Stream.from(0).zip(vectors)
  i <- 0 until 3
} yield (vIdx, i, vec(i))

val rdd = sc.parallelize(vecsWithIndices)
The Stream.from(0) generates the index of the vector, and i runs over the components of the vector.
Solved it like this:
val denseMatrix = for {
  i <- 0 to 1 // 2 rows in the example matrix
  j <- 0 to 2 // 3 columns
} yield ((i, j), z.apply(i, j))

val rdd = sc.parallelize(denseMatrix)
Please let me know if there is a better way. Thank you.
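A slightly more general sketch, assuming z is the DenseMatrix from the question, uses the matrix's own dimensions instead of hard-coded ranges:
// Enumerate all (row, col) index pairs from the matrix dimensions.
val entriesRdd = sc.parallelize(
  for {
    i <- 0 until z.numRows
    j <- 0 until z.numCols
  } yield ((i, j), z(i, j))
) // RDD[((Int, Int), Double)]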

Dropping multiple columns from Spark dataframe by Iterating through the columns from a Scala List of Column names

I have a dataframe which has around 400 columns, and I want to drop 100 of them as per my requirement.
So I have created a Scala List of 100 column names.
I then want to iterate through a for loop to actually drop a column in each iteration.
Below is the code.
final val dropList: List[String] = List("Col1", "Col2", ...., "Col100")
def drpColsfunc(inputDF: DataFrame): DataFrame = {
  for (i <- 0 to dropList.length - 1) {
    val returnDF = inputDF.drop(dropList(i))
  }
  return returnDF
}
val test_df = drpColsfunc(input_dataframe)
test_df.show(5)
If you just want to do nothing more complex than dropping several named columns, as opposed to selecting them by a particular condition, you can simply do the following:
df.drop("colA", "colB", "colC")
Answer:
import org.apache.spark.sql.Column

val colsToRemove = Seq("colA", "colB", "colC" /* , ... */)

val filteredDF = df.select(
  df.columns
    .filter(colName => !colsToRemove.contains(colName))
    .map(colName => new Column(colName)): _*
)
This should work fine:
val dropList: List[String] = ???
val df: DataFrame = ???

val test_df = df.drop(dropList: _*)
You can just do:
def dropColumns(inputDF: DataFrame, dropList: List[String]): DataFrame =
  dropList.foldLeft(inputDF)((df, col) => df.drop(col))
It will return the DataFrame without the columns passed in dropList.
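For example, hypothetical usage with the names from the question (input_dataframe and dropList are the placeholders defined there):
val test_df = dropColumns(input_dataframe, dropList)
test_df.show(5)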
As an example (of what's happening behind the scenes), let me put it this way.
scala> val list = List(0, 1, 2, 3, 4, 5, 6, 7)
list: List[Int] = List(0, 1, 2, 3, 4, 5, 6, 7)
scala> val removeThese = List(0, 2, 3)
removeThese: List[Int] = List(0, 2, 3)
scala> removeThese.foldLeft(list)((l, r) => l.filterNot(_ == r))
res2: List[Int] = List(1, 4, 5, 6, 7)
The returned list (in our case, think of your DataFrame in its place) is the latest filtered result. After each fold step, that latest result is passed on to the next invocation of the folding function.
You can use the drop operation to drop multiple columns. If you have the column names you need to drop in a list, you can pass it using :_* after the list variable, and it will drop all the columns in that list.
Scala:
val df = Seq(("One","Two","Three"),("One","Two","Three"),("One","Two","Three")).toDF("Name","Name1","Name2")
val columnstoDrop = List("Name","Name1")
val df1 = df.drop(columnstoDrop:_*)
Python:
In python you can use the * operator to do the same stuff.
data = [("One", "Two","Three"), ("One", "Two","Three"), ("One", "Two","Three")]
columns = ["Name","Name1","Name2"]
df = spark.sparkContext.parallelize(data).toDF(columns)
columnstoDrop = ["Name","Name1"]
df1 = df.drop(*columnstoDrop)
Now in df1 you get the dataframe with only one column, i.e. Name2.

Convert local Vectors to RDD[Vector]

I'm new to Spark and Scala and I'm trying to read its documentation on MLlib.
The tutorial on http://spark.apache.org/docs/1.4.0/mllib-data-types.html,
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.linalg.distributed.RowMatrix
val rows: RDD[Vector] = ... // an RDD of local vectors
// Create a RowMatrix from an RDD[Vector].
val mat: RowMatrix = new RowMatrix(rows)
// Get its size.
val m = mat.numRows()
val n = mat.numCols()
does not show how to construct an RDD[Vector] (variable rows) from a list of local vectors.
So for example, I have executed (as part of my exploration) in spark-shell
val v0: Vector = Vectors.dense(1.0, 0.0, 3.0)
val v1: Vector = Vectors.sparse(3, Array(1), Array(2.5))
val v2: Vector = Vectors.sparse(3, Seq((0, 1.5),(1, 1.8)))
which if 'merged' will look like this matrix
1.0 0.0 3.0
0.0 2.5 0.0
1.5 1.8 0.0
So, how do I transform Vectors v0, v1, v2 to rows?
By using the SparkContext's parallelize method on a sequence we can achieve what you want. Since you have already created the vectors, all that is required is to bring them into a sequence and parallelize them, as shown below.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix
val v0 = Vectors.dense(1.0, 0.0, 3.0)
val v1 = Vectors.sparse(3, Array(1), Array(2.5))
val v2 = Vectors.sparse(3, Seq((0, 1.5), (1, 1.8)))
val rows = sc.parallelize(Seq(v0, v1, v2))
val mat: RowMatrix = new RowMatrix(rows)
// Get its size.
val m = mat.numRows()
val n = mat.numCols()
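As a quick sanity check in the same spark-shell session, the distributed rows can be collected back and printed (dense and sparse vectors print in their own MLlib formats):
println(s"matrix is $m x $n") // 3 x 3
mat.rows.collect().foreach(println)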