Spark 1.5.1, MLLib Random Forest Probability - scala

I am using Spark 1.5.1 with MLlib. I built a random forest model with MLlib and am now using that model to make predictions. I can get the predicted category (0.0 or 1.0) using the .predict function, but I can't find a function to retrieve the probability (see the attached screenshot). I thought the Spark 1.5.1 random forest would provide the probability; am I missing something here?

Unfortunately, this feature is not available in the older RDD-based MLlib API of Spark 1.5.1.
You can, however, find it in the more recent Pipeline API in Spark MLlib 2.x as RandomForestClassifier:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.{IndexToString, StringIndexer, VectorIndexer}
import org.apache.spark.mllib.util.MLUtils

// Load and parse the data file, converting it to a DataFrame.
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt").toDF

// Index labels, adding metadata to the label column.
// Fit on whole dataset to include all labels in index.
val labelIndexer = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("indexedLabel")
  .fit(data)

// Automatically identify categorical features, and index them.
// Set maxCategories so features with > 4 distinct values are treated as continuous.
val featureIndexer = new VectorIndexer()
  .setInputCol("features")
  .setOutputCol("indexedFeatures")
  .setMaxCategories(4)
  .fit(data)

// Split the data into training and test sets (30% held out for testing).
val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))

// Train a RandomForest model.
val rf = new RandomForestClassifier()
  .setLabelCol(labelIndexer.getOutputCol)
  .setFeaturesCol(featureIndexer.getOutputCol)
  .setNumTrees(10)

// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
  .setInputCol("prediction")
  .setOutputCol("predictedLabel")
  .setLabels(labelIndexer.labels)

// Chain indexers and forest in a Pipeline.
val pipeline = new Pipeline()
  .setStages(Array(labelIndexer, featureIndexer, rf, labelConverter))

// Fit model. This also runs the indexers.
val model = pipeline.fit(trainingData)

// Make predictions.
val predictions = model.transform(testData)
// predictions: org.apache.spark.sql.DataFrame = [label: double, features: vector, indexedLabel: double, indexedFeatures: vector, rawPrediction: vector, probability: vector, prediction: double, predictedLabel: string]
predictions.show(10)
// +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
// |label| features|indexedLabel| indexedFeatures|rawPrediction|probability|prediction|predictedLabel|
// +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
// | 0.0|(692,[124,125,126...| 1.0|(692,[124,125,126...| [0.0,10.0]| [0.0,1.0]| 1.0| 0.0|
// | 0.0|(692,[124,125,126...| 1.0|(692,[124,125,126...| [1.0,9.0]| [0.1,0.9]| 1.0| 0.0|
// | 0.0|(692,[129,130,131...| 1.0|(692,[129,130,131...| [1.0,9.0]| [0.1,0.9]| 1.0| 0.0|
// | 0.0|(692,[154,155,156...| 1.0|(692,[154,155,156...| [1.0,9.0]| [0.1,0.9]| 1.0| 0.0|
// | 0.0|(692,[154,155,156...| 1.0|(692,[154,155,156...| [1.0,9.0]| [0.1,0.9]| 1.0| 0.0|
// | 0.0|(692,[181,182,183...| 1.0|(692,[181,182,183...| [1.0,9.0]| [0.1,0.9]| 1.0| 0.0|
// | 1.0|(692,[99,100,101,...| 0.0|(692,[99,100,101,...| [4.0,6.0]| [0.4,0.6]| 1.0| 0.0|
// | 1.0|(692,[123,124,125...| 0.0|(692,[123,124,125...| [10.0,0.0]| [1.0,0.0]| 0.0| 1.0|
// | 1.0|(692,[124,125,126...| 0.0|(692,[124,125,126...| [10.0,0.0]| [1.0,0.0]| 0.0| 1.0|
// | 1.0|(692,[125,126,127...| 0.0|(692,[125,126,127...| [10.0,0.0]| [1.0,0.0]| 0.0| 1.0|
// +-----+--------------------+------------+--------------------+-------------+-----------+----------+--------------+
// only showing top 10 rows
Note: this example is from the official documentation of Spark MLlib's ML Random forest classifier.
Here is an explanation of some of the output columns:
predictionCol represents the predicted label.
rawPredictionCol represents a Vector of length # classes, with the counts of training instance labels at the tree node which makes the prediction (available for classification only).
probabilityCol represents the probability Vector of length # classes, equal to rawPrediction normalized to a multinomial distribution (available for classification only).
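If you need the probability of one particular class as a plain numeric column (for thresholding or ranking, say), you can pull it out of the probability vector with a small UDF. A minimal sketch, assuming Spark 2.x where the ML probability column holds an org.apache.spark.ml.linalg.Vector; the column name p1 and the helper are illustrative only:
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf

// Probability of the class with index 1 (in the indexed-label space)
val classOneProbability = udf { probability: Vector => probability(1) }

predictions
  .withColumn("p1", classOneProbability(predictions("probability")))
  .select("predictedLabel", "probability", "p1")
  .show(5)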

You can't directly get the classification probabilities, but it is relatively easy to calculate them yourself. A random forest is an ensemble of trees, and its prediction is the majority vote of those trees, so the probability of a class can be estimated as the number of trees voting for that class divided by the total number of trees.
Since the RandomForestModel in MLlib gives you access to the trained trees, it is easy to do this yourself. The following code computes the probability of the positive class for a binary classification problem; its generalization to multi-class classification is straightforward.
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.rdd.RDD

// Probability of the positive class = fraction of trees that predict 1.0
def predict(points: RDD[LabeledPoint], model: RandomForestModel) = {
  val numTrees = model.trees.length
  val trees = points.sparkContext.broadcast(model.trees)
  points.map { point =>
    trees.value
      .map(_.predict(point.features))
      .sum / numTrees
  }
}
For the multi-class case you only need to replace the map with .map(_.predict(point.features) -> 1.0), group by key instead of summing, and finally take the class with the maximum vote count; a sketch follows.
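Here is a rough sketch of that multi-class variant (the name predictProbabilities is illustrative; it assumes the same RDD-based RandomForestModel as above). For each point it counts the votes per class across all trees and normalises by the number of trees; the key with the maximum value is the predicted class.
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.rdd.RDD

def predictProbabilities(points: RDD[LabeledPoint], model: RandomForestModel): RDD[Map[Double, Double]] = {
  val numTrees = model.trees.length
  val trees = points.sparkContext.broadcast(model.trees)
  points.map { point =>
    trees.value
      .map(tree => tree.predict(point.features) -> 1.0) // one vote per tree
      .groupBy(_._1)                                     // group votes by predicted class
      .map { case (label, votes) => label -> votes.map(_._2).sum / numTrees }
  }
}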

Related

Calculating Precision and Recall for specific threshold values

I want to set the threshold value of my logistic regression to 0.5 and get precision, recall, and F1 score at this value for a pipeline model, but
model.setThreshold(0.5)
gives me
value setThreshold is not a member of org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{PCA, VectorAssembler}
import spark.implicits._

val Array(train, test) = dataset
  .randomSplit(Array(0.8, 0.2), seed = 1234L)
  .map(_.cache())

val assembler = new VectorAssembler()
  .setInputCols(Array("label", "id", "features"))
  .setOutputCol("feature")

val pca = new PCA()
  .setInputCol("feature")
  .setK(2)
  .setOutputCol("pcaFeatures")

val classifier = new LogisticRegression()
  .setFeaturesCol("pcaFeatures")
  .setLabelCol("label")
  .setMaxIter(10)
  .setRegParam(0.3)
  .setElasticNetParam(0.8)

val pipeline = new Pipeline().setStages(Array(assembler, pca, classifier))
val model = pipeline.fit(train)

val predicted = model.transform(test)
predicted.show()

import org.apache.spark.ml.linalg.DenseVector
import org.apache.spark.sql.Row
val predictions = predicted
  .filter(row => row.getAs[Int]("label") == 1)
  .map(row => (row.getAs[Int]("label"), row.getAs[DenseVector]("probability")(0)))
predictions.show()

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
val predictionAndLabels = predicted
  .select($"label", $"prediction")
  .as[(Double, Double)]
  .rdd

val metrics = new BinaryClassificationMetrics(predictionAndLabels)

val precision = metrics.precisionByThreshold()
precision.foreach { case (t, p) =>
  println(s"Threshold is: $t, Precision is: $p")
}

val recall = metrics.recallByThreshold()
recall.foreach { case (t, r) =>
  println(s"Threshold is: $t, Recall is: $r")
}
+---+-------------------+-----+--------------------+--------------------+--------------------+--------------------+----------+
| id| features|label| feature| pcaFeatures| rawPrediction| probability|prediction|
+---+-------------------+-----+--------------------+--------------------+--------------------+--------------------+----------+
| 3|0.03731932516607228| 1|[1.0,3.0,0.037319...|[-3.0000000581646...|[-0.8840273374633...|[0.29234391132806...| 1.0|
| 7| 0.9636476860201426| 1|[1.0,7.0,0.963647...|[-7.0000000960209...|[-0.8831455606697...|[0.29252636578097...| 1.0|
| 8| 0.4766320058073684| 0|[0.0,8.0,0.476632...|[-8.0000000194785...|[0.87801311177017...|[0.70641031990863...| 0.0|
| 45| 0.1474318959104205| 1|[1.0,45.0,0.14743...|[-45.000000062664...|[-0.8839183791391...|[0.29236645302163...| 1.0|
|103| 0.3443839885873453| 1|[1.0,103.0,0.3443...|[-103.00000007071...|[-0.8837251994055...|[0.29240642125330...| 1.0|
How can I set the threshold value of my logistic regression model within a pipeline?
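The error message points at a possible direction: setThreshold lives on the LogisticRegression estimator (and on the fitted LogisticRegressionModel), not on the PipelineModel. A sketch, assuming the classifier is the last stage of the fitted pipeline:
import org.apache.spark.ml.classification.{LogisticRegression, LogisticRegressionModel}

// Option 1: set the threshold on the estimator before building the pipeline
val classifier = new LogisticRegression()
  .setFeaturesCol("pcaFeatures")
  .setLabelCol("label")
  .setThreshold(0.5)

// Option 2: pull the fitted LogisticRegressionModel back out of the PipelineModel
val lrModel = model.stages.last.asInstanceOf[LogisticRegressionModel]
lrModel.setThreshold(0.5)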

How to convert an MLlib matrix to a Spark DataFrame?

I want to pretty-print the result of a correlation in a Zeppelin notebook:
val Row(coeff: Matrix) = Correlation.corr(data, "features").head
One of the ways to achieve this is to convert the result into a DataFrame with each value in a separate column and call z.show().
However, looking into the Matrix api I don't see any way to do this.
Is there another straight forward way to achieve this?
Edit:
The DataFrame has 50 columns. Just converting to a string would not help, as the output gets truncated.
Using the toString method should be the easiest and fastest way if you simply want to print the matrix. You can change the output by passing the maximum number of lines to print as well as the maximum line width, and you can change the formatting by splitting on new lines and ",". For example:
import org.apache.spark.ml.linalg.Matrices // or org.apache.spark.mllib.linalg.Matrices for the RDD-based API

val matrix = Matrices.dense(2, 3, Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0))
matrix.toString
  .split("\n")
  .map(_.trim.split(" ").filter(_ != "").mkString("[", ",", "]"))
  .mkString("\n")
which will give the following:
[1.0,3.0,5.0]
[2.0,4.0,6.0]
However, if you want to convert the matrix to a DataFrame, the easiest way would be to first create an RDD and then use toDF().
val matrixRows = matrix.rowIter.toSeq.map(_.toArray)
val df = spark.sparkContext.parallelize(matrixRows).toDF("Row")
Then to put each value in a separate column you can do the following
val numOfCols = matrixRows.head.length
val df2 = (0 until numOfCols)
  .foldLeft(df)((df, num) => df.withColumn("Col" + num, $"Row".getItem(num)))
  .drop("Row")
df2.show(false)
Result using the example data:
+----+----+----+
|Col0|Col1|Col2|
+----+----+----+
|1.0 |3.0 |5.0 |
|2.0 |4.0 |6.0 |
+----+----+----+

Match Dataframe Categorical Variables in vector Spark Scala

I have been trying to follow the Stack Overflow example about creating data frames for the machine learning (ML) library in Spark Scala.
How to create correct data frame for classification in Spark ML
However, I cannot get the matching udf to work.
Syntax: "kinds of the type arguments (Vector,Int,Int,String,String) do
not conform to the expected kinds of the type parameters (type RT,type
A1,type A2,type A3,type A4). Vector's type parameters do not match
type RT's expected parameters: type Vector has one type parameter, but
type RT has none"
I need to create a dataframe to input into the logistic regression library. Source sample data example has:
Source, Amount, Account, Fraud
CACC1, 9120.50, 999, 0
CACC2, 3897.25, 999, 0
AMXCC1, -523, 999, 0
MASCC2, -8723.15, 999, 0
I suppose my desired output is:
+-------------------+-----+
| features|label|
+-------------------+-----+
|[1.0,9120.50,999] | 0.0|
|[1.0,3897.25,999] | 0.0|
|[2.0,-523.00,999] | 0.0|
|[0.0,-8723.15,999] | 0.0|
+-------------------+-----+
So far I have:
val df = sqlContext.sql("select * from prediction_test")
val df_2 = df.select("source","amount","account")
val toVec3 = udf[Vector,String,Int,Int] { (a,b,c) =>
val e3 = c match {
case "MASCC2" => 0
case "CACC1" => 1
case "AMXCC1" => 2
}
Vectors.dense(e1, b, c)
}
val encodeLabel = udf[Double, Int](_match{case "0" => 0.0 case "1" => 1.0})
val df_3 = df_2.withColumn("features", toVec3(df_2("source"),df_2("amount"),df_2("account")).withColumn("label", encodeLabel(df("fraud"))).select("features","label")
Using Spark 2.3.1, I suggest the following code for a classification-ready Spark ML Pipeline. If you want to include a classification object in the Pipeline, you just need to add it where I point out below. ClassificationPipeline returns a PipelineModel; once you transform with this model you get classification-ready columns named features and label.
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.feature.{OneHotEncoderEstimator, StandardScaler, StringIndexer, VectorAssembler}
import org.apache.spark.sql.DataFrame

// Handles categorical features
def stringIndexerPipeline(inputCol: String): (Pipeline, String) = {
  val indexer = new StringIndexer()
    .setHandleInvalid("skip")
    .setInputCol(inputCol)
    .setOutputCol(inputCol + "_indexed")
  val pipeline = new Pipeline().setStages(Array(indexer))
  (pipeline, inputCol + "_indexed")
}

// Classification Pipeline Function
def ClassificationPipeline(df: DataFrame): PipelineModel = {

  // Preprocessing categorical features
  val (SourcePipeline, Source_indexed) = stringIndexerPipeline("Source")

  // Use StringIndexer output as input for OneHotEncoderEstimator
  val oneHotEncoder = new OneHotEncoderEstimator()
    //.setDropLast(true)
    //.setHandleInvalid("skip")
    .setInputCols(Array("Source_indexed"))
    .setOutputCols(Array("Source_indexedVec"))

  // Gather the features that will be passed through the pipeline
  val inputCols = oneHotEncoder.getOutputCols ++ Array("Amount", "Account")

  // Put all inputs in a column as a vector
  val vectorAssembler = new VectorAssembler()
    .setInputCols(inputCols)
    .setOutputCol("featureVector")

  // Scale the vector column
  val standardScaler = new StandardScaler()
    .setInputCol("featureVector")
    .setOutputCol("features")
    .setWithStd(true)
    .setWithMean(false)

  // Create a StringIndexer for the label column
  val labelIndexer = new StringIndexer()
    .setHandleInvalid("skip")
    .setInputCol("Fraud")
    .setOutputCol("label")

  // create the classification object here
  // val classificationObject = new ....

  // Create a pipeline
  val pipeline = new Pipeline().setStages(
    Array(SourcePipeline, oneHotEncoder, vectorAssembler, standardScaler, labelIndexer/*, classificationObject*/))

  pipeline.fit(df)
}
val pipelineModel = ClassificationPipeline(df)
val transformedDF = pipelineModel.transform(df)
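For example, a quick sanity check of the result (just a usage sketch with the column names defined above):
transformedDF.select("features", "label").show(5, false)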

Prepare data for MultilayerPerceptronClassifier in scala

Please keep in mind that I'm new to Scala.
This is the example I am trying to follow:
https://spark.apache.org/docs/1.5.1/ml-ann.html
It uses this dataset:
https://github.com/apache/spark/blob/master/data/mllib/sample_multiclass_classification_data.txt
I have prepared my .csv using the code below to get a data frame for classification in Scala.
//imports for ML
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.sql.Row
//imports for transformation
import sqlContext.implicits._
import com.databricks.spark.csv._
import org.apache.spark.mllib.linalg.{Vector, Vectors}
//load data
val data2 = sqlContext.csvFile("/Users/administrator/Downloads/ds_15k_10-2.csv")
//Rename any one column to features
//val df2 = data.withColumnRenamed("ip_crowding", "features")
val DF2 = data2.select("gst_id_matched","ip_crowding","lat_long_dist");
scala> DF2.take(2)
res6: Array[org.apache.spark.sql.Row] = Array([0,0,0], [0,0,1628859.542])
//define doublelfunc
val toDouble = udf[Double, String]( _.toDouble)
//Convert all to double
val featureDf = DF2
  .withColumn("gst_id_matched", toDouble(DF2("gst_id_matched")))
  .withColumn("ip_crowding", toDouble(DF2("ip_crowding")))
  .withColumn("lat_long_dist", toDouble(DF2("lat_long_dist")))
  .select("gst_id_matched", "ip_crowding", "lat_long_dist")
//Define the format
val toVec4 = udf[Vector, Double, Double] { (v1, v2) => Vectors.dense(v1, v2) }
//Format for features which is gst_id_matched
val encodeLabel = udf[Double, String](_ match { case "0.0" => 0.0 case "1.0" => 1.0 })
//Transformed dataset
val df = featureDf
  .withColumn("features", toVec4(featureDf("ip_crowding"), featureDf("lat_long_dist")))
  .withColumn("label", encodeLabel(featureDf("gst_id_matched")))
  .select("label", "features")
val splits = df.randomSplit(Array(0.6, 0.4), seed = 1234L)
val train = splits(0)
val test = splits(1)
// specify layers for the neural network:
// input layer of size 4 (features), two intermediate of size 5 and 4 and output of size 3 (classes)
val layers = Array[Int](0, 0, 0, 0)
// create the trainer and set its parameter
val trainer = new MultilayerPerceptronClassifier().setLayers(layers).setBlockSize(12).setSeed(1234L).setMaxIter(10)
// train the model
val model = trainer.fit(train)
The last line generates this error
15/11/21 22:46:23 ERROR Executor: Exception in task 1.0 in stage 11.0 (TID 15)
java.lang.ArrayIndexOutOfBoundsException: 0
My suspicions:
When I examine the dataset,it looks fine for classification
scala> df.take(2)
res3: Array[org.apache.spark.sql.Row] = Array([0.0,[0.0,0.0]], [0.0,[0.0,1628859.542]])
But the Apache example dataset is different, and my transformation does not give me what I need. Can someone please help me with the dataset transformation or with understanding the root cause of the problem?
This is what the apache dataset looks like:
scala> data.take(1)
res8: Array[org.apache.spark.sql.Row] = Array([1.0,(4,[0,1,2,3],[-0.222222,0.5,-0.762712,-0.833333])])
The source of your problem is a wrong definition of layers. When you use
val layers = Array[Int](0, 0, 0, 0)
it means you want a network with zero nodes in each layer, which simply doesn't make sense. Generally speaking, the number of neurons in the input layer should be equal to the number of features, and each hidden layer should contain at least one neuron.
Let's recreate your data, simplifying your code along the way:
import org.apache.spark.sql.functions.col

val df = sc.parallelize(Seq(
  ("0", "0", "0"), ("0", "0", "1628859.542")
)).toDF("gst_id_matched", "ip_crowding", "lat_long_dist")
Convert all columns to doubles:
val numeric = df
  .select(df.columns.map(c => col(c).cast("double").alias(c)): _*)
  .withColumnRenamed("gst_id_matched", "label")
Assemble features:
import org.apache.spark.ml.feature.VectorAssembler

val assembler = new VectorAssembler()
  .setInputCols(Array("ip_crowding", "lat_long_dist"))
  .setOutputCol("features")

val data = assembler.transform(numeric)
data.show
// +-----+-----------+-------------+-----------------+
// |label|ip_crowding|lat_long_dist| features|
// +-----+-----------+-------------+-----------------+
// | 0.0| 0.0| 0.0| (2,[],[])|
// | 0.0| 0.0| 1628859.542|[0.0,1628859.542]|
// +-----+-----------+-------------+-----------------+
Train and test network:
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier

val layers = Array[Int](2, 3, 5, 3) // Note: 2 neurons in the input layer

val trainer = new MultilayerPerceptronClassifier()
  .setLayers(layers)
  .setBlockSize(128)
  .setSeed(1234L)
  .setMaxIter(100)

val model = trainer.fit(data)
model.transform(data).show
// +-----+-----------+-------------+-----------------+----------+
// |label|ip_crowding|lat_long_dist| features|prediction|
// +-----+-----------+-------------+-----------------+----------+
// | 0.0| 0.0| 0.0| (2,[],[])| 0.0|
// | 0.0| 0.0| 1628859.542|[0.0,1628859.542]| 0.0|
// +-----+-----------+-------------+-----------------+----------+
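If you also want a quality metric, the MulticlassClassificationEvaluator imported in the question can be applied to the predictions. A sketch (run here on the same two rows used for fitting, so the number is not meaningful):
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("label")
  .setPredictionCol("prediction")
  .setMetricName("f1")

println("F1 = " + evaluator.evaluate(model.transform(data)))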

Stratified sampling in Spark

I have a data set which contains user and purchase data. Here is an example, where the first element is the userId, the second is the productId, and the third indicates a boolean.
(2147481832,23355149,1)
(2147481832,973010692,1)
(2147481832,2134870842,1)
(2147481832,541023347,1)
(2147481832,1682206630,1)
(2147481832,1138211459,1)
(2147481832,852202566,1)
(2147481832,201375938,1)
(2147481832,486538879,1)
(2147481832,919187908,1)
...
I want to make sure I only take 80% of each user's data and build an RDD, and take the remaining 20% and build another RDD. Let's call them train and test. I would like to stay away from using groupBy to start with, since it can create memory problems because the data set is large. What's the best way to do this?
I could do the following, but this will not give 80% of each user's data.
val percentData = data.map(x => ((math.random * 100).toInt, x._1, x._2, x._3))
val train = percentData.filter(x => x._1 < 80).values.repartition(10).cache()
One possible solution is in Holden's answer, and here are some other solutions:
Using RDDs:
You can use the sampleByKeyExact transformation, from the PairRDDFunctions class.
sampleByKeyExact(boolean withReplacement, scala.collection.Map fractions, long seed)
Return a subset of this RDD sampled by key (via stratified sampling) containing exactly math.ceil(numItems * samplingRate) for each stratum (group of pairs with the same key).
And this is how I would do it:
Considering the following list :
val seq = Seq(
(2147481832,23355149,1),(2147481832,973010692,1),(2147481832,2134870842,1),(2147481832,541023347,1),
(2147481832,1682206630,1),(2147481832,1138211459,1),(2147481832,852202566,1),(2147481832,201375938,1),
(2147481832,486538879,1),(2147481832,919187908,1),(214748183,919187908,1),(214748183,91187908,1)
)
I would create a pair RDD, mapping all the users as keys:
val data = sc.parallelize(seq).map(x => (x._1,(x._2,x._3)))
Then I'll set up fractions for each key as follows, since sampleByKeyExact takes a Map of fractions per key:
val fractions = data.map(_._1).distinct.map(x => (x,0.8)).collectAsMap
What I have done here is map over the keys to find the distinct keys, and then associate each of them with a fraction equal to 0.8. I collect the whole thing as a Map.
To sample now :
import org.apache.spark.rdd.PairRDDFunctions
val sampleData = data.sampleByKeyExact(false, fractions, 2L)
or
val sampleData = data.sampleByKeyExact(withReplacement = false, fractions = fractions,seed = 2L)
You can check the counts on your data and on the data sample:
scala > data.count
// [...]
// res10: Long = 12
scala > sampleData.count
// [...]
// res11: Long = 10
Using DataFrames:
Let's consider the same data (seq) from the previous section.
val df = seq.toDF("keyColumn","value1","value2")
df.show
// +----------+----------+------+
// | keyColumn| value1|value2|
// +----------+----------+------+
// |2147481832| 23355149| 1|
// |2147481832| 973010692| 1|
// |2147481832|2134870842| 1|
// |2147481832| 541023347| 1|
// |2147481832|1682206630| 1|
// |2147481832|1138211459| 1|
// |2147481832| 852202566| 1|
// |2147481832| 201375938| 1|
// |2147481832| 486538879| 1|
// |2147481832| 919187908| 1|
// | 214748183| 919187908| 1|
// | 214748183| 91187908| 1|
// +----------+----------+------+
We will need the underlying RDD to do that. We create tuples of the elements of this RDD by defining our key to be the first column:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row}

val data: RDD[(Int, Row)] = df.rdd.keyBy(_.getInt(0))

val fractions = data.map(_._1)
  .distinct
  .map(x => (x, 0.8))
  .collectAsMap

val sampleData: RDD[Row] = data.sampleByKeyExact(withReplacement = false, fractions, 2L)
  .values

val sampleDataDF: DataFrame = spark.createDataFrame(sampleData, df.schema) // use sqlContext.createDataFrame(...) instead for Spark 1.6
You can now check the counts on df and on the data sample:
scala > df.count
// [...]
// res9: Long = 12
scala > sampleDataDF.count
// [...]
// res10: Long = 10
Since Spark 1.5.0 you can use the DataFrameStatFunctions.sampleBy method:
df.stat.sampleBy("keyColumn", fractions, seed)
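A minimal self-contained sketch of that DataFrame route (note that sampleBy, unlike sampleByKeyExact, only approximates the per-key fractions); the keyFractions name is illustrative:
val keyFractions = df
  .select("keyColumn")
  .distinct
  .collect
  .map(row => row.getInt(0) -> 0.8)
  .toMap

val trainDF = df.stat.sampleBy("keyColumn", keyFractions, seed = 2L)
val testDF = df.except(trainDF) // the remaining ~20%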
Something like this may be well suited to something like "BlinkDB", but let's look at the question. There are two ways to interpret what you've asked:
1) You want 80% of your users, and you want all of the data for them.
2) You want 80% of each user's data.
For #1 you could do a map to get the user IDs, call distinct, and then sample 80% of them (you may want to look at kFold in MLUtils or BernoulliCellSampler). You can then filter your input data down to just the set of IDs you want; a sketch of this approach is given below.
For #2 you could look at BernoulliCellSampler and simply apply it directly.
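A rough sketch of interpretation #1, assuming data is the RDD of (userId, productId, flag) triples from the question: sample 80% of the distinct user IDs, then keep every row belonging to a sampled user.
val userIds = data.map(_._1).distinct
val sampledUsers = userIds.sample(withReplacement = false, fraction = 0.8, seed = 42L)
val sampledUserSet = data.sparkContext.broadcast(sampledUsers.collect.toSet)

val train = data.filter(x => sampledUserSet.value.contains(x._1))
val test = data.filter(x => !sampledUserSet.value.contains(x._1))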