I am trying to scatter plot the 2 features resulting from a PCA in the Spark ML library.
To be more precise, I am trying to convert result into something like this:
__________
id | X | Y
__________
1  |0.1|0.1
2  |0.2|0.2
3  |0.4|0.4
4  |0.3|0.3
...
from something like this:
__________
id | pca
__________
1  |[0.1,0.1]
2  |[0.2,0.2]
3  |[0.4,0.4]
4  |[0.3,0.3]
...
But it seems that Spark vectors aren't iterable, or something like that. I don't understand what is going on. If someone knows the answer, that would be great.
import org.apache.spark.ml.feature.{PCA, VectorAssembler}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.udf
import spark.implicits._
val convertToVector = udf((array: Array[Double]) => {
Vectors.dense(array.toArray)
})
val convertToDouble = udf((array: Array[Float]) => {
array.map(_.toDouble).toArray
})
val ds = model.userFactors.withColumn("features", convertToDouble($"features"))
val userMatrixDs = ds.withColumn("features", convertToVector($"features"))
//val df3 = assembler.transform(df2)
val pca = new PCA()
.setInputCol("features")
.setOutputCol("pca")
.setK(2)
.fit(userMatrixDs)
// Project vectors to the linear space spanned by the top 2 principal
// components, keeping the label
val result = pca.transform(userMatrixDs).select("id","pca");
result.show()
result.select(
result.id,
result.col("pca")[0].as("eigenVector1"),
result.col("pca")[1].as("eigenVector2")
)
.show()
Welcome to StackOverflow. Take a look at this example:
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{DoubleType, IntegerType, StructField, StructType}
val df = spark.createDataFrame(
spark.sparkContext.parallelize(Seq(Row(1, 1.0, 2.0))),
StructType(
List(
StructField("id", IntegerType),
StructField("one", DoubleType),
StructField("two", DoubleType)
)
))
import org.apache.spark.ml.linalg.Vector
import spark.implicits._
val assembler =
new VectorAssembler()
.setInputCols(Array("one", "two"))
.setOutputCol("vector")
val df0 = assembler.transform(df)
df0
.select("id", "vector")
.as[(Int, Vector)]
.map { case (id, vector) =>
val arr = vector.toArray
(id, arr(0), arr(1))
}
.select($"_1".as("id"), $"_2".as("pca_x"), $"_3".as("pca_y"))
First I create a Vector column with VectorAssembler and then extract the values by transforming it into a Dataset[(Int, Vector)]. With map you can easily manipulate the rows.
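The same idea can be applied directly to the pca column of the original result DataFrame without going through a typed Dataset. A minimal sketch, assuming result has the id and pca columns shown in the question (the vecElem helper is hypothetical):
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf
import spark.implicits._
// Hypothetical helper: a UDF returning the i-th component of an ml Vector column
def vecElem(i: Int) = udf((v: Vector) => v(i))
val xy = result.select(
  $"id",
  vecElem(0)($"pca").as("X"),
  vecElem(1)($"pca").as("Y")
)
xy.show()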
Related
I'm trying to use Spark to learn multiclass logistic regression on a windowed text file. What I'm doing is first creating windows and exploding them into $"word_winds", then moving the center word of each window into $"word". To fit the LogisticRegression model, I convert each distinct word into a class ($"label") so that it can be learned. I count the different labels to prune those with fewer than minF samples.
The problem is that some parts of the code are very slow, even for small input files (you can use any README file to test the code). Googling, I found that some users have experienced slowness when using explode. They suggest some modifications to the code to get a 2x speed-up. However, I think that with a 100MB input file this wouldn't be sufficient. Please suggest something different, probably avoiding actions that slow down the code. I'm using Spark 2.4.0 and sbt 1.2.8 on a 24-core machine.
import org.apache.spark.sql.functions._
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, IDF}
import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel
import org.apache.spark.sql.types._
object SimpleApp {
def main(args: Array[String]) {
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
spark.sparkContext.setCheckpointDir("checked_dfs")
val in_file = "sample.txt"
val stratified = true
val wsize = 7
val ngram = 3
val minF = 2
val windUdf = udf{s: String => s.sliding(ngram).toList.sliding(wsize).toList}
val get_mid = udf{s: Seq[String] => s(s.size/2)}
val rm_punct = udf{s: String => s.replaceAll("""([\p{Punct}|¿|\?|¡|!]|\p{C}|\b\p{IsLetter}{1,2}\b)\s*""", "")}
// Read and remove punctuation
var df = spark.read.text(in_file)
.withColumn("value", rm_punct($"value"))
// Creating windows and explode them, and get the center word into $"word"
df = df.withColumn("char_nGrams", windUdf('value))
.withColumn("word_winds", explode($"char_nGrams"))
.withColumn("word", get_mid('word_winds))
val indexer = new StringIndexer().setInputCol("word")
.setOutputCol("label")
df = indexer.fit(df).transform(df)
val hashingTF = new HashingTF().setInputCol("word_winds")
.setOutputCol("freqFeatures")
df = hashingTF.transform(df)
val idf = new IDF().setInputCol("freqFeatures")
.setOutputCol("features")
df = idf.fit(df).transform(df)
// Remove word whose freq is less than minF
var counts = df.groupBy("label").count
.filter(col("count") > minF)
.orderBy(desc("count"))
.withColumn("id", monotonically_increasing_id())
var filtro = df.groupBy("label").count.filter(col("count") <= minF)
df = df.join(filtro, Seq("label"), "leftanti")
var dfs = if(stratified){
// Create stratified sample 'dfs'
var revs = counts.orderBy(asc("count")).select("count")
.withColumn("id", monotonically_increasing_id())
revs = revs.withColumnRenamed("count", "ascc")
// Weight the labels with normalized weights ("ascc") that are (linearly) inversely proportional to word frequency
counts = counts.join(revs, Seq("id"), "inner").withColumn("weight", col("ascc")/df.count)
val minn = counts.select("weight").agg(min("weight")).first.getDouble(0) - 0.01
val maxx = counts.select("weight").agg(max("weight")).first.getDouble(0) - 0.01
counts = counts.withColumn("weight_n", (col("weight") - minn) / (maxx - minn))
counts = counts.withColumn("weight_n", when(col("weight_n") > 1.0, 1.0)
.otherwise(col("weight_n")))
var fractions = counts.select("label", "weight_n").rdd.map(x => (x(0), x(1)
.asInstanceOf[scala.Double])).collectAsMap.toMap
df.stat.sampleBy("label", fractions, 36L).select("features", "word_winds", "word", "label")
}else{ df }
dfs = dfs.checkpoint()
val lr = new LogisticRegression().setRegParam(0.01)
val Array(tr, ts) = dfs.randomSplit(Array(0.7, 0.3), seed = 12345)
val training = tr.select("word_winds", "features", "label", "word")
val test = ts.select("word_winds", "features", "label", "word")
val model = lr.fit(training)
def mapCode(m: scala.collection.Map[Any, String]) = udf( (s: Double) =>
m.getOrElse(s, "")
)
var labels = training.select("label", "word").distinct.rdd
.map(x => (x(0), x(1).asInstanceOf[String]))
.collectAsMap
var predictions = model.transform(test)
predictions = predictions.withColumn("pred_word", mapCode(labels)($"prediction"))
predictions.write.format("csv").save("spark_predictions")
spark.stop()
}
}
Since your data is somewhat small, it might help to use coalesce before explode. Sometimes it can be inefficient to have too many partitions, especially if there is a lot of shuffling in your code.
Like you said, it does seem like a lot of people have issues with explode. I looked at the link you provided but no one mentioned trying flatMap instead of explode.
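I haven't benchmarked it, but here is a rough sketch of what the flatMap variant could look like, reusing the ngram, wsize and value names from your code; it replaces the windUdf + explode pair and leaves the rest of the pipeline unchanged (the coalesce(8) partition count is arbitrary):
import spark.implicits._
// Sketch: build the character n-gram windows with a typed flatMap instead of a UDF + explode.
// Coalescing first can help on a small input by reducing the number of partitions.
val winds = df.coalesce(8)
  .select($"value").as[String]
  .flatMap { line =>
    line.sliding(ngram).toList.sliding(wsize).map(w => (line, w)).toList
  }
  .toDF("value", "word_winds")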
I am trying to apply PCA on a dataset that contains a header and several fields.
Here is the code I used. Any help to select the specific columns on which to apply PCA would be appreciated.
val inputMatrix = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv").map { line =>
val values = line.split(",").map(_.toDouble)
Vectors.dense(values)
}
val mat: RowMatrix = new RowMatrix(inputMatrix)
val pc: Matrix = mat.computePrincipalComponents(4)
// Project the rows to the linear space spanned by the top 4 principal components.
val projected: RowMatrix = mat.multiply(pc)
// Updated version
I tried to do this:
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
val dataframe = spark.read.format("com.databricks.spark.csv")
val columnsToUse: Seq[String] = Array("Col0","Col1", "Col2", "Col3", "Col4").toSeq
val k: Int = 2
val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("C:/Users/mhattabi/Desktop/donnee/cassandraTest_1.csv")
val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(k)
val featurized = rf.fit(df).transform(df)
//prinpal component
val principalComponent = pca.fit(featurized).transform(featurized)
principalComponent.select("pcaFeatures").show(4,false)
+-----------------------------------------+
|pcaFeatures |
+-----------------------------------------+
|[-0.536798281241379,0.495499034754084] |
|[-0.32969328815797916,0.5672811417154808]|
|[-1.32283465170085,0.5982789033642704] |
|[-0.6199718696225502,0.3173072633712586] |
+-----------------------------------------+
This is what I got for the principal components. The question is that I want to save this in a CSV file and add a header. Any help would be appreciated, thanks a lot.
You can use RFormula in this case:
import org.apache.spark.ml.feature.{RFormula, PCA}
val columnsToUse: Seq[String] = ???
val k: Int = ???
val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("/tmp/foo.csv")
val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(k)
val featurized = rf.fit(df).transform(df)
val projected = pca.fit(featurized).transform(featurized)
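To address the second part of the question (saving the principal components to a CSV file with a header), the vector column has to be split into plain Double columns first, since the CSV writer cannot serialize a Vector type. A minimal sketch, assuming the PCA stage uses setOutputCol("pcaFeatures") as above (the pcaElem helper and the output path are hypothetical):
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf
import spark.implicits._
// Hypothetical helper: pull the i-th principal component out of the vector column
def pcaElem(i: Int) = udf((v: Vector) => v(i))
val flat = projected.select(
  (0 until k).map(i => pcaElem(i)($"pcaFeatures").as(s"pc${i + 1}")): _*
)
// Write as CSV with a header row
flat.write.option("header", "true").csv("/tmp/pca_output")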
java.lang.NumberFormatException: For input string: "DateTime"
This means that in your input file there is a value "DateTime" that you then try to convert to Double. It is probably in the header of your input file.
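If you want to keep the sc.textFile / RowMatrix approach from the first snippet, one simple workaround is to drop the header line before parsing; a minimal sketch of that idea:
import org.apache.spark.mllib.linalg.Vectors
// Sketch: read the raw file, remove the header line, then parse the numeric fields
val raw = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv")
val header = raw.first()
val inputMatrix = raw.filter(_ != header).map { line =>
  Vectors.dense(line.split(",").map(_.toDouble))
}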
I've been using org.apache.spark.ml.Pipeline for machine learning tasks. It is particularly important to know the actual probabilities instead of just a predicted label, and I am having difficulties getting them. Here I am doing a binary classification task with a random forest. The class labels are "Yes" and "No". I would like to output the probability for label "Yes". The probabilities are stored in a DenseVector as the pipeline output, such as [0.69, 0.31], but I don't know which one corresponds to "Yes" (0.69 or 0.31?). I guess there should be some way to retrieve it from labelIndexer?
Here is my code for training the model:
val sc = new SparkContext(new SparkConf().setAppName(" ML").setMaster("local"))
val data = .... // load data from file
val df = sqlContext.createDataFrame(data).toDF("label", "features")
val labelIndexer = new StringIndexer()
.setInputCol("label")
.setOutputCol("indexedLabel")
.fit(df)
val featureIndexer = new VectorIndexer()
.setInputCol("features")
.setOutputCol("indexedFeatures")
.setMaxCategories(2)
.fit(df)
// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predictedLabel")
.setLabels(labelIndexer.labels)
val Array(trainingData, testData) = df.randomSplit(Array(0.7, 0.3))
// Train a RandomForest model.
val rf = new RandomForestClassifier()
.setLabelCol("indexedLabel")
.setFeaturesCol("indexedFeatures")
.setNumTrees(10)
.setFeatureSubsetStrategy("auto")
.setImpurity("gini")
.setMaxDepth(4)
.setMaxBins(32)
// Create pipeline
val pipeline = new Pipeline()
.setStages(Array(labelIndexer, featureIndexer, rf,labelConverter))
// Train model
val model = pipeline.fit(trainingData)
// Save model
sc.parallelize(Seq(model), 1).saveAsObjectFile("/my/path/pipeline")
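Note: depending on the Spark version and the stages involved, PipelineModel also supports the built-in ML persistence API, which avoids the objectFile round-trip; a minimal sketch:
// Sketch: built-in ML persistence (availability depends on Spark version and pipeline stages)
model.write.overwrite().save("/my/path/pipeline")
// ...and later:
// val sameModel = org.apache.spark.ml.PipelineModel.load("/my/path/pipeline")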
Then I load the pipeline and make predictions on new data; here is the code:
// Ignoring loading data part
// Create DF
val testdf = sqlContext.createDataFrame(testData).toDF("features", "line")
// Load pipeline
val model = sc.objectFile[org.apache.spark.ml.PipelineModel]("/my/path/pipeline").first
// My question comes here: how do I extract the probability corresponding to class label "1"?
// This is my attempt. I would like to output the probability for label "Yes" and the predicted label. The probabilities are stored in a DenseVector, but I don't know which one corresponds to "Yes". Something like this:
val predictions = model.transform(testdf).select("probability").map(e=> e.asInstanceOf[DenseVector])
References regarding the probabilities and labels for RF:
http://spark.apache.org/docs/latest/ml-classification-regression.html#random-forests
Do you mean that you want to extract the probability of the positive label from the DenseVector? If so, you can create a UDF to extract that probability.
In the DenseVector for binary classification, the first element represents the probability of "0" and the second element the probability of "1".
val prediction = pipelineModel.transform(result)
val pre = prediction.select(getOne($"probability")).withColumnRenamed("UDF(probability)","probability")
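The getOne UDF used above isn't shown; presumably it is something like the following (hypothetical; note that the vector type is org.apache.spark.ml.linalg.Vector in Spark 2.x but org.apache.spark.mllib.linalg.Vector in 1.x):
import org.apache.spark.sql.functions.udf
// Hypothetical getOne: element 1 of the probability vector, i.e. the probability
// of whichever label the StringIndexer mapped to index 1
val getOne = udf((v: org.apache.spark.ml.linalg.Vector) => v(1))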
You're on the right track with retrieving it from the label indexer.
See comments in the code for more information.
This example works with Scala 2.11.8 and Spark 2.2.1.
import org.apache.spark.sql.functions.{col, udf}
import org.apache.spark.SparkConf
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{IndexToString, StringIndexer}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{Column, SparkSession}
object Example {
case class Record(features: org.apache.spark.ml.linalg.Vector)
def main(args: Array[String]): Unit = {
val spark: SparkSession = SparkSession
.builder
.appName("Example")
.config(new SparkConf().setMaster("local[2]"))
.getOrCreate
val sc = spark.sparkContext
import spark.implicits._
val data = sc.parallelize(
Array(
(Vectors.dense(0.9, 0.6), "n"),
(Vectors.dense(0.1, 0.1), "y"),
(Vectors.dense(0.2, 0.15), "y"),
(Vectors.dense(0.8, 0.9), "n"),
(Vectors.dense(0.3, 0.4), "y"),
(Vectors.dense(0.5, 0.5), "n"),
(Vectors.dense(0.6, 0.7), "n"),
(Vectors.dense(0.3, 0.3), "y"),
(Vectors.dense(0.3, 0.3), "y"),
(Vectors.dense(-0.5, -0.1), "dunno"),
(Vectors.dense(-0.9, -0.6), "dunno")
)).toDF("features", "label")
// NOTE: you're fitting StringIndexer to all your data.
// The StringIndexer orders the labels by label frequency.
// In this example there are 5 "y" labels, 4 "n" labels
// and 2 "dunno" labels, so the probability columns will be
// listed in the following order: "y", "n", "dunno".
// You can play with label frequencies to convince yourself
// that it sorts labels by frequency in provided data.
val labelIndexer = new StringIndexer()
.setInputCol("label")
.setOutputCol("label_indexed")
.fit(data)
val indexToLabel = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predicted_label")
.setLabels(labelIndexer.labels)
// Here I use logistic regression, but the exact algorithm doesn't
// matter in this case.
val lr = new LogisticRegression()
.setFeaturesCol("features")
.setLabelCol("label_indexed")
.setPredictionCol("prediction")
val pipeline = new Pipeline().setStages(Array(
labelIndexer,
lr,
indexToLabel
))
val model = pipeline.fit(data)
// Prepare test set
val toPredictDf = sc.parallelize(Array(
Record(Vectors.dense(0.1, 0.5)),
Record(Vectors.dense(0.8, 0.8)),
Record(Vectors.dense(-0.2, -0.5))
)).toDF("features")
// Make predictions
val results = model.transform(toPredictDf)
// The column containing probabilities has to be converted from Vector to Array
val vecToArray = udf( (xs: org.apache.spark.ml.linalg.Vector) => xs.toArray )
val dfArr = results.withColumn("probabilityArr" , vecToArray($"probability") )
// labelIndexer.labels contains the list of your labels.
// It is zipped with index to match the label name with
// related probability found in probabilities array.
// In other words:
// label labelIndexer.labels.apply(idx)
// matches:
// col("probabilityArr").getItem(idx)
// See also: https://stackoverflow.com/a/49917851
val probColumns = labelIndexer.labels.zipWithIndex.map {
case (alias, idx) => (alias, col("probabilityArr").getItem(idx).as(alias))
}
// 'probColumns' is of type Array[(String, Column)] so now
// concatenate these Column objects to DataFrame containing predictions
// See also: https://stackoverflow.com/a/43494322
val columnsAdded = probColumns.foldLeft(dfArr) { case (d, (colName, colContents)) =>
if (d.columns.contains(colName)) {
d
} else {
d.withColumn(colName, colContents)
}
}
columnsAdded.show()
}
}
Once you run this code, it will produce the following data frame:
+-----------+---------------+--------------------+--------------------+--------------------+
| features|predicted_label| y| n| dunno|
+-----------+---------------+--------------------+--------------------+--------------------+
| [0.1,0.5]| y| 0.9999999999994298|5.702468131669394...|9.56953780171369E-19|
| [0.8,0.8]| n|5.850695258713685...| 1.0|4.13416875406573E-81|
|[-0.2,-0.5]| dunno|1.207908506571593...|8.157018363627128...| 0.9998792091493428|
+-----------+---------------+--------------------+--------------------+--------------------+
Columns y, n and dunno are the columns that we have just added to the ordinary output of Spark's ML pipeline.
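Tying this back to the original question: if you only care about one class (for example "Yes", or "y" in this toy data), you can look up its slot once and select just that element instead of adding a column per label. A small sketch using the labelIndexer and results variables from the example above (the probOfY name and prob_y column are illustrative):
import org.apache.spark.sql.functions.udf
// Find which index the label of interest occupies in the fitted StringIndexer
val yIdx = labelIndexer.labels.indexOf("y")
// Hypothetical UDF pulling that single probability out of the probability vector
val probOfY = udf((v: org.apache.spark.ml.linalg.Vector) => v(yIdx))
results.withColumn("prob_y", probOfY($"probability"))
  .select("features", "predicted_label", "prob_y")
  .show()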
Please keep in mind I'm new to Scala.
This is the example I am trying to follow:
https://spark.apache.org/docs/1.5.1/ml-ann.html
It uses this dataset:
https://github.com/apache/spark/blob/master/data/mllib/sample_multiclass_classification_data.txt
I have prepared my .csv using the code below to get a data frame for classification in Scala.
//imports for ML
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.sql.Row
//imports for transformation
import sqlContext.implicits._
import com.databricks.spark.csv._
import org.apache.spark.mllib.linalg.{Vector, Vectors}
//load data
val data2 = sqlContext.csvFile("/Users/administrator/Downloads/ds_15k_10-2.csv")
//Rename any one column to features
//val df2 = data.withColumnRenamed("ip_crowding", "features")
val DF2 = data2.select("gst_id_matched","ip_crowding","lat_long_dist");
scala> DF2.take(2)
res6: Array[org.apache.spark.sql.Row] = Array([0,0,0], [0,0,1628859.542])
//define doublelfunc
val toDouble = udf[Double, String]( _.toDouble)
//Convert all to double
val featureDf = DF2
.withColumn("gst_id_matched",toDouble(DF2("gst_id_matched")))
.withColumn("ip_crowding",toDouble(DF2("ip_crowding")))
.withColumn("lat_long_dist",toDouble(DF2("lat_long_dist")))
.select("gst_id_matched","ip_crowding","lat_long_dist")
//Define the format
val toVec4 = udf[Vector, Double,Double] { (v1,v2) => Vectors.dense(v1,v2) }
//Format for features which is gst_id_matched
val encodeLabel = udf[Double, String]( _ match
{ case "0.0" => 0.0 case "1.0" => 1.0} )
//Transformed dataset
val df = featureDf
.withColumn("features",toVec4(featureDf("ip_crowding"),featureDf("lat_long_dist")))
.withColumn("label",encodeLabel(featureDf("gst_id_matched")))
.select("label", "features")
val splits = df.randomSplit(Array(0.6, 0.4), seed = 1234L)
val train = splits(0)
val test = splits(1)
// specify layers for the neural network:
// input layer of size 4 (features), two intermediate of size 5 and 4 and output of size 3 (classes)
val layers = Array[Int](0, 0, 0, 0)
// create the trainer and set its parameter
val trainer = new MultilayerPerceptronClassifier().setLayers(layers).setBlockSize(12).setSeed(1234L).setMaxIter(10)
// train the model
val model = trainer.fit(train)
The last line generates this error
15/11/21 22:46:23 ERROR Executor: Exception in task 1.0 in stage 11.0 (TID 15)
java.lang.ArrayIndexOutOfBoundsException: 0
My suspicions:
When I examine the dataset, it looks fine for classification:
scala> df.take(2)
res3: Array[org.apache.spark.sql.Row] = Array([0.0,[0.0,0.0]], [0.0,[0.0,1628859.542]])
But the Apache example dataset is different, and my transformation does not give me what I need. Can someone please help me with the dataset transformation or with understanding the root cause of the problem?
This is what the Apache dataset looks like:
scala> data.take(1)
res8: Array[org.apache.spark.sql.Row] = Array([1.0,(4,[0,1,2,3],[-0.222222,0.5,-0.762712,-0.833333])])
The source of your problems is the wrong definition of layers. When you use
val layers = Array[Int](0, 0, 0, 0)
it means you want a network with zero nodes in each layer, which simply doesn't make sense. Generally speaking, the number of neurons in the input layer should be equal to the number of features, and each hidden layer should contain at least one neuron.
Let's recreate your data, simplifying your code along the way:
import org.apache.spark.sql.functions.col
val df = sc.parallelize(Seq(
("0", "0", "0"), ("0", "0", "1628859.542")
)).toDF("gst_id_matched", "ip_crowding", "lat_long_dist")
Convert all columns to doubles:
val numeric = df
.select(df.columns.map(c => col(c).cast("double").alias(c)): _*)
.withColumnRenamed("gst_id_matched", "label")
Assemble features:
import org.apache.spark.ml.feature.VectorAssembler
val assembler = new VectorAssembler()
.setInputCols(Array("ip_crowding","lat_long_dist"))
.setOutputCol("features")
val data = assembler.transform(numeric)
data.show
// +-----+-----------+-------------+-----------------+
// |label|ip_crowding|lat_long_dist| features|
// +-----+-----------+-------------+-----------------+
// | 0.0| 0.0| 0.0| (2,[],[])|
// | 0.0| 0.0| 1628859.542|[0.0,1628859.542]|
// +-----+-----------+-------------+-----------------+
Train and test network:
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
val layers = Array[Int](2, 3, 5, 3) // Note 2 neurons in the input layer
val trainer = new MultilayerPerceptronClassifier()
.setLayers(layers)
.setBlockSize(128)
.setSeed(1234L)
.setMaxIter(100)
val model = trainer.fit(data)
model.transform(data).show
// +-----+-----------+-------------+-----------------+----------+
// |label|ip_crowding|lat_long_dist| features|prediction|
// +-----+-----------+-------------+-----------------+----------+
// | 0.0| 0.0| 0.0| (2,[],[])| 0.0|
// | 0.0| 0.0| 1628859.542|[0.0,1628859.542]| 0.0|
// +-----+-----------+-------------+-----------------+----------+
Do you know where I can find examples of multiclass classification in Spark? I spent a lot of time searching in books and on the web, and so far I just know that it is possible since the latest version, according to the documentation.
ML
(Recommended in Spark 2.0+)
We'll use the same data as in the MLlib section below. There are two basic options. If the Estimator supports multiclass classification out of the box (for example random forest), you can use it directly:
val trainRawDf = trainRaw.toDF
import org.apache.spark.ml.feature.{Tokenizer, CountVectorizer, StringIndexer}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
val transformers = Array(
new StringIndexer().setInputCol("group").setOutputCol("label"),
new Tokenizer().setInputCol("text").setOutputCol("tokens"),
new CountVectorizer().setInputCol("tokens").setOutputCol("features")
)
val rf = new RandomForestClassifier()
.setLabelCol("label")
.setFeaturesCol("features")
val model = new Pipeline().setStages(transformers :+ rf).fit(trainRawDf)
model.transform(trainRawDf)
If a model supports only binary classification (logistic regression) and extends o.a.s.ml.classification.Classifier, you can use the one-vs-rest strategy:
import org.apache.spark.ml.classification.OneVsRest
import org.apache.spark.ml.classification.LogisticRegression
val lr = new LogisticRegression()
.setLabelCol("label")
.setFeaturesCol("features")
val ovr = new OneVsRest().setClassifier(lr)
val ovrModel = new Pipeline().setStages(transformers :+ ovr).fit(trainRawDf)
MLlib
According to the official documentation at this moment (MLlib 1.6.0), the following methods support multiclass classification:
logistic regression,
decision trees,
random forests,
naive Bayes
At least some of the examples use multiclass classification:
Naive Bayes example - 3 classes
Logistic regression - 10 classes for classifier although only 2 in the example data
The general framework, ignoring method-specific arguments, is pretty much the same as for all the other methods in MLlib. You have to pre-process your input to create either a data frame with columns representing the label and features:
root
|-- label: double (nullable = true)
|-- features: vector (nullable = true)
or RDD[LabeledPoint].
Spark provides a broad range of useful tools designed to facilitate this process, including Feature Extractors, Feature Transformers, and Pipelines.
You'll find a rather naive example of using Random Forest below.
First let's import the required packages and create dummy data:
import sqlContext.implicits._
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.ml.feature.StringIndexer
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.linalg.{Vectors, Vector}
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.sql.Row
import org.apache.spark.rdd.RDD
case class LabeledRecord(group: String, text: String)
val trainRaw = sc.parallelize(
LabeledRecord("foo", "foo v a y b foo") ::
LabeledRecord("bar", "x bar y bar v") ::
LabeledRecord("bar", "x a y bar z") ::
LabeledRecord("foobar", "foo v b bar z") ::
LabeledRecord("foo", "foo x") ::
LabeledRecord("foobar", "z y x foo a b bar v") ::
Nil
)
Now let's define the required transformers and process the training Dataset:
// Tokenizer to process text fields
val tokenizer = new Tokenizer()
.setInputCol("text")
.setOutputCol("words")
// HashingTF to convert tokens to the feature vector
val hashingTF = new HashingTF()
.setInputCol("words")
.setOutputCol("features")
.setNumFeatures(10)
// Indexer to convert String labels to Double
val indexer = new StringIndexer()
.setInputCol("group")
.setOutputCol("label")
.fit(trainRaw.toDF)
def transform(rdd: RDD[LabeledRecord]) = {
val tokenized = tokenizer.transform(rdd.toDF)
val hashed = hashingTF.transform(tokenized)
val indexed = indexer.transform(hashed)
indexed
.select($"label", $"features")
.map{case Row(label: Double, features: Vector) =>
LabeledPoint(label, features)}
}
val train: RDD[LabeledPoint] = transform(trainRaw)
Please note that the indexer is "fitted" on the training data. This simply means that the categorical values used as labels are converted to doubles. To use the classifier on new data you have to transform that data first using this indexer.
Next we can train the RF model:
val numClasses = 3
val categoricalFeaturesInfo = Map[Int, Int]()
val numTrees = 10
val featureSubsetStrategy = "auto"
val impurity = "gini"
val maxDepth = 4
val maxBins = 16
val model = RandomForest.trainClassifier(
train, numClasses, categoricalFeaturesInfo,
numTrees, featureSubsetStrategy, impurity,
maxDepth, maxBins
)
and finally test it:
val testRaw = sc.parallelize(
LabeledRecord("foo", "foo foo z z z") ::
LabeledRecord("bar", "z bar y y v") ::
LabeledRecord("bar", "a a bar a z") ::
LabeledRecord("foobar", "foo v b bar z") ::
LabeledRecord("foobar", "a foo a bar") ::
Nil
)
val test: RDD[LabeledPoint] = transform(testRaw)
val predsAndLabs = test.map(lp => (model.predict(lp.features), lp.label))
val metrics = new MulticlassMetrics(predsAndLabs)
metrics.precision
metrics.recall
Are you using Spark 1.6 rather than Spark 2.1?
I think the problem is that in Spark 2.1 the transform method returns a Dataset, which can be implicitly converted to a typed RDD, whereas prior to that it returned a DataFrame of Rows.
As a diagnostic, try specifying the return type of the transform function as RDD[LabeledPoint] and see if you get the same error, as in the sketch below.
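In other words, something like this (a sketch against the transform helper from the answer above, reusing its tokenizer, hashingTF and indexer; the explicit .rdd call is the part that matters on Spark 2.x, where select returns a Dataset rather than something you can map straight into an RDD):
import org.apache.spark.rdd.RDD
// Sketch of the suggested diagnostic/fix: declare the return type explicitly and,
// on Spark 2.x, drop down to an RDD with .rdd before mapping to LabeledPoint
def transform(rdd: RDD[LabeledRecord]): RDD[LabeledPoint] = {
  val tokenized = tokenizer.transform(rdd.toDF)
  val hashed = hashingTF.transform(tokenized)
  val indexed = indexer.transform(hashed)
  indexed
    .select($"label", $"features")
    .rdd
    .map { case Row(label: Double, features: Vector) =>
      LabeledPoint(label, features) }
}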