How to prepare data to apply RandomForest? - scala

I have a CSV file containing userId, MovieId, and Rating. I want to convert this file into one containing label and features,
like in How to prepare data into a LibSVM format from DataFrame?
I need to separate the rating column out and use it as the label for a LabeledPoint. To apply the random forest algorithm I need a label column in the file, but it doesn't exist.
val pca = new PCA()
.setInputCol("features")
.setOutputCol("pcaFeatures")
.setK(3)
.fit(assembled_df)
val pcaTrainingData = pca.transform(assembled_df).select("id","features","pcaFeatures")
val labeled = pca.transform(assembled_df).rdd.map(row => LabeledPoint(
row.getAs[Double]("label"),
row.getAs[org.apache.spark.mllib.linalg.Vector]("pcaFeatures")
))
val numClasses = 10
val categoricalFeaturesInfo = Map[Int, Int]()
val numTrees = 10 // Use more in practice.
val featureSubsetStrategy = "auto" // Let the algorithm choose.
val impurity = "gini"
val maxDepth = 20
val maxBins = 32
val model = RandomForest.trainClassifier(labeled, numClasses, categoricalFeaturesInfo,
numTrees, featureSubsetStrategy, impurity, maxDepth, maxBins)
How can I create the label column?
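One possible way to build it (a sketch, assuming Spark 2.x with a SparkSession named spark, the spark.ml API, and that the CSV columns are literally named userId, MovieId and Rating; the file path is hypothetical) is to cast the rating into a label column and assemble the remaining numeric columns into a features vector before running PCA:
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.col
// Read the ratings file (hypothetical path)
val raw = spark.read.option("header", "true").option("inferSchema", "true").csv("ratings.csv")
// The rating becomes the label column that LabeledPoint later expects
val withLabel = raw.withColumn("label", col("Rating").cast("double"))
// The remaining numeric columns become the features vector
val assembler = new VectorAssembler()
.setInputCols(Array("userId", "MovieId"))
.setOutputCol("features")
val assembled_df = assembler.transform(withLabel) // now contains both "features" and "label"
With a DataFrame shaped like this, the PCA and LabeledPoint code above can find the "label" column it needs.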

Related

Predict and accuracy using neural network with Scala spark

I am a new user of Spark with Scala. Here is my code, but I cannot figure out how to calculate the prediction and the accuracy.
Do I have to transform the CSV file into LibSVM format, or can I just load the CSV file?
object Test2 {
def main(args: Array[String]): Unit = {
val spark = SparkSession
.builder
.appName("WineQualityDecisionTreeRegressorPMML")
.master("local")
.getOrCreate()
// Load and parse the data file.
val df = spark.read
.format("csv")
.option("header", "true")
.option("mode", "DROPMALFORMED")
.option("delimiter", ",")
.load("file:///c:/tmp/spark-warehouse/winequality_red_names.csv")
val inputFields = List("fixed acidity", "volatile acidity", "citric acid", "residual sugar", "chlorides",
"free sulfur dioxide", "total sulfur dioxide", "density", "pH", "sulphates", "alcohol")
val toDouble = udf[Double, String]( _.toDouble)
val dff = df.
withColumn("fixed acidity", toDouble(df("fixed acidity"))). // 0 +
withColumn("volatile acidity", toDouble(df("volatile acidity"))). // 1 +
withColumn("citric acid", toDouble(df("citric acid"))). // 2 -
withColumn("residual sugar", toDouble(df("residual sugar"))). // 3 +
withColumn("chlorides", toDouble(df("chlorides"))). // 4 -
withColumn("free sulfur dioxide", toDouble(df("free sulfur dioxide"))). // 5 +
withColumn("total sulfur dioxide", toDouble(df("total sulfur dioxide"))). // 6 +
withColumn("density", toDouble(df("density"))). // 7 -
withColumn("pH", toDouble(df("pH"))). // 8 +
withColumn("sulphates", toDouble(df("sulphates"))). // 9 +
withColumn("alcohol", toDouble(df("alcohol"))) // 10 +
val assembler = new VectorAssembler().
setInputCols(inputFields.toArray).
setOutputCol("features")
// Fit on whole dataset to include all labels in index.
val labelIndexer = new StringIndexer()
.setInputCol("quality")
.setOutputCol("indexedLabel")
.fit(dff)
// specify layers for the neural network:
// input layer of size 11 (features), two intermediate of size 10 and 20
// and output of size 6 (classes)
val layers = Array[Int](11, 10, 20, 6)
// Train a MultilayerPerceptron model.
val dt = new MultilayerPerceptronClassifier()
.setLayers(layers)
.setBlockSize(128)
.setSeed(1234L)
.setMaxIter(100)
.setLabelCol("indexedLabel")
.setFeaturesCol("features")
// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predictedLabel")
.setLabels(labelIndexer.labels)
// create pipeline
val pipeline = new Pipeline()
.setStages(Array(assembler, labelIndexer, dt, labelConverter))
// Train model
val model = pipeline.fit(dff)
}
}
Any ideas, please?
I can't find any example of a neural network with a CSV file using a pipeline.
When you have your model trained (val model = pipeline.fit(dff)), you need to predict the label for every test sample using the model.transform method. For each prediction you then check whether it matches the true label; the accuracy is the ratio of correctly classified samples to the size of the test set.
If you want to use the same DataFrame that was used for training, then simply call val predictions = model.transform(dff), iterate over the predictions and check whether they match the corresponding labels. However, I do not recommend reusing the training DataFrame; it is better to split it into training and testing subsets.
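As a minimal sketch of that (assuming the column names produced by the pipeline above, i.e. indexedLabel and prediction), the accuracy can also be computed with MulticlassClassificationEvaluator instead of iterating by hand:
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
// Hold out part of the data before fitting: train on `train`, evaluate on `test`
val Array(train, test) = dff.randomSplit(Array(0.8, 0.2), seed = 1234L)
val fitted = pipeline.fit(train)
// Score the held-out data with the fitted pipeline
val predictions = fitted.transform(test)
// Compare the predicted class index against the indexed label
val evaluator = new MulticlassClassificationEvaluator()
.setLabelCol("indexedLabel")
.setPredictionCol("prediction")
.setMetricName("accuracy")
val accuracy = evaluator.evaluate(predictions)
println(s"Accuracy = $accuracy")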

Spark ML Convert Prediction label to string without training DataFrame

I am using the NaiveBayes multinomial classifier in Apache Spark ML (version 2.1.0) to predict some text categories.
The problem is: how do I convert the prediction labels (0.0, 1.0, 2.0) to strings without the trained DataFrame?
I know IndexToString can be used, but it is only helpful when training and prediction happen in the same job. In my case they are independent jobs.
The code looks like this:
1) TrainingModel.scala: trains the model and saves it to a file.
2) CategoryPrediction.scala: loads the trained model from the file and runs prediction on the test data.
Please suggest a solution.
TrainingModel.scala
val trainData: Dataset[LabeledRecord] = spark.read.option("inferSchema", "false")
.schema(schema).csv("trainingdata1.csv").as[LabeledRecord]
val labelIndexer = new StringIndexer().setInputCol("category").setOutputCol("label").fit(trainData).setHandleInvalid("skip")
val tokenizer = new RegexTokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF()
.setInputCol("words")
.setOutputCol("features")
.setNumFeatures(1000)
val rf = new NaiveBayes().setLabelCol("label").setFeaturesCol("features").setModelType("multinomial")
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, labelIndexer, rf))
val model = pipeline.fit(trainData)
model.write.overwrite().save("naivebayesmodel");
CategoryPrediction.scala
val testData: Dataset[PredictLabeledRecord] = spark.read.option("inferSchema", "false")
.schema(predictSchema).csv("testingdata.csv").as[PredictLabeledRecord]
val model = PipelineModel.load("naivebayesmodel")
val predictions = model.transform(testData)
// val labelConverter = new IndexToString()
// .setInputCol("prediction")
// .setOutputCol("predictedLabelString")
// .setLabels(trainDataFrameIndexer.labels)
predictions.select("prediction", "text").show(false)
trainingdata1.csv
category,text
Drama,"a b c d e spark"
Action,"b d"
Horror,"spark f g h"
Thriller,"hadoop mapreduce"
testingdata.csv
text
"a b c d e spark"
"spark f g h"
Add a converter that will translate the prediction categories back to your labels in your pipeline, something like this:
val categoryConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("category")
.setLabels(labelIndexer.labels)
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, labelIndexer, rf, categoryConverter))
This will take the prediction and convert it back to a label using your labelIndexer.
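As a usage sketch (assuming the model is retrained and saved with this extended pipeline), the separate prediction job can then read the human-readable category straight from the transformed output, with no training DataFrame needed:
val model = PipelineModel.load("naivebayesmodel")
val predictions = model.transform(testData)
// The IndexToString stage is part of the saved PipelineModel, so the label mapping travels with it
predictions.select("text", "category").show(false)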

Apply PCA on specific columns with Apache Spark

I am trying to apply PCA to a dataset that contains a header and several fields.
Here is the code I used; any help with selecting the specific columns on which to apply PCA would be appreciated.
val inputMatrix = sc.textFile("C:/Users/mhattabi/Desktop/Realase of 01_06_2017/TopDrive_WithoutConstant.csv").map { line =>
val values = line.split(",").map(_.toDouble)
Vectors.dense(values)
}
val mat: RowMatrix = new RowMatrix(inputMatrix)
val pc: Matrix = mat.computePrincipalComponents(4)
// Project the rows to the linear space spanned by the top 4 principal components.
val projected: RowMatrix = mat.multiply(pc)
// Updated version
I tried the following:
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
val dataframe = spark.read.format("com.databricks.spark.csv")
val columnsToUse: Seq[String] = Array("Col0","Col1", "Col2", "Col3", "Col4").toSeq
val k: Int = 2
val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("C:/Users/mhattabi/Desktop/donnee/cassandraTest_1.csv")
val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setOutputCol("pcaFeatures").setK(k)
val featurized = rf.fit(df).transform(df)
//prinpal component
val principalComponent = pca.fit(featurized).transform(featurized)
principalComponent.select("pcaFeatures").show(4,false)
+-----------------------------------------+
|pcaFeatures |
+-----------------------------------------+
|[-0.536798281241379,0.495499034754084] |
|[-0.32969328815797916,0.5672811417154808]|
|[-1.32283465170085,0.5982789033642704] |
|[-0.6199718696225502,0.3173072633712586] |
+-----------------------------------------+
This is what I got for the principal components. Now I want to save the result to a CSV file and add a header.
Any help would be appreciated.
Thanks a lot.
You can use RFormula in this case:
import org.apache.spark.ml.feature.{RFormula, PCA}
val columnsToUse: Seq[String] = ???
val k: Int = ???
val df = spark.read.format("csv").options(Map("header" -> "true", "inferSchema" -> "true")).load("/tmp/foo.csv")
val rf = new RFormula().setFormula(s"~ ${columnsToUse.mkString(" + ")}")
val pca = new PCA().setInputCol("features").setK(k)
val featurized = rf.fit(df).transform(df)
val projected = pca.fit(featurized).transform(featurized)
If you get java.lang.NumberFormatException: For input string: "DateTime", it means that your input file contains the value DateTime, which you then try to convert to a Double.
It is probably somewhere in the header of your input file.
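As for the question's follow-up about saving pcaFeatures to a CSV file with a header, one possible approach (a sketch, assuming Spark 2.x and reusing principalComponent and k from the updated snippet in the question; the output path is hypothetical) is to flatten the vector into plain double columns first, because the CSV writer cannot serialize vector columns directly:
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}
// Convert the ML vector into an array so individual components can be selected as plain columns
val toArr = udf((v: Vector) => v.toArray)
val flattened = principalComponent
.withColumn("pcaArr", toArr(col("pcaFeatures")))
.select((0 until k).map(i => col("pcaArr").getItem(i).alias(s"pc_$i")): _*)
// header = true writes the pc_0, pc_1, ... column names as the first row
flattened.write.option("header", "true").csv("pca_output")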

Building a random forest in spark, explanation?

I have a data frame df with the following structure:
amount gender_num marital_num
10000 1 1
20000 1 2
1400 2 1
Let's say I am building an ML model to predict the column 'gender_num' in Spark using a random forest.
I am doing the following:
val df1 = df.withColumn("loan_amount", 'loan_amount.cast("Double")).withColumn("gender_num", 'gender_num.cast("String")).
withColumn("marital_num", 'marital_num.cast("String"))
val labeled = df1.map(row => LabeledPoint(df1.gender_num, Vectors.dense(df1.loan_amount, df1.marital_num)))
val numClasses = 7
val categoricalFeaturesInfo = Map[Int, Int]()
val numTrees = 3 // Use more in practice.
val featureSubsetStrategy = "auto" // Let the algorithm choose.
val impurity = "gini"
val maxDepth = 4
val maxBins = 32
val model = RandomForest.trainClassifier(labeled, numClasses, categoricalFeaturesInfo,
numTrees, featureSubsetStrategy, impurity, maxDepth, maxBins)
Error:
My code is failing at the second step:
138: error: value gender_num is not a member of org.apache.spark.sql.DataFrame
I would really appreciate it if someone could explain this to me; the documentation is very hard to follow. Newbie here!
That's because you are using the R-like DataFrame syntax.
You should access the row's data like this:
val labeled = df1.map { row => LabeledPoint(row.getAs[String]("gender_num").toDouble, Vectors.dense(row.getAs[Double]("loan_amount"), row.getAs[String]("marital_num").toDouble)) }
You can also create a case class and use the Dataset syntax:
case class ParsedData(amount: Double, gender_num: Int, marital_num: Int)
val labeled = df1.as[ParsedData].map(row => LabeledPoint(row.gender_num.toDouble, Vectors.dense(row.amount, row.marital_num.toDouble)))

Handling unseen categorical variables and MaxBins calculation in Spark Multiclass-classification

Below is the code I have for a RandomForest multiclass-classification model. I am reading from a CSV file and doing various transformations, as seen in the code.
I am calculating the maximum number of categories and then passing it as a parameter to RF. This takes a lot of time! Is there a parameter to set, or an easier way, to make the model infer the maximum number of categories automatically? It can exceed 1000 and I cannot omit those columns.
How do I handle unseen labels in new data at prediction time, since StringIndexer will not work in that case? The code below only splits the existing data, but I will be introducing new data in the future as well. (A sketch addressing the unseen-label part follows the code below.)
// Need to predict 2 classes
val cols_to_predict=Array("Label1","Label2")
// ID col
val omit_cols=Array("Key")
// reading the csv file
val data = sqlContext.read
.format("com.databricks.spark.csv")
.option("header", "true") // Use first line of all files as header
.option("inferSchema", "true") // Automatically infer data types
.load("abc.csv")
.cache()
// creating a features DF by droppping the labels so that I can run all
// the cols through String Indexer
val features=data.drop("Label1").drop("Label2").drop("Key")
// Since I do not know my max categories possible, I find it out
// and use it for maxBins parameter in RF
val distinct_col_counts=features.columns.map(x => data.select(x).distinct().count ).max
val transformers: Array[org.apache.spark.ml.PipelineStage] = features.columns.map(
cname => new StringIndexer().setInputCol(cname).setOutputCol(s"${cname}_index").fit(features)
)
val assembler = new VectorAssembler()
.setInputCols(features.columns.map(cname => s"${cname}_index"))
.setOutputCol("features")
val labelIndexer2 = new StringIndexer()
.setInputCol("prog_label2")
.setOutputCol("Label2")
.fit(data)
val labelIndexer1 = new StringIndexer()
.setInputCol("orig_label1")
.setOutputCol("Label1")
.fit(data)
val rf = new RandomForestClassifier()
.setLabelCol("Label1")
.setFeaturesCol("features")
.setNumTrees(100)
.setMaxBins(distinct_col_counts.toInt)
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predictedLabel")
.setLabels(labelIndexer1.labels)
// Split into train and test
val Array(trainingData, testData) = data.randomSplit(Array(0.7, 0.3))
trainingData.cache()
testData.cache()
// Running only for one label for now Label1
val stages: Array[org.apache.spark.ml.PipelineStage] =transformers :+ labelIndexer1 :+ assembler :+ rf :+ labelConverter //:+ labelIndexer2
val pipeline=new Pipeline().setStages(stages)
val model=pipeline.fit(trainingData)
val predictions = model.transform(testData)
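Regarding the unseen-label part of the question, one possible direction (a sketch under a stated assumption: Spark 2.2+, where StringIndexer gained the "keep" option for handleInvalid) is to let each indexer map unseen categories to an extra bucket instead of failing at transform time:
import org.apache.spark.ml.feature.StringIndexer
val transformers: Array[org.apache.spark.ml.PipelineStage] = features.columns.map(
cname => new StringIndexer()
.setInputCol(cname)
.setOutputCol(s"${cname}_index")
.setHandleInvalid("keep") // unseen categories get their own extra index; "skip" would drop such rows instead
.fit(features)
)
For the maxBins part, org.apache.spark.ml.feature.VectorIndexer with setMaxCategories can flag which assembled features are categorical without a manual distinct-count pass, although maxBins itself still has to be at least as large as the largest category count, so some counting may be unavoidable.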