Apache Spark MLlib get maximum value - Scala

I have the following model:
case class Product(price: Int, distance: Int)
and I have data that tells me whether a customer is willing to buy the product at price x when the distance is y (true/false).
I trained a logistic regression on it in Spark, and can now predict the vote for (price, distance) pairs. What if I now want to know the maximum price I can charge for distance x?
code:
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

val products: List[(Product, Double)] = getProductVotes()
// one LabeledPoint per vote: label = vote (0.0/1.0), features = (price, distance)
val points: List[LabeledPoint] = products.map { case (product, vote) =>
  LabeledPoint(vote, Vectors.dense(product.price, product.distance))
}
val data: RDD[LabeledPoint] = sc.parallelize(points)
val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L)
val training = splits(0).cache()
val test = splits(1).cache()
val model = new LogisticRegressionWithLBFGS()
  .setNumClasses(2) // the vote is binary, so two classes
  .run(training)

To find the maximum price for a given distance x:
Take the subset of your training data for which the vote is true.
Build labeled points with the label being "price" and the single feature being "distance".
Train a linear regression model on that set of labeled points to predict "price" from "distance"; a sketch of this approach follows below.
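A minimal sketch of that approach, reusing the products list and sc from the question (the regression hyper-parameters and the example distance are placeholder values):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}

// keep only the positive votes (customers who were willing to buy)
val positives = products.collect { case (product, vote) if vote == 1.0 => product }

// label = price, single feature = distance
val pricePoints = sc.parallelize(
  positives.map(p => LabeledPoint(p.price, Vectors.dense(p.distance)))
).cache()

// train a simple linear regression (iteration count and step size are placeholders)
val priceModel = LinearRegressionWithSGD.train(pricePoints, 100, 0.01)

// estimated maximum price a customer at distance x would still accept
val x = 5.0 // hypothetical distance
val maxPrice = priceModel.predict(Vectors.dense(x))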

Related

KMeans Spark ML

I would like to perform KMeans using the Spark ML. Input is a libsvm dataset:
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
// Start time
//val intial_Data=spark.read.option("header",true).csv("C://sample_lda_data.txt")
val dataset = spark.read.format("libsvm").load("C:\\spark\\data\\mllib\\sample_kmeans_data.txt")
// Trains a k-means model.
val kmeans = new KMeans().setK(2).setSeed(1L)
val model = kmeans.fit(dataset)
// Evaluate clustering by computing Within Set Sum of Squared Errors.
val WSSSE = model.computeCost(dataset)
println(s"Within Set Sum of Squared Errors = $WSSSE")
// Shows the result.
println("Cluster Centers: ")
model.clusterCenters.foreach(println)
So I would like to use a CSV file and apply KMeans with Spark ML.
I did this:
val inputData = spark.read.option("header", true).csv("C://sample_lda_data.txt")
// cast every column except the first to Double and collect them into an array column
val arrayCol = array(inputData.columns.drop(1).map(col).map(_.cast(DoubleType)): _*)
import spark.implicits._
// select the first column and the array column, and map each row into a LabeledPoint
val result = inputData.select(col("col1").cast(DoubleType), arrayCol).map(r => LabeledPoint(r.getAs[Double](0), Vectors.dense(r.getAs[WrappedArray[Double]](1).toArray)))
// Trains a k-means model
val kmeans = new KMeans().setK(2)
val model = kmeans.fit(result)
// Evaluate clustering by computing Within Set Sum of Squared Errors.
val WSSSE = model.computeCost(result)
println(s"Within Set Sum of Squared Errors = $WSSSE")
// Shows the result.
println("Cluster Centers: ")
model.clusterCenters.foreach(println)
I tried to turn the CSV file into a Dataset[LabeledPoint].
Is my transformation correct?
In Spark 2 we use the ML package instead of MLlib; it works on Datasets, and ML workflows follow the pipeline model. What you need to do is build a Dataset with two columns, features and label. The features column is a vector of the features you want to feed into the algorithm, and the label column is the target. To build the features column, just use a VectorAssembler to assemble all the feature columns you want to use. If you have a target column, rename it to label. After fitting this Dataset to the algorithm you will get your model; a sketch follows below.
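A minimal sketch of that approach, assuming the same CSV path as above and that every column in the file is numeric (the column handling is an assumption about the file's layout):

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.types.DoubleType

// read the CSV (path reused from the question)
val raw = spark.read.option("header", true).csv("C://sample_lda_data.txt")

// cast every column to Double so VectorAssembler can consume them
val numeric = raw.columns.foldLeft(raw)((df, c) => df.withColumn(c, df(c).cast(DoubleType)))

// assemble all columns into a single "features" vector column
val assembler = new VectorAssembler()
  .setInputCols(numeric.columns)
  .setOutputCol("features")
val dataset = assembler.transform(numeric)

// KMeans only needs the features column (clustering has no label)
val kmeans = new KMeans().setK(2).setSeed(1L)
val model = kmeans.fit(dataset)
println(s"Within Set Sum of Squared Errors = ${model.computeCost(dataset)}")
model.clusterCenters.foreach(println)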

Spark ML Linear Regression - What Hyper-parameters to Tune

I'm using the LinearRegression model in Spark ML for predicting price. It is a univariate regression (x = time, y = price).
Assuming my data is clean, what are the usual steps to take to improve this model?
So far, I tried tuning the regularization parameter using cross-validation, and got RMSE = 15 given stdev = 30.
Are there any other significant hyper-parameters I should care about? It seems Spark ML is not well documented for hyper-parameter tuning...
Update
I was able to tune the parameters using ParamGrid and cross-validation. However, is there any way to see what the fitted line looks like after correctly training a linear regression model? How can I know whether the line is quadratic or cubic, etc.? It would be great if there were a way to visualize the fitted line with all the training data points.
The link you provided points to the main hyperparameters:
.setRegParam(0.3) // lambda for regularization
.setElasticNetParam(0.8) // coefficient for L1 vs L2
You can perform a grid search to optimize them, say for
lambda in 0 to 0.8
elasticNet in 0 to 1.0
This can be done by providing an Array[ParamMap] to CrossValidator (a sketch follows below):
val estimatorParamMaps: Param[Array[ParamMap]]
param for estimator param maps
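For example, a minimal ParamGridBuilder/CrossValidator sketch (trainingData is assumed to be a DataFrame with "features" and "label" columns, and the grid values are only examples):

import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val lr = new LinearRegression()

// grid over the two main hyper-parameters
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.0, 0.1, 0.3, 0.8))
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
  .build()

val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator().setMetricName("rmse"))
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

// trainingData is a DataFrame with "features" and "label" columns (assumed)
val cvModel = cv.fit(trainingData)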
To answer your follow-up question: LinearRegression always produces a linear fit (a straight line), not a quadratic or cubic one. You can plot it by predicting y-values over a set of points spanning your x-range and drawing a line plot, then plotting your training data on top of it.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}
import org.apache.spark.sql.SparkSession

object PredictiveAnalysis {
  val session = SparkSession.builder().master("local").appName("PredictiveAnalysis").getOrCreate()

  def main(args: Array[String]): Unit = {
    val data = session.sparkContext.textFile("C:\\Users\\Test\\new_workspace\\PredictionAlgo\\src\\main\\resources\\data.txt")
    // parse each line into a LabeledPoint: the first value is the label,
    // and the first and last values are used as features
    val parsedData = data.map { line =>
      val x: Array[String] = line.replace(",", " ").split(" ")
      val y = x.map(_.toDouble)
      val d = y.size - 1
      val c = Vectors.dense(y(0), y(d))
      LabeledPoint(y(0), c)
    }.cache()
    val numIterations = 100
    val stepSize = 0.00000001
    val model = LinearRegressionWithSGD.train(parsedData, numIterations, stepSize)
    // compare actual labels with the model's predictions
    val valuesAndPreds = parsedData.map { point =>
      val prediction = model.predict(point.features)
      (point.label, prediction)
    }
    valuesAndPreds.foreach { case (label, prediction) => println(s"actual label: $label, predicted label: $prediction") }
    val MSE = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.mean()
    println("training Mean Squared Error = " + MSE)
  }
}

How to get probability from predictions using GeneralizedLinearRegression model using spark

I'm a newbie to machine learning and I was trying to use the binomial family of the GeneralizedLinearRegression model in Spark.
I tried this,
val trainingData = sparkSession.read.format("libsvm").load("trainingData.txt")
val testData = sparkSession.read.format("libsvm").load("testData.txt")
val glr = new GeneralizedLinearRegression()
  .setFamily("binomial")
  .setLink("logit")
  .setRegParam(0.3)
  .setMaxIter(10)
val glrModel = glr.fit(trainingData)
glrModel.transform(testData).show()
For my testData, I got a prediction value of 1.0E-16, whereas LogisticRegression gives a probability (0.765394663) and a prediction (0.0).
I want to know:
How do I predict classes from the prediction value of GeneralizedLinearRegression? Should I derive classes from the prediction value using a threshold?
How do I find the probability of the predicted value?
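For what it's worth, one common way to read that output, assuming that with family = "binomial" and link = "logit" the "prediction" column holds the predicted mean of the response (i.e. the estimated probability of label 1), is to threshold it yourself:

import org.apache.spark.sql.functions.{col, when}

val predictions = glrModel.transform(testData)

// assumption: with a binomial family and logit link, "prediction" is the
// estimated probability of label 1; thresholding it gives a class
val withClasses = predictions
  .withColumn("probability", col("prediction"))
  .withColumn("predictedClass", when(col("prediction") >= 0.5, 1.0).otherwise(0.0))

withClasses.select("label", "probability", "predictedClass").show()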

How to classify new training example after model training in apache spark?

Reading the src of https://spark.apache.org/docs/1.5.2/ml-ann.html :
import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.sql.Row
// Load training data
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_multiclass_classification_data.txt").toDF()
// Split the data into train and test
val splits = data.randomSplit(Array(0.6, 0.4), seed = 1234L)
val train = splits(0)
val test = splits(1)
// specify layers for the neural network:
// input layer of size 4 (features), two intermediate of size 5 and 4 and output of size 3 (classes)
val layers = Array[Int](4, 5, 4, 3)
// create the trainer and set its parameters
val trainer = new MultilayerPerceptronClassifier()
.setLayers(layers)
.setBlockSize(128)
.setSeed(1234L)
.setMaxIter(100)
// train the model
val model = trainer.fit(train)
// compute precision on the test set
val result = model.transform(test)
val predictionAndLabels = result.select("prediction", "label")
val evaluator = new MulticlassClassificationEvaluator()
.setMetricName("precision")
println("Precision:" + evaluator.evaluate(predictionAndLabels))
Once the model has been trained, how can a new example be classified?
Can a new example be added where the label is not set, so that the classifier tries to classify it based on the training data?
Why is it required that the DataFrame labels are of type Double?
Firstly, the only way to add another observation to the model is by incorporating that data point into the training data, in this case into your train variable. To achieve this, convert that point into a DataFrame (of just one record) and then use the unionAll method. Nevertheless, you will have to retrain the model on this new dataset.
However, to classify observations using your model, you have to convert your unclassified data into a DataFrame with the same structure as your training data, and then use your model's transform method (see the sketch below). By the way, notice that models have that method because they are subclasses of Transformer.
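For instance, a minimal sketch of classifying a single new, unlabeled observation with the model trained above (the feature values are made up for illustration, and sqlContext is the SQLContext that the .toDF() call above already relies on):

import org.apache.spark.mllib.linalg.Vectors

// hypothetical new observation with the same 4 features as the training data
val newObservation = sqlContext.createDataFrame(
  Seq(Tuple1(Vectors.dense(5.1, 3.5, 1.4, 0.2)))
).toDF("features")

// the trained model only needs a "features" column;
// the predicted class index is written to the "prediction" column
model.transform(newObservation).select("features", "prediction").show()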
Finally, you have to use Double because that is how the LabeledPoint class is defined: it takes a Double as the label and a SparseVector or DenseVector as the features. I don't know the exact motivation, but in my own (limited) experience, all classification and regression algorithms work with floating-point numbers. Furthermore, the gradient descent algorithm, which is widely used to fit most models, uses numbers, not letters or classes, to compute the error at each iteration.

How to define a function and pass training and test datasets in Scala?

I want to define a function in Scala to which I can pass my training and test datasets, and which then runs a simple machine learning algorithm and returns some statistics. How should I do that? What should the parameter data types be?
Imagine you need to define a function which, given training and test datasets, performs a simple classification algorithm and then returns the accuracy.
What I expect to have is something like the following:
val data = MLUtils.loadLibSVMFile(sc, datadir + "/example.txt");
val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L);
val training = splits(0).cache();
val test = splits(1);
val results1 = SVMFunction(training, test)
val results2 = RegressionFunction(training, test)
val results3 = ClassificationFunction(training, test)
I just need the declaration of the functions, not the code that produces results1, results2, and results3.
def SVMFunction ("I need help here"){
//I know how to work with the training and test datasets to generate the results.
//So no need to discuss what should be here
}
Thanks.
In case you're using supervised learning, you should opt for LabeledPoint. Excerpt from the MLlib docs:
A labeled point is a local vector, either dense or sparse, associated with a label/response. In MLlib, labeled points are used in supervised learning algorithms. We use a double to store a label, so we can use labeled points in both regression and classification. For binary classification, a label should be either 0 (negative) or 1 (positive). For multiclass classification, labels should be class indices starting from zero: 0, 1, 2, ....
An example is:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
// Create a labeled point with a positive label and a dense feature vector.
val pos = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
// Create a labeled point with a negative label and a sparse feature vector.
val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
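Building on that, a possible sketch of the declaration the question asks for, assuming the splits are RDD[LabeledPoint] (as produced by loadLibSVMFile) and that the function returns the accuracy as a Double:

import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

def SVMFunction(training: RDD[LabeledPoint], test: RDD[LabeledPoint]): Double = {
  // train a simple SVM; 100 iterations is just an example value
  val model = SVMWithSGD.train(training, 100)
  // accuracy = fraction of test points whose predicted label matches the actual label
  val correct = test.filter(p => model.predict(p.features) == p.label).count()
  correct.toDouble / test.count()
}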