How to obtain coefficient values from Spark-MLlib Linear Regression model (Scala)? - scala

I'd like to obtain the coefficient values of a Linear Regression (LR) model in Spark MLlib. Here I use LinearRegressionWithSGD to build the model; you can find the sample at the following link:
https://spark.apache.org/docs/2.1.0/mllib-linear-methods.html#regression
I could get the coefficient values from the Spark ML Linear Regression; please find the reference link below.
https://spark.apache.org/docs/2.1.0/ml-classification-regression.html#linear-regression
Please help me with this. Thanks in advance !!

I took the first lines of the model creation from the first link you sent:
val model: LinearRegressionModel = LinearRegressionWithSGD.train(parsedData, numIterations, stepSize)
// Here are the coefficients and intercept
val weights: org.apache.spark.mllib.linalg.Vector = model.weights
val intercept = model.intercept
// The underlying values can be extracted as an Array[Double]
val weightsData: Array[Double] = weights.asInstanceOf[org.apache.spark.mllib.linalg.DenseVector].values
The last lines give the coefficients and the intercept. The type of weights is org.apache.spark.mllib.linalg.Vector, which is a wrapper around the Breeze DenseVector.
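If you only need to look at the numbers, a minimal sketch (reusing the model from the snippet above) can go through Vector.toArray instead of the cast:
// Sketch: print each coefficient and the intercept of the model above
val coefficientValues: Array[Double] = model.weights.toArray
coefficientValues.zipWithIndex.foreach { case (value, index) =>
  println(s"coefficient for feature $index: $value")
}
println(s"intercept: ${model.intercept}")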

Related

pyspark extract ROC curve?

Is there a way to get the points on an ROC curve from Spark ML in pyspark? In the documentation I see an example for Scala but not python: https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.html
Is that right? I can certainly think of ways to implement it but I have to imagine it’s faster if there’s a pre-built function. I’m working with 3 million scores and a few dozen models so speed matters.
For a more general solution that works for models besides Logistic Regression (like Decision Trees or Random Forests, which lack a model summary), you can get the ROC curve using BinaryClassificationMetrics from Spark MLlib.
Note that the PySpark version doesn't implement all of the methods that the Scala version does, so you'll need to use the .call(name) function from JavaModelWrapper. It also seems that py4j doesn't support parsing scala.Tuple2 classes, so they have to be manually processed.
Example:
from pyspark.mllib.evaluation import BinaryClassificationMetrics

# Scala version implements .roc() and .pr()
# Python: https://spark.apache.org/docs/latest/api/python/_modules/pyspark/mllib/common.html
# Scala: https://spark.apache.org/docs/latest/api/java/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.html
class CurveMetrics(BinaryClassificationMetrics):
    def __init__(self, *args):
        super(CurveMetrics, self).__init__(*args)

    def _to_list(self, rdd):
        points = []
        # Note this collect could be inefficient for large datasets
        # considering there may be one probability per datapoint (at most)
        # The Scala version takes a numBins parameter,
        # but it doesn't seem possible to pass this from Python to Java
        for row in rdd.collect():
            # Results are returned as type scala.Tuple2,
            # which doesn't appear to have a py4j mapping
            points += [(float(row._1()), float(row._2()))]
        return points

    def get_curve(self, method):
        rdd = getattr(self._java_model, method)().toJavaRDD()
        return self._to_list(rdd)
Usage:
import matplotlib.pyplot as plt

# Create a Pipeline estimator and fit on train DF, predict on test DF
model = estimator.fit(train)
predictions = model.transform(test)

# get_curve('roc') returns a list of (false positive rate, true positive rate) points
preds = predictions.select('label', 'probability').rdd.map(lambda row: (float(row['probability'][1]), float(row['label'])))
points = CurveMetrics(preds).get_curve('roc')

plt.figure()
x_val = [x[0] for x in points]
y_val = [x[1] for x in points]
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.plot(x_val, y_val)
BinaryClassificationMetrics in Scala implements several other useful methods as well:
metrics = CurveMetrics(preds)
metrics.get_curve('fMeasureByThreshold')
metrics.get_curve('precisionByThreshold')
metrics.get_curve('recallByThreshold')
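If you are working in Scala directly, the same curves can be read off BinaryClassificationMetrics without the Python wrapper. A minimal sketch, assuming scoreAndLabels is an RDD[(Double, Double)] of (score, label) pairs built the same way as preds above:
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.rdd.RDD

// Sketch only: `scoreAndLabels` holds (probability of the positive class, label) pairs
def rocPoints(scoreAndLabels: RDD[(Double, Double)]): Array[(Double, Double)] = {
  val metrics = new BinaryClassificationMetrics(scoreAndLabels)
  metrics.roc().collect() // (FPR, TPR) points; pr(), fMeasureByThreshold(), etc. work the same way
}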
As long as the ROC curve is a plot of FPR against TPR, you can extract the needed values as follows:
your_model.summary.roc.select('FPR').collect()
your_model.summary.roc.select('TPR').collect()
Here your_model could be, for example, a model obtained from something like this:
from pyspark.ml.classification import LogisticRegression
log_reg = LogisticRegression()
your_model = log_reg.fit(df)
Now you should just plot FPR against TPR, using for example matplotlib.
P.S.
Here is a complete example of plotting the ROC curve using a model named your_model (or any other fitted model). I've also plotted a reference "random guess" line inside the ROC plot.
import matplotlib.pyplot as plt
plt.figure(figsize=(5,5))
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(your_model.summary.roc.select('FPR').collect(),
your_model.summary.roc.select('TPR').collect())
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.show()
To get ROC metrics for the training data (trained model), we can use your_model.summary.roc, which is a DataFrame with the columns FPR and TPR. See Andrea's answer.
For the ROC evaluated on arbitrary test data, we can pass the label and probability columns to sklearn's roc_curve to get FPR and TPR. Here we assume a binary classification problem where the y score is the probability of predicting 1. See also How to split Vector into columns - using PySpark and How to convert a pyspark dataframe column to numpy array.
Example
from sklearn.metrics import roc_curve
from pyspark.ml.functions import vector_to_array

model = lr.fit(train_df)
test_df_predict = model.transform(test_df)
y_score = test_df_predict.select(vector_to_array("probability")[1]).rdd.keys().collect()
y_true = test_df_predict.select("label").rdd.keys().collect()
fpr, tpr, thresholds = roc_curve(y_true, y_score)

Create Linear Regression Model from an array of coefficients in Spark

I have an array of coefficients already computed and I want to create a Linear Regression Model out of it in Spark 2.0.1 so that I can use it for prediction.
What is the easiest way to create a LinearRegressionModel class with an array of coefficients?
Your linear model is just a linear equation, so, for example, if your coefficients are
val coefficients = Array[Double](c0, c1, c2, ..., cn)
where the first value is the intercept coefficient (assuming you have an intercept), then your linear equation is
y = c0 + c1*x1 + c2*x2 + ... + cn*xn
So you could define
class LinearModel(coefficients: Array[Double]) {
  def predict(newObservation: Array[Double]): Double = {
    val intercept = coefficients(0)
    val weights = coefficients.drop(1)
    val multiplication = newObservation.zip(weights).map { case (x, y) => x * y }.sum
    val prediction = intercept + multiplication
    prediction
  }
}
For example, if your coefficients are
val coefficients=Array(2.0,2.1,2.2)
then define a new linear model
val model = new LinearModel(coefficients)
So if you have a new observation
val newObservation = Array(1.0, 1.0)
the prediction is
model.predict(newObservation)
and the output is
scala> model.predict(newObservation)
res16: Double = 6.300000000000001
And you can adapt the previous code if you want to predict a bunch of observations instead of just one.
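For instance, a minimal sketch of that adaptation (reusing the model and Array[Double] observations from above) simply maps predict over a collection:
// Sketch: predict several observations with the LinearModel defined above
val observations: Array[Array[Double]] = Array(
  Array(1.0, 1.0),
  Array(0.5, 2.0)
)
val predictions: Array[Double] = observations.map(model.predict)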

How to get probability from predictions using GeneralizedLinearRegression model using spark

I'm a newbie to machine learning and I was trying to implement the binomial family of the GeneralizedLinearRegression model using Spark.
I tried this:
val trainingData = sparkSession.read.format("libsvm").load("trainingData.txt")
val testData = sparkSession.read.format("libsvm").load("testData.txt")
val glr = new GeneralizedLinearRegression().setFamily("binomial").setLink("logit").setRegParam(0.3).setMaxIter(10)
val glrModel = glr.fit(trainingData)
glrModel.transform(testData).show()
For my testData, I got a prediction value of 1.0E-16, while LogisticRegression gives a probability (0.765394663) and a prediction (0.0).
I want to know,
How do I predict classes from the prediction value of GeneralizedLinearRegression? Should I derive classes from the prediction value by using a threshold?
How do I find the probability of the predicted value?
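Not a definitive answer, but a sketch of one common reading: with family = "binomial" and link = "logit", the prediction column that transform produces is on the response scale, i.e. an estimated probability of the positive class, so classes can be obtained by thresholding it. The 0.5 cutoff below is an assumption; pick whatever threshold suits your problem:
import org.apache.spark.sql.functions.{col, when}

// Sketch (assumes glrModel and testData from the question):
// treat "prediction" as the estimated probability of class 1 and threshold it.
val scored = glrModel.transform(testData)
  .withColumnRenamed("prediction", "probabilityEstimate")
  .withColumn("predictedClass",
    when(col("probabilityEstimate") >= 0.5, 1.0).otherwise(0.0))
scored.select("label", "probabilityEstimate", "predictedClass").show()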

Bitcoin price prediction using spark and scala [duplicate]

I am new to Apache Spark and trying to use the machine learning library to predict some data. My dataset right now is only about 350 points. Here are 7 of those points:
"365","4",41401.387,5330569
"364","3",51517.886,5946290
"363","2",55059.838,6097388
"362","1",43780.977,5304694
"361","7",46447.196,5471836
"360","6",50656.121,5849862
"359","5",44494.476,5460289
Here's my code:
def parsePoint(line):
    split = map(sanitize, line.split(','))
    rev = split.pop(-2)
    return LabeledPoint(rev, split)

def sanitize(value):
    return float(value.strip('"'))

parsedData = textFile.map(parsePoint)
model = LinearRegressionWithSGD.train(parsedData, iterations=10)
print model.predict(parsedData.first().features)
The prediction is something totally crazy, like -6.92840330273e+136. If I don't set iterations in train(), then I get nan as a result. What am I doing wrong? Is it my data set (the size of it, maybe?) or my configuration?
The problem is that LinearRegressionWithSGD uses stochastic gradient descent (SGD) to optimize the weight vector of your linear model. SGD is very sensitive to the provided stepSize, which is used to update the intermediate solution.
What SGD does is calculate the gradient g of the cost function given a sample of the input points and the current weights w. To update the weights w, you move a certain distance in the opposite direction of g. That distance is your step size s.
w(i+1) = w(i) - s * g
Since you're not providing an explicit step size value, MLlib assumes stepSize = 1.0. This does not seem to work for your use case. I'd recommend trying different step sizes, usually lower values, to see how LinearRegressionWithSGD behaves:
LinearRegressionWithSGD.train(parsedData, iterations=10, step=0.001)

How to define a function and pass training and test datasets in Scala?

I want to define a function in Scala to which I can pass my training and test datasets; it should then perform a simple machine learning algorithm and return some statistics. How should I do that? What should the parameters' data type be?
Imagine you need to define a function which takes training and test datasets, performs a simple classification algorithm, and then returns the accuracy.
What I expect to have is something like the following:
val data = MLUtils.loadLibSVMFile(sc, datadir + "/example.txt");
val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L);
val training = splits(0).cache();
val test = splits(1);
val results1 = SVMFunction(training, test)
val results2 = RegressionFunction(training, test)
val results3 = ClassificationFunction(training, test)
I need just the declaration of the functions, not the code that produces results1, results2, and results3.
def SVMFunction("I need help here") {
  // I know how to work with the training and test datasets to generate the results.
  // So no need to discuss what should be here.
}
Thanks.
If you're using supervised learning, you should opt for LabeledPoint. Excerpt from the MLlib docs:
A labeled point is a local vector, either dense or sparse, associated with a label/response. In MLlib, labeled points are used in supervised learning algorithms. We use a double to store a label, so we can use labeled points in both regression and classification. For binary classification, a label should be either 0 (negative) or 1 (positive). For multiclass classification, labels should be class indices starting from zero: 0, 1, 2, ....
An example is:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
// Create a labeled point with a positive label and a dense feature vector.
val pos = LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0))
// Create a labeled point with a negative label and a sparse feature vector.
val neg = LabeledPoint(0.0, Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)))
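To tie this back to the question, here is a minimal sketch of the declaration being asked for, assuming the datasets are RDD[LabeledPoint] as produced by MLUtils.loadLibSVMFile and randomSplit; the SVMWithSGD body and the simple accuracy computation are only illustrative:
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.classification.SVMWithSGD
import org.apache.spark.mllib.regression.LabeledPoint

// Sketch: the parameters are RDD[LabeledPoint]; the body is just one possible example.
def SVMFunction(training: RDD[LabeledPoint], test: RDD[LabeledPoint]): Double = {
  val model = SVMWithSGD.train(training, 100) // 100 iterations, illustrative value
  val correct = test.filter(point => model.predict(point.features) == point.label).count()
  correct.toDouble / test.count()
}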