I am trying to run the Spark logistic regression function (ml, not mllib). I have a dataframe which looks like this (just the first row shown):
+-----+--------+
|label|features|
+-----+--------+
| 0.0| [60.0]|
(Right now just trying to keep it simple with only one dimension in the feature, but will expand later on.)
I run the following code, taken from the Spark ML documentation:
import org.apache.spark.ml.classification.LogisticRegression
val lr = new LogisticRegression()
.setMaxIter(10)
.setRegParam(0.3)
.setElasticNetParam(0.8)
val lrModel = lr.fit(df)
This gives me the error:
org.apache.spark.SparkException: Values to assemble cannot be null.
I'm not sure how to fix this error. I looked at sample_libsvm_data.txt, which is in the Spark GitHub repo and used in some of the examples in the Spark ML documentation. That dataframe looks like:
+-----+--------------------+
|label| features|
+-----+--------------------+
| 0.0|(692,[127,128,129...|
| 1.0|(692,[158,159,160...|
| 1.0|(692,[124,125,126...|
Based on this example, my data looks like it should be in the right format, with one issue. Is 692 the number of features? Seems rather dumb if so; Spark should just be able to look at the length of the feature vector to see how many features there are. If I do need to add the number of features, how would I do that? (Pretty new to Scala/Java.)
Cheers
This error is thrown by VectorAssembler when any of the features are null. Please verify that your rows don't contain null values. If there are null values, you must convert them to a default numeric value before running VectorAssembler.
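For example, a minimal sketch of that cleanup might look like the following, where rawDf, the column name "colA", and the default 0.0 are placeholders for your own schema:
import org.apache.spark.ml.feature.VectorAssembler
// replace nulls in the raw numeric column with a default value first
val cleaned = rawDf.na.fill(0.0, Seq("colA"))
val assembler = new VectorAssembler()
  .setInputCols(Array("colA"))
  .setOutputCol("features")
val assembled = assembler.transform(cleaned)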
Regarding the format of sample_libsvm_data.txt: it is stored in sparse vector form, where each line is represented as:
0 128:51 129:159 130:253
(where 0 is the label and each subsequent entry is in index:value format). To answer your question: yes, the 692 in the dataframe output is the size of the sparse vector, i.e. the total number of features.
You can build your single-feature dataframe in the following way using the Vectors class (I ran it on the 1.6.1 shell):
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.ml.classification.LogisticRegression
val training1 = sqlContext.createDataFrame(Seq(
(1.0, Vectors.dense(3.0)),
(0.0, Vectors.dense(3.0)))
).toDF("label", "features")
val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)
val model1 = lr.fit(training1)
For more, you can check the examples at https://spark.apache.org/docs/1.6.1/ml-guide.html#dataframe (refer to the section "Code examples").
I'm using pyspark 3.2.1 and I'm trying to create a HashingTF object and set input and output columns.
Neither
hashingTF = HashingTF(numFeatures=20).setOutputCol('output')
nor
hashingTF = HashingTF(numFeatures=20, outputCol='output')
works. The same problem is observed when I try to set the input column.
According to https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.feature.HashingTF.html both aforementioned methods are expected to work.
I am using Apache Zeppelin (0.9.0) and Scala (2.11.12). I want to pull some data out of a dataframe and store it in InfluxDB, to be visualized later in Grafana, and cannot figure it out. I'm trying a naive approach with a foreach loop. The idea is to iterate through all rows, extract the columns I need, create a Point object (from this InfluxDB client library), and either send it to InfluxDB or add it to a list and then send all the points in bulk, after the loop.
The dataframe looks like this:
+---------+---------+-------------+-----+
|timestamp|sessionId|eventDuration|speed|
+---------+---------+-------------+-----+
| 1| ses1| 0.0| 50|
| 2| ses1| 1.0| 50|
| 3| ses1| 2.0| 50|
I've tried to do what is described above:
import scala.collection.mutable.ListBuffer
import spark.implicits._
import org.apache.spark.sql._
import com.paulgoldbaum.influxdbclient._
import scala.concurrent.ExecutionContext.Implicits.global
val influxdb = InfluxDB.connect("172.17.0.4", 8086)
val database = influxdb.selectDatabase("test")
var influxData = new ListBuffer[Point]()
dfAnalyseReport.foreach(row =>
{
val point = Point("acceleration")
.addTag("speedBin", row.getLong(3).toString)
.addField("eventDuration", row.getDouble(2))
influxData += point
}
)
val influxDataList = influxData.toList
database.bulkWrite(influxDataList)
The only thing I am getting here is a cryptic java.lang.ClassCastException with no additional info, neither in the notebook output nor in the logs of the Zeppelin Docker container. The error seems to be somewhere in the foreach, as it appears even when I comment out the last two lines.
I also tried adapting approach 1 from this answer, using a case class for columns, but to no avail. I got it to run without an error, but the resulting list was empty. Unfortunately I deleted that attempt. I could reconstruct it if necessary, but I've spent so much time on this I'm fairly certain I have some fundamental misunderstanding on how this should be done.
One further question: I also tried writing each Point to the DB as it was constructed (instead of in bulk). The only difference is that instead of appending to the ListBuffer I did a database.write(point) operation. When done outside of the loop with a dummy point, it goes through without a problem and the data ends up in InfluxDB, but inside the loop it results in org.apache.spark.SparkException: Task not serializable.
Could someone point me in the right direction? How should I tackle this?
I'd do it with the RDD map method and collect the results to a list:
val influxDataList = dfAnalyseReport.rdd.map(
row => Point("acceleration")
.addTag("speedBin", row.getInt(3).toString)
.addField("eventDuration", row.getDouble(2))
).collect.toList
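Two things worth noting. First, getInt(3) replaces your getLong(3): if the speed column is an IntegerType, calling getLong on it is one way to get exactly the ClassCastException you saw. Second, mutating a driver-side ListBuffer inside foreach can never work, because the closure is serialized to the executors and each task appends to its own copy; collect instead ships the constructed points back to the driver, where you can write them in bulk as before:
database.bulkWrite(influxDataList)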
I have a large text file which contains the page views of some Wikimedia projects. (You can find it here if you're really interested) Each line, delimited by a space, contains the statistics for one Wikimedia page. The schema looks as follows:
<project code> <page title> <num hits> <page size>
In Scala, using Spark RDDs or Dataframes, I wish to compute the total number of hits for each project, based on the project code.
So for example for projects with the code "zw", I would like to find all the rows that begin with project code "zw", and add up their hits. Obviously this should be done for all project codes simultaneously.
I have looked at functions like aggregateByKey etc, but the examples I found don't go into enough detail, especially for a file with 4 fields. I imagine it's some kind of MapReduce job, but how exactly to implement it is beyond me.
Any help would be greatly appreciated.
First, you have to read the file in as a Dataset[String]. Then, parse each string into a tuple so that it can be easily converted to a DataFrame. Once you have a DataFrame, a simple .groupBy().agg() is enough to finish the computation.
import org.apache.spark.sql.functions.sum
import spark.implicits._
val df = spark.read.textFile("/tmp/pagecounts.gz").map(l => {
val a = l.split(" ")
(a(0), a(2).toLong)
}).toDF("project_code", "num_hits")
val agg_df = df.groupBy("project_code")
.agg(sum("num_hits").as("total_hits"))
.orderBy($"total_hits".desc)
agg_df.show(10)
The above snippet shows the top 10 project codes by total hits.
+------------+----------+
|project_code|total_hits|
+------------+----------+
| en.mw| 5466346|
| en| 5310694|
| es.mw| 695531|
| ja.mw| 611443|
| de.mw| 572119|
| fr.mw| 536978|
| ru.mw| 466742|
| ru| 463437|
| es| 400632|
| it.mw| 400297|
+------------+----------+
It is certainly also possible to do this with the older API as an RDD map/reduce, but you lose many of the optimizations that the Dataset/DataFrame API brings.
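For reference, a minimal RDD sketch of the same computation using reduceByKey might look like this (same file path assumed as above):
val totals = spark.sparkContext.textFile("/tmp/pagecounts.gz")
  .map(_.split(" "))                 // split each line on spaces
  .map(a => (a(0), a(2).toLong))     // (project_code, num_hits)
  .reduceByKey(_ + _)                // sum hits per project code
totals.sortBy(-_._2).take(10).foreach(println)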
The file generated from the API contains data like below:
col1,col2,col3
503004,(d$üíõ$F|'.h*Ë!øì=(.î; ,.¡|®!®3-2-704
When I read it in Spark, it appears like this. I am using a case class to read from an RDD and then convert it to a DataFrame using .toDF.
503004,������������,������������������������3-2-704
but I am trying to get values like
503004,dFh,3-2-704 (only the alphanumeric value is retained).
I am using Spark 1.6 and Scala.
Please share your suggestions.
This can be achieved by using regexp_replace:
import org.apache.spark.sql.functions.regexp_replace
import spark.implicits._
val df = spark.sparkContext.parallelize(List(("503004","d$üíõ$F|'.h*Ë!øì=(.î; ,.¡|®!®","3-2-704"))).toDF("col1","col2","col3")
df.withColumn("col2_new", regexp_replace($"col2", "[^a-zA-Z]", "")).show()
Output:
+------+--------------------+-------+--------+
| col1| col2| col3|col2_new|
+------+--------------------+-------+--------+
|503004|d$üíõ$F|'.h*Ë!øì=...|3-2-704| dFh|
+------+--------------------+-------+--------+
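Note that [^a-zA-Z] keeps only letters, which matches the dFh you expect; if by alphanumeric you also want to keep digits, use [^a-zA-Z0-9] instead.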
Spark version 1.6.0, Scala version 2.10.5.
I have a spark-sql dataframe df like this:
+--------------+------------------------------+
|address       |attributes                    |
+--------------+------------------------------+
|1314 44 Avenue|Tours, Mechanics, Shopping    |
|115 25th Ave  |Restaurant, Mechanics, Brewery|
+--------------+------------------------------+
From this dataframe, I would like values as below:
Tours, Mechanics, Shopping, Brewery
If I do this,
df.select(df("attributes")).collect().foreach(println)
I get,
[Tours, Mechanics, Shopping]
[Restaurant, Mechanics, Brewery]
I thought I could use flatMap instead, and found this, so I tried to put this into a variable using:
val allValues = df.withColumn(df("attributes"), explode("attributes"))
but I am getting an error:
error: type mismatch;
 found   : org.apache.spark.sql.Column
 required: String
I was thinking if I can get an output using explode I can use distinct to get the unique values after flattening them.
How can I get the desired output?
I strongly recommend you use a Spark 2.x version. In Cloudera, when you issue "spark-shell", it launches the 1.6.x version; however, if you issue "spark2-shell", you get the 2.x shell. Check with your admin.
But if you need a Spark 1.6 RDD solution, try this:
import spark.implicits._
import scala.collection.mutable.WrappedArray
val df = Seq(("1314 44 Avenue",Array("Tours", "Mechanics", "Shopping")),
("115 25th Ave",Array("Restaurant", "Mechanics", "Brewery"))).toDF("address","attributes")
df.rdd.flatMap( x => x.getAs[WrappedArray[String]]("attributes") ).distinct().collect.foreach(println)
Results:
Brewery
Shopping
Mechanics
Restaurant
Tours
If the "attribute" column is not an array, but comma separated string, then use the below one which gives you same results
val df = Seq(("1314 44 Avenue","Tours,Mechanics,Shopping"),
("115 25th Ave","Restaurant,Mechanics,Brewery")).toDF("address","attributes")
df.rdd.flatMap( x => x.getAs[String]("attributes").split(",") ).distinct().collect.foreach(println)
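One caveat: the dataframe in the question displays the attributes with spaces after the commas ("Tours, Mechanics, Shopping"). If your real string column looks like that, consider .split(",").map(_.trim) so that " Mechanics" and "Mechanics" are not counted as two distinct values.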
The problem is that withColumn expects a String as its first argument (the name of the added column), but you're passing it a Column here: df.withColumn(df("attributes"), ...).
You only need to pass "attributes" as a String.
Additionally, you need to pass a Column to the explode function, but you're passing a String. To make it a Column you can use df("columnName") or the Scala shorthand $ syntax, $"columnName".
Hope this example can help you.
import org.apache.spark.sql.functions._
val allValues = df.select(explode($"attributes").as("attributes")).distinct
Note that this will only preserve the attributes Column, since you want the distinct elements on that one.
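For completeness, a sketch of the corrected withColumn route described above (the new column name "attributes_exploded" is just an illustration):
import org.apache.spark.sql.functions.explode
// first argument is the new column's name (a String), second is a Column expression
val exploded = df.withColumn("attributes_exploded", explode($"attributes"))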