Spark - How to use QuantileDiscretizer with RandomForestClassifier - scala

Is it possible to use QuantileDiscretizer, keeping NaN values, with a RandomForestClassifier?
I have been getting an error like this:
18/03/23 17:38:15 ERROR Executor: Exception in task 3.0 in stage 133.0 (TID 381)
java.lang.IllegalArgumentException: DecisionTree given invalid data: Feature 1 is categorical with values in {0,...,1, but a data point gives it value 2.0.
Bad data point: (1.0,[1.0,2.0])
Example
The idea here is to create a numeric column and discretize it using quantiles, keeping invalid numbers (NaN) in a special bucket.
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler,
QuantileDiscretizer}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.{RandomForestClassifier}
val tseq = Seq((0, "a", 1.0), (1, "b", 0.0), (2, "c", 2.0),
(3, "a", 1.0), (4, "a", 3.0), (5, "c", Double.NaN))
val tdf = SparkInit.ss.createDataFrame(tseq).toDF("id", "category", "class")
val indexer = new StringIndexer()
.setInputCol("category")
.setOutputCol("categoryIndex")
val discr = new QuantileDiscretizer()
.setInputCol("class")
.setOutputCol("quant")
.setNumBuckets(2)
.setHandleInvalid("keep")
val assembler = new VectorAssembler()
.setInputCols(Array("categoryIndex", "quant"))
.setOutputCol("features")
val rf = new RandomForestClassifier()
.setLabelCol("categoryIndex")
.setFeaturesCol("features")
.setNumTrees(3)
new Pipeline()
.setStages(Array(indexer, discr, assembler, rf))
.fit(tdf)
.transform(tdf)
.show()
Without trying to fit the Random Forest, I was getting a DataFrame like this:
+---+--------+-----+-------------+-----+---------+
| id|category|class|categoryIndex|quant| features|
+---+--------+-----+-------------+-----+---------+
| 0| a| 1.0| 0.0| 1.0|[0.0,1.0]|
| 1| b| 0.0| 2.0| 0.0|[2.0,0.0]|
| 2| c| 2.0| 1.0| 1.0|[1.0,1.0]|
| 3| a| 1.0| 0.0| 1.0|[0.0,1.0]|
| 4| a| 3.0| 0.0| 1.0|[0.0,1.0]|
| 5| c| NaN| 1.0| 2.0|[1.0,2.0]|
+---+--------+-----+-------------+-----+---------+
If I try to fit the model, I get the error:
18/03/23 17:54:12 WARN DecisionTreeMetadata: DecisionTree reducing maxBins from 32 to 6 (= number of training instances)
18/03/23 17:54:12 WARN BlockManager: Putting block rdd_490_3 failed due to an exception
18/03/23 17:54:12 WARN BlockManager: Block rdd_490_3 could not be removed as it was not found on disk or in memory
18/03/23 17:54:12 ERROR Executor: Exception in task 3.0 in stage 143.0 (TID 414)
java.lang.IllegalArgumentException: DecisionTree given invalid data: Feature 1 is categorical with values in {0,...,1, but a data point gives it value 2.0.
Bad data point: (1.0,[1.0,2.0])
at org.apache.spark.ml.tree.impl.TreePoint$.findBin(TreePoint.scala:124)
at org.apache.spark.ml.tree.impl.TreePoint$.org$apache$spark$ml$tree$impl$TreePoint$$labeledPointToTreePoint(TreePoint.scala:93)
at org.apache.spark.ml.tree.impl.TreePoint$$anonfun$convertToTreeRDD$2.apply(TreePoint.scala:73)
at org.apache.spark.ml.tree.impl.TreePoint$$anonfun$convertToTreeRDD$2.apply(TreePoint.scala:72)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
Does QuantileDiscretizer insert some kind of metadata about the special extra bucket? It's odd, because I was able to build a model before using columns with the same values, just without forcing any discretization.
Update
Yes, the columns do have attached metadata, and it looks like this:
org.apache.spark.sql.types.Metadata = {"ml_attr":
{"ord":true,
"vals":["-Infinity, 5.0","5.0, 10.0","10.0, Infinity"],
"type":"nominal"}
}
The question now might be: how do I correctly set the metadata to include values like Double.NaN?

The workaround I used was simply to remove the associated metadata from the discretized columns, letting the decision tree implementation decide what to do with the data. I think the column actually becomes a numerical column ([0, 1, 2, 2, 1], for example), but if too many categories are created, it could be discretized again (see the maxBins parameter).
In my case, the simplest way to remove the metadata was to fill the DataFrame after applying QuantileDiscretizer:
// Nothing is actually filled in my case, since there were no missing
// values before this operation.
df.na.fill(Double.NaN, Array("quant"))
I'm almost sure you could also remove the metadata manually by accessing the column object directly.
Update
We can change a column's metadata by creating an alias (reference):
val metadata: Metadata = ...
df.select($"colA".as("colB", metadata))
This answer describes a way to get the column's metadata by getting the respective StructField of a DataFrame's schema.
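For example, a minimal sketch of reading the metadata for the discretized column from the schema and then stripping it via an alias, assuming the df and the quant column from the example above:
import org.apache.spark.sql.types.Metadata

// the ml_attr metadata attached by QuantileDiscretizer
val quantMeta: Metadata = df.schema("quant").metadata
println(quantMeta.json)

// replace the column with a copy that carries empty metadata
val noMetaDf = df.withColumn("quant", df("quant").as("quant", Metadata.empty))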

Related

Looking to subtract every value in a row based on the value of a separate DF

As the title states, I would like to subtract the mean of a specific column from each of its values.
Here is my code attempt:
val test = moviePairs.agg(avg(col("rating1")).alias("avgX"), avg(col("rating2")).alias("avgY"))
val subMean = moviePairs.withColumn("meanDeltaX", col("rating1") - test.select("avgX").collect())
.withColumn("meanDeltaY", col("rating2") - test.select("avgY").collect())
subMean.show()
You can use either Spark's DataFrame functions or a plain SQL query on a DataFrame to aggregate the means of the columns you are focusing on (rating1 and rating2).
import org.apache.spark.sql.functions.{avg, col}

val moviePairs = spark.createDataFrame(
  Seq(
    ("Moonlight", 7, 8),
    ("Lord Of The Drinks", 10, 1),
    ("The Disaster Artist", 3, 5),
    ("Airplane!", 7, 9),
    ("2001", 5, 1)
  )
).toDF("movie", "rating1", "rating2")
// find the means for each column and isolate the first (and only) row to get their values
val means = moviePairs.agg(avg("rating1"), avg("rating2")).head()
// alternatively, by using a simple SQL query:
// moviePairs.createOrReplaceTempView("movies")
// val means = spark.sql("select AVG(rating1), AVG(rating2) from movies").head()
val subMean = moviePairs.withColumn("meanDeltaX", col("rating1") - means.getDouble(0))
.withColumn("meanDeltaY", col("rating2") - means.getDouble(1))
subMean.show()
Output for the test input DataFrame moviePairs (with the good ol' double precision loss which you can manage as seen here):
+-------------------+-------+-------+-------------------+-------------------+
| movie|rating1|rating2| meanDeltaX| meanDeltaY|
+-------------------+-------+-------+-------------------+-------------------+
| Moonlight| 7| 8| 0.5999999999999996| 3.2|
| Lord Of The Drinks| 10| 1| 3.5999999999999996| -3.8|
|The Disaster Artist| 3| 5|-3.4000000000000004|0.20000000000000018|
| Airplane!| 7| 9| 0.5999999999999996| 4.2|
| 2001| 5| 1|-1.4000000000000004| -3.8|
+-------------------+-------+-------+-------------------+-------------------+
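As a side note, if the precision noise bothers you, one way to round it off (a sketch; bround and a scale of 2 are just one choice) is:
import org.apache.spark.sql.functions.{bround, col}

subMean
  .withColumn("meanDeltaX", bround(col("meanDeltaX"), 2))
  .withColumn("meanDeltaY", bround(col("meanDeltaY"), 2))
  .show()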

Loading JSON to Spark SQL

I'm doing self study about JSON with Spark SQL in v2.1 and am using the data from the link
https://catalog.data.gov/dataset/air-quality-measures-on-the-national-environmental-health-tracking-network
The problem I have is that when I use:
val lines = spark.read
.option("multiLine", true).option("mode", "PERMISSIVE")
.json("E:/VW/meta_plus_sample_Data.json")`
I get Spark SQL returning all the data as one row.
+--------------------+--------------------+
| data| meta|
+--------------------+--------------------+
|[[row-8eh8_xxkx-u...|[[[[1439474950, t...|
+--------------------+--------------------+
And when I remove:
.option("multiLine", true).option("mode", "PERMISSIVE")
I get an error like this:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" org.apache.spark.sql.AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV files are disallowed when the
referenced columns only include the internal corrupt record column
(named _corrupt_record by default). For example:
spark.read.schema(schema).json(file).filter($"_corrupt_record".isNotNull).count()
and spark.read.schema(schema).json(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the same query.
For example, val df = spark.read.schema(schema).json(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().;
Is there an option to achieve this in Spark SQL, with each record from the file loaded as one row in the table?
This is expected behavior, as the file (from the link provided in the question) contains only one record, with meta (an object) and data (an array).
Since that one JSON record spans multiple lines, we need to include the multiLine option.
val df = spark.read.option("multiLine", true).option("mode", "PERMISSIVE").json("tmp.json")
df.show()
//sample data
//+--------------------+--------------------+
//| data| meta|
//+--------------------+--------------------+
//|[[row-8eh8_xxkx-u...|[[[[1439474950, t...|
//+--------------------+--------------------+
//access meta struct columns
df.select("meta.view.*").show()
//+--------------------+-------------+--------------------+--------------------+----------+--------------------+-----------+-------------+--------------------+--------------------+---------------+----------------+---------+--------------+--------------------+--------------------+----------+----------------+--------+--------------------+----------+------------------------+---------------+----------------+----------------+--------------------+------+--------+-------------+-------------+--------------------+-------+--------------------+---------------+---------+----------------+--------+
//| approvals|averageRating| category| columns| createdAt| description|displayType|downloadCount| flags| grants|hideFromCatalog|hideFromDataJson| id|indexUpdatedAt| metadata| name|newBackend|numberOfComments| oid| owner|provenance|publicationAppendEnabled|publicationDate|publicationGroup|publicationStage| query|rights|rowClass|rowsUpdatedAt|rowsUpdatedBy| tableAuthor|tableId| tags|totalTimesRated|viewCount|viewLastModified|viewType|
//+--------------------+-------------+--------------------+--------------------+----------+--------------------+-----------+-------------+--------------------+--------------------+---------------+----------------+---------+--------------+--------------------+--------------------+----------+----------------+--------+--------------------+----------+------------------------+---------------+----------------+----------------+--------------------+------+--------+-------------+-------------+--------------------+-------+--------------------+---------------+---------+----------------+--------+
//|[[1439474950, tru...| 0|Environmental Hea...|[[, meta_data,, :...|1439381433|The Environmental...| table| 26159|[default, restora...|[[[public], false...| false| false|cjae-szjv| 1528204279|[[table, fatrow, ...|Air Quality Measu...| true| 0|12801487|[Tracking, 94g5-7...| official| false| 1439474950| 3957835| published|[[[true, [2171820...|[read]| | 1439402317| 94g5-7as2|[Tracking, 94g5-7...|3960642|[environmental ha...| 0| 3843| 1528203875| tabular|
//+--------------------+-------------+--------------------+--------------------+----------+--------------------+-----------+-------------+--------------------+--------------------+---------------+----------------+---------+--------------+--------------------+--------------------+----------+----------------+--------+--------------------+----------+------------------------+---------------+----------------+----------------+--------------------+------+--------+-------------+-------------+--------------------+-------+--------------------+---------------+---------+----------------+--------+
//to access the data array we need to explode it
df.selectExpr("explode(data)").show()
//+--------------------+
//| col|
//+--------------------+
//|[row-8eh8_xxkx-u3...|
//|[row-u2v5_78j5-px...|
//|[row-68zj_7qfn-sx...|
//|[row-8b4d~zt5j~da...|
//|[row-5gee.63td_z6...|
//|[row-tzyx.ssxh_pz...|
//|[row-3yj2_u42c_mr...|
//|[row-va7z.p2v8.7p...|
//|[row-r7kk_e3dm-z2...|
//|[row-bnrc~w34s-4a...|
//|[row-ezrk~m5dc_5n...|
//|[row-nyya.dvnz~c6...|
//|[row-dq3i_wt6d_c6...|
//|[row-u6rc-k3mf-cn...|
//|[row-t9c6-4d4b_r6...|
//|[row-vq6r~mxzj-e6...|
//|[row-vxqn-mrpc~5b...|
//|[row-3akn_5nzm~8v...|
//|[row-ugxn~bhax.a2...|
//|[row-ieav.mdz9-m8...|
//+--------------------+
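If you want each record of the array as a row of a proper table, the exploded array elements can then be pulled out by position; a rough sketch (the positions and the column names below are placeholders, the real ones come from meta.view.columns):
val records = df.selectExpr("explode(data) as rec")

records.select(
  records("rec")(0).as("row_id"),   // hypothetical position of the row id
  records("rec")(8).as("measure"),  // hypothetical position of a measure field
  records("rec")(9).as("value")     // hypothetical position of its value
).show()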
Load multiple json records:
//json array with two records
spark.read.json(Seq(("""
[{"id":1,"name":"a"},
{"id":2,"name":"b"}]
""")).toDS).show()
//the 2 json objects are loaded as 2 rows
//+---+----+
//| id|name|
//+---+----+
//| 1| a|
//| 2| b|
//+---+----+
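For completeness, the default (non-multiLine) reader expects JSON Lines, i.e. one complete JSON object per line, which also loads as one row per record:
//one json object per line (JSON Lines)
spark.read.json(Seq(
  """{"id":1,"name":"a"}""",
  """{"id":2,"name":"b"}"""
).toDS).show()
//+---+----+
//| id|name|
//+---+----+
//|  1|   a|
//|  2|   b|
//+---+----+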

Retrieve Spark MLlib StringIndexer column mapping

How do I get the mapping out of a trained Spark MLlib StringIndexerModel?
val stringIndexer = new StringIndexer()
.setInputCol("myCol")
.setOutputCol("myColIdx")
val stringIndexerModel = stringIndexer.fit(data)
val res = stringIndexerModel.transform(data)
The code above will add a myColIdx column to my DataFrame, mapping the values in myCol to an index based on the values' frequency, i.e. most frequent value -> 0, second most frequent -> 1, etc.
How do I retrieve that mapping from the model? If I serialize/deserialize the model, will the mapping be stable (i.e. am I guaranteed the same result after the transform)?
StringIndexerModel exposes the mapping through its labels attribute:
stringIndexerModel.labels: Array[String]
where the label at position i is the one mapped to index i. For example, for:
val data = Seq("foo", "bar", "foo", "bar", "foobar", "bar").toDF("myCol")
you'll get the following labels:
Array(bar, foo, foobar)
with bar indexed as 0.0, foo as 1.0, and foobar as 2.0. This is a property of the model and is preserved when the model is saved.
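If you want the mapping as an explicit label -> index map, you can zip the labels with their positions; a small sketch based on the labels attribute above:
val labelToIndex: Map[String, Double] =
  stringIndexerModel.labels.zipWithIndex.map { case (label, idx) => label -> idx.toDouble }.toMap
// Map(bar -> 0.0, foo -> 1.0, foobar -> 2.0)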
When used in a Pipeline, you can also use IndexToString, which uses column metadata to map indices back to labels, as in the sketch below.
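The indexToString instance used in the next line is not constructed in the original snippet; a minimal setup that would produce the output shown below (the output column name myColReversed is simply chosen to match that output) could be:
import org.apache.spark.ml.feature.IndexToString

// IndexToString reads the labels from the ML metadata StringIndexer attaches to myColIdx
val indexToString = new IndexToString()
  .setInputCol("myColIdx")
  .setOutputCol("myColReversed")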
indexToString.transform(stringIndexerModel.transform(data)).show
+------+--------+-------------+
| myCol|myColIdx|myColReversed|
+------+--------+-------------+
| foo| 1.0| foo|
| bar| 0.0| bar|
| foo| 1.0| foo|
| bar| 0.0| bar|
|foobar| 2.0| foobar|
| bar| 0.0| bar|
+------+--------+-------------+
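As for the serialization part of the question: saving and reloading the model preserves the labels, and therefore the mapping. A quick sketch (the path is arbitrary):
import org.apache.spark.ml.feature.StringIndexerModel

stringIndexerModel.write.overwrite().save("/tmp/stringIndexerModel")
val reloaded = StringIndexerModel.load("/tmp/stringIndexerModel")
reloaded.labels // same Array(bar, foo, foobar)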

Handling NULL values in Spark StringIndexer

I have a dataset with some categorical string columns, and I want to represent them as doubles. I used StringIndexer for this conversion and it works, but when I tried it on another dataset that has NULL values, it threw a java.lang.NullPointerException and did not work.
For better understanding here is my code:
for (col <- cols) {
  out_name = col ++ "_"
  var indexer = new StringIndexer().setInputCol(col).setOutputCol(out_name)
  var indexed = indexer.fit(df).transform(df)
  df = indexed.withColumn(col, indexed(out_name)).drop(out_name)
}
So how can I solve this NULL data problem with StringIndexer?
Or is there any better solution for converting string typed categorical data with NULL values to double?
Spark >= 2.2
Since Spark 2.2, NULL values can be handled with the standard handleInvalid Param:
import org.apache.spark.ml.feature.StringIndexer
val df = Seq((0, "foo"), (1, "bar"), (2, null)).toDF("id", "label")
val indexer = new StringIndexer().setInputCol("label")
By default (error) it will throw an exception:
indexer.fit(df).transform(df).show
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$9: (string) => double)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1066)
...
Caused by: org.apache.spark.SparkException: StringIndexer encountered NULL value. To handle or skip NULLS, try setting StringIndexer.handleInvalid.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$9.apply(StringIndexer.scala:251)
...
but configured to skip
indexer.setHandleInvalid("skip").fit(df).transform(df).show
+---+-----+---------------------------+
| id|label|strIdx_46a78166054c__output|
+---+-----+---------------------------+
| 0| a| 0.0|
| 1| b| 1.0|
+---+-----+---------------------------+
or to keep
indexer.setHandleInvalid("keep").fit(df).transform(df).show
+---+-----+---------------------------+
| id|label|strIdx_46a78166054c__output|
+---+-----+---------------------------+
| 0| a| 0.0|
| 1| b| 1.0|
| 3| null| 2.0|
+---+-----+---------------------------+
Spark < 2.2
As of now (Spark 1.6.1) this problem hasn't been resolved, but there is an open JIRA (SPARK-11569). Unfortunately, it is not easy to pick an acceptable behavior: SQL NULL represents a missing/unknown value, so any indexing is somewhat meaningless.
Probably the best thing you can do is to use NA actions and either drop:
df.na.drop("column_to_be_indexed" :: Nil)
or fill:
df.na.fill("__HEREBE_DRAGONS__", "column_to_be_indexed" :: Nil)
before you use the indexer, as in the sketch below.
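A minimal end-to-end sketch of the fill-then-index approach, reusing the placeholder above and the df with a label column from the Spark >= 2.2 example (labelIdx is just a chosen output name):
import org.apache.spark.ml.feature.StringIndexer

val filled = df.na.fill("__HEREBE_DRAGONS__", Seq("label"))

new StringIndexer()
  .setInputCol("label")
  .setOutputCol("labelIdx")
  .fit(filled)
  .transform(filled)
  .show
// the placeholder gets an index of its own, which you can treat as the "missing" bucket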

spark (Scala) dataframe filtering (FIR)

Let's say I have a DataFrame (stored in a Scala val as df) which contains the data from a CSV:
time,temperature
0,65
1,67
2,62
3,59
I have no problem reading this from the file as a Spark DataFrame in Scala.
I would like to add a filtered column (by filter I mean a signal-processing moving-average filter), say (T[n]+T[n-1])/2.0:
time,temperature,temperatureAvg
0,65,(65+0)/2.0
1,67,(67+65)/2.0
2,62,(62+67)/2.0
3,59,(59+62)/2.0
(Actually, for the first row I want 32.5 instead of (65+0)/2.0; I wrote it that way to clarify the expected 2-time-step filtering output.)
So how can I achieve this? I am not familiar with Spark DataFrame operations that combine rows iteratively along a column...
Spark 3.1+
Replace
$"time".cast("timestamp")
with
import org.apache.spark.sql.functions.timestamp_seconds
timestamp_seconds($"time")
Spark 2.0+
In Spark 2.0 and later it is possible to use the window function as an input for groupBy. It allows you to specify windowDuration, slideDuration and startTime (offset). It works only with a TimestampType column, but it is not that hard to work around that. In your case it will require some additional steps to correct for the boundaries, but the general solution can be expressed as shown below:
import org.apache.spark.sql.functions.{window, avg}
df
.withColumn("ts", $"time".cast("timestamp"))
.groupBy(window($"ts", windowDuration="2 seconds", slideDuration="1 second"))
.avg("temperature")
Spark < 2.0
If there is a natural way to partition your data you can use window functions as follows:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.mean
val w = Window.partitionBy($"id").orderBy($"time").rowsBetween(-1, 0)
val df = sc.parallelize(Seq(
(1L, 0, 65), (1L, 1, 67), (1L, 2, 62), (1L, 3, 59)
)).toDF("id", "time", "temperature")
df.select($"*", mean($"temperature").over(w).alias("temperatureAvg")).show
// +---+----+-----------+--------------+
// | id|time|temperature|temperatureAvg|
// +---+----+-----------+--------------+
// | 1| 0| 65| 65.0|
// | 1| 1| 67| 66.0|
// | 1| 2| 62| 64.5|
// | 1| 3| 59| 60.5|
// +---+----+-----------+--------------+
You can create windows with arbitrary weights using lead / lag functions:
lit(0.6) * $"temperature" +
lit(0.3) * lag($"temperature", 1) +
lit(0.2) * lag($"temperature", 2)
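Note that for this expression to evaluate, the lag calls need an ordered window without an explicit row frame; a sketch on the sample data above (the weights are the illustrative ones from the snippet, and coalesce substitutes 0 where the lagged values are missing at the start of each partition):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{coalesce, lag, lit}

val wLag = Window.partitionBy($"id").orderBy($"time")

df.select(
  $"*",
  (lit(0.6) * $"temperature" +
    lit(0.3) * coalesce(lag($"temperature", 1).over(wLag), lit(0)) +
    lit(0.2) * coalesce(lag($"temperature", 2).over(wLag), lit(0))
  ).alias("temperatureWeighted")
).show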
It is still possible without a partitionBy clause, but it will be extremely inefficient. If that is your case, you won't be able to use DataFrames. Instead, you can use sliding over an RDD (see for example Operate on neighbor elements in RDD in Spark). There is also the spark-timeseries package, which you may find useful.