I'm looking for a way to select a large number of columns (2000+) as features from a DataFrame. I don't want to write the names one by one.
I'm doing classification and I have around 2000 features.
data is a DataFrame with around 2000 columns.
First, I get all of the column names of my DataFrame and drop the first 9 columns because I don't need them.
My idea was to feed all of these column names to the VectorAssembler. The result should be something like [value of the 1st feature, value of the 2nd feature, value of the 3rd feature, ...] for the first row, and so on for the whole DataFrame.
But I get this error:
java.lang.IllegalArgumentException: Field "features" does not exist.
EDIT: If something is unclear, please let me know so that I can fix it.
I removed some transformers (StringIndexer, VectorIndexer, IndexToString) because they are not the point of my question.
val array = data.columns.drop(9) // keep every column after the first 9 as a feature name
val assembler = new VectorAssembler()
.setInputCols(array)
.setOutputCol("features")
val Array(trainingData, testData) = data.randomSplit(Array(0.8, 0.2))
val rf = new RandomForestClassifier()
.setLabelCol("indexedLabel")
.setFeaturesCol("features")
.setNumTrees(50)
val pipeline = new Pipeline()
.setStages(Array(assembler, rf))
val model = pipeline.fit(trainingData)
EDIT 2: I fixed my problem. I removed the VectorIndexer and used array directly in the VectorAssembler, and it worked perfectly.
Well, at least I get a result.
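For reference, here is a minimal sketch of what the working pipeline could look like with the label indexing put back in. The raw label column name label is an assumption, since that transformer was omitted from the snippet above:
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
// Assumptions: the raw label column is called "label" and sits among the 9 dropped
// columns, so it does not leak into the feature vector.
val labelIndexer = new StringIndexer()
  .setInputCol("label")
  .setOutputCol("indexedLabel")
val assembler = new VectorAssembler()
  .setInputCols(data.columns.drop(9)) // every remaining column becomes a feature
  .setOutputCol("features")
val rf = new RandomForestClassifier()
  .setLabelCol("indexedLabel")
  .setFeaturesCol("features")
  .setNumTrees(50)
val Array(trainingData, testData) = data.randomSplit(Array(0.8, 0.2))
val model = new Pipeline()
  .setStages(Array(labelIndexer, assembler, rf))
  .fit(trainingData)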
Related
I have a parquet file containing id and features columns, and I want to apply the PCA algorithm.
val dataset = spark.read.parquet("/usr/local/spark/dataset/data/user")
val features = new VectorAssembler()
.setInputCols(Array("id", "features" ))
.setOutputCol("features")
val pca = new PCA()
.setInputCol("features")
.setK(50)
.fit(dataset)
.setOutputCol("pcaFeatures")
val result = pca.transform(dataset).select("pcaFeatures")
pca.save("/usr/local/spark/dataset/out")
But I get this exception:
java.lang.IllegalArgumentException: requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT#3bfc3ba7 but was actually ArrayType(DoubleType,true).
Spark's PCA transformer needs a column created by a VectorAssembler. Here you create one but never use it. Also, the VectorAssembler only takes numbers as input. I don't know what the type of features is, but if it's an array, it won't work; transform it into numeric columns first. Finally, it is a bad idea to give the assembled column the same name as an original column: the VectorAssembler does not remove its input columns, so you would end up with two features columns.
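If features really is an array<double> column, one alternative to splitting it into numeric columns is to wrap the array into an ML Vector with a small UDF. This is only a sketch, not part of the original answer, and featuresVec is a hypothetical column name:
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.functions.{col, udf}
// Turn array<double> into the ml Vector type that PCA expects.
val toVector = udf((xs: Seq[Double]) => Vectors.dense(xs.toArray))
val withVector = dataset.withColumn("featuresVec", toVector(col("features")))
// PCA would then read from this column via .setInputCol("featuresVec").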
Here is a working example of PCA computation in Spark:
import org.apache.spark.ml.feature._
import spark.implicits._ // for the 'colName symbol syntax used below
val df = spark.range(10)
.select('id, ('id * 'id) as "id2", ('id * 'id * 'id) as "id3")
val assembler = new VectorAssembler()
.setInputCols(Array("id", "id2", "id3")).setOutputCol("features")
val assembled_df = assembler.transform(df)
val pca = new PCA()
.setInputCol("features").setOutputCol("pcaFeatures").setK(2)
.fit(assembled_df)
val result = pca.transform(assembled_df)
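To inspect the result, one can look at the explained variance of the fitted model and at the transformed rows (a small usage sketch, not part of the original answer):
// Fraction of the total variance captured by each of the k = 2 principal components.
println(pca.explainedVariance)
result.select("pcaFeatures").show(false)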
I want to run PCA with KNN in Spark. I have a file that contains id and features columns.
> KNN.printSchema
root
|-- id: int (nullable = true)
|-- features: double (nullable = true)
code:
val dataset = spark.read.parquet("/usr/local/spark/dataset/data/user")
val features = new VectorAssembler()
.setInputCols(Array("id", "features" ))
.setOutputCol("features")
val Array(train, test) = dataset
.randomSplit(Array(0.7, 0.3), seed = 1234L)
.map(_.cache())
//create PCA matrix to reduce feature dimensions
val pca = new PCA()
.setInputCol("features")
.setK(5)
.setOutputCol("pcaFeatures")
val knn = new KNNClassifier()
.setTopTreeSize(dataset.count().toInt / 5)
.setFeaturesCol("pcaFeatures")
.setPredictionCol("predicted")
.setK(1)
val pipeline = new Pipeline()
.setStages(Array(pca, knn))
.fit(train)
The above code block throws this exception:
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT#3bfc3ba7 but was actually ArrayType(DoubleType,true).
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.ml.util.SchemaUtils$.checkColumnType(SchemaUtils.scala:42)
at org.apache.spark.ml.feature.PCAParams$class.validateAndTransformSchema(PCA.scala:54)
at org.apache.spark.ml.feature.PCAModel.validateAndTransformSchema(PCA.scala:125)
at org.apache.spark.ml.feature.PCAModel.transformSchema(PCA.scala:162)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:180)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:180)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:70)
at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:132)
at KNN$.main(KNN.scala:63)
at KNN.main(KNN.scala)
Basically, you are trying to split the dataset into training and test sets, assemble features, run a PCA and then a classifier to predict something. The overall logic is correct, but there are several problems with your code.
A PCA in Spark needs assembled features. You created an assembler, but you do not use it in the code.
You gave the name features to the output of the assembler, and you already have a column named that way. Since you do not use the assembler, you don't see an error, but if you did you would get this exception:
java.lang.IllegalArgumentException: Output column features already exists.
When running a classification, you need to specify at the very least the input features with setFeaturesCol and the label you are trying to learn with setLabelCol. You did not specify the label, and by default the label column is "label". You don't have any column named that way, hence the exception Spark throws at you.
Here is a working example of what you are trying to do.
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{PCA, VectorAssembler}
import org.apache.spark.sql.functions.rand
import spark.implicits._
// a funky dataset with 3 features (`x1`, `x2`, `x3`) and a label `y`,
// the class we are trying to predict.
val dataset = spark.range(10)
.select('id as "x1", rand() as "x2", ('id * 'id) as "x3")
.withColumn("y", (('x2 * 3 + 'x1) cast "int").mod(2))
.cache()
// splitting the dataset, that part was ok ;-)
val Array(train, test) = dataset
.randomSplit(Array(0.7, 0.3), seed = 1234L)
.map(_.cache())
// An assembler, the output name cannot be one of the inputs.
val assembler = new VectorAssembler()
.setInputCols(Array("x1", "x2", "x3"))
.setOutputCol("features")
// A pca, that part was ok as well
val pca = new PCA()
.setInputCol("features")
.setK(2)
.setOutputCol("pcaFeatures")
// A LogisticRegression classifier. (KNN is not part of spark's standard API, but
// requires the same minimum information: features and label)
val classifier = new LogisticRegression()
.setFeaturesCol("pcaFeatures")
.setLabelCol("y")
// And the full pipeline
val pipeline = new Pipeline().setStages(Array(assembler, pca, classifier))
val model = pipeline.fit(train)
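The fitted pipeline can then be used to score the held-out split, for example (a short usage sketch):
// "prediction" is the default output column of the LogisticRegression stage.
val predictions = model.transform(test)
predictions.select("y", "prediction").show(false)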
I have written the following code to feed data to a machine learning algorithm in Spark 2.3. The code below runs fine. I need to enhance it so that it can convert not just 3 columns but any number of columns uploaded via the CSV file. For instance, if I had loaded 5 columns, how can I put them into the Vectors.dense call below automatically, or generate the same end result in some other way? Does anyone know how this can be done?
val data2 = spark.read.format("csv")
.option("header", "true")
.load("/data/c7.csv")
val goodBadRecords = data2.map(
row =>{
val n0 = row(0).toString.toLowerCase().toDouble
val n1 = row(1).toString.toLowerCase().toDouble
val n2 = row(2).toString.toLowerCase().toDouble
val n3 = row(3).toString.toLowerCase().toDouble
(n0, Vectors.dense(n1,n2,n3))
}
).toDF("label", "features")
Thanks
Regards,
Adeel
A VectorAssembler can do the job:
VectorAssembler is a transformer that combines a given list of columns into a single vector column. It is useful for combining raw features [...] into a single feature vector
Based on your code, the solution would look like:
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.col

val data2 = spark.read.format("csv")
.option("header","true")
.option("inferSchema", "true") //1
.load("/data/c7.csv")
val fields = data2.schema.fieldNames
val assembler = new VectorAssembler()
.setInputCols(fields.tail) //2
.setOutputCol("features") //3
val goodBadRecords = assembler.transform(data2)
.withColumn("label", col(fields(0))) //4
.drop(fields:_*) //5
Remarks:
A schema is necessary for the input data, as the VectorAssembler only accepts the following input column types: all numeric types, boolean type, and vector type (same link). You seem to have a CSV with doubles, so inferring the schema should work. But of course, any other method to transform the string data to doubles is also fine.
Use all but the first column as input for the VectorAssembler.
Name the result column of the VectorAssembler features.
Create a new column called label as a copy of the first column.
Drop all original columns. This last step is optional, as the learning algorithm usually only looks at the label and features columns and ignores all other columns.
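The assembled DataFrame can then be fed to any Spark ML estimator. As a sketch (the LogisticRegression here is only a placeholder for whatever algorithm you actually use):
import org.apache.spark.ml.classification.LogisticRegression
// Train on the label/features columns produced by the assembler above.
val lr = new LogisticRegression()
  .setLabelCol("label")
  .setFeaturesCol("features")
val lrModel = lr.fit(goodBadRecords)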
Here's some code that uses Spark ML to find clusters:
import org.apache.spark.ml.clustering.{KMeans, KMeansModel}
import org.apache.spark.ml.feature.VectorAssembler

val dfRaw = spark.read.option("header", "true")
.csv("src/main/resources/input.csv")
val K = 5
val assembler = new VectorAssembler().setInputCols(Array("id", "lat", "lon")).setOutputCol("features")
val df = assembler.transform(dfRaw).select("features")
df.show(false)
val kmeans = new KMeans().setK(K).setSeed(1L)
val model : KMeansModel = kmeans.fit(df)
println("cluster centers")
model.clusterCenters.foreach(println)
println("----- predictions")
val predictions = model.transform(df)
predictions.collect().foreach(println)
The input file is made up of the following 4 columns: id, name, lat, lon.
I'm sure I'm doing some stupid things in this code, but it kind of works (I think). Any advice on improving it is appreciated. I'm wondering if the id column is affecting how the data gets clustered.
I'm also struggling to make sense of the results. What's a good way to join predictions (a DataFrame of Vector) with the name column of dfRaw?
Also, I'd like to visualize the results on some kind of free grid or geomap; any recommendations?
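One way to keep the predictions next to the name column is to transform a DataFrame that still carries the original columns, instead of the features-only projection. This is a sketch that reuses the assembler and model from the code above and assumes id, lat and lon have been cast to numeric types:
// KMeansModel.transform keeps all input columns, so the cluster id ("prediction")
// ends up right next to id, name, lat and lon.
val assembled = assembler.transform(dfRaw)
val predictionsWithName = model.transform(assembled)
  .select("id", "name", "lat", "lon", "prediction")
predictionsWithName.show(false)
Leaving id out of setInputCols is also worth trying: an arbitrary identifier has no geometric meaning, so it only distorts the distances the clustering is based on.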
I want to use the BinaryClassificationEvaluator in Spark ML to evaluate my model after my Pipeline. I use this code:
val gbt = new GBTClassifier()
.setLabelCol("Label_Index")
.setFeaturesCol("features")
.setMaxIter(10)
.setMaxDepth(7)
.setSubsamplingRate(0.1)
.setMinInstancesPerNode(15)
// Convert indexed labels back to original labels.
val labelConverter = new IndexToString()
.setInputCol("prediction")
.setOutputCol("predictedLabel")
.setLabels(indexer_2.labels)
val evaluator_auc = new BinaryClassificationEvaluator()
.setLabelCol("Label_Index")
.setRawPredictionCol("")
.setMetricName("areaUnderROC")
I don't really know which column I need to pass to setRawPredictionCol(). I think I need to give it the result of my prediction, the column "prediction".
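For reference, here is a minimal sketch of how the evaluator is usually wired up, assuming a Spark version (2.2+) where GBTClassifier outputs a rawPrediction column by default; predictions stands for the DataFrame returned by the fitted pipeline's transform:
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
// "rawPrediction" is the classifier's default raw output column; a plain 0/1
// "prediction" column would throw away the ranking information the AUC needs.
val evaluator_auc = new BinaryClassificationEvaluator()
  .setLabelCol("Label_Index")
  .setRawPredictionCol("rawPrediction")
  .setMetricName("areaUnderROC")
// val auc = evaluator_auc.evaluate(predictions)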