Pyspark: 'HashingTF' object has no attribute 'setInputCol'

I'm using PySpark 3.2.1 and I'm trying to create a HashingTF object and set its input and output columns.
Neither
hashingTF = HashingTF(numFeatures=20).setOutputCol('output')
nor
hashingTF = HashingTF(numFeatures=20, outputCol='output')
works. The same problem occurs when I try to set the input column.
According to https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.feature.HashingTF.html, both of the aforementioned approaches are expected to work.

Related

Adding Column in Spark DataFrame

Hi, I am trying to add one column to my Spark DataFrame, calculating its value based on an existing DataFrame column. I am writing the code below.
val df1=spark.sql("select id,dt1,salary frm dbdt1.tabledt1")
val df2=df1.withColumn("new_date",WHEN (month(to_date(from_unixtime(unix_timestamp(dt1), 'dd-MM- yyyy')))
IN (01,02,03)) THEN
CONCAT(CONCAT(year(to_date(from_unixtime(unix_timestamp(dt1), 'dd-MM- yyyy')))-1,'-'),
substr(year(to_date(from_unixtime(unix_timestamp(dt1), 'dd-MM-yyyy'))),3,4))
.otherwise(CONCAT(CONCAT(year(to_date(from_unixtime(unix_timestamp(dt1), 'dd-MM- yyyy'))),'-')
,SUBSTR(year(to_date(from_unixtime(unix_timestamp(dt1), 'dd-MM-yyyy')))+1,3,4))))
But it always shows the error: unclosed character literal. Can someone please guide me on how I should add this new column or modify the existing code?
There is incorrect syntax in many places. First, I suggest you look at a few Spark SQL examples online and at the org.apache.spark.sql.functions API documentation, because your use of WHEN, CONCAT, and IN is incorrect in each case.
Scala strings are enclosed in double quotes; you appear to be using SQL string syntax.
'dd-MM-yyyy' should be "dd-MM-yyyy"
To reference a column dt1 on DataFrame df1 you can use one of the following:
df1("dt1")
col("dt1") // if you import org.apache.spark.sql.functions.col
$"dt1" // if you import spark.implicits._ locally
For example:
from_unixtime(unix_timestamp(col("dt1")), "dd-MM-yyyy")
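Putting those pieces together, a corrected version of the whole expression could look roughly like the sketch below. This is only a sketch: it assumes dt1 is a string that to_date can parse with the "dd-MM-yyyy" pattern (adjust the format to your data), and it takes the last two digits of the year with substring.
import org.apache.spark.sql.functions._

val df1 = spark.sql("select id, dt1, salary from dbdt1.tabledt1")

// Parse dt1 once so the expression below stays readable.
val dt = to_date(col("dt1"), "dd-MM-yyyy")

val df2 = df1.withColumn("new_date",
  when(month(dt).isin(1, 2, 3),
    // Jan-Mar: previous year + "-" + last two digits of the current year, e.g. "2019-20"
    concat((year(dt) - 1).cast("string"), lit("-"), substring(year(dt).cast("string"), 3, 2)))
  .otherwise(
    // Apr-Dec: current year + "-" + last two digits of the next year, e.g. "2020-21"
    concat(year(dt).cast("string"), lit("-"), substring((year(dt) + 1).cast("string"), 3, 2))))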

Write Header only CSV record from Spark Scala DataFrame

My requirement is to write only a header CSV record using a Spark Scala DataFrame. Can anyone help me with this?
val OHead1 = "/xxxxx/xxxx/xxxx/xxx/OHead1/"
val sc = sparkFile.sparkContext
val outDF = csvDF.select("col_01", "col_02", "col_03").schema
sc.parallelize(Seq(outDF.fieldNames.mkString("\t"))).coalesce(1).saveAsTextFile(s"$OHead1")
The code above works and creates the header in the CSV with a tab delimiter. Since I am using a SparkSession, I create the SparkContext in the second line. outDF holds the schema of csvDF, my DataFrame created before these statements.
Two things are outstanding; can one of you help me?
1. The working code above does not overwrite the files, so every time I need to delete them manually. I could not find an overwrite option; can you help me?
2. Since I am doing a select and taking the schema, will it be considered an action and start another lineage for this statement? If so, it would degrade performance.
If you need to output only the header, you can use this code:
df.schema.fieldNames.reduce(_ + "," + _)
It will create a CSV line with the names of the columns.
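If you also want the overwrite behaviour asked about in point 1, one option is to write that header line through the DataFrame writer, which supports save modes, instead of saveAsTextFile. A sketch, assuming a SparkSession named spark and the tab delimiter from the original code:
import spark.implicits._

val header = csvDF.select("col_01", "col_02", "col_03").schema.fieldNames.mkString("\t")

// Wrap the single header line in a one-row DataFrame and overwrite the target path.
Seq(header).toDF("value")
  .coalesce(1)
  .write
  .mode("overwrite")
  .text(OHead1)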
I tested it, and the solution below did not affect performance.
val OHead1 = "/xxxxx/xxxx/xxxx/xxx/OHead1/"
val sc = sparkFile.sparkContext
val outDF = csvDF.select("col_01", "col_02", "col_03").schema
sc.parallelize(Seq(outDF.fieldNames.mkString("\t"))).coalesce(1).saveAsTextFile(s"$OHead1")
I found a solution to handle this situation: define the columns in the configuration file and write those columns to a file. Here is the snippet.
val Header = prop.getProperty("OUT_HEADER_COLUMNS").replaceAll("\"","").replaceAll(",","\t")
scala.tools.nsc.io.File(s"$HeadOPath").writeAll(s"$Header")
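As a side note, scala.tools.nsc.io.File comes from the Scala compiler artifact. If you would rather not depend on it, a plain JDK sketch with the same assumed behaviour (write the single header line to the path, overwriting any existing file) would be:
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

// Creates the file if missing and truncates it if it already exists.
Files.write(Paths.get(HeadOPath), Header.getBytes(StandardCharsets.UTF_8))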

spark scala tab file read and replace empty

I have a set of tab-delimited files which I have to read and save to the database (Cassandra). I can load all the tables that have data in all the columns, but some tables have empty values in some of the columns, and those are not getting inserted.
I tried the following:
sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("delimiter", "/t").option("nullValue"," ").load(path)
and also
sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("delimiter", "/t").option("nullValue"," ").option(""," ").load(path)
Neither option loaded the data. Any inputs?
I think I figured it out:
var df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").option("delimiter", "\t").option("treatEmptyValuesAsNulls", "true").option("nullValue","").load(path)
This turns every empty value into null, and then:
var df1 = df.na.fill(" ",df.columns)
I had to create another DataFrame for the fill to take effect. I still need to work out how to fill dynamically based on the dtypes.
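For the dtype-based part, a minimal sketch of one possible approach (my assumption, not something from this thread) is to fill the blank only in string-typed columns and leave the numeric ones alone:
// df.dtypes returns (columnName, typeName) pairs, e.g. ("col_01", "StringType").
val stringCols = df.dtypes.collect { case (name, "StringType") => name }

// Fill " " only in the string columns.
val dfFilled = df.na.fill(" ", stringCols)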

how to define features column in spark ml

I am trying to run the Spark logistic regression function (ml, not mllib). I have a DataFrame which looks like this (just the first row shown):
+-----+--------+
|label|features|
+-----+--------+
| 0.0| [60.0]|
(Right now just trying to keep it simple with only one dimension in the feature, but will expand later on.)
I run the following code - taken from the Spark ML documentation
import org.apache.spark.ml.classification.LogisticRegression
val lr = new LogisticRegression()
.setMaxIter(10)
.setRegParam(0.3)
.setElasticNetParam(0.8)
val lrModel = lr.fit(df)
This gives me the error -
org.apache.spark.SparkException: Values to assemble cannot be null.
I'm not sure how to fix this error. I looked at sample_libsvm_data.txt which is in the spark github repo and used in some of the examples in the spark ml documentation. That dataframe looks like
+-----+--------------------+
|label| features|
+-----+--------------------+
| 0.0|(692,[127,128,129...|
| 1.0|(692,[158,159,160...|
| 1.0|(692,[124,125,126...|
Based on this example, my data looks like it should be in the right format, with one issue. Is 692 the number of features? Seems rather dumb if so - spark should just be able to look at the length of the feature vector to see how many features there are. If I do need to add the number of features, how would I do that? (Pretty new to Scala/Java)
Cheers
This error is thrown by VectorAssembler when any of the features is null. Please verify that your rows don't contain null values. If there are null values, you must convert them into a default numeric feature before assembling the vectors.
Regarding the format of sample_libsvm_data.txt: it is stored in a sparse array/matrix form, where data is represented as
0 128:51 129:159 130:253
(where 0 is the label and each subsequent column is in index:numeric_feature format). The 692 you see in the DataFrame output is the size of that sparse vector, i.e. the total number of features.
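For illustration, here is roughly what that row corresponds to when built by hand with the mllib Vectors helper (a sketch; the indices and values are just the first few from the row above, shifted to 0-based indexing as Spark displays them):
import org.apache.spark.mllib.linalg.Vectors

// size = 692 total features; only the listed indices hold non-zero values.
val sparseRow = Vectors.sparse(692, Array(127, 128, 129), Array(51.0, 159.0, 253.0))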
You can form your single-feature DataFrame in the following way using the Vectors class (I ran it on the 1.6.1 shell):
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.ml.classification.LogisticRegression
val training1 = sqlContext.createDataFrame(Seq(
(1.0, Vectors.dense(3.0)),
(0.0, Vectors.dense(3.0)))
).toDF("label", "features")
val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.3).setElasticNetParam(0.8)
val model1 = lr.fit(training1)
For more, you can check the examples at https://spark.apache.org/docs/1.6.1/ml-guide.html#dataframe (refer to the Code examples section).

Spark: RDD.saveAsTextFile when using a pair of (K,Collection[V])

I have a dataset of employees and their leave records. Every record (of type EmployeeRecord) contains an EmpID (of type String) and other fields. I read the records from a file and then transform them into PairRDDFunctions:
val empRecords = sc.textFile(args(0))
....
val empsGroupedByEmpID = this.groupRecordsByEmpID(empRecords)
At this point, 'empsGroupedByEmpID' is of type RDD[(String, Iterable[EmployeeRecord])]. I transform this into PairRDDFunctions:
val empsAsPairRDD = new PairRDDFunctions[String,Iterable[EmployeeRecord]](empsGroupedByEmpID)
Then I process the records as per the logic of the application. Finally, I get an RDD of type RDD[Iterable[EmployeeRecord]]:
val finalRecords: RDD[Iterable[EmployeeRecord]] = <result of a few computations and transformation>
When I try to write the contents of this RDD to a text file using the available API:
finalRecords.saveAsTextFile("./path/to/save")
I find that in the file every record begins with ArrayBuffer(...). What I need is a file with one EmployeeRecord per line. Is that not possible? Am I missing something?
I have spotted the missing API. It is, well... flatMap! :-)
By using flatMap with identity, I can get rid of the Iterable and 'unpack' the contents, like so:
finalRecords.flatMap(identity).saveAsTextFile("./path/to/file")
That solves the problem I have been having.
I also found this post suggesting the same thing. I wish I had seen it a bit earlier.
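As a follow-up, if the default toString of EmployeeRecord isn't the line format you want, you can map each record to a string before saving. A sketch only, assuming hypothetical field names empId and leaveDate on EmployeeRecord:
finalRecords
  .flatMap(identity)                       // RDD[Iterable[EmployeeRecord]] -> RDD[EmployeeRecord]
  .map(r => s"${r.empId}\t${r.leaveDate}") // hypothetical fields; substitute your own
  .saveAsTextFile("./path/to/file")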