I created a dataset with the schema below:
org.apache.spark.sql.Dataset[Records] = [value: string, RowNo: int]
The value field here holds fixed-width positional data, which I would like to split into individual columns, adding RowNo as the last column, using a UDF.
def ReadFixWidthFileWithRDD(SrcFileType: String, rdd: org.apache.spark.rdd.RDD[(String, String)], inputFileLength: Int = 6): DataFrame = {
  val postapendSchemaRowNo = StructType(Array(StructField("RowNo", StringType, true)))
  val inputLength = List(inputFileLength)
  val FileInfoList = FixWidth_Dictionary.get(SrcFileType).toList
  val fileSchema = FileInfoList(0)._1
  val fileColumnSize = FileInfoList(0)._2
  val fileSchemaWithFileName = StructType(fileSchema ++ postapendSchemaRowNo)
  val fileColumnSizeWithFileNameLength = fileColumnSize ::: inputLength
  val data = rdd
  val retDF = spark.createDataFrame(
    data.map { x => lsplit(fileColumnSizeWithFileNameLength, x._1 + x._2) },
    fileSchemaWithFileName)
  retDF
}
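For reference, the lsplit helper used above is not defined in the snippet. A minimal sketch of such a fixed-width splitter, assuming it cuts the concatenated string into fields according to the given widths and returns a Row (the name and behavior are assumptions, not the original code):

import org.apache.spark.sql.Row

// Hypothetical sketch: split a line into fixed-width fields and return them as a Row.
def lsplit(widths: List[Int], line: String): Row = {
  val (fields, _) = widths.foldLeft((Seq.empty[String], line)) {
    case ((acc, rest), width) =>
      val (field, remaining) = rest.splitAt(width)
      (acc :+ field, remaining)
  }
  Row.fromSeq(fields)
}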
Now, in the function above, I want to use a Dataset instead of an RDD, because my RowNo is not displaying values beyond 99999.
Can someone suggest an alternative?
I found the solution.
I created a DataFrame containing a hash key and an associated sequence number.
The same hash key is also attached to the DataFrame holding the data.
I joined the two after splitting the fixed-length positions.
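A rough sketch of that idea (rawDs, the column names, and the sha2-based key are assumptions, not my original code): compute a hash key per line, attach a long sequence number to it in one DataFrame, split the fixed-width value into columns in another, then join the two on the hash key.

import org.apache.spark.sql.functions.{col, row_number, sha2}
import org.apache.spark.sql.expressions.Window

// rawDs is assumed to be the original Dataset with the fixed-width "value" column.
val withKey = rawDs.withColumn("hashKey", sha2(col("value"), 256))

// DataFrame 1: hash key plus a sequence number (LongType, so not capped at 99999).
val keyedRowNo = withKey
  .withColumn("RowNo", row_number().over(Window.orderBy(col("hashKey"))).cast("long"))
  .select("hashKey", "RowNo")

// DataFrame 2: hash key plus the fixed-width value, to be split into individual columns.
val keyedColumns = withKey.select(col("hashKey"), col("value")) // split "value" here

// Join the two on the hash key.
val result = keyedColumns.join(keyedRowNo, Seq("hashKey"))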
Let's consider:
val columnNames: Seq[String] = Seq[String]("col_1") // column present in DataFrame df
df.join(usingColumns = columnNames, right = ds) // ds is some dataset that has exactly one column.
// The problem is that I don't know the name of this column. I only know that
// df.col("col_1") and ds.col(???) have the same type.
Is it possible to do this join?
You can use something like:
package utils

import org.apache.spark.sql.DataFrame

object Extensions {
  implicit class DataFrameExtensions(df: DataFrame) {
    // Select columns by their positional indices
    def selecti(indices: Int*) = {
      val cols = df.columns
      df.select(indices.map(i => df.col(cols(i))): _*)
    }
  }
}
Then use it to select columns by index:
import utils.Extensions._
df.selecti(1,2,3)
Assuming that the "col_1" from the first dataframe will always join to the single column in the ds dataframe you can just rename the column in the ds data frame with a single column like below. Then your join using names only need reference "col_1"
// set the name of the column in ds to col_1
val ds2 = ds.toDF("col_1")
You can change the column name of the dataset to col_1:
val result = df.join(ds.withColumnRenamed(ds.columns(0), "col_1"), Seq("col_1"), "right")
As far as I know, MLlib supports only integers.
So I want to convert the strings to integers in Scala.
For example, I have many reviewerID, productID pairs in a text file:
reviewerID productID
03905X0912 ZXASQWZXAS
0325935ODD PDLFMBKGMS
...
StringIndexer is the solution. It fits into the ML pipeline as an estimator and transformer. Essentially, once you set the input column, it computes the frequency of each category and numbers them starting from 0. You can add IndexToString at the end of the pipeline to map back to the original strings if required.
You can look at ML Documentation for "Estimating, transforming and selecting features" for further details.
In your case it would look like this:
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer().setInputCol("productID").setOutputCol("productIndex")
val indexed = indexer.fit(df).transform(df)
indexed.show()
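If you need the original strings back afterwards, a small sketch with IndexToString (reusing the columns from the example above; the output column name is just an assumption):

import org.apache.spark.ml.feature.IndexToString

// Map the numeric index back to the original string labels, using the metadata
// that StringIndexer attached to "productIndex".
val converter = new IndexToString()
  .setInputCol("productIndex")
  .setOutputCol("productIDOriginal")

converter.transform(indexed).show()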
You can add a new column with a unique id for each reviewerID, productID pair. You can add such a column in the following ways.
By using monotonically_increasing_id:
import org.apache.spark.sql.functions.monotonically_increasing_id
import spark.implicits._

val data = spark.sparkContext.parallelize(Seq(
  ("123xyx", "ab"),
  ("123xyz", "cd")
)).toDF("reviewerID", "productID")

data.withColumn("uniqueRevID", monotonically_increasing_id()).show()
By using zipWithUniqueId:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructField, StructType}

val rows = data.rdd.zipWithUniqueId.map {
  case (r: Row, id: Long) => Row.fromSeq(id +: r.toSeq)
}
val finalDf = spark.createDataFrame(
  rows,
  StructType(StructField("uniqueRevID", LongType, false) +: data.schema.fields))
finalDf.show()
You can also do this by using row_number() in SQL syntax:
import spark.implicits._
val data = spark.sparkContext.parallelize(Seq(
  ("123xyx", "ab"),
  ("123xyz", "cd")
)).toDF("reviewerID", "productID")
data.createOrReplaceTempView("review")

val tmpTable1 = spark.sql(
  "select row_number() over (order by reviewerID) as id, reviewerID, productID from review")
Hope this helps!
How do I select the columns of a DataFrame that sit at certain indexes in Scala?
For example, if a DataFrame has 100 columns and I want to extract only columns 10, 12, 13, 14 and 15, how can I do that?
The following selects all columns from DataFrame df whose names appear in the Array colNames:
df = df.select(colNames.head,colNames.tail: _*)
If there is a similar array, colNos, containing
colNos = Array(10,20,25,45)
how do I transform the above df.select to fetch only the columns at those specific indexes?
You can map over columns:
import org.apache.spark.sql.functions.col
df.select(colNos map df.columns map col: _*)
or:
df.select(colNos map (df.columns andThen col): _*)
or:
df.select(colNos map (col _ compose df.columns): _*)
All the methods shown above are equivalent and don't impose a performance penalty. The following mapping:
colNos map df.columns
is just a local Array access (constant-time access for each index), and choosing between the String- or Column-based variant of select doesn't affect the execution plan:
val df = Seq((1, 2, 3 ,4, 5, 6)).toDF
val colNos = Seq(0, 3, 5)
df.select(colNos map df.columns map col: _*).explain
== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]
df.select("_1", "_4", "_6").explain
== Physical Plan ==
LocalTableScan [_1#46, _4#49, _6#51]
@user6910411's answer above works like a charm, and the number of tasks/logical plan is similar to my approach below. BUT my approach is a bit faster.
So,
I would suggest you go with column names rather than column numbers. Column names are much safer and much lighter than using numbers. You can use the following solution:
val colNames = Seq("col1", "col2" ...... "col99", "col100")
val selectColNames = Seq("col1", "col3", .... selected column names ... )
val selectCols = selectColNames.map(name => df.col(name))
df = df.select(selectCols:_*)
If you are hesitant to write out all 100 column names, there is a shortcut too:
val colNames = df.schema.fieldNames
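From there, resolving the indices from the question to names is a quick sketch (assuming df has at least 16 columns):

// Resolve the wanted indices to names, then select by name.
val wanted = Array(10, 12, 13, 14, 15).map(colNames(_))
val selectedDf = df.select(wanted.head, wanted.tail: _*)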
Example: Grab first 14 columns of Spark Dataframe by Index using Scala.
import org.apache.spark.sql.functions.col
// Gives array of names by index (first 14 cols for example)
val sliceCols = df.columns.slice(0, 14)
// Maps names & selects columns in dataframe
val subset_df = df.select(sliceCols.map(name=>col(name)):_*)
You cannot simply do this (as I tried and failed):
// Gives array of names by index (first 14 cols for example)
val sliceCols = df.columns.slice(0, 14)
// Maps names & selects columns in dataframe
val subset_df = df.select(sliceCols)
The reason is that you have to convert the Array[String] into an Array[org.apache.spark.sql.Column] for this form of select to work.
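Alternatively, since select also has a String-based overload (the head/tail form used earlier in this thread), a sketch that avoids mapping to Column:

// Sketch: use the String-based select(head, tail: _*) overload instead of mapping to Column.
val sliceCols = df.columns.slice(0, 14)
val subset_df = df.select(sliceCols.head, sliceCols.tail: _*)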
OR Wrap it in a function using Currying (high five to my colleague for this):
// Subsets a DataFrame using the beg_val & end_val column indices.
def subset_frame(beg_val: Int = 0, end_val: Int)(df: DataFrame): DataFrame = {
  val sliceCols = df.columns.slice(beg_val, end_val)
  df.select(sliceCols.map(name => col(name)): _*)
}
// Get first 25 columns as subsetted dataframe
val subset_df: DataFrame = df.transform(subset_frame(0, 25))
I have two DataFrames in my code with exactly the same dimensions, say 1,000,000 x 50. I need to add the corresponding values in both DataFrames. How can I achieve that?
One option would be to add another column with ids, union both DataFrames and then use reduceByKey. But is there any other, more elegant way?
Thanks.
Your approach is good. Another option is to take the underlying RDDs, zip them together, iterate over the pairs to sum the columns, and create a new DataFrame using either of the original schemas.
Assuming the data types of all the columns are integer, this code snippet should work. Please note that this was done in Spark 2.1.0.
import org.apache.spark.sql.Row
import spark.implicits._
val a: DataFrame = spark.sparkContext.parallelize(Seq(
(1, 2),
(3, 6)
)).toDF("column_1", "column_2")
val b: DataFrame = spark.sparkContext.parallelize(Seq(
(3, 4),
(1, 5)
)).toDF("column_1", "column_2")
// Merge rows
val rows = a.rdd.zip(b.rdd).map{
case (rowLeft, rowRight) => {
val totalColumns = rowLeft.schema.fields.size
val summedRow = for(i <- (0 until totalColumns)) yield rowLeft.getInt(i) + rowRight.getInt(i)
Row.fromSeq(summedRow)
}
}
// Create new data frame
val ab: DataFrame = spark.createDataFrame(rows, a.schema) // use any of the schemas
ab.show()
Update:
So, I experimented with the performance of my solution vs. yours. I tested with 100,000 rows, each row having 50 columns. With your approach there are 51 columns, the extra one being the id column. On a single machine (no cluster), my solution seems to work a bit faster.
The union and group-by approach takes about 5598 milliseconds,
whereas my solution takes about 5378 milliseconds.
My assumption is that the first solution takes a bit more time because of the union operation on the two DataFrames.
Here are the methods which I created for testing the approaches.
def option_1()(implicit spark: SparkSession): Unit = {
import spark.implicits._
val a: DataFrame = getDummyData(withId = true)
val b: DataFrame = getDummyData(withId = true)
val allData = a.union(b)
val result = allData.groupBy($"id").agg(allData.columns.collect({ case col if col != "id" => (col, "sum") }).toMap)
println(result.count())
// result.show()
}
def option_2()(implicit spark: SparkSession): Unit = {
val a: DataFrame = getDummyData()
val b: DataFrame = getDummyData()
// Merge rows
val rows = a.rdd.zip(b.rdd).map {
case (rowLeft, rowRight) => {
val totalColumns = rowLeft.schema.fields.size
val summedRow = for (i <- (0 until totalColumns)) yield rowLeft.getInt(i) + rowRight.getInt(i)
Row.fromSeq(summedRow)
}
}
// Create new data frame
val result: DataFrame = spark.createDataFrame(rows, a.schema) // use any of the schemas
println(result.count())
// result.show()
}
I loaded a CSV as a DataFrame. I would like to cast all columns to float, knowing that the file is too big to write out all the column names:
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
val df = spark.read.option("header",true).option("inferSchema", "true").csv("C:/Users/mhattabi/Desktop/dataTest2.csv")
Given this DataFrame as example:
val df = sqlContext.createDataFrame(Seq(("0", 0),("1", 1),("2", 0))).toDF("id", "c0")
with schema:
StructType(
StructField(id,StringType,true),
StructField(c0,IntegerType,false))
You can loop over the DF columns using the .columns function:
val castedDF = df.columns.foldLeft(df)((current, c) => current.withColumn(c, col(c).cast("float")))
So the new DF schema looks like:
StructType(
StructField(id,FloatType,true),
StructField(c0,FloatType,false))
EDIT:
If you want to exclude some columns from casting, you could do something like this (supposing we want to exclude the column id):
val exclude = Array("id")
val someCastedDF = (df.columns.toBuffer --= exclude).foldLeft(df)((current, c) =>
current.withColumn(c, col(c).cast("float")))
where exclude is an Array of all columns we want to exclude from casting.
So the schema of this new DF is:
StructType(
StructField(id,StringType,true),
StructField(c0,FloatType,false))
Note that this may not be the best way to do it, but it can be a starting point.
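For instance, a sketch that builds a single select with casts (reusing the exclude array from above) avoids chaining one withColumn call per column and yields the same schema:

import org.apache.spark.sql.functions.col

// Sketch: cast every column except the excluded ones in one select.
val exclude = Array("id")
val castedInOneSelect = df.select(df.columns.map { c =>
  if (exclude.contains(c)) col(c) else col(c).cast("float")
}: _*)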