How to get file names with Spark sc.textFile? - scala

I am reading a directory of files using the following code:
val data = sc.textFile("/mySource/dir1/*")
Now my data RDD contains all rows of all files in the directory (right?).
I now want to add a column to each row with the source file's name. How can I do that?
The other option I tried is wholeTextFiles, but I keep getting out-of-memory exceptions.
5 servers, 24 cores and 24 GB each (executor-cores 5, executor-memory 5G).
Any ideas?

You can use this code. I have tested it with Spark 1.4 and 1.5.
It gets the file name from the input split and pairs it with each line via the iterator, using mapPartitionsWithInputSplit on the underlying NewHadoopRDD:
import org.apache.hadoop.mapreduce.lib.input.{FileSplit, TextInputFormat}
import org.apache.spark.rdd.NewHadoopRDD
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.io.Text
val sc = new SparkContext(new SparkConf().setMaster("local"))
val fc = classOf[TextInputFormat]
val kc = classOf[LongWritable]
val vc = classOf[Text]
val path: String = "file:///home/user/test"
val text = sc.newAPIHadoopFile(path, fc, kc, vc, sc.hadoopConfiguration)
val linesWithFileNames = text.asInstanceOf[NewHadoopRDD[LongWritable, Text]]
  .mapPartitionsWithInputSplit((inputSplit, iterator) => {
    val file = inputSplit.asInstanceOf[FileSplit]
    iterator.map(tup => (file.getPath, tup._2))
  })
linesWithFileNames.foreach(println)
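One caution worth adding (not part of the original answer): getPath returns a Hadoop Path and the Text objects are reused by the record reader, so converting both halves of the pair to plain Strings before caching or collecting is a common precaution. A minimal sketch:
// Convert the (Path, Text) pairs to immutable Strings before reusing them elsewhere.
val asStrings = linesWithFileNames.map { case (path, line) => (path.toString, line.toString) }
asStrings.take(5).foreach(println)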

I think it's pretty late to answer this question but I found an easy way to do what you were looking for:
Step 0: from pyspark.sql import functions as F
Step 1: createDataFrame using the RDD as usual. Let's say df
Step 2: Use input_file_name()
df.withColumn("INPUT_FILE", F.input_file_name())
This will add a column to your DataFrame with the source file name.
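Since the rest of this thread is in Scala, here is the same idea in Scala (a sketch: it assumes df is a DataFrame read from the same source files, e.g. via spark.read.text, and uses input_file_name from org.apache.spark.sql.functions, available in recent Spark versions):
import org.apache.spark.sql.functions.input_file_name
// df is assumed to be a DataFrame built from the source directory, e.g. spark.read.text("/mySource/dir1/*")
val withFile = df.withColumn("INPUT_FILE", input_file_name())
withFile.show(5, truncate = false)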

Related

How can I read multiple parquet files in spark scala

Below are some folders, which might keep updating over time. They contain multiple .parquet files. How can I read them into a Spark DataFrame in Scala?
"id=200393/date=2019-03-25"
"id=200393/date=2019-03-26"
"id=200393/date=2019-03-27"
"id=200393/date=2019-03-28"
"id=200393/date=2019-03-29" and so on ...
Note: there could be 100 date folders; I need to pick only specific ones (let's say 25, 26 and 28).
Is there any better way than below ?
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.sql._
val spark = SparkSession.builder.appName("ScalaCodeTest").master("yarn").getOrCreate()
val parquetFiles = List("id=200393/date=2019-03-25", "id=200393/date=2019-03-26", "id=200393/date=2019-03-28")
spark.read.format("parquet").load(parquetFiles: _*)
The above code works, but I want to do something like below:
val parquetFiles = List()
parquetFiles(0) = "id=200393/date=2019-03-25"
parquetFiles(1) = "id=200393/date=2019-03-26"
parquetFiles(2) = "id=200393/date=2019-03-28"
spark.read.format("parquet").load(parquetFiles: _*)
You can read all the folders in the directory id=200393 this way:
val df = spark.read.parquet("id=200393/*")
If you want to select only some dates, for example only September 2019:
val df = spark.read.parquet("id=200393/date=2019-09-*")
If you only need some specific days, you can put them in a list:
val days = List("2019-09-02", "2019-09-03")
val paths = days.map(day => "id=200393/date=" + day)
val df = spark.read.parquet(paths:_*)
If you want to keep the column 'id', you could try this:
val df = spark.read
  .option("basePath", "id=200393/")
  .parquet("id=200393/date=*")
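The two ideas can also be combined (a sketch; "/data" stands in for whatever root directory contains the id=200393 folder): pass only the wanted date paths and set basePath to the root, so both partition columns come back.
// Read only the selected date partitions while keeping the id and date columns.
val days = List("2019-03-25", "2019-03-26", "2019-03-28")
val paths = days.map(day => s"/data/id=200393/date=$day")
val df2 = spark.read
  .option("basePath", "/data") // partition columns are inferred relative to basePath
  .parquet(paths: _*)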

Saving an RDD as a text file gives FileAlreadyExistsException. How to create a new file every time the program runs and delete the old one using FileUtils

Code:
val badData:RDD[ListBuffer[String]] = rdd.filter(line => line(1).equals("XX") || line(5).equals("XX"))
badData.coalesce(1).saveAsTextFile(propForFile.getString("badDataFilePath"))
The first time, the program runs fine. On running it again, it throws an exception because the file already exists.
I want to resolve this using Java FileUtils functionality and save the RDD as a text file.
Before you write to the specified path, delete the path if it already exists:
import org.apache.hadoop.fs.{FileSystem, Path}
val fs = FileSystem.get(sc.hadoopConfiguration)
fs.delete(new Path(propForFile.getString("badDataFilePath")), true)
Then perform your usual write. Hope this resolves the problem.
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val outputPath = new Path("path/to/the/files")
if (fs.exists(outputPath))
  fs.delete(outputPath, true)
Pass the path as a String; if the directory or file exists, it will be deleted. Use this snippet before writing to the output path.
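If several jobs need this, the check can be wrapped in a small helper, sketched below (the name deleteIfExists is illustrative, not from the original answer; the call at the end reuses propForFile from the question):
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SparkSession

// Recursively deletes the path if it exists, so the next write cannot hit FileAlreadyExistsException.
def deleteIfExists(spark: SparkSession, pathStr: String): Unit = {
  val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
  val path = new Path(pathStr)
  if (fs.exists(path)) fs.delete(path, true)
}

deleteIfExists(spark, propForFile.getString("badDataFilePath"))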
Why not use DataFrames? Convert the RDD[ListBuffer[String]] into an RDD[Row], something like:
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}
val badData: RDD[Row] = rdd
  .map(line => Row(line(0), line(1)... line(n)))
  .filter(row => /* filter stuff */)
// Note: toDF() on an RDD[Row] needs an explicit schema (e.g. spark.createDataFrame); the key point is that SaveMode.Overwrite replaces any existing output.
badData.toDF().write.mode(SaveMode.Overwrite).save(propForFile.getString("badDataFilePath"))
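For completeness, a minimal runnable sketch of that DataFrame route, assuming two illustrative string columns and reusing badData and propForFile from the question:
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val spark = SparkSession.builder.appName("BadDataWriter").getOrCreate()
// Illustrative two-column schema; adapt it to the real contents of the ListBuffer.
val schema = StructType(Seq(
  StructField("col1", StringType, nullable = true),
  StructField("col2", StringType, nullable = true)
))
val rows = badData.map(line => Row(line(0), line(1))) // badData: RDD[ListBuffer[String]] from the question
val df = spark.createDataFrame(rows, schema)
// Overwrite replaces any existing output, so no manual delete is needed.
df.coalesce(1).write.mode(SaveMode.Overwrite).csv(propForFile.getString("badDataFilePath"))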

Iteratively writing to an HDFS file using Spark/Scala

I am learning how to read from and write to files in HDFS using Spark/Scala.
I am unable to write to an HDFS file: the file is created, but it's empty.
I don't know how to create a loop for writing to a file.
The code is:
import scala.collection.immutable.Map
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
// Read the adult CSV file
val logFile = "hdfs://zobbi01:9000/input/adult.csv"
val conf = new SparkConf().setAppName("Simple Application")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache()
//val logFile = sc.textFile("hdfs://zobbi01:9000/input/adult.csv")
val headerAndRows = logData.map(line => line.split(",").map(_.trim))
val header = headerAndRows.first
val data = headerAndRows.filter(_(0) != header(0))
val maps = data.map(splits => header.zip(splits).toMap)
val result = maps.filter(map => map("AGE") != "23")
result.foreach{
result.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt")
}
If I replace:
result.foreach{println}
Then it works!
but when using saveAsTextFile, this error message is thrown:
<console>:76: error: type mismatch;
found : Unit
required: scala.collection.immutable.Map[String,String] => Unit
result.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt")
Any help please.
result.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt")
This is all you need to do; you don't need to loop through all the rows.
Hope this helps!
As for what this does:
result.foreach{
result.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt")
}
An RDD action (here saveAsTextFile) cannot be triggered from inside another RDD operation's closure; in this case it doesn't even compile, because the block passed to foreach is a plain Unit value rather than the Map[String,String] => Unit function that foreach expects.
Just use result.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt") to save to HDFS.
If you need a different format in the written file, transform the RDD itself before writing.
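For example, to write comma-separated lines instead of the default Map.toString output, a sketch reusing result and header from the question:
// Turn each Map back into a CSV line, keeping the column order of the original header.
val csvLines = result.map(row => header.map(col => row(col)).mkString(","))
// saveAsTextFile writes a directory of part files at this path.
csvLines.saveAsTextFile("hdfs://zobbi01:9000/input/test2.txt")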

adding two columns from a data frame in scala

I have two columns, age and salary, stored in a DataFrame. I just want to write Scala code to add these values column-wise. I tried:
val age_1 = df.select("age")
val salary_1=df.select("salary")
val add = age_1+salary_1
This gives me an error. Please help.
In the following, spark is an instance of SparkSession, so the import has to come after the instantiation of spark.
The $-notation can be used here by importing the Spark implicits with
import spark.implicits._
and then using the $-notation:
val add = df.select($"age" + $"salary")
Final Scala code:
import spark.implicits._
val add = df.select($"age" + $"salary")
Apache doc
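If the intent was to keep the original columns and append the sum as a new one, a withColumn variant may be closer to what was asked (a sketch; the column name total is illustrative):
import spark.implicits._
// Keeps age and salary and adds their sum as a new column named "total".
val withTotal = df.withColumn("total", $"age" + $"salary")
withTotal.show()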

How to use Spark-Scala to download a CSV file from the web?

Hello world,
How to use Spark-Scala to download a CSV file from the web and load the file into a spark-csv DataFrame?
Currently I depend on curl in a shell command to get my CSV file.
Here is the syntax I want to enhance:
/* fb_csv.scala
This script should load FB prices from Yahoo.
Demo:
spark-shell -i fb_csv.scala
*/
// I should get prices:
import sys.process._
"/usr/bin/curl -o /tmp/fb.csv http://ichart.finance.yahoo.com/table.csv?s=FB"!
import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
val fb_df = sqlContext.read.format("com.databricks.spark.csv").option("header","true").option("inferSchema","true").load("/tmp/fb.csv")
fb_df.head(9)
I want to enhance the above script so it is pure Scala with no shell syntax inside.
val content = scala.io.Source.fromURL("http://ichart.finance.yahoo.com/table.csv?s=FB").mkString
val list = content.split("\n").filter(_ != "")
val rdd = sc.parallelize(list)
val df = rdd.toDF // toDF needs the SQL implicits (import sqlContext.implicits._ if not in the shell)
I found a better answer in Process CSV from REST API into Spark:
Here you go:
import scala.io.Source._
import org.apache.spark.sql.{Dataset, SparkSession}
import spark.implicits._ // needed for toDS()
val url = "http://ichart.finance.yahoo.com/table.csv?s=FB" // the URL from the question above
val res = fromURL(url).mkString.stripMargin.lines.toList
val csvData: Dataset[String] = spark.sparkContext.parallelize(res).toDS()
val frame = spark.read.option("header", true).option("inferSchema", true).csv(csvData)
frame.printSchema()