How to use getline in spark from hdfs? - scala

I used saveAsTextFile("outputPath") to save a file with Scala in Spark.
I want to read the saved file back from HDFS line by line, like the getline command in C or Java.
How can I do this?
Is it possible to read the file line by line?
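A minimal sketch of one way to read it back, assuming the output went to a placeholder path hdfs:///outputPath: textFile reads the saved part files back as an RDD of lines, and toLocalIterator streams them to the driver one partition at a time so you can consume them one by one, much like getline.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ReadLinesBack").getOrCreate()
val sc = spark.sparkContext

// saveAsTextFile writes a directory of part files; textFile reads them all back.
val lines = sc.textFile("hdfs:///outputPath")   // placeholder path

// Stream the lines to the driver one partition at a time and handle them one by one.
lines.toLocalIterator.foreach(line => println(line))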

Related

Read hdfs data to Spark DF without mentioning file type

Is there any approach to read HDFS data into a Spark DataFrame without explicitly mentioning the file type?
spark.read.format("auto_detect").option("header", "true").load(inputPath)
We can achieve this by using scala.sys.process._ or Python's subprocess(cmd) and splitting the extension off one of the part files. But can we achieve this without using any subprocess or sys.process?
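One sketch of an approach that stays inside the JVM (no subprocess): list the files under the input path with the Hadoop FileSystem API and take the extension of the first part file as the read format. The input path and the parquet fallback below are assumptions, and compressed part files (e.g. .csv.gz) would need extra handling.
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("AutoDetectFormat").getOrCreate()

// List the files under inputPath with the Hadoop FileSystem API (no subprocess)
// and use the extension of the first part file as the read format.
val inputPath = "hdfs:///data/input"              // placeholder
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val ext = fs.listStatus(new Path(inputPath))
  .map(_.getPath.getName)
  .find(_.startsWith("part-"))
  .flatMap(_.split('.').drop(1).lastOption)       // e.g. "part-00000.csv" -> "csv"
  .getOrElse("parquet")                           // assumed fallback

val df = spark.read.format(ext).option("header", "true").load(inputPath)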

Load XML files from a folder with Pyspark

I want to load XML files from a specific folder with PySpark, but I don't want to use the com.databricks.spark.xml package. Every example I find uses the com.databricks.spark.xml package.
Is there any way to read XML files without this package?
Can you use 'xml.etree.ElementTree as ET'? If so, write a Python function using that module and create a UDF from it. Read the XML files into PySpark as RDDs and parse them with the UDF.

Difference between File write using spark and scala and the advantages?

DF().write
.format("com.databricks.spark.csv")
.save("filepath/selectedDataset.csv")
vs
scala.tools.nsc.io.File("/Users/saravana-6868/Desktop/hello.txt").writeAll("String")
In the code above, I write a file using both a DataFrame and plain Scala. What is the difference between the two?
The first piece of code uses the Spark API to write the DataFrame to a file in CSV format. You can write to HDFS or the local file system with it, and you can even repartition and parallelize the write. The second piece of code uses the Scala API, which can only write to the local file system, and you cannot parallelize it. The first piece of code leverages the whole cluster to do its work; the second does not.
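A small sketch of the contrast, assuming a DataFrame named df and placeholder paths: the Spark write is distributed across the cluster and the number of output part files follows the partitioning, while the plain JVM write runs only on the driver.
// Spark API: distributed write; repartition controls the number of part files.
df.repartition(8)
  .write
  .option("header", "true")
  .format("csv")
  .save("hdfs:///out/selectedDataset")            // placeholder path

// Plain Scala/Java API: runs only on the driver, writing to its local file system.
val writer = new java.io.PrintWriter("/tmp/hello.txt")
writer.write("String")
writer.close()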

Spark Structured Streaming Processing Previous Files

I am implementing the file source in Spark Structured Streaming and want to process the same file name again if the file has been modified (basically an update to the file). Currently Spark will not process the same file name again once it has been processed, which seems limited compared to Spark Streaming with DStreams. Is there a way to do this? Spark Structured Streaming doesn't document this anywhere; it only processes new files with different names.
I believe this is somewhat of an anti-pattern, but you may be able to dig through the checkpoint data and remove the entry for that original file.
Try looking for the original file name in the /checkpoint/sources// files and delete the file or entry. That might cause the stream to pick up the file name again. I haven't tried this myself.
If this is a one-time manual update, I would just change the file name to something new and drop it in the source directory. This approach won't be maintainable or automated.
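A tiny sketch of that rename workaround, assuming hypothetical paths and an existing SparkSession named spark:
import org.apache.hadoop.fs.{FileSystem, Path}

// Move the modified file into the watched directory under a new name so the
// file source treats it as a brand-new file. Paths are hypothetical.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.rename(new Path("/landing/data.csv"), new Path("/streaming-source/data_v2.csv"))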

Spark saveAsTextFile to Azure Blob creates a blob instead of a text file

I am trying to save an RDD to a text file. My instance of Spark is running on Linux and connected to Azure Blob Storage.
val rdd = sc.textFile("wasb:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv")
//find the rows which have only one digit in the 7th column in the CSV
val rdd1 = rdd.filter(s => s.split(",")(6).length() == 1)
rdd1.saveAsTextFile("wasb:///HVACOut")
When I look at the output, it is not a single text file but a series of application/octet-stream files in a folder called HVACOut.
How can I output it as a single text file instead?
Well, I am not sure you can get just one file without a directory. If you do
rdd1.coalesce(1).saveAsTextFile("wasb:///HVACOut")
you will get one file inside a directory called "HVACOut"; the file should look something like part-00000. This is because your RDD is distributed across your cluster in what Spark calls partitions. When you call save (any of the save functions), Spark writes one file per partition. So by calling coalesce(1) you are telling Spark you want only 1 partition.
Hope this helps.
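If you really need a single stand-alone file, one further option (a sketch, untested here) is to rename the lone part file after the coalesced write using the Hadoop FileSystem API; the HVAC.txt target name is just an example.
import org.apache.hadoop.fs.{FileSystem, Path}

// After rdd1.coalesce(1).saveAsTextFile("wasb:///HVACOut") the directory holds
// exactly one part file; move it out as a single text file and clean up.
val fs = FileSystem.get(sc.hadoopConfiguration)
val outDir = new Path("wasb:///HVACOut")
val partFile = fs.listStatus(outDir).map(_.getPath).find(_.getName.startsWith("part-")).get
fs.rename(partFile, new Path("wasb:///HVAC.txt"))   // example target name
fs.delete(outDir, true)                             // remove the now-empty directory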
After you finish provisioning an Apache Spark cluster on Azure HDInsight, you can go to the built-in Jupyter notebook for your cluster at https://YOURCLUSTERNAME.azurehdinsight.net/jupyter.
There you will find sample notebooks with examples of how to do this.
Specifically, for Scala, you can go to the notebook named "02 - Read and write data from Azure Storage Blobs (WASB) (Scala)".
Copying some of the code and comments here:
Note:
Because CSV is not natively supported by Spark, there is no built-in way to write an RDD to a CSV file. However, you can work around this if you want to save your data as CSV.
Code:
csvFile.map((line) => line.mkString(",")).saveAsTextFile("wasb:///example/data/HVAC2sc.csv")
Hope this helps!