create Dataset<Row> from a Dataset created by reading from a socket (Spark Java)

In Spark Streaming, when the input source is a csv file read through a socket (Java), a Dataset<Row> is created with only a string column, and the value of each row contains a line sent through the socket.
When I know the format of each line, e.g. the first two values of the csv line are Strings, the next is an integer, and so on, is it possible to declare my schema and create another Dataset<Row> based on that schema, placing the data accordingly?
Thank you in advance.

First of all, if the input is a csv file, I don't see any point in using Spark Streaming for that. It is historical data, the data is not changing, so you should use plain Spark SQL to read and process the csv.
You can create your schema by creating StructFields and declaring their data types.
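For example, here is a minimal sketch of that idea in Scala (the Java API is analogous); the host, port, column names and types are assumptions for illustration only:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val spark = SparkSession.builder().appName("socket-csv").master("local[*]").getOrCreate()
import spark.implicits._

// Hypothetical schema describing the csv lines sent over the socket
val schema = StructType(Seq(
  StructField("firstName", StringType),
  StructField("lastName", StringType),
  StructField("age", IntegerType)
))

// The socket source produces a Dataset with a single string column named "value"
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Parse each line against the declared schema (from_csv is available since Spark 3.0;
// on older versions you can split(col("value"), ",") and cast each field instead)
val typed = lines
  .select(from_csv($"value", schema, Map.empty[String, String]).as("row"))
  .select("row.*")

// typed now has the columns firstName, lastName and age with the declared types, e.g.
// typed.writeStream.format("console").start().awaitTermination()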

Related

To Export Data from Spark Dataframe to CSV with User Defined Headers

I am reading a Hive table through Spark SQL and storing it in a Spark Dataframe. I then export the data from the Dataframe to CSV using coalesce, and it was successful. The only problem is that I want the CSV header to be some understandable words, but it just shows the column names.
Is there a way to have my CSV header customized?
You can use df.withColumnRenamed('old', 'new') to rename columns before saving as CSV
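For example, a small Scala sketch along those lines (the column names and output path here are placeholders):
// df is the dataframe read from the Hive table; rename technical names to readable headers
val renamed = df
  .withColumnRenamed("cust_id", "Customer ID")
  .withColumnRenamed("ord_dt", "Order Date")

// header=true writes the (renamed) column names as the first line of each csv file
renamed.coalesce(1)
  .write
  .option("header", "true")
  .csv("/tmp/report_csv")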

Populate a Properties Object from Spark Databricks File System

TL;DR
Is there a way to read a Scala/Java properties file from a Databricks file system?
Or, is there a way to convert a spark data frame Rows into a set of text key/value pairs (that Scala will understand)?
Full Problem:
The properties file is not local, it's on the Databricks cluster. Attempts to read a file from "dbfs:/" or "/dbfs" fail to find the file when using the scala.io.Source library. My guess is that Scala Source has no ability to recognize the URI for the Databricks file system(?).
I'm able to read the file into a Spark Dataframe, however attempts to populate a java.util.Properties object fail with an error that it doesn't accept the Spark Dataframe "Row" type. I've tried changing the data frame to an Array and a List, but I run into the same type mismatch. java.util.List[org.apache.spark.sql.Row], for example, is what I get when converting the data frame to a list. I'm guessing that means dataFrameObject.collectAsList() makes a list of Spark rows instead of a text list of key/value pairs.
Obviously I'm new to Scala... If there isn't a way to read/load my properties file directly from DBFS, is there a way to convert the spark Row to a key/value pairs - or a byteStream?
Cheers and thanks,
Simon
If you're using full version of the Databricks, not community edition, then you should be able to access files on DBFS via /dbfs/_the_rest_of_your_path_without_dbfs:/_...
But if you can't access /dbfs/..., then you can still load the properties as follows:
load the file into Spark using the text format, which converts every line in the file into an individual row
create a single text from those rows - first collect all rows to the driver node, then extract the string from each row (using .getString(0) to fetch the first element of the row), and then merge all lines together using mkString
create a reader for that text
create a properties object and load the data from the reader (don't forget to close the reader after use):
val path_to_file = "dbfs:/something...."
val df = spark.read.format("text").load(path_to_file)
val allText = df.collect().map(_.getString(0)).mkString("\n")
val reader = new java.io.StringReader(allText)
val props = new java.util.Properties()
props.load(reader)
reader.close()
and you can check that properties are loaded with
props.list(System.out)

Spark/Scala: Store temptable data into csv file

I have one program which expects a csv file and is written in Python.
The csv data is supposed to come from Scala, which uses Spark functionality to read the data from the source and store it in a temp table like below.
abb.createOrReplaceTempView("tempt")
tempt is the outcome of the above Spark command described.
I want to store the tempt data into a csv file /tmp/something.csv
But I did not find anything as such in Scala with Spark which would serve my purpose.
Please suggest what would be the best way to store tempt into a csv file.
declaring "temp" as tempTable allows you to reference it when you write SQL commands in spark
if you want to save the dataframe use abb.write.csv("file_name")
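For example, a minimal Scala sketch of both options (the header option and output path are assumptions; Spark writes a directory of part files, so the single part file inside it would still need to be renamed or moved to /tmp/something.csv):
// Option 1: write the dataframe behind the temp view directly
abb.coalesce(1)                      // one output part, since the Python program expects a single file
  .write
  .option("header", "true")
  .mode("overwrite")
  .csv("/tmp/something_csv")

// Option 2: query the temp view with SQL and write the result
spark.sql("SELECT * FROM tempt")
  .coalesce(1)
  .write
  .option("header", "true")
  .mode("overwrite")
  .csv("/tmp/something_csv")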

How to save data in parquet format and append entries

I am trying to follow this example to save some data in parquet format and read it back. If I use write.parquet("filename"), then the iterating Spark job gives the error that
"filename" already exists.
If I use the SaveMode.Append option, then the Spark job gives the error
"org.apache.spark.sql.AnalysisException: Specifying database name or other qualifiers are not allowed for temporary tables".
Please let me know the best way to ensure new data is just appended to the parquet file. Can I define primary keys on these parquet tables?
I am using Spark 1.6.2 on Hortonworks 2.5 system. Here is the code:
// Option 1: peopleDF.write.parquet("people.parquet")
//Option 2:
peopleDF.write.format("parquet").mode(SaveMode.Append).saveAsTable("people.parquet")
// Read in the parquet file created above
val parquetFile = spark.read.parquet("people.parquet")
//Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
val teenagers = sqlContext.sql("SELECT * FROM people.parquet")
I believe that if you write to a path with .parquet("...."), you should use .mode("append"),
not SaveMode.Append with saveAsTable:
df.write.mode("append").parquet("....")
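For the Spark 1.6 setup in the question, a hedged sketch of that approach (note that parquet files have no primary keys; appending simply adds new part files under the same path):
// Append new rows to the parquet path; repeated runs add new part files instead of failing
peopleDF.write.mode("append").parquet("people.parquet")

// Read back everything written so far and expose it to SQL under a simple name
val parquetFile = sqlContext.read.parquet("people.parquet")
parquetFile.registerTempTable("parquetFile")
val everyone = sqlContext.sql("SELECT * FROM parquetFile")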

How to write csv file into one file by pyspark

I use this method to write a csv file, but it generates a file with multiple part files. That is not what I want; I need it in one file. I also found another post that uses Scala to force everything to be calculated on one partition, so you get one file.
First question: how do I achieve this in Python?
The second post also says a Hadoop function could merge multiple files into one.
Second question: is it possible to merge the files in Spark?
You can use:
df.coalesce(1).write.csv('result.csv')
Note:
when you use the coalesce function you will lose your parallelism.
You can do this by using the cat command line function as below. This will concatenate all of the part files into 1 csv. There is no need to repartition down to 1 partition.
import os
test.write.csv('output/test')
os.system("cat output/test/p* > output/test.csv")
The requirement is to save an RDD in a single CSV file by bringing the RDD down to one executor, which means the RDD partitions present across executors will be shuffled to one executor. We can use coalesce(1) or repartition(1) for this purpose. In addition, one can add a column header to the resulting csv file.
First, we can keep a utility function to make the data CSV compatible.
def toCSVLine(data):
    return ','.join(str(d) for d in data)
Let’s suppose MyRDD has five columns and it needs 'ID', 'DT_KEY', 'Grade', 'Score', 'TRF_Age' as column headers. So I create a header RDD and union it with MyRDD as below, which most of the time keeps the header on top of the csv file.
unionHeaderRDD = sc.parallelize([('ID', 'DT_KEY', 'Grade', 'Score', 'TRF_Age')]) \
    .union(MyRDD)
unionHeaderRDD.coalesce(1).map(toCSVLine).saveAsTextFile("MyFileLocation")
The saveAsPickleFile RDD API method can be used to serialize the saved data in order to save space. Use sc.pickleFile to read the pickled file back.
I needed my csv output in a single file with headers, saved to an S3 bucket with the filename I provided. The currently accepted answer, when I run it (Spark 3.3.1 on a Databricks cluster), gives me a folder with the desired filename, and inside it there is one csv file (due to coalesce(1)) with a random name and no headers.
I found that sending it to pandas as an intermediate step provided just a single file with headers, exactly as expected.
my_spark_df.toPandas().to_csv('s3_csv_path.csv',index=False)