I am trying to load a CSV file into a distributed dataframe (ddf) while supplying a schema. The ddf gets loaded, but the timestamp column shows only null values. I believe this happens because Spark expects the timestamp in a particular format. So I have two questions:
1) How do I give Spark the format, or make it detect the format (like "MM/dd/yyyy' 'HH:mm:ss")?
2) If 1) is not an option, how do I convert the field (assuming I imported it as a String) to a timestamp?
For Q2 I have tried the following:
def convert(row: org.apache.spark.sql.Row): org.apache.spark.sql.Row = {
  import org.apache.spark.sql.Row
  val format = new java.text.SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss")
  val d1 = getTimestamp(row(3))
  Row(row(0), row(1), row(2), d1)
}
val rdd1 = df.map(convert)
val df1 = sqlContext.createDataFrame(rdd1,schema1)
The last step doesn't work because there are null values that don't let it finish. I get errors like:
java.lang.RuntimeException: Failed to check null bit for primitive long value.
sqlContext.load, however, is able to load the CSV without any problems:
val df = sqlContext.load("com.databricks.spark.csv", schema, Map("path" -> "/path/to/file.csv", "header" -> "true"))
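For reference, a minimal sketch of one way to handle Q2, assuming the problematic column is first loaded as a plain String (the column names ts_string and ts below are purely illustrative): unix_timestamp, available from Spark 1.5, accepts an explicit SimpleDateFormat pattern, and the result can be cast back to a timestamp.
import org.apache.spark.sql.functions.unix_timestamp

// df is assumed to have been loaded with the timestamp column declared as StringType
val withTs = df.withColumn("ts",
  unix_timestamp(df("ts_string"), "MM/dd/yyyy HH:mm:ss").cast("timestamp"))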
I am reading data from the Store table, which is in Snowflake. I want to pass the date from the dataframe maxdatefromtbl to my Spark SQL query to filter records.
This condition (s"CREATED_DATE!='$maxdatefromtbl'") is not working as expected.
var retail = spark.read.format("snowflake").options(options).option("query","Select MAX(CREATED_DATE) as CREATED_DATE from RSTORE").load()
val maxdatefromtbl = retail.select("CREATED_DATE").toString
var retailnew = spark.read.format("snowflake").options(options).option("query","Select * from RSTORE").load()
var finaldataresult = retailnew.filter(s"CREATED_DATE!='$maxdatefromtbl'")
Select a single value from the retail dataframe to use in the filter.
import org.apache.spark.sql.functions.col

val maxdatefromtbl = retail.select("CREATED_DATE").collect().head.getString(0)
val finaldataresult = retailnew.filter(col("CREATED_DATE") =!= maxdatefromtbl)
The type of retail.select("CREATED_DATE") is DataFrame, and DataFrame.toString returns the schema rather than the value of the single row you have. Please see the following example from a Spark shell.
scala> val s = Seq(1, 2, 3).toDF()
scala> s.select("value").toString
res0: String = [value: int]
In the snippet above, collect() wraps the DataFrame, which has a single row in your case, in an array; head takes the first element of that array, and getString(0) gets the value of the cell at index 0 as a String. Please see the DataFrame and Row documentation pages for more information.
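For completeness, continuing the same hypothetical spark-shell session, collect().head extracts the underlying value rather than the schema:
scala> s.select("value").collect().head.getInt(0)
res1: Int = 1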
I need to insert some values into my Hive table using Spark SQL. I'm using the code below.
import java.time.LocalDateTime

val filepath: String = "/user/usename/filename.csv"
val fileName: String = filepath
val result = fileName.split("/")
val fn = result(3)          // filename
val e = LocalDateTime.now() // timestamp
First I tried using INSERT INTO ... VALUES, but then I found that this feature is not available in Spark SQL.
val ds = sparksession.sql(s"insert into mytable (filepath, filename, Start_Time) values ('${filepath}', '${fn}', '${e}')")
Is there any other way to insert these values using Spark SQL? (mytable is empty; I need to load this table every day.)
You can directly use the Spark DataFrame write API to insert data into the table.
If you do not have a DataFrame yet, first create one using spark.createDataFrame(), then write the data as follows:
df.write.insertInto("name of hive table")
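For illustration, a minimal sketch of that flow, assuming a target Hive table dbname.mytable whose three columns line up with the DataFrame below (note that insertInto matches columns by position, not by name); the variable names mirror the question and are otherwise hypothetical:
import java.time.LocalDateTime
import sparksession.implicits._ // assumes sparksession is the active SparkSession

val filepath = "/user/usename/filename.csv"
val fn = filepath.split("/").last    // filename
val e = LocalDateTime.now().toString // load timestamp as a string

// one-row DataFrame with columns in the same order as the target table
val df = Seq((filepath, fn, e)).toDF("file_path", "filename", "start_time")
df.write.insertInto("dbname.mytable")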
The below code worked for me. Since I need to use variables in my dataframe, I first created a dataframe from the selected data and then saved it into the Hive table using df.write.insertInto(tablename).
import java.time.LocalDateTime

val filepath: String = "/user/usename/filename.csv"
val fileName: String = filepath
val result = fileName.split("/")
val fn = result(3)          // filename
val e = LocalDateTime.now() // timestamp

val df1 = sparksession.sql(s"select '${filepath}' as file_path, '${fn}' as filename, '${e}' as Start_Time")
df1.write.insertInto("dbname.tablename")
I need to write data from a Spark dataframe into HDFS in Avro format. The challenge is that the data should be saved per day, so the directories would look like this: tablename/2019-08-12, tablename/2019-08-13, and so on.
I only have a timestamp field, from which I need to extract the date in order to build the directory names.
I have built an approach which has 2 problems:
1) There are difficulties with extracting the date from the timestamp.
2) On a large dataset (and it is going to get larger) performance will be very bad, as a lot of tasks are launched.
So how can I change/improve this approach?
Here is the code I used (dataDF is the input data):
val uniqueDates = dataDF.select("update_database_time").distinct
  .collect.map(elem => elem.getTimestamp(0).getDate)

uniqueDates.map(date => {
    val resultDF = dataDF.where(to_date(dataDF.col("update_database_time")) <=> date)
    val pathToSave = s"${dataDir}/${tableNameValue}/${date}"
    dataDF.write
      .format("avro")
      .option("avroSchema", SchemaRegistry.getSchema(
        schemaRegistryConfig.url,
        schemaRegistryConfig.dataSchemaSubject,
        schemaRegistryConfig.dataSchemaVersion))
      .save(s"${hdfsURL}${pathToSave}")
    resultDF
  })
  .reduce(_.union(_))
If you can live with a directory structure like
tablename/date=2019-08-12
tablename/date=2019-08-13
instead, then DataFrameWriter.partitionBy does the trick. For example
import java.sql.Timestamp
import org.apache.spark.sql.functions.to_date
import spark.implicits._

val df =
  Seq((Timestamp.valueOf("2019-06-01 12:00:00"), 1),
      (Timestamp.valueOf("2019-06-01 12:00:01"), 2),
      (Timestamp.valueOf("2019-06-02 12:00:00"), 3)).toDF("time", "foo")
df.withColumn("date", to_date($"time"))
.write
.partitionBy("date")
.format("avro")
.save("/tmp/foo")
yields the following structure
find /tmp/foo
/tmp/foo
/tmp/foo/._SUCCESS.crc
/tmp/foo/date=2019-06-01
/tmp/foo/date=2019-06-01/.part-00000-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro.crc
/tmp/foo/date=2019-06-01/part-00000-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro
/tmp/foo/date=2019-06-01/.part-00001-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro.crc
/tmp/foo/date=2019-06-01/part-00001-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro
/tmp/foo/_SUCCESS
/tmp/foo/date=2019-06-02
/tmp/foo/date=2019-06-02/part-00002-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro
/tmp/foo/date=2019-06-02/.part-00002-2a7a63f2-7038-4aec-8f76-87077f91a415.c000.avro.crc
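As a follow-up, the date=... layout is standard Hive-style partitioning, so Spark can prune partitions when the data is read back with a filter on the partition column; a small sketch, assuming the same Avro data source as above:
// only the date=2019-06-01 directory should be scanned, thanks to partition pruning
val oneDay = spark.read
  .format("avro")
  .load("/tmp/foo")
  .where($"date" === "2019-06-01")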
I am using Spark (Scala) version 1.6.
I have 2 files: one is a schema file, which has hundreds of column names separated by commas, and the other is a .gz file, which contains the data.
I am trying to read the data using the schema file and apply different transformation logic on a few of the columns.
I tried running some sample code, but I have hardcoded the column numbers in the attached pic.
I also want to write a UDF that could take any set of columns, apply a transformation such as replacing a special character, and give the output.
I'd appreciate any suggestions.
import org.apache.spark.SparkContext
import org.apache.spark.sql.functions.udf

val rdd1 = sc.textFile("../inp2.txt")
// hardcoded column index: keep only the second field of each line
val rdd2 = rdd1.map(line => line.split("\t")(1)).toDF
val replaceUDF = udf{s: String => s.replace(".", "")}
rdd2.withColumn("replace", replaceUDF('_1)).show
You can read the field-name file with plain Scala code and create a list of column names:
// this reads the schema file and creates a list of column names
import scala.io.Source

val line = Source.fromFile("path to file").getLines().toList.head
val columnNames = line.split(",")
//read the text file as an RDD of Rows and convert to a Dataframe using the column names as the schema
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val rowRdd = sc.textFile("../inp2.txt").map(line => Row.fromSeq(line.split("\t", -1)))
val schema = StructType(columnNames.map(name => StructField(name, StringType)))
val df = sqlContext.createDataFrame(rowRdd, schema)
This creates a dataframe with the column names that you have in a separate file.
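To address the second part of the question (applying the same transformation to an arbitrary set of columns), a minimal sketch building on the df created above; the column names in columnsToClean are purely illustrative:
import org.apache.spark.sql.functions.{col, udf}

// remove a special character from every value in the selected columns
val replaceUDF = udf { s: String => if (s == null) null else s.replace(".", "") }

val columnsToClean = Seq("col_a", "col_b") // hypothetical column names
val cleaned = columnsToClean.foldLeft(df) { (acc, c) =>
  acc.withColumn(c, replaceUDF(col(c)))
}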
Hope this helps!
I first read a delimited file with multiple rows and index the rows using zipWithIndex.
Next I'm trying to write that dataframe, which is created from an RDD[Row], to a CSV file using Scala.
This is my code:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructField, StructType}

val FileDF = spark.read.csv(inputfilepath)
val rdd = FileDF.rdd.zipWithIndex().map(indexedRow =>
  Row.fromSeq((indexedRow._2.toLong + SEED + 1) +: indexedRow._1.toSeq))
val FileDFWithSeqNo = StructType(Array(StructField("UniqueRowIdentifier", LongType)).++(FileDF.schema.fields))
val dataframenew = spark.createDataFrame(rdd, FileDFWithSeqNo)
dataframenew.write.format("com.databricks.spark.csv").option("delimiter", "|").save("C:\\Users\\path\\Desktop\\IndexedOutput")
where dataframenew is the final dataframe.
Input data is like:
0|0001|10|1|6001825851|0|0|0000|0|003800543||2017-03-02 00:00:00|95|O|473|3.74|0.05|N|||5676|6001661630||473|1|||UPS|2017-03-02 00:00:00|0.0000||0||20170303|793358|793358115230979
0|0001|10|1|6001825853|0|0|0000|0|003811455||2017-03-02 00:00:00|95|O|90|15.14|0.55|N|||1080|6001661630||90|1|||UPS|2017-03-02 00:00:00|0.0000||0||20170303|793358|793358115230980
0|0001|10|1|6001825854|0|0|0000|0|003812898||2017-03-02 00:00:00|95|O|15|7.60|1.33|N|||720|6001661630||15|1|||UPS|2017-03-02 00:00:00|0.0000||0||20170303|793358|793358115230981
I'm zipping with index to get a unique identifier for each row.
But this gives me an output file with data like:
1001,"0|0001|10|1|6001825851|0|0|0000|PS|0|0.0000||0||20170303|793358|793358115230979",cabc
While the expected output should be:
1001,0|0001|10|1|6001825851|0|0|0000|PS|0|0.0000||0||20170303|793358|793358115230979,cabc
Why are the extra quotes getting added into the data and how can I eliminate this?