I am getting status messages of the following form from a Spark Structured Streaming application:
18/02/12 16:38:54 INFO StreamExecution: Streaming query made progress: {
"id" : "a6c37f0b-51f4-47c5-a487-8bd269b80142",
"runId" : "061e41b4-f488-4483-a290-403f1f7eff03",
"name" : null,
"timestamp" : "2018-02-12T11:08:54.323Z",
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0,
"durationMs" : {
"getOffset" : 30,
"triggerExecution" : 46
},
"eventTime" : {
"watermark" : "1970-01-01T00:00:00.000Z"
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "FileStreamSource[file:/home/chiralcarbon/IdeaProjects/spark_structured_streaming/args[0]]",
"startOffset" : null,
"endOffset" : null,
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0
} ],
"sink" : {
"description" : "org.apache.spark.sql.execution.streaming.ConsoleSink#bcc171"
}
}
All of the messages have numInputRows with value 0.
The program streams data from a Parquet file and outputs the same stream to the console. Following is the code:
def main(args: Array[String]): Unit = {
  val spark: SparkSession = SparkSession.builder
    .master("local")
    .appName("sparkSession")
    .getOrCreate()
  val schema = ..
  val in = spark.readStream
    .schema(schema)
    .parquet("args[0]")
  val query = in.writeStream
    .format("console")
    .outputMode("append")
    .start()
  query.awaitTermination()
}
}
What is the cause and how do I resolve this?
You have an error in readStream:
val in = spark.readStream
.schema(schema)
.parquet("args[0]")
You probably want to read from the directory provided in the first argument. Use direct access to the argument, or string interpolation, instead:
val in = spark.readStream
.schema(schema)
.parquet(args(0))
or, for the last line, if the expression is longer or involves some concatenation in another situation:
.parquet(s"${args(0)}")
Currently your code tries to read from a non-existent directory (the literal string "args[0]"), so no files will be read. After the change, the directory will be provided correctly and Spark will start reading files.
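Putting it together, a corrected version of the main method might look like this (a sketch only; the object name is hypothetical and the schema is left as a placeholder, exactly as it was elided in the question):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StructType

object ParquetToConsole { // hypothetical name
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("sparkSession")
      .getOrCreate()

    val schema: StructType = ??? // define your schema here, as in the question

    val in = spark.readStream
      .schema(schema)
      .parquet(args(0)) // directory passed as the first command-line argument

    val query = in.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}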
Related
I am writing a batch application to consume Kafka events and write them to a GCS location. I tried deleting the checkpoint location and also verified Kafka has 200 messages to consume.
Spark - 2.4.8
Scala - 2.12
Submit Command : spark-shell --master yarn --packages org.apache.spark:spark-sql-kafka-0-10_2.12:2.4.8
import org.apache.spark.sql.streaming.Trigger
val readInputKafkaDataNew = spark.readStream
.format("kafka")
.option(
"kafka.bootstrap.servers",
"localhost:9092"
)
.option("subscribe", "changefeed")
.load()
.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
.as[(String, String)]
.writeStream
.outputMode("append")
.format("csv")
.option(
"path",
"gs://test-data-today/test_data/abc"
)
.option(
"checkpointLocation",
"gs://test-data-today/test_data/chkdir"
)
.trigger(Trigger.Once())
.start()
.awaitTermination();
The console log prints that it's committing the offsets, but the dataframe is empty:
22/06/24 22:25:35 INFO org.apache.spark.sql.execution.streaming.MicroBatchExecution: Streaming query made progress: {
"id" : "df24d47d-1dbb-4124-9d4f-0e4d9e6a0275",
"runId" : "e0f330e7-fdcf-47ba-85c6-417afbb3cff9",
"name" : null,
"timestamp" : "2022-06-24T22:25:23.257Z",
"batchId" : 0,
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0,
"durationMs" : {
"addBatch" : 5675,
"getBatch" : 1,
"getEndOffset" : 0,
"queryPlanning" : 17,
"setOffsetRange" : 4870,
"triggerExecution" : 12018,
"walCommit" : 715
},
"stateOperators" : [ ],
"sources" : [ {
"description" : "KafkaV2[Subscribe[changefeed]]",
"startOffset" : null,
"endOffset" : {
"changefeed" : {
"2" : 34,
"5" : 28,
"4" : 26,
"1" : 39,
"3" : 49,
"0" : 30
}
},
"numInputRows" : 0,
"processedRowsPerSecond" : 0.0
} ],
"sink" : {
"description" : "FileSink[gs://gs://test-data-today/test_data/abc]"
}
}
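One thing worth checking here, offered as a guess rather than a confirmed diagnosis: for streaming reads the Kafka source's startingOffsets option defaults to latest, so a first run against a new (or deleted) checkpoint location can legitimately process zero rows even though the topic holds messages. A minimal sketch that starts from the earliest offsets instead:

// Same read as above, but explicitly starting from the earliest offsets.
// With a fresh checkpoint and the default startingOffsets ("latest" for
// streaming reads), batch 0 can contain no rows.
val readFromEarliest = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "changefeed")
  .option("startingOffsets", "earliest")
  .load()

Since this is meant to be a one-shot batch job, spark.read.format("kafka") is another option; for batch reads the source defaults to the earliest offsets.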
I have to join two spark data-frames in Scala based on a custom function. Both data-frames have the same schema.
Sample Row of data in DF1:
{
"F1" : "A",
"F2" : "B",
"F3" : "C",
"F4" : [
{
"name" : "N1",
"unit" : "none",
"count" : 50.0,
"sf1" : "val_1",
"sf2" : "val_2"
},
{
"name" : "N2",
"unit" : "none",
"count" : 100.0,
"sf1" : "val_3",
"sf2" : "val_4"
}
]
}
Sample Row of data in DF2:
{
"F1" : "A",
"F2" : "B",
"F3" : "C",
"F4" : [
{
"name" : "N1",
"unit" : "none",
"count" : 80.0,
"sf1" : "val_5",
"sf2" : "val_6"
},
{
"name" : "N2",
"unit" : "none",
"count" : 90.0,
"sf1" : "val_7",
"sf2" : "val_8"
},
{
"name" : "N3",
"unit" : "none",
"count" : 99.0,
"sf1" : "val_9",
"sf2" : "val_10"
}
]
}
RESULT of Joining these sample rows:
{
"F1" : "A",
"F2" : "B",
"F3" : "C",
"F4" : [
{
"name" : "N1",
"unit" : "none",
"count" : 80.0,
"sf1" : "val_5",
"sf2" : "val_6"
},
{
"name" : "N2",
"unit" : "none",
"count" : 100.0,
"sf1" : "val_3",
"sf2" : "val_4"
},
{
"name" : "N3",
"unit" : "none",
"count" : 99.0,
"sf1" : "val_9",
"sf2" : "val_10"
}
]
}
The result is:
a full outer join based on the values of "F1", "F2" and "F3", plus
a merge of "F4" keeping unique nodes (using "name" as the id) with the max value of "count".
I am not very familiar with Scala and have been struggling with this for more than a day now. Here is what I have gotten to so far:
val df1 = sqlContext.read.parquet("stack_a.parquet")
val df2 = sqlContext.read.parquet("stack_b.parquet")
val df4 = df1.toDF(df1.columns.map(_ + "_A"):_*)
val df5 = df2.toDF(df1.columns.map(_ + "_B"):_*)
val df6 = df4.join(df5, df4("F1_A") === df5("F1_B") && df4("F2_A") === df5("F2_B") && df4("F3_A") === df5("F3_B"), "outer")
def joinFunction(r:Row) = {
//Need the real-deal here!
//print(r(3)) //-->Any = WrappedArray([..])
//also considering parsing as json to do the processing but not sure about the performance impact
//val parsed = JSON.parseFull(r.json) //then play with parsed
r.toSeq //
}
val finalResult = df6.rdd.map(joinFunction)
finalResult.collect
I was planning to add the custom merge logic in joinFunction but I am struggling to convert the WrappedArray/Any class to something I can work with.
Any inputs on how to do the conversion or the join in a better way will be very helpful.
Thanks!
Edit (7 Mar, 2021)
The full-outer join actually has to be performed only on "F1".
Hence, using #werner's answer, I am doing:
val df1_a = df1.toDF(df1.columns.map(_ + "_A"):_*)
val df2_b = df2.toDF(df2.columns.map(_ + "_B"):_*)
val finalResult = df1_a.join(df2_b, df1_a("F1_A") === df2_b("F1_B"), "full_outer")
.drop("F1_B")
.withColumn("F4", joinFunction(col("F4_A"), col("F4_B")))
.drop("F4_A", "F4_B")
.withColumn("F2", when(col("F2_A").isNull, col("F2_B")).otherwise(col("F2_A")))
.drop("F2_A", "F2_B")
.withColumn("F3", when(col("F3_A").isNull, col("F3_B")).otherwise(col("F3_A")))
.drop("F3_A", "F3_B")
But I am getting this error. What am I missing..?
You can implement the merge logic with the help of a udf:
//case class to define the schema of the udf's return value
case class F4(name: String, unit: String, count: Double, sf1: String, sf2: String)
val joinFunction = udf((a: Seq[Row], b: Seq[Row]) =>
(a ++ b).map(r => F4(r.getAs[String]("name"),
r.getAs[String]("unit"),
r.getAs[Double]("count"),
r.getAs[String]("sf1"),
r.getAs[String]("sf2")))
//group the elements from both arrays by name
.groupBy(_.name)
//take the element with the max count from each group
.map { case (_, d) => d.maxBy(_.count) }
.toSeq)
//join the two dataframes
val finalResult = df1.withColumnRenamed("F4", "F4_A").join(
df2.withColumnRenamed("F4", "F4_B"), Seq("F1", "F2", "F3"), "full_outer")
//call the merge function
.withColumn("F4", joinFunction('F4_A, 'F4_B))
//drop the intermediate columns
.drop("F4_A", "F4_B")
I'm converting my team's legacy Redshift SQL code to Spark SQL code. All the Spark examples I've seen define the schema in a non-SQL way using StructType and StructField and I'd prefer to define the schema in SQL, since most of my users know SQL but not Spark.
This is the ugly workaround I'm doing now. Is there a more elegant way that doesn't require defining an empty table just so that I can pull the SQL schema?
create_table_sql = '''
CREATE TABLE public.example (
id LONG,
example VARCHAR(80)
)'''
spark.sql(create_table_sql)
schema = spark.sql("DESCRIBE public.example").collect()
s3_data = spark.read.\
option("delimiter", "|")\
.csv(
path="s3a://"+s3_bucket_path,
schema=schema
)\
.saveAsTable('public.example')
Yes, there is a way to create a schema from a string, although I am not sure it really looks like SQL! You can use:
from pyspark.sql.types import _parse_datatype_string
_parse_datatype_string("id: long, example: string")
This will create the following schema:
StructType(List(StructField(id,LongType,true),StructField(example,StringType,true)))
Or you may have a complex schema as well:
schema = _parse_datatype_string("customers array<struct<id: long, name: string, address: string>>")
StructType(
List(StructField(
customers,ArrayType(
StructType(
List(
StructField(id,LongType,true),
StructField(name,StringType,true),
StructField(address,StringType,true)
)
),true),true)
)
)
You can check for more examples here
Adding to what has already been said, making a schema (e.g. StructType-based or JSON) is more straightforward in Scala Spark than in PySpark:
> import org.apache.spark.sql.types.StructType
> val s = StructType.fromDDL("customers array<struct<id: long, name: string, address: string>>")
> s
res3: org.apache.spark.sql.types.StructType = StructType(StructField(customers,ArrayType(StructType(StructField(id,LongType,true),StructField(name,StringType,true),StructField(address,StringType,true)),true),true))
> s.prettyJson
res9: String =
{
"type" : "struct",
"fields" : [ {
"name" : "customers",
"type" : {
"type" : "array",
"elementType" : {
"type" : "struct",
"fields" : [ {
"name" : "id",
"type" : "long",
"nullable" : true,
"metadata" : { }
}, {
"name" : "name",
"type" : "string",
"nullable" : true,
"metadata" : { }
}, {
"name" : "address",
"type" : "string",
"nullable" : true,
"metadata" : { }
} ]
},
"containsNull" : true
},
"nullable" : true,
"metadata" : { }
} ]
}
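Tying this back to the original use case: the schema parsed from a DDL string can be passed straight to the reader. A short sketch in Scala, with a placeholder path standing in for the real S3 location:

import org.apache.spark.sql.types.StructType

// parse the schema from a SQL-like DDL string
val schema = StructType.fromDDL("id LONG, example STRING")

val s3Data = spark.read
  .option("delimiter", "|")
  .schema(schema)
  .csv("s3a://my-bucket/my-prefix/") // placeholder path

s3Data.write.saveAsTable("public.example")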
I have a file with data in Avro format. I would like to read this data into a GenericRecord, or any other data structure, so that I can send it from Kafka to Spark.
I tried to use DataFileReader; however, the result was this error:
Exception in thread "main" java.io.IOException: Not a data file.
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:105)
Here is the code that produced it:
val schema = Source.fromFile(schemaPath).mkString
val parser = new Schema.Parser
val avroSchema = parser.parse(schema)
val avroDataFile = new File(dataPath)
val avroReader = new GenericDatumReader[GenericRecord](avroSchema)
val dataFileReader = new DataFileReader[GenericRecord](avroDataFile, avroReader)
//the line above (the DataFileReader constructor) produced the error
How can I fix this error?
This is what my Avro data schema looks like:
{
"type" : "record",
"namespace" : "input_data",
"name" : "testUser",
"fields" : [
{"name" : "name", "type" : "string", "default": "NONE"},
{"name" : "age", "type" : "int", "default": -1},
{"name" : "phone", "type" : "string", "default" : "NONE"},
{"name" : "city", "type" : "string", "default" : "NONE"},
{"name" : "country", "type" : "string", "default" : "NONE"}
]
}
And this is the data I tried to read (it was generated by this tool):
{
"name" : "O= ~usP3\u0001\bY\u0011k\u0001",
"age" : 585392215,
"phone" : "\u0012\u001F#\u001FH]e\u0015UW\u0000\fo",
"city" : "aWi\u001B'\u000Bh\u00163\u001A_I\u0001\u0001L",
"country" : "]H\u001Dl(n!Sr}oVCH"
}
{
"name" : "\u0011Y~\fV\u001Dv%4\u0006;\u0012",
"age" : -2045540864,
"phone" : "UyOdgny-hA",
"city" : "\u0015f?\u0000\u0015oN{\u0019\u0010\u001D%",
"country" : "eY>c\u0010j\u0002[\u001CdDQ"
}
...
Well, that data is not Avro, it is JSON.
If it were binary Avro data, you would not be able to read the file without first using the avro-tools.jar tojson action.
If you look at the usage doc, JSON is the default:
-j, --json: Encode outputted data in JSON format (default)
To actually get Avro, use arg -s schema.avsc -b -o out.avro
There are also other ways to generate test data in Kafka.
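If you want to keep working with the JSON-encoded test data rather than regenerating it as binary Avro, one option (a sketch of my own, not part of the answer above) is Avro's JsonDecoder, which parses a JSON record against the schema into a GenericRecord. It reuses schemaPath from the question; the sample record is made up for illustration:

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory
import scala.io.Source

// parse the same schema file used in the question
val avroSchema = new Schema.Parser().parse(Source.fromFile(schemaPath).mkString)
val reader = new GenericDatumReader[GenericRecord](avroSchema)

// decode one JSON-encoded record into a GenericRecord
val json = """{"name":"Fiver","age":4,"phone":"NONE","city":"NONE","country":"NONE"}"""
val decoder = DecoderFactory.get().jsonDecoder(avroSchema, json)
val record: GenericRecord = reader.read(null, decoder)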
I have a scenario where I will be getting different JSON results from multiple APIs, and I need to read a specific value from each response.
For instance, my JSON response is as below. I need a format the user can provide by which I can read the value of lat; I don't want a hard-coded approach for this. The user could provide the node to read in some other JSON or text file:
{
"name" : "Watership Down",
"location" : {
"lat" : 51.235685,
"long" : -1.309197
},
"residents" : [ {
"name" : "Fiver",
"age" : 4,
"role" : null
}, {
"name" : "Bigwig",
"age" : 6,
"role" : "Owsla"
} ]
}
You can get the key from the JSON using the Scala JSON parser as below. I'm defining a function to get the lat, which you can make generic as per your need, so that you just need to change the function.
import scala.util.parsing.json.JSON
val json =
"""
|{
| "name" : "Watership Down",
| "location" : {
| "lat" : 51.235685,
| "long" : -1.309197
| },
| "residents" : [ {
| "name" : "Fiver",
| "age" : 4,
| "role" : null
| }, {
| "name" : "Bigwig",
| "age" : 6,
| "role" : "Owsla"
| } ]
|}
""".stripMargin
val jsonObject = JSON.parseFull(json).get.asInstanceOf[Map[String, Any]]
val latLambda : (Map[String, Any] => Option[Double] ) = _.get("location")
.map(_.asInstanceOf[Map[String, Any]]("lat").toString.toDouble)
assert(latLambda(jsonObject) == Some(51.235685))
The expanded version of the function:
val latitudeLambda = new Function[Map[String, Any], Double]{
override def apply(input: Map[String, Any]): Double = {
input("location").asInstanceOf[Map[String, Any]]("lat").toString.toDouble
}
}
Make the function generic so that, once you know which key you want from the JSON, you just need to change the function and apply it to the JSON.
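Along those lines, here is a minimal sketch (my own addition, reusing jsonObject and the scala.util.parsing.json parser from above) that walks a user-supplied dot-separated path such as "location.lat" through the parsed map, so the node to read can come from a config or text file:

// walk a dot-separated path through the Map[String, Any] from JSON.parseFull
def readPath(parsed: Map[String, Any], path: String): Option[Any] =
  path.split('.').foldLeft(Option(parsed: Any)) {
    case (Some(m: Map[String, Any] @unchecked), key) => m.get(key)
    case _                                           => None
  }

readPath(jsonObject, "location.lat") // Some(51.235685)
readPath(jsonObject, "name")         // Some(Watership Down)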
Hope it helps. But there are nicer APIs out there, like the Play JSON library. You can simply use:
import play.api.libs.json._
val jsonVal = Json.parse(json)
val lat = (jsonVal \ "location" \ "lat").get