Read AVRO structures saved in HBase columns - scala

I am new to Spark and HBase. I am working with the backups of an HBase table. These backups are in an S3 bucket. I am reading them via Spark (Scala) using newAPIHadoopFile like this:
conf.set("io.serializations", "org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.hbase.mapreduce.ResultSerialization")
val data = sc.newAPIHadoopFile(
  path,
  classOf[SequenceFileInputFormat[ImmutableBytesWritable, Result]],
  classOf[ImmutableBytesWritable],
  classOf[Result],
  conf)
The table in question is called Emps. The schema of Emps is:
key: empid {COMPRESSION => 'gz' }
family: data
dob - date of birth of this employee.
e_info - avro structure for storing emp info.
e_dept- avro structure for storing info about dept.
family: extra - Extra Metadata {NAME => 'extra', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
e_region - emp region
e_status - some data about his achievements
... some more metadata columns
The table has some columns that hold simple string data and some columns that hold AVRO structures.
I am trying to read this data directly from the HBase backup files in S3. I do not want to re-create the HBase table on my local machine, as it is very, very large.
This is how I am trying to read this:
data.keys.map{k=>(new String(k.get()))}.take(1)
res1: Array[String] = Array(111111111100011010102462)
data.values.map { v =>
  for (cell <- v.rawCells()) yield {
    val family = CellUtil.cloneFamily(cell)
    val column = CellUtil.cloneQualifier(cell)
    val value  = CellUtil.cloneValue(cell)
    new String(family) + "->" + new String(column) + "->" + new String(value)
  }
}.take(1)
res2: Array[Array[String]] = Array(Array(info->dob->01/01/1996, info->e_info->?ж�?�ո� ?�� ???̶�?�ո� ?�� ????, info->e_dept->?ж�??�ո� ?̶�??�ո� �ո� ??, extra->e_region-> CA, extra->e_status->, .....))
As expected I can see the simple string data correctly, but the AVRO data is garbage.
I tried reading the AVRO structures using GenericDatumReader:
data.values.map { v =>
  for (cell <- v.rawCells()) yield {
    val family = new String(CellUtil.cloneFamily(cell))
    val column = new String(CellUtil.cloneQualifier(cell))
    val value  = CellUtil.cloneValue(cell)
    if (column == "e_info") {
      val schema_obj = new Schema.Parser
      // schema_e_info contains the AVRO schema for e_info
      val schema = schema_obj.parse(schema_e_info)
      val READER2 = new GenericDatumReader[GenericRecord](schema)
      val datum = READER2.read(null, DecoderFactory.defaultFactory.createBinaryDecoder(value, null))
      val result = datum.get("type").toString()
      family + "->" + column + "->" + result + "\n"
    } else {
      family + "->" + column + "->" + new String(value) + "\n"
    }
  }
}
But this is giving me the following error:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2101)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:370)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:369)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.map(RDD.scala:369)
... 74 elided
Caused by: java.io.NotSerializableException: org.apache.avro.Schema$RecordSchema
Serialization stack:
- object not serializable (class: org.apache.avro.Schema$RecordSchema, value: .....
So I want to ask:
Is there any way to make the non-serializable class RecordSchema work with the map function?
Is my approach right up to this point? I would be glad to know about better approaches to handle this kind of data.
I read that handling this in a DataFrame would be a lot easier. I tried to convert the resulting Hadoop RDD into a DataFrame, but I am running blind there as well.

As the exception says, the schema is not serializable. Can you initialize it inside the mapper function, so that it doesn't need to be shipped from the driver to the executors?
Alternatively, you can create a Scala singleton object that contains the schema. One singleton gets initialized on each executor, so when you access any of its members nothing needs to be serialized and sent across the network. This also avoids the unnecessary overhead of re-creating the schema for each and every row in the data.
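A minimal sketch of that singleton approach (the object name and helper are illustrative, and the schema JSON string is left as a placeholder for the contents of schema_e_info):
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}
import org.apache.avro.io.DecoderFactory

// Initialized lazily, once per executor JVM; the closure only captures the
// object's name, so nothing non-serializable travels from the driver.
object EInfoAvro {
  // Paste the e_info schema JSON here (the same string held in schema_e_info).
  val schemaJson: String = "..."
  lazy val schema: Schema = new Schema.Parser().parse(schemaJson)
  lazy val reader = new GenericDatumReader[GenericRecord](schema)

  def parse(bytes: Array[Byte]): GenericRecord =
    reader.read(null, DecoderFactory.get().createBinaryDecoder(bytes, null))
}
Inside the map you would then replace the GenericDatumReader block with something like EInfoAvro.parse(value).get("type").toString, and the Task not serializable error should go away.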
Just for the purpose of checking that you can read the data fine, you can also convert the values to byte arrays on the executors, collect them on the driver and do the deserialization (parsing the AVRO data) in the driver code. This obviously won't scale; it's just to make sure your data looks good and to avoid Spark-related complications while you're writing your prototype code to extract the data.
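A quick, untested sketch of that driver-side check, reusing the variables and imports from the question:
// Only plain byte arrays cross the wire; the HBase Cell objects stay on the executors.
val sampleBytes: Array[Array[Byte]] = data.values
  .flatMap { result =>
    result.rawCells()
      .filter(c => new String(CellUtil.cloneQualifier(c)) == "e_info")
      .map(c => CellUtil.cloneValue(c))
  }
  .take(5)

// Deserialization happens only in the driver JVM, so serializability is not an issue.
val schema = new Schema.Parser().parse(schema_e_info)
val reader = new GenericDatumReader[GenericRecord](schema)
sampleBytes.foreach { bytes =>
  println(reader.read(null, DecoderFactory.get().createBinaryDecoder(bytes, null)))
}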

Related

ParquetProtoWriters creates an unreadable parquet file

My .proto file contains one field of map type.
message Foo {
  ...
  ...
  map<string, uint32> fooMap = 19;
}
I'm consuming messages from Kafka source and trying to write the messages as a parquet file to S3 bucket.
The relevant part of the code looks like this:
val basePath = "s3a:// ..."
env
  .fromSource(source, WatermarkStrategy.noWatermarks(), "source")
  .map(x => toJavaProto(x))
  .sinkTo(
    FileSink
      .forBulkFormat(basePath, ParquetProtoWriters.forType(classOf[Foo]))
      .withOutputFileConfig(
        OutputFileConfig
          .builder()
          .withPartPrefix("foo")
          .withPartSuffix(".parquet")
          .build()
      )
      .build()
  )
  .setParallelism(1)
env.execute()
The result is that a parquet file was actually written to S3, but the file appears to be corrupted. When I try to read the file using the Avro / Parquet Viewer plugin I see this error:
Unable to process file .../Downloads/foo-9366c15f-270e-4939-ad88-b77ee27ddc2f-0.parquet
java.lang.UnsupportedOperationException: REPEATED not supported outside LIST or MAP. Type: repeated group fooMap = 19 { optional binary key (STRING) = 1; optional int32 value = 2; }
    at org.apache.parquet.avro.AvroSchemaConverter.convertFields(AvroSchemaConverter.java:277)
    at org.apache.parquet.avro.AvroSchemaConverter.convert(AvroSchemaConverter.java:264)
    at org.apache.parquet.avro.AvroReadSupport.prepareForRead(AvroReadSupport.java:134)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:185)
    at org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:156)
    at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
    at uk.co.hadoopathome.intellij.viewer.fileformat.ParquetFileReader.getRecords(ParquetFileReader.java:99)
    at uk.co.hadoopathome.intellij.viewer.FileViewerToolWindow$2.doInBackground(FileViewerToolWindow.java:193)
    at uk.co.hadoopathome.intellij.viewer.FileViewerToolWindow$2.doInBackground(FileViewerToolWindow.java:184)
    at java.desktop/javax.swing.SwingWorker$1.call(SwingWorker.java:304)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.desktop/javax.swing.SwingWorker.run(SwingWorker.java:343)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Flink version 1.15
proto 2
There are some breaking changes in parquet-format and parquet-mr. I'm not familiar with Flink, but I guess you have to configure org.apache.flink.formats.parquet.protobuf.ParquetProtoWriters to generate a correct format.
I used parquet-mr directly and encountered the same issue. An avro parquet reader is unable to read the parquet file generated by the following code:
import org.apache.parquet.proto.ProtoParquetWriter;
import org.apache.parquet.proto.ProtoWriteSupport;
...
var conf = new Configuration();
ProtoWriteSupport.setWriteSpecsCompliant(conf, false);
var builder = ProtoParquetWriter.builder(file)
    .withMessage(Xxx.class)
    .withCompressionCodec(CompressionCodecName.GZIP)
    .withWriteMode(Mode.OVERWRITE)
    .withConf(conf);
try (var writer = builder.build()) {
    writer.write(pb.toBuilder());
}
If the config value is changed to true, it will succeed:
ProtoWriteSupport.setWriteSpecsCompliant(conf, true);
Looking at its source code, we can see that this function sets the boolean value of parquet.proto.writeSpecsCompliant in the config.
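In other words, the helper is roughly equivalent to setting that key on the Hadoop Configuration yourself (a small sketch, written in Scala syntax; the key name is the one mentioned above):
// Same effect as ProtoWriteSupport.setWriteSpecsCompliant(conf, true)
conf.setBoolean("parquet.proto.writeSpecsCompliant", true)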
The source code of ParquetProtoWriters.forType shows that it creates a factory with the builder class ParquetProtoWriterBuilder, which uses org.apache.parquet.proto.ProtoWriteSupport internally. I guess you can somehow pass it a correctly configured ProtoWriteSupport.
I also installed this Python tool: https://pypi.org/project/parquet-tools/ to inspect parquet files. List fields generated in the older format look like this:
...
############ Column(f2) ############
name: f2
path: f1.f2
...
and in the newer format like this:
...
############ Column(element) ############
name: element
path: f1.f2.list.element
...
Hope this gives you some direction.
References
https://github.com/apache/parquet-format/blob/54e53e5d7794d383529dd30746378f19a12afd58/LogicalTypes.md#nested-types
https://github.com/apache/parquet-format/pull/51
https://github.com/apache/parquet-mr/pull/463

Get a spark Column from a spark Row

I am new to Scala and Spark, and so I am struggling with a map function I am trying to create.
The map function on the DataFrame hands me a Row (org.apache.spark.sql.Row).
I have been loosely following this article.
val rddWithExceptionHandling = filterValueDF.rdd.map { row: Row =>
  val parsed = Try(from_avro(???, currentValueSchema.value, fromAvroOptions)) match {
    case Success(parsedValue) => List(parsedValue, null)
    case Failure(ex) => List(null, ex.toString)
  }
  Row.fromSeq(row.toSeq.toList ++ parsed)
}
The from_avro function wants to accept a Column (org.apache.spark.sql.Column), however I don't see a way in the docs to get a column from a Row.
I am fully open to the idea that I may be doing this whole thing wrong.
Ultimately my goal is to parse the bytes coming in from a Structured Stream.
Parsed records get written to Delta Table A and failed records to another Delta Table B.
For context the source table looks as follows:
Edit - from_avro returning null on "bad record"
There have been a few comments saying that from_avro returns null if it fails to parse a "bad record". By default from_avro uses mode FAILFAST, which throws an exception if parsing fails. If one sets the mode to PERMISSIVE, an object in the shape of the schema is returned but with all properties being null (also not particularly useful...). Link to the Apache Avro Data Source Guide - Spark 3.1.1 Documentation
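For reference, the mode is passed through the options map that from_avro accepts; a minimal, untested sketch of a PERMISSIVE configuration, assuming Spark 3's overload that takes a java.util.Map (the val name permissiveOptions is illustrative, distinct from the fromAvroOptions defined in the question):
import scala.collection.JavaConverters._

// "mode" selects FAILFAST (throw on malformed records, the default) or
// PERMISSIVE (return a null-filled record); pass this as from_avro's third argument.
val permissiveOptions = Map("mode" -> "PERMISSIVE").asJava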
Here is my original command:
val parsedDf = filterValueDF.select($"topic",
$"partition",
$"offset",
$"timestamp",
$"timestampType",
$"valueSchemaId",
from_avro($"fixedValue", currentValueSchema.value, fromAvroOptions).as('parsedValue))
If there are ANY bad rows the job is aborted with org.apache.spark.SparkException: Job aborted.
A snippet of the log of the exception:
Caused by: org.apache.spark.SparkException: Malformed records are detected in record parsing. Current parse Mode: FAILFAST. To process malformed records as null result, try setting the option 'mode' as 'PERMISSIVE'.
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:111)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:732)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:291)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1615)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:300)
... 10 more
Suppressed: java.lang.NullPointerException
at shaded.databricks.org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.write(NativeAzureFileSystem.java:1099)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.parquet.hadoop.util.HadoopPositionOutputStream.write(HadoopPositionOutputStream.java:50)
at shaded.parquet.org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
at shaded.parquet.org.apache.thrift.transport.TTransport.write(TTransport.java:107)
at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeByteDirect(TCompactProtocol.java:482)
at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeByteDirect(TCompactProtocol.java:489)
at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeFieldBeginInternal(TCompactProtocol.java:252)
at shaded.parquet.org.apache.thrift.protocol.TCompactProtocol.writeFieldBegin(TCompactProtocol.java:234)
at org.apache.parquet.format.InterningProtocol.writeFieldBegin(InterningProtocol.java:74)
at org.apache.parquet.format.FileMetaData$FileMetaDataStandardScheme.write(FileMetaData.java:1184)
at org.apache.parquet.format.FileMetaData$FileMetaDataStandardScheme.write(FileMetaData.java:1051)
at org.apache.parquet.format.FileMetaData.write(FileMetaData.java:949)
at org.apache.parquet.format.Util.write(Util.java:222)
at org.apache.parquet.format.Util.writeFileMetaData(Util.java:69)
at org.apache.parquet.hadoop.ParquetFileWriter.serializeFooter(ParquetFileWriter.java:757)
at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:750)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:135)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.abort(FileFormatDataWriter.scala:84)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$3(FileFormatWriter.scala:297)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1626)
... 11 more
Caused by: java.lang.ArithmeticException: Unscaled value too large for precision
at org.apache.spark.sql.types.Decimal.set(Decimal.scala:83)
at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:577)
at org.apache.spark.sql.avro.AvroDeserializer.createDecimal(AvroDeserializer.scala:308)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$newWriter$16(AvroDeserializer.scala:177)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$newWriter$16$adapted(AvroDeserializer.scala:174)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$getRecordWriter$1(AvroDeserializer.scala:336)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$getRecordWriter$1$adapted(AvroDeserializer.scala:332)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$getRecordWriter$2(AvroDeserializer.scala:354)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$getRecordWriter$2$adapted(AvroDeserializer.scala:351)
at org.apache.spark.sql.avro.AvroDeserializer.$anonfun$converter$3(AvroDeserializer.scala:75)
at org.apache.spark.sql.avro.AvroDeserializer.deserialize(AvroDeserializer.scala:89)
at org.apache.spark.sql.avro.AvroDataToCatalyst.nullSafeEval(AvroDataToCatalyst.scala:101)
... 16 more
In order to get a specific column from the Row object you can use either row.get(i) or the column name with row.getAs[T]("columnName"). Here you can check the details of the Row class.
Then your code would look like this:
val rddWithExceptionHandling = filterValueDF.rdd.map { row: Row =>
  val binaryFixedValue = row.getSeq[Byte](6) // or row.getAs[Seq[Byte]]("fixedValue")
  val parsed = Try(from_avro(binaryFixedValue, currentValueSchema.value, fromAvroOptions)) match {
    case Success(parsedValue) => List(parsedValue, null)
    case Failure(ex) => List(null, ex.toString)
  }
  Row.fromSeq(row.toSeq.toList ++ parsed)
}
Although in your case you don't really need to go into the map function at all, because there you work with primitive Scala types while from_avro works with the Dataframe API. That is why you can't call from_avro directly from map: instances of the Column class can be used only in combination with the Dataframe API, i.e. df.select($"c1"), where c1 is an instance of Column. In order to use from_avro as you initially intended, just type:
filterValueDF.select(from_avro($"fixedValue", currentValueSchema))
As @mike already mentioned, if from_avro fails to parse the AVRO content it will return null. Finally, if you want to separate succeeded rows from failed ones, you could do something like:
val includingFailuresDf = filterValueDF.select(
from_avro($"fixedValue", currentValueSchema) as "avro_res")
.withColumn("failed", $"avro_res".isNull)
val successDf = includingFailuresDf.where($"failed" === false)
val failedDf = includingFailuresDf.where($"failed" === true)
Please be aware that the code was not tested.
From what I understand, you just need to fetch a column value from a row. You can probably do that by getting the value at a specific index using row.get().

NiFi Avro Kafka message nano-timestamp (19 digits) cast to timestamp with milliseconds

I'm now facing an issue converting Kafka's message record of type long for nanoseconds (19 digits) to a string timestamp with milliseconds. The messages come in Avro format and contain different schemas (so we can't statically define one schema), stored in Confluent Schema Registry. The current process is:
1) ConsumeKafkaRecord_2_0, which reads the message and stores the Avro schema coming from Confluent Schema Registry in the avro.schema attribute
2) UpdateAttribute, which looks for a timestamp record pattern in avro.schema and adds "logicalType":"timestamp-micros" (because I can't find a timestamp-nanos type in the Avro specification)
3) ConvertRecord, which converts the Avro flowfile into JSON using avro.schema. It uses the logicalType assigned in the previous step and converts the 19-digit long into yyyy-MM-dd HH:mm:ss.SSSSSS. The issue here is that a 19-digit value is a nano-timestamp, a type missing from the Avro specification, so we can only use the timestamp-micros type and end up with year 51000+ values.
4) ReplaceText - this processor gives us a workaround for the issue described above: we replace the values matching the 5-digit-year pattern with a "correct" datetime (with milliseconds, because Java somehow can't work with microseconds) using the expression: ${'$1':toDate('yyyyy-MM-dd HH:mm:ss.SSSSSS'):toNumber():toString():substring(0, 13):toNumber():toDate():format('yyyy-MM-dd HH:mm:ss.SSS')}
After that we go on with other processors. The workaround works, but with a strange issue - our resulting timestamps differ by a few milliseconds from what we receive in Kafka. I can only guess this is the result of the transformations described above. That's why my question is: is there a better way to handle 19-digit values coming in the Avro messages (the schemas are in Confluent Schema Registry, and the pattern for timestamp fields in the schema is known) so that they are cast into correct millisecond timestamps? Maybe some kind of field value replacement (taking a 13-digit substring of the 19-digit value) in the Avro flowfile content, based on the schema embedded in the avro.schema attribute?
Please let me know if something is unclear or if additional details are needed. Thanks a lot in advance!
The following solution worked in our case: a Groovy script which converts one Avro file into another (adjusting both schema and content):
@Grab('org.apache.avro:avro:1.8.2')
import org.apache.avro.*
import org.apache.avro.file.*
import org.apache.avro.generic.*

// Function which traverses all records (including nested ones)
def convertAvroNanosecToMillisec(record) {
    record.getSchema().getFields().forEach { Schema.Field field ->
        if (record.get(field.name()) instanceof org.apache.avro.generic.GenericData.Record) {
            convertAvroNanosecToMillisec(record.get(field.name()))
        }
        if (field.schema().getType().getName() == "union") {
            field.schema().getTypes().forEach { Schema unionTypeSchema ->
                if (unionTypeSchema.getProp("connect.name") == "io.debezium.time.NanoTimestamp") {
                    record.put(field.name(), Long.valueOf(record.get(field.name()).toString().substring(0, 13)))
                    unionTypeSchema.addProp("logicalType", "timestamp-millis")
                }
            }
        } else {
            if (field.schema().getProp("connect.name") == "io.debezium.time.NanoTimestamp") {
                record.put(field.name(), Long.valueOf(record.get(field.name()).toString().substring(0, 13)))
                field.schema().addProp("logicalType", "timestamp-millis")
            }
        }
    }
    return record
}

// Start flowfile processing
def flowFile = session.get()
if (!flowFile) return

try {
    flowFile = session.write(flowFile, { inStream, outStream ->
        // Define the Avro reader and writer
        DataFileStream<GenericRecord> reader = new DataFileStream<>(inStream, new GenericDatumReader<GenericRecord>())
        DataFileWriter<GenericRecord> writer = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>())
        def contentSchema = reader.schema // source Avro schema
        def records = [] // temporary list for the processed records

        // Read all records from the incoming file and add them to the temporary list
        reader.forEach { GenericRecord contentRecord ->
            records.add(convertAvroNanosecToMillisec(contentRecord))
        }

        // Create a file writer object with the adjusted schema
        writer.create(contentSchema, outStream)

        // Append the records from the temporary list to the output file and close the writer
        records.forEach { GenericRecord contentRecord ->
            writer.append(contentRecord)
        }
        writer.close()
    } as StreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Error appending new record to avro file', e)
    flowFile = session.penalize(flowFile)
    session.transfer(flowFile, REL_FAILURE)
}

SparkSQL performance issue with collect method

We are currently facing a performance issue in a Spark SQL application written in Scala. The application flow is described below.
The Spark application reads a text file from an input HDFS directory.
It creates a dataframe on top of the file by programmatically specifying the schema. This dataframe is an exact replica of the input file kept in memory and has around 18 columns.
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema)
It then creates a filtered dataframe from the dataframe constructed in step 2. This dataframe contains the unique account numbers, obtained with distinct.
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
Using the two dataframes constructed in steps 2 and 3, we get all the records which belong to one account number and run some JSON parsing logic on top of the filtered data.
var filtrEqpDF =
eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
Finally the JSON-parsed data is put into an HBase table.
Here we are facing performance issues with the collect calls on the dataframes, because collect fetches all the data onto a single node and then does the processing, losing the benefit of parallel processing.
Also, in the real scenario we can expect around 10 billion records, so collecting all of them on the driver node might crash the program due to memory or disk space limitations.
I don't think the take method can be used in our case, since it fetches only a limited number of records at a time. We have to get all the unique account numbers from the whole dataset, so I am not sure take will suit our requirements.
I would appreciate any help on avoiding the collect calls and on other best practices to follow. Code snippets/suggestions/git links will be very helpful if anyone has faced similar issues.
Code snippet
val eqpSchemaString = "accountnumber ....."
val eqpSchema = StructType(eqpSchemaString.split(" ").map(fieldName =>
  StructField(fieldName, StringType, true)))
val eqpRdd = sc.textFile(inputPath)
val eqpRowRdd = eqpRdd.map(_.split(",")).map(eqpRow => Row(eqpRow(0).trim, eqpRow(1).trim, ....))
var eqpDF = sqlContext.createDataFrame(eqpRowRdd, eqpSchema)
var distAccNrsDF = eqpDF.select("accountnumber").distinct().collect()
distAccNrsDF.foreach { data =>
  var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'").collect()
  var result = new JSONObject()
  result.put("jsonSchemaVersion", "1.0")
  val firstRowAcc = filtrEqpDF(0)
  // Json parsing logic
  {
    .....
    .....
  }
}
The approach usually taken in this kind of situation is:
Instead of collect, invoke foreachPartition: foreachPartition applies a function to each partition (represented by an Iterator[Row]) of the underlying DataFrame separately (the partition being the atomic unit of parallelism of Spark)
the function will open a connection to HBase (thus making it one per partition) and send all the contained values through this connection
This means that every executor opens a connection (which is not serializable, but lives within the boundaries of the function and thus doesn't need to be sent across the network) and independently sends its contents to HBase, without any need to collect all the data on the driver (or any one node, for that matter).
It looks like you are reading a CSV file, so probably something like the following will do the trick:
spark.read.csv(inputPath) // Using DataFrameReader, but your way works too
  .foreachPartition { rows =>
    val conn = ???             // Create HBase connection
    for (row <- rows) {        // Loop over the iterator
      val data = parseJson(row) // Your parsing logic
      ???                       // Use 'conn' to save 'data'
    }
  }
You can avoid collect in your code if you have a large dataset.
collect returns all the elements of the dataset as an array to the driver program. It is usually useful only after a filter or another operation that returns a sufficiently small subset of the data.
It can also cause the driver to run out of memory, because collect() fetches the entire RDD/DF onto a single machine.
I have just edited your code, which should work for you.
var distAccNrsDF = eqpDF.select("accountnumber").distinct()
distAccNrsDF.foreach { data =>
  var filtrEqpDF = eqpDF.where("accountnumber='" + data.getString(0) + "'")
  var result = new JSONObject()
  result.put("jsonSchemaVersion", "1.0")
  val firstRowAcc = filtrEqpDF(0)
  // Json parsing logic
  {
    .....
    .....
  }
}

Reading different Schema in Parquet Partitioned Dir structure

I have the following partitioned Parquet data on HDFS, written using Spark:
year
|---Month
|----monthlydata.parquet
|----Day
|---dailydata.parquet
Now when I read a df from the year path, Spark reads dailydata.parquet. How can I read monthlydata from all partitions? I tried setting the option mergeSchema = true, which gives an error.
I would urge you to stop doing the following:
year
|---Month
|----monthlydata.parquet
|----Day
|---dailydata.parquet
When you read from year/month/ or even just year/, you won't just get monthlydata.parquet, you'll also be getting dailydata.parquet. I can't speak much to the error you're getting (please post it), but my humble suggestion would be to separate the paths in HDFS since you're already duplicating the data:
dailies
|---year
|---Month
|----Day
|---dailydata.parquet
monthlies
|---year
|---Month
|----monthlydata.parquet
Is there a reason you were keeping them in the same directories?
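With the split layout above, each granularity could then be read on its own; a rough, untested sketch (the paths and glob depth are illustrative):
// Each dataset now has a single, consistent schema under its own root.
val monthlyDF = sqlContext.read.parquet("/hdfs/monthlies/*/*/monthlydata.parquet")
val dailyDF   = sqlContext.read.parquet("/hdfs/dailies/*/*/*/dailydata.parquet")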
However, if you insist on this structure, use something like this:
import scala.util.{Success, Try}

val schema = "dailydata1"
val dfList = dates.map { case (month, day) =>
  Try(sqlContext.read.parquet(s"/hdfs/table/month=$month/day=$day/$schema.parquet"))
}
val dfUnion = dfList.collect { case Success(v) => v }.reduce { (a, b) =>
  a.unionAll(b)
}
Where you can toggle the schema between dailydata1, dailydata2, etc.