Apache Flink dump to multiple files by key (group) - scala

I'm doing some processing on data, and I want to dump the processed data to multiple files, based on the group.
Example of the data:
A,123
B,200
A,400
B,400
So my desired output is:
file 1:
A,123
A,400
file 2:
B,200
B,400
(The number of files is based on the number of groups).
So basically I have this simple code for exampleData:
exampleData.groupBy(0).sortGroup(1, Order.ASCENDING)
The type is now GroupedDataSet. I want to output each group to a different CSV file. How can I do this? I tried using reduceGroup so that I could work with each group individually, but I couldn't make it work.
I'm using Scala version 2.11.12, and Flink version 1.11.0
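For reference, a naive approach I could imagine (collecting the distinct keys on the driver and registering one CSV sink per key) would look roughly like this; I'm not sure it's idiomatic, it assumes exampleData is a DataSet[(String, Int)] built from an ExecutionEnvironment called env, the output path is a placeholder, and it scans the data once per key, so it would only make sense for a small number of groups:

import org.apache.flink.api.scala._
import org.apache.flink.api.common.operators.Order
import org.apache.flink.core.fs.FileSystem

// Collect the distinct keys on the driver, then register one CSV sink per key.
val keys = exampleData.map(_._1).distinct().collect()

keys.foreach { key =>
  exampleData
    .filter(_._1 == key)
    .sortPartition(1, Order.ASCENDING)
    .setParallelism(1) // keep each group in a single output file
    .writeAsCsv(s"/output/group-$key.csv", writeMode = FileSystem.WriteMode.OVERWRITE)
}

env.execute("one file per group")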

Related

Azure Data Factory Overwrite Existing Folder/Partitions in ADLS Gen2

I have my ADF sink settings set to clear the folder, and the output is partitioned by an ID.
But the sink folder already contains other partitions that I do not want to remove.
When an ID comes in, I just want to clear that specific folder/partition, but it is actually clearing the full folder rather than just the partition. Am I missing a setting?
To overwrite only the partitions that appear in the new data and keep the rest of the old partition data, you can make use of the pre-commands in the settings tab of the dataflow sink. Look at the following demonstration.
The following is my initial data, which I have partitioned based on id.
Now let's say the following is the new data that you are going to write. Here, according to the requirement, you want to overwrite the partitions that are present in it and keep the rest as they are.
First, we need to get the distinct key column values (id in my case), then use them in the pre-commands of the sink settings to remove files only from those partitions.
Take the above data (the data from the 2nd image) as the dataflow1 source. Apply a derived column transformation to add a new column with a constant value, say 'xxx' (so you can group on this column and apply the collect() aggregate function).
Group by this new column and use the aggregate distinct(collect(id)).
Now, for the sink, choose Cache and check 'Write to activity output'. When you run this dataflow in the pipeline, the debug output will contain the resulting array of distinct id values.
Send this array value to a parameter created in another dataflow, where you make the necessary changes and overwrite the partitions. Give the following dynamic content:
#activity('Data flow1').output.runStatus.output.sink1.value[0].val
Now, in this second dataflow, the source is the same data used in the first dataflow. For the sink, instead of selecting the clear-the-folder option, scroll down to the pre/post commands section, where you give the following dynamic content:
concat('rm /output/id=',toString($parts),'/*')
Now when you run this pipeline, it executes successfully and overwrites only the required partitions, while keeping the other partitions.
A sample partition (id=2) shows that the data has been overwritten (only one part file with the required data will be available).
Why don't you specify the filename and write it to one single file?

Pick up changes in json files that are being read by pyspark readstream?

I have json files where each file describes a particular entity, including its state. I am trying to pull these into Delta by using readStream and writeStream. This is working perfectly for new files. These json files are frequently updated (i.e., states are changed, comments added, history items added, etc.). The changed json files are not pulled in with the readStream. I assume that is because readStream does not reprocess items. Is there a way around this?
One thing I am considering is changing my initial write of the json to add a timestamp to the file name, so that it becomes a different record to the stream (I already have to do de-duplication in my writeStream anyway), but I am trying not to modify the code that writes the json, as it is already being used in production.
Ideally I would like to find something like the changeFeed functionality for Cosmos Db, but for reading json files.
Any suggestions?
Thanks!
This is not supported by Spark Structured Streaming - after a file is processed, it won't be processed again.
The closest thing to your requirement exists only in Databricks' Autoloader - it has a cloudFiles.allowOverwrites option that allows modified files to be reprocessed.
P.S. Potentially, if you use the cleanSource option for the file source (https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#input-sources), it may reprocess files, but I'm not 100% sure.
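Roughly, the Auto Loader variant would look something like this on Databricks (the schema, paths and checkpoint location here are just placeholders, not from your setup):

import org.apache.spark.sql.types.{StringType, StructType}

// Placeholder schema for the entity json files.
val jsonSchema = new StructType()
  .add("id", StringType)
  .add("state", StringType)

val entities = spark.readStream
  .format("cloudFiles")
  .option("cloudFiles.format", "json")
  .option("cloudFiles.allowOverwrites", "true") // pick up files that are modified in place
  .schema(jsonSchema)
  .load("/mnt/raw/entities/")

entities.writeStream
  .format("delta")
  .option("checkpointLocation", "/mnt/checkpoints/entities/")
  .start("/mnt/delta/entities/")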

Is there a way to add literals as columns to a spark dataframe when reading multiple files at once if the column values depend on the filepath?

I'm trying to read a lot of avro files into a spark dataframe. They all share the same s3 filepath prefix, so initially I was running something like:
path = "s3a://bucketname/data-files"
df = spark.read.format("avro").load(path)
which was successfully identifying all the files.
The individual files are something like:
"s3a://bucketname/data-files/timestamp=20201007123000/id=update_account/0324345431234.avro"
Upon attempting to manipulate the data, the code kept erroring out with a message that one of the files was not an Avro data file. The actual error message received is: org.apache.spark.SparkException: Job aborted due to stage failure: Task 62476 in stage 44102.0 failed 4 times, most recent failure: Lost task 62476.3 in stage 44102.0 (TID 267428, 10.96.134.227, executor 9): java.io.IOException: Not an Avro data file.
To circumvent the problem, I was able to get the explicit filepaths of the avro files I'm interested in. After putting them in a list (file_list), I was successfully able to run spark.read.format("avro").load(file_list).
The issue now is this - I'm interested in adding a number of fields to the dataframe that are part of the filepath (i.e. the timestamp and the id from the example above).
While using just the bucket and prefix filepath to find the files (approach #1), these fields were automatically appended to the resulting dataframe. With the explicit filepaths, I don't get that advantage.
I'm wondering if there's a way to include these columns while using spark to read the files.
Sequentially processing the files would look something like:
for file in file_list:
    df = spark.read.format("avro").load(file)
    id, timestamp = parse_filename(file)
    df = df.withColumn("id", lit(id))\
           .withColumn("timestamp", lit(timestamp))
but there are over 500k files and this would take an eternity.
I'm new to Spark, so any help would be much appreciated, thanks!
Two separate things to tackle here:
Specifying Files
Spark has built-in handling for reading all files of a particular type in a given path. As @Sri_Karthik suggested, try supplying a path like "s3a://bucketname/data-files/*.avro" (if that doesn't work, maybe try "s3a://bucketname/data-files/**/*.avro"... I can't remember the exact pattern-matching syntax Spark uses), which should grab only the avro files and get rid of the error you are seeing from non-avro files in those paths. In my opinion this is more elegant than manually fetching the file paths and explicitly specifying them.
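For example, something along these lines (the glob depth is a guess based on the sample path above - adjust it to your layout):

val df = spark.read
  .format("avro")
  .load("s3a://bucketname/data-files/*/*/*.avro") // timestamp=... and id=... directories, then the .avro files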
As an aside, the reason you are seeing this error is likely because folders typically get marked with metadata files like .SUCCESS or .COMPLETED to indicate they are ready for consumption.
Extracting metadata from filepaths
If you check out this stackoverflow question, it shows how you can add the filename as a new column (both for scala and pyspark). You could then use the regexp_extract function to parse out the desired elements from that filename string. I've never used scala in spark so I can't help you there, but it should be similar to the pyspark version.
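A rough sketch of that idea in scala (the regexes and column names are just guesses based on the sample path; the pyspark version uses the same function names):

import org.apache.spark.sql.functions.{col, input_file_name, regexp_extract}

val withPath = spark.read
  .format("avro")
  .load("s3a://bucketname/data-files/*/*/*.avro")
  .withColumn("path", input_file_name()) // full s3a path of the source file for each row

val enriched = withPath
  .withColumn("timestamp", regexp_extract(col("path"), "timestamp=([^/]+)", 1))
  .withColumn("id", regexp_extract(col("path"), "id=([^/]+)", 1))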
Why don't you try reading the files first by using the wholeTextFiles method and adding the path name into the data itself at the beginning? Then you can filter out the file names from the data and add them as a column while creating the dataframe. I agree it's a two-step process, but it should work. To get the timestamp of a file you will need a filesystem object, which is not serializable, i.e. it can't be used in Spark's parallelized operations. So you will have to create a local collection with the file and timestamp and join it somehow with the RDD you created with wholeTextFiles.

Accessing value of df.write.partitionBy in file name and performing transformations while saving

I am doing something like
df.write.mode("overwrite").partitionBy("sourcefilename").format("orc").save("s3a://my/dir/path/output-data");
The above code does generate the ORC files successfully under the partition directories; however, the file naming is something like part-0000.
I need to change the partitionBy (sourcefilename) value while saving, e.g. if the source file name is ABC then the partition directory (which is created during the write) should be 123, if DEF then 345, and so on.
How can we do the above requirements? I am using AWS S3 for reading and writing of files.
I am using Spark 2.x and Scala 2.11.
Given that this example shows the general DataFrame writer format:
df.write.partitionBy("EVENT_NAME","dt","hour").save("/apps/hive/warehouse/db/sample")
your approach should be to create an extra column xc that is set by a UDF, or some def or val, which derives xc from the source file name, e.g. ABC --> 123, etc. Then you partition by this xc column and accept that part-xxxxx is just how it works in Spark, as sketched below.
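A rough sketch of that approach (the ABC --> 123 mapping below is just the example from the question; substitute your real rule):

import org.apache.spark.sql.functions.{col, udf}

// Hypothetical mapping from source file name to the desired partition value.
val partitionValue = udf { name: String =>
  name match {
    case "ABC" => "123"
    case "DEF" => "345"
    case other => other
  }
}

df.withColumn("xc", partitionValue(col("sourcefilename")))
  .write
  .mode("overwrite")
  .partitionBy("xc") // produces directories like xc=123/ containing part-xxxxx files
  .format("orc")
  .save("s3a://my/dir/path/output-data")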
You could then rename the files via a script yourself subsequently.
The part-1234 style is how the work is partitioned: different tasks get their own partition of the split data source and save it with that numbering to guarantee that no other task generates output with the same name.
This is fundamental to getting the performance of parallel execution.

Using Talend Open Studio DI to extract a value from a unique 1st row before continuing to process columns

I have a number of excel files where there is a line of text (and a blank row) above the header row for the table.
What would be the best way to process the file so I can extract the text from that row AND include it as a column when appending multiple files? Is it possible without having to process each file twice?
Example
This file was created on machine A on 01/02/2013
Task|Quantity|ErrorRate
0102|4550|6 per minute
0103|4004|5 per minute
And end up with the data from multiple similar files
Task|Quantity|ErrorRate|Machine|Date
0102|4550|6 per minute|machine A|01/02/2013
0103|4004|5 per minute|machine A|01/02/2013
0467|1264|2 per minute|machine D|02/02/2013
I put together a small, crude sample of how it can be done. I call it crude because (a) it is not dynamic: you can add more files to process, but you need to know how many files in advance of building your job; and (b) it shows the basic concept, but would require more work to suit your needs. For example, in my test files I simply have "MachineA" or "MachineB" in the first line. You will need to parse that data out to obtain the machine name and the date.
But here is how my sample works. Each Excel file is set up as two inputs. For the header, the tFileInputExcel is configured to read only the first line, while the body tFileInputExcel is configured to start reading at line 4.
In the tMap they are combined (not joined) into the output schema. This is done for the Machine A and Machine B Excel files, then those tMaps are combined with a tUnite for the final output.
As you can see in the log row, the data is combined and includes the header info.