I have the below requirement in my project, and we are attempting to use PySpark for the data processing.
We receive sensor data as Parquet files, one file per vehicle. Each file contains readings from many sensors, but it is structured data in Parquet format. The average file size is about 200MB.
Assume I received the files below in one batch, ready for processing.
Train FileSize Date
X1 210MB 05-Sep-18 12:10 AM
X1 280MB 05-Sep-18 05:10 PM
Y1 220MB 05-Sep-18 04:10 AM
Y1 241MB 05-Sep-18 06:10 PM
At the end of the processing, I need either one aggregated .csv file per source file, or one master file with the aggregated data for all of these vehicles.
I am aware that the HDFS default block size is 128MB, so each file will be split into 2 blocks. How can I accomplish this requirement using PySpark? Is it possible to process all of these files in parallel?
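For reference, here is a minimal PySpark sketch of the kind of pipeline I have in mind (the directory, the sensor_value column, and the output path are just placeholders):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("vehicle-aggregation").getOrCreate()

# Read every Parquet file in the batch at once; Spark splits the files into
# tasks (roughly one per HDFS block) and processes them in parallel.
df = (spark.read.parquet("/data/batch/*.parquet")
           .withColumn("source_file", F.input_file_name()))

# Example aggregation: average of a placeholder sensor column per source file.
agg = df.groupBy("source_file").agg(F.avg("sensor_value").alias("avg_sensor_value"))

# One master CSV with the aggregated data for all vehicles; the result is small,
# so coalesce(1) is acceptable here. For one output folder per source file,
# partitionBy("source_file") could be used instead.
(agg.coalesce(1)
    .write.mode("overwrite")
    .option("header", True)
    .csv("/data/output/master"))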
Please let me know your thoughts
I had a similar problem, and it seems that I found a way:
1. Get a list of files.
2. Parallelize this list (distribute it among all nodes).
3. Write a function that reads the content of all files from the portion of the big list that was distributed to the node.
4. Run it with mapPartitions, then collect the result as a list, where each element is the collected content of one file.
For JSON files stored on AWS S3:

import subprocess as sp

def read_files_from_list(file_list):
    # Reads the files in file_list.
    # Returns their content as a list of strings, one JSON document per string: ['{}', '{}', ...]
    out = []
    for x in file_list:
        # x is a full path, e.g. 's3://bucket/folder/1.json'
        content = sp.check_output(['aws', 's3', 'cp', x, '-'])  # content of the file
        out.append(content)
    return out  # content of all files from file_list as a list of strings, one JSON per string

file_list = ['f1.json', 'f2.json', ...]
ps3 = "s3://bucket/folder/"
full_path_chunk = [ps3 + f for f in file_list]  # list of strings with the full path of each file
n_parts = 100
rdd1 = sc.parallelize(full_path_chunk, n_parts)  # distribute the file paths among the nodes
list_of_json_strings = rdd1.mapPartitions(read_files_from_list).collect()
Then, if necessary, you can create a Spark dataframe like this:

rdd2 = sc.parallelize(list_of_json_strings)  # this is a trick! via http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets
df_spark = sqlContext.read.json(rdd2)
The function read_files_from_list is just an example; it should be changed to read files from HDFS using Python tools.
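For example, a hypothetical HDFS variant could shell out to the hdfs CLI instead of the aws CLI (a sketch only; it assumes the hdfs client is installed on every worker node):

import subprocess as sp

def read_files_from_hdfs_list(file_list):
    # Same idea as read_files_from_list, but streams each file from HDFS
    # with 'hdfs dfs -cat'. Each x is a full HDFS path, e.g. 'hdfs:///data/1.json'.
    out = []
    for x in file_list:
        content = sp.check_output(['hdfs', 'dfs', '-cat', x])
        out.append(content)
    return out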
Hope this helps :)
You can put all of the input files in the same directory and then pass the path of that directory to Spark. You can also use globbing, like /data_dir/*.csv.
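For example, a minimal sketch (the directory and header option are placeholders):

# Spark expands the glob and reads all matching CSVs into one dataframe in parallel.
df = spark.read.option("header", True).csv("/data_dir/*.csv")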
I encountered a similar situation recently.
You can pass a list of file paths to the Spark read API, e.g. spark.read.json(input_file_paths) (source). This will load all of the files into a single dataframe, and all of the transformations you eventually perform will be done in parallel by multiple executors, depending on your Spark config.
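Since the batch in this question is Parquet, a sketch of that idea might look like this (the file names are made up for illustration):

input_file_paths = [
    "/data/batch/X1_morning.parquet",
    "/data/batch/X1_evening.parquet",
    "/data/batch/Y1_morning.parquet",
]

# Passing multiple paths loads everything into a single dataframe;
# the executors read the files in parallel.
df = spark.read.parquet(*input_file_paths)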
I have been struggling with this issue for a few days now, and I decided to see if someone more experienced could help me out. I am currently developing a data analysis program designed to load and manipulate various data files. I have 3 folders, each containing 30 files of one type, represented by trial_001...trial_030, trial_n3d_001...trial_n3d_030, and trial_com_n3d_001...trial_com_n3d_030. The files are similar but differ in the types of data as well as in the total number of data columns and rows. Currently, the portion of my code that loads the data looks like this:
cd = dir('*.csv');                        % all csv files in the current working directory
n = length(cd);
data = cell(1, n);
for files = 1 : n                         % for the first file up to the total number of files
  data{files} = csvread(cd(files).name);  % read each csv file into its own cell
endfor
data = cell2mat(data);                    % combine into a single large dataset
This successfully loops through my current working directory, obtains all of the files, and then puts them into a single large dataset, as intended, and I am able to perform all of my calculations. The problem is that I cannot seem to specify subfolders with this method; it only works if I manually load directly from one of the folders. I need the program to work on multiple computers, and I would prefer not to have to set the load path within the code each time; instead, I would like to use the path I choose manually when starting Octave in the file browser and simply specify generic folder names.
So my question is: how do I do exactly what I am doing currently, but from the parent folder containing all three folders and their respective files, and search each of these folders at different points in my code? I want my data variable to act as a working dataset and to load each of the three types of files at different times and perform different calculations (effectively resetting that data variable after loading all of the files from each folder and performing the calculations). I have tried addpath, genpath, etc., as well as manipulating the current directory and creating variables representing each folder's location, but I cannot seem to get it to work. Any suggestions?
I'm trying to read a lot of avro files into a spark dataframe. They all share the same s3 filepath prefix, so initially I was running something like:
path = "s3a://bucketname/data-files"
df = spark.read.format("avro").load(path)
which was successfully identifying all the files.
The individual files are something like:
"s3a://bucketname/data-files/timestamp=20201007123000/id=update_account/0324345431234.avro"
Upon attempting to manipulate the data, the code kept erroring out with a message that one of the files was not an Avro data file. The actual error message received is: org.apache.spark.SparkException: Job aborted due to stage failure: Task 62476 in stage 44102.0 failed 4 times, most recent failure: Lost task 62476.3 in stage 44102.0 (TID 267428, 10.96.134.227, executor 9): java.io.IOException: Not an Avro data file.
To work around the problem, I was able to get the explicit file paths of the Avro files I'm interested in. After putting them in a list (file_list), I was able to run spark.read.format("avro").load(file_list) successfully.
The issue now is this: I'm interested in adding a number of fields to the dataframe that are part of the file path (i.e. the timestamp and the id from the example above).
While using just the bucket and prefix file path to find the files (approach #1), these fields were automatically appended to the resulting dataframe. With the explicit file paths, I don't get that advantage.
I'm wondering if there's a way to include these columns while using spark to read the files.
Sequentially processing the files would look something like:
from pyspark.sql.functions import lit

for file in file_list:
    df = spark.read.format("avro").load(file)
    id, timestamp = parse_filename(file)  # helper that pulls the id and timestamp out of the path
    df = df.withColumn("id", lit(id)) \
           .withColumn("timestamp", lit(timestamp))
but there are over 500k files and this would take an eternity.
I'm new to Spark, so any help would be much appreciated, thanks!
Two separate things to tackle here:
Specifying Files
Spark has built-in handling for reading all files of a particular type in a given path. As #Sri_Karthik suggested, try supplying a path like "s3a://bucketname/data-files/*.avro" (if that doesn't work, maybe try "s3a://bucketname/data-files/**/*.avro"; I can't remember the exact pattern-matching syntax Spark uses). That should grab only the Avro files and get rid of the error you are seeing from non-Avro files in those paths. In my opinion this is more elegant than manually fetching the file paths and explicitly specifying them.
As an aside, the reason you are seeing this is likely that folders typically get marked with metadata files like .SUCCESS or .COMPLETED to indicate that they are ready for consumption.
Extracting metadata from filepaths
If you check out this Stack Overflow question, it shows how you can add the filename as a new column (for both Scala and PySpark). You could then use the regexp_extract function to parse out the desired elements from that filename string. I've never used Scala in Spark, so I can't help you there, but it should be similar to the PySpark version.
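A rough PySpark sketch of that approach (the regular expressions assume paths shaped like the example above):

from pyspark.sql import functions as F

df = spark.read.format("avro").load("s3a://bucketname/data-files/")

df = (df.withColumn("path", F.input_file_name())
        .withColumn("timestamp", F.regexp_extract("path", r"timestamp=(\d+)", 1))
        .withColumn("id", F.regexp_extract("path", r"id=([^/]+)/", 1)))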
Why don't you try reading the files first using the wholeTextFiles method and adding the path name into the data itself at the beginning? Then you can filter out the file names from the data and add them as a column while creating the dataframe. I agree it's a two-step process, but it should work. To get the timestamp of a file you would need a FileSystem object, which is not serializable, i.e. it can't be used in Spark's parallelized operations. So you would have to create a local collection of files and timestamps and join it somehow with the RDD you created with wholeTextFiles.
I am doing something like this:
df.write.mode("overwrite").partitionBy("sourcefilename").format("orc").save("s3a://my/dir/path/output-data");
The above code successfully generates ORC files with the partition directories; however, the file naming is something like part-0000.
I need to change the partitionBy (sourcefilename) value while saving, e.g. if the source file name is ABC then the partition directory (which is created during the write) should be 123; if DEF, then 345; and so on.
How can we meet the above requirement? I am using AWS S3 for reading and writing files.
I am using Spark 2.x and Scala 2.11.
Given that this example shows the general DF writer format:
df.write.partitionBy("EVENT_NAME","dt","hour").save("/apps/hive/warehouse/db/sample")
your approach should be to create an extra column xc that is set by a UDF, or by some def or val, that sets xc according to the name, e.g. ABC --> 123, etc. Then you partition by this xc column and accept that part-xxxxx is just how it works in Spark.
You could then rename the files via a script yourself subsequently.
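A hedged PySpark sketch of that idea (you are on Scala, but the approach translates directly; the ABC/DEF mapping values are just the examples from the question):

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Example mapping from source file name to the desired partition value.
name_to_code = {"ABC": "123", "DEF": "345"}

to_code = F.udf(lambda name: name_to_code.get(name, "unknown"), StringType())

(df.withColumn("xc", to_code(F.col("sourcefilename")))
   .write.mode("overwrite")
   .partitionBy("xc")
   .format("orc")
   .save("s3a://my/dir/path/output-data"))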
The part-1234 style is how the work is partitioned: different tasks each get their own partition of the split data source and save it with that numbering to guarantee that no other task generates output with the same name.
This is fundamental to getting the performance of parallel execution.
I have a process that pushes a bunch of data to the blob store every hour, creating the following folder structure inside my storage container:
/year=16/Month=03/Day=17/Hour=16/mydata.csv
/year=16/Month=03/Day=17/Hour=17/mydata.csv
and so on
From inside my Spark context I want to access all of the mydata.csv files and process them. I figured out that I needed to set sc.hadoopConfiguration.set("mapreduce.input.fileinputformat.input.dir.recursive","true") so that I could use a recursive search like the one below:
val csvFile2 = sc.textFile("wasb://mycontainer@mystorage.blob.core.windows.net/*/*/*/mydata.csv")
but when I execute the following command to see how many files I have received, it gives me some really large number, like below:
csvFile2.count
res41: Long = 106715282
Ideally it should return 24*16 = 384. I also verified on the container that it only has 384 mydata.csv files, but for some reason it returns 106715282.
Can someone please help me understand where I went wrong?
Regards
Kiran
SparkContext has two similar methods: textFile and wholeTextFiles.
textFile loads each line of each file as a record in the RDD. So count() will return the total number of lines across all of the files (which in most cases, such as yours, will be a large number).
wholeTextFiles loads each entire file as a record in the RDD. So count() will return the total number of files (384 in your case).
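A quick sketch of the difference, using the same kind of path as above:

path = "wasb://mycontainer@mystorage.blob.core.windows.net/*/*/*/mydata.csv"

lines = sc.textFile(path)        # one record per line -> count() = total lines across all files
files = sc.wholeTextFiles(path)  # one record per file, as (path, content) -> count() = number of files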
I have been struggling to read nested folders stored in one of my buckets on S3, using Scala.
I wrote a script with my credentials. In the bucket there are many folders. Let's say one folder's name is "folder1". In this folder there are many subfolders, and so on. I want to get the names of each subfolder (and of each subfolder inside them) for folder1.
val yourAWSCredentials = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY)
val amazonS3Client = new AmazonS3Client(yourAWSCredentials)
print(amazonS3Client.listObjects(bucketName,"folder1").getObjectSummaries())
But this does not return the structure I need. Maybe there is an easier way to get the path?
Amazon S3 is not a regular hierarchical file system. It does not actually have folders.
You need to understand S3 prefixes and delimiters. See Listing Keys Hierarchically Using a Prefix and Delimiter.
Also see Max files per directory in S3.
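For illustration, a small Python/boto3 sketch of listing the immediate "subfolders" under folder1 as common prefixes (the bucket name is a placeholder; the Java/Scala SDK's ListObjectsRequest takes the same Prefix and Delimiter parameters):

import boto3

s3 = boto3.client("s3")

# With Delimiter="/", S3 groups keys by the next path segment, so
# CommonPrefixes acts like the list of immediate "subfolders".
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="folder1/", Delimiter="/")

for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])  # e.g. "folder1/subfolderA/"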