How to read line with comma-separated fields from file? - scala

I have a task to read a positional file. I am able to read a positional file with hard-coded data lengths in my code, but the task is to read the data lengths from an external file.
val lengths = Seq(3,10,5,4) // <-- I'd like to read it from an external file

Say, you have a file with the following content (that corresponds to the positions):
$ cat positions.csv
3,10,5,4
In Scala, you could read the file as follows:
val lengths = scala.io.Source.
  fromFile("positions.csv").
  getLines.
  take(1).
  toArray.
  head.
  split(",").
  map(_.toInt).
  toSeq
scala> lengths.foreach(println)
3
10
5
4
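Once lengths is available, it can be used to cut each fixed-width record into fields. Here is a minimal sketch; the helper name and the sample line are made up for illustration:
// Hypothetical helper: cut a fixed-width line into fields using the lengths read above.
def splitFixedWidth(line: String, lengths: Seq[Int]): Seq[String] = {
  val offsets = lengths.scanLeft(0)(_ + _) // running start positions, e.g. 0, 3, 13, 18, 22
  offsets.zip(offsets.tail).map { case (start, end) => line.substring(start, end) }
}

// splitFixedWidth("abc0123456789defghijkl", Seq(3, 10, 5, 4))
// => Seq("abc", "0123456789", "defgh", "ijkl")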

Related

How can I have nice file names & efficient storage usage in my Foundry Magritte dataset export?

I'm working on exporting data from Foundry datasets in parquet format using various Magritte export tasks to an ABFS system (but the same issue occurs with SFTP, S3, HDFS, and other file based exports).
The datasets I'm exporting are relatively small, under 512 MB in size, which means they don't really need to be split across multiple parquet files, and putting all the data in one file is enough. I've done this by ending the previous transform with a .coalesce(1) to get all of the data in a single file.
The issues are:
By default the file name is part-0000-<rid>.snappy.parquet, with a different rid on every build. This means that whenever a new file is uploaded, it appears in the same folder as an additional file; the only way to tell which is the newest version is by the last modified date.
Every version of the data is stored in my external system, which takes up unnecessary storage unless I frequently go in and delete old files.
All of this adds unnecessary complexity to my downstream system; I just want to be able to pull the latest version of the data in a single step.
This is possible by renaming the single parquet file in the dataset so that it always has the same file name; that way the export task will overwrite the previous file in the external system.
This can be done using raw file system access. The write_single_named_parquet_file function below validates its inputs, creates a file with a given name in the output dataset, then copies the file in the input dataset to it. The result is a schemaless output dataset that contains a single named parquet file.
Notes
The build will fail if the input contains more than one parquet file; as pointed out in the question, calling .coalesce(1) (or .repartition(1)) is necessary in the upstream transform.
If you require transaction history in your external store, or your dataset is much larger than 512 MB, this method is not appropriate: only the latest version is kept, and you likely want multiple parquet files for use in your downstream system. The createTransactionFolders (put each new export in a different folder) and flagFile (create a flag file once all files have been written) options can be useful in this case.
The transform does not require any Spark executors, so it is possible to use @configure() to give it a driver-only profile. Giving the driver additional memory should fix out-of-memory errors when working with larger datasets.
shutil.copyfileobj is used because the 'files' that are opened are actually just file objects.
Full code snippet
example_transform.py
from transforms.api import transform, Input, Output
from . import utils  # assuming utils.py sits in the same package


@transform(
    output=Output("/path/to/output"),
    source_df=Input("/path/to/input"),
)
def compute(output, source_df):
    return utils.write_single_named_parquet_file(output, source_df, "readable_file_name")
utils.py
from transforms.api import Input, Output
import shutil
import logging

log = logging.getLogger(__name__)


def write_single_named_parquet_file(output: Output, input: Input, file_name: str):
    """Write a single ".snappy.parquet" file with a given file name to a transforms output, containing the data of the
    single ".snappy.parquet" file in the transforms input. This is useful when you need to export the data using
    Magritte and want a human readable name in the output; when not using separate transaction folders, this should
    cause the previous output to be automatically overwritten.

    The input to this function must contain a single ".snappy.parquet" file; this can be achieved by calling
    `.coalesce(1)` or `.repartition(1)` on your dataframe at the end of the upstream transform that produces the input.

    This function should not be used for large dataframes (e.g. those greater than 512 MB in size); instead,
    transaction folders should be enabled in the export. This function can work for larger sizes, but you may find you
    need additional driver memory to perform both the coalesce/repartition in the upstream transform, and here.

    This produces a dataset without a schema, so features like expectations can't be used.

    Parameters:
        output (Output): The transforms output to write the single custom named ".snappy.parquet" file to; this is
            the dataset you want to export
        input (Input): The transforms input containing the data to be written to output; this must contain only one
            ".snappy.parquet" file (it can contain other files, for example logs)
        file_name: The name of the file to be written; ".snappy.parquet" will be automatically appended if not
            already there, and ".snappy" and ".parquet" will be corrected to ".snappy.parquet"

    Raises:
        RuntimeError: Input dataset must be coalesced or repartitioned into a single file.
        RuntimeError: Input dataset file system cannot be empty.

    Returns:
        void: writes the response to output, no return value
    """
    output.set_mode("replace")  # Make sure it is snapshotting
    input_files_df = input.filesystem().files()  # Get all files
    input_files = [row[0] for row in input_files_df.collect()]  # noqa - first column in files_df is path
    input_files = [f for f in input_files if f.endswith(".snappy.parquet")]  # filter non parquet files

    if len(input_files) > 1:
        raise RuntimeError("Input dataset must be coalesced or repartitioned into a single file.")
    if len(input_files) == 0:
        raise RuntimeError("Input dataset file system cannot be empty.")

    input_file_path = input_files[0]
    log.info("Initial output file name: " + file_name)

    # check for ".snappy.parquet" and append if needed
    if file_name.endswith(".snappy.parquet"):
        pass  # if it is already correct, do nothing
    elif file_name.endswith(".parquet"):
        # if it ends with ".parquet" (and not ".snappy.parquet"), remove ".parquet" and append ".snappy.parquet"
        file_name = file_name.removesuffix(".parquet") + ".snappy.parquet"
    elif file_name.endswith(".snappy"):
        # if it ends with just ".snappy" then append ".parquet"
        file_name = file_name + ".parquet"
    else:
        # if it doesn't end with any of the above, add ".snappy.parquet"
        file_name = file_name + ".snappy.parquet"

    log.info("Final output file name: " + file_name)

    with input.filesystem().open(input_file_path, "rb") as in_f:  # open the input file
        with output.filesystem().open(file_name, "wb") as out_f:  # open the output file
            shutil.copyfileobj(in_f, out_f)  # write the file into a new file
You can also use the rewritePaths functionality of the export plugin to rename the file under spark/*.snappy.parquet to "export.parquet" while exporting. This of course only works if there is only a single file, so .coalesce(1) in the transform is a must:
excludePaths:
  - ^_.*
  - ^spark/_.*
rewritePaths:
  '^spark/(.*[\/])(.*)': $1/export.parquet
uploadConfirmation: exportedFiles
incrementalType: snapshot
retriesPerFile: 0
bucketPolicy: BucketOwnerFullControl
directoryPath: features
setBucketPolicy: true
I ran into the same requirement; the only difference was that the dataset needed to be split into multiple parts due to its size. Posting the code here, along with how I updated it to handle this use case.
def rename_multiple_parquet_outputs(output: Output, input: list, file_name_prefix: str):
    """
    Slight improvement to allow multiple output files to be renamed
    """
    output.set_mode("replace")  # Make sure it is snapshotting
    input_files_df = input.filesystem().files()  # Get all files
    input_files = [row[0] for row in input_files_df.collect()]  # noqa - first column in files_df is path
    input_files = [f for f in input_files if f.endswith(".snappy.parquet")]  # filter non parquet files

    if len(input_files) == 0:
        raise RuntimeError("Input dataset file system cannot be empty.")

    input_file_path = input_files[0]
    print(f'input files {input_files}')
    print("prefix for target name: " + file_name_prefix)

    for i, f in enumerate(input_files):
        with input.filesystem().open(f, "rb") as in_f:  # open the input file
            with output.filesystem().open(f'{file_name_prefix}_part_{i}.snappy.parquet', "wb") as out_f:  # open the output file
                shutil.copyfileobj(in_f, out_f)  # write the file into a new file
Also, to use this in a code workbook, the input needs to be persisted and the output parameter can be retrieved as shown below.
def rename_outputs(persisted_input):
    output = Transforms.get_output()
    rename_multiple_parquet_outputs(output, persisted_input, "prefix_for_renamed_files")

Trying to open a Python file using PowerShell but it brings up a list 'index out of range' error... but the items are not out of range?

PS C:\OIDv4_ToolKit> python convert_annotations.py
Currently in subdirectory: train
Converting annotations for class: Vehicle registration plate
0%| | 0/400 [00:00<?, ?it/s]0317.44 497.91974400000004 413.44 526.08
0%| | 0/400 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\OIDv4_ToolKit\convert_annotations.py", line 66, in <module>
coords = np.asarray([float(labels[1]), float(labels[2]), float(labels[3]), float(labels[4])])
IndexError: list index out of range
Python file: this is the line it refers to as line 66 (line 7 in the snippet below):
with open(filename) as f:
    for line in f:
        for class_type in classes:
            line = line.replace(class_type, str(classes.get(class_type)))
        print(line)
        labels = line.split()
        coords = np.asarray([float(labels[1]), float(labels[2]), float(labels[3]), float(labels[4])])
        coords = convert(filename_str, coords)
This doesn't look like a PowerShell issue; the Python interpreter appears to be running correctly. I suggest adding the python tag to your question to get the right people involved.
Having located the source, it seems as if some of the text files in the following directory aren't in the format expected by convert_annotations.py:
C:\OIDv4_ToolKit\OID\Dataset\train\Vehicle registration plate\Label\
You can verify this with:
print("labels length =", len(labels))
after the line.split() call. If you get a length of 1, it is likely that the items on some line aren't separated by whitespace but by something else, for example commas. You can also inspect the files manually to determine the format. To find them, you can use:
print(os.path.join(os.getcwd(), filename))
inside the for loop, which is on line 54 in the source I linked above. Note also that the string split() method supports a custom separator as its first argument, should the files be in a different format.
This issue occurs when you don't put the class name in classes.txt.
The class name in classes.txt should be the same as the downloaded class.

How to read the first record from a .dat file, transform it, and finally store it in HDFS

I am trying to read a .dat file in AWS S3 using the Spark Scala shell, and create a new file containing just the first record of the .dat file.
Let's say my file path to the .dat file is "s3a://filepath.dat"
I assume my logic should look something like the following, but I wasn't able to figure out how to get the first record:
val file = sc.textFile("s3a://filepath.dat")
val onerecord = file.getFirstRecord()
onerecord.saveAsTextFile("s3a://newfilepath.dat")
I've been trying to follow these solutions
How to skip first and last line from a dat file and make it to dataframe using scala in databricks
https://stackoverflow.com/questions/51809228/spark-scalahow-to-read-data-from-dat-file-transform-it-and-finally-store-in-h#:~:text=dat%20file%20in%20Spark%20RDD,be%20delimited%20by%20%22%20%25%24%20%22%20signs
It depends on how records are separated in your .dat file, but in general, you could do something like this (assuming the delimiter is '|'):
val raw = session.sqlContext.read.format("csv").option("delimiter","|").load("data/input.txt")
val firstItem = raw.first()
It looks weird but it will solve your problem.
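To complete the flow from the question (keep only the first record and write it back out), here is a minimal sketch, assuming a Spark shell where spark is available and using the placeholder paths from the question:
// Sketch only: the s3a paths are the placeholders from the question.
val file = spark.sparkContext.textFile("s3a://filepath.dat")
val firstRecord = file.first() // first line of the file as a String

spark.sparkContext
  .parallelize(Seq(firstRecord)) // wrap the single record back into an RDD
  .saveAsTextFile("s3a://newfilepath.dat") // writes a directory of part files, not a single .dat file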

How can count words of multiple files present in a directory using spark scala

How can I perform word count of multiple files present in a directory using Apache Spark with Scala?
All the files have newline delimiter.
The output should be:
file1.txt,5
file2.txt,6 ...
I tried the following:
val rdd= spark.sparkContext.wholeTextFiles("file:///C:/Datasets/DataFiles/")
val cnt=rdd.map(m =>( (m._1,m._2),1)).reduceByKey((a,b)=> a+b)
The output I'm getting:
((file:/C:/Datasets/DataFiles/file1.txt,apple
orange
bag
apple
orange),1)
((file:/C:/Datasets/DataFiles/file2.txt,car
bike
truck
car
bike
truck),1)
I tried sc.textFile() first, but it didn't give me the filename.
wholeTextFiles() returns a key-value pair in which the key is the filename, but I couldn't get the desired output.
You are starting on the right track, but you need to work your solution out a bit more.
The method sparkContext.wholeTextFiles(...) gives you a (file, contents) pair, so when you reduce it by key you get a count of 1 per file, because that's the number of whole-file contents you have per key.
In order to count the words of each file, you need to break the contents of each file into those words so you can count them.
Let's do it here, let's start reading the file directory:
val files: RDD[(String, String)] = spark.sparkContext.wholeTextFiles("file:///C:/Datasets/DataFiles/")
That gives one row per file, alongside the full file contents. Now let's break the file contents into individual items. Given the fact your files seem to have one word per line, this is really easy using line breaks:
val wordsPerFile: RDD[(String, Array[String])] = files.mapValues(_.split("\n"))
Now we just need to count the number of items that are present in each of those Array[String]:
val wordCountPerFile: RDD[(String, Int)] = wordsPerFile.mapValues(_.size)
And that's basically it. It's worth mentioning, though, that the word counting is not being distributed at all (it's just using an Array[String]) because you are loading the whole contents of your files at once.
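If you want the exact file1.txt,5 format asked for in the question, one possible (assumed) final step is to strip the directory prefix from each path and print the pairs:
// Hypothetical final step: turn the full path into just the file name and print "name,count".
wordCountPerFile
  .map { case (path, count) => s"${path.split("/").last},$count" }
  .collect()
  .foreach(println) // e.g. file1.txt,5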

Split file text by newline Scala

I want to read 100 numbers from a file in which they are stored in the following fashion:
Each number is on a different line. I am not sure which data structure should be used here, because later I will need to sum all these numbers and extract the first 10 digits of the sum.
I only managed to simply read the file, but I want to split all the text by newline separators and get each number as a list or array element:
val source = Source.fromFile("pathtothefile")
val lines = source.getLines.mkString
I would be grateful for any advice on a data structure to be used here!
Update on approach:
val lines = Source.fromFile("path").getLines.toList
You almost have it there; just map to BigInt, and then you have a list of BigInt:
val lines = Source.fromFile("path").getLines.map(BigInt(_)).toList
(and then you can use .sum to sum them all up, etc)
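For completeness, the remaining steps the question mentions (summing and taking the first 10 digits of the sum) could look like the sketch below; the file path is a placeholder:
import scala.io.Source

// Read one BigInt per line, sum them, and keep the first 10 digits of the sum.
val numbers = Source.fromFile("pathtothefile").getLines().map(BigInt(_)).toList
val firstTenDigits = numbers.sum.toString.take(10)
println(firstTenDigits)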