How can I merge multiple tfrecords files into one file?

My question is this: if I create one tfrecords file for my data, it will take approximately 15 days to finish. The data has 500,000 pairs of templates, and each template is 32 frames (images). To save time, since I have 3 GPUs, I thought I could create three tfrecords files, one on each GPU, and finish creating the tfrecords in 5 days. But then I searched for a way to merge these three files into one and couldn't find a proper solution.
So is there any way to merge these three files into one file? Or is there any way to train my network by feeding batches of examples extracted from the three tfrecords files, given that I am using the Dataset API?

As the question was asked two months ago, I thought you might already have found a solution. If not, the answer is NO: you do not need to create a single HUGE tfrecord file. Just use the new Dataset API:
import os
import tensorflow as tf

dataset = tf.data.TFRecordDataset(filenames_to_read,
                                  compression_type=None,  # or 'GZIP', 'ZLIB' if you compress your data
                                  buffer_size=10240,      # any buffer size you want, or 0 for no buffering
                                  num_parallel_reads=os.cpu_count())  # or 0 to read sequentially
# Maybe you want to prefetch some data first.
dataset = dataset.prefetch(buffer_size=batch_size)
# Decode each serialized example
dataset = dataset.map(single_example_parser, num_parallel_calls=os.cpu_count())
dataset = dataset.shuffle(buffer_size=number_larger_than_batch_size)
dataset = dataset.batch(batch_size).repeat(num_epochs)
...
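The pipeline above assumes a single_example_parser function defined elsewhere. A minimal sketch of what it could look like, assuming each example stores a raw 'frames' byte string and an int64 'label' (both feature names and dtypes are placeholders; adapt them to how you serialized your data):

def single_example_parser(serialized_example):
    features = tf.io.parse_single_example(
        serialized_example,
        features={
            'frames': tf.io.FixedLenFeature([], tf.string),
            'label': tf.io.FixedLenFeature([], tf.int64),
        })
    # Decode the raw bytes back into a tensor of pixel values
    frames = tf.io.decode_raw(features['frames'], tf.uint8)
    return frames, features['label']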
For details, check the documentation.

Addressing the question title directly for anyone looking to merge multiple .tfrecord files:
The most convenient approach would be to use the tf.data API:
(adapting an example from the docs)
# Create dataset from multiple .tfrecord files
list_of_tfrecord_files = [dir1, dir2, dir3, dir4]
dataset = tf.data.TFRecordDataset(list_of_tfrecord_files)
# Save dataset to .tfrecord file
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(dataset)
However, as pointed out by holmescn, you'd likely be better off leaving the .tfrecord files as separate files and reading them together as a single tensorflow dataset.
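As a sketch of that separate-files approach, you can shuffle the file order and interleave records from the individual files (the file names below are placeholders):

import tensorflow as tf

files = tf.data.Dataset.from_tensor_slices(
    ['part1.tfrecord', 'part2.tfrecord', 'part3.tfrecord'])
files = files.shuffle(buffer_size=3)  # reshuffle the file order each epoch
# Read records from all three files in round-robin fashion
dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=3)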
You may also refer to a longer discussion regarding multiple .tfrecord files on Data Science Stackexchange

The answer by MoltenMuffins works for higher versions of tensorflow. However, if you are using a lower version, you have to iterate through the three tfrecords and save them into a new record file as follows. This works for tf versions 1.0 and above.
import tensorflow as tf

def comb_tfrecord(tfrecords_path, save_path, batch_size=128):
    with tf.Graph().as_default(), tf.Session() as sess:
        ds = tf.data.TFRecordDataset(tfrecords_path).batch(batch_size)
        batch = ds.make_one_shot_iterator().get_next()
        writer = tf.python_io.TFRecordWriter(save_path)
        while True:
            try:
                records = sess.run(batch)
                for record in records:
                    writer.write(record)
            except tf.errors.OutOfRangeError:
                break
        writer.close()  # flush any buffered records to disk
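Called, for example, like this (the file names are placeholders for your three shards):

comb_tfrecord(['part1.tfrecords', 'part2.tfrecords', 'part3.tfrecords'], 'merged.tfrecords')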

Customizing the above script to list the tfrecords automatically:
import glob
import tensorflow as tf

save_path = 'data/tf_serving_warmup_requests'
tfrecords_path = glob.glob('data/*.tfrecords')
dataset = tf.data.TFRecordDataset(tfrecords_path)
writer = tf.data.experimental.TFRecordWriter(save_path)
writer.write(dataset)

Related

Read one-line file into dataframe

I have the task of reading a one-line JSON file into Spark. I've thought about either modifying the input file so that it fits spark.read.json(path), or reading the whole file and modifying it in memory to make it fit the former approach, as shown below:
import spark.implicits._
val file = sc.textFile(path).collect()(0)
val data = file.split("},").map(json => s"$json}")
val ds = data.toSeq.toDF()
Is there a way of directly reading the JSON, or of reading the one-line file into multiple rows?
Edit:
Sorry, I didn't clearly explain the JSON format; all the JSON objects are on the same line:
{"key":"value"},{"key":"value2"},{"key":"value2"}
If imported with spark.read.json(path), it would only take the first value.
Welcome to SO, HugoDife! I believe a single-line load is what spark.read.json() does, and you are perhaps looking for this answer. If not, maybe you want to adjust your question with a data example.
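If the linked answer doesn't fit, one workaround along the lines the question already suggests (modifying the data in memory) is to wrap the comma-separated objects in brackets so they form a valid JSON array. A rough sketch in Python for illustration; the file path is a placeholder:

import json

# Read the single line of concatenated JSON objects (placeholder path)
with open('data.json') as f:
    line = f.read().strip()

# '{"key":"value"},{"key":"value2"}' becomes a parseable JSON array
records = json.loads('[' + line + ']')
print(records)  # a list of dicts, one per intended row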

How to read a CSV file in Scala

I have a CSV file and I want to read it and store it in a case class. As I know, a CSV is a comma-separated values file. But in my CSV file some of the data itself contains commas, and each comma creates a new column. So the problem is how to split the data correctly.
1st data
04/20/2021 16:20(1st column) Here a bunch of basic techniques that suit most businesses, and easy-to-follow steps that can help you create a strategy for your social media marketing goals.(2nd column)
2nd data
11-07-2021 12:15(1st column) Focus on attracting real followers who are genuinely interested in your content, and make the most of your social media marketing efforts.(2nd column)
import scala.io.Source

var i = 0
var length = 0
val data = Source.fromFile(file)
for (line <- data.getLines) {
  val cols = line.split(",").map(_.trim)
  length = cols.length
  while (i < length) {
    //println(cols(i))
    i = i + 1
  }
  i = 0
}
If you are reading a complex CSV file then the ideal solution is to use an existing library. Here is a link to the ScalaDex search results for CSV.
ScalaDex CSV Search
However, based on the comments, it appears that you might actually be wanting to read data stored in a Google Sheet. If that is the case, you can utilize the fact that you have some flexibility to save the data in a text file yourself. When I want to read data from a Google Sheet in Scala, the approach I use first is to save the file in a format that isn't hard to read. If the fields have embedded commas but no tabs, which is common, then I will save the file as a TSV and parse that with split("\t").
A simple bit of code that only uses the standard library might look like the following:
val source = scala.io.Source.fromFile("data.tsv")
val data = source.getLines.map(_.split("\t")).toArray
source.close
After this, data will be an Array[Array[String]] with your data in it that you can process as you desire.
Of course, if your data includes both tabs and commas then you'll really want to use one of those more robust external libraries.
You could use the univocity CSV parser for faster parsing.
You can use it for writing CSV files as well.
Univocity parsers

How to perform processing on .csv files used by my code for each person in one go, without defining functions for each person, in Matlab?

I have a main script named process.m in which I define the paths to 4 different .csv files for calculating values for each person. If I have a list of 30 persons and I don't want to turn process.m into a function called once per person, how can I do the processing for all the persons in one go? I want process.m itself to pick the files for one person, generate the results, then move on to the next person, pick their .csv files, generate the result, and so on.
A brief outline of my code is attached here to illustrate the problem.
file1='Joseph_front.csv';
file2='Joseph_side.csv';
file3='Joseph_knee.csv';
file4='Joseph_back.csv';
A1=initiate2(file1); % initiate2 reads the csv and performs some filtering operations on the image in .csv format
A2=initiate2(file2);
A3=initiate2(file3);
A4=initiate2(file4);
%%omitted large part of code
cal(1) = p+q+r*s;
cal(2) = p+q+r+s;
cal(3) = p+q+r-s;
cal=cal'
%code to write the calculation in excel file
excelfile= 'test.xlsx';
xlswrite(excelfile,ValuesInInches,'Joseph_data',posColumn);
To describe it more: I want my code to process all 30 people in one go, selecting and picking the files itself. I have done this by duplicating the code as a function for each person, but that is not very efficient: when I have to make a small change, I have to edit it in all 30 functions. Please suggest an efficient way to do it.
Note: All persons .csv files are named in the same manner and exist in the current folder.
I am assuming all of your files are in one directory and that there are no other files in it.
This portion of code will get the available filenames.
listFiles = dir('path of the directory');
filenames = strings; % an empty string array to save the filenames
j = 1;
for i = 1:1:length(listFiles)
    if ~listFiles(i).isdir % to avoid the directory names
        filenames(j,1) = listFiles(i).name;
        j = j + 1;
    end
end
Now, there are 4 files for each person, so the loop should take 4 files at a time.
for ii = 1:4:length(filenames)
    file1 = filenames(ii);
    file2 = filenames(ii+1);
    file3 = filenames(ii+2);
    file4 = filenames(ii+3);
    %% continue with your code
end

Export data to same CSV file for multiple runs of .m file

I am running a MATLAB program and storing the results in two matrices. For each run of the program, those matrices are written to the same .csv file.
How can I continue to store data to the same file for future runs of the program? Is there a function that checks for data already being present to avoid overwriting cells?
t = 0.0001*[0:70];
v = B_2*R_R.*exp(-alpha.*t).*sin(omega_d.*t);
tv = [t; v].';
csvwrite('thedata.csv',tv,3,0)
I couldn't resist rewriting your code a bit.
This should be equivalent to what you have, and print both vectors to thedata.csv.
t = 0.0001*[0:70];
v = B_2*R_R.*exp(-alpha.*t).*sin(omega_d.*t);
tv = [t; v].';
csvwrite('thedata.csv',tv,3,0)
Due to the way csv files are stored, you can only append data at the end of the file, which happens to be the bottom row, not the last column. What you should do is concatenate all the data in memory before writing to the csv file; that way you avoid multiple calls to csvwrite or dlmwrite (they are time consuming).
If that's impossible, then I suggest reading the existing data from the csv file using csvread, appending the new data to what you retrieve, and writing it all back again:
csvwrite('thedata.csv',tv)
mydata = csvread('thedata.csv');
mydata2 = [mydata, tv2];
csvwrite('thedata.csv',mydata2)

Reading structured variable from MAT file

I am performing an analysis which involves simulating over 1000 cases. I am also extracting lots of data for each case (about 70 MB). Currently I am saving the results for each case as:
Vessel.TotalForce
Vessel.WindForce
Vessel.CurrentForce
Vessel.WaveForce
Vessel.ConnectionForce
...
Line1.EffectiveTension
Line1.X
Line1.Y
Line2.EffectiveTension
Line2.X
Line2.Y
...
save('CaseNo1.mat')
Now, I need to perform my analysis for CaseNo1.mat through CaseNo1000.mat. Initially I planned to create a Database.mat file by loading all the cases into it and then accessing any variable using h5read, so that Matlab wouldn't need to load all the data at once. However, I am now concerned that my database file will be too big.
Is there any way I can read the structured variables from the individual case files, for example CaseNo1.mat, without loading the whole file into memory?
The Matlab examples show how to load individual variables directly from a MAT file without loading the whole file, but I am not sure how to read structured data the same way:
x=load('CaseNo1.mat','Line1.X')
says Line1.X is not found, even though it's there; the command is just not the correct way to access the data. I also tried h5read, but it says CaseNo1.mat is not an HDF5 file.
Can anyone help with this?
Apart from this, I would also appreciate any suggestions on performing such data-intensive analysis.
I was wrong! I'm leaving my old answer below for context, though I've edited it to reference this one. I thought I had used matfile() in that way before, but I hadn't; I just did a thorough search and ran a few test cases. You've actually run into a limitation of the way Matlab handles and references structures stored in .mat files. There is, however, a solution. It does involve some refactoring of your original code, but it shouldn't be too egregious: save plain variables instead of structures, with underscores in place of the dots:
Vessel_TotalForce
Vessel_WindForce
Vessel_CurrentForce
Vessel_WaveForce
Vessel_ConnectionForce
...
Line1_EffectiveTension
Line1_X
Line1_Y
Line2_EffectiveTension
Line2_X
Line2_Y
...
save('CaseNo1.mat')
Then to access, just use matfile (or load) as you were before. Like so:
Vessel_WaveForce = load('CaseNo1.mat', 'Vessel_WaveForce')
It's important to note that this restriction doesn't appear to be caused by anything you've chosen to do in your program, but rather is imposed by the way Matlab interacts with its native storage files when they contain structures.
EDIT: This answer works, but doesn't actually solve the problem posed in OP's question. I thought I had used matfile to generate a handle that I could access, but I was wrong. See my other answer for details.
You could use matfile, like so:
myMatFileHandle = matfile('CaseNo1.mat');
thisVessel = myMatFileHandle.Vessel;
Also, from the little bit I can see, you seem to be on the right track for high-volume analysis. Just remember to use sparse when applicable, and generally avoid conditionals inside of loops if possible.
Good luck!
The objectives of storing data in a structured format are:
To be organized
Easy scripting of post-processors, where looping through data under one data set is required
What was sought: to store a structured dataset containing integer, floating-point, and string variables in a MAT file, and to be able to read just the required variable using the h5read command. The Matlab load command cannot read variables beyond the first level of data stored in a MAT file, and h5write cannot write string variables, hence the need for a workaround.
To do this I used the following method:
filename = 'myMatFile';
Vessel.TotalForce = %store some data
Vessel.WindForce = %store some data
Vessel.CurrentForce = %store some data
Vessel.WaveForce = %store some data
Vessel.ConnectionForce = %store some data
...
Line1.LineType = 'Wire';
Line1.ArcLength_0.EffectiveTension = %store some data
Line1.ArcLength_50.EffectiveTension= %store some data
Line1.ArcLength_100.EffectiveTension= %store some data
Line2.LineType = 'Chain';
Line2.ArcLength_0.EffectiveTension= %store some data
Line2.ArcLength_50.EffectiveTension= %store some data
Line2.ArcLength_100.EffectiveTension= %store some data
save([filename '_temp.mat']);
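% Re-save everything under a top-level field named after the file, so that
% h5read paths like ['/' filename '/Line1/...'] resolve (see the reads below):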
PointToMat=matfile([filename '.mat'],'Writable',true);
PointToMat.(char(filename)) = load([filename '_temp.mat']);
delete([filename '_temp.mat']);
Now to read from the MAT file created, we can use h5read as usual. To extract the EffectiveTension for Line1, ArcLength_0:
EffectiveTension = h5read([filename '.mat'],['/' filename '/Line1/ArcLength_0/EffectiveTension']);
For string variables, h5read returns the numeric code of each character. To obtain the actual string I used:
name = char(h5read([filename '.mat'],['/' filename '/Line1/LineType']));
I tried this method on my data set, which is about 200 MB, and I could process it pretty fast. Hope this helps someone someday.
Short answer:
Having saved the data into a MAT file with the '-v7.3' option, use something like h5read(filename, '/Line2/X') to read just one structure field. You can even read an array partially, for example:
s.a = 1:100;
save('test.mat', '-v7.3', 's');
clear
h5read('test.mat', '/s/a', [1 10], [1 5], [1 3])
returns every third element of the 1:100 array, starting with the 10th element and returning 5 values:
10 13 16 19 22
Long answer:
See the answer by @Amitava for more elaborate code and topic coverage.