In Flutter, how can I combine data into a string very quickly?

I am gathering accelerometer data from my phone using the sensors package, adding that data to a List<AccelerometerEvent>, and then combining that data into a (csv) String so I can use file.writeAsString() to save this data as a csv file. The problem I am having is that it takes too long to combine the data into a string.
For example:
List length: 28645
Milliseconds to combine into CSV string: 113580
Code:
for (AccelerometerEvent event in history) {
  dataString = dataString + '${event.timestamp},${event.x},${event.y},${event.z}\n';
}
What would be a more efficient way to do this?
Should I even combine the data into a string, or is there a better way to save this data to a file?
Thanks

Create a File object and write the first line with the column names; after that, each row (separated by \n) is one event.
See FileMode.append: it adds new strings to the file without replacing its existing contents.
import 'dart:io';

File file = File('events.csv');
// Write the header row first.
file.writeAsStringSync('TIMESTAMP, X, Y, Z\n', mode: FileMode.append);
for (AccelerometerEvent event in history) {
  final x = event.x;
  final y = event.y;
  final z = event.z;
  final timestamp = event.timestamp;
  String data = '$timestamp, $x, $y, $z';
  // Append each event as its own line without rewriting the file.
  file.writeAsStringSync('$data\n', mode: FileMode.append);
}
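Alternatively, if you want to build the whole CSV in memory and write it in a single call, a StringBuffer avoids the quadratic cost of repeated string concatenation. A minimal sketch, mirroring the event fields used in the question (the saveHistory wrapper is illustrative, not from the original answer):

import 'dart:io';
import 'package:sensors/sensors.dart';

// Build the CSV in memory with a StringBuffer, then write it out once.
Future<void> saveHistory(List<AccelerometerEvent> history, File file) async {
  final buffer = StringBuffer('TIMESTAMP, X, Y, Z\n');
  for (final event in history) {
    // StringBuffer appends without re-copying the whole string each time,
    // unlike `dataString = dataString + ...`.
    buffer.writeln('${event.timestamp},${event.x},${event.y},${event.z}');
  }
  await file.writeAsString(buffer.toString());
}

This also avoids reopening the file once per event, which the append-per-line loop above does.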

Related

How to save data in a CSV file with Flutter?

I have data coming from a device to my Flutter application via Bluetooth.
I have lists that are displayed one by one on alerts. I want to collect the data from all the alerts and then save it in a CSV file.
The problem is that a lot of files are created, and every file contains just one list; this means that a file is created for every alert.
This is the code:
setupMode.generateCsv((message) async {
  List<List<String>> data = [
    ["No.", "X", "Y", "Z"],
    [message.data[1], message.data[2], message.data[3], message.data[4]],
  ];
  String csvData = ListToCsvConverter().convert(data);
  final String directory = (await getApplicationSupportDirectory()).path;
  final path = "$directory/csv-${DateTime.now()}.csv";
  print(path);
  final File file = File(path);
  await file.writeAsString(csvData);
  print("file saved");
  print(csvData.length);
});
}
I want to collect all the data and store it in one file.
In order to write into one file, there are two things you need to change:
File name:
final path = "$directory/csv-${DateTime.now()}.csv";
This creates a new file every time you call it. You should use the same file name, e.g.:
final path = "$directory/csv-alert-data.csv";
The second change is the write type:
await file.writeAsString(csvData, mode: FileMode.append);
This will append the new data to the end of the existing CSV file.
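Putting both changes together, the callback might look like this (a sketch reusing the csv and path_provider helpers from the question; writing the header row only when the file is first created is an assumption, so the column names don't repeat on every append):

import 'dart:io';

setupMode.generateCsv((message) async {
  final String directory = (await getApplicationSupportDirectory()).path;
  final File file = File("$directory/csv-alert-data.csv"); // fixed name: one file for all alerts
  // Assumed: write the header row only once, when the file does not exist yet.
  if (!file.existsSync()) {
    await file.writeAsString("No.,X,Y,Z\n");
  }
  final String csvData = const ListToCsvConverter().convert([
    [message.data[1], message.data[2], message.data[3], message.data[4]],
  ]);
  await file.writeAsString("$csvData\n", mode: FileMode.append); // append, don't overwrite
});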

String transformation for subject course code for Dart/Flutter

For interaction with an API, I need to pass the course code in <string><space><number> format. For example, MCTE 2333, CCUB 3621, BTE 1021.
Yes, the text part can be 3 or 4 letters.
Most users enter the code without the space, e.g. MCTE2333, but that causes an error in the API. So how can I add a space between the letters and the numbers so that the code follows the correct format?
You can achieve the desired behaviour by using regular expressions:
void main() {
  String a = "MCTE2333";
  String aStr = a.replaceAll(RegExp(r'[^0-9]'), ''); // extract the numbers
  String bStr = a.replaceAll(RegExp(r'[^A-Za-z]'), ''); // extract the letters
  print("$bStr $aStr"); // MCTE 2333
}
Note: This will produce the same result, regardless of how many whitespaces your user enters between the characters and numbers.
Try this: give the user two text fields, one for the letters (e.g. MCTE) and one for the numbers (e.g. 1021); for the second field, set the keyboard type to numbers only.
After that you can join the two strings with a space between them and send the result to your DB.
It's a bit of a hack, but it will work; see the sketch below.
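A minimal sketch of that idea (the controller names and labels are illustrative, not from the question):

import 'package:flutter/material.dart';

final lettersController = TextEditingController();
final numbersController = TextEditingController();

Widget buildCourseCodeFields() {
  return Column(
    children: [
      TextField(
        controller: lettersController,
        decoration: const InputDecoration(labelText: 'Course letters, e.g. MCTE'),
      ),
      TextField(
        controller: numbersController,
        keyboardType: TextInputType.number, // numeric keyboard only
        decoration: const InputDecoration(labelText: 'Course number, e.g. 1021'),
      ),
    ],
  );
}

// Join the two parts with a single space before sending to the API:
String courseCode() =>
    '${lettersController.text.trim()} ${numbersController.text.trim()}';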
Scrolling down the course codes list, I noticed some unusual formatting.
Example: TQB 1001E, etc. (with an extra letter at the end).
This special format doesn't work with @Jahidul Islam's answer. However, inspired by it, I managed to come up with this logic:
var code = "TQB2001M";
var i = course.indexOf(RegExp(r'[^A-Za-z]')); // get the index
var j = course.substring(0, i); // extract the first half
var k = course.substring(i).trim(); // extract the others
var formatted = '$j $k'.toUpperCase(); // combine & capitalize
print(formatted); // TQB 1011M
Works with other formats too. Check out the DartPad here.
Here is the entire logic you need (also works for multiple whitespaces!):
void main() {
  String courseCode = "MMM 111";
  String parsedCourseCode = "";
  if (courseCode.contains(" ")) {
    // Collapse any run of whitespace into a single space.
    final ensureSingleWhitespace = RegExp(r"(?! )\s+| \s+");
    parsedCourseCode = courseCode.split(ensureSingleWhitespace).join(" ");
  } else {
    // No space entered: split the letters from the numbers.
    final r1 = RegExp(r'[0-9]', caseSensitive: false);
    final r2 = RegExp(r'[a-z]', caseSensitive: false);
    final letters = courseCode.split(r1);
    final numbers = courseCode.split(r2);
    parsedCourseCode = "${letters[0].trim()} ${numbers.last}";
  }
  print(parsedCourseCode);
}
Play around with the input value (courseCode) to test it - you can also use DartPad if you want. You just have to apply this logic to the input value before submitting / handling your user's form input :)

How to export a CSV file to a BigQuery table using Java Dataflow?

I want to read a CSV file from a Cloud Storage bucket and write it to a BigQuery table with columns, using Dataflow in Java. How can I apply the CSV file's headers while writing to BigQuery?
There are two issues to solve here:
Skipping the header when reading the data, and
Using the header to correctly populate the BigQuery table columns.
For (1) this is, as of June 2019, not implemented natively, though you could try the options listed at Skipping header rows - is it possible with Cloud DataFlow?. For (2) the easiest would be to read the first line of your CSV in your main program, and pass the list of column names in the constructor to a DoFn that converts CSV lines into TableRow objects ready to write to BigQuery.
Your final program would look something like this:
public void csvToBigquery(String csvInputPattern, String bigqueryTable) {
  final String[] columns = readAndSplitFirstLineOfFirstFile(csvInputPattern);
  Pipeline p = Pipeline.create(...);
  p
    .apply(TextIO.read().from(csvInputPattern))
    .apply(Filter.by(new MatchIfNonHeader()))
    .apply(ParDo.of(new DoFn<String, TableRow>() {
      ... // use columns here to build the TableRows
    }))
    .apply(BigQueryIO.writeTableRows().to(bigqueryTable)...);
}
I've done a similar task and used the Apache Commons CSV library in a ParDo function to extract the data from the CSV files, then converted it to TableRow objects for BQ.
String fileData = c.element();
BufferedReader fileReader = new BufferedReader(new InputStreamReader(
        new ByteArrayInputStream(fileData.getBytes("UTF-8")), "UTF-8"));
CSVParser csvParser = new CSVParser(fileReader,
        CSVFormat.DEFAULT.withFirstRecordAsHeader().withIgnoreHeaderCase().withTrim());
Iterable<CSVRecord> csvRecords = csvParser.getRecords();
for (CSVRecord csvRecord : csvRecords) {
    // Convert the record's header -> value map into a TableRow with the right BQ types.
    TableRow row = checkAndConvertIntoBqDataType(csvRecord.toMap());
    c.output(row);
}

Change date column to integer

I have a large csv file as below:
DATE status code value value2
2014-12-13 Shipped 105732491-20091002165230 0.000803398 0.702892835
2014-12-14 Shipped 105732491-20091002165231 0.012925206 1.93748834
2014-12-15 Shipped 105732491-20091002165232 0.000191278 0.004772389
2014-12-16 Shipped 105732491-20091002165233 0.007493046 0.44883348
2014-12-17 Shipped 105732491-20091002165234 0.022015049 3.081006137
2014-12-18 Shipped 105732491-20091002165235 0.001894693 0.227268466
2014-12-19 Shipped 105732491-20091002165236 0.000312871 0.003113062
2014-12-20 Shipped 105732491-20091002165237 0.001754068 0.105016053
2014-12-21 Shipped 105732491-20091002165238 0.009773315 0.585910214
:
:
What I need to do is remove the header and change the date format to an integer yyyymmdd (e.g. 20141217).
I am using opencsv to read and write the file.
Is there a way I can change all the dates at once without parsing them one by one?
Below is my code to remove the header and create a new file:
void formatCsvFile(String fileToChange) throws Exception {
    CSVReader reader = new CSVReader(new FileReader(new File(fileToChange)),
            CSVParser.DEFAULT_SEPARATOR, CSVParser.NULL_CHARACTER, CSVParser.NULL_CHARACTER, 1)
    info "Read all rows at once"
    List<String[]> allRows = reader.readAll()
    CSVWriter writer = new CSVWriter(new FileWriter(fileToChange),
            CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER)
    info "Write all rows at once"
    writer.writeAll(allRows)
    writer.close()
}
Please can someone help?
Thanks
You don't need to parse the dates, but you do need to process each line in the file and convert the data on each line you want to convert. Java/Groovy doesn't have anything like awk where you can work with file data as columns, for example, the first 10 "columns" (characters usually) in every line in a file. Java/Groovy only deals with "rows" of data in a file, not "columns".
You could try something like this: (in Groovy)
reader.eachLine { String theLine ->
    int idx = theLine.indexOf(' ')
    String oldDate = theLine.substring(0, idx)
    String newDate = oldDate.replaceAll('-', '')
    String newLine = newDate + theLine.substring(idx)
    writer.writeLine(newLine)
}
Edit:
If your CSVReader class is not derived from File, then you can't use Groovy's eachLine method on it. And if the CSVReader class's readAll() method really returns a List of String arrays, then the above code could change to this:
allRows.each { String[] theLine ->
    String newDate = theLine[0].replaceAll('-', '')
    writer.writeNext(([newDate] + theLine[1..-1]) as String[])
}
Ignore the first line (the header):
List<String[]> allRows = reader.readAll()[1..-1];
and replace the '-' in the dates by editing the first element of each row:
allRows = allRows.collect { row ->
    [row[0].replace('-', '')] + row[1..-1] // strip '-' from the date, keep the rest
}
I don't know what you mean by "all dates at once"; for me it can only be done by iterating.

How to Papa.unparse a huge JSON object

I'm using Papa.unparse() to convert a JSON object to csv then downloading the file. The method fails with:
"allocation size overflow papaparse.min.js:6:1580"
This happens in Firefox when there are more than 500,000 items to unparse in the JSON array.
The Papa.parse() method allows you to stream data from a file. Is there any similar approach you can take for Papa.unparse()?
You don't need PapaParse to do this.
The allocation size overflow problem comes from converting your 500,000 rows into 500,000 strings and concatenating them together to form one massive string to create the CSV file with. JavaScript creates a new String when you concatenate one or more together, and eventually you run out of memory and crash.
The solution is to use TextEncoder to encode your strings into UTF-8 (or whatever encoding you need) Uint8Arrays, push each one into a big array, and then use that array to create your file.
Here's some rough code for how you might do that:
var textEncoder = new TextEncoder("utf-8");
var headers = ["header1", "header2", "header3"];
var row1 = ["column1-1", "column1-2", "column1-3"];
var row2 = ["column2-1", "column2-2", "column2-3"];
var data = [headers, row1, row2];
var arrayBuffers = [];
var csvString = '';
var encodedString = null;
var file = null;
for (var x = 0; x < data.length; x++) {
    // Encode each CSV line separately instead of building one huge string.
    csvString = data[x].join(",").concat('\r\n');
    encodedString = textEncoder.encode(csvString);
    arrayBuffers.push(encodedString);
}
file = new File(arrayBuffers, "yourCsvFile.csv", { type: "text/csv" });
I use text-encoding for a TextEncoder polyfill, and presumably if you're trying to Papa.unparse you've got FileAPI already.