I'm using Papa.unparse() to convert a JSON object to CSV and then downloading the file. The method fails with:
"allocation size overflow papaparse.min.js:6:1580"
This happens in Firefox when there are more than 500,000 items to unparse in the JSON array.
The Papa.parse() method lets you stream data from a file. Is there a similar approach you can take with Papa.unparse()?
You don't need PapaParse to do this.
The allocation size overflow comes from converting your 500,000 rows into 500,000 strings and concatenating them into one massive string to create the CSV file with. JavaScript strings are immutable, so every concatenation allocates a brand-new string, and eventually you run out of memory and crash.
The solution is to use TextEncoder to encode each row string into a UTF-8 Uint8Array, push each chunk into an array, and then pass that array of chunks to the File constructor.
Here's some rough code for how you might do that:
// Modern browsers ignore the constructor argument and always encode UTF-8;
// the text-encoding polyfill mentioned below accepts it
var textEncoder = new TextEncoder("utf-8");
var headers = ["header1", "header2", "header3"];
var row1 = ["column1-1", "column1-2", "column1-3"];
var row2 = ["column2-1", "column2-2", "column2-3"];
var data = [headers, row1, row2];
var chunks = [];
for (var x = 0; x < data.length; x++) {
    // Encode each row on its own, so one giant string is never built
    chunks.push(textEncoder.encode(data[x].join(",") + "\r\n"));
}
// The File constructor accepts an array of Uint8Array chunks directly
var file = new File(chunks, "yourCsvFile.csv", { type: "text/csv" });
I use the text-encoding package as a TextEncoder polyfill for older browsers, and presumably if you're trying to Papa.unparse you've already got the File API.
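To actually download the result, here is a minimal follow-up sketch (the file variable comes from the snippet above; the anchor element and object URL are standard browser APIs):
var url = URL.createObjectURL(file);
var a = document.createElement("a");
a.href = url;
a.download = "yourCsvFile.csv"; // suggested filename for the download
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url); // release the object URL once the download has started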
Related
I've been given string data (stored in a database) where the originator read binary TIFF files into a string using something like this (they actually used the VB FileSystemObject):
var txt = File.ReadAllText(@"c:\some_image.tiff");
I need to re-create the original tiff files. I've tried looping through all encodings with:
var txt = File.ReadAllText(@"c:\test\some_image.tiff");
var encodings = Encoding.GetEncodings();
for (var i = 0; i < encodings.Length; i++)
{
    File.WriteAllBytes(@"c:\test\some_image_" + i + ".tiff", encodings[i].GetEncoding().GetBytes(txt));
}
but to no avail.
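A minimal sketch of why this round trip may be unrecoverable (assuming the original read decoded the bytes lossily, e.g. with File.ReadAllText's UTF-8 default): bytes that are invalid for the decoder are replaced with U+FFFD before you ever see the string, and no re-encoding can bring them back.
using System;
using System.Text;

class RoundTripDemo
{
    static void Main()
    {
        // A few TIFF-like header bytes, including values invalid in UTF-8
        byte[] original = { 0x49, 0x49, 0x2A, 0x00, 0xFF, 0xFE, 0x80 };
        string txt = Encoding.UTF8.GetString(original);   // lossy: invalid sequences become U+FFFD
        byte[] roundTripped = Encoding.UTF8.GetBytes(txt);
        Console.WriteLine(BitConverter.ToString(original));     // 49-49-2A-00-FF-FE-80
        Console.WriteLine(BitConverter.ToString(roundTripped)); // longer and different: the data is gone
    }
}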
I am gathering accelerometer data from my phone using the sensors package, adding that data to a List<AccelerometerEvent>, and then combining the data into a CSV string so I can use file.writeAsString() to save it as a CSV file. The problem I am having is that combining the data into a string takes too long.
For example:
List length: 28645
Milliseconds to combine into CSV string: 113580
Code:
for (AccelerometerEvent event in history) {
  dataString = dataString + '${event.timestamp},${event.x},${event.y},${event.z}\n';
}
What would be a more efficient way to do this?
Should I even combine the data into a string, or is there a better way to save this data to a file?
Thanks
Create a File object, write the first line with the column names, and then write each event as its own row. See FileMode.append: it appends new strings to the file instead of replacing what's already there.
File file = File('events.csv');
// Write the header once; FileMode.append adds to the file instead of truncating it
file.writeAsStringSync('TIMESTAMP, X, Y, Z\n', mode: FileMode.append);
for (AccelerometerEvent event in history) {
  // One row per event, appended to the end of the file
  final data = '${event.timestamp}, ${event.x}, ${event.y}, ${event.z}';
  file.writeAsStringSync('$data\n', mode: FileMode.append);
}
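A note on the original slowdown: each + on a Dart String allocates a new string, so the loop in the question is quadratic in the list length. Accumulating rows in a StringBuffer and writing the file once avoids both that and the cost of reopening the file per event. A minimal sketch, assuming the sensors package import from the question:
import 'dart:io';
import 'package:sensors/sensors.dart';

Future<void> saveHistory(List<AccelerometerEvent> history) async {
  // StringBuffer appends cheaply instead of copying the whole string each time
  final buffer = StringBuffer('TIMESTAMP, X, Y, Z\n');
  for (final event in history) {
    buffer.writeln('${event.timestamp}, ${event.x}, ${event.y}, ${event.z}');
  }
  await File('events.csv').writeAsString(buffer.toString());
}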
I am getting the data from my JSON file through this code:
var content = await rootBundle.loadString("json/story1.json");
decodedContent = json.decode(content);
And I would like to be able to store an integer inside the story1.json file.
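Worth noting: rootBundle assets are packaged into the app read-only, so an integer can't be written back into json/story1.json at runtime. A common workaround, sketched here assuming the path_provider package and a hypothetical 'counter' key, is to copy the asset into a writable directory once and update the copy:
import 'dart:convert';
import 'dart:io';
import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

Future<void> storeInteger(int value) async {
  final dir = await getApplicationDocumentsDirectory();
  final file = File('${dir.path}/story1.json');
  // Seed the writable copy from the bundled asset on first run
  if (!await file.exists()) {
    await file.writeAsString(await rootBundle.loadString('json/story1.json'));
  }
  final decoded = json.decode(await file.readAsString());
  decoded['counter'] = value; // hypothetical key for the stored integer
  await file.writeAsString(json.encode(decoded));
}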
I have:

1. Raw XML filled by a select query. This XML is transformed into an HL7 message.
2. One of the tags of this XML represents a CLOB column from a table in the database.
3. I mapped this data (from the Edit Transformer section) as a variable.
4. Now I am trying to convert this variable into a Base64 string and then replace it in my transformed HL7 message.
5. I tried this conversion on a destination channel, which is a JavaScript Writer.
I read about and tried several conversion methods, like:
Packages.org.apache.commons.codec.binary.Base64.encodeBase64String();
but I only got error messages like:
EvaluatorException: Can't find method org.apache.commons.codec.binary.Base64.encodeBase64String(java.lang.String);
Code piece:
var ads=$('V_REPORT_CLOB');
var encoded = Packages.org.apache.commons.codec.binary.Base64.encodeBase64String(ads.toString());
It is pretty clear that I am a newbie at this. How can I manage to do this conversion?
Here is what I use for Base64 encoding a string, with your variable substituted.
//Encode Base 64//
var ads = $('V_REPORT_CLOB');
var adsLength = ads.length;
var base64Bytes = [];
// Build a byte array from the string's character codes
for (var i = 0; i < adsLength; i++) {
    base64Bytes.push(ads.charCodeAt(i));
}
// FileUtil.encode is Mirth's built-in helper for Base64-encoding a byte array
var encodedData = FileUtil.encode(base64Bytes);
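As for the original error: encodeBase64String() takes a byte[], not a java.lang.String, which is why Rhino reports that it can't find the method. A sketch of that variant (assuming the commons-codec jar that ships with Mirth is on the classpath):
var ads = $('V_REPORT_CLOB');
// Convert the string to a Java byte array before handing it to commons-codec
var bytes = new java.lang.String(ads).getBytes('UTF-8');
var encoded = Packages.org.apache.commons.codec.binary.Base64.encodeBase64String(bytes);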
I used the code in the link below to merge Word files into a single file:
http://devpinoy.org/blogs/keithrull/archive/2007/06/09/updated-how-to-merge-multiple-microsoft-word-documents.aspx
However, looking at the output file, I realized that it was unable to copy the header image from the first document. How do we merge documents while preserving format and content?
I would suggest GroupDocs.Merger Cloud for merging multiple Word documents into a single document; it keeps the formatting and contents of the source documents. It is a platform-independent REST API solution that does not depend on any third-party tool or software.
Sample C# code:
var configuration = new GroupDocs.Merger.Cloud.Sdk.Client.Configuration(MyAppSid, MyAppKey);
var apiInstance_Document = new GroupDocs.Merger.Cloud.Sdk.Api.DocumentApi(configuration);
var apiInstance_File = new GroupDocs.Merger.Cloud.Sdk.Api.FileApi(configuration);
var pathToSourceFiles = @"C:/Temp/input/";
var remoteFolder = "Temp/";
var joinItem_list = new List<JoinItem>();
try
{
    DirectoryInfo dir = new DirectoryInfo(pathToSourceFiles);
    System.IO.FileInfo[] files = dir.GetFiles();
    foreach (System.IO.FileInfo file in files)
    {
        // Upload each source document to cloud storage
        var request_upload = new GroupDocs.Merger.Cloud.Sdk.Model.Requests.UploadFileRequest(remoteFolder + file.Name, File.Open(file.FullName, FileMode.Open));
        var response_upload = apiInstance_File.UploadFile(request_upload);
        var item = new JoinItem
        {
            FileInfo = new GroupDocs.Merger.Cloud.Sdk.Model.FileInfo { FilePath = remoteFolder + file.Name }
        };
        joinItem_list.Add(item);
    }
    // Merge all uploaded documents into one output file
    var options = new JoinOptions
    {
        JoinItems = joinItem_list,
        OutputPath = remoteFolder + "Merged_Document.docx"
    };
    var request = new JoinRequest(options);
    var response = apiInstance_Document.Join(request);
    Console.WriteLine("Output file path: " + response.Path);
}
catch (Exception e)
{
    Console.WriteLine("Exception while Merging Documents: " + e.Message);
}
That code inserts a page break after each file.
Since sections control headers, if the second or a subsequent document has a header, you'll probably want to keep the original section properties and insert them after your first document.
If you look at your original document as a docx, you'll probably see that your section properties is a document-level sectPr element.
The easiest way around your problem may be to create a second section-properties element inside the last paragraph (which contains the header information). That way it should stay put when the documents are merged (i.e. other paragraphs are added after it); a sketch of the markup follows below.
That's the theory. See also http://www.pcreview.co.uk/forums/thread-898133.php
But I haven't tried it; it assumes InsertFile behaves as I expect it should.
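For concreteness, roughly what that nested section looks like in the document XML (a sketch; the relationship id rId8 is hypothetical and would be whatever the original sectPr's headerReference points at):
<!-- last paragraph of the first document: a sectPr nested inside pPr
     closes the section here, so its header survives appended content -->
<w:p>
  <w:pPr>
    <w:sectPr>
      <w:headerReference w:type="default" r:id="rId8"/>
      <w:pgSz w:w="11906" w:h="16838"/>
    </w:sectPr>
  </w:pPr>
</w:p>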