Flutter http large file download and save on disk

I'm trying to save a large file from the network to disk (a 160 MB file). I'm saving the chunks as soon as they arrive, but my saved file on disk is always corrupted and has a different size each time I re-run the script.
The reason I'm trying to save each chunk as soon as it arrives is to save memory. I tried fetching the entire file and then saving it, which caused my Flutter app to consume about 2 GB of memory.
This is my code:
final request = http.Request('GET', uri);
final streamedResponse = await request.send();
final File pathToSave = File('C:\\Downloads\\test.zip');
streamedResponse.stream.listen((value) async {
  await pathToSave.writeAsBytes(value, mode: FileMode.writeOnlyAppend, flush: true);
});
I'm not sure, but it seems the listen callback doesn't wait for the previous callback to finish; it fires the next one immediately, which interferes with the previous file write.
The same script works fine with small files.
Flutter 3.0.5 (desktop)
Windows 11
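For reference, the usual way to avoid interleaved writes is to forward the byte stream into a single IOSink, which writes the chunks strictly in order and keeps memory usage low. A minimal sketch, assuming package:http, with uri and the target path as placeholders:
import 'dart:io';
import 'package:http/http.dart' as http;

Future<void> downloadToFile(Uri uri, String path) async {
  final request = http.Request('GET', uri);
  final response = await request.send();
  // pipe() writes each chunk to the sink in arrival order
  // and closes the sink once the stream is done.
  await response.stream.pipe(File(path).openWrite());
}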

Related

Maximum limit of cache size in flutter

I am using Firestore as a database and cached_network_image to load and cache images in my Flutter app (iOS & Android). I noticed that the app's cache size gets too big (300+ MB) after running the app for a while (in debug mode).
Is there a maximum limit on the cache size that app uses in flutter?
Is there a way to force some limit on the cache size such that whenever the cache size reaches its maximum limit, oldest cached files will be removed?
cached_network_image relies on flutter_cache_manager
A CacheManager to download and cache files in the cache directory of
the app. Various settings on how long to keep a file can be changed.
How it works
By default the cached files are stored in the temporary directory of the app. This means the OS can delete the files any time.
Information about the files is stored in a database using sqflite. The file name of the database is the key of the cacheManager, which is why it has to be unique.
This cache information contains the end date until which the file is valid and the eTag to use with HTTP cache control.
Methods
removeFile removes a file from the cache.
emptyCache removes all files from the cache.
Example
void _clearCache() {
  DefaultCacheManager().emptyCache();
}
If you want to be able to delete images after some time, you will have to implement a custom cache that deletes images after a given number of days.
From the docs, TL;DR:
import 'package:flutter_cache_manager/flutter_cache_manager.dart';
import 'package:path/path.dart' as p;
import 'package:path_provider/path_provider.dart';

class CustomCacheManager extends BaseCacheManager {
  static const key = "customCache";

  static CustomCacheManager _instance;

  factory CustomCacheManager() {
    _instance ??= CustomCacheManager._();
    return _instance;
  }

  CustomCacheManager._()
      : super(key,
            maxAgeCacheObject: Duration(days: 7),
            maxNrOfCacheObjects: 20);

  @override
  Future<String> getFilePath() async {
    var directory = await getTemporaryDirectory();
    return p.join(directory.path, key);
  }
}
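A possible usage sketch (imageUrl is a hypothetical variable; getSingleFile comes from the BaseCacheManager API shown above), so that the 7-day / 20-object limits are enforced when fetching:
// Fetch through the custom cache; expired or excess entries are evicted.
final file = await CustomCacheManager().getSingleFile(imageUrl);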
The in-memory image cache can hold up to 1000 images, and up to 100 MB, by default. It may use more than that, but 100 MB is the minimum cap.
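If those defaults are too generous, they can be lowered at startup; a minimal sketch, assuming the ImageCache exposed through PaintingBinding (the 500-image and 50 MB figures are just examples):
import 'package:flutter/painting.dart';

void configureImageCache() {
  final cache = PaintingBinding.instance.imageCache;
  cache.maximumSize = 500;           // max number of cached images
  cache.maximumSizeBytes = 50 << 20; // max bytes (here 50 MB)
}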
For your use case, use extended_image for caching the image and clear the cache using its clearDiskCachedImages method:
/// Clear the disk cache directory, then return whether it succeeded.
/// [duration]: the time span used to compute whether a file has expired.
Future<bool> clearDiskCachedImages({Duration duration})

Where can I store the images for my android app?

I am building an app in Flutter and I want to store many images. Can anyone suggest where I should store the images so that they are easy to use in my app? Should I store them locally or in the cloud? If the cloud, which cloud or backend is good and well optimized for a Flutter app (like Mongo, Django, Firebase etc.)? Can anyone suggest the best one?
Any kind of help is appreciated, as I have no prior knowledge about the production part.
Storing images on a server can be very expensive, since the file sizes are very large compared to the usual data. So if you do not NEED to store them on a server, don't.
Storing images locally is pretty simple. You will want to use the path_provider package https://pub.dev/packages/path_provider . I'll post a function I am using in my current project that does this. You'll see, it's pretty simple.
Note: In my code I pull the file from my server. Obviously leave that part out if you are getting your images from a different source.
import 'dart:io';
import 'package:flutter/foundation.dart'; // consolidateHttpClientResponseBytes
import 'package:path_provider/path_provider.dart' as pathProvider;

Future<File> createFileOfPdfUrl(String fileLocation, String name) async {
  final url = Helper.baseUrl + "Files/Newsletter/" + fileLocation;
  final filename = url.substring(url.lastIndexOf("/") + 1);
  var request = await HttpClient().getUrl(Uri.parse(url));
  var response = await request.close();
  var bytes = await consolidateHttpClientResponseBytes(response);
  String dir = (await pathProvider.getApplicationDocumentsDirectory()).path;
  File file = File('$dir/$filename');
  await file.writeAsBytes(bytes);
  return file;
}

Loading video files from device as `ByteData` flutter

I'm using the Flutter camera package to record videos and save them to a temporary directory, after which I use Flutter's ffmpeg package to do some transformation. However, to achieve this, I first had to make a copy of the recorded video to create the output file path.
The challenge comes in when I'm trying to load the asset from the device. The block of code below does the copying and renaming of the file.
static Future<File> copyFileAssets(String assetName, String localName) async {
  ByteData assetByteData = await rootBundle.load(assetName);
  final List<int> byteList = assetByteData.buffer
      .asUint8List(assetByteData.offsetInBytes, assetByteData.lengthInBytes);
  final String fullTemporaryPath = join((await tempDirectory).path, localName);
  return File(fullTemporaryPath)
      .writeAsBytes(byteList, mode: FileMode.writeOnly, flush: true);
}
The issue lies with this line: ByteData assetByteData = await rootBundle.load(assetName);
I get this error message: Unable to load asset: /storage/emulated/0/Android/data/com.timz/files/timz/1585820950555.mp4. The weird thing is, this only happens when I run the build for the first time; everything works fine on subsequent hot restarts.
I later figured out the fix myself: rootBundle is meant for loading only assets whose paths you've declared in pubspec.yaml, yet somehow it miraculously loaded the saved file after a hot restart was applied.
Reading the file as bytes gave me what loading it with rootBundle was supposed to. Here's the code below.
Uint8List assetByteData = await File(assetName).readAsBytes();

UI is Freezing when compressing an image

I'm trying to compress an image from the camera or gallery, and I tried the answer in this question: Flutter & Firebase: Compression before upload image
But the UI freezes. Do you have any solution for that, and why does the image plugin have this problem?
UPDATE:
compressImage(imageFile).then((File file) {
  imageFile = file;
});

Future<File> compressImage(File imageFile) async {
  return compute(decodeImage, imageFile);
}

File decodeImage(File imageFile) {
  Im.Image image = Im.decodeImage(imageFile.readAsBytesSync());
  // choose the size here, it will maintain aspect ratio
  Im.Image smallerImage = Im.copyResize(image, 150);
  return File('123.jpg')
    ..writeAsBytesSync(Im.encodeJpg(smallerImage, quality: 85));
}
I meet "unhandled exception" in this code
This is because compression is done in the UI thread.
You can move computation to a new thread using compute() https://docs.flutter.io/flutter/foundation/compute.html
There are currently serious limitations on what a non-UI thread can do:
If you pass the image data itself, it is copied from one thread to the other, which can be slow. If you have the image in a file, as you get it from image_picker, it is better to pass the file path and read the image in the new thread.
You can only pass values that could be encoded as JSON (it's not actually encoded as JSON, but it supports the same types).
You cannot use plugins there. This means you need to move the compressed data back to the UI thread, either by passing the data (which is copied again) or by writing it to a file and passing back the path, though copying might be faster, because writing the file in one thread and reading it in the other is even slower (see the sketch below).
Then you can, for example, invoke the image upload to Firebase Cloud Storage from the UI thread; because this is a plugin, it runs in native code rather than on the UI thread. The UI thread just needs to pass the image along.
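A minimal sketch of the path-in, path-out variant described above (the function name and output path are illustrative; assumes a recent version of the image package, where decodeImage is nullable and copyResize takes named parameters):
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:image/image.dart' as Im;

// Top-level function so compute() can run it in a separate isolate.
// Only path strings cross the isolate boundary, not the image bytes.
String _compressInIsolate(String inputPath) {
  final image = Im.decodeImage(File(inputPath).readAsBytesSync());
  if (image == null) throw FormatException('Could not decode $inputPath');
  final smaller = Im.copyResize(image, width: 150);
  final outputPath = '$inputPath.compressed.jpg';
  File(outputPath).writeAsBytesSync(Im.encodeJpg(smaller, quality: 85));
  return outputPath;
}

Future<File> compressImage(File imageFile) async {
  final outputPath = await compute(_compressInIsolate, imageFile.path);
  return File(outputPath);
}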

Meteor: uploading file from client to Mongo collection vs file system vs GridFS

Meteor is great but it lacks native support for traditional file uploading. There are several options to handle file uploading:
From the client, data can be sent using:
Meteor.call('saveFile',data) or collection.insert({file:data})
'POST' form or HTTP.call('POST')
In the server, the file can be saved to:
a mongodb file collection by collection.insert({file:data})
file system in /path/to/dir
mongodb GridFS
What are the pros and cons for these methods and how best to implement them? I am aware that there are also other options such as saving to a third party site and obtain an url.
You can achieve file uploading with Meteor without using any additional packages or a third party:
Option 1: DDP, saving file to a mongo collection
/*** client.js ***/
// assign a change event to the input tag
'change input': function(event, template) {
  var file = event.target.files[0]; // assuming 1 file only
  if (!file) return;
  var reader = new FileReader(); // create a reader according to HTML5 File API
  reader.onload = function(event) {
    var buffer = new Uint8Array(reader.result); // convert to binary
    Meteor.call('saveFile', buffer);
  };
  reader.readAsArrayBuffer(file); // read the file as arraybuffer
}
/*** server.js ***/
Files = new Mongo.Collection('files');

Meteor.methods({
  'saveFile': function(buffer) {
    Files.insert({data: buffer});
  }
});
Explanation
First, the file is grabbed from the input using the HTML5 File API. A reader is created using new FileReader, and the file is read with readAsArrayBuffer. This arraybuffer, if you console.log it, returns {}, and DDP can't send it over the wire, so it has to be converted to a Uint8Array.
When you put this in Meteor.call, Meteor automatically runs EJSON.stringify(Uint8Array) and sends it with DDP. You can check the data in the Chrome console websocket traffic; you will see a string resembling base64.
On the server side, Meteor calls EJSON.parse() and converts it back to binary.
Pros
Simple, no hacky way, no extra packages
Sticks to the Data on the Wire principle
Cons
More bandwidth: the resulting base64 string is ~ 33% larger than the original file
File size limit: can't send big files (limit ~ 16 MB?)
No caching
No gzip or compression yet
Takes up lots of memory if you publish files
Option 2: XHR, post from client to file system
/*** client.js ***/
// assign a change event to the input tag
'change input': function(event, template) {
  var file = event.target.files[0];
  if (!file) return;
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/uploadSomeWhere', true);
  xhr.onload = function(event) {...};
  xhr.send(file);
}
/*** server.js ***/
var fs = Npm.require('fs');

// using internal webapp or iron:router
WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res) {
  //var start = Date.now()
  var file = fs.createWriteStream('/path/to/dir/filename');
  file.on('error', function(error) {...});
  file.on('finish', function() {
    res.writeHead(...);
    res.end(); // end the response
    //console.log('Finish uploading, time taken: ' + (Date.now() - start));
  });
  req.pipe(file); // pipe the request to the file
});
Explanation
The file in the client is grabbed, an XHR object is created and the file is sent via 'POST' to the server.
On the server, the data is piped into an underlying file system. You can additionally determine the filename, perform sanitisation or check if it exists already etc before saving.
Pros
Takes advantage of XHR 2, so you can send an arraybuffer; no new FileReader() is needed, compared to option 1
An arraybuffer is less bulky than a base64 string
No size limit: I sent a ~200 MB file on localhost with no problem
The file system is faster than mongodb (more on this in the benchmark below)
Cacheable and gzippable
Cons
XHR 2 is not available in older browsers (e.g. below IE10), but of course you can implement a traditional <form> POST instead. I only used xhr = new XMLHttpRequest() rather than HTTP.call('POST') because the current HTTP.call in Meteor is not yet able to send an arraybuffer (point me to it if I am wrong).
/path/to/dir/ has to be outside meteor, otherwise writing a file in /public triggers a reload
Option 3: XHR, save to GridFS
/*** client.js ***/
// same as option 2

/*** version A: server.js ***/
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
var GridStore = MongoInternals.NpmModule.GridStore;

WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res) {
  //var start = Date.now()
  var file = new GridStore(db, 'filename', 'w');
  file.open(function(error, gs) {
    file.stream(true); // true will close the file automatically once piping finishes
    file.on('error', function(e) {...});
    file.on('end', function() {
      res.end(); // send end response
      //console.log('Finish uploading, time taken: ' + (Date.now() - start));
    });
    req.pipe(file);
  });
});

/*** version B: server.js ***/
var db = MongoInternals.defaultRemoteCollectionDriver().mongo.db;
var GridStore = Npm.require('mongodb').GridStore; // also need to add Npm.depends({mongodb:'2.0.13'}) in package.js

WebApp.connectHandlers.use('/uploadSomeWhere', function(req, res) {
  //var start = Date.now()
  var file = new GridStore(db, 'filename', 'w').stream(true); // start the stream
  file.on('error', function(e) {...});
  file.on('end', function() {
    res.end(); // send end response
    //console.log('Finish uploading, time taken: ' + (Date.now() - start));
  });
  req.pipe(file);
});
Explanation
The client script is the same as in option 2.
According to the last line of Meteor 1.0.x mongo_driver.js, a global object called MongoInternals is exposed; you can call defaultRemoteCollectionDriver() to return the current database db object, which is required for the GridStore. In version A, the GridStore is also exposed by MongoInternals. The mongo driver used by current Meteor is v1.4.x.
Then, inside a route, you can create a new write object by calling var file = new GridStore(...) (API). You then open the file and create a stream.
I also included a version B. In this version, the GridStore is obtained from a new mongodb driver via Npm.require('mongodb'); this mongo is the latest, v2.0.13 as of this writing. The new API doesn't require you to open the file: you can call stream(true) directly and start piping.
Pros
Same as in option 2, sent using arraybuffer, less overhead compared to base64 string in option 1
No need to worry about file name sanitisation
Separation from the file system: no need to write to a temp dir; the db can be backed up, replicated, sharded, etc.
No need to implement any other package
Cacheable and can be gzipped
Store much larger sizes compared to normal mongo collection
Using pipe to reduce memory overload
Cons
Unstable Mongo GridFS. I included version A (mongo 1.x) and version B (mongo 2.x). In version A, when piping large files > 10 MB, I got lots of errors, including corrupted files and unfinished pipes. This problem is solved in version B using mongo 2.x; hopefully Meteor will upgrade to mongodb 2.x soon.
API confusion. In version A you need to open the file before you can stream, but in version B you can stream without calling open. The API doc is also not very clear, and the stream is not 100% syntax-exchangeable with Npm.require('fs'): in fs you listen for file.on('finish'), but in GridFS it is file.on('end') when writing finishes.
GridFS doesn't provide write atomicity, so if there are multiple concurrent writes to the same file, the final result may be very different.
Speed. Mongo GridFS is much slower than the file system.
Benchmark
As you can see in options 2 and 3, I included var start = Date.now(), and on write end I console.log the elapsed time in ms. The results below are from a dual-core, 4 GB RAM, HDD, Ubuntu 14.04 machine.
file size    GridFS (ms)    FS (ms)
100 KB       50             2
1 MB         400            30
10 MB        3500           100
200 MB       80000          1240
You can see that FS is much faster than GridFS. For a 200 MB file, it takes ~80 sec using GridFS but only ~1 sec with FS. I haven't tried an SSD; the results may differ. However, in real life the bandwidth may dictate how fast the file is streamed from client to server; achieving a 200 MB/sec transfer speed is not typical. On the other hand, a transfer speed of ~2 MB/sec (GridFS) is more the norm.
Conclusion
By no means is this comprehensive, but you can decide which option is best for your needs.
DDP is the simplest and sticks to the core Meteor principle, but the data is bulkier, not compressible during transfer, and not cacheable. Still, this option may be good if you only need small files.
XHR coupled with the file system is the 'traditional' way. Stable API, fast, 'streamable', compressible, cacheable (ETag etc.), but it needs to live in a separate folder.
XHR coupled with GridFS: you get the benefits of replica sets, scalability, no touching the file system dir, large files, and many files if the file system restricts their number; it is also cacheable and compressible. However, the API is unstable, you get errors on multiple writes, and it's s..l..o..w..
Hopefully soon Meteor DDP can support gzip, caching etc., and GridFS can get faster...
Hi, just to add on to Option 1 regarding viewing of the file: I did it without EJSON.
<template name='tryUpload'>
  <p>Choose file to upload</p>
  <input name="upload" class='fileupload' type='file'>
</template>

Template.tryUpload.events({
  'change .fileupload': function(event, template) {
    console.log('change & view');
    var f = event.target.files[0]; // assuming upload 1 file only
    if (!f) return;
    var r = new FileReader();
    r.onload = function(event) {
      var buffer = new Uint8Array(r.result); // convert to binary
      // view the raw bytes as a string (fine for small files)
      var toString = String.fromCharCode.apply(null, buffer);
      console.log(toString);
      //Meteor.call('saveFiles', buffer);
    };
    r.readAsArrayBuffer(f);
  }
});