I need to send a .txt file from a device to my app (worst case almost 2 MB). The BLE device divides the file into packets. I don't know if my method is correct, but I created a loop of characteristic.write/characteristic.read calls, telling the device each time which packet to send.
Here's my code:
for (int i = 0; i < packNumber.length; i++) {
  initialValue = '9,50,100,$i,0,$checksumId,0/';
  await characteristic.write(utf8.encode(initialValue));
  // Wait for the device to prepare the packet before reading.
  await Future.delayed(Duration(milliseconds: 100));
  final rValue = await characteristic.read();
  // do something with rValue
}
It works, but is it the best solution? And if so, how can I speed up the transfer? (For now I have to set a delay before reading, to wait for characteristic.write to finish.)
Thank you guys
You should change the remote device to send notifications instead of using reads. That gives maximum throughput.
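For example, roughly like this (a sketch assuming a flutter_blue-style API, since that's what the question's characteristic calls suggest; startCommand stands in for whatever command tells the device to begin streaming):

// Subscribe to notifications instead of polling with read().
await characteristic.setNotifyValue(true);
final chunks = <int>[];
final subscription = characteristic.value.listen((chunk) {
  // Each notification delivers one packet; reassemble and verify the
  // checksum here instead of issuing a read per packet.
  chunks.addAll(chunk);
});
// Tell the device to start streaming once, then let notifications arrive
// at the connection interval rather than waiting a fixed 100 ms per packet.
await characteristic.write(utf8.encode(startCommand));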
Is it possible to save the processed image as a File?
Here is what I'm trying to do: our app has a KYC (Know Your Customer) flow, and we implemented face detection to make users do several poses. What I want is to save each pose as an image file and upload it to the database.
Example Scenario:
App asks the user to smile > the user smiles > save the image.
Here is what I have right now:
Where the app checks if the user smiled
if (faces.isNotEmpty) {
  if (inputImage.inputImageData?.size != null &&
      inputImage.inputImageData?.imageRotation != null) {
    if (faces[0].smilingProbability! > 0.85) {
      await _getImg();
    }
  }
}
Then I call a function that stops the image stream and then takes a picture (this works, but on some physical devices it crashes). If I don't stop the image stream and call takePicture() right away, it crashes every time.
_getImg() async {
  setState(() {
    globalBusy = true;
  });
  await _controller.stopImageStream();
  var img = await _controller.takePicture();
  VerificationVarHandler.livelinesImgsPaths.add(img.path);
}
As you can see, this isn't the best approach, at least in my opinion. Maybe I can use the inputImage from _processCameraImage(), since it carries the raw bytes? Then I could pass those bytes to a decoder and save the result locally when I trigger a function, as in the rough sketch below.
Or, better yet, is there a more elegant way of achieving this?
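For reference, here is a rough sketch of what I mean by the bytes idea, assuming the stream delivers bgra8888 frames (the iOS default) and using package:image v4; Android's default yuv420 would need a color-space conversion first, which I've omitted:

import 'dart:io';
import 'package:camera/camera.dart';
import 'package:image/image.dart' as img;

// Decode a CameraImage delivered by the image stream and save it as a PNG,
// avoiding the stopImageStream()/takePicture() dance entirely.
Future<File> saveCameraImage(CameraImage cameraImage, String path) async {
  final plane = cameraImage.planes[0];
  final decoded = img.Image.fromBytes(
    width: cameraImage.width,
    height: cameraImage.height,
    bytes: plane.bytes.buffer,
    order: img.ChannelOrder.bgra, // bgra8888 assumed
  );
  return File(path).writeAsBytes(img.encodePng(decoded));
}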
You can check out this gfg article - https://www.geeksforgeeks.org/face-detection-in-flutter-using-firebase-ml-kit/
I have a messaging app that receives push notifications from Firebase Cloud Messaging and, when it does, pushes them to a stream. To keep track of the different streams, I have a streamtable, which is a global variable. This is what I have:
Map<String, StreamController<dynamic>> streamtable;

Future<dynamic> myBackgroundMessageHandler(Map<String, dynamic> message) async {
  log('FCM onBackground: $message');
  log('streamTable: $streamtable');
  addToStream(message);
  await showNotification(message);
  return;
}
bool addToStream(Map<String, dynamic> message) {
  var data;
  if (Platform.isAndroid) {
    data = message['data'];
  } else if (Platform.isIOS) {
    data = message;
  }
  bool status;
  try {
    switch (data['streamType']) {
      case "Message":
        streamtable['MessageStream'].add(message);
        status = true;
        break;
      default:
        log("streamType: ${data['streamType']}");
        status = false;
        break;
    }
  } catch (e) {
    status = false;
    log('addToStream: error($e)');
  }
  return status;
}
When the app is in the foreground, everything works great. However, when the app is in the background, streamtable becomes null and nothing works.
First off, why is that happening? If I bring the app back to the foreground, streamtable is not null.
How do I store data I receive while the app is in the background?
Mobile systems work differently from desktop systems. On desktop systems, "in the background" just means the UI is not on top, but the program is still running. On mobile systems, "in the background" means the app is not in use and as such can be killed by the operating system any time it sees fit.
Different systems have different callbacks to notify the app of impending shutdown so it can write its last data to storage and restore from that state when it is launched again.
The assumption that your app holds global data and retains it in memory while "in the background", since it's still running, is not correct. It won't. That memory can be gone at any time on mobile systems.
You will need to actually save the data you want to restore later; the in-memory state only lasts as long as the app is in the foreground or the operating system feels generous.
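As a minimal sketch of what saving could look like with the shared_preferences plugin (the pendingMessages key is just an example name):

import 'dart:convert';
import 'package:shared_preferences/shared_preferences.dart';

// Persist each background message to disk; drain this list back into your
// streams when the app returns to the foreground.
Future<void> persistMessage(Map<String, dynamic> message) async {
  final prefs = await SharedPreferences.getInstance();
  final pending = prefs.getStringList('pendingMessages') ?? <String>[];
  pending.add(jsonEncode(message));
  await prefs.setStringList('pendingMessages', pending);
}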
Seems like it's a bug in Flutter: https://github.com/FirebaseExtended/flutterfire/issues/1878
I am quite new to Flutter, and I am currently working on an application that needs barcode reading. So I used the barcode_scan library, and I am now able to get the barcode number from a scanned barcode.
To be clear, I am not asking how to scan a barcode / get the barcode number.
My question is: how can I get more information about a product from its barcode number in Flutter (e.g. product name)? Can I do this from the barcode_scan library, or will I need something else?
Edit: Forgot to mention that when scanning QR codes I am able to get the information they encode (e.g. product name or URL), but this is not the case with barcodes.
My current (relevant) code is as follows:
String result = 'Hey there!';

Future scanBarcode() async {
  if (await Permissions.checkCameraPermission()) {
    try {
      await BarcodeScanner.scan().then((scan_result) {
        // here scan_result is the obtained barcode number
        setState(() { result = scan_result; });
      });
    }
    // handling exceptions...
  } else {
    setState(() { result = 'Camera permission denied'; });
  }
}
(Side note: Permissions.checkCameraPermission() is a function I created to check whether camera permission has been granted; it shouldn't be relevant to the question.)
I'm using the same plugin for scanning barcodes and QR codes in my Flutter app.
String result = 'Hey there!';

Future scanBarcode() async {
  if (await Permissions.checkCameraPermission()) {
    try {
      // I get the scan result from BarcodeScanner this way and it is working for me.
      result = await BarcodeScanner.scan();
    }
    // handling exceptions...
  } else {
    setState(() { result = 'Camera permission denied'; });
  }
}
Well, it seems the only plausible way to do it from Flutter would be to use a barcode lookup API, as @RichardHeap proposed in the comments. If my app were Flutter-only (without a backend server connected to it), I believe I would have gone that way.
But I guess I'll just send the barcode number itself to the backend and deal with it on the server side.
What I understand is that, on scanning the code, you need the information/data about the product.
If I understood it right, there are several barcode data aggregators. You can get APIs from them, but there is no single place where you can get all the information.
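For example, a lookup against Open Food Facts (one such aggregator with a free public API, mostly covering food products) might look roughly like this; the endpoint below is its v0 format, and the http package is assumed:

import 'dart:convert';
import 'package:http/http.dart' as http;

// Fetch a product name for a scanned barcode from Open Food Facts.
Future<String?> lookupProductName(String barcode) async {
  final uri = Uri.parse('https://world.openfoodfacts.org/api/v0/product/$barcode.json');
  final response = await http.get(uri);
  if (response.statusCode != 200) return null;
  final body = jsonDecode(response.body) as Map<String, dynamic>;
  if (body['status'] != 1) return null; // status 1 means the product was found
  return body['product']?['product_name'] as String?;
}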
I need to ensure that a certain HTTP request is sent successfully. Therefore, I'm wondering if a simple way exists to move such a request into a background service task.
The background of my question is the following:
We're developing a survey application using Flutter. Unfortunately, the app is intended to be used in an environment where no mobile internet connection can be guaranteed. Therefore, I'm not able to simply post the result of the survey once; I have to retry if it fails due to network problems. My current code looks like the following. The problem with my current solution is that it only works while the app is active the whole time. If the user minimizes or closes the app, the data I want to upload is lost.
Therefore, I'm looking for a solution to wrap the upload process in a background service task so that it will be processed even when the user closes the app. I found several posts and plugins (namely https://medium.com/flutter-io/executing-dart-in-the-background-with-flutter-plugins-and-geofencing-2b3e40a1a124 and https://pub.dartlang.org/packages/background_fetch), but they don't help in my particular use case. The first describes a way for the app to be notified when a certain event occurs (namely a geofencing event), and the second only runs every 15 minutes and targets a different scenario as well.
Does somebody know a simple way to ensure that a request is processed even with a bad internet connection (or none at the moment), while still allowing the user to minimize or even close the app?
Future _processUploadQueue() async {
  int retryCounter = 0;
  Future.doWhile(() {
    if (retryCounter == 10) {
      print('Aborted after 10 tries');
      return false;
    }
    if (_request.uploaded) {
      print('Upload ready');
      return false;
    }
    if (!_request.uploaded) {
      _networkService.sendRequest(request: _request.entry).then((id) {
        print(id);
        setState(() {
          _request.uploaded = true;
        });
      }).catchError((e) {
        retryCounter++;
        print(e);
      });
    }
    // e ^ retryCounter, min 0 sec, max 10 minutes
    int waitTime = min(max(0, exp(retryCounter)).round(), 600);
    print('Waiting $waitTime seconds till next try');
    return Future.delayed(Duration(seconds: waitTime), () {
      print('Waited $waitTime seconds');
      return true;
    });
  }).then(print).catchError(print);
}
You can use the shared_preferences plugin to save each HTTP request on the device until the upload completes successfully, like this:
{
  "requests": [
    {
      "id": "8eh1gc",
      "request": "..."
    },
    ...
  ]
}
Then, whenever the app is launched, check if any requests are in the list, retry them, and delete them once they complete. You could also use background_fetch to do this every 15 minutes.
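For illustration, a launch-time retry could look roughly like this (shared_preferences assumed; sendRequest is the service from your question, and pendingRequests is just an example key):

import 'dart:convert';
import 'package:shared_preferences/shared_preferences.dart';

// On launch (or in a background_fetch callback), retry everything queued
// and keep whatever still fails for the next attempt.
Future<void> retryPendingRequests() async {
  final prefs = await SharedPreferences.getInstance();
  final pending = prefs.getStringList('pendingRequests') ?? <String>[];
  final stillPending = <String>[];
  for (final raw in pending) {
    final entry = jsonDecode(raw) as Map<String, dynamic>;
    try {
      await _networkService.sendRequest(request: entry['request']);
    } catch (_) {
      stillPending.add(raw); // keep for the next attempt
    }
  }
  await prefs.setStringList('pendingRequests', stillPending);
}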
I'm using Plugin.Media from @JamesMontemagno, version 2.4.0-beta (which fixes picture orientation). It works on Android 4.1.2 (Jelly Bean) and Marshmallow, but NOT on my Galaxy S5 Neo with Android 5.1.1.
Basically, when I take a picture, the app never returns to the page from which I started the process; it always returns to the initial home page.
On devices where it works, when I take a picture I see that the application first fires OnSleep, then, after taking the picture, fires OnResume.
On my device where it is NOT working, it fires OnSleep, and after taking the picture it doesn't fire OnResume; it runs the initialization page and then fires OnStart.
For this reason it doesn't open the page where I was when taking the picture.
What should I do to make sure it fires OnResume, returning to the correct page, instead of OnStart, which returns to the initial home page?
In addition, when I take a picture it takes almost 30 seconds to get back to the code after awaiting the TakePhotoAsync process, and that's too slow!
Here is my code:
MyTapGestureRecognizerEditPicture.Tapped += async (sender, e) =>
{
    //Display action sheet
    String MyActionResult = await DisplayActionSheet(AppLocalization.UserInterface.EditImage,
                                                     AppLocalization.UserInterface.Cancel,
                                                     AppLocalization.UserInterface.Delete,
                                                     AppLocalization.UserInterface.TakePhoto,
                                                     AppLocalization.UserInterface.PickPhoto);
    //Execute action result
    if (MyActionResult == AppLocalization.UserInterface.TakePhoto)
    {
        //Take photo
        await CrossMedia.Current.Initialize();
        if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
        {
            await DisplayAlert(AppLocalization.UserInterface.Alert, AppLocalization.UserInterface.NoCameraAvailable, AppLocalization.UserInterface.Ok);
        }
        else
        {
            var MyPhotoFile = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
            {
                Directory = "MyApp",
                Name = "MyAppProfile.jpg",
                SaveToAlbum = true,
                PhotoSize = Plugin.Media.Abstractions.PhotoSize.Small
            });
            if (MyPhotoFile != null)
            {
                //Render image
                MyProfilePicture.Source = ImageSource.FromFile(MyPhotoFile.Path);
                //Save image on database
                MemoryStream MyMemoryStream = new MemoryStream();
                MyPhotoFile.GetStream().CopyTo(MyMemoryStream);
                byte[] MyArrBytePicture = MyMemoryStream.ToArray();
                await SaveProfilePicture(MyArrBytePicture);
                MyPhotoFile.Dispose();
                MyMemoryStream.Dispose();
            }
        }
    }
    if (MyActionResult == AppLocalization.UserInterface.PickPhoto)
    {
        //Pick photo
        await CrossMedia.Current.Initialize();
        if (!CrossMedia.Current.IsPickPhotoSupported)
        {
            await DisplayAlert(AppLocalization.UserInterface.Alert, AppLocalization.UserInterface.PermissionNotGranted, AppLocalization.UserInterface.Ok);
        }
        else
        {
            var MyPhotoFile = await CrossMedia.Current.PickPhotoAsync();
            if (MyPhotoFile != null)
            {
                //Render image
                MyProfilePicture.Source = ImageSource.FromFile(MyPhotoFile.Path);
                //Save image on database
                MemoryStream MyMemoryStream = new MemoryStream();
                MyPhotoFile.GetStream().CopyTo(MyMemoryStream);
                byte[] MyArrBytePicture = MyMemoryStream.ToArray();
                await SaveProfilePicture(MyArrBytePicture);
                MyPhotoFile.Dispose();
                MyMemoryStream.Dispose();
            }
        }
    }
};
Please help!! We need to deploy this app but we cannot do it with this problem.
Thank you in advance!
It is perfectly normal for the Android OS to terminate and restart an Activity. As you are seeing, your app's Activity will be automatically restarted when the camera app exits and the OS returns control to your app. The odds are the OS just needed more memory in order to take that photo with the Neo's 16 MP camera; you can watch the logcat output to confirm that.
Restarted – It is possible for an activity that is anywhere from paused to stopped in the lifecycle to be removed from memory by Android. If the user navigates back to the activity it must be restarted, restored to its previously saved state, and then displayed to the user.
What to do:
So in the Xamarin.Forms OnStart lifecycle method, you need to restore your application to a valid running state (initializing variables, performing any bindings, etc...).
Plugin code:
The Android platform code for the TakePhotoAsync method looks fine to me, but remember that the memory for the image passed back via the Task will be doubled as it is marshaled from the ART VM back to the Mono VM. Calling GC.Collect() as soon as possible after the return will help (but your Activity is restarting anyway...).
public async Task<MediaFile> TakePhotoAsync(StoreCameraMediaOptions options)
{
    ~~~
    var media = await TakeMediaAsync("image/*", MediaStore.ActionImageCapture, options);
In turn calls:
this.context.StartActivity(CreateMediaIntent(id, type, action, options));
There's not much else you can really do within the Android OS to pop up the camera.
In addition, when I take a picture it takes almost 30 seconds to get back to the code after awaiting TakePhotoAsync process, and it's too slow!
Is that on your Neo? Or all devices?
I would call that very suspect (i.e. a bug), as even flushing all the Java memory after the native camera Intent/Activity, plus the restart time for your app's Activity, should not take 30 seconds on an octa-core 1.6 GHz Cortex... but I do not have your device, app, and code in front of me...