Live text recognition (region of interest) - Flutter

I have live text recognition. I used the libraries https://pub.dev/packages/google_mlkit_text_recognition and https://pub.dev/packages/camera,
but I have a problem:
I need to detect text only in the marked part of the preview (a region of interest).
Get live preview function:
Future _processCameraImage(CameraImage image) async {
  // Concatenate the bytes from all image planes into a single buffer.
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();
  final Size imageSize =
      Size(image.width.toDouble(), image.height.toDouble());

  final camera = cameras[_cameraIndex];
  final imageRotation =
      InputImageRotationValue.fromRawValue(camera.sensorOrientation) ??
          InputImageRotation.rotation0deg;
  final inputImageFormat =
      InputImageFormatValue.fromRawValue(image.format.raw) ??
          InputImageFormat.nv21;
  final planeData = image.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: plane.height,
        width: plane.width,
      );
    },
  ).toList();

  final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );
  final inputImage =
      InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);

  widget.onImage(inputImage);
}
Processing image function:
Future<void> processImage(InputImage inputImage) async {
  if (!_canProcess) return;
  if (_isBusy) return;
  _isBusy = true;
  final recognizedText = await _textRecognizer.processImage(inputImage);

  if (mounted) {
    for (var block in recognizedText.blocks) {
      for (var line in block.lines) {
        for (var element in line.elements) {
          // Keep only elements that are exactly 17 characters long.
          if (element.text.length == 17) {
            setState(() {
              _text = element.text;
            });
          }
        }
      }
    }
  }
  _isBusy = false;
}
}

I had a similar task and used the mask_for_camera_view package:
create your own frame and compute the coordinates of the cropped picture.
More details and a photo example are on GitHub.
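If you'd rather avoid a masking package, another approach is to run recognition on the whole frame and keep only the elements whose bounding boxes fall inside the region of interest. A minimal sketch, assuming `recognizedText` comes from `_textRecognizer.processImage` as above, and where `cropRect` is a hypothetical ROI that you must first map into image coordinates yourself (accounting for preview rotation and scaling):

```dart
// Sketch: filter recognized text by a region of interest.
// `cropRect` is a hypothetical ROI in *image* coordinates; mapping it from
// the on-screen overlay is omitted and depends on your preview scaling.
final Rect cropRect = Rect.fromLTWH(0, 200, 720, 150);

final matches = <String>[];
for (final block in recognizedText.blocks) {
  for (final line in block.lines) {
    for (final element in line.elements) {
      // Treat an element as "inside" when its center lies within the ROI.
      if (cropRect.contains(element.boundingBox.center)) {
        matches.add(element.text);
      }
    }
  }
}
```

Using the box center rather than full containment is a deliberate tolerance choice, so text that slightly overlaps the frame edge is still accepted.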

Related

How can I upload multiple photos in a Flutter app via ImagePicker

I want to add a function that can upload multiple photos via ImagePicker.
In this code, I can only upload a single photo, not multiple.
This app is built with Flutter, Dart, and a Firebase backend.
[Code]
void dispose() {
  textEditingController.dispose();
  super.dispose();
}

File _image;

Future _getImage() async {
  var image = await ImagePicker.pickImage(
    source: ImageSource.gallery,
    maxWidth: 1000,
    maxHeight: 1000,
  );
  setState(() {
    _image = image;
  });
}
Future _uploadFile(BuildContext context) async {
  if (_image != null) {
    final firebaseStorageRef = FirebaseStorage.instance
        .ref()
        .child('post')
        .child('${DateTime.now().millisecondsSinceEpoch}.png');
    final task = firebaseStorageRef.putFile(
      _image,
      StorageMetadata(contentType: 'image/png'),
    );
    final storageTaskSnapshot = await task.onComplete;
    final downloadUrl = await storageTaskSnapshot.ref.getDownloadURL();
    await Firestore.instance.collection('post').add({
      'contents': textEditingController.text,
      'displayName': widget.user.displayName,
      'email': widget.user.email,
      'photoUrl': downloadUrl,
      'userPhotoUrl': widget.user.photoUrl,
    });
  }
}
final images = await _picker.pickMultiImage(
  maxHeight: 1024,
  maxWidth: 1024,
  imageQuality: 50,
);
I created three functions here: to pick files with ImagePicker and to upload them to Firebase Storage.
First, pick images from the gallery:
final imageFiles = await pickImages();
Second, upload the images:
final path = 'path/where/you/want/to/save/your/images';
final imageUrls = await uploadImages(imageFiles, path);
print(imageUrls);
You can now use the image URLs to save to Firestore.
Future<List<File>> pickImages() async {
  ImagePicker picker = ImagePicker();
  final images = await picker.pickMultiImage(
      maxHeight: 1000, maxWidth: 1000, imageQuality: 90);
  List<File> files = [];
  if (images == null || images.isEmpty) return [];
  for (var i = 0; i < images.length; i++) {
    final file = File(images[i].path);
    files.add(file);
  }
  return files;
}
Future<String?> _uploadImageFile(File file, String path) async {
  try {
    final storage = FirebaseStorage.instance;
    TaskSnapshot? taskSnapshot;
    final storageRef = storage.ref().child(path);
    final uploadTask = storageRef.putFile(file);
    taskSnapshot = await uploadTask.whenComplete(() {});
    final imageUrl = await taskSnapshot.ref.getDownloadURL();
    return imageUrl;
  } catch (e) {
    throw Exception(e.toString());
  }
}
Future<List<String>> uploadImages(
  List<File> files,
  String path,
) async {
  final urls = <String>[];
  try {
    if (files.isNotEmpty) {
      for (var i = 0; i < files.length; i++) {
        final file = files[i];
        final imagePath = '$path/${Random().nextInt(10000)}.jpg';
        final url = await _uploadImageFile(file, imagePath);
        urls.add(url!);
      }
    }
    return urls;
  } on FirebaseException {
    rethrow;
  }
}
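Once `uploadImages` returns, the download URLs can be stored with the post document. A sketch using the current cloud_firestore API; the plural `photoUrls` field is an assumption mirroring the question's single-photo `photoUrl`:

```dart
// Sketch: save the list of uploaded image URLs to Firestore.
// Assumes `imageUrls` came from uploadImages() above.
await FirebaseFirestore.instance.collection('post').add({
  'contents': textEditingController.text,
  'photoUrls': imageUrls, // a List<String> instead of a single 'photoUrl'
});
```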
Instead of using ImagePicker.pickImage, use ImagePicker.pickMultiImage. That gives you a List<XFile> instead of a single XFile. Then you can upload all the images in the list. For instance, add an image parameter to your _uploadFile function so that its signature is
Future _uploadFile(BuildContext context, XFile image)
and upload all images like
for (final image in images) {
  _uploadFile(context, image);
}
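Putting the two pieces together, a minimal end-to-end sketch (using the image_picker 0.8+ API; error handling omitted, and `_uploadFile` is assumed to have been given the extra `XFile` parameter):

```dart
// Sketch: pick several images and upload each one.
Future<void> _getAndUploadImages() async {
  final picker = ImagePicker();
  // pickMultiImage returns the selected files, or an empty/null list
  // when the user cancels.
  final List<XFile>? images = await picker.pickMultiImage(
    maxWidth: 1000,
    maxHeight: 1000,
  );
  if (images == null || images.isEmpty) return;
  for (final image in images) {
    await _uploadFile(context, image); // extended signature, as above
  }
}
```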

Flutter Google-Ml-Kit-plugin - FaceDetector not working in iPhone cameras

If we take photos on an iPhone with
final XFile? image = await picker.pickImage(
  source: ImageSource.camera,
  maxWidth: 300,
  maxHeight: 300,
  preferredCameraDevice: CameraDevice.front,
);
then
final List<Face> faces = await faceDetector.processImage(inputImage);
always gives an empty list.
final FaceDetector faceDetector = FaceDetector(
    options: FaceDetectorOptions(
        enableTracking: true,
        enableContours: true,
        enableClassification: true,
        enableLandmarks: true,
        performanceMode: FaceDetectorMode.accurate));
final inputImage = InputImage.fromFilePath(image.path);
final List<Face> faces = await faceDetector.processImage(inputImage);
print(faces.length); // IS ALWAYS ZERO
It works perfectly fine with source: ImageSource.gallery on an iPhone.
Fixed by resizing the image before detection:
import 'dart:math' as Math;
import 'package:image/image.dart' as Im;

class CompressObject {
  File imageFile;
  String path;
  int rand;
  CompressObject(this.imageFile, this.path, this.rand);
}

Future<String> compressImage(CompressObject object) async {
  return compute(decodeImage, object);
}

String decodeImage(CompressObject object) {
  Im.Image? image = Im.decodeImage(object.imageFile.readAsBytesSync());
  Im.Image smallerImage = Im.copyResize(
      image!, width: 200, height: 200
  ); // choose the size here; it will maintain the aspect ratio
  var decodedImageFile = File(object.path + '/img_${object.rand}.jpg');
  decodedImageFile.writeAsBytesSync(Im.encodeJpg(smallerImage, quality: 85));
  return decodedImageFile.path;
}
Then:
void chooseImage(bool isFromGallery) async {
  final XFile? image = await picker.pickImage(
    source: isFromGallery ? ImageSource.gallery : ImageSource.camera,
    preferredCameraDevice: CameraDevice.front,
  );
  if (image != null) {
    if (Platform.isIOS) {
      final tempDir = await getTemporaryDirectory();
      final rand = Math.Random().nextInt(10000);
      CompressObject compressObject =
          CompressObject(File(image.path), tempDir.path, rand);
      String filePath = await compressImage(compressObject);
      print('new path: ' + filePath);
      file = File(filePath);
    } else {
      file = File(image.path);
    }
  }
  final inputImage = InputImage.fromFilePath(file.path);
  final List<Face> faces = await faceDetector.processImage(inputImage);
  // ...
}

Image Picker does not work properly with google_mlkit_object_detector, only on the iOS Simulator

Once I added google_mlkit_object_detector, image_picker stopped working. I can access the gallery, but no image is returned when this package is added.
void _incrementCounter() async {
  print('start');
  image = await ImagePicker().pickImage(source: ImageSource.gallery); // IT WILL STOP HERE AND CAN'T PROCESS THE EXECUTION
  print('succeed');
  setState(() {});
}

void processing() async {
  final inputImage = InputImage.fromFile(File(image!.path));
  final objectDetector = ObjectDetector(
      options: ObjectDetectorOptions(mode: DetectionMode.singleImage));
  final List<DetectedObject> objects =
      await objectDetector.processImage(inputImage);
  for (DetectedObject detectedObject in objects) {
    final rect = detectedObject.boundingBox;
    final trackingId = detectedObject.trackingId;
    for (Label label in detectedObject.labels) {
      print('${label.text} ${label.confidence}');
    }
  }
}

Image dimension, ByteBuffer size and format don't match

I'm trying to make a face recognition app in Flutter. Most of the code is taken from here. That project used Firebase ML Vision (now deprecated), so I followed the migration guide to Google ML Kit and changed the face detection part of the code.
Following is the code for the detect function:
Future<List<Face>> detect(CameraImage image, InputImageRotation rotation) {
  final faceDetector = GoogleMlKit.vision.faceDetector(
    const FaceDetectorOptions(
      mode: FaceDetectorMode.accurate,
      enableLandmarks: true,
    ),
  );
  return faceDetector.processImage(
    InputImage.fromBytes(
      bytes: image.planes[0].bytes,
      inputImageData: InputImageData(
        inputImageFormat:
            InputImageFormatMethods.fromRawValue(image.format.raw)!,
        size: Size(image.width.toDouble(), image.height.toDouble()),
        imageRotation: rotation,
        planeData: image.planes.map(
          (Plane plane) {
            return InputImagePlaneMetadata(
              bytesPerRow: plane.bytesPerRow,
              height: plane.height,
              width: plane.width,
            );
          },
        ).toList(),
      ),
    ),
  );
}
When I call this function, I get the following error:
I'm unable to figure out where I'm doing something wrong.
Here's the _initializeCamera function (the detect function is called inside it):
void _initializeCamera() async {
  CameraDescription description = await getCamera(_direction);
  InputImageRotation rotation = rotationIntToImageRotation(
    description.sensorOrientation,
  );
  _camera = CameraController(description, ResolutionPreset.ultraHigh,
      enableAudio: false);
  await _camera!.initialize();
  await loadModel();
  //await Future.delayed(const Duration(milliseconds: 500));
  tempDir = await getApplicationDocumentsDirectory();
  String _embPath = tempDir!.path + '/emb.json';
  jsonFile = File(_embPath);
  if (jsonFile!.existsSync()) data = json.decode(jsonFile!.readAsStringSync());
  _camera!.startImageStream((CameraImage image) async {
    if (_camera != null) {
      if (_isDetecting) {
        return;
      }
      _isDetecting = true;
      String res;
      dynamic finalResult = Multimap<String, Face>();
      List<Face> faces = await detect(image, rotation); // <-- detect function
      if (faces.isEmpty) {
        _faceFound = false;
      } else {
        _faceFound = true;
      }
      Face _face;
      imglib.Image convertedImage = _convertCameraImage(image, _direction);
      for (_face in faces) {
        double x, y, w, h;
        x = (_face.boundingBox.left - 10);
        y = (_face.boundingBox.top - 10);
        w = (_face.boundingBox.width + 10);
        h = (_face.boundingBox.height + 10);
        imglib.Image croppedImage = imglib.copyCrop(
            convertedImage, x.round(), y.round(), w.round(), h.round());
        croppedImage = imglib.copyResizeCropSquare(croppedImage, 112);
        // int startTime = new DateTime.now().millisecondsSinceEpoch;
        res = _recog(croppedImage);
        // int endTime = new DateTime.now().millisecondsSinceEpoch;
        // print("Inference took ${endTime - startTime}ms");
        finalResult.add(res, _face);
      }
      setState(() {
        _scanResults = finalResult;
      });
      _isDetecting = false;
    }
  });
}
EDIT: I finally got the solution
The following "detect" function solved the problem for me:
Future<List<Face>> detect(CameraImage image, InputImageRotation rotation) {
  final faceDetector = GoogleMlKit.vision.faceDetector(
    const FaceDetectorOptions(
      mode: FaceDetectorMode.accurate,
      enableLandmarks: true,
    ),
  );
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();
  final Size imageSize =
      Size(image.width.toDouble(), image.height.toDouble());
  final inputImageFormat =
      InputImageFormatMethods.fromRawValue(image.format.raw) ??
          InputImageFormat.NV21;
  final planeData = image.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: plane.height,
        width: plane.width,
      );
    },
  ).toList();
  final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: rotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
  );
  return faceDetector.processImage(
    InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData),
  );
}
The problem is in this function
faceDetector.processImage(
  InputImage.fromBytes(
    bytes: image.planes[0].bytes,
    inputImageData: InputImageData(
      inputImageFormat:
          InputImageFormatMethods.fromRawValue(image.format.raw)!,
      size: Size(image.width.toDouble(), image.height.toDouble()),
      imageRotation: rotation,
      planeData: image.planes.map(
        (Plane plane) {
          return InputImagePlaneMetadata(
            bytesPerRow: plane.bytesPerRow,
            height: plane.height,
            width: plane.width,
          );
        },
      ).toList(),
    ),
  ),
)
The solution is, instead of taking the bytes of only the first plane (image.planes[0].bytes), to combine the bytes from all planes:
faceDetector.processImage(
  InputImage.fromBytes(
    bytes: Uint8List.fromList(
      image.planes.fold(
          <int>[],
          (List<int> previousValue, element) =>
              previousValue..addAll(element.bytes)),
    ),
    inputImageData: InputImageData(
      inputImageFormat:
          InputImageFormatMethods.fromRawValue(image.format.raw)!,
      size: Size(image.width.toDouble(), image.height.toDouble()),
      imageRotation: rotation,
      planeData: image.planes.map(
        (Plane plane) {
          return InputImagePlaneMetadata(
            bytesPerRow: plane.bytesPerRow,
            height: plane.height,
            width: plane.width,
          );
        },
      ).toList(),
    ),
  ),
)
I think this is because of the difference between how iOS and Android format CameraImage. On Android, CameraImage has multiple planes that all contain byte data, so we have to combine them all. I am not sure how it works on iOS.
The answer from #mumboFromAvnotaklu worked for me and should be accepted as the answer. Below I have updated the code to work with the latest versions of Google ML Kit.
if (image.planes.isNotEmpty) {
  // There are usually a few planes per image; potentially worth looking
  // at some sort of best-from-provided-planes solution.
  InputImageData iid = InputImageData(
    inputImageFormat: InputImageFormatValue.fromRawValue(image.format.raw)!,
    size: Size(image.width.toDouble(), image.height.toDouble()),
    imageRotation: InputImageRotation.rotation90deg,
    planeData: image.planes
        .map((Plane plane) => InputImagePlaneMetadata(
              bytesPerRow: plane.bytesPerRow,
              height: plane.height,
              width: plane.width,
            ))
        .toList(),
  );
  Uint8List bytes = Uint8List.fromList(
    image.planes.fold(
        <int>[],
        (List<int> previousValue, element) =>
            previousValue..addAll(element.bytes)),
  );
  return InputImage.fromBytes(
    bytes: bytes,
    inputImageData: iid,
  );
}
Even the OP's solution didn't work for me; finally, I found a different one.
First, change the dependency from google_ml_kit to the face-detection-specific library so that this works:
google_mlkit_face_detection: ^0.0.1
I am only including the code that needs to be changed.
InputImageData _inputImageData = InputImageData(
  imageRotation:
      _cameraService.cameraRotation ?? InputImageRotation.Rotation_0deg,
  inputImageFormat:
      InputImageFormatMethods.fromRawValue(image.format.raw) ??
          InputImageFormat.NV21,
  size:
      Size(image.planes[0].bytesPerRow.toDouble(), image.height.toDouble()),
  planeData: image.planes.map(
    (Plane plane) {
      return InputImagePlaneMetadata(
        bytesPerRow: plane.bytesPerRow,
        height: image.height,
        width: image.width,
      );
    },
  ).toList(),
);
final WriteBuffer allBytes = WriteBuffer();
for (Plane plane in image.planes) {
  allBytes.putUint8List(plane.bytes);
}
final bytes = allBytes.done().buffer.asUint8List();
InputImage _inputImage = InputImage.fromBytes(
  bytes: bytes,
  inputImageData: _inputImageData,
);
return faceDetector.processImage(_inputImage);
For more information link to the forum that gave me this solution Click here

Can I use async inside a Dart isolate? It's not working

I am using the ImageEditor package to merge different images; below is my code. It works perfectly fine without an isolate, but when I use it with an isolate, I get a null error.
Working code without an isolate:
startEditing() async {
  for (var i = 0; i < image1.length || i == 0; i++) {
    if (image1.isNotEmpty) {
      img1 = await File(image1[i].path).readAsBytes();
    }
    for (var i = 0; i < image2.length || i == 0; i++) {
      if (image2.isNotEmpty) {
        img2 = await File(image2[i].path).readAsBytes();
      }
      final ImageEditorOption optionGroup = ImageEditorOption();
      optionGroup.outputFormat = const OutputFormat.png(100);
      optionGroup.addOptions([
        MixImageOption(
          x: 0,
          y: 0,
          width: 1000,
          height: 1000,
          target: MemoryImageSource(img1),
        ),
        MixImageOption(
          x: 0,
          y: 0,
          width: 1000,
          height: 1000,
          target: MemoryImageSource(img2),
        ),
      ]);
      try {
        final Uint8List? result = await ImageEditor.editImage(
            image: mainImg, imageEditorOption: optionGroup);
        if (result == null) {
          image = null;
        } else {
          await saveImage(result, index);
          setState(() {
            image = MemoryImage(result);
            index++;
          });
        }
      } catch (e) {
        print(e);
      }
    }
  }
}
Code with an isolate (not working):
startEditing(SendPort sendPort) async {
  for (var i = 0; i < image1.length || i == 0; i++) {
    if (image1.isNotEmpty) {
      img1 = await File(image1[i].path).readAsBytes();
    }
    for (var i = 0; i < image2.length || i == 0; i++) {
      if (image2.isNotEmpty) {
        img2 = await File(image2[i].path).readAsBytes();
      }
      final ImageEditorOption optionGroup = ImageEditorOption();
      optionGroup.outputFormat = const OutputFormat.png(100);
      optionGroup.addOptions([
        MixImageOption(
          x: 0,
          y: 0,
          width: 1000,
          height: 1000,
          target: MemoryImageSource(img1),
        ),
        MixImageOption(
          x: 0,
          y: 0,
          width: 1000,
          height: 1000,
          target: MemoryImageSource(img2),
        ),
      ]);
      try {
        final Uint8List? result = await ImageEditor.editImage(
            image: mainImg, imageEditorOption: optionGroup);
        if (result == null) {
          image = null;
        } else {
          await saveImage(result, index);
          image = MemoryImage(result);
          index++;
          sendPort.send(image);
        }
      } catch (e) {
        print(e);
      }
    }
  }
}
saveImage method
Future<String> saveImage(Uint8List bytes, int i) async {
  final name = '${filenames[i]}';
  final result = await ImageGallerySaver.saveImage(bytes, name: name);
  print(result);
  return result['filePath'];
}
Receiving in the main isolate:
getImageas() async {
  ReceivePort receiverPort = ReceivePort();
  final isolate = await Isolate.spawn(startEditing, receiverPort.sendPort);
  receiverPort.listen((data) {
    print('Receiving: $data');
  });
}
I get this error:
I/flutter (21937): Null check operator used on a null value
on this line:
final Uint8List? result = await ImageEditor.editImage(
    image: mainImg, imageEditorOption: optionGroup);
I am sure that img1, img2, mainImg, image1, and image2 are not null; I checked a thousand times. I have also used Flutter's compute, with the same result.
Flutter plugins that call into native code (such as image_editor) do not work in isolates spawned by Isolate.spawn, because plugin platform channels are only registered for the root isolate.
The flutter_isolate package spawns isolates from native code for this reason. You should be able to use it to call image_editor in an isolate. (Disclaimer: I've never used flutter_isolate.)
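A sketch of what the spawn site could look like with flutter_isolate; I have not run this. The assumptions are that `FlutterIsolate.spawn` mirrors the `Isolate.spawn` signature (a top-level entry point plus one message) and that the `vm:entry-point` pragma is needed to keep the entry point from being tree-shaken in release builds:

```dart
import 'dart:isolate';
import 'package:flutter_isolate/flutter_isolate.dart';

// The entry point must be a top-level (or static) function.
@pragma('vm:entry-point')
void startEditing(SendPort sendPort) async {
  // ... the same editing loop as above, ending with sendPort.send(...)
}

Future<void> getImages() async {
  final receiverPort = ReceivePort();
  // FlutterIsolate.spawn registers the plugins on the new isolate,
  // so ImageEditor's platform-channel calls can succeed there.
  await FlutterIsolate.spawn(startEditing, receiverPort.sendPort);
  receiverPort.listen((data) {
    print('Receiving: $data');
  });
}
```

Note that whatever you `send` back must be transferable between isolates; sending the raw `Uint8List` result and constructing the `MemoryImage` on the main isolate may be safer than sending the image object itself.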