Passing a base64 string encoded image / byte image as an image for processing in Firebase ML Vision in Flutter

I want to OCR text from a base64 encoded image.
I know the image works because I can display it using
Image.memory(base64Decode(captchaEncodedImgFetched))
Now, the problem is I need to pass this image to Firebase ML Vision for processing.
The library firebase_ml_vision has an example for using an image from a file:
final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
However I have a base64 encoded image.
I tried the following
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromBytes(
    base64Decode(captchaEncodedImgFetched));
But it seems to need a FirebaseVisionImageMetadata() as an argument, and I know nothing about byte images.
This class needs a lot more arguments that I don't understand.
For example, it needs a size: Size(width, height) argument. Isn't the image supposed to have a size already? Why do I need to specify it again?
For now I set it to Size(200, 50). Then there are the other arguments, and I don't know what to pass to them, for example planeData and rawFormat.
Here are the docs for these:
https://pub.dev/documentation/firebase_ml_vision/latest/firebase_ml_vision/FirebaseVisionImageMetadata-class.html
https://pub.dev/documentation/firebase_ml_vision/latest/firebase_ml_vision/FirebaseVisionImagePlaneMetadata-class.html
https://pub.dev/documentation/firebase_ml_vision/latest/

FirebaseVisionImage.fromBytes needs a FirebaseVisionImageMetadata, which in turn needs FirebaseVisionImagePlaneMetadata. Example below:
// Below example uses metadata values based on an RGBA-encoded 1080x1080 image
final planeMetadata = FirebaseVisionImagePlaneMetadata(
  width: 1080,
  height: 1080,
  bytesPerRow: 1080 * 4, // 4 bytes per RGBA pixel
);
final imageMetadata = FirebaseVisionImageMetadata(
  size: Size(1080, 1080),
  planeData: [planeMetadata], // planeData takes a list, one entry per plane
  rawFormat: 'RGBA', // platform-specific raw format value
);
// 'decoded' is the Uint8List returned by base64Decode(captchaEncodedImgFetched)
final visionImage = FirebaseVisionImage.fromBytes(decoded, imageMetadata);
The simpler workaround, at a cost of performance, is to write the bytes to disk and read the image back from there:
final File imageFile = File('myimage.png');
imageFile.writeAsBytesSync(decoded); // decoded is already a List<int> (Uint8List)
final visionImage = FirebaseVisionImage.fromFile(imageFile);
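Either way, once you have a visionImage you can run the text recognizer over it to OCR the captcha. A minimal sketch using firebase_ml_vision's TextRecognizer:
final TextRecognizer textRecognizer = FirebaseVision.instance.textRecognizer();
final VisionText visionText = await textRecognizer.processImage(visionImage);
print(visionText.text); // the recognized text, if any
textRecognizer.close(); // release native resources when done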

Related

In Flutter, how to reduce the image data before saving it to a file?

I'm working on a screen recording app in Flutter. In order to keep the quality of the captured screen image, I have to scale the image as shown below. However, this makes the image data very large. How can I reduce the data size (scale back to the original size) before saving it to a file? For performance reasons, I want to work with raw RGBA instead of encoding it to PNG.
double dpr = ui.window.devicePixelRatio;
final boundary =
    key.currentContext!.findRenderObject() as RenderRepaintBoundary;
ui.Image image = boundary.toImageSync(pixelRatio: dpr);
var rawBytes = await image.toByteData(format: ui.ImageByteFormat.rawRgba);
Uint8List uData = rawBytes!.buffer.asUint8List();
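One way to scale the raw RGBA buffer back down without a PNG round trip is to wrap it in a package:image bitmap and resize that. A sketch, assuming package:image 3.x imported as im (image, uData and dpr are the variables from the code above):
// wrap the raw RGBA bytes in a bitmap (v3's default format is rgba)
final src = im.Image.fromBytes(image.width, image.height, uData);
// scale back down to the pre-devicePixelRatio size
final scaled = im.copyResize(src,
    width: (image.width / dpr).round(),
    height: (image.height / dpr).round());
final smallRgba = scaled.getBytes(); // still raw RGBA, just fewer pixels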

Dart: how to resize image height and width when it's in bytes

I have the following code, and I need a way to resize the height and width of the image while it is in ByteData. Here is my code:
Future<ByteData?> _createChartImage() async {
  var data = await _chartKey.currentState?.toImage(pixelRatio: 3.0);
  var byteData = await data!.toByteData(format: ImageByteFormat.png);
  return byteData;
}
You can't resize byte data, especially not byte data encoded in PNG format. So you have to first parse the byte data from PNG format back into a bitmap, then resize the bitmap, and then encode it back to PNG.
I would suggest taking a look at the image package.
Something like the following should work:
final image = decodePng(byteData!.buffer.asUint8List());
final resized = copyResize(image!, width: 120); // height scales to keep the aspect ratio
final resizedByteData = Uint8List.fromList(encodePng(resized)); // encode the resized image, not the original
return ByteData.sublistView(resizedByteData);

How can I convert back and forth between Blob and Image in Flutter Web?

Context
I use image_picker with Flutter web to allow users to select an image. This returns the URI of a local network Blob object, which I can display with Image.network(pickedFile.path). Where I get into trouble is when I want to start manipulating that image. First, I need to pull it off the network and into memory. When I'm done, I need to push it back up to a network-accessible Blob.
How do I create a Blob from an Image?
I don't mean the built-in Image widget. I mean an ImageLib.Image where ImageLib is the Dart image library. Why do I want to do this? Well, I have a web app in which the user selects an image, which is returned as a Blob. I bring this into memory, use ImageLib to crop and resize it, and then want to push it back up to a Blob URL. This is where my code is currently:
// BROKEN:
var png = ImageLib.encodePng(croppedImage);
var blob = html.Blob([base64Encode(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
The code does not throw an error until I try to display the image with Image(image: NetworkImage(url)). The error begins with:
The following Event$ object was thrown resolving an image frame:
Copying and pasting url into the browser reveals a black screen, which I take to be a 0x0 image. And so I come to my questions:
How do I properly encode the image and create a Blob?
Is there a better way to manipulate images in Flutter web besides using Blobs? I am basically only using it because that is what image_picker_for_web returns, and so it is the only method I know aside from possibly using a virtual filesystem, which I haven't explored too much.
How do I pull an image into memory?
While I'm at it, I might as well ask what the best practice is for bringing an image into memory. For mobile, I used image_picker to get the name of a file, and I would use package:image/image.dart as ImageLib to manipulate it:
// pickedfile.path is the name of a file
ImageLib.Image img = ImageLib.decodeImage(File(pickedfile.path).readAsBytesSync());
With web I don't have filesystem access, so I've been doing this instead:
// pickedfile.path is the URL of an HTML Blob
var response = await http.get(pickedfile.path);
ImageLib.Image img = ImageLib.decodeImage(response.bodyBytes);
This is considerably slower than the old way, probably because of the GET. Is this really the best (or only) way to get my image into memory?
The secret, as suggested by Brendan Duncan, was to use the browser's native decoding functionality:
// use the browser to decode
html.ImageElement myImageElement = html.ImageElement(src: imagePath);
await myImageElement.onLoad.first; // allow time for the browser to render
html.CanvasElement myCanvas = html.CanvasElement(width: myImageElement.width, height: myImageElement.height);
html.CanvasRenderingContext2D ctx = myCanvas.context2D;
// unscaled version:
// ctx.drawImage(myImageElement, 0, 0);
// html.ImageData rgbaData = ctx.getImageData(0, 0, myImageElement.width, myImageElement.height);
// resize to save time on encoding
int _MAXDIM = 500;
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = _MAXDIM;
  height = (_MAXDIM * myImageElement.height / myImageElement.width).round();
} else {
  height = _MAXDIM;
  width = (_MAXDIM * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
He proposed a similar trick for encoding, but for my use case it was sufficient to do it with Dart:
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = 800;
  height = (800 * myImageElement.height / myImageElement.width).round();
} else {
  height = 800;
  width = (800 * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
Note that in both cases I resize the image first to reduce the size.
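For the encoding direction, the BROKEN snippet at the top can be repaired by handing the Blob the raw PNG bytes instead of a base64 string. A sketch, again assuming package:image 3.x and dart:typed_data:
var png = ImageLib.encodePng(myImage);
var blob = html.Blob([Uint8List.fromList(png)], 'image/png');
var url = html.Url.createObjectUrl(blob); // displayable via Image(image: NetworkImage(url))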

Reading / converting camera stream images in Flutter

On my iOS device, I'm accessing the camera through a CameraController (provided by the camera package) with controller.startImageStream((CameraImage img) {...})
The data coming out of the camera is in a bgra8888 format on my phone, but I've read that it's in yuv420 on android devices. To convert the image stream data to a usable, consistent format, I'm using:
import 'dart:typed_data';
import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart'; // for WriteBuffer
import 'package:image/image.dart' as im;

Uint8List concatenatePlanes(List<Plane> planes) {
  final WriteBuffer allBytes = WriteBuffer();
  planes.forEach((Plane plane) => allBytes.putUint8List(plane.bytes));
  return allBytes.done().buffer.asUint8List();
}

List<CameraDescription> cameras = await availableCameras();
CameraController controller = CameraController(cameras[0], ResolutionPreset.medium);
controller.startImageStream((CameraImage img) {
  print(img.format.group); // returns ImageFormatGroup.bgra8888
  List<int> imgData = concatenatePlanes(img.planes);
  im.Image image = im.decodeImage(imgData);
});
The imgData variable is full of data streaming off the camera, but the converted image returned from decodeImage is null.
I had read on other posts that the image package would be up to the task of decoding bgra8888/yuv420 images (https://stackoverflow.com/a/57635827/479947) but I'm not seeing support in its formats.dart source (https://github.com/brendan-duncan/image/blob/master/lib/src/formats/formats.dart).
The target Image format is defined as:
An image buffer where pixels are encoded into 32-bit unsigned ints
(Uint32). Pixels are stored in 32-bit unsigned integers in #AARRGGBB
format. This is to be consistent with the Flutter image data.
How would I get my image stream image in bgra8888/yuv420 converted into the desired Image format?
Have you checked this? convertImage
Converting from bgra8888 to Image is fairly fast, but converting yuv420 to Image takes much longer, so if you have performance problems I can share a bit of my experience. By the way, if you try to work around the performance problem with an Isolate, you will run into memory issues.
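For the bgra8888 case the conversion is cheap, because the frame is a single interleaved plane. A sketch, assuming package:image 3.x (whose Format enum includes bgra):
im.Image convertBGRA8888(CameraImage img) {
  // bgra8888 frames carry all pixel data in planes[0]
  return im.Image.fromBytes(
    img.width,
    img.height,
    img.planes[0].bytes,
    format: im.Format.bgra,
  );
}
yuv420 needs a real per-pixel conversion of its three planes, which is the slow path mentioned above.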

Print byte[] to PDF using PDFBox

I have a question about writing image to PDF using PDFBox.
My requirement is very simple: I get an image from a web service using Spring RestTemplate and store it in a byte[] variable, but I need to draw the image into a PDF document.
This is how I get the image:
final byte[] image = this.restTemplate.getForObject(
    this.imagesUrl + cableReference + this.format,
    byte[].class
);
I know that the following is provided: JPEGFactory.createFromStream() for JPEG format, CCITTFactory.createFromFile() for TIFF images, and LosslessFactory.createFromImage() if starting with buffered images. But I don't know which to use, as the only information I have about those images is that they are in THUMBNAIL format, and I don't know how to convert from byte[] to those formats.
Thanks a lot for any help.
(This applies to version 2.0, not to 1.8)
I don't know what you mean with THUMBNAIL format, but give this a try:
final byte[] image = ... // your code
ByteArrayInputStream bais = new ByteArrayInputStream(image);
BufferedImage bim = ImageIO.read(bais);
PDImageXObject pdImage = LosslessFactory.createFromImage(doc, bim);
It might be possible to create a more advanced solution by using
PDImageXObject.createFromFileByContent()
but that one uses a file and not a stream, so it would be slower (though it would produce the best possible image type).
To add this image to your PDF, use this code:
PDDocument doc = new PDDocument();
try
{
    PDPage page = new PDPage();
    doc.addPage(page);
    PDPageContentStream contents = new PDPageContentStream(doc, page);
    // draw the image at full size at (x=20, y=20)
    contents.drawImage(pdImage, 20, 20);
    // to draw the image at half size at (x=20, y=20) use
    // contents.drawImage(pdImage, 20, 20, pdImage.getWidth() / 2, pdImage.getHeight() / 2);
    contents.close();
    doc.save(pdfPath);
}
finally
{
    doc.close();
}