How can I convert back and forth between Blob and Image in Flutter Web?

Context
I use image_picker with Flutter web to allow users to select an image. This returns the URL of a local Blob object, which I can display with Image.network(pickedFile.path). Where I get into trouble is when I want to start manipulating that image. First, I need to pull it off the network and into memory. When I'm done, I need to push it back up to a network-accessible Blob.
How do I create a Blob from an Image?
I don't mean the built-in Image widget. I mean an ImageLib.Image where ImageLib is the Dart image library. Why do I want to do this? Well, I have a web app in which the user selects an image, which is returned as a Blob. I bring this into memory, use ImageLib to crop and resize it, and then want to push it back up to a Blob URL. This is where my code is currently:
// BROKEN:
var png = ImageLib.encodePng(croppedImage);
var blob = html.Blob([base64Encode(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
The code does not throw an error until I try to display the image with Image(image: NetworkImage(url)). The error begins with:
The following Event$ object was thrown resolving an image frame:
Copying and pasting url into the browser reveals a black screen, which I take to be a 0x0 image. And so I come to my questions:
How do I properly encode the image and create a Blob?
Is there a better way to manipulate images in Flutter web besides using Blobs? I am basically only using it because that is what image_picker_for_web returns, and so it is the only method I know aside from possibly using a virtual filesystem, which I haven't explored too much.
How do I pull an image into memory?
While I'm at it, I might as well ask what the best practice is for bringing an image into memory. For mobile, I used image_picker to get the name of a file, and I would use package:image (imported as ImageLib) to manipulate it:
// pickedfile.path is the name of a file
ImageLib.Image img = ImageLib.decodeImage(File(pickedfile.path).readAsBytesSync());
With web I don't have filesystem access, so I've been doing this instead:
// pickedfile.path is the URL of an HTML Blob
var response = await http.get(pickedfile.path);
ImageLib.Image img = ImageLib.decodeImage(response.bodyBytes);
This is considerably slower than the old way, probably because of the GET. Is this really the best (or only) way to get my image into memory?
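(A note for later readers: recent versions of image_picker also expose readAsBytes() on the returned PickedFile, which works on web and avoids issuing the GET by hand. A minimal sketch under that assumption:)
// read the picked file's bytes directly, no manual http.get needed
final bytes = await pickedfile.readAsBytes();
ImageLib.Image img = ImageLib.decodeImage(bytes);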

The secret, as suggested by Brendan Duncan, was to use the browser's native decoding functionality:
// use the browser to decode
html.ImageElement myImageElement = html.ImageElement(src: imagePath);
await myImageElement.onLoad.first; // allow time for the browser to render
html.CanvasElement myCanvas = html.CanvasElement(width: myImageElement.width, height: myImageElement.height);
html.CanvasRenderingContext2D ctx = myCanvas.context2D;
// full-size alternative, without the resize below:
// ctx.drawImage(myImageElement, 0, 0);
// html.ImageData rgbaData = ctx.getImageData(0, 0, myImageElement.width, myImageElement.height);
// resize to save time on encoding
int _MAXDIM = 500;
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = _MAXDIM;
  height = (_MAXDIM * myImageElement.height / myImageElement.width).round();
} else {
  height = _MAXDIM;
  width = (_MAXDIM * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
He proposed a similar trick for encoding, but for my use case it was sufficient to do it with Dart; the resizing pattern is the same, only with a larger maximum dimension:
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = 800;
  height = (800 * myImageElement.height / myImageElement.width).round();
} else {
  height = 800;
  width = (800 * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
Note that in both cases I resize the image first to reduce the size.
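For the return trip (ImageLib.Image back to a Blob URL), the likely culprit in my broken snippet above is the base64Encode call: html.Blob stores each part verbatim, so it wants the raw PNG bytes, not base64 text. A minimal sketch under that assumption, reusing myImage from above:
import 'dart:typed_data';
var png = ImageLib.encodePng(myImage);
// pass a typed array: a plain List<int> part would be stringified by the browser
var blob = html.Blob([Uint8List.fromList(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
// url can now be displayed with Image.network(url)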

Related

How to capture the window where a Flutter widget's area is located

Hi, I'm making my own Flutter desktop program, but I've run into trouble writing a function that captures the window area where a Flutter widget is located.
First, I tried making the program's background transparent and capturing the widget (using flutter_acrylic and RenderRepaintBoundary). [screenshot: transparent window]
When I captured the widget, the image didn't include the transparent window, only the transparent color, because the capture algorithm sees only the widget. I'm blocked on this problem. [screenshot: widget capture]
Anyone who has an idea for this problem, please give me your wisdom...
Update: Now I'm trying to get information about the win32 API. Any ideas on the win32 capture API would be much appreciated.
Update: I did it! Anyone who wants to know, see these links: https://learn.microsoft.com/ko-kr/windows/win32/gdi/capturing-an-image and https://pub.dev/packages/win32 [screenshot: output]
For the win32 capture API, you can use the GDI BitBlt function. Here is an MSDN sample: Capturing an Image.
You need to find the Flutter desktop program window's handle with FindWindow, then get its rectangle, and capture the window from its DC.
// requires <windows.h> and <dwmapi.h> (link against dwmapi.lib)
HDC hWndDC = GetWindowDC(hwnd);
RECT capture_rect{ 0, 0, 0, 0 };
RECT wnd_rect;
RECT real_rect;
GetWindowRect(hwnd, &wnd_rect);
// DWMWA_EXTENDED_FRAME_BOUNDS excludes the invisible resize borders
DwmGetWindowAttribute(hwnd, DWMWINDOWATTRIBUTE::DWMWA_EXTENDED_FRAME_BOUNDS, &real_rect, sizeof(RECT));
int offset_left = real_rect.left - wnd_rect.left;
int offset_top = real_rect.top - wnd_rect.top;
capture_rect = RECT{ offset_left, offset_top,
                     real_rect.right - real_rect.left + offset_left,
                     real_rect.bottom - real_rect.top + offset_top };
// capture_rect ?? wnd_rect (you can calculate capture_rect based on the size of your window)
int width = capture_rect.right - capture_rect.left;
int height = capture_rect.bottom - capture_rect.top;
HDC hMemDC = CreateCompatibleDC(hWndDC);
HBITMAP hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
HGDIOBJ hOld = SelectObject(hMemDC, hBitmap);
bool ok = BitBlt(hMemDC, 0, 0, width, height, hWndDC, capture_rect.left, capture_rect.top, SRCCOPY);
// ... read the pixels out of hBitmap (e.g. with GetDIBits) before cleanup ...
SelectObject(hMemDC, hOld);
DeleteObject(hBitmap);
DeleteDC(hMemDC);
ReleaseDC(hwnd, hWndDC); // a DC obtained with GetWindowDC must be released, not deleted
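Since the asker found https://pub.dev/packages/win32, the same GDI sequence can also be driven from Dart. A minimal sketch under that assumption (the window title is hypothetical, and the DWM frame-bounds adjustment above is skipped for brevity):
import 'dart:ffi';
import 'package:ffi/ffi.dart';
import 'package:win32/win32.dart';

void captureFlutterWindow() {
  // find the Flutter window by its title bar text (hypothetical title)
  final title = 'my_flutter_app'.toNativeUtf16();
  final hwnd = FindWindow(nullptr, title);
  malloc.free(title);
  if (hwnd == 0) return; // window not found

  final rect = calloc<RECT>();
  GetWindowRect(hwnd, rect);
  final width = rect.ref.right - rect.ref.left;
  final height = rect.ref.bottom - rect.ref.top;

  // same GDI sequence as the C++ answer above
  final hWndDC = GetWindowDC(hwnd);
  final hMemDC = CreateCompatibleDC(hWndDC);
  final hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
  final hOld = SelectObject(hMemDC, hBitmap);
  BitBlt(hMemDC, 0, 0, width, height, hWndDC, 0, 0, SRCCOPY);
  // ... read pixels out of hBitmap (e.g. with GetDIBits) before cleanup ...
  SelectObject(hMemDC, hOld);
  DeleteObject(hBitmap);
  DeleteDC(hMemDC);
  ReleaseDC(hwnd, hWndDC);
  calloc.free(rect);
}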

Passing a base64-encoded / byte image as an image for processing in Firebase ML Vision in Flutter

I want to OCR text from a base64 encoded image.
I know the image works because I can display it using
Image.memory(base64Decode(captchaEncodedImgFetched))
Now, the problem is I need to pass this image to Firebase ML Vision for processing.
The library firebase_ml_vision has an example for using an image from a file:
final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
However, I have a base64 encoded image.
I tried the following:
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromBytes(
    base64Decode(captchaEncodedImgFetched));
But it seems to need a FirebaseVisionImageMetadata as an argument, and I know nothing about byte images.
This class needs a lot more arguments which I don't understand.
For example, it needs a size: Size(width, height) argument. Isn't the image supposed to have a size already? Why do I need to specify it again?
For now I set it to Size(200, 50). Then there are the other arguments, and I don't know what to pass to them, for example planeData and rawFormat.
Here are the docs for these:
https://pub.dev/documentation/firebase_ml_vision/latest/firebase_ml_vision/FirebaseVisionImageMetadata-class.html
https://pub.dev/documentation/firebase_ml_vision/latest/firebase_ml_vision/FirebaseVisionImagePlaneMetadata-class.html
https://pub.dev/documentation/firebase_ml_vision/latest/
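(On the size question: the bytes do contain the image's dimensions, but fromBytes doesn't parse them for you. A possible sketch, assuming package:image is available, to read the real size instead of guessing:)
import 'dart:convert';
import 'dart:ui';
import 'package:image/image.dart' as ImageLib;

final bytes = base64Decode(captchaEncodedImgFetched);
final decoded = ImageLib.decodeImage(bytes); // null if the bytes aren't a known format
final realSize = Size(decoded.width.toDouble(), decoded.height.toDouble());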
FirebaseVisionImage.fromBytes needs a FirebaseVisionImageMetadata, which in turn needs a FirebaseVisionImagePlaneMetadata. Example below:
// Below example uses metadata values based on an RGBA-encoded 1080x1080 image
final decoded = base64Decode(captchaEncodedImgFetched);
final planeMetadata = FirebaseVisionImagePlaneMetadata(
  width: 1080,
  height: 1080,
  bytesPerRow: 1080 * 4,
);
final imageMetadata = FirebaseVisionImageMetadata(
  size: Size(1080, 1080),
  planeData: [planeMetadata], // planeData takes a list of plane metadata
  rawFormat: 'RGBA',
);
final visionImage = FirebaseVisionImage.fromBytes(decoded, imageMetadata);
The simpler workaround, though at a cost of performance, is to write the bytes to disk and read the image from there:
File imageFile = File('myimage.png');
imageFile.writeAsBytesSync(decoded);
final visionImage = FirebaseVisionImage.fromFile(imageFile);
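For completeness, a hedged end-to-end sketch of that workaround applied to the original OCR goal (the temp-directory path via path_provider and the ocrFromBase64 wrapper are assumptions, not part of the original answer):
import 'dart:convert';
import 'dart:io';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:path_provider/path_provider.dart';

Future<String> ocrFromBase64(String captchaEncodedImgFetched) async {
  // decode the base64 payload and park it in a temporary file
  final bytes = base64Decode(captchaEncodedImgFetched);
  final dir = await getTemporaryDirectory();
  final file = File('${dir.path}/captcha.png');
  await file.writeAsBytes(bytes);

  // run text recognition on the file-backed image
  final visionImage = FirebaseVisionImage.fromFile(file);
  final recognizer = FirebaseVision.instance.textRecognizer();
  final visionText = await recognizer.processImage(visionImage);
  recognizer.close();
  return visionText.text;
}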

How to insert image in mupdf library

I am using MuPDF to sign a PDF.
I succeeded in signing an annotation in the PDF with the function "pdf_update_ink_appearance".
Now I'm trying to insert an image into the PDF.
I added the code below to insert an image:
image = fz_new_image_from_file(ctx, "/storage/emulated/0/a.jpg");
fz_fill_image(ctx, dev, image, &page_ctm, 1.0f);
But the image doesn't show up in the PDF.
I tried another method, but the image still doesn't show up in the PDF:
How to add a transparent image to PDF with mupdf using SMask?
Can anyone help with this situation?
Thanks.
I am one of PyMuPDF's authors (a Python binding for MuPDF) and have solved this exact task. You might have a look at the source code on GitHub.
The basic process is:
1. Create an image and then a pixmap from the file.
2. If this pixmap has alpha (i.e. transparency), create another pixmap containing the alpha bytes and link the two as shown below.
3. After this, add the main pixmap to the PDF.
4. Finally, add an XObject referencing it to the page's /Resources, and invoke the XObject with a "Do" operator in the page's /Contents object.
// excerpt; declarations of image, pix, pm, mask, zimg, seps, s, t, i, j omitted
if (filename)
{
    image = fz_new_image_from_file(gctx, filename);
    pix = fz_get_pixmap_from_image(gctx, image, NULL, NULL, 0, 0);
    if (pix->alpha == 1)
    {
        // copy the alpha channel into its own pixmap
        j = pix->n - 1;
        pm = fz_new_pixmap(gctx, NULL, pix->w, pix->h, seps, 0);
        s = pix->samples;
        t = pm->samples;
        for (i = 0; i < pix->w * pix->h; i++)
            t[i] = s[j + i * pix->n];
        // use it as the soft mask (SMask) of the base image
        mask = fz_new_image_from_pixmap(gctx, pm, NULL);
        zimg = fz_new_image_from_pixmap(gctx, pix, mask);
        fz_drop_image(gctx, image);
        image = zimg;
        zimg = NULL;
    }
}
Do have a look at the file fitz.i and search for the function insertImage. It's a SWIG interface file, but that part is plain C interfacing with MuPDF.

Print byte[] to PDF using PDFBox

I have a question about writing an image to a PDF using PDFBox.
My requirement is very simple: I get an image from a web service using Spring RestTemplate and store it in a byte[] variable, but I need to draw the image into a PDF document:
final byte[] image = this.restTemplate.getForObject(
    this.imagesUrl + cableReference + this.format,
    byte[].class
);
I know that JPEGFactory.createFromStream() is provided for JPEG format, CCITTFactory.createFromFile() for TIFF images, and LosslessFactory.createFromImage() if starting with buffered images. But I don't know which to use, as the only information I have about those images is that they are in THUMBNAIL format, and I don't know how to convert from byte[] to those formats.
Thanks a lot for any help.
(This applies to version 2.0, not to 1.8)
I don't know what you mean by THUMBNAIL format, but give this a try:
final byte[] image = ... // your code
ByteArrayInputStream bais = new ByteArrayInputStream(image);
BufferedImage bim = ImageIO.read(bais); // null if the bytes aren't a readable image
PDImageXObject pdImage = LosslessFactory.createFromImage(doc, bim);
It might be possible to create a more advanced solution by using
PDImageXObject.createFromFileByContent()
but this one uses a file and not a stream, so it would be slower (but produce the best possible image type).
To add this image to your PDF, use this code:
PDDocument doc = new PDDocument();
try
{
PDPage page = new PDPage();
doc.addPage(page);
PDPageContentStream contents = new PDPageContentStream(doc, page);
// draw the image at full size at (x=20, y=20)
contents.drawImage(pdImage, 20, 20);
// to draw the image at half size at (x=20, y=20) use
// contents.drawImage(pdImage, 20, 20, pdImage.getWidth() / 2, pdImage.getHeight() / 2);
contents.close();
doc.save(pdfPath);
}
finally
{
doc.close();
}

Cropping image with CIImage.ImageByCroppingToRect or CICrop in MonoTouch

I have a very large image that I would like to show a 200x200px thumbnail of (showing a portion of the image, not a stretched version of the entire image). To achieve this I am looking into using CIImage.ImageByCroppingToRect or CICrop, but I am not able to get anything useful. Either the result is just black (I assume what I see is the black portion of the cropped image) or I get a SIGABRT ("Cannot handle a (6000 x 3000) sized texture with the given GLES context!").
There is a ObjC sample in this thread:
Cropping CIImage with CICrop isn't working properly
But I haven't managed to translate it into C# and get it working properly.
Here's a MonoTouch port of the answer from the post you mentioned:
// crop a 300x300 region starting at (150, 150)
var croppedImage = CIImage.FromCGImage (inputCGImage).ImageByCroppingToRect (new RectangleF (150, 150, 300, 300));
// translate the cropped content back toward the origin (CoreImage uses a flipped Y axis)
var transformFilter = new CIAffineTransform ();
var affineTransform = CGAffineTransform.MakeTranslation (-150, 150);
transformFilter.Transform = affineTransform;
transformFilter.Image = croppedImage;
CIImage transformedImage = transformFilter.OutputImage;