How to resize an image in Flutter with conditional statements - flutter

I managed to resize my images using the image package.
What I want is a conditional statement that reduces the size of the image if either dimension is greater than a max (1920) and doesn't resize it otherwise.
This is my code:
Widget build(BuildContext context) {
  void uploadImage(File result) async {
    var image = decodeImage(result.readAsBytesSync());
    var thumbnail = copyResize(image, width: 1920);
    var encoded = base64.encode(encodeJpg(thumbnail));
    var userAccount = await context.api.customersApi.accounts.uploadProfileImage(
        defaultOrganisationId, context.blocs.auth.state.user.id, ImageUploadRequest(encoded));
    // update the current user
    context.blocs.auth.updateUserAccount(userAccount.data);
  }
  // ...
}

You can check the width and height of the image:
if (image.width > 1920 || image.height > 1920) {
  thumbnail = copyResize(image, width: 1920, height: 1920);
} else {
  thumbnail = image;
}
I'm assuming you care mostly about the file size (roughly the area) of the image, though you should have no problem changing the check to look at only one dimension.
I might also note that a 1920-pixel-wide image no longer qualifies as a thumbnail in my opinion; that's as wide as my entire screen.
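One caveat: passing both width and height to copyResize forces an exact 1920x1920 output and ignores the aspect ratio. If you only want to cap the longer side, fix one dimension and let copyResize derive the other. A minimal sketch, assuming package:image 3.x (resizeIfTooLarge and maxDim are illustrative names, not from the question):
import 'dart:io';
import 'package:image/image.dart';

// Resize only when either dimension exceeds maxDim; otherwise return the
// image untouched. Fixing a single dimension lets copyResize preserve the
// aspect ratio.
Image resizeIfTooLarge(File file, {int maxDim = 1920}) {
  final image = decodeImage(file.readAsBytesSync());
  if (image == null) {
    throw const FormatException('could not decode image');
  }
  if (image.width <= maxDim && image.height <= maxDim) {
    return image; // already small enough, leave as-is
  }
  return image.width >= image.height
      ? copyResize(image, width: maxDim)
      : copyResize(image, height: maxDim);
}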

From the docs, I think you can use both the height and width properties, i.e. image.height and image.width.
Hope that works!

Related

Cropping an image in Flutter

So I've been trying really hard to crop an image according to my needs in Flutter.
Problem statement:
I have a screen, and on that screen I show a frame while the device camera runs in the background. Whenever the user takes a photo, only the area of the image inside the frame should be kept and the rest should be cropped away.
What have I done so far?
Added the image 3.1.3 package.
Wrote code to fetch the x,y coordinates of my frame.
Used the calculated x,y coordinates and the copyCrop method from the image package to crop the captured image.
Now the problem is that I do not know how copyCrop works and the code right now does not give me the expected results.
final GlobalKey _key = GlobalKey();

void _getOffset(GlobalKey key) {
  RenderBox? box = key.currentContext?.findRenderObject() as RenderBox?;
  Offset? position = box?.localToGlobal(Offset.zero);
  if (position != null) {
    setState(() {
      _x = position.dx;
      _y = position.dy;
    });
  }
}
I assign this _key to my Image.file(srcToFrameImage) and the function above yields 10, 289.125
Here 10 is the offset from x and 289.125 is the offset from y. I used this tutorial for the same.
Code to crop my image using the Image package:
var bytes = await File(pictureFile!.path).readAsBytes();
img.Image src = img.decodeImage(bytes)!;
img.Image destImage = img.copyCrop(
    src, _x!.toInt(), _y!.toInt(), src.width, src.height);
var jpg = img.encodeJpg(destImage);
await File(pictureFile!.path).writeAsBytes(jpg);
bloc.addFrontImage(File(pictureFile!.path));
Now, can anyone tell me how I can do this effectively? Right now it does crop my image, but not the way I want. It would be great if someone could explain how copyCrop works and what the different parameters we pass to it mean.
Any help would be appreciated.
Edit:
As you can see, I only want the part of the image inside the frame to be kept after capture; the rest should be cropped off.
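No answer was posted in this thread, but two things are worth spelling out. First, in image 3.x the positional parameters of copyCrop(src, x, y, w, h) are the top-left corner and the size of the rectangle to keep, all in image pixels, so passing src.width and src.height keeps a full-size rectangle starting at (x, y), which is why the crop looks wrong. Second, the offsets from localToGlobal are logical pixels of the on-screen preview, while the decoded photo is usually much larger, so they must be scaled first. A hedged sketch (the frame and preview parameters are hypothetical, not from the question):
import 'dart:io';
import 'package:image/image.dart' as img;

// Crop the captured photo to the on-screen frame. frameX/frameY/frameW/frameH
// and previewW/previewH are in logical pixels; we scale them into image
// pixels before calling copyCrop (image 3.x positional API).
Future<File> cropToFrame(
    File pictureFile,
    double frameX, double frameY, double frameW, double frameH,
    double previewW, double previewH) async {
  final src = img.decodeImage(await pictureFile.readAsBytes())!;
  final scaleX = src.width / previewW;
  final scaleY = src.height / previewH;
  final cropped = img.copyCrop(
    src,
    (frameX * scaleX).round(),
    (frameY * scaleY).round(),
    (frameW * scaleX).round(),
    (frameH * scaleY).round(),
  );
  return pictureFile.writeAsBytes(img.encodeJpg(cropped));
}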

Dynamically resizing images using MediaQuery in Flutter

I have two pictures of different dimensions that are meant to be rendered according to the screen size of the phone the app is running on, which I suppose can be achieved using MediaQuery in the following way:
if (MediaQuery.of(context).size.height < initialSize) {
  return Image.asset('Image meant for smaller screens');
} else {
  return Image.asset('Image meant for bigger screens');
}
What I don't understand is what initialSize value I should be using for the check. Hope I have been able to make my question clear!
initialSize is the breakpoint that differentiates the small-image and large-image cases based on screen height.
Say that if the device's height is less than 200px we want to show image X, and otherwise image Y.
Image getImage() {
  if (MediaQuery.of(context).size.height < 200) {
    return Image.asset('Image meant for smaller screens');
  } else {
    return Image.asset('Image meant for bigger screens');
  }
}
You need to choose how you want to define small and large screens; here 200 plays the role of initialSize.
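There is no single right initialSize; it is just a breakpoint you pick. One common convention (an assumption on my part, not something from the question) is to branch on the shortest side, treating anything at or above 600 logical pixels as a tablet-class screen:
import 'package:flutter/material.dart';

// 600 logical pixels on the shortest side is a common tablet breakpoint;
// the asset paths here are hypothetical.
Image getImage(BuildContext context) {
  final shortestSide = MediaQuery.of(context).size.shortestSide;
  return shortestSide < 600
      ? Image.asset('assets/small_screen.png')
      : Image.asset('assets/large_screen.png');
}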

Angular9 Signature Pad issue: Signature drawn offset from pen touch

I implemented a signature pad using https://github.com/ramsatt/Angular9SignaturePad/tree/master/src/app/_componets/signature-pad and it works fine on smaller devices, but on an iPad or bigger devices (7" and up) it doesn't work properly.
When drawing on the screen, the resulting line is offset from where the user touched (the signature doesn't appear directly under the pen as the user draws).
How can I fix this?
So I fixed it by adding the code below and calling it in ngOnInit:
resizeCanvas() {
  var width = this.signaturePadElement.nativeElement.width;
  var height = this.signaturePadElement.nativeElement.height;
  var ratio = Math.max(window.devicePixelRatio || 1, 1);
  if (ratio <= 2) {
    this.signaturePadElement.nativeElement.width = width * ratio;
    this.signaturePadElement.nativeElement.height = height * ratio;
    this.signaturePadElement.nativeElement
      .getContext("2d")
      .scale(ratio, ratio);
  }
}
then do
ngOnInit() {
  this.resizeCanvas();
}
this.signaturePadElement is your element obtained via @ViewChild().

How can I convert back and forth between Blob and Image in Flutter Web?

Context
I use image_picker with Flutter web to allow users to select an image. This returns the URI of a local network Blob object, which I can display with Image.network(pickedFile.path). Where I get into trouble is when I want to start manipulating that image. First, I need to pull it off the network and into memory. When I'm done, I need to push it back up to a network-accessible Blob.
How do I create a Blob from an Image?
I don't mean the built-in Image widget. I mean an ImageLib.Image where ImageLib is the Dart image library. Why do I want to do this? Well, I have a web app in which the user selects an image, which is returned as a Blob. I bring this into memory, use ImageLib to crop and resize it, and then want to push it back up to a Blob URL. This is where my code is currently:
// BROKEN:
var png = ImageLib.encodePng(croppedImage);
var blob = html.Blob([base64Encode(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
The code does not throw an error until I try to display the image with Image(image: NetworkImage(url)). The error begins with:
The following Event$ object was thrown resolving an image frame:
Copying and pasting url into the browser reveals a black screen, which I take to be a 0x0 image. And so I come to my questions:
How do I properly encode the image and create a Blob?
Is there a better way to manipulate images in Flutter web besides using Blobs? I am basically only using it because that is what image_picker_for_web returns, and so it is the only method I know aside from possibly using a virtual filesystem, which I haven't explored too much.
How do I pull an image into memory?
While I'm at it, I might as well ask what the best practice is for bringing an image into memory. For mobile, I used image_picker to get the name of a file, and I would use package:image/image.dart as ImageLib to manipulate it:
// pickedfile.path is the name of a file
ImageLib.Image img = ImageLib.decodeImage(File(pickedfile.path).readAsBytesSync());
With web I don't have filesystem access, so I've been doing this instead:
// pickedfile.path is the URL of an HTML Blob
var response = await http.get(pickedfile.path);
ImageLib.Image img = ImageLib.decodeImage(response.bodyBytes);
This is considerably slower than the old way, probably because of the GET. Is this really the best (or only) way to get my image into memory?
The secret, as suggested by Brendan Duncan, was to use the browser's native decoding functionality:
// use the browser to decode
html.ImageElement myImageElement = html.ImageElement(src: imagePath);
await myImageElement.onLoad.first; // allow time for browser to render
html.CanvasElement myCanvas = html.CanvasElement(
    width: myImageElement.width, height: myImageElement.height);
html.CanvasRenderingContext2D ctx = myCanvas.context2D;
//ctx.drawImage(myImageElement, 0, 0);
//html.ImageData rgbaData = ctx.getImageData(0, 0, myImageElement.width, myImageElement.height);

// resize to save time on encoding
int _MAXDIM = 500;
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = _MAXDIM;
  height = (_MAXDIM * myImageElement.height / myImageElement.width).round();
} else {
  height = _MAXDIM;
  width = (_MAXDIM * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
He proposed a similar trick for encoding, but for my use case it was sufficient to do it with Dart:
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = 800;
  height = (800 * myImageElement.height / myImageElement.width).round();
} else {
  height = 800;
  width = (800 * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
Note that in both cases I resize the image first to reduce the size.
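As for the original question of creating the Blob: the likely culprit in the broken snippet is the base64Encode call. A Blob wants the raw encoded bytes; base64 is only needed for data: URLs. A minimal sketch under that assumption (same html and ImageLib prefixes as above; imageToBlobUrl is a name I made up):
import 'dart:html' as html;
import 'dart:typed_data';
import 'package:image/image.dart' as ImageLib;

// Encode to PNG and hand the raw bytes straight to the Blob.
String imageToBlobUrl(ImageLib.Image image) {
  final png = Uint8List.fromList(ImageLib.encodePng(image));
  final blob = html.Blob([png], 'image/png');
  return html.Url.createObjectUrl(blob); // display with Image.network(url)
}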

How to get the width/height of a Canvas?

I'd like to have a canvas on my HTML page with width = 100%. I'd like to then get the width in pixels of the canvas at runtime. Something like:
Canvas c = Canvas.createIfSupported();
c.setWidth("100%");
c.setHeight("100%");
// let's draw a rectangle to fill the whole canvas:
c.getContext2d().rect(0, 0, ?, ?); // <-- what's its actual width/height?
Thanks
It's (surprisingly)
c.getContext2d().rect(0, 0, 299, 149);
... no matter what the actual pixel size of the canvas is. The reason is that 300x150 is the default coordinate space size; see http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#the-canvas-element
You can change this by using canvas.setCoordinateSpaceWidth() and canvas.setCoordinateSpaceHeight() to anything you want.
Often, people want to draw independently of the actual size of the canvas, so they just set the coordinate space to something like 100x100, and the drawn image gets scaled automatically.
However, you may want to find out about the actual canvas pixel size:
Scheduler.get().scheduleDeferred(new ScheduledCommand() {
  @Override
  public void execute() {
    final int clientWidth = canvas.getElement().getClientWidth();
    final int clientHeight = canvas.getElement().getClientHeight();
    setupCoordinateSpace(canvas, clientWidth, clientHeight);
    drawMyImage(canvas);
  }
});
Note that this must be done in scheduleDeferred, because the client size will only be known after the browser has had a chance to perform the layout.
You can use this information to adjust the aspect ratio to match the canvas (which I would recommend, because otherwise you get distorted images):
void setupCoordinateSpace(Canvas canvas, int clientWidth, int clientHeight) {
  final double aspect = (double) clientWidth / (double) clientHeight;
  canvas.setCoordinateSpaceHeight((int) (myCoordinateSpaceWidth / aspect));
}
Or, alternatively, you can also set the coordinate space to match the pixel size, so you can perform pixel-exact drawing on an HTML5 canvas:
void setupCoordinateSpace(Canvas canvas, int clientWidth, int clientHeight) {
  canvas.setCoordinateSpaceWidth(clientWidth);
  canvas.setCoordinateSpaceHeight(clientHeight);
}
After the coordinate space has been set up, you can call drawMyImage()
Use the canvas width and height properties:
var width = c.width;
var height = c.height;