How to capture the window region where a Flutter widget is located - flutter

Hi guys, I'm making my own Flutter desktop program, but I ran into trouble while trying to write a function that captures the window region where a Flutter widget is located.
First, I tried making the program's background transparent and capturing the widget (using flutter_acrylic and RenderRepaintBoundary). But the captured image did not include the transparent window behind the widget, only the transparent color, because the capture algorithm sees only the widget itself. I'm stuck on this problem.
If anyone has an idea for this problem, please share your wisdom.
Update: I'm now trying to get information about the Win32 API. Any ideas on a Win32 capture API would be much appreciated.
Solved it! Anyone who wants to know, see these links: https://learn.microsoft.com/ko-kr/windows/win32/gdi/capturing-an-image and https://pub.dev/packages/win32

For a Win32 capture API, you can use the GDI BitBlt function. There is an MSDN sample, Capturing an Image (the first link above).
You need to find the Flutter desktop program's window handle with FindWindow, get the window rectangle, and then capture the window from its device context (DC).
HDC hWndDC = GetWindowDC(hwnd);

RECT wnd_rect;
RECT real_rect;
GetWindowRect(hwnd, &wnd_rect);
// DWMWA_EXTENDED_FRAME_BOUNDS excludes the invisible resize borders that
// GetWindowRect includes on Windows 10/11.
DwmGetWindowAttribute(hwnd, DWMWA_EXTENDED_FRAME_BOUNDS, &real_rect, sizeof(RECT));
int offset_left = real_rect.left - wnd_rect.left;
int offset_top = real_rect.top - wnd_rect.top;
RECT capture_rect{ offset_left, offset_top,
                   real_rect.right - real_rect.left + offset_left,
                   real_rect.bottom - real_rect.top + offset_top };
// capture_rect ?? wnd_rect (you can calculate capture_rect based on the size of your window)

int width = capture_rect.right - capture_rect.left;
int height = capture_rect.bottom - capture_rect.top;
HDC hMemDC = CreateCompatibleDC(hWndDC);
HBITMAP hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
HGDIOBJ hOldBitmap = SelectObject(hMemDC, hBitmap);
BOOL ok = BitBlt(hMemDC, 0, 0, width, height, hWndDC, capture_rect.left, capture_rect.top, SRCCOPY);
// hBitmap now holds the capture; read its pixels (e.g. with GetDIBits) before cleanup.
SelectObject(hMemDC, hOldBitmap);
DeleteObject(hBitmap);
DeleteDC(hMemDC);
ReleaseDC(hwnd, hWndDC); // GetWindowDC pairs with ReleaseDC, not DeleteDC
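Since the question was ultimately solved with the pub.dev win32 package linked above, the same GDI sequence can also be driven from Dart over FFI. Below is a minimal sketch under stated assumptions: 'My Flutter App' is a placeholder window title, the DWM frame trimming from the C++ sample is omitted, and extracting the pixels from the bitmap (GetDIBits) is left as a follow-up step.

import 'dart:ffi';
import 'package:ffi/ffi.dart';
import 'package:win32/win32.dart';

void captureFlutterWindow() {
  // Placeholder title (assumption): use your own window's title here.
  final title = 'My Flutter App'.toNativeUtf16();
  final hwnd = FindWindow(nullptr, title);

  // Full window bounds; trim with DWMWA_EXTENDED_FRAME_BOUNDS as in the
  // C++ sample above if the shadow borders should be excluded.
  final rect = calloc<RECT>();
  GetWindowRect(hwnd, rect);
  final width = rect.ref.right - rect.ref.left;
  final height = rect.ref.bottom - rect.ref.top;

  // Same GDI sequence as the C++ sample.
  final hWndDC = GetWindowDC(hwnd);
  final hMemDC = CreateCompatibleDC(hWndDC);
  final hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
  final hOld = SelectObject(hMemDC, hBitmap);
  BitBlt(hMemDC, 0, 0, width, height, hWndDC, 0, 0, SRCCOPY);
  SelectObject(hMemDC, hOld);

  // hBitmap now holds the capture; copy its pixels out with GetDIBits here.

  DeleteObject(hBitmap);
  DeleteDC(hMemDC);
  ReleaseDC(hwnd, hWndDC);
  calloc.free(rect);
  calloc.free(title);
}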

Related

Cropping an image in Flutter

So I've been trying really hard to crop an image according to my needs in Flutter.
Problem statement:
I have a screen, and on that screen I show a frame while the device camera runs in the background. Whenever the user takes a photo, only the area of the image inside the frame should be kept, and the rest should be cropped away.
What have I done so far?
Added the package image 3.1.3.
Wrote code to fetch the x, y coordinates of my frame.
Used the calculated x, y coordinates and the copyCrop method from the image package to crop the captured image.
The problem is that I do not know how copyCrop works, and the code right now does not give me the expected results.
final GlobalKey _key = GlobalKey();

void _getOffset(GlobalKey key) {
  RenderBox? box = key.currentContext?.findRenderObject() as RenderBox?;
  Offset? position = box?.localToGlobal(Offset.zero);
  if (position != null) {
    setState(() {
      _x = position.dx;
      _y = position.dy;
    });
  }
}
I assign this _key to my Image.file(srcToFrameImage), and the function above yields 10, 289.125.
Here 10 is the offset from x and 289.125 is the offset from y. I used this tutorial for the same.
Code to crop my image using the Image package:
var bytes = await File(pictureFile!.path).readAsBytes();
img.Image src = img.decodeImage(bytes)!;
img.Image destImage = img.copyCrop(
    src, _x!.toInt(), _y!.toInt(), src.width, src.height);
var jpg = img.encodeJpg(destImage);
await File(pictureFile!.path).writeAsBytes(jpg);
bloc.addFrontImage(File(pictureFile!.path));
Now, can anyone tell me how I can do this effectively? Right now it does crop my image, but not the way I want it to. It would be really great if someone could explain how copyCrop works and what all the different parameters we pass into it mean.
Any help would be appreciated.
Edit:
I only want the image inside this frame to be kept after capture, and the rest to be cropped off.
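For reference on how copyCrop works: in the image package 3.x API, img.copyCrop(src, x, y, w, h) returns the rectangle of src starting at pixel (x, y) and extending w pixels wide and h pixels tall, so passing src.width and src.height as the last two arguments requests a region that runs past the image's edges. A minimal sketch under stated assumptions: frameWidth, frameHeight, and widgetWidth are hypothetical values measured from the preview widget, not taken from the question's code.

import 'dart:io';
import 'package:image/image.dart' as img;

Future<void> cropToFrame(String path, double x, double y,
    double frameWidth, double frameHeight, double widgetWidth) async {
  final src = img.decodeImage(await File(path).readAsBytes())!;

  // The captured photo usually has more pixels than the on-screen preview,
  // so convert widget coordinates to pixel coordinates with a scale factor.
  final scale = src.width / widgetWidth;

  final dest = img.copyCrop(
    src,
    (x * scale).round(),           // left edge of the frame, in pixels
    (y * scale).round(),           // top edge of the frame, in pixels
    (frameWidth * scale).round(),  // crop width (not src.width)
    (frameHeight * scale).round(), // crop height (not src.height)
  );

  await File(path).writeAsBytes(img.encodeJpg(dest));
}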

ImGui overlay with UpdateLayeredWindow function

As the title says, I'm trying to create a partially transparent in-game overlay using ImGui that's clickable on the UI but click-through otherwise, i.e. you can click on the ImGui elements, but outside the elements you can interact with the game.
I was able to do it using
https://github.com/ocornut/imgui/blob/master/examples/example_win32_directx11/main.cpp
by making the window style WS_EX_TOPMOST | WS_EX_LAYERED, and using
SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 0, LWA_COLORKEY); // LWA_COLORKEY is the flag SetLayeredWindowAttributes expects
However this would impact the rendering performance of the underlying game.
So, I decided to try the UpdateLayeredWindow function.
Here is one of the templates I referenced:
https://github.com/riley-x/TransparentWindow
This does not impact performance and works perfectly, and now I have to integrate ImGui rendering to replace the green ellipse.
However, ImGui uses the following code:
g_pd3dDeviceContext->OMSetRenderTargets(1, &g_mainRenderTargetView, NULL);
g_pd3dDeviceContext->ClearRenderTargetView(g_mainRenderTargetView, clear_color_with_alpha);
But in order to use the UpdateLayeredWindow function:

BOOL UpdateLayeredWindow(
  HWND hWnd,
  HDC hdcDst,
  POINT *pptDst,
  SIZE *psize,
  HDC hdcSrc,
  POINT *pptSrc,
  COLORREF crKey,
  BLENDFUNCTION *pblend,
  DWORD dwFlags
);

I have to tell ImGui to render to an HDC.
I've thought of some possible solutions.
I'm not sure how g_mainRenderTargetView could ever be bound to an HDC, unlike ID2D1Factory::CreateDCRenderTarget, which can actually bind to an HDC with BindDC.
Alternatively, I could let ImGui render as usual, retrieve the back buffer as a bitmap, and then do the following:
HDC hdcWnd = GetDC(hwnd);
HDC hdcMem = CreateCompatibleDC(hdcWnd);
HBITMAP memBitmap = CreateCompatibleBitmap(hdcWnd, rect.right - rect.left, rect.bottom - rect.top);
HGDIOBJ oldBitmap = SelectObject(hdcMem, memBitmap);

m_pRenderTarget->BindDC(hdcMem, &rect);
m_pRenderTarget->BeginDraw();
m_pRenderTarget->Clear({ 0 });
m_pRenderTarget->DrawBitmap(bitmap.Get()); // ???
m_pRenderTarget->EndDraw();

POINT pt0 = { 0 };
SIZE sz = { rect.right - rect.left, rect.bottom - rect.top };
BLENDFUNCTION bfunc = { 0 };
bfunc.AlphaFormat = AC_SRC_ALPHA;
bfunc.BlendFlags = 0;
bfunc.BlendOp = AC_SRC_OVER;
bfunc.SourceConstantAlpha = 255;
// ULW_ALPHA, not ULW_COLORKEY: the BLENDFUNCTION above requests per-pixel alpha.
UpdateLayeredWindow(hwnd, hdcWnd, &pt0, &sz, hdcMem, &pt0, 0, &bfunc, ULW_ALPHA);

SelectObject(hdcMem, oldBitmap);
DeleteObject(memBitmap);
DeleteDC(hdcMem); // CreateCompatibleDC pairs with DeleteDC, not ReleaseDC
ReleaseDC(hwnd, hdcWnd);
where m_pRenderTarget is made from:
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
    0, 0,
    D2D1_RENDER_TARGET_USAGE_NONE,
    D2D1_FEATURE_LEVEL_10);
d2Factory->CreateDCRenderTarget(&props, m_pRenderTarget.GetAddressOf());
The problem is that when I try to retrieve the back-buffer bitmap, it always fails.
Or maybe there is some other way to make the overlay work without using UpdateLayeredWindow?

How can I convert back and forth between Blob and Image in Flutter Web?

Context
I use image_picker with Flutter web to allow users to select an image. This returns the URI of a local network Blob object, which I can display with Image.network(pickedFile.path). Where I get into trouble is when I want to start manipulating that image. First, I need to pull it off the network and into memory. When I'm done, I need to push it back up to a network-accessible Blob.
How do I create a Blob from an Image?
I don't mean the built-in Image widget. I mean an ImageLib.Image, where ImageLib is the Dart image library. Why do I want to do this? Well, I have a web app in which the user selects an image, which is returned as a Blob. I bring this into memory, use ImageLib to crop and resize it, and then want to push it back up to a Blob URL. This is where my code currently stands:
// BROKEN:
var png = ImageLib.encodePng(croppedImage);
var blob = html.Blob([base64Encode(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
The code does not throw an error until I try to display the image with Image(image: NetworkImage(url)). The error begins with:
The following Event$ object was thrown resolving an image frame:
Copying and pasting the URL into the browser reveals a black screen, which I take to be a 0x0 image. And so I come to my questions:
How do I properly encode the image and create a Blob?
Is there a better way to manipulate images in Flutter web besides using Blobs? I am basically only using it because that is what image_picker_for_web returns, and so it is the only method I know aside from possibly using a virtual filesystem, which I haven't explored too much.
How do I pull an image into memory?
While I'm at it, I might as well ask what the best practice is for bringing an image into memory. For mobile, I used image_picker to get the name of a file, and I would use package:image/image.dart as ImageLib to manipulate it:
// pickedfile.path is the name of a file
ImageLib.Image img = ImageLib.decodeImage(File(pickedfile.path).readAsBytesSync());
With web I don't have filesystem access, so I've been doing this instead:
// pickedfile.path is the URL of an HTML Blob
var response = await http.get(pickedfile.path);
ImageLib.Image img = ImageLib.decodeImage(response.bodyBytes);
This is considerably slower than the old way, probably because of the GET. Is this really the best (or only) way to get my image into memory?
The secret, as suggested by Brendan Duncan, was to use the browser's native decoding functionality:
// Use the browser to decode.
html.ImageElement myImageElement = html.ImageElement(src: imagePath);
await myImageElement.onLoad.first; // allow time for the browser to render
html.CanvasElement myCanvas = html.CanvasElement(
    width: myImageElement.width, height: myImageElement.height);
html.CanvasRenderingContext2D ctx = myCanvas.context2D;
// ctx.drawImage(myImageElement, 0, 0);
// html.ImageData rgbaData = ctx.getImageData(0, 0, myImageElement.width, myImageElement.height);

// Resize to save time on encoding.
int _MAXDIM = 500;
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = _MAXDIM;
  height = (_MAXDIM * myImageElement.height / myImageElement.width).round();
} else {
  height = _MAXDIM;
  width = (_MAXDIM * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
He proposed a similar trick for encoding, but for my use case it was sufficient to do it with Dart:
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = 800;
  height = (800 * myImageElement.height / myImageElement.width).round();
} else {
  height = 800;
  width = (800 * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
Note that in both cases I resize the image first to reduce the size.
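On the first question (properly encoding the image and creating a Blob): the likely fix for the snippet marked BROKEN above is to hand the Blob raw PNG bytes rather than a base64 string, since a Blob's contents must be the actual PNG stream for the browser to decode it. A minimal sketch, assuming croppedImage is the ImageLib.Image from that snippet:

import 'dart:typed_data';
import 'dart:html' as html;
import 'package:image/image.dart' as ImageLib;

String imageToBlobUrl(ImageLib.Image croppedImage) {
  // encodePng already returns the finished byte stream; wrapping it in
  // base64Encode would store text, not a decodable image.
  final png = Uint8List.fromList(ImageLib.encodePng(croppedImage));
  final blob = html.Blob([png], 'image/png');
  return html.Url.createObjectUrl(blob);
}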

GTK ignores any type of window positioning

I've been trying to position a splash screen at the center of the screen. On Windows, such a request is easy using SetWindowPos and a bit of geometric arithmetic. With GTK, I found that all requests to move a window are ignored by the window manager. So my question is: how come I see so many applications with pretty splash screens properly centered? I started with GTK_WINDOW_TOPLEVEL, by the way, and switched to a popup while trying a few things. Setting the gravity and the position do not fail; they are simply ignored. Even when defined in a .glade file, the window position is ignored.
GtkWidget *CreateSplashWindow(int width, int height) /* signature reconstructed; the original snippet began at the opening brace */
{
  GtkWidget *pNewWindow = gtk_window_new(GTK_WINDOW_POPUP);
  gtk_window_move(GTK_WINDOW(pNewWindow), 0, 0);
  gtk_widget_show_all(pNewWindow);
  while (gtk_events_pending())
    gtk_main_iteration();
  gtk_widget_set_size_request(pNewWindow, width, height);
  gtk_window_set_decorated(GTK_WINDOW(pNewWindow), FALSE);
  // gtk_window_set_position(GTK_WINDOW(pNewWindow), GTK_WIN_POS_CENTER_ALWAYS);
  gtk_window_set_resizable(GTK_WINDOW(pNewWindow), FALSE);
  // gtk_window_set_gravity(GTK_WINDOW(pNewWindow), GDK_GRAVITY_CENTER);
  gtk_window_move(GTK_WINDOW(pNewWindow), 0, 0);
  while (gtk_events_pending())
    gtk_main_iteration();
  return pNewWindow;
}

Raphael-GWT: fill image of a shape (Rect) appears offset. How to resolve this?

I'm wondering about the behavior of {Shape}.attr("fill","url({image.path})") when applying a fill image to a shape:
public class AppMapCanvas extends Raphael {
  public AppMapCanvas(int width, int height) {
    super(width, height);
    this.hCenter = width / 2;
    this.vCenter = height / 2;
    ...
    Rect rect = this.new Rect(hCenter, vCenter, 144, 40, 4);
    rect.attr("fill", "url('../images/app-module-1-bg.png')"); // <--
    ...
  }
}
The background image seems to tile across the canvas behind the shape, and thus gets weird positioning (an illustration snapshot is enclosed; I marked the original image borders in red).
This seems to resolve itself in the presence of an animation along a path (a mere path.M(0,0) is sufficient).
How can I position the fill image properly in the first place?
The proper way to do this, from what I can understand, would be to use an SVG pattern declaration to specify the portion and position of the image you want to use. Then you would use that pattern to fill the rectangle element. Unfortunately, the Raphael JavaScript library doesn't support patterns, so there's no direct way to use an image to fill a rectangle.