I have associated the PictureBox with OpenGL through its handle:
hDC = GetDC((HWND)picBoxA->Handle.ToInt64());
This works perfectly and the graphics are drawn in the PictureBox.
The problem is that I've been looking all day for a way to copy the rendered data to another PictureBox or to a bitmap, but I cannot find a way to do it.
I have tried different approaches, and I thought the logical solution would be to copy the device context using BitBlt, reading from the HDC, but this only copies the background color (picBoxA->BackColor).
HDC hdcA = GetDC((HWND)picBoxA->Handle.ToInt64());
HDC hdcB = GetDC((HWND)picBoxB->Handle.ToInt64());
BitBlt(hdcB, 0, 0, 100, 100, hdcA, 0, 0, SRCCOPY);
Where is this image data? How can I access it? Is it not possible to capture the contents of the PictureBox?
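One thing worth noting: BitBlt can only copy what GDI drew, while OpenGL usually presents through its own path, so a more reliable way to get at the pixels is to read them back from the OpenGL framebuffer itself. A minimal sketch, assuming the rendering context that draws into picBoxA is current on this thread and that width/height are its client size:
// Hedged sketch: read back the OpenGL framebuffer instead of BitBlt'ing the window DC.
std::vector<unsigned char> pixels(width * height * 4); // needs #include <vector>
glPixelStorei(GL_PACK_ALIGNMENT, 1);                   // avoid row-padding surprises
glReadBuffer(GL_BACK);                                 // or GL_FRONT if reading after SwapBuffers
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// pixels now holds the image bottom-up; flip the rows (or build the bitmap with a
// negative height) before copying it into picBoxB or a bitmap.
From there the buffer can be wrapped in a System::Drawing::Bitmap (or a GDI DIB) and assigned to picBoxB->Image.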
Hi guys, I'm making my own Flutter desktop program,
but I ran into trouble when trying to write a function that captures the window area where the Flutter widget is located.
First, I tried making the program's background transparent and capturing the widget (using flutter_acrylic and RenderRepaintBoundary).
When I captured the widget, the image did not include the transparent window, only the transparent color, because the capture algorithm sees only the widget. I'm stuck on this problem.
If anyone has an idea for this problem, please share your wisdom.
Update: Now I'm trying to get information about the Win32 API. Any ideas on a Win32 capture API would be much appreciated.
I did it! Anyone who wants to know, see these links: https://learn.microsoft.com/ko-kr/windows/win32/gdi/capturing-an-image and https://pub.dev/packages/win32
For the Win32 capture API, you can use the GDI BitBlt function. Here is an MSDN sample: Capturing an Image.
You need to find the Flutter desktop program's window handle with FindWindow, then get the window rectangle, and capture the window from its DC.
HDC hWndDC = GetWindowDC(hwnd);
RECT capture_rect{ 0, 0, 0, 0 };
RECT wnd_rect;
RECT real_rect;
GetWindowRect(hwnd, &wnd_rect);
// DWMWA_EXTENDED_FRAME_BOUNDS gives the visible bounds without the drop shadow (needs <dwmapi.h> / dwmapi.lib).
DwmGetWindowAttribute(hwnd, DWMWINDOWATTRIBUTE::DWMWA_EXTENDED_FRAME_BOUNDS, &real_rect, sizeof(RECT));
int offset_left = real_rect.left - wnd_rect.left;
int offset_top = real_rect.top - wnd_rect.top;
capture_rect = RECT{ offset_left, offset_top,
                     real_rect.right - real_rect.left + offset_left,
                     real_rect.bottom - real_rect.top + offset_top };
// capture_rect vs. wnd_rect: you can calculate capture_rect based on the size of your window.
int width = capture_rect.right - capture_rect.left;
int height = capture_rect.bottom - capture_rect.top;
HDC hMemDC = CreateCompatibleDC(hWndDC);
HBITMAP hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
HGDIOBJ hOldBitmap = SelectObject(hMemDC, hBitmap);
// Copy the window contents into hBitmap.
BOOL ok = BitBlt(hMemDC, 0, 0, width, height, hWndDC, capture_rect.left, capture_rect.top, SRCCOPY);
// Clean up: deselect the bitmap, release the window DC (it came from GetWindowDC,
// so ReleaseDC rather than DeleteDC), and delete the GDI objects once you are done with hBitmap.
SelectObject(hMemDC, hOldBitmap);
ReleaseDC(hwnd, hWndDC);
DeleteDC(hMemDC);
DeleteObject(hBitmap);
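If you also need the raw pixels out of hBitmap (for example to pass them to Dart or encode them to a file), a minimal GetDIBits sketch; it assumes a 32-bit top-down layout and would go after the SelectObject(hMemDC, hOldBitmap) call but before ReleaseDC and DeleteObject:
// Sketch only: copy the captured HBITMAP into a BGRA byte buffer.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;      // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4); // needs #include <vector>
GetDIBits(hWndDC, hBitmap, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);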
As the title says, I'm trying to create a partially transparent in-game overlay using ImGui that is clickable on the UI but click-through otherwise, i.e. you can click on the ImGui elements, but outside the elements you can interact with the game.
I was able to do it using
https://github.com/ocornut/imgui/blob/master/examples/example_win32_directx11/main.cpp
by making the window style as WS_EX_TOPMOST | WS_EX_LAYERED, and using
SetLayeredWindowAttributes(hwnd, RGB(0, 0, 0), 0, LWA_COLORKEY);
However this would impact the rendering performance of the underlying game.
So, I decided to try the UpdateLayeredWindow function.
Here is one of the templates I referenced:
https://github.com/riley-x/TransparentWindow
This does not impact performance and worked perfectly, and now I have to integrate ImGui rendering to replace the green ellipse.
However, ImGui uses the following code:
g_pd3dDeviceContext->OMSetRenderTargets(1, &g_mainRenderTargetView, NULL);
g_pd3dDeviceContext->ClearRenderTargetView(g_mainRenderTargetView, clear_color_with_alpha);
But in order to use the
BOOL UpdateLayeredWindow(
HWND hWnd,
HDC hdcDst,
POINT *pptDst,
SIZE *psize,
HDC hdcSrc,
POINT *pptSrc,
COLORREF crKey,
BLENDFUNCTION *pblend,
DWORD dwFlags
);
function, I have to tell ImGUI to render to an HDC.
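For context, the hdcSrc that UpdateLayeredWindow reads from is typically a memory DC with a 32-bit premultiplied-alpha DIB section selected into it. A hedged sketch of that setup, where hwnd, width and height are placeholders:
// Sketch of a typical UpdateLayeredWindow source: a top-down 32-bit DIB section
// whose bits you fill with premultiplied BGRA (e.g. copied from a D3D back buffer).
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;            // top-down
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void* bits = nullptr;
HDC screenDC = GetDC(nullptr);
HDC memDC = CreateCompatibleDC(screenDC);
HBITMAP dib = CreateDIBSection(screenDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
HGDIOBJ oldBmp = SelectObject(memDC, dib);
// ... fill `bits` with width*height premultiplied BGRA pixels ...
POINT srcPos = { 0, 0 };
SIZE size = { width, height };
BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
UpdateLayeredWindow(hwnd, screenDC, nullptr, &size, memDC, &srcPos, 0, &blend, ULW_ALPHA);
SelectObject(memDC, oldBmp);
DeleteObject(dib);
DeleteDC(memDC);
ReleaseDC(nullptr, screenDC);
The interesting part is the bits pointer: whatever premultiplied BGRA data ends up there is exactly what the layered window shows.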
I've thought of some possible solutions.
I'm not sure how g_mainRenderTargetView could bind to an HDC, unlike ID2D1Factory::CreateDCRenderTarget, which can actually bind to an HDC with BindDC.
Or I could let ImGui render as usual but retrieve the back buffer as a bitmap, and then do the following:
HDC hdcWnd = GetDC(hwnd);
HDC hdcMem = CreateCompatibleDC(hdcWnd);
HBITMAP memBitmap = CreateCompatibleBitmap(hdcWnd, rect.right - rect.left, rect.bottom - rect.top);
SelectObject(hdcMem, memBitmap);
m_pRenderTarget->BindDC(hdcMem, &rect);
m_pRenderTarget->BeginDraw();
m_pRenderTarget->Clear({ 0 });
m_pRenderTarget->DrawBitmap(bitmap.Get()); // ???
m_pRenderTarget->EndDraw();
POINT pt0 = { 0 };
SIZE sz = { rect.right - rect.left, rect.bottom - rect.top };
BLENDFUNCTION bfunc = { 0 };
bfunc.AlphaFormat = AC_SRC_ALPHA;
bfunc.BlendFlags = 0;
bfunc.BlendOp = AC_SRC_OVER;
bfunc.SourceConstantAlpha = 255;
UpdateLayeredWindow(hwnd, hdcWnd, &pt0, &sz, hdcMem, &pt0, 0, &bfunc, ULW_ALPHA); // per-pixel alpha, so ULW_ALPHA rather than ULW_COLORKEY
DeleteObject(memBitmap);
ReleaseDC(hwnd, hdcWnd);
DeleteDC(hdcMem); // hdcMem came from CreateCompatibleDC, so DeleteDC rather than ReleaseDC
where m_pRenderTarget is made from:
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
D2D1_RENDER_TARGET_TYPE_DEFAULT,
D2D1::PixelFormat(
DXGI_FORMAT_B8G8R8A8_UNORM,
D2D1_ALPHA_MODE_PREMULTIPLIED),
0,
0,
D2D1_RENDER_TARGET_USAGE_NONE,
D2D1_FEATURE_LEVEL_10
);
d2Factory->CreateDCRenderTarget(&props, m_pRenderTarget.GetAddressOf());
The problem is that when I tried to retrieve the back buffer bitmap, it always failed.
Or maybe there is some other way to make the overlay work without using UpdateLayeredWindow?
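On the back-buffer readback that kept failing: one hedged way to do it in plain D3D11 (reusing the g_pSwapChain / g_pd3dDevice / g_pd3dDeviceContext globals from the imgui example_win32_directx11 sample) is to copy the back buffer into a CPU-readable staging texture and Map it; the mapped rows can then be copied into the DIB section that feeds UpdateLayeredWindow. Another route, not verified here, is creating the swap chain with DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE and getting an HDC via IDXGISurface1::GetDC.
// Hedged sketch: read the DX11 back buffer through a staging copy.
// Requires <d3d11.h> and <wrl/client.h>.
Microsoft::WRL::ComPtr<ID3D11Texture2D> backBuffer;
g_pSwapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
D3D11_TEXTURE2D_DESC desc = {};
backBuffer->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;            // CPU-readable copy
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
Microsoft::WRL::ComPtr<ID3D11Texture2D> staging;
g_pd3dDevice->CreateTexture2D(&desc, nullptr, &staging);
g_pd3dDeviceContext->CopyResource(staging.Get(), backBuffer.Get());
D3D11_MAPPED_SUBRESOURCE mapped = {};
g_pd3dDeviceContext->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData points at rows of mapped.RowPitch bytes (BGRA if the swap chain
// is DXGI_FORMAT_B8G8R8A8_UNORM); copy them row by row into the layered
// window's DIB bits, then:
g_pd3dDeviceContext->Unmap(staging.Get(), 0);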
I am using OpenGL ES 2.0 with GLKit, trying to render to iOS devices.
Basically, my goal is to draw to a texture instead of the main buffer, and then render that texture to the screen. I have been trying to follow another topic on SO. Unfortunately, they mention something about power-of-two sizes (I'm assuming with regard to resolution), but I don't know how to fix it. Anyway, here is my Swift interpretation of the code from that topic.
import Foundation
import GLKit
import OpenGLES
class RenderTexture {
    var framebuffer: GLuint = 0
    var tex: GLuint = 0
    var old_fbo: GLint = 0

    init(width: GLsizei, height: GLsizei)
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glGenFramebuffers(1, &framebuffer)
        glGenTextures(1, &tex)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(width), GLsizei(height), 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex, 0)
        glClearColor(0, 0.1, 0, 1)
        glClear(GLenum(GL_COLOR_BUFFER_BIT))

        let status = glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER))
        if (status != GLenum(GL_FRAMEBUFFER_COMPLETE))
        {
            print("DIDNT GO WELL WITH", width, " ", height)
            print(status)
        }
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }

    func begin()
    {
        glGetIntegerv(GLenum(GL_FRAMEBUFFER_BINDING), &old_fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    }

    func end()
    {
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), GLenum(old_fbo))
    }
}
Then, as far as rendering, I have a few things going on.
Here is code that theoretically renders any texture full screen. This has been tested with two manually loaded PNGs (using no buffer changes) and works great.
func drawTriangle(texture: GLuint)
{
    loadBuffers()
    //glViewport(0, 0, width, height)
    //glClearColor(0, 0.0, 0, 1.0)
    //glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))
    glEnable(GLenum(GL_TEXTURE_2D))
    glActiveTexture(GLenum(GL_TEXTURE0))
    glUseProgram(texShader)

    let loc1 = glGetUniformLocation(texShader, "s_texture")
    glUniform1i(loc1, 0)

    let loc3 = glGetUniformLocation(texShader, "matrix")
    if (loc3 != -1)
    {
        glUniformMatrix4fv(loc3, 1, GLboolean(GL_FALSE), &matrix)
    }

    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 6)
    glDisable(GLenum(GL_TEXTURE_2D))
    destroyBuffers()
}
I also have a function that draws a couple of dots on the screen. You don't really need to see the method, but it works. This is how I will know that OpenGL is drawing from the framebuffer texture and NOT a preloaded texture.
Finally, here is the gist of the code I am trying to run:
func initialize()
{
    nfbo = RenderTexture(width: width, height: height)
}

func draw()
{
    glViewport(0, 0, GLsizei(width * 2), GLsizei(height * 2)) // why do I have to multiply by 2 to get it to work?????
    nfbo.begin()
    drawDots() // Draws the dots
    nfbo.end()
    reset()
    drawTriangle(nfbo.tex)
}
At the end of all this, all that is drawn is a blank screen. If there is any more code that would help you figure things out, let me know. I tried to trim it to make it less annoying for you.
Note: Considering the whole power-of-two thing, I have tried passing the FBO class 512 x 512, just in case being a power of two would make things work. Unfortunately, it didn't.
Another note: Everything I am doing is going to be 2D, so I don't need depth buffers, right?
Yesterday I ran into exactly the same issue.
After struggling for hours, I found out why.
The trick is configuring your texture map (while it is bound in the RenderTexture initializer) with the following:
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE);
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE);
Otherwise, you won't draw anything on the texture map.
The reason seems to be that while iOS supports texture maps that are not a power of two, it requires GL_CLAMP_TO_EDGE; otherwise it won't work.
It really should report an incomplete framebuffer; it took me quite a long time to debug this problem!
Here is a related discussion:
Rendering to non-power-of-two texture on iPhone
I have a very large image that I would like to show a 200x200 px thumbnail of (showing a portion of the image, not a stretched version of the entire image). To achieve this I have been looking into using CIImage.ImageByCroppingToRect or CICrop, but I am not able to get anything useful. Either the result is just black (I assume what I see is the black portion of the cropped image), or I get a SIGABRT ("Cannot handle a (6000 x 3000) sized texture with the given GLES context!").
There is an Objective-C sample in this thread:
Cropping CIImage with CICrop isn't working properly
But I haven't managed to translate it into C# and get it working properly.
Here's a MonoTouch port of the answer from the post you mentioned:
var croppedImaged = CIImage.FromCGImage (inputCGImage).ImageByCroppingToRect (new RectangleF (150, 150, 300, 300));
var transformFilter = new CIAffineTransform();
var affineTransform = CGAffineTransform.MakeTranslation (-150, 150);
transformFilter.Transform = affineTransform;
transformFilter.Image = croppedImaged;
CIImage transformedImage = transformFilter.OutputImage;
I am in the process of adding drag and drop support to an existing Mono/C#/GTK# application. I was wondering whether it was possible to use RGBA transparency on the icons that appear under the mouse pointer when I start dragging an object.
So far, I realized the following:
I can set the bitmap in question by calling the Gtk.Drag.SourceSetIconPixbuf() method. However, no luck with alpha transparency: pixels that are not fully opaque would get 100% transparent this way.
I also tried calling RenderPixmapAndMask() on the GdkPixbuf so that I could use Gtk.Drag.SourceSetIcon() with an RGBA colormap of my Screen. It didn't work either: whenever I started dragging, I got the following error:
[Gdk] IA__gdk_window_set_back_pixmap: assertion 'pixmap == NULL || gdk_drawable_get_depth (window) == gdk_drawable_get_depth (pixmap)' failed.
This way, the pixmap doesn't even get copied; only a white shape (presumably set by the mask argument of SourceSetIcon()) shows up on dragging.
I'd like to ask if there's a way to make these icons have alpha transparency, despite the fact that I failed to do so. In case it's impossible, answers discussing the reasons of the lack of this feature would also be helpful. Thank you.
(Compositing is - of course - enabled on my desktop (Ubuntu/10.10, Compiz/0.8.6-0ubuntu9).)
OK, finally I solved it. You should create a new Gtk.Window of Popup type, set its Colormap to your screen's RGBA colormap, have the background erased by Cairo to a transparent color, draw whatever you'd like on it, and finally pass it on to Gtk.Drag.SetIconWidget().
Sample code (presumably you'll want to use this inside OnDragBegin, or at a point where you have a valid drag context to be passed to SetIconWidget()):
Gtk.Window window = new Gtk.Window (Gtk.WindowType.Popup);
window.Colormap = window.Screen.RgbaColormap;
window.AppPaintable = true;
window.Decorated = false;
window.Resize (/* specify width, height */);
/* The cairo context can only be created when the window is being drawn by the
 * window manager, so wrap drawing code into an ExposeEvent delegate. */
window.ExposeEvent += delegate {
    Context ctx = Gdk.CairoHelper.Create (window.GdkWindow);
    /* Erase the background */
    ctx.SetSourceRGBA (0, 0, 0, 0);
    ctx.Operator = Operator.Source;
    ctx.Paint ();
    /* Draw whatever you'd like to here, and then clean up by calling
     * Dispose() on the context's target. */
    (ctx.Target as IDisposable).Dispose ();
};
Gtk.Drag.SetIconWidget (drag_context, window, 10, 10);