Cropping an image with CIImage.ImageByCroppingToRect or CICrop in MonoTouch

I have a very large image and would like to show a 200x200px thumbnail of it (showing a portion of the image, not a stretched version of the entire image). To achieve this I am looking into using CIImage.ImageByCroppingToRect or CICrop, but I am not able to get anything useful out of them. Either the result is just black (I assume what I see is the black portion of the cropped image), or I get a SIGABRT ("Cannot handle a (6000 x 3000) sized texture with the given GLES context!").
There is an Objective-C sample in this thread:
Cropping CIImage with CICrop isn't working properly
But I haven't managed to translate it into C# and get it working properly.

Here's a MonoTouch port of the answer from the post you mentioned:
// Crop to the region of interest, then translate the result back so its
// extent starts at the origin (note: both offsets are negative).
var croppedImage = CIImage.FromCGImage (inputCGImage).ImageByCroppingToRect (new RectangleF (150, 150, 300, 300));
var transformFilter = new CIAffineTransform ();
transformFilter.Transform = CGAffineTransform.MakeTranslation (-150, -150);
transformFilter.Image = croppedImage;
CIImage transformedImage = transformFilter.OutputImage;
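The filter only describes the work; to get actual pixels you still have to render transformedImage through a CIContext. A minimal sketch of that last step, assuming the transformedImage from above (the software renderer is a guess at sidestepping the GLES texture-size limit from the question):
// Render the filter output into a CGImage and wrap it in a UIImage.
// UseSoftwareRenderer avoids the GLES texture-size limit that very large
// source images can trigger.
var ciContext = CIContext.FromOptions (new CIContextOptions { UseSoftwareRenderer = true });
using (var cgResult = ciContext.CreateCGImage (transformedImage, transformedImage.Extent)) {
    var thumbnail = UIImage.FromImage (cgResult);
    // hand thumbnail to your UIImageView ...
}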

How to capture the window where flutter widget's area is located

Hi, I'm making my own Flutter desktop program, but I've run into trouble writing a function that captures the window area where a Flutter widget is located.
First, I tried making the program's background transparent and then capturing the widget (using flutter_acrylic and RenderRepaintBoundary). But the captured image did not include the transparent window behind the widget, only the transparent color, because the capture algorithm sees only the widget itself. I'm blocked on this problem.
If anyone has an idea about this problem, please share your wisdom.
Update: I'm now trying to get information about the Win32 API. Any ideas on Win32 capture APIs would be much appreciated.
I got it working! Anyone who wants to know how, see these links: https://learn.microsoft.com/ko-kr/windows/win32/gdi/capturing-an-image and https://pub.dev/packages/win32
For the Win32 capture API, you can use the GDI BitBlt function. There is an MSDN sample, Capturing an Image.
You need to find the Flutter desktop program's window handle with FindWindow, get the window rect, and then capture the window from its DC:
// Assumes <windows.h> and <dwmapi.h> (link against dwmapi.lib).
HDC hWndDC = GetWindowDC(hwnd);
RECT wnd_rect;
RECT real_rect;
GetWindowRect(hwnd, &wnd_rect);
// The extended frame bounds exclude the invisible resize borders that
// GetWindowRect includes on Windows 10 and later.
DwmGetWindowAttribute(hwnd, DWMWINDOWATTRIBUTE::DWMWA_EXTENDED_FRAME_BOUNDS, &real_rect, sizeof(RECT));
int offset_left = real_rect.left - wnd_rect.left;
int offset_top = real_rect.top - wnd_rect.top;
RECT capture_rect{ offset_left, offset_top,
                   real_rect.right - real_rect.left + offset_left,
                   real_rect.bottom - real_rect.top + offset_top };
//capture_rect ?? wnd_rect (you can calculate capture_rect based on the size of your window)
int width = capture_rect.right - capture_rect.left;
int height = capture_rect.bottom - capture_rect.top;
HDC hMemDC = CreateCompatibleDC(hWndDC);
HBITMAP hBitmap = CreateCompatibleBitmap(hWndDC, width, height);
HGDIOBJ hOldBitmap = SelectObject(hMemDC, hBitmap);
bool ok = BitBlt(hMemDC, 0, 0, width, height, hWndDC, capture_rect.left, capture_rect.top, SRCCOPY);
SelectObject(hMemDC, hOldBitmap); // deselect hBitmap before reading its pixels
// ... copy the pixels out of hBitmap here (see the sketch below) ...
ReleaseDC(hwnd, hWndDC); // GetWindowDC pairs with ReleaseDC, not DeleteDC
DeleteDC(hMemDC);
DeleteObject(hBitmap);
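To read the captured pixels out of hBitmap (at the marked spot above, before the cleanup calls), GetDIBits can copy them into a plain byte buffer. A minimal sketch; the 32-bit top-down BGRA format is one reasonable choice, and the window title in the usage line is a placeholder:
#include <windows.h>
#include <vector>

// Copy a captured HBITMAP into a BGRA byte buffer via GetDIBits.
std::vector<BYTE> CopyPixels(HDC hdc, HBITMAP hBitmap, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;      // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;         // BGRA, 8 bits per channel
    bmi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(hdc, hBitmap, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);
    return pixels;
}

// Usage (the title is a placeholder for your Flutter window's title):
// HWND hwnd = FindWindow(nullptr, L"my_flutter_app");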

How can I convert back and forth between Blob and Image in Flutter Web?

Context
I use image_picker with Flutter web to allow users to select an image. This returns the URI of a local network Blob object, which I can display with Image.network(pickedFile.path). Where I get into trouble is when I want to start manipulating that image. First, I need to pull it off the network and into memory. When I'm done, I need to push it back up to a network-accessible Blob.
How do I create a Blob from an Image?
I don't mean the built-in Image widget. I mean an ImageLib.Image, where ImageLib is the Dart image library. Why do I want to do this? Well, I have a web app in which the user selects an image, which is returned as a Blob. I bring this into memory, use ImageLib to crop and resize it, and then want to push it back up to a Blob URL. Here is where my code currently stands:
// BROKEN:
var png = ImageLib.encodePng(croppedImage);
var blob = html.Blob([base64Encode(png)], 'image/png');
var url = html.Url.createObjectUrl(blob);
The code does not throw an error until I try to display the image with Image(image: NetworkImage(url)). The error begins with:
The following Event$ object was thrown resolving an image frame:
Copying and pasting the URL into the browser reveals a black screen, which I take to be a 0x0 image. And so I come to my questions:
How do I properly encode the image and create a Blob?
Is there a better way to manipulate images in Flutter web besides using Blobs? I am basically only using it because that is what image_picker_for_web returns, and so it is the only method I know aside from possibly using a virtual filesystem, which I haven't explored too much.
How do I pull an image into memory?
While I'm at it, I might as well ask what the best practice is for bringing an image into memory. For mobile, I used image_picker to get the name of a file, and would use package:image/image.dart (imported as ImageLib) to manipulate it:
// pickedFile.path is the name of a file
ImageLib.Image img = ImageLib.decodeImage(File(pickedFile.path).readAsBytesSync());
With web I don't have filesystem access, so I've been doing this instead:
// pickedFile.path is the URL of an HTML Blob
var response = await http.get(pickedFile.path);
ImageLib.Image img = ImageLib.decodeImage(response.bodyBytes);
This is considerably slower than the old way, probably because of the GET. Is this really the best (or only) way to get my image into memory?
The secret, as suggested by Brendan Duncan, was to use the browser's native decoding functionality:
// Use the browser to decode the image.
html.ImageElement myImageElement = html.ImageElement(src: imagePath);
await myImageElement.onLoad.first; // wait for the browser to load the image
html.CanvasElement myCanvas = html.CanvasElement(width: myImageElement.width, height: myImageElement.height);
html.CanvasRenderingContext2D ctx = myCanvas.context2D;
// Unscaled alternative:
//ctx.drawImage(myImageElement, 0, 0);
//html.ImageData rgbaData = ctx.getImageData(0, 0, myImageElement.width, myImageElement.height);
// Resize to save time on encoding.
int _MAXDIM = 500;
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = _MAXDIM;
  height = (_MAXDIM * myImageElement.height / myImageElement.width).round();
} else {
  height = _MAXDIM;
  width = (_MAXDIM * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
He proposed a similar trick for encoding, but for my use case it was sufficient to do it with Dart:
int width, height;
if (myImageElement.width > myImageElement.height) {
  width = 800;
  height = (800 * myImageElement.height / myImageElement.width).round();
} else {
  height = 800;
  width = (800 * myImageElement.width / myImageElement.height).round();
}
ctx.drawImageScaled(myImageElement, 0, 0, width, height);
html.ImageData rgbaData = ctx.getImageData(0, 0, width, height);
var myImage = ImageLib.Image.fromBytes(rgbaData.width, rgbaData.height, rgbaData.data);
Note that in both cases I resize the image first to reduce the size.
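For the encoding direction, the likely fix to the broken snippet in the question is to hand the Blob constructor the raw PNG bytes instead of a base64 string: base64Encode produces text, so the resulting blob is not valid PNG data. A minimal sketch, assuming the same ImageLib 3.x API used above:
import 'dart:html' as html;
import 'dart:typed_data';
import 'package:image/image.dart' as ImageLib;

// Encode the manipulated image to PNG, wrap the raw bytes (not base64
// text) in a Blob, and mint an object URL that Image.network can load.
String blobUrlFromImage(ImageLib.Image croppedImage) {
  Uint8List png = Uint8List.fromList(ImageLib.encodePng(croppedImage));
  html.Blob blob = html.Blob([png], 'image/png');
  return html.Url.createObjectUrl(blob);
}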

Unity 5.1 Distorted image after download from web

When I load my PNGs after compressing them with TinyPNG, they get distorted (all purple and transparent):
http://s22.postimg.org/b39g0bhn5/Screen_Shot_2015_06_28_at_10_39_50_AM.png
The background, for example, should be blue:
http://postimg.org/image/fez234o6d/
This only happens with pictures that were compressed by tinypng.com, and only after I updated to Unity 5.1.
I'm downloading the image with the WWW class and loading the texture using Texture2D.
Is this problem known to anyone?
I had exactly the same issue and was able to solve it using the following code:
// Placeholder texture while the download is in flight.
mat.mainTexture = new Texture2D(32, 32, TextureFormat.DXT5, false);
Texture2D newTexture = new Texture2D(32, 32, TextureFormat.DXT5, false);
WWW stringWWW = new WWW(texture1URL);
yield return stringWWW;
if (stringWWW.error == null)
{
    stringWWW.LoadImageIntoTexture(newTexture);
    mat.mainTexture = newTexture;
}
The key seemed to be using DXT5 as the texture format and loading the downloaded data with LoadImageIntoTexture(...).
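For context, here is how that snippet might sit inside a complete component; texture1URL and mat are assumed fields, and the WWW API matches the Unity 5.x era of the question:
using System.Collections;
using UnityEngine;

public class TextureLoader : MonoBehaviour
{
    public string texture1URL;   // assumed: set in the Inspector
    public Material mat;         // assumed: the material to texture

    void Start()
    {
        StartCoroutine(LoadTexture());
    }

    IEnumerator LoadTexture()
    {
        // DXT5 keeps the alpha channel; LoadImageIntoTexture fills the
        // placeholder with the downloaded PNG/JPG data.
        Texture2D newTexture = new Texture2D(32, 32, TextureFormat.DXT5, false);
        WWW www = new WWW(texture1URL);
        yield return www;
        if (www.error == null)
        {
            www.LoadImageIntoTexture(newTexture);
            mat.mainTexture = newTexture;
        }
    }
}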

How to resize a Gtk.Image in vala

I'm trying to resize an Image in Vala.
I read the Valadoc and ended up writing this code:
var img = new Gtk.Image.from_file ("fire.png");
var pix_buf = img.get_pixbuf ();
pix_buf.scale_simple (50, 50, InterpType.BILINEAR);
window.add (img);
But it has no effect.
If there is a way to dynamically scale the image so that it fills its container, that would be awesome, but just scaling it would be fine.
Pixbuf.scale_simple does not modify the image in place; it returns a new Pixbuf that has been scaled. Use Image.from_pixbuf to create a new image from the scaled Pixbuf and add that to your window.
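A minimal sketch of that fix (Pixbuf.from_file can throw, hence the try/catch):
try {
    // scale_simple returns a new, scaled Pixbuf; the original is untouched.
    var pix_buf = new Gdk.Pixbuf.from_file ("fire.png");
    var scaled = pix_buf.scale_simple (50, 50, Gdk.InterpType.BILINEAR);
    var img = new Gtk.Image.from_pixbuf (scaled);
    window.add (img);
} catch (Error e) {
    warning ("Could not load image: %s", e.message);
}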

vImageAlphaBlend crashes

I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() to draw those layers into the drawLayer context. While profiling I noticed that CGContextDrawImage() needs 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append(newLayer)
}
My drawLayers routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same and all the layers have the same size, so I don't understand why the last line crashes.
Also I don't see how to use the new convenience functions to create vImage_Buffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get the pointer to inBitmapData. This is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers needed as input to the alpha blend. It's not crashing anymore, and luckily the speed boost is visible in the profiler (vImageAlphaBlend takes only about a third of CGContextDrawImage), but unfortunately the result is a transparent image with pixel failures instead of the white image background.
So far I don't get any runtime errors anymore, but since the result is not as expected I fear I'm still not using the alpha blend function correctly.
vImage_Buffer.data should point to the CFData's data (the pixel bytes), not to the CFDataRef itself.
Also, not all images store their data as four-channel, 8-bit-per-channel data. If it turns out to be three-channel, RGBA, or monochrome, you may get more crashes or funny colors. You have also assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there isn't something you could do to get those to match up a bit better.
IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I believe CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.
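A minimal sketch (in current Swift, unlike the Swift 1.x code above) of the vImageBuffer_InitWithCGImage route, which pins down the channel order and premultiplication instead of guessing at the CGImage's backing store:
import Accelerate

// Ask vImage to convert the CGImage into a buffer whose format we choose:
// 8-bit, 4-channel, premultiplied ARGB in device RGB.
func makeBuffer(from image: CGImage) -> vImage_Buffer? {
    var format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceRGB()),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)

    var buffer = vImage_Buffer()
    let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, image,
                                             vImage_Flags(kvImageNoFlags))
    // buffer.data is allocated by vImage; free(buffer.data) when done.
    return error == kvImageNoError ? buffer : nil
}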