Is it possible to use the image distortion called "Glass Lozenge", which is available as part of Core Image, to distort an image on the iPhone? If you don't know what I mean, just open Core Image Fun House.
I'm developing a tool on a jailbroken device.
I get a frame buffer via IOSurfaceGetBaseAddress, which gives me mirrored data.
I would like to mirror the image back. Reversing it pixel by pixel is too slow, so I want to mirror it with OpenGL ES. Does anyone have sample code for this?
You can render the back side of the texture; the pixels are then reversed horizontally.
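A minimal sketch of that idea, assuming an OpenGL ES 1.1 context is current and a texture named frameTexture already holds the IOSurface pixels (the name is illustrative); instead of literally drawing the back face, it flips the S texture coordinates, which has the same mirroring effect:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, frameTexture);

// Full-screen quad in normalized device coordinates.
static const GLfloat vertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
// S would normally run 0 -> 1 left to right; swapping the values
// mirrors the image horizontally at no extra cost per frame.
static const GLfloat texCoords[] = {
    1.0f, 0.0f,
    0.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 1.0f,
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);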
I am very new to iOS and I am developing a fun painting app for iPhone & iPad in Objective-C.
The app will let you touch the image, and it will then fill all the nearby pixels that have the same colour as the touched pixel with your selected colour (a paint-bucket tool).
I know the flood fill algorithm is what I need, but I am really stuck on how to implement it to fill the area I want.
I also saw that one, but it only has two files and no description; I tried to use it, but I wasn't successful.
All I want is to load an image (like that one) into a UIImageView, and have it fill with colour when I touch the UIImageView.
If you use UIBezierPath objects for drawing, you can use the -fill method to fill the shapes.
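For example, in a view's -drawRect: (a minimal sketch; the oval and colour are arbitrary):

- (void)drawRect:(CGRect)rect
{
    // Build a closed shape and fill it with the selected colour.
    UIBezierPath *shape =
        [UIBezierPath bezierPathWithOvalInRect:CGRectMake(20, 20, 100, 100)];
    [[UIColor redColor] setFill];
    [shape fill];
}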
You can get the byte data of the UIImage by accessing its CGImage.
From this you can find the colour of the pixel that was touched.
Then it's just a case of running a simple flood fill algorithm from the pixel that was touched.
Getting colour of pixel... How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
Flood fill algorithm... http://en.wikipedia.org/wiki/Flood_fill
I'd probably do this myself rather than look for a framework or anything.
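A minimal sketch of a stack-based flood fill over raw RGBA bytes, assuming the bitmap was extracted from the CGImage as 32-bit pixels with no row padding (the function name is illustrative, and the exact-match comparison would need a tolerance to handle "near" colours as the question asks):

#include <stdint.h>
#include <stdlib.h>

typedef struct { int x, y; } FillPoint;

static void FloodFill(uint32_t *pixels, int width, int height,
                      int startX, int startY, uint32_t fillColor)
{
    if (startX < 0 || startX >= width || startY < 0 || startY >= height)
        return;
    uint32_t target = pixels[startY * width + startX];
    if (target == fillColor)
        return; // already the fill colour, nothing to do

    // An explicit stack avoids blowing the call stack on large regions.
    FillPoint *stack = malloc(sizeof(FillPoint) * width * height);
    int top = 0;
    pixels[startY * width + startX] = fillColor; // mark before pushing
    stack[top++] = (FillPoint){ startX, startY };

    while (top > 0) {
        FillPoint p = stack[--top];
        const int dx[4] = { 1, -1, 0, 0 };
        const int dy[4] = { 0, 0, 1, -1 };
        for (int i = 0; i < 4; i++) {
            int nx = p.x + dx[i], ny = p.y + dy[i];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height)
                continue;
            if (pixels[ny * width + nx] != target)
                continue;
            pixels[ny * width + nx] = fillColor; // fill and mark in one step
            stack[top++] = (FillPoint){ nx, ny };
        }
    }
    free(stack);
}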
How do you create an image viewer for iOS that doesn't smooth the image when zooming, like the image viewer in Screenshots Journal?
If you have an NSGraphicsContext in which your drawing takes place, you can call [context setShouldAntialias:NO] on it. (Note that NSGraphicsContext is a Mac class; on iOS the Core Graphics equivalents are CGContextSetShouldAntialias and, for image smoothing specifically, CGContextSetInterpolationQuality.)
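If the image sits in a plain UIImageView inside a zooming UIScrollView, another option is to switch the layer's magnification filter to nearest-neighbour. A minimal sketch (the helper name is illustrative):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

static UIImageView *MakePixelatedImageView(UIImage *image)
{
    UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
    // Nearest-neighbour magnification: pixels stay sharp and blocky
    // when the layer is scaled up, instead of being smoothed.
    imageView.layer.magnificationFilter = kCAFilterNearest;
    return imageView;
}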
I have a question about the UIImagePickerController class reference, in photo-camera mode.
I'm developing an OpenGL game. I need to add photo-camera features to the game: it should open a window (say, 200x200 pixels, in the middle of the screen) that displays a real-time camera preview in front of the GL viewport. So the camera preview must sit in a 200x200 window in front of the GL viewport that displays the game.
I have some questions:
- the main problem is that I'm having difficulty opening the UIImagePickerController window in front of the GL viewport (normally the UIImagePickerController window covers the whole iPhone screen);
- what is the best way to capture an image buffer periodically to perform some operations, like face detection (we have a library that does this on a bitmap image)?
- could Apple reject such an approach? Is it possible to do this with the camera (a camera preview window that partially overlaps an OpenGL viewport)?
- finally, is it possible to avoid showing the camera shutter? I'd like to initialize the camera without the shutter sound and shutter animation.
This is a screenshot:
http://www.powerwolf.it/temp/UIImagePickerController.jpg
If you want to do something more custom than what Apple intended for UIImagePickerController, you'll need to use the AV Foundation framework instead. The camera input can be rendered into a layer or view. Here is an example that will get you halfway there (it is intended for frame capture). You could adapt it for face detection by grabbing sample images with a timer. As long as you use these public APIs, it'll be accepted in the App Store. There are a bunch of augmented-reality applications that use similar techniques.
http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html
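A minimal sketch of that route, assuming it lives in the game's view controller and glView is the view hosting the GL viewport (both names are illustrative); it centres a 200x200 AVCaptureVideoPreviewLayer over the GL view:

#import <AVFoundation/AVFoundation.h>

- (void)startCameraPreviewOverView:(UIView *)glView
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input)
        [session addInput:input];

    AVCaptureVideoPreviewLayer *preview =
        [AVCaptureVideoPreviewLayer layerWithSession:session];
    preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    // Centre a 200x200 preview window over the GL viewport.
    preview.frame = CGRectMake(CGRectGetMidX(glView.bounds) - 100.0f,
                               CGRectGetMidY(glView.bounds) - 100.0f,
                               200.0f, 200.0f);
    [glView.layer addSublayer:preview];

    [session startRunning];
}

There is no shutter sound or iris animation with this approach, since you never trigger a still capture; for face detection you would add an AVCaptureVideoDataOutput and sample its frames, as the linked Q&A shows.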
I am using the image picker controller to get an image from the user. After some operations on the image, I want the user to be able to save it at 1600x1200 px, 1024x1024 px or 640x480 px (something like the iFlashReady app).
The last option is the size of the image I get in the UIImagePickerControllerDelegate method (when using an image from the camera roll).
Is there any way to save the image at these resolutions without pixelating it?
I tried creating a bitmap context with the width and height I want (CGBitmapContextCreate) and drawing the image there, but the image gets pixelated at 1600x1200.
Thanks
This is non-trivial. Your image simply doesn't have enough data. To enlarge it you'll need to resample the image and interpolate between pixels (like Photoshop does when you resize an image).
Most likely you'll want to use a 3rd party library such as:
http://code.google.com/p/simple-iphone-image-processing/
That library performs this and many other image-processing functions.
From faint memories of a computer vision class long ago, I think what you do is blur the image after upscaling.
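A minimal sketch of that idea using Core Image's CIGaussianBlur (available on iOS 6 and later; the helper name and radius are illustrative):

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

static UIImage *BlurredImage(UIImage *source, CGFloat radius)
{
    CIImage *input = [CIImage imageWithCGImage:source.CGImage];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:kCIInputRadiusKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    // Crop to the original extent so the soft edges don't grow the image.
    CGImageRef cgImage = [context createCGImage:blur.outputImage
                                       fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}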
Before drawing try adjusting your CGBitmapContext's antialiasing and/or interpolation quality:
CGContextSetShouldAntialias( context, true ) ;
CGContextSetInterpolationQuality( context, kCGInterpolationHigh ) ;
If I remember right, antialiasing is turned off on CGContext by default.
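Putting it together, a minimal sketch of drawing the source image into a larger bitmap context with those settings applied (the function name is illustrative):

#import <UIKit/UIKit.h>

static UIImage *UpscaledImage(UIImage *source, CGSize targetSize)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context =
        CGBitmapContextCreate(NULL,
                              (size_t)targetSize.width,
                              (size_t)targetSize.height,
                              8,      // bits per component
                              0,      // let CG choose the row stride
                              colorSpace,
                              kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context)
        return nil;

    // The two calls from above: smooth edges, high-quality resampling.
    CGContextSetShouldAntialias(context, true);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    CGContextDrawImage(context,
                       CGRectMake(0, 0, targetSize.width, targetSize.height),
                       source.CGImage);

    CGImageRef scaled = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *result = [UIImage imageWithCGImage:scaled];
    CGImageRelease(scaled);
    return result;
}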