I'm working with an image processing tool in MATLAB. How can I convert MATLAB code to Objective-C?
Here are some of the tasks I want to do:
I want to rotate an oblique line back to the normal (upright) orientation.
I have an algorithm that converts a color image to black & white. (How can I access pixel color values in Objective-C?)
In the function getrgbfromimage, how can I print the pixel values to the console?
How can I run a function (getrotatedımage) on each element of an array?
Quartz 2D (aka. Core Graphics) is the 2D drawing API in iOS. Quartz will, most likely, do everything you're looking for. I recommend checking out the Quartz 2D Programming Guide in the documentation. For your specific requests check out these sections:
Colors and Color Spaces - for color and b&w images
Transforms - for rotating or performing any affine transform (see the rotation sketch just after this list)
Bitmap Images and Image Masks - for information on the underlying image data
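For example, rotating a UIImage comes down to applying a rotation transform to a drawing context. A minimal sketch, assuming sourceImage and angle (in radians) are values you supply; this simple version keeps the original canvas size, so corners can be clipped:
CGSize size = sourceImage.size;
UIGraphicsBeginImageContext(size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Rotate around the image center, not the top-left corner.
CGContextTranslateCTM(ctx, size.width / 2, size.height / 2);
CGContextRotateCTM(ctx, angle);
CGContextTranslateCTM(ctx, -size.width / 2, -size.height / 2);
[sourceImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *rotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();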
As for running a function on each element of an array, you can use the block iteration API (as long as your app can require iOS 4.0 or higher). An example:
[myArray enumerateObjectsUsingBlock:^(id item, NSUInteger index, BOOL *stop) {
doSomethingWith(item);
}];
If you just want to call a method on each item in the array, there is also:
[myArray makeObjectsPerformSelector:@selector(doSomething)];
CGBitmapContext will get pixel values from an image.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CGBitmapContext/Reference/reference.html
This question has some demo code:
Retrieving a pixel alpha value for a UIImage (MonoTouch)
printf will dump the RGB values to the console.
NSArray or NSMutableArray will hold your images and a simple for loop will let you iterate through them.
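To make that concrete, here is a hedged sketch that draws a UIImage into a bitmap context, reads the raw RGBA bytes, and dumps them with printf (image is a placeholder for whatever UIImage you are working with):
CGImageRef cgImage = image.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
unsigned char *pixels = calloc(width * height * 4, 1);   // RGBA, one byte per channel
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                             width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        unsigned char *p = pixels + (y * width + x) * 4;
        printf("(%zu,%zu) R=%d G=%d B=%d\n", x, y, p[0], p[1], p[2]);
    }
}
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);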
I am creating a mobile painting application. I have two textures (Texture2D): a template image and a color map for it.
This color map contains a unique color for each region of the template where the player can draw.
I need to have several other textures, one texture per each unique color in the color map.
For now I am trying to use GetPixels on the color map and checking every pixel against a dictionary:
If the pixel's color is not yet a key in the dictionary, create a new texture and SetPixel at that coordinate.
If the color is already a key, fetch the texture stored under that key and SetPixel on it at that coordinate.
But when I run this, even my computer lags badly, never mind a mobile device.
Is there a more efficient way?
To help you visualize the issue, I have attached the color map - the texture I need to split.
I don't see a magically fast way to do it, but here are a few tips that may help:
Try using GetPixels32 (and SetPixels32) instead of plain GetPixels - the return value is not Color but Color32, which uses bytes instead of floating-point values, so it should be faster. See http://docs.unity3d.com/ScriptReference/Texture2D.SetPixels32.html and http://docs.unity3d.com/ScriptReference/Texture2D.GetPixels32.html
Do not call SetPixel for each pixel; that is really slow. Instead, create a temporary Color32 array for each color and work with it, and only at the end assign all the arrays to new textures using SetPixels32.
Don't use a foreach loop, Array.ForEach, or LINQ to walk the colors array - a simple for loop is the fastest option (see the sketch after this list).
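Putting those tips together, here is a rough, untested C# sketch (ColorMapSplitter and Split are illustrative names; the color map must have Read/Write enabled in its import settings):
using System.Collections.Generic;
using UnityEngine;

public static class ColorMapSplitter
{
    public static List<Texture2D> Split(Texture2D colorMap)
    {
        Color32[] src = colorMap.GetPixels32();   // one bulk read, no per-pixel GetPixel
        var buffers = new Dictionary<int, Color32[]>();

        for (int i = 0; i < src.Length; i++)      // plain for loop
        {
            Color32 c = src[i];
            int key = (c.r << 24) | (c.g << 16) | (c.b << 8) | c.a;
            Color32[] buf;
            if (!buffers.TryGetValue(key, out buf))
            {
                buf = new Color32[src.Length];    // defaults to transparent black
                buffers.Add(key, buf);
            }
            buf[i] = c;                           // copy the pixel into its own layer
        }

        // Touch the GPU only once per unique color.
        var result = new List<Texture2D>();
        foreach (Color32[] buf in buffers.Values)
        {
            var tex = new Texture2D(colorMap.width, colorMap.height, TextureFormat.RGBA32, false);
            tex.SetPixels32(buf);
            tex.Apply();
            result.Add(tex);
        }
        return result;
    }
}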
Hope this helps.
Now there is a faster way to do this, which is using Texture2D.GetRawTextureData() and Texture2D.LoadRawTextureData().
In my iOS project, I have a CGImage in RGB that I'd like to binarize (convert to black and white). I would like to use OpenCV to do this, but I'm new to OpenCV. I found a book on OpenCV, but it was not for iPhone.
How can I binarize such an image using OpenCV on iOS?
If you don't want to set up OpenCV in your iOS project, my open source GPUImage framework has two threshold filters within it for binarization of images, a simple threshold and an adaptive one based on local luminance near a pixel.
You can apply a simple threshold to an image and then extract a resulting binarized UIImage using code like the following:
UIImage *inputImage = [UIImage imageNamed:@"inputimage.png"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
thresholdFilter.threshold = 0.5; // luminance cutoff, in the range 0.0-1.0
UIImage *quantizedImage = [thresholdFilter imageByFilteringImage:inputImage];
(release the above filter if not using ARC in your application)
If you wish to display this image to the screen instead, you can send the thresholded output to a GPUImageView. You can also process live video with these filters, if you wish, because they are run entirely on the GPU.
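For on-screen display, the wiring looks roughly like this (a hedged sketch; filterView stands for a GPUImageView you have added to your interface, reusing inputImage and thresholdFilter from above):
GPUImagePicture *sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
[sourcePicture addTarget:thresholdFilter];
[thresholdFilter addTarget:filterView];
[sourcePicture processImage];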
Take a look at cv::threshold() and pass thresholdType as cv::THRESH_BINARY:
double cv::threshold(const cv::Mat& src,
cv::Mat& dst,
double thresh,
double maxVal,
int thresholdType)
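For instance, a minimal usage sketch with the C++ interface, assuming rgbImage is a cv::Mat already holding your image data:
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat gray, binary;
cv::cvtColor(rgbImage, gray, CV_RGB2GRAY);        // threshold expects one channel
cv::threshold(gray, binary, 128.0, 255.0, cv::THRESH_BINARY);
// Pixels brighter than 128 become 255 (white); the rest become 0 (black).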
There is also an example that uses the C interface of OpenCV to convert an image to black & white.
What you want to do is remove the low rate of change and keep the high rate of change; this is a high-pass filter. I only have experience with audio signal processing, so I don't really know what options are available to you, but that is the direction I would look in.
I am stuck on a feature in my application. I want a cropping feature similar to CamScanner's cropping.
These are the CamScanner screens:
I have created a similar crop view, and I have obtained the CGPoint values of the four corners.
But how can I obtain the cropped image when the selection is slanted (a non-rectangular quadrilateral)?
Please provide me some suggestions if possible.
This is a perspective transform problem; in this case you are plotting a 3D projection onto a 2D plane.
The selection corners in the first image form a quadrilateral, and when you transform it into a rectangle you will either need to add pixel information (interpolation) or remove some pixels.
So the actual problem is to add the additional pixel information to the cropped image and project it to generate the second image. It can be implemented in various ways:
- you can implement it yourself by applying a perspective transformation matrix with interpolation;
- you can use OpenGL;
- you can use OpenCV;
... and there are many more ways to implement it.
I solved this problem using OpenCV. The following OpenCV functions will help you achieve this:
cvGetPerspectiveTransform
cvWarpPerspective
The first function calculates the transformation matrix from source and destination projection coordinates. In your case the src array will hold the CGPoint values of the four corners, and dst will hold the rectangular projection points, for example {(0,0), (200,0), (200,150), (0,150)}.
Once you have the transformation matrix, pass it to the second function. You can also visit this thread.
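A rough sketch with the old C interface (untested; input and output are IplImage pointers you would create, and the corner values are placeholders):
#include <opencv/cv.h>

CvPoint2D32f src[4];   // fill with your four CGPoint corner values
CvPoint2D32f dst[4] = { {0, 0}, {200, 0}, {200, 150}, {0, 150} };
CvMat *map = cvCreateMat(3, 3, CV_32FC1);
cvGetPerspectiveTransform(src, dst, map);   // compute the 3x3 matrix
// Warp the slanted quad in `input` into the upright `output` image.
cvWarpPerspective(input, output, map,
                  CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
cvReleaseMat(&map);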
There may be a few alternatives to the OpenCV library, but it has a good collection of image processing algorithms.
An iOS application using the OpenCV library is available at eosgarden.
I see two possibilities. The first is to calculate a transformation matrix that slants the image and install it in the CATransform3D property of your view's layer.
That would be simple, assuming you knew how to form the transformation matrix that does the stretching. I've never learned how to construct transformation matrices that stretch or skew images, so I can't be of much help there. I'd suggest googling transformation matrices and stretching/skewing.
The other way would be to turn the part of the image you are cropping into an OpenGL texture and map that texture onto your output. The actual texture drawing would be easy, but there are about 1000 kilos of OpenGL setup to do, and a whole lot of learning required to get anything done at all. If you want to pursue that route, I'd suggest searching for simple 2D texture examples using the new iOS 5 GLKit.
Use the code given at this link: http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Instead of using CGRect and CGContextClipToRect, try using CGContextEOClip or CGContextClosePath.
Though I haven't tried this, I have tried drawing a closed path using CGContextClosePath in the touchesBegan, touchesMoved, and touchesEnded events.
Hope this gives more insight into your problem...
I want to use bitmap images as a "map" for levels in an iPhone game. Basically, it's all about the locations of obstacles in a rectangular world. The obstacles are color-coded: where there is a white pixel there's no obstacle, and a black pixel means there is one at that point.
Now I need to use this data for two things: (a) displaying the level map, and (b) in-game calculations. So, in general, I need a way to read the data from the bitmap and build a matrix-like data structure from it - both to overlay the bitmap onto the level graphics and to calculate collisions and the like.
How should I do it? Is there an easy way to read the data from an image? And what's the best format to keep the images in for this?
Have you looked at how Texture2D translates an image file into an OpenGL texture?
Tip: take a look at this method in Texture2D.m:
- (id) initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum) filter
In 3D apps it's quite common to use this kind of representation for height maps. In a height map you use a texture with colors ranging from black to white (white represents the maximum altitude).
For example, a flat grayscale height-map image can be rendered as a 3D terrain surface.
That was just to tell you that your representation is not that crazy :).
About reading the bitmap, I would also recommend reading this (just in case you want to go deeper).
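If you prefer staying in Quartz, here is a hedged sketch of reading a black & white map image into an obstacle matrix (mapImage is a placeholder UIImage; dark pixels are treated as obstacles):
CGImageRef cgMap = mapImage.CGImage;
size_t w = CGImageGetWidth(cgMap), h = CGImageGetHeight(cgMap);
unsigned char *buf = calloc(w * h * 4, 1);   // RGBA bytes
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buf, w, h, 8, w * 4, cs,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cgMap);
BOOL *obstacle = malloc(w * h * sizeof(BOOL));
for (size_t i = 0; i < w * h; i++)
    obstacle[i] = (buf[i * 4] < 128);   // obstacle at column i % w, row i / w
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(buf);
// Use `obstacle` for collision checks; free() it when you are done.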
Hope I helped a bit!
I'm trying to draw an image using OpenGL in a project for iPad.
The image data:
A data blob of UInt8 that represents the grayscale value for each pixel in three dimensions (I'm going to draw slices from the 3D-body). I also have information on height and width for the image.
My current (unsuccessful) approach is to use it as a texture on a square, and I am looking at some example code I found on the net. That code, however, loads an image file from disk.
While setting up the view there is a call to CGContextDrawImage, and the last parameter is supposed to be a CGImageRef. Do you know how I can create one from my data, or is this a dead end?
Thankful for all input. I really haven't gotten the grip of OpenGL yet so please be gentle :-)
It's not a dead end.
You can create a CGImageRef from a blob of pixel memory by using CGBitmapContextCreate() to create a bitmap context and CGBitmapContextCreateImage() to create the image ref from that context.
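A minimal sketch of that path, assuming pixels points to width * height bytes of 8-bit grayscale data (one slice of your 3D blob):
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                         8,       // bits per component
                                         width,   // bytes per row
                                         gray, kCGImageAlphaNone);
CGImageRef image = CGBitmapContextCreateImage(ctx);
// Use `image`, e.g. with CGContextDrawImage or as a texture source, then:
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(gray);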