Copying a portion of an IplImage into another IplImage (that is the same size as the source)

I have a set of mask images that I need to use every time I recognise a previously-known scene on my camera. All the mask images are in IplImage format. There will be instances where, for example, the camera has panned to a slightly different but nearby location. This means that if I do template matching somewhere in the middle of the current scene, I will be able to recognise the scene with some amount of shift of the template within it. All I need to do is use those shifts to adjust the mask image ROIs so that they can be overlaid appropriately based on the template matching. I know that there are functions such as:
cvSetImageROI(IplImage* img, CvRect roi);
cvResetImageROI(IplImage* img);
which I can use to crop/uncrop my image. However, it didn't work for me quite the way I expected. I would really appreciate it if someone could suggest an alternative, point out what I am doing wrong, or even suggest something I haven't thought of!
I must also point out that I need to keep the image size the same at all times. The only thing that will be different is the actual area of interest in the image. I can probably use zero/one padding to cover the unused areas.

I believe a solution without making too many copies of the original image would be:
// Make a new IplImage of the same size as the source
IplImage* img_src_cpy = cvCreateImage(cvGetSize(img_src), img_src->depth, img_src->nChannels);
cvSetZero(img_src_cpy); // zero-pad the area outside the copied ROI
// Copy the ROI into the top-left corner without changing the source ROI
// (single-channel image assumed)
for(int rows = roi.y; rows < roi.y + roi.height; rows++) {
    for(int cols = roi.x; cols < roi.x + roi.width; cols++) {
        img_src_cpy->imageData[(rows-roi.y)*img_src_cpy->widthStep + (cols-roi.x)] =
            img_src->imageData[rows*img_src->widthStep + cols];
    }
}
// Now copy everything to the original image OR simply return the new image if calling from a function
cvCopy(img_src_cpy, img_src); // OR return img_src_cpy;
I tried the code out and it is also fast enough for me (it executes in about 1 ms for a 332 x 332 greyscale image).
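For comparison, here is a rough sketch of the same operation using OpenCV's own ROI mechanism instead of manual indexing. The names img_mask, dx and dy are assumptions: the stored mask and the shift obtained from template matching (with |dx| < width and |dy| < height). Since cvCopy only requires the two ROIs to have equal size, this keeps the image size constant and zero-pads the vacated area:
#include <opencv/cv.h> // OpenCV 1.x C API (IplImage era)
#include <cstdlib>     // abs
// Shift img_mask by (dx, dy) into a new, same-sized, zero-padded image.
IplImage* shift_mask(IplImage* img_mask, int dx, int dy)
{
    int sx = dx > 0 ? 0 : -dx, sy = dy > 0 ? 0 : -dy; // source offset
    int tx = dx > 0 ? dx : 0,  ty = dy > 0 ? dy : 0;  // target offset
    int w  = img_mask->width  - abs(dx);              // overlapping width
    int h  = img_mask->height - abs(dy);              // overlapping height
    IplImage* shifted = cvCreateImage(cvGetSize(img_mask),
                                      img_mask->depth, img_mask->nChannels);
    cvSetZero(shifted);                               // zero-pad unused areas
    cvSetImageROI(img_mask, cvRect(sx, sy, w, h));
    cvSetImageROI(shifted,  cvRect(tx, ty, w, h));
    cvCopy(img_mask, shifted);                        // ROIs are equal-sized
    cvResetImageROI(img_mask);
    cvResetImageROI(shifted);
    return shifted;
}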

Related

Ellipsoid fitting for 3D data in MATLAB

I am working on a 3D volume of CT lung images. In order to detect nodules I need to fit an ellipsoid model to each suspected nodule. How can I write code for that?
A nodule is an object suspected to be a tumor. My algorithm needs to check every object, approximate it with an ellipsoid, and from the ellipsoid parameters calculate 8 features to build a classifier which detects whether it is a nodule or not through training and testing data, so I need to fit such an ellipsoid.
Here is one slice of the volume CT lung image.
Here is another slice of the same volume, but it contains a nodule (the yellow circle marks it), so I need my code to check every shape and determine whether it is a nodule or not.
As we do not have a 3D dataset at our disposal, I will start with 2D.
So first we need to select the lungs, so we do not count any objects other than those inside them. As the image is grayscale we first need to binarize it somehow. I use my own picture class for DIP, and this will make heavy use of my growthfill, so I strongly recommend first reading this:
Fracture detection in hand using image processing
where you will find all the explanations you need. Now for your task I would:
turn RGBA to grayscale <0,765>
I just compute the intensity i = R+G+B; as the channels of a 24-bit image are 8 bits each, the result is up to 3*255 = 765. As the input image was compressed by JPEG, there are color distortions and noise in the image, so do not forget about that.
crop out the white border
Just cast rays (scan lines) from the middle of each outer border line towards the center and stop when a non-white-ish pixel is hit. I threshold with 700 instead of 765 to compensate for the noise in the image. Now you have the bounding box of the usable image, so crop away the rest.
compute histogram
To compensate for distortions in the image, smooth the histogram enough to remove all unwanted noise and gaps. Then find the local maximum from the left and from the right (red). These will be used for the binarisation threshold (the middle between them, green). This is my final result:
binarise image
Just threshold the image against the green intensity from the histogram. So if i0, i1 are the local maximum intensities from the left and right of the histogram, then threshold against (i0+i1)/2. This is the result:
remove everything except lungs
That is easy: just fill the black from the outside with some predefined background color, then in the same way recolor all the white stuff neighboring the background color. That will remove the human surface, skeleton, organs and the CT machine, leaving just the lungs. Now recolor the remaining black with some predefined lungs color.
There should be no black left, and the remaining white areas are the possible nodules.
process all remaining white pixels
So just loop through the image, and on the first white pixel hit, flood fill it with a predefined nodule color or a distinct object index for later use. I also distinguish between the surface (aqua) and the inside (magenta). This is the result:
Now you can compute your features per nodule. If you code a custom flood fill for this, then you can obtain from it directly things like:
Volume in [pixels]
Surface area in [pixels]
Bounding box
Position (relative to Lungs)
Center of gravity or centroid
All of these can be used as your feature variables and also help with the fitting.
fit the found surface points
There are many methods for this, but I would simplify it as much as possible to improve performance and accuracy. For example, you can use the centroid as your ellipsoid center. Then find the minimum and maximum distant points from it and use them as semi-axes (+/- some orthogonality corrections), and then just search around these initial values. For more info see:
How approximation search works
You will find examples of use in linked Q/As there.
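To give a flavor of the idea, here is a minimal, hypothetical 1D approximation-search loop (a sketch, not the linked class): it scans a parameter range in coarse steps, keeps the best value, then halves the step and re-scans around the best value until the step is small enough.
#include <cfloat>
// Minimal 1D approximation search: finds the r in [r0, r1] minimizing cost(r).
// 'cost' is an assumed callback; for ellipsoid fitting it would measure how
// badly an ellipsoid with semi-axis r fits the surface points.
double approx_search(double r0, double r1, double step, double min_step,
                     double (*cost)(double))
{
    double best = r0, best_e = DBL_MAX;
    for (; step >= min_step; step *= 0.5)       // refine the step each pass
    {
        for (double r = r0; r <= r1; r += step) // scan the current range
        {
            double e = cost(r);
            if (e < best_e) { best_e = e; best = r; }
        }
        r0 = best - step;                       // zoom the range in around
        r1 = best + step;                       // the best value found so far
    }
    return best;
}
The same loop can be nested per parameter (center, semi-axes, orientation) at the cost of more cost-function evaluations.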
[Notes]
All of the bullets are applicable in 3D. While constructing a custom flood fill, be careful with the recursion depth. Too much info will overflow your stack very quickly and also slow things down considerably. Here is a small example of how I deal with it, with a few custom return parameters, plus the growthfill I used:
//---------------------------------------------------------------------------
void growfill(DWORD c0,DWORD c1,DWORD c2); // grow/flood fill c0 neighbouring c1 with c2
void floodfill(int x,int y,DWORD c); // flood fill from (x,y) with color c
DWORD _floodfill_c0,_floodfill_c1; // recursion filled color and fill color
int _floodfill_x0,_floodfill_x1,_floodfill_n; // recursion bounding box and filled pixel count
int _floodfill_y0,_floodfill_y1;
void _floodfill(int x,int y); // recursion for floodfill
//---------------------------------------------------------------------------
void picture::growfill(DWORD c0,DWORD c1,DWORD c2)
    {
    int x,y,e;
    for (e=1;e;)
     for (e=0,y=1;y<ys-1;y++)
      for (    x=1;x<xs-1;x++)
       if (p[y][x].dd==c0)
        if ((p[y-1][x].dd==c1)
          ||(p[y+1][x].dd==c1)
          ||(p[y][x-1].dd==c1)
          ||(p[y][x+1].dd==c1)) { e=1; p[y][x].dd=c2; }
    }
//---------------------------------------------------------------------------
void picture::_floodfill(int x,int y)
    {
    if (p[y][x].dd!=_floodfill_c0) return;
    p[y][x].dd=_floodfill_c1;
    _floodfill_n++;
    if (_floodfill_x0>x) _floodfill_x0=x;
    if (_floodfill_y0>y) _floodfill_y0=y;
    if (_floodfill_x1<x) _floodfill_x1=x;
    if (_floodfill_y1<y) _floodfill_y1=y;
    if (x>   0) _floodfill(x-1,y);
    if (x<xs-1) _floodfill(x+1,y);
    if (y>   0) _floodfill(x,y-1);
    if (y<ys-1) _floodfill(x,y+1);
    }
void picture::floodfill(int x,int y,DWORD c)
    {
    if ((x<0)||(x>=xs)||(y<0)||(y>=ys)) return;
    _floodfill_c0=p[y][x].dd;
    _floodfill_c1=c;
    _floodfill_n=0;
    _floodfill_x0=x;
    _floodfill_y0=y;
    _floodfill_x1=x;
    _floodfill_y1=y;
    _floodfill(x,y);
    }
//---------------------------------------------------------------------------
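If the recursive version still overflows on large regions, a common alternative is to replace the recursion with an explicit stack. A minimal sketch of that idea, written as a hypothetical extra member function (assuming the same p[y][x].dd pixel access and xs, ys dimensions as above):
#include <vector>
// Non-recursive flood fill using an explicit stack, so large regions
// cannot overflow the call stack.
void picture::floodfill_iterative(int x,int y,DWORD c)
    {
    if ((x<0)||(x>=xs)||(y<0)||(y>=ys)) return;
    DWORD c0=p[y][x].dd;                // color to replace
    if (c0==c) return;                  // avoid endless re-pushing
    std::vector<int> stk;               // stores x,y pairs
    stk.push_back(x); stk.push_back(y);
    while (!stk.empty())
        {
        y=stk.back(); stk.pop_back();
        x=stk.back(); stk.pop_back();
        if (p[y][x].dd!=c0) continue;   // already filled or different color
        p[y][x].dd=c;
        if (x>   0) { stk.push_back(x-1); stk.push_back(y); }
        if (x<xs-1) { stk.push_back(x+1); stk.push_back(y); }
        if (y>   0) { stk.push_back(x); stk.push_back(y-1); }
        if (y<ys-1) { stk.push_back(x); stk.push_back(y+1); }
        }
    }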
And here is the C++ code I produced the example images with:
picture pic0,pic1;
// pic0 - source img
// pic1 - output img
int x0,y0,x1,y1,x,y,i,hist[766];
color c;
// copy source image to output
pic1=pic0;
pic1.pixel_format(_pf_u); // grayscale <0,765>
// 0xAARRGGBB
const DWORD col_backg=0x00202020; // gray
const DWORD col_lungs=0x00000040; // blue
const DWORD col_out =0x0000FFFF; // aqua nodule surface
const DWORD col_in =0x00800080; // magenta nodule inside
const DWORD col_test =0x00008040; // green-ish distinct color just for safe recoloring
// [remove white background]
// find white background area (by casting rays from the middle of each border into the center of the image until a non-white pixel is hit)
for (x0=0 ,y=pic1.ys>>1;x0<pic1.xs;x0++) if (pic1.p[y][x0].dd<700) break;
for (x1=pic1.xs-1,y=pic1.ys>>1;x1> 0;x1--) if (pic1.p[y][x1].dd<700) break;
for (y0=0 ,x=pic1.xs>>1;y0<pic1.ys;y0++) if (pic1.p[y0][x].dd<700) break;
for (y1=pic1.ys-1,x=pic1.xs>>1;y1> 0;y1--) if (pic1.p[y1][x].dd<700) break;
// crop it away
pic1.bmp->Canvas->Draw(-x0,-y0,pic1.bmp);
pic1.resize(x1-x0+1,y1-y0+1);
// [prepare data]
// raw histogram
for (i=0;i<766;i++) hist[i]=0;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
hist[pic1.p[y][x].dd]++;
// smooth histogram a lot (remove noise and fill gaps due to compression and scanning nature of the image)
for (x=0;x<100;x++)
{
for (i=0;i<765;i++) hist[i]=(hist[i]+hist[i+1])>>1;
for (i=765;i>0;i--) hist[i]=(hist[i]+hist[i-1])>>1;
}
// find peaks in histogram (for thresholding)
for (x=0,x0=x,y0=hist[x];x<766;x++)
{
y=hist[x];
if (y0<y) { x0=x; y0=y; }
if (y<y0>>1) break;
}
for (x=765,x1=x,y1=hist[x];x>=0;x--)
{
y=hist[x];
if (y1<y) { x1=x; y1=y; }
if (y<y1>>1) break;
}
// binarize image (thresholding)
i=(x0+x1)>>1; // threshold with the middle intensity between peaks
pic1.pf=_pf_rgba; // result will be RGBA
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].dd>=i) pic1.p[y][x].dd=0x00FFFFFF;
else pic1.p[y][x].dd=0x00000000;
pic1.save("out0.png");
// recolor outer background
for (x=0;x<pic1.xs;x++) pic1.p[ 0][x].dd=col_backg; // render rectangle along outer border so the filling starts from there
for (x=0;x<pic1.xs;x++) pic1.p[pic1.ys-1][x].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][ 0].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][pic1.xs-1].dd=col_backg;
pic1.growfill(0x00000000,col_backg,col_backg); // fill its black content outside in
// recolor white human surface and CT machine
pic1.growfill(0x00FFFFFF,col_backg,col_backg);
// recolor Lungs dark matter
pic1.growfill(0x00000000,col_backg,col_lungs); // outer border
pic1.growfill(0x00000000,col_lungs,col_lungs); // fill its black content outside in
pic1.save("out1.png");
// find/recolor individual nodules
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
pic1.floodfill(x,y,col_test);
pic1.growfill(col_lungs,col_test,col_out);
pic1.floodfill(x,y,col_in);
}
pic1.save("out2.png");
// render histogram
for (x=0;(x<766)&&(x>>1<pic1.xs);x++)
for (y=0;(y<=hist[x]>>6)&&(y<pic1.ys);y++)
pic1.p[pic1.ys-1-y][x>>1].dd=0x000040FF;
for (x=x0 ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=x1 ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=(x0+x1)>>1,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x0000FF00;
You may be interested in a recent plugin that we developed for the open-source software Icy: http://icy.bioimageanalysis.org/
The plugin is called FitEllipsoid; it allows rapidly fitting an ellipsoid to the image contents by first clicking on a few points in the orthogonal views.
A tutorial is available here: https://www.youtube.com/watch?v=MjotgTZi6RQ
Also note that we provide MATLAB and Java source code on GitHub (but I cannot link to it since this is my first post on the website).

How do I convert the whole image to grayscale except for a sub image which should be in color?

I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1);%grayscaling both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray1,gray2);%normalized cross correlation
[max_c,imax]=max(abs(matchingval(:)));
After this I am stuck. I have no idea how to convert the whole image to grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know the coordinates of the subimage within your image, you can always just use rgb2gray on the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied the rgb2gray function to just that. I did it three times, as the output of rgb2gray is a 2D matrix and a color image is a 3D matrix, so I needed to change it layer by layer.
This worked for me, making only part of the image gray but leaving the rest in color.
One issue you might have, though, is that a color image has 3 dimensions while a grayscale image needs only 2, so combining them means the grayscale part must be stored in a 3D matrix.
Depending on what you want to do, this technique may or may not help.
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
For RGB images, grayscale images have the RGB components to all be the same. Duplicating this image in the third dimension three times creates the RGB version of the grayscale image. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place that into the equivalent RGB grayscale image.
As such, given your colour image that is stored in img and your subimage stored in imgsub, and specifying the rows and columns of where you extracted the subimage in row1,col1 and row2,col2 - with row1,col1 being the top left corner of the subimage and row2,col2 is the bottom right corner, do this:
img_gray = rgb2gray(img);
img_gray = cat(3, img_gray, img_gray, img_gray);
img_gray(row1:row2, col1:col2,:) = imgsub;
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
Running the above code, this is the image we get:
I took the same example as rayryeng's answer and tried to solve it using HSV conversion.
The basic idea is to set the second layer, i.e. the saturation layer, to 0 (so that the whole image becomes grayscale), then rewrite that layer with the original saturation values only within the subimage area, so that it alone has saturation.
Code:
img = imread('onion.png');
img = rgb2hsv(img);
sPlane = zeros(size(img(:,:,1)));
sPlane(50:90,80:150) = img(50:90,80:150,2);
img(:,:,2) = sPlane;
img = hsv2rgb(img);
imshow(img);
Output: (Same as rayryeng's output)
Related Answer with more details here

Drawing a resizeable box on an image

I'm working on a GUI using GUIDE. It loads an image and has the user draw an ROI around a point (the particle ROI). I would then like to have two sliders for creating a second ROI (the scan ROI), where the user can use the sliders to set the width and height of the second ROI and see it updated on the image. The sliders seem to work OK, but my GUI keeps drawing a new ROI on top of the image, so it gets messy looking really fast. I would like to remove the user-sizeable ROI from the image before redrawing it (while still keeping the original particle ROI on the image). I currently do it the following way.
Inside the callback for the Set ROI Size button (this is for the particle ROI):
handles=guidata(hObject);
particleroiSize=imrect; % draw a rectangle around the particle to get a measure of ROI size
roiPoints=getPosition(particleroiSize); % get the parameters of the rectangle
partX1 = round(roiPoints(1));
partY1 = round(roiPoints(2));
partX2 = round(partX1 + roiPoints(3));
partY2 = round(partY1 + roiPoints(4)); % these are the ROi positions in pixels
roiWidth = round(roiPoints(3)); % these are just the ROI width and height
roiHeight = round(roiPoints(4)); % (getPosition returns [xmin ymin width height])
handles=guidata(hObject); % update all the handles...
handles.partX1=partX1;
handles.partX2=partX2;
handles.partY1=partY1;
handles.partY2=partY2;
handles.roicenterX = (partX1 + round(roiPoints(3))/2);
handles.roicenterY= (partY1 + round(roiPoints(4))/2);
handles.roiHeight = roiHeight;
handles.roiWidth = roiWidth;
current_slice = round(get(handles.Image_Slider,'Value'));
particleImage=handles.Image_Sequence_Data(partY1:partY2,partX1:partX2,current_slice);
handles.particleImage=particleImage;
set(handles.RoiSizeDisplay,'String',strcat('Particle ROI is ',' ',num2str(roiHeight),' ', ' by ',num2str(roiWidth)) );
guidata(hObject,handles);
And then inside the callbacks for the sliders that set the scan ROI size I have (this is inside two different sliders; one adjusts the width and one the height):
handles=guidata(hObject);
try
delete(handles.ScanArea);
% plus any cleanup code you want
catch
end
WidthValue = get(handles.ScanAreaSliderWidth,'value');
HeightValue = get(handles.ScanAreaSliderHeight,'value');
set(handles.ScanAreaWidthDisplay,'String',strcat('Scan Area Width is ',' ', num2str(WidthValue))); % sets the display..now to do the drawing...
%h = imrect(hparent, position);
%position = [Xmin Ymin Width Heigth];
position = [ round(handles.roicenterX-WidthValue/2) round(handles.roicenterY-HeightValue/2) WidthValue HeightValue];
handles.ScanArea = imrect(handles.Image_Sequence_Plot,position);
%h = imrect(hparent, position)
handles=guidata(hObject);
guidata(hObject, handles);
But it never deletes the scan area ROI and keeps redrawing over it. I thought the try...catch would work, but it doesn't seem to. Am I making extra copies of the ROI or something? Please help.
Thanks.
If you need to delete the ROI drawn with imrect, you can use findobj to look for rectangle objects (which are of type "hggroup") and delete them:
hfindROI = findobj(gca,'Type','hggroup');
delete(hfindROI)
and that should do it. Since you first draw particleroiSize, which is of the hggroup type as well, you might not want to delete all the outputs from the call to findobj. If there are multiple rectangles in your current axes, then hfindROI will contain multiple entries. As such, you might want to delete all of them except the one corresponding to particleroiSize.
I hope i got your question right. If not please ask for clarifications!
Thanks. This worked perfectly except that I had to use
hfindROI = findobj(handles.Image_Sequence_Plot,'Type','hggroup');
delete(hfindROI(1:end-1))
to get rid of everything but the first ROI, so I guess the hggroup objects are added at the start? (I thought I would use delete(hfindROI(2:end)) to delete all but the first.) Also, why does hfindROI return a list of numbers? Do they represent the hggroup objects or something like that?
thanks..

Trim alpha from CGImage

I need to get a trimmed CGImage. I have an image which has empty space (alpha = 0) around some colors and need to trim it to get the size of only the visible colors.
Thanks.
There are three ways of doing this:
1) Use Photoshop (or an image editor of choice) to edit the image - I assume you can't do this; it's too obvious an answer!
2) Ignore it - why not just ignore it and draw the image at its full size? It's transparent, so the user will never notice.
3) Write some code that goes through each pixel in the image until it gets to one that has an alpha value > 0. This gives you the number of rows to trim from the top. However, this will slow down your UI, so you might want to do it on a background thread.
e.g.
// To get the number of transparent rows at the top of the image
// (assumes 32-bit pixels with the alpha component in the low byte)
// Sorry this code is so ugly
uint32_t *p = start_of_image;
while ( p < end_of_image_data && 0 == (*p & 0x000000ff) ) ++p;
unsigned int number_of_transparent_rows_at_top = (p - start_of_image) / width_of_image;
When you know the amount of transparent space around the image, you can draw it using a UIImageView, set the contentMode to center, and let it do the trimming for you :)
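If you do want an actual trimmed CGImage rather than a drawing trick, a rough sketch of the full approach is to scan for the visible bounding box and crop with CGImageCreateWithImageInRect. The pixels/width/height parameters are assumptions: they describe the raw buffer (e.g. obtained via CGDataProviderCopyData), again with alpha in the low byte and no row padding:
#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>
// Scan all pixels for alpha > 0 to find the visible bounding box, then crop.
// Assumes bytesPerRow == width * 4 (no row padding).
CGImageRef trimmed_copy(CGImageRef image, const uint32_t *pixels,
                        size_t width, size_t height)
{
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++)
        for (size_t x = 0; x < width; x++)
            if (pixels[y * width + x] & 0x000000ff) { // alpha in low byte
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
    if (minX > maxX) return NULL;                     // fully transparent
    CGRect r = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    return CGImageCreateWithImageInRect(image, r);    // caller must release
}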

AVFoundation buffer comparison to a saved image

I am a long-time reader, first-time poster on Stack Overflow, and must say it has been a great source of knowledge for me.
I am trying to get to know the AVFoundation framework.
What I want to do is save what the camera sees and then detect when something changes.
Here is the part where I save the image to a UIImage:
if (shouldSetBackgroundImage) {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth,
bufferHeight, 8, bytesPerRow,
colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage * image = [UIImage imageWithCGImage:quartzImage];
[self setBackgroundImage:image];
NSLog(@"reference image actually set");
// Release the Quartz image
CGImageRelease(quartzImage);
//Signal that the image has been saved
shouldSetBackgroundImage = NO;
}
and here is the part where I check if there is any change in the image seen by the camera:
else {
CGImageRef cgImage = [backgroundImage CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
const unsigned char* data = CFDataGetBytePtr(bitmapData);
if (data != NULL)
{
int64_t numDiffer = 0, pixelCount = 0;
NSMutableArray * pointsMutable = [NSMutableArray array];
for( int row = 0; row < bufferHeight; row += 8 ) {
for( int column = 0; column < bufferWidth; column += 8 ) {
//we get one pixel from each source (buffer and saved image)
unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
const unsigned char *referencePixel = data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
pixelCount++;
if ( !match(pixel, referencePixel, matchThreshold) ) {
numDiffer++;
[pointsMutable addObject:[NSValue valueWithCGPoint:CGPointMake(SCREEN_WIDTH - (column/ (float) bufferHeight)* SCREEN_WIDTH - 4.0, (row/ (float) bufferWidth)* SCREEN_HEIGHT- 4.0)]];
}
}
}
numberOfPixelsThatDiffer = numDiffer;
points = [pointsMutable copy];
}
For some reason this doesn't work, meaning that the iPhone detects almost everything as being different from the saved image, even though I set a very low threshold for detection in the match function...
Do you have any idea what I am doing wrong?
There are three possibilities I can think of for why you might be seeing nearly every pixel be different: colorspace conversions, incorrect mapping of pixel locations, or your thresholding being too sensitive for the actual movement of the iPhone camera. The first two aren't very likely, so I think it might be the third, but they're worth checking.
There might be some color correction going on when you place your pixels within a UIImage, then extract them later. You could try simply storing them in their native state from the buffer, then using that original buffer as the point of comparison, not the UIImage's backing data.
Also, check to make sure that your row/column arithmetic works out to the same pixel locations in both images. Perhaps generate a difference image from the absolute difference of the two images, then use a simple black/white divided area as a test image for the camera; a quick sketch of that check follows.
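As an illustration of that diagnostic (not the poster's code), a hedged OpenCV sketch, assuming both images are already available as same-sized 8-bit grayscale cv::Mats:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
// Fraction of pixels whose absolute difference exceeds 'threshold'.
// Useful as a sanity check before blaming the matching logic.
double changed_fraction(const cv::Mat &a, const cv::Mat &b, int threshold)
{
    cv::Mat diff, mask;
    cv::absdiff(a, b, diff);                      // per-pixel |a - b|
    cv::threshold(diff, mask, threshold, 255, cv::THRESH_BINARY);
    return (double)cv::countNonZero(mask) / (diff.rows * diff.cols);
}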
The most likely case is that the overall image is shifting by more than one pixel simply through the act of a human hand holding it. These whole-frame image shifts could cause almost every pixel to be different in a simple comparison. You may need to adjust your thresholding or do more intelligent motion estimation, like that used in video compression routines.
Finally, when it comes to the comparison operation, I'd recommend taking a look at OpenGL ES 2.0 shaders for performing this. You should see a huge speedup (14-28X in my benchmarks) over doing this pixel-by-pixel comparison on the CPU. I show how to do color-based thresholding using the GPU in this article, which has this iPhone sample application that tracks colored objects in real time using GLSL shaders.
Human eyes are very different from a camera (even a very expensive one) in that we don't perceive minimal light changes or small motion changes. Cameras DO; they are very sensitive but not smart at all!
With your current approach (it seems you are comparing each pixel):
What would happen if the frame were shifted only 1 pixel to the right?! You can imagine the result of your algorithm, right? Humans would perceive nothing, or almost nothing.
There is also the camera shutter problem: every frame might not have the same amount of light. Hence, a pixel-by-pixel comparison method is too prone to failure.
You want to at least pre-process your image and extract some basic features, maybe edges, corners, etc. OpenCV is easy to use for that, but I am not sure such processing will be fast on the iPhone (it depends on your image size).
Alternatively, you can try the naive template matching algorithm with a template size that is a little smaller than your whole view size, as sketched below.
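For reference, a minimal OpenCV sketch of that idea (hypothetical names; on iOS you would first wrap the camera buffer and the saved image as cv::Mat):
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
// Compare the current frame against the reference by template matching:
// take a central patch of the reference (a little smaller than the whole
// view) and look for it in the frame. A weak peak score suggests the scene
// changed; the peak location gives the shift. 8-bit grayscale assumed.
bool scene_changed(const cv::Mat &frame, const cv::Mat &reference,
                   double scoreThreshold) // e.g. 0.7, tune empirically
{
    int mx = reference.cols / 8, my = reference.rows / 8;
    cv::Mat templ = reference(cv::Rect(mx, my,
                                       reference.cols - 2 * mx,
                                       reference.rows - 2 * my));
    cv::Mat result;
    cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);
    double maxVal;
    cv::minMaxLoc(result, NULL, &maxVal);
    return maxVal < scoreThreshold; // weak match => scene has changed
}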
Image processing is computationally expensive, so don't expect it to be fast the first time, especially on a mobile device, and even more so if you don't have experience in image processing / computer vision.
Hope it helps ;)