How can I detect an upward-facing arrow in an image on iOS?

I have an image that contains an upward-facing arrow in its left-hand side. I would like to be able to detect that particular shape in that orientation, and if it's present, capture just that portion of the image. If the arrow faces downward, I want it to be ignored.
I've looked into using SURF descriptors and OpenCV to detect and match a shape like this, but there are licensing issues around the SURF algorithms. What alternative method(s) could I use to detect this shape in this particular orientation within an image on iOS?

Simple template matching can solve this. Build a template of an 'upward-facing arrow' (or crop one from an existing image). Then use normalized cross-correlation to find where any 'upward arrows' appear in your test image. Since you are only looking for 'upward arrows' on the left-hand side of the image, limit the normalized cross-correlation search to that region. Here is a good tutorial: http://www.mathworks.com/products/demos/image/cross_correlation/imreg.html

Related

ImageJ : Overlay 2 Images When One is Distorted

I am asking for a step-by-step process with the appropriate plugins (I have been attempting with multipoint and landmark correspondence). Please include images in answer if possible.
I want to overlay two scientific images.
The images are not oriented the same: the second image is distorted because it was collected at a 45° angle, and the object was also at a different orientation (flipped horizontally and slightly rotated).
In Adobe Photoshop I transformed the distorted image to overlay the first image by eyeballing the match, as you can see below, but I am having difficulty performing this overlay in ImageJ. I have been told that my eyeballing method in Adobe Photoshop will not be sufficient for the methods section of my manuscript and that I must use a scientific program such as ImageJ.
I tried to follow instructions from the ImageJ forum for Multipoint and Landmark Correspondence, but it does not overlay the two images or transform the second image to match the first. Rather, it distorts a portion of the second image and appears to crop out the rest.

Extracting measurements from a finger via ROI and image processing MATLAB

I am trying to do a number of things via MATLAB but I am getting a bit lost about which techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger over/undershoots, the coordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording footage of a top-down view of the presentation; I then break the video file down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am trying to work on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to get a binarized image, with this as the result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to be able to take measurements from the defined ROI to work out various metrics, e.g. how far a certain point is from the ROI, so I need some sort of border for the scanner edges. From my experience with image processing so far, this has been hard to define clearly. I would like a clearer image where the finger is outlined and the background (i.e. the scanner light/blocks) is removed.
Any help would be appreciated.
Thanks

Human Detection using edge detection

I am trying to detect the exact silhouette of a human body in this dataset using background subtraction. After doing some thresholding I was getting split blobs, so I looked at this tutorial by Steve, but now I am getting blobs other than the human body, as shown below
So here is the original
After subtracting it from the background (the background was taken to be the first frame of the video), I get the following image
so I did basic thresholding and got the following image, in which the blob is split into separate areas
and using Steve's method I get this
But this contains a lot of area that is not part of the human body. Any suggestions on how, perhaps using edges, I can get a good blob of the human body?
EDIT
As @lennon310 asked me to upload the color image, here it is
and as @NKN asked me to upload the edge information of the same image, here it is
Instead of literally subtracting the background, try the vision.ForegroundDetector object, which is part of the Computer Vision System Toolbox. It implements mixture-of-Gaussians adaptive background modeling and may give you a cleaner segmentation.
Having said that, it is very unlikely that you will get the "exact" silhouette. Some error is inevitable.
In your result image, you have two types of black regions: one is moving and the other is stationary.
When you want to fill the human body, you have to choose only the moving region. For this purpose, I suggest segmenting your image with the help of an optical flow technique, so you know where the moving regions are.
Here is an interesting tutorial that does what you need:
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html

Detecting particular objects in the image i.e image segmentation with opencv

I have to select a particular object visible in my image on the iPhone.
Basically, my project is to segment image objects based on my touch.
The method I am following is to first detect the contours in the image and then select a particular contour based on the finger touch.
Is there any other method that would be more robust, given that I have to run it on video frames?
I am using OpenCV and the iPhone for the project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation, and scale.
Also check out FAST, a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.

iPhone OpenGL ES - GLU, glPushName

Using iPhone and Objective-C.
I'm trying to find which plane has been clicked/touched in my OpenGL view. Typically I would use glPushName/glPopName, but these functions don't seem to be implemented in the SDK or defined in the headers. Does anyone know where to get these useful functions, or another way to get the object that was clicked?
OpenGL ES doesn't support these functions. You'll have to find another way to pick. Either:
Render solid faces with distinct colors into a low-res buffer. Select the render-buffer resolution so that the pick square occupies a 3x3 pixel grid, then choose either the color of the center pixel or the color that occupies the most edge pixels.
Determine the pick geometrically. This usually entails placing your geometry in a BSP of some sort and doing intersection tests with a ray emanating from the tapped pixel into the screen.
Determine the pick analytically. If your geometry is simple and/or regular enough, you might be able to use some straightforward math to find out what you tapped.
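The first option hinges on a reversible mapping between object IDs and flat colors: render each pickable plane in its ID color, read the tapped pixel back with glReadPixels, and decode. The packing itself is language-neutral; here is a sketch in Python (function names are illustrative):

```python
def id_to_rgb(obj_id):
    """Pack a 24-bit object ID into an (r, g, b) triple for the
    off-screen color-pick render pass."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def rgb_to_id(r, g, b):
    """Recover the object ID from the pixel read back at the tap."""
    return (r << 16) | (g << 8) | b

# Each pickable plane gets a unique flat color; after rendering with
# lighting and blending disabled, the pixel under the tap decodes
# straight back to the plane that was touched.
print(rgb_to_id(*id_to_rgb(70000)))  # → 70000
```

Lighting, dithering, and multisampling must be off during the pick pass, or the read-back color will not decode exactly.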