Convert TuriCreate Image to OpenCV Mat - turi-create

Is there a way to convert images loaded using image_analysis.load_images() to the cv::Mat format?
I am using the OpenCV HOG detector and am stuck bridging the gap between TuriCreate images and OpenCV.
Thanks!

Looking at the source code for Image, I see there is a property called pixel_data that returns the pixel data of the image object as a numpy array. I can then use that numpy array wherever a cv::Mat is expected.
I am not a Python expert, so there may be some conversion magic going on behind the scenes. If there is a better way to accomplish this, please add your own answer.
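For reference, here is a minimal sketch of the round trip, assuming load_images() returns an SFrame with an 'image' column and that pixel_data is in RGB order (OpenCV expects BGR); the 'images/' path is just a placeholder:

    import cv2
    import turicreate as tc

    sf = tc.image_analysis.load_images('images/')   # placeholder path
    tc_image = sf['image'][0]

    rgb = tc_image.pixel_data                        # numpy array of the pixel data
    bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)       # reorder channels for OpenCV

    # The array can now be used wherever a cv::Mat is expected,
    # e.g. OpenCV's default HOG people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, weights = hog.detectMultiScale(bgr, winStride=(8, 8))
    print(rects)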

Related

Matlab - Center of mass of object having only its edge

I'm trying to make an object recognition program using a k-NN classifier. I've got a bunch of images for training the classifier and a bunch of images to recognize. The images are grayscale and there is one object per image. The problem is that only the edge of each object is present (it is not filled), so I don't think regionprops(img,'centroid') will work properly, from what I understand...
So how can I get their center of mass?
xenoclast's answer should be quite clear; just to add something extra:
Once you have created the binary image from your grayscale image with im2bw, if the edge of the object is a closed boundary that fully encloses it, you may use regionprops(bw,'centroid') directly without going through imfill.
The first step would be to binarise the image with im2bw. Then you can use imfill(img, 'holes') to turn it from an outline into a filled solid. After that regionprops will work as expected.
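For illustration, here is a rough Python/OpenCV equivalent of the im2bw -> imfill -> regionprops workflow described above; the filename is a placeholder and the code assumes one closed outline per image:

    import cv2
    import numpy as np

    gray = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # ~ im2bw

    # Fill the outline so we get the centre of mass of the solid region (~ imfill).
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    filled = np.zeros_like(bw)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

    # Image moments give the centroid (~ regionprops(bw, 'centroid')).
    m = cv2.moments(filled, binaryImage=True)
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print(cx, cy)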

How to plot vector (not rastered as pixels) graphics in opencv

Being used to Matlab and its great capabilities for drawing vector graphics, I am looking for something similar in OpenCV. OpenCV's drawing functions seem to rasterize lines and points at pixel level. Currently, I dump the data to text, copy-paste it into Matlab and do all the plots there. I have also thought about using the Matlab engine to pass it the parameters and run the plots, but it seems like too much of a mess for a simple debug operation.
I want to be able to do the following:
Zoom in and out of the image
Draw a line/point that is re-rasterized each time I zoom, like in Matlab.
Currently, I have found the Image Watch plugin to take care of zooming, but it does not help with the second part.
Any idea?
OpenCV has a lot of capabilities for processing an image but only minimal ones for displaying the result. It has nothing that can display vector graphics like Matlab. When I need to see polygons on an image (or just polygons), I dump them to a file and use a third-party viewer (usually the Giv viewer).
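A tiny sketch of that dump-to-file approach, using OpenCV contours as the polygons; the output format here (one "x y" pair per line, blank line between polygons) is an assumption, so adapt it to whatever your viewer expects:

    import cv2

    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)            # placeholder filename
    _, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bw, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    with open('polygons.txt', 'w') as f:
        for contour in contours:
            for x, y in contour.reshape(-1, 2):
                f.write(f'{x} {y}\n')
            f.write('\n')                                           # polygon separator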

Is there any open source sdk like cam scanner in iphone sdk [duplicate]

I am stuck on a feature in my application. I want a cropping feature similar to CamScanner's cropping.
The CamScanner screens are:
I have created a similar crop view.
I have obtained the CGPoint of each of the four corners.
But how can I obtain the cropped image when the selection is slanted?
Please give me some suggestions if possible.
This is a perspective transform problem: in effect, a 3D projection is being mapped onto a 2D plane.
Since the selection corners in the first image form a quadrilateral, transforming it into a rectangle means you will either need to add pixel information (interpolation) or remove some pixels.
So the actual problem is to add additional pixel information to the cropped image and project it to generate the second image. It can be implemented in various ways:
- You can implement it yourself by applying a perspective transformation matrix with interpolation.
- You can use OpenGL.
- You can use OpenCV.
...and there are many more ways to implement it.
I solved this problem using OpenCV. The following OpenCV functions will help you achieve this:
cvGetPerspectiveTransform
cvWarpPerspective
The first function calculates the transformation matrix from the source and destination projection coordinates. In your case the src array will hold the CGPoint values of all four corners, and dst will hold the rectangular projection points, for example {(0,0),(200,0),(200,150),(0,150)}.
Once you have the transformation matrix, you pass it to the second function. You can also visit this thread; a sketch of the two-step recipe is shown below.
There may be a few alternatives to the OpenCV library, but OpenCV has a good collection of image processing algorithms.
An iOS application using the OpenCV library is available at eosgarden.
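Here is the sketch of that two-step recipe, using the modern Python equivalents of the legacy C functions (cv2.getPerspectiveTransform / cv2.warpPerspective); the corner coordinates and filenames are placeholders standing in for the four CGPoints from the crop view:

    import cv2
    import numpy as np

    img = cv2.imread('document.jpg')                      # placeholder filename

    # Source: the four slanted selection corners (same order as the destination points).
    src = np.float32([[40, 60], [520, 80], [500, 400], [30, 380]])
    # Destination: the upright rectangle from the answer, e.g. 200 x 150.
    dst = np.float32([[0, 0], [200, 0], [200, 150], [0, 150]])

    M = cv2.getPerspectiveTransform(src, dst)             # step 1: transformation matrix
    warped = cv2.warpPerspective(img, M, (200, 150))      # step 2: apply it (interpolates)
    cv2.imwrite('cropped.jpg', warped)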
I see two possibilities. The first is to calculate a transformation matrix that slants the image and install it in the CATransform3D property of your view's layer.
That would be simple, assuming you knew how to form the transformation matrix that does the stretching. I've never learned how to construct transformation matrices that stretch or skew images, so I can't be of much help there. I'd suggest googling transformation matrices and stretching/skewing.
The other way would be to turn the part of the image you are cropping into an OpenGL texture and map the texture onto your output. The actual texture drawing would be easy, but there are about 1000 kilos of OpenGL setup to do, and a whole lot of learning before you get anything done at all. If you want to pursue that route, I'd suggest searching for simple 2D texture examples using the iOS 5 GLKit.
Use the code given at this link: http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Instead of using CGRect and CGContextClipToRect, try using CGContextEOClip or CGContextClosePath.
I haven't tried this exact approach, but I have tried drawing a closed path using CGContextClosePath in the touchesBegan, touchesMoved and touchesEnded events.
Hope this gives more insight into your problem.

slicing a 3D image data set at different angles [duplicate]

This question already has answers here:
Extract arbitrarily rotated plane of data from 3D array as 2D array
I am working with a 3D stack of CT data. I want to define a plane and slice this 3D image dataset with that plane. I'm using MATLAB. I have attempted a few different approaches, including rotating the image dataset prior to slicing it; however, imrotate() only rotates the image about one axis (the z-axis, I believe).
I have also tried defining the plane, intersecting it with each image slice and obtaining the data points by interpolation. I thought, and still think, this is a clean way of approaching the problem, but I have not succeeded in finding out why the approach is not working. I understand that my image is defined by coordinates, while MATLAB defines the plane through dimensions. As straightforward as it sounds, I have been struggling to figure out the solution for a while now.
I appreciate any help guiding me to a solution.
Thank you in advance!
I would strongly recommend using ITK (http://www.itk.org/Doxygen41/html/annotated.html) for working with medical images. MATLAB is not very helpful when working with large medical images. There are various filters in ITK which can serve your purpose, e.g. ExtractSliceImageFilter... Maybe a simple cropping is all you want. ITK is a bit of a pain to learn initially, but totally worth it. Refer to the ITK documentation and examples; most doubts about how to use a particular function can be resolved by looking at the solved examples given there.
http://www.mathworks.com/products/demos/image/3d_mri/tform3.html
I hope this helps.
I would also go with magarwal's suggestion: the MATLAB people usually take ITK filters and reimplement them in MATLAB, so if you have C++, Java, Python, C# or any similar skill you can use ITK directly.
Trust me, you will be ahead rather than waiting for MATLAB to implement filters that already exist in ITK.
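To illustrate the interpolation approach described in the question, here is a sketch in Python (in keeping with the ITK/Python suggestion above) that samples the volume on an arbitrarily oriented plane with scipy; the volume size, plane origin and in-plane directions are made-up placeholders:

    import numpy as np
    from scipy.ndimage import map_coordinates

    volume = np.random.rand(128, 128, 128)     # stand-in for the CT stack, indexed (z, y, x)

    origin = np.array([64.0, 64.0, 64.0])      # a point on the slicing plane
    u = np.array([0.0, 1.0, 0.0])              # first in-plane unit vector
    v = np.array([0.5, 0.0, np.sqrt(0.75)])    # second in-plane unit vector, tilted off-axis

    rows, cols = 100, 100
    r, c = np.meshgrid(np.arange(rows) - rows / 2,
                       np.arange(cols) - cols / 2, indexing='ij')

    # Coordinates of every sample point on the plane, shape (3, rows, cols).
    coords = origin[:, None, None] + r[None] * u[:, None, None] + c[None] * v[:, None, None]

    oblique_slice = map_coordinates(volume, coords, order=1)   # trilinear interpolation
    print(oblique_slice.shape)                                  # (100, 100)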
