Storing texture name in objects - obj-c - iphone

I'm in the process of learning OpenGL ES / Objective-C and creating an app for the iPhone that will render multiple 3D models. I've created an object that stores all the important details such as vertices, faces and textures, but I also want to store the name of the texture currently being used on the model. In my CustomModels.h file I have:
@interface CustomModels : NSObject {
    Vertex3D *vertices;
    int numberOfFaces;
    Face3D *faces;
    Tex3D *texCoords;
    BOOL active;
    NSMutableArray *textures;
    GLuint activeTexture;
}
Then, in my view controller's .m file, I'm trying to store the texture name like this:
glGenTextures(1, &oModel.activeTexture);
But I receive this error:
lvalue required as unary '&' operand
I'm a complete beginner in Objective-C programming, so if anyone can point me in the right direction it would be much appreciated! Many thanks!

glGenTextures expects a pointer to a GLuint as its second parameter. You cannot take the address of an Objective-C property here: dot syntax is just another way of writing the method call [oModel activeTexture], which is not an lvalue. Use a temporary local variable instead:
GLuint texture = 0;
glGenTextures(1, &texture);
oModel.activeTexture = texture;
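For the last line to compile, CustomModels also needs to expose activeTexture as a property, not just an ivar. A minimal sketch (the question shows only the ivar, so this declaration is an assumption):

// in the @interface of CustomModels.h
@property (nonatomic, assign) GLuint activeTexture;
// and in the @implementation in CustomModels.m
@synthesize activeTexture;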


How to make the OpenCV sample run

I am trying to build the OpenCV sample project for Template Matching, as explained here.
The steps I have taken so far:
Downloaded and imported the OpenCV framework into my project, changed the .m files to .mm, and added the following to the .pch file:
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
I have also downloaded and imported the MatchTemplate_Demo.cpp file from the link.
But I am having a library linking issue:
ld: warning: directory not found for option '-L/Users/G1/Desktop/Xcode'
ld: warning: directory not found for option '-Lprojects/FirstOpenCv/opencv/lib/debug'
ld: library not found for -lopencv_calib3d
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I followed the same steps to include the library as given here.
2) Add $(SRCROOT)/opencv to the header search path, and $(SRCROOT)/opencv/lib/debug to the library search path for the debug configuration and $(SRCROOT)/opencv/lib/release for the release build.
3) Add the OpenCV libs to the linker input by modifying the "Other Linker Flags" option with "-lopencv_calib3d -lzlib -lopencv_contrib -lopencv_legacy -lopencv_features2d -lopencv_imgproc -lopencv_video -lopencv_core".
Now, can anyone please tell me how I should make the project run?
I have taken the source and template images and imported them into the project.
I basically have a ViewController.h and a ViewController.mm file, and I don't know what I should code in these files to see the result.
Also, step 2:
I need to scan the image in real time using the camera view (so that when I place my camera over the source image it should scan it and find the template).
On following this link, I got a linker error while importing the .cpp file:
ld: 1 duplicate symbol for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Please can anyone suggest how I should implement this.
You have three interrelated questions here:
1/ how to get openCV framework to run in an iOS project
2/ how to get the Template Matching c++ sample code to run in an iOS project
3/ how to do live template matching with the camera view
1/ how to get openCV framework to run in an iOS project
Download and import the openCV framework as you describe
Change the .pch file as you describe
Check that the c++ standard library is set to libc++ in your target build settings (this is the default for new projects)
Don't just import demo.cpp without making changes as described below (it is a 'raw' c++ program with its own main function, and needs alterations to work as part of an iOS/Cocoa project)
Don't mess with header search paths, other linker flags etc; this isn't necessary if you have imported the prebuilt framework from openCV.org
Don't change your .m files to .mm unless you know that you need to. My advice is to keep your c++ code separate from your objective-C code as far as practicable, so most of your files should be .m files (objective-C) or .cpp files (c++). You only need the .mm extension for "objective-C++" files where you intend to mix objective-C and c++ in the same file.
2/ how to get the Template Matching c++ sample code to run in an iOS project
We are going to set this up so that your iOS viewController - and the bulk of your iOS code - does not need to know that the image is processed using openCV/C++, and likewise the C++ code doesn't need to know where its input or output image data is being routed to. We do this by making a small wrapper class between the two that translates objective-C method calls to c++ class member functions and back. We will also set up a category on UIImage to translate image formats from the iOS-friendly UIImage to the openCV-native cv::Mat.
UIImage+OpenCV Category
You need some utility methods to convert from UIImage to cv::Mat and back. A good place to put these is in a UIImage category. In Xcode, File > New File > Cocoa Touch > Objective-C category will set you up. Call the category OpenCV and make it a category on UIImage. Change this file's extension from .m to .mm, as it will need to understand c++ types from the openCV framework.
The header should look something like this:
#import <UIKit/UIKit.h>

@interface UIImage (OpenCV)

//cv::Mat to UIImage
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;

//UIImage to cv::Mat
- (cv::Mat)cvMat;

@end
The .mm file should implement these methods by closely following this openCV.org code sample, adapted to work as category methods (e.g. you don't pass a UIImage into the instance method, but refer to it using self).
You can use the category methods as if they are UIImage class and instance methods like this:
UIImage* image = [UIImage imageWithCVMat:matImage]; //class method
cv::Mat matImage = [image cvMat]; //instance method
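For reference, here is a minimal sketch of the instance method, following the openCV.org conversion sample linked above; treat it as a starting point rather than the definitive implementation:

// UIImage+OpenCV.mm (sketch)
- (cv::Mat)cvMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat mat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    // Draw the UIImage into a bitmap context backed by the Mat's data
    CGContextRef contextRef = CGBitmapContextCreate(mat.data,
                                                    cols, rows,
                                                    8,            // bits per component
                                                    mat.step[0],  // bytes per row
                                                    colorSpace,
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return mat;
}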
openCV wrapper class
Make a wrapper class to convert your objective-C method (called from a viewController) to a c++ function
The header looks something like this:
// CVWrapper.h
#import <Foundation/Foundation.h>

@interface CVWrapper : NSObject

+ (UIImage*) templateMatchImage:(UIImage*)image
                          patch:(UIImage*)patch
                         method:(int)method;

@end
We send in the template image, the patch image and the template matching method, and get back an image showing the match.
Implementation (.mm file):
// CVWrapper.mm
#import "CVWrapper.h"
#import "CVTemplateMatch.h"
#import "UIImage+OpenCV.h"

@implementation CVWrapper

+ (UIImage*) templateMatchImage:(UIImage *)image
                          patch:(UIImage *)patch
                         method:(int)method
{
    cv::Mat imageMat = [image cvMat];
    cv::Mat patchMat = [patch cvMat];
    cv::Mat matchImage = CVTemplateMatch::matchImage(imageMat, patchMat, method);
    UIImage* result = [UIImage imageWithCVMat:matchImage];
    return result;
}

@end
We are effectively taking a standard objective-C method and UIImage types and translating them into a call to a C++ member function with c++ (openCV framework) types, then translating the result back to a UIImage.
C++ TemplateMatch class
Header:
// CVTemplateMatch.h
#ifndef __CVOpenTemplate__CVTemplateMatch__
#define __CVOpenTemplate__CVTemplateMatch__

#include <opencv2/opencv.hpp>

class CVTemplateMatch
{
public:
    static cv::Mat matchImage (cv::Mat imageMat,
                               cv::Mat patchMat,
                               int method);
};

#endif /* defined(__CVOpenTemplate__CVTemplateMatch__) */
Implementation:
This is the Template Match openCV example code, reworked as a class implementation:
// CVTemplateMatch.cpp
/*
Alterations for use in iOS project
[1] remove GUI code (iOS supplies the GUI)
[2] change main{} to static member function
with appropriate inputs and return value
[3] change MatchingMethod{} signature
to return Mat value
*/
#include "CVTemplateMatch.h"
//[1] #include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
/// Global Variables
Mat img; Mat templ; Mat result;
//[1] char* image_window = "Source Image";
//[1] char* result_window = "Result window";
int match_method;
//[1] int max_Trackbar = 5;
/// Function Headers
Mat MatchingMethod( int, void* ); //[3] (added return value to function)
// [2] /** @function main */
// [2] int main( int argc, char** argv )
Mat CVTemplateMatch::matchImage (Mat image,Mat patch, int method)
// [2]
{
/// Load image and template
//[2] img = imread( argv[1], 1 );
//[2] templ = imread( argv[2], 1 );
img = image; //[2]
templ = patch; //[2]
match_method = method; //[2]
/// Create windows
//[1] namedWindow( image_window, CV_WINDOW_AUTOSIZE );
//[1] namedWindow( result_window, CV_WINDOW_AUTOSIZE );
/// Create Trackbar
//[1] char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
//[1] createTrackbar( trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod );
Mat result = MatchingMethod( 0, 0 );
//[1] waitKey(0);
//[2] return 0;
return result; //[2]
}
//[3] void MatchingMethod( int, void* )
Mat MatchingMethod( int, void* )
{
/// Source image to display
Mat img_display;
img.copyTo( img_display );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
result.create( result_rows, result_cols, CV_32FC1 ); // note: Mat::create takes rows first
/// Do the Matching and Normalize
matchTemplate( img, templ, result, match_method );
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
/// Localizing the best match with minMaxLoc
double minVal; double maxVal; Point minLoc; Point maxLoc;
Point matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() );
/// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
if( match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED )
{ matchLoc = minLoc; }
else
{ matchLoc = maxLoc; }
/// Show me what you got
rectangle( img_display, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
rectangle( result, matchLoc, Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), Scalar::all(0), 2, 8, 0 );
//[1] imshow( image_window, img_display );
//[1] imshow( result_window, result );
return img_display; //[3] add return value
}
Now in your viewController you just need to call this method:
UIImage* matchedImage =
    [CVWrapper templateMatchImage:self.imageView.image
                            patch:self.patchView.image
                           method:0];
with no c++ in sight.
3/ Template Matching with live camera view
The short answer: matchTemplate is not going to work too well in a live camera context. The algorithm looks for a match in the image at the same scale and orientation as the patch: it slides the patch tile across the image at its original orientation and size, comparing for the best match. This is not going to yield great results if the image is perspective-skewed, a different size, or rotated to a different orientation.
You could look instead at OpenCV's Feature Detection algorithms, some of which have been moved to non-free. Here is a nice description of SIFT to give you the idea. For video capture you might also want to look at cap_ios.h in opencv2/highgui: here is a tutorial.
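To give an idea of the cap_ios route, here is a minimal sketch using the CvVideoCamera class from opencv2/highgui/cap_ios.h; self.imageView and the videoCamera property are assumed to be declared in your view controller:

// ViewController.mm (sketch)
#import <opencv2/highgui/cap_ios.h>

// The view controller adopts CvVideoCameraDelegate. In viewDidLoad:
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
self.videoCamera.delegate = self;
self.videoCamera.defaultFPS = 30;
[self.videoCamera start];

// Delegate callback, invoked once per frame with a cv::Mat you can process;
// drawing into 'image' changes what is displayed
- (void)processImage:(cv::Mat&)image
{
    // run feature detection / matching here
}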
Actually, you have downloaded an already-compiled library, so there is no need to follow the steps you mention in your question; that is the problem (i.e. you followed the wrong steps), because those steps are for compiling the source code into a static library yourself.
Follow the steps below and it will be done:
Unzip the downloaded framework. You will see a folder named "opencv2.framework".
Drag that folder directly into your project. (Note: when you drag the folder into Xcode, Xcode will prompt you with a dialog containing a check box to actually copy the files into your project folder. Please tick that check box.)
Import OpenCV in your .pch file; the way you have shown in your question is correct.
Now compile. One thing more: every file in which you want to use OpenCV functions should have the .mm extension (i.e. Objective-C++ source). It will run perfectly.

CCSpriteBatchNode and CCArray, finding inactive objects

For a simple game, I have 4 different platforms (all on one spritesheet). I initially add 5 of each to a CCSpriteBatchNode and set them all as not visible. When I set my platforms, I want to take a platform of a certain type from my CCSpriteBatchNode, make it visible, and position it.
I am having trouble finding platforms of a specific type that aren't visible (or vice versa).
I know you can use [batchNode getChildByTag:tag], but as far as I know that only returns one sprite. Is there any way I can put pointers to each platform of a specific type into an array, so that I can iterate through the array and find all the not-visible sprites?
Thanks!
As suggested by Drama, you will have no choice but to iterate over the children. As for identifying which sprite corresponds to which platform, a few ways exist. A simple one is to use the 'tag' property of the sprite -- assuming you do not use it for any other purpose.
// some constants
static int _tagForIcyPlatform    = 101;
static int _tagForRedHotPlatform = 102;
// ... etc

// where you create the platforms
CCSpriteBatchNode *platforms = [CCSpriteBatchNode batchNodeWithFile:@"mapItems_playObjects.pvr.gz"];

CCSprite *sp = [CCSprite spriteWithSpriteFrameName:@"platform_icy.png"];
sp.tag = _tagForIcyPlatform;
[platforms addChild:sp];

sp = [CCSprite spriteWithSpriteFrameName:@"platform_redHot.png"];
sp.tag = _tagForRedHotPlatform;
[platforms addChild:sp];
// ... etc

// where you want to change properties of all platforms of one type
-(void) setVisibilityOf:(int) aPlatformTag to:(BOOL) aVisibility {
    for (CCNode *child in platforms.children) {
        if (child.tag != aPlatformTag) continue;
        child.visible = aVisibility;
    }
}
Once again, this works only if you are not using the tags of the platform sprites for another purpose. If you need the tags for something else, consider keeping one NSMutableArray per platform type in your class, and storing in it the pointers to the sprites of that type.
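A minimal sketch of that array-based alternative (names are illustrative):

// one array per platform type, kept as an ivar
NSMutableArray *icyPlatforms = [[NSMutableArray alloc] init];

// when creating an icy platform, remember the pointer as well
[platforms addChild:sp];
[icyPlatforms addObject:sp];

// later: find the first icy platform that is not visible, then place it
CCSprite *spare = nil;
for (CCSprite *p in icyPlatforms) {
    if (!p.visible) { spare = p; break; }
}
spare.visible = YES;
spare.position = ccp(100, 200); // wherever it is needed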
There's not a super straightforward way to do that. You'll need to iterate through the children and inspect each child individually.
For coding efficiency, consider adding a category to CCSpriteBatchNode that performs this function for you. That way you can easily replicate it as needed.

DirectFB data from memory buffer

I need a very fast way of displaying a data buffer to the screen. I first tried accessing the Linux framebuffer directly, and that proved to be quite good. Then I learned about DirectFB and I liked the extra features it provides (fast memcpy, resizing images on the fly, no need for extra code, etc.). But then I hit a snag - all the examples are for images that are loaded from files. As far as I can tell, there are no examples/tutorials for its 'DataBuffer' type. After peering through the documentation and source code I've managed to compile something that goes like this:
DFBSurfaceDescription sdsc;
DFBDataBufferDescription ddsc;
DFBDataBufferDescriptionFlags ddscf = (DFBDataBufferDescriptionFlags)DBDESC_MEMORY;
IDirectFBDataBuffer *dbuffer;
IDirectFBImageProvider *provider;
ddsc.flags = ddscf;
ddsc.file = NULL;
ddsc.memory.data = m_z;
ddsc.memory.length = 640*480;
DFBCHECK (DirectFBInit (&argc, &argv));
DFBCHECK (DirectFBCreate (&dfb));
DFBCHECK (dfb->SetCooperativeLevel (dfb, DFSCL_FULLSCREEN));
sdsc.flags = DSDESC_CAPS;
sdsc.caps = (DFBSurfaceCapabilities)(DSCAPS_PRIMARY | DSCAPS_FLIPPING);
DFBCHECK (dfb->CreateSurface( dfb, &sdsc, &primary ));
DFBCHECK (primary->GetSize (primary, &screen_width, &screen_height));
DFBCHECK (dfb->CreateDataBuffer(dfb, &ddsc, &dbuffer));
DFBCHECK (dbuffer->CreateImageProvider(dbuffer, &provider));
DFBCHECK (provider->GetSurfaceDescription (provider, &sdsc));
DFBCHECK (dfb->CreateSurface( dfb, &sdsc, &fbwindow ));
DFBCHECK (provider->RenderTo (provider, fbwindow, NULL));
provider->Release (provider);
So basically I'm creating a DataBuffer from the DFB, then an ImageProvider from the DataBuffer, and setting it to render to a surface. When I run it, however, it throws this error:
(#) DirectFBError [dbuffer->CreateImageProvider(dbuffer, &provider)]: No (suitable) implementation found!
Is the method really not implemented? I'm currently using DirectFB 1.4, and from the API documentation the function should be there. That being said, does anyone know how to get a buffer (char * of 640*480*4, RGBA) from memory to render to the framebuffer using DirectFB?
Thanks.
Maybe a bit late to help you, but for the benefit of anyone else trying this, here is an answer.
It is actually simpler than you think (with one gotcha) - I am doing exactly what you want, using DirectFB 1.4.11.
Once you have the primary surface, don't bother with the DataBuffer. Create another surface, using the DSDESC_PREALLOCATED flag and your buffer as the preallocated data. Then Blit() the data from your new surface onto the primary surface and Flip() the primary surface onto the screen. The one gotcha is that your data needs to be in a format that DirectFB understands: 32-bit RGBA is not one of them, but 32-bit ARGB is - I had to parse my buffer and swap the bytes around.
Example code:
DFBSurfaceDescription dsc;   /* declarations added for completeness */
IDirectFBSurface *imageSurface;

dsc.width = screen_width;
dsc.height = screen_height;
dsc.flags = DSDESC_HEIGHT | DSDESC_WIDTH | DSDESC_PREALLOCATED | DSDESC_PIXELFORMAT;
dsc.caps = DSCAPS_NONE;
dsc.pixelformat = DSPF_ARGB;
dsc.preallocated[0].data = buffer;        /* buffer is your data */
dsc.preallocated[0].pitch = dsc.width * 4;
dsc.preallocated[1].data = NULL;
dsc.preallocated[1].pitch = 0;

DFBCHECK (dfb->CreateSurface( dfb, &dsc, &imageSurface ));
DFBCHECK (primary->Blit(primary, imageSurface, NULL, 0, 0));
DFBCHECK (primary->Flip(primary, NULL, DSFLIP_ONSYNC));
If your buffer is not the same size/shape as your screen, you can use StretchBlit() instead to resize it.
I hope this helps.
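For the byte swap mentioned above, something along these lines should do (a sketch: it treats each pixel as a 32-bit word and rotates 0xRRGGBBAA into 0xAARRGGBB; check whether your source defines 'RGBA' byte-wise or word-wise before relying on it):

#include <stdint.h>
#include <stddef.h>

static void rgba_to_argb(uint32_t *pixels, size_t count)
{
    size_t i;
    for (i = 0; i < count; i++) {
        uint32_t p = pixels[i];
        pixels[i] = (p >> 8) | (p << 24); /* move alpha from low to high byte */
    }
}

/* e.g. rgba_to_argb((uint32_t *)buffer, 640 * 480); before CreateSurface() */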
The answer above from MartinP is good, but it only works if the image is not compressed. I found this topic because I wanted to load and decode/uncompress a png/jpeg image directly from memory.
There is hardly any useful information on this and I struggled with it myself, but here is how to do it. Although this is an old question, it might help others trying to accomplish the same:
// Variables
DFBDataBufferDescription ddsc;
DFBSurfaceDescription sdsc;
IDirectFBDataBuffer *buffer;
IDirectFBImageProvider *image_provider;
// The surface that will contain the rendered image
IDirectFBSurface *surface;
// create a data buffer for memory
ddsc.flags = DBDESC_MEMORY;
ddsc.memory.data = data;
ddsc.memory.length = dataLength;
DFBCHECK(directFB->CreateDataBuffer(directFB, &ddsc, &buffer));
// Create the image provider, surface description and surface itself
DFBCHECK(buffer->CreateImageProvider(buffer, &image_provider));
DFBCHECK(image_provider->GetSurfaceDescription(image_provider, &sdsc));
DFBCHECK(directFB->CreateSurface(directFB, &sdsc, &surface ));
// Now render the image onto the surface
DFBCHECK(image_provider->RenderTo(image_provider, surface, NULL));
// Release
image_provider->Release(image_provider);
buffer->Release(buffer);
The data variable is a pointer to an array of unsigned char containing the image (hint: you can create such an array with 'xxd -i image.jpg > image.h'). dataLength is an unsigned int holding the size of the array. The created surface can be blitted to the screen, or perhaps you can RenderTo the display surface directly.
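To actually display the decoded image, you can blit the resulting surface onto the primary surface exactly as in the answer above, for example:

DFBCHECK(primary->Blit(primary, surface, NULL, 0, 0));
DFBCHECK(primary->Flip(primary, NULL, DSFLIP_ONSYNC));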

Photo manipulation/filter library techniques for iPhone

Just wondering how difficult it would be to convert the following library to Objective-C to be used on the iPhone?
I guess I'm after some similar image-processing libraries that would lead me in the right direction? I'm aware that it's not easy to apply the same filters as existing applications like Instagram, Path and Hipstamatic.
However, I'd like to be able to do something similar.
Here is the JavaScript library:
https://github.com/alexmic/filtrr/blob/master/filtrr.js
A demo of its functionality can be found here:
http://alexmic.net/demos/filtrr
I've started a bit of the conversion; here is a sample. Of course, fully converting it would take a lot of time - too much for me to do - but it should show you how I've approached it. I'm hoping you have prior experience with Obj-C?
Also, perhaps you could look at some existing libraries.
http://code.google.com/p/simple-iphone-image-processing/
http://mattgemmell.com/2010/07/05/mgimageutilities/
http://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html
Also, don't forget that Xcode can compile C++ in your project, so also investigate C and C++ libraries.
// filtr.mm - a rough start at the conversion. The canvas-related calls from
// the JavaScript original have no iOS equivalent and are left here as
// commented placeholders; they would become CGContext/CGImage code.
@interface filtr : NSObject {
    id canvas;          // stand-in for the JS canvas element
    int w;
    int h;
    CGContextRef ctx;
    NSData *imageData;
}
- (id)initWithCanvas:(id)aCanvas;
- (int)safe:(int)i;
@end

@implementation filtr

- (id)initWithCanvas:(id)aCanvas
{
    if ((self = [super init])) {
        NSAssert(aCanvas != nil, @"Canvas supplied to filtr was null or undefined.");
        canvas = aCanvas;
        // Placeholders from the JavaScript original:
        // w = canvas.width;
        // h = canvas.height;
        // ctx = canvas.getContext("2d");
        // imageData = ctx.getImageData(0, 0, w, h);
    }
    return self;
}

/**
 * Clamps the intensity level between 0 - 255.
 *
 * @param i The intensity level.
 */
- (int)safe:(int)i
{
    return MIN(255, MAX(0, i));
}

@end
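To show where this is heading on iOS, here is a hedged sketch (my own, not part of filtrr) of the Core Graphics plumbing that replaces the JavaScript canvas calls: draw the UIImage into a bitmap context, adjust each pixel with the same 0-255 clamp as -safe:, and rebuild a UIImage:

- (UIImage *)brightenImage:(UIImage *)source by:(int)delta
{
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    // Draw the image into a bitmap context we own, so we can touch the pixels
    unsigned char *pixels = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Adjust R, G and B of every pixel, clamping to 0-255 like -safe:
    // (premultiplied alpha is ignored here for simplicity)
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        pixels[i]     = MIN(255, MAX(0, pixels[i]     + delta));
        pixels[i + 1] = MIN(255, MAX(0, pixels[i + 1] + delta));
        pixels[i + 2] = MIN(255, MAX(0, pixels[i + 2] + delta));
    }

    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newImage];

    CGImageRelease(newImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}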

Get size of repeated pattern from UIColor?

I can query whether a UIColor is a pattern by inspecting the CGColor instance it wraps: the CGColorGetPattern() function returns the pattern if it exists, or NULL if the color is not a pattern color.
The CGPatternCreate() function requires a bounds argument when creating a pattern; this value defines the size of the pattern tile (known as a cell in Quartz parlance).
How would I go about retrieving this pattern size from a UIColor, or from the backing CGPattern once it has been created?
If your application is intended for internal distribution only, then you can use a private API. If you look at the functions defined in the CoreGraphics framework, you will see that there are a bunch of them, among which one called CGPatternGetBounds:
otool -tV /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS4.3.sdk/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics | egrep "^_CGPattern"
You just have to look the function up in the framework at runtime and call it through a function pointer.
The header to include:
#include <dlfcn.h>
The function pointer:
typedef CGRect (*CGPatternGetBounds)(CGPatternRef pattern);
The code to retrieve the function:
void *handle = dlopen("/System/Library/Frameworks/CoreGraphics.framework/CoreGraphics", RTLD_NOW);
CGPatternGetBounds getBounds = (CGPatternGetBounds) dlsym(handle, "CGPatternGetBounds");
The code to retrieve the bounds:
UIColor *uicolor = [UIColor groupTableViewBackgroundColor]; // Select a pattern color
CGColorRef color = [uicolor CGColor];
CGPatternRef pattern = CGColorGetPattern(color);
CGRect bounds = getBounds (pattern); // This result is a CGRect(0, 0, 84, 1)
I don't think it's possible to get the bounds from the CGPatternRef, if that's what you're asking.
There does not appear to be any way to directly retrieve any information from the CGPatternRef.
If you must do this, probably the only way (besides poking at the private contents of the CGPattern struct, which probably counts as "using private APIs") is to render the pattern to a sufficiently large image buffer and then detect the repeating subunit. Finding repeating patterns/images in images may be a good starting point for that.
Making your own color class that stores the bounds and lets you access them through a property might be a viable solution.
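A sketch of that wrapper idea (illustrative only, not a UIKit API; written in the manual retain/release style of the period):

@interface PatternColor : NSObject {
    UIColor *color;
    CGRect patternBounds;
}
@property (nonatomic, readonly) UIColor *color;
@property (nonatomic, readonly) CGRect patternBounds;
- (id)initWithPatternImage:(UIImage *)image;
@end

@implementation PatternColor
@synthesize color, patternBounds;

- (id)initWithPatternImage:(UIImage *)image
{
    if ((self = [super init])) {
        color = [[UIColor colorWithPatternImage:image] retain];
        // remember the cell size: a pattern made this way tiles the image 1:1
        patternBounds = CGRectMake(0, 0, image.size.width, image.size.height);
    }
    return self;
}

- (void)dealloc
{
    [color release];
    [super dealloc];
}
@end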
Extracting the pattern bounds from UIColor doesn't seem to be possible.