Drawing image data from a data set of integers on an iPad using OpenGL - iPhone

I'm trying to draw an image using OpenGL in a project for iPad.
The image data:
A blob of UInt8 data that represents the grayscale value for each pixel in three dimensions (I'm going to draw slices from the 3D body). I also have the height and width of the image.
My current (unsuccessful) approach is to use it as a texture on a square, and I am looking at some example code I found on the net. That code, however, loads an image file from disk.
While setting up the view there is a call to CGContextDrawImage, and the last parameter is supposed to be a CGImageRef. Do you know how I can create one from my data, or is this a dead end?
Thankful for all input. I really haven't gotten a grip on OpenGL yet, so please be gentle :-)

It's not a dead end.
You can create a CGImageRef from a blob of pixel memory by using CGBitmapContextCreate() to create a bitmap context and CGBitmapContextCreateImage() to create the image ref from that context.
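A minimal sketch of that, assuming the blob is one width x height slice of 8-bit grayscale values (the function and variable names are illustrative):

// Wrap one grayscale slice in a bitmap context, then snapshot it as a CGImageRef.
CGImageRef CreateGrayscaleImage(const UInt8 *pixels, size_t width, size_t height)
{
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    // 8 bits per component, 1 byte per pixel, no alpha channel.
    CGContextRef ctx = CGBitmapContextCreate((void *)pixels, width, height,
                                             8, width, gray, kCGImageAlphaNone);
    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    return image; // caller is responsible for CGImageRelease()
}

The resulting CGImageRef can then be passed straight to CGContextDrawImage when setting up the texture.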

Related

Is there any open source sdk like cam scanner in iphone sdk [duplicate]


In OpenGL ES 2.0 for iOS, how can I use a CVPixelBufferRef to update a cubemap texture?

I have managed to get a CVPixelBufferRef from an AVPlayer to feed pixel data that I can use to texture a 2D object. When my pixel buffer has data in it, I do:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault,
    videoTextureCache_,
    pixelBuffer, // this is a CVPixelBufferRef
    NULL,
    GL_TEXTURE_2D,
    GL_RGBA,
    frameWidth,
    frameHeight,
    GL_BGRA,
    GL_UNSIGNED_BYTE,
    0,
    &texture);
I would like to use this buffer to create a GL_TEXTURE_CUBE_MAP. My video frame data is actually 6 sections in one image (e.g. a cubestrip) that together make up the sides of a cube. Any thoughts on a way to do this?
I had thought to just pretend my GL_TEXTURE_2D was a GL_TEXTURE_CUBE_MAP and replace the texture on my skybox with the texture generated by the code above, but this creates a distorted mess (as I suppose should be expected when trying to force a skybox to be textured with a GL_TEXTURE_2D).
The other idea was to set up unpacking using glPixelStorei and then read from the pixel buffer:
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, X);
glPixelStorei(GL_UNPACK_SKIP_ROWS, Y);
glTexImage2D(...,&pixelbuffer);
But unbelievably, GL_UNPACK_ROW_LENGTH is not supported in OpenGL ES 2.0 for iOS.
So, is there:
-Any way to split up the pixel data in my CVPixelBufferRef by indexing the buffer to some pixel subset before using it to make a texture?
-Any way to make 6 new GL_TEXTURE_2Ds as indexed subsets of the GL_TEXTURE_2D created by the code above?
-Any way to convert a GL_TEXTURE_2D into a valid GL_TEXTURE_CUBE_MAP? (For example, GLKit has a skybox effect that loads a GL_TEXTURE_CUBE_MAP from a single cubestrip file. It doesn't have a method to load a texture from memory, though, or I would be sorted.)
-Any other ideas?
If it were impossible any other way (which is unlikely; there is probably an alternate way, so this is probably not the best answer and involves more work than necessary), here is a hack I'd try:
A cube map works by projecting the texture for each face from a point in the center of the geometry out toward each of the cube faces. So you could reproduce that behavior yourself: you could use projective texturing to make six draw calls, one for each face of your cube. Each time, you'd first draw the face you're interested in to the stencil buffer, then calculate the projection matrix for your texture (this technique is used a lot for 'spotlight' effects in games), then figure out the transform matrix required to adjust the fragment shader's texture read so that, for each face, only the portion of the texture corresponding to that face winds up within the (0..1) texture lookup range. If everything has gone right, anything outside the 0..1 range will be discarded by the stencil buffer, and you'd be left with a DIY cube map built from a TEXTURE_2D.
The above method is actually really similar to what I'm doing for an app right now, except I'm only using projective texturing to mask off and replace a small portion of the cube map. I need to pixel-match the edges of the small square I'm projecting so that it's seamlessly applied to the skybox, which is why I'm confident this method actually reproduces the cube map behavior; otherwise, pixel-matching wouldn't be possible.
Anyway, I hope you find a way to simply convert your 2D texture to a cube map, because that would probably be much easier and cleaner.
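For the buffer-splitting idea from the question, here is a minimal sketch (an alternative to the hack above, not something from the original answer) that copies each face out of the pixel buffer row by row, standing in for the missing GL_UNPACK_ROW_LENGTH. It assumes a horizontal cubestrip in a BGRA buffer, square faces (frameWidth == 6 * frameHeight), a strip laid out in +X,-X,+Y,-Y,+Z,-Z order, and a cube-map texture already bound to GL_TEXTURE_CUBE_MAP:

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
const uint8_t *base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t stride   = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t faceSize = CVPixelBufferGetHeight(pixelBuffer);
uint8_t *face = malloc(faceSize * faceSize * 4); // scratch buffer, 4 bytes per BGRA pixel
for (int i = 0; i < 6; i++) {
    // Copy face i out of the strip one row at a time.
    for (size_t row = 0; row < faceSize; row++) {
        memcpy(face + row * faceSize * 4,
               base + row * stride + i * faceSize * 4,
               faceSize * 4);
    }
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA,
                 (GLsizei)faceSize, (GLsizei)faceSize, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, face);
}
free(face);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

The per-row memcpy costs a CPU copy each frame, but it sidesteps the missing unpack state entirely.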

Cropping UIImage like CamScanner

I am stuck on a feature in my application: I want a cropping feature similar to CamScanner's cropping.
The CamScanner screens are: (screenshots omitted)
I have created a similar crop view.
I have obtained the CGPoint of each of the four corners.
But how can I obtain the cropped image from the slanted (quadrilateral) selection?
Please provide some suggestions if possible.
This is a perspective transform problem: in this case a 3D projection is being plotted onto a 2D plane.
The first image has its selection corners in a quadrilateral shape; when you transform that into a rectangle, you will either need to add pixel information (interpolation) or remove some pixels.
So the actual problem is to add the additional pixel information to the cropped image and project it to generate the second image. It can be implemented in various ways:
- You can implement it yourself by applying a perspective transformation matrix with interpolation.
- You can use OpenGL.
- You can use OpenCV.
...and there are many more ways to implement it.
I solved this problem using OpenCV. The following OpenCV functions will help you achieve this:
cvGetPerspectiveTransform
cvWarpPerspective
The first function calculates the transformation matrix from source and destination projection coordinates. In your case the src array will hold the CGPoint values for all four corners, and dst will hold the rectangular projection points, for example {(0,0), (200,0), (200,150), (0,150)}.
Once you have the transformation matrix, pass it to the second function. You can also visit this thread.
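A minimal sketch of that flow using the old OpenCV C API (the 200x150 output size comes from the example above; the function name and corner handling are illustrative):

#include <opencv/cv.h>

// Warp the quadrilateral bounded by `corners` (e.g. the crop view's four
// CGPoints, converted to CvPoint2D32f) into an upright 200x150 image.
IplImage *cropSlanted(IplImage *input, CvPoint2D32f corners[4])
{
    CvPoint2D32f dst[4] = { {0,0}, {200,0}, {200,150}, {0,150} };
    CvMat *map = cvCreateMat(3, 3, CV_32FC1);
    cvGetPerspectiveTransform(corners, dst, map); // 3x3 matrix from the 4 point pairs
    IplImage *out = cvCreateImage(cvSize(200, 150), input->depth, input->nChannels);
    cvWarpPerspective(input, out, map, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0));
    cvReleaseMat(&map);
    return out; // caller releases with cvReleaseImage()
}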
There may be a few other alternatives to the OpenCV library, but it has a good collection of image processing algorithms.
An iOS application using the OpenCV library is available at eosgarden.
I see two possibilities. The first is to calculate a transformation matrix that slants the image and install it in the CATransform3D property of your view's layer.
That would be simple, assuming you knew how to form the transformation matrix that does the stretching. I've never learned how to construct transformation matrices that stretch or skew images, so I can't be of much help there; I'd suggest googling transformation matrices and stretching/skewing.
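For illustration, a minimal sketch of that layer-transform route (the angle, the perspective term, and myView are placeholders, not from the original answer):

#import <QuartzCore/QuartzCore.h>

CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0 / 500.0;                          // simple perspective term (eye distance ~500 pt)
t = CATransform3DRotate(t, M_PI / 6, 0, 1, 0); // slant: rotate 30 degrees around the y-axis
myView.layer.transform = t;

Note that this only displays the image slanted; to produce an actual rectified bitmap you still need something like the OpenCV approach above.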
The other way would be to turn the part of the image you are cropping into an OpenGL texture and map that texture onto your output. The actual texture drawing would be easy, but there are about 1000 kilos of OpenGL setup to do, and a whole lot of learning required to get anything done at all. If you want to pursue that route, I'd suggest searching for simple 2D texture examples using the new iOS 5 GLKit.
Using the code given at this link: http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Instead of using CGRect and CGContextClipToRect, try using CGContextEOClip or CGContextClosePath.
Though I haven't tried this... I have tried drawing a closed path using CGContextClosePath in the touchesBegan, touchesMoved, and touchesEnded events.
Hope this gives more insight into your problem...

Getting pixels data from image on iPhone

I want to use bitmap images as a "map" for levels in an iPhone game. Basically it's all about the locations of obstacles in a rectangular world. The obstacles would be color-coded: where a pixel is white, there's no obstacle; black means there is one at that point.
Now I need to use this data to do two things: (a) display the level map, and (b) use it for in-game calculations. So, in general, I need a way to read the data from the bitmap and create some matrix-like data structure with that information, both to overlay the bitmap onto the level graphics and to calculate collisions and such.
How should I do it? Is there any easy way to read the data from an image? And what's the best format to keep the images in for this?
Have you looked at how Texture2D translates an image file into an OpenGL texture?
Tip: take a look at this method in Texture2D.m:
- (id) initWithCGImage:(CGImageRef)image orientation:(UIImageOrientation)orientation sizeToFit:(BOOL)sizeToFit pixelFormat:(Texture2DPixelFormat)pixelFormat filter:(GLenum) filter
In 3D apps it's quite common to use this kind of representation for height maps. In a height map, you use a texture with colors that range from black to white (white represents the maximum altitude).
(Example images omitted: a grayscale height map and the 3D surface rendered from it.)
That was just to tell you that your representation is not that crazy :).
About reading the bitmap, I would also recommend reading this (just in case you want to go deeper).
Hope I helped a bit!
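If you'd rather skip OpenGL for the data-structure side, here is a minimal sketch, with illustrative names, that reads the bitmap into a byte matrix (0 = obstacle, 1 = free):

#import <UIKit/UIKit.h>

// Draw the image into an 8-bit grayscale context so each byte is one pixel,
// then threshold it into an obstacle matrix. Caller free()s the result.
uint8_t *LoadLevelMap(UIImage *image, size_t *outWidth, size_t *outHeight)
{
    CGImageRef cg = image.CGImage;
    size_t w = CGImageGetWidth(cg), h = CGImageGetHeight(cg);
    uint8_t *cells = calloc(w * h, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(cells, w, h, 8, w, space, kCGImageAlphaNone);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    for (size_t i = 0; i < w * h; i++)
        cells[i] = (cells[i] < 128) ? 0 : 1; // darker than mid-gray counts as an obstacle
    *outWidth = w; *outHeight = h;
    return cells;
}

As for the file format, a lossless one such as PNG is the safe choice: JPEG compression would smear the black/white obstacle edges.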

Help with image processing in an iPhone app

I'm working with an image processing tool in MATLAB. How can I convert MATLAB code to Objective-C?
Here are some of the tasks I want to do:
I want to rotate an oblique line so that it is upright (normal).
I have an algorithm that converts a color image to black & white. (How can I access pixel color values in Objective-C?)
In the function getrgbfromimage, how can I output the pixel values to the console?
How can I run a function (getrotatedimage) on each element of an array?
Quartz 2D (aka. Core Graphics) is the 2D drawing API in iOS. Quartz will, most likely, do everything you're looking for. I recommend checking out the Quartz 2D Programming Guide in the documentation. For your specific requests check out these sections:
Colors and Color Spaces - for color and b&w images
Transforms - for rotating or performing any affine transform
Bitmap Images and Image Masks - for information on the underlying image data
As for running a function on each element of an array, you can use the block iteration API (as long as your app can require iOS 4.0 or higher). An example:
[myArray enumerateObjectsUsingBlock:^(id item, NSUInteger index, BOOL *stop) {
    doSomethingWith(item);
}];
If you just want to call a method on each item in the array, there is also:
[myArray makeObjectsPerformSelector:@selector(doSomething)];
CGBitmapContext will get pixel values from an image.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Reference/CGBitmapContext/Reference/reference.html
This has some demo code:
Retrieving a pixel alpha value for a UIImage (MonoTouch)
printf will dump the RGB values to the console.
NSArray or NSMutableArray will hold your images and a simple for loop will let you iterate through them.
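Putting those pieces together, a hedged sketch of what a getrgbfromimage-style dump might look like (the RGBA layout and the function name are illustrative, not from the original post):

// Draw the image into a 32-bit RGBA context, then printf every pixel.
// Note: with kCGImageAlphaPremultipliedLast the channel values are
// premultiplied by alpha.
void DumpRGB(CGImageRef image)
{
    size_t w = CGImageGetWidth(image), h = CGImageGetHeight(image);
    uint8_t *data = calloc(w * h * 4, 1);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4, space,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            const uint8_t *p = data + (y * w + x) * 4;
            printf("(%zu,%zu) R=%u G=%u B=%u\n", x, y, p[0], p[1], p[2]);
        }
    }
    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    free(data);
}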