Is there a way to quickly modify individual color values in an image? (iPhone)

For example, I have a CGImageRef and I want to shift all pixels which are red to orange. Or to put it in other words: I want to apply a value function on every pixel in an image, which modifies the pixel based on the RGBA values it has. So the value function would calculate the new component values for that pixel based on the current component values and some algorithm applied to it.
I know how I could code that by hand with about 100 lines of code, but I wonder if there is an easier and maybe even faster way?
I believe Brad Larson has mentioned somewhere that things like this can be done on the GPU easily and fast. However, I must support iOS 3.2 so it should not get too fancy.
Would be happy about any ideas.
Thanks!

As far as I know, there are no built-in functions to achieve what you want.
The easiest way to get hardware acceleration for custom image manipulation at the pixel level is the Accelerate framework, which picks the "best execution path" based on the available hardware.
However, Accelerate was only ported to iOS with iOS 4, so it won't help with your iOS 3.2 requirement.
(Disclaimer: I have little to no experience or knowledge of what's possible in OpenGL, so take my answer as limited to the Core Graphics side of iOS.)
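For reference, here is a minimal CPU-side sketch of the "apply a value function to every pixel" approach that works on iOS 3.2 without Accelerate: draw the CGImage into an RGBA bitmap context, walk the buffer, and rewrite each pixel. The red-to-orange rule and the function name are only illustrative assumptions, not a tested implementation.

    #import <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    // Hypothetical helper: returns a new image with red-ish pixels pushed toward orange.
    CGImageRef CreateImageWithValueFunction(CGImageRef source) {
        size_t width  = CGImageGetWidth(source);
        size_t height = CGImageGetHeight(source);
        size_t bytesPerRow = width * 4;

        unsigned char *pixels = calloc(height * bytesPerRow, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                                     colorSpace, kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);

        for (size_t i = 0; i < width * height; i++) {
            unsigned char *p = pixels + i * 4;   // p[0]=R, p[1]=G, p[2]=B, p[3]=A
            // Value function (an assumption): if the pixel is predominantly red,
            // raise green so the hue shifts toward orange. Swap in your own algorithm.
            if (p[0] > 150 && p[1] < 100 && p[2] < 100) {
                unsigned newGreen = p[1] + 80;
                p[1] = (unsigned char)(newGreen > 255 ? 255 : newGreen);
            }
        }

        CGImageRef result = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        free(pixels);
        return result;   // caller is responsible for CGImageRelease()
    }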

Related

Fake long exposure on iOS

I have to implement long-exposure photo capabilities in an app. Since I know this is not really possible, I have to fake it. It should work like "Slow Shutter" or "Magic Shutter".
Sadly, I have no clue how to achieve this. I know how to take images with the camera (through AVFoundation), but I'm stuck on merging them to fake long shutter times.
Possibly I need to manipulate and combine all the images with Core Graphics, but I'm not sure about this (or even how). Maybe there's a better solution.
I would appreciate any help I can get here,
thank you!
You might try the "plus lighter" blend mode (kCGBlendModePlusLighter).
Well, I suppose it would be possible to average together the results of several shots. I've mucked around a bit with the Core Graphics stuff to resize images (averaging together adjacent pixels), but with lower-res images. The algorithm I used is here -- maybe it'll give you some ideas.
There may, of course, be a better way, and some tricks for working efficiently with high-res images. Can't help you there.
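If it helps, here is a rough sketch of that averaging idea (the function name and details are my own assumptions): drawing frame k with alpha 1/(k + 1) over the accumulated result keeps a running average of all frames, which approximates a long exposure of a static scene.

    #import <UIKit/UIKit.h>

    // Hypothetical helper: average an array of same-sized UIImages.
    UIImage *AveragedImage(NSArray *frames) {
        UIImage *first = [frames objectAtIndex:0];
        CGRect rect = CGRectMake(0, 0, first.size.width, first.size.height);

        UIGraphicsBeginImageContext(first.size);
        for (NSUInteger k = 0; k < [frames count]; k++) {
            // Drawing frame k at alpha 1/(k+1) keeps a running average:
            // result = (1/(k+1)) * frame[k] + (k/(k+1)) * averageSoFar
            [[frames objectAtIndex:k] drawInRect:rect
                                       blendMode:kCGBlendModeNormal
                                           alpha:1.0f / (k + 1)];
        }
        UIImage *averaged = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return averaged;
    }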
Convert the images to pixel bitmaps. Align and stack the bitmaps. Then try applying various 3D convolution filters to the 3D pixel array.

Measuring distance with iPhone camera

How can I implement a way to measure distances in real time (with the video camera?) on the iPhone, like this app, which uses a card of known size as a reference to work out the distance?
Are there any other ways to measure distances? Or how would I go about doing this using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, it doesn't seem too user friendly to me.
So you either need a reference object of some known size, or you need to deduce the scale from the image itself. One idea I just had that might help is to use the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some things).
Here's what I think.
When the user wants to measure something, they take a picture of it, but you actually take two separate images: one with the flash on, one with the flash off. Then you can analyze the lighting differences and the flash reflection between the two to estimate the scale of the image. This will probably only work for close, not-too-shiny objects.
But that's about the only other way I could think of to deduce scale from an image without any fixed reference objects.
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and its f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Another solution may be to use two laser pointers.
Basically, you would shine two parallel laser pointers at, say, a wall. The further back you go, the closer together the two dots will appear in the video, even though they remain the same physical distance apart. You can then come up with a simple formula to measure the distance based on how far apart the dots appear in the photo.
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
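For what it's worth, the formula that falls out of the pinhole camera model is simple. Everything below (the names, and the need to calibrate the focal length in pixels for your particular device) is an assumption on my part, not something from the linked thread:

    // Distance to the wall from the apparent separation of the two dots.
    // beamSeparationMeters: physical distance between the parallel beams.
    // separationInPixels:   measured distance between the dots in the photo.
    // focalLengthPixels:    camera focal length expressed in pixels (calibrate once).
    static double DistanceToWall(double beamSeparationMeters,
                                 double separationInPixels,
                                 double focalLengthPixels) {
        // Pinhole model: pixelSeparation = focal * realSeparation / distance,
        // so: distance = focal * realSeparation / pixelSeparation
        return focalLengthPixels * beamSeparationMeters / separationInPixels;
    }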

Drawing pixels with XCode

I am trying to draw individual pixels in Xcode.
I already know Objective-C, but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
I don't know how to find a tutorial for this, and I think it must be very quick to do. Do you have a sample like this, or could you point me to a tutorial?
Any help is greatly appreciated.
Simply use NSBitmapImageRep with setColor:atX:y:
Create an empty NSBitmapImageRep.
Every time you need to update the view, do something like this:
- Set the specific pixel to a certain color with setColor:atX:y:
- Convert NSBitmapImageRep to NSImage
- Show result in a NSImageView
This works perfectly if you don't need to update the view too many times per second.
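A rough sketch of those steps (note that NSBitmapImageRep, NSImage and NSImageView are AppKit classes, i.e. macOS; the sizes and variable names are placeholders):

    // 1. Create an empty 8-bit RGBA bitmap rep.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    // 2. Set individual pixels.
    [rep setColor:[NSColor redColor] atX:10 y:20];

    // 3. Wrap the rep in an NSImage and show it in an NSImageView.
    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];
    [imageView setImage:image];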
I already know Objective-C, but not the Quartz/graphics stuff, and I'm not interested in it at the moment. I simply want a basic app that lets me have an X*Y map and show the pixel at (x, y) with an RGB color.
If you're not interested in learning Core Graphics, then tough luck. You get two choices for graphics: OpenGL, or UIKit/Core Graphics. Your choice, but Core Graphics is considerably easier. You can use OpenGL to paint on a per-pixel basis, but I'm assuming that if you have no interest in learning Core Graphics, you're probably not going to be keen on OpenGL. For high-performance applications, OpenGL is the only realistic option.
So, if you don't want to learn OpenGL your best bet is the Quartz programming guide:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html%23//apple_ref/doc/uid/TP40007533-SW1
Out of interest, why wouldn't you want to look into Quartz?
You could either use Core Graphics to draw a filled rect in drawRect: after calling setNeedsDisplay on the view, or you could generate a bitmap context and assign its image to the layer contents of your view at a 1.0 scale if you want actual pixels instead of 1x1-point rectangles.
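A small sketch of the second suggestion (all names are illustrative and error handling is omitted): plot into a raw RGBA buffer, wrap it in a bitmap context, and hand the resulting CGImage to the view's layer, so that one buffer entry maps to one pixel at 1.0 scale.

    #import <QuartzCore/QuartzCore.h>

    - (void)showPixelMapInView:(UIView *)view width:(size_t)w height:(size_t)h {
        size_t bytesPerRow = w * 4;
        unsigned char *buffer = calloc(h * bytesPerRow, 1);

        // Example: set the pixel at (x, y) = (10, 20) to opaque red (RGBA).
        size_t offset = 20 * bytesPerRow + 10 * 4;
        buffer[offset + 0] = 255;   // R
        buffer[offset + 3] = 255;   // A

        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(buffer, w, h, 8, bytesPerRow,
                                                 space, kCGImageAlphaPremultipliedLast);
        CGImageRef image = CGBitmapContextCreateImage(ctx);

        view.layer.contents = (id)image;   // one buffer entry per layer pixel

        CGImageRelease(image);
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
        free(buffer);
    }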

iPhone image processing

I am building a night-vision application, but I can't find any useful algorithm to apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to pick a white point from a sample of the brighter pixels in the image and to apply a curve. You'll probably also need to run a noise-reduction filter and a smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of the algorithm you need.
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one by magic. ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are iPhone ports, for example http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared, and the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally to just the darker areas) and saturating them, and/or using a contrast or edge-enhancement filter.
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
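A toy sketch of the "scale up the pixel values with a curve" idea above, operating on an RGBA buffer (for example one obtained from a CGBitmapContext as in the earlier sketch); the function name and the gamma value are assumptions to tune, not a recommendation:

    #include <math.h>

    // Apply a gamma curve to every RGB component; gamma < 1.0 lifts the shadows.
    static void LiftShadows(unsigned char *pixels, size_t pixelCount, double gamma) {
        for (size_t i = 0; i < pixelCount; i++) {
            unsigned char *p = pixels + i * 4;      // RGBA
            for (int c = 0; c < 3; c++) {           // leave alpha untouched
                double normalized = p[c] / 255.0;
                p[c] = (unsigned char)(pow(normalized, gamma) * 255.0 + 0.5);
            }
        }
    }

    // e.g. LiftShadows(buffer, width * height, 0.45);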
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.

How to morph two images in iPhone programming

How do I morph two images in iPhone programming?
Your question is not iPhone-specific; the kind of algorithm you are looking for is language-agnostic, since it just works with images.
By the way, it's quite complex to morph two images. Usually you have to:
embed a grid of points over the two images that links the features to be morphed. For example, if you have two faces you would use a grid that connects the eyes, the mouth, the ears, the nose, the edge of the face and so on: these two grids tell the morpher how to "translate" a point into another one while blending the two images;
the previous step can be done automatically (with specific software) or by hand; the more points you place, the better your results will be;
then you can do the real morphing sequence: basically you interpolate between the two images (the parameter you use decides how similar the final result is to the first or to the second image), as sketched below;
you should also apply some blending to actually create a believable result, again using a parametric function of the morphing position.
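A minimal sketch of just the interpolation step mentioned above (with no feature grid this is only a cross-dissolve, not a true morph; the names are placeholders):

    // t = 0.0 gives imageA, t = 1.0 gives imageB.
    UIImage *CrossDissolve(UIImage *imageA, UIImage *imageB, CGFloat t) {
        CGRect rect = CGRectMake(0, 0, imageA.size.width, imageA.size.height);

        UIGraphicsBeginImageContext(imageA.size);
        [imageA drawInRect:rect];
        [imageB drawInRect:rect blendMode:kCGBlendModeNormal alpha:t];
        UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return blended;
    }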
You can use UIView animation to transition from one UIView to another. This should provide some sort of lame morphing.
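Something along these lines, assuming you have two image views showing the source and target images (UIViewAnimationOptionTransitionCrossDissolve requires iOS 4):

    [UIView transitionFromView:firstImageView
                        toView:secondImageView
                      duration:1.0
                       options:UIViewAnimationOptionTransitionCrossDissolve
                    completion:NULL];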
You can use XMRM, which is written in C++: http://www.cg.tuwien.ac.at/~xmrm/
There is no image morphing API in the iOS SDK.
No, there isn't an API for it. You'll have to do it yourself.
...ask a short question, get a short answer...