Polling iPhone Camera to Process Image

The scenario is that I want my app to process (in the background, if possible) images being seen by the iPhone camera.
For example: the app is running, the user places the phone down on a piece of red cardboard, and then I want to display an alert view saying "Phone placed on Red Surface" (this is a simplified version of what I want to do, but it keeps the question direct).
Hope this makes sense. I know there are two separate concerns here:
1) How to process images from the camera in the background of the app (if we can't do this, we can initiate the process with, say, a button click if needed).
2) Processing the image to say what solid colour it is sitting on.
Any help/guidance would be greatly appreciated.
Thanks

Generic answers to your two questions:
Background processing of the image can be triggered as a timer event. For example, every 30 seconds, capture the image on the screen and do the processing behind the scenes. If the processing is not computationally or time intensive, this should work.
It is technically possible to read the color of, say, one pixel programmatically. If you are sure that the entire image is just one color, you can try that approach: get a few random points and read the color of the pixel at each one. But if the image (in your example, the red board) contains a picture or multiple colors, then that will require more detailed image-processing techniques.
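If you go down the pixel-sampling route, a minimal sketch of reading one pixel's color from a captured UIImage might look like the following. The pixelColor helper and the commented timer trigger are purely illustrative (they are not SDK APIs), and the sketch assumes the image has a CGImage backing:

```swift
import UIKit

// Illustrative helper (not an SDK API): returns the RGBA components (0...255)
// of a single pixel by drawing the image into a 1x1 bitmap context.
// The point is in CGImage pixel coordinates.
func pixelColor(in image: UIImage, at point: CGPoint) -> (r: UInt8, g: UInt8, b: UInt8, a: UInt8)? {
    guard let cgImage = image.cgImage else { return nil }

    var pixel = [UInt8](repeating: 0, count: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let drawn = pixel.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: 1,
                                      height: 1,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
            return false
        }
        // Shift the drawing so the requested pixel lands at (0, 0) of the 1x1 context.
        context.translateBy(x: -point.x, y: -point.y)
        context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                         width: CGFloat(cgImage.width),
                                         height: CGFloat(cgImage.height)))
        return true
    }

    return drawn ? (pixel[0], pixel[1], pixel[2], pixel[3]) : nil
}

// Hypothetical trigger for the "every 30 seconds" idea; `latestFrame` would
// come from whatever capture mechanism you use, so this stays commented out.
// Timer.scheduledTimer(withTimeInterval: 30, repeats: true) { _ in
//     if let frame = latestFrame {
//         print(pixelColor(in: frame, at: CGPoint(x: 10, y: 10)) ?? "no pixel")
//     }
// }
```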
Hope this helps

1) Image Capture
There are two kinds of apps that continually take imagery from the camera: media capture (e.g. Camera, iMovie) and augmented reality apps.
Here's the iPhone SDK tutorial for media capture:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW3
Access the camera with iPhone SDK
Augmented Reality apps take continual pictures from the camera for processing/overlay. I suggest you look into some of the available AR kits and see how they get a continual stream from the camera and also analyze the pixels.
Starting an augmented reality (AR) app like Panasonic VIERA AR Setup Simulator
http://blog.bordertownlabs.com/post/157320598/customizing-the-iphone-camera-view-with
2) Image Processing
Image processing is a really big topic that's been addressed in multiple other places:
https://photo.stackexchange.com/questions/tagged/image-processing
https://dsp.stackexchange.com/questions/tagged/image-processing
https://mathematica.stackexchange.com/questions/tagged/image-processing
...but for starters, you'll need to use some heuristic analysis to determine what you're looking for. Sampling the captured pixels in a bunch of places (e.g. corners + middle) may help, as would generating a histogram of colour intensities: if there's lots of red but little or no blue and green, it's a red card.
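As a rough illustration of that heuristic, suppose you have already sampled a handful of pixel colours (corners plus centre, say); something like the following could then make the "red card" decision. The looksRed function and its thresholds are placeholders for illustration, not tuned values:

```swift
// Toy heuristic: average each channel over a few sampled pixels and
// call the surface "red" if red dominates while green and blue are low.
func looksRed(_ samples: [(r: UInt8, g: UInt8, b: UInt8)]) -> Bool {
    guard !samples.isEmpty else { return false }

    let count = Double(samples.count)
    let avgR = samples.reduce(0.0) { $0 + Double($1.r) } / count
    let avgG = samples.reduce(0.0) { $0 + Double($1.g) } / count
    let avgB = samples.reduce(0.0) { $0 + Double($1.b) } / count

    // Arbitrary illustrative thresholds: lots of red, little green or blue.
    return avgR > 150 && avgG < 80 && avgB < 80
}
```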

Related

Apply custom camera filters on live camera preview - Swift

I'm looking to make a native iPhone iOS application in Swift 3/4 which uses the live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea is for me to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for a similar problem were posted > 2 years ago and I'm not even sure if they provide me with the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app. To apply the filter live as you are seeing the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to get the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL, or whatever), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass.
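A minimal sketch of that capture pipeline in Swift, running a Core Image filter over each frame, might look like this. The class name and the choice of filter are placeholders, and display/rendering of the processed frames is left out:

```swift
import AVFoundation
import CoreImage

// Sketch only: capture live frames with AVCaptureVideoDataOutput and run a
// Core Image filter over each pixel buffer. Rendering (Metal view, preview
// layer, etc.) and error/permission handling are omitted.
final class FilteredCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let ciContext = CIContext()
    private let filter = CIFilter(name: "CIPhotoEffectNoir") // placeholder filter

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

        session.startRunning()
    }

    // Called once per frame on the queue set above.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Wrap the pixel buffer in a CIImage and apply the filter.
        let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
        filter?.setValue(cameraImage, forKey: kCIInputImageKey)

        if let filtered = filter?.outputImage,
           let rendered = ciContext.createCGImage(filtered, from: filtered.extent) {
            // Hand `rendered` (or the CIImage itself) to your display code here.
            _ = rendered
        }
    }
}
```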
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.

How to segment an image in iOS to remove background and retain the foreground picture

I need to segment an image in iOS for a fashion app, keeping only the foreground and removing all the background, much like the background-removal tools found in various photo editors. Please help me.
General background subtraction is an unsolved problem, so getting perfect results is going to be a big effort. With that said, you can probably get close. Here are a couple of suggested avenues:
I am guessing that your app will place clothes on a human, or something of the sort. Instead of getting a perfect segmentation, run a person detector, remove all of the image except for the detected person, and fit a part-based human model to the remaining image. Then you have the pose of the person, and can do your image processing accordingly.
Allow the user to input some strokes from the foreground and some strokes from the background, and run a graph-cuts-based image segmentation algorithm on the frame.
Begin your process by having the user not be present in your video stream. From this, learn the background distribution (start with a simple histogram of background pixels, there are much more elaborate schemes but you need a starting place). Then, when the user enters the scene, create a binary image containing the connected components that don't fit into the learned background distribution. This will not be perfect, but you will start to see something close to a binary image where the white pixels are your user, and the black pixels are the background. Use morphology operators to join any large connected components that are slightly separated, and threshold your image to remove small noise in the image, from things like specular objects and illumination changes.
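To make that third suggestion concrete, here is a toy sketch of the per-pixel decision, under some simplifying assumptions (the background model is a single grayscale frame rather than a learned distribution, and the threshold is arbitrary); real code would follow it with the morphology and noise-removal steps described above:

```swift
// Toy foreground mask: mark a pixel as foreground when it differs from the
// learned background frame by more than a threshold. Both buffers are assumed
// to be 8-bit grayscale images of identical dimensions, flattened row by row.
func foregroundMask(background: [UInt8], frame: [UInt8], threshold: Int = 30) -> [Bool] {
    precondition(background.count == frame.count, "frames must be the same size")
    return zip(frame, background).map { abs(Int($0) - Int($1)) > threshold }
}
```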
Like I said (and is mentioned in the comments), this is not an easy problem, but you can come up with a good approximation if you put some time into it. I suggest the third method I listed. It is achievable, and can be broken down into small parts so you can tell when you're making progress.
Good luck!

Build iPhone app that can recognise colour from streaming camera

I am building an iPhone app to recognise a specific colour through the iPhone camera when the phone is placed onto a colour board.
Note that I want it to work through the streaming camera output, not just a still image or photo.
My initial thought was to scan a series of pixels (say four, one at each corner of the camera feed) and, if the colours registered in each pixel match, display the colour (as text) to the user.
Can someone please point me in the right direction as far as example code or an API, or even a better design solution to the problem.
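One way to implement that corner-sampling idea on a live frame is to read the raw bytes out of each CVPixelBuffer delivered by an AVCaptureVideoDataOutput. The sketch below assumes the output is configured for kCVPixelFormatType_32BGRA; the helper name is made up for illustration:

```swift
import CoreVideo

// Illustrative helper: read the BGRA colour of one pixel from a camera frame.
// Assumes the AVCaptureVideoDataOutput was configured for 32BGRA output.
func pixelBGRA(in pixelBuffer: CVPixelBuffer, x: Int, y: Int) -> (b: UInt8, g: UInt8, r: UInt8, a: UInt8)? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    guard x >= 0, x < width, y >= 0, y < height,
          let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let bytes = base.assumingMemoryBound(to: UInt8.self)
    let offset = y * bytesPerRow + x * 4  // 4 bytes per pixel in BGRA

    return (bytes[offset], bytes[offset + 1], bytes[offset + 2], bytes[offset + 3])
}
```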

How to make iPhone Camera less sensitive to movement

I have made a "two screen app" in which the camera view is divided into two sides, left and right, each of which can be captured independently and merged later.
The problem I am facing is that whenever the user touches the capture button, the camera moves a bit and the captured image shakes, so the user is unable to match the two halves.
Is there any way to make the camera less sensitive to minor movements?
I am using the image picker.
Thanks
Roll your own image capture. Capture a larger image than necessary and stabilize the shown image using the gyro.
There is a great example of stabilizing the compass in much the same way here:
http://www.sundh.com/blog/2011/09/stabalize-compass-of-iphone-with-gyroscope/
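A hedged sketch of that gyro idea using CMMotionManager follows; the GyroStabilizer class and its scale constant are made up for illustration and would need tuning (they are not from the linked post):

```swift
import UIKit
import CoreMotion

// Sketch: watch the gyroscope and nudge an (oversized) image view against
// small rotations so the visible portion appears steadier.
final class GyroStabilizer {
    private let motion = CMMotionManager()

    func start(stabilizing view: UIView, pointsPerRadian: CGFloat = 40) {
        guard motion.isGyroAvailable else { return }
        motion.gyroUpdateInterval = 1.0 / 60.0

        motion.startGyroUpdates(to: .main) { data, _ in
            guard let rate = data?.rotationRate else { return }
            // rotationRate is in radians/second; scale by the update interval
            // to get an approximate per-frame correction in points.
            let dx = -CGFloat(rate.y) * pointsPerRadian / 60.0
            let dy = -CGFloat(rate.x) * pointsPerRadian / 60.0
            view.center.x += dx
            view.center.y += dy
        }
    }

    func stop() {
        motion.stopGyroUpdates()
    }
}
```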

How to draw an image according to the pixels of another image?

Hi all. What I want is to map between images. Suppose I have two images of people, one of a fat person and another of a thin person. I want to match their faces and eyes, and increase or decrease the face size and eye size of one image according to the other, much like the squeeze and fatten face effects you can apply in Adobe Photoshop. These are the kinds of image manipulations I want to implement, but I don't know where to start.
Please guide and help me. Can I do all this with Core Graphics, and if so, how?
Any reference, tutorial address, or sample code would be appreciated.
You are probably going to have to deal with some sort of edge detection and face recognition algorithms, at the very least, if this is to be accomplished automatically. Otherwise, if the user is going to resize one image to match the other, this only requires simple resizing operations, driven perhaps by pinch gestures.
UPDATE:
For manual resizing:
Download the source code for the great book Cool iPhone Projects. One of the projects is called 'Touching'. This project contains code that accomplishes what you need: pinch and zoom functionality.
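For the manual-resizing route, a minimal sketch of pinch-to-resize with a UIPinchGestureRecognizer might look like this (the view controller and the "face" asset name are placeholders, not taken from the book's sample code):

```swift
import UIKit

// Sketch: scale an image view with a pinch gesture by accumulating the
// recognizer's scale into the view's transform.
final class PinchResizeViewController: UIViewController {
    private let imageView = UIImageView(image: UIImage(named: "face")) // placeholder asset

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true
        imageView.contentMode = .scaleAspectFit
        imageView.frame = view.bounds
        view.addSubview(imageView)

        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:)))
        imageView.addGestureRecognizer(pinch)
    }

    @objc private func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
        guard let target = recognizer.view else { return }
        // Apply the incremental scale, then reset so each callback is relative.
        target.transform = target.transform.scaledBy(x: recognizer.scale, y: recognizer.scale)
        recognizer.scale = 1.0
    }
}
```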