How can I do real-time face detection when using the iPhone camera to take a picture?
Just like this example: http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/ (this example doesn't provide the .xcodeproj, so I can't compile the .cpp files).
Another example: http://blog.beetlebugsoftware.com/post/104154581/face-detection-iphone-source
(can't be compiled either)
Do you have any solution? Please give me a hand!
Wait for iOS 5:
Create amazing effects in your camera and image editing apps with Core Image. Core Image is a hardware-accelerated framework that provides an easy way to enhance photos and videos. Core Image provides several built-in filters, such as color effects, distortions and transitions. It also includes advanced features such as auto enhance, red-eye reduction and facial recognition.
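Once you're on the iOS 5 SDK, face detection is exposed through Core Image's CIDetector. A minimal, untested sketch (the facesInImage: method name is my own) might look like this:

    // Hedged sketch: iOS 5 Core Image face detection via CIDetector.
    #import <CoreImage/CoreImage.h>

    - (NSArray *)facesInImage:(UIImage *)image
    {
        CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
        NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                             forKey:CIDetectorAccuracy];
        CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                   context:nil
                                                   options:options];
        // Returns CIFaceFeature objects; each has a bounds rect plus eye and mouth positions.
        // Note that Core Image uses a bottom-left origin, so flip the rects before drawing in UIKit.
        return [detector featuresInImage:ciImage];
    }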
I want to integrate OCR into an iOS application. I have found some helpful tutorials; in particular, this article, How To: Compile and Use Tesseract (3.01) on iOS (SDK 5), helped me a lot. Now I can read plain text from any image that has a clear background. But I want to read information from an ID card, which doesn't have a clear background at all!
I have also found some answers on Stack Overflow about removing the background, for example: Prepare complex image for OCR, Remove Background Color or Texture Before OCR Processing and How to use OpenCV to remove non text areas from a business card?
But those solutions are not for iOS. I understand the steps, but I need an iOS example, and if it uses Core Image, then that would be even better for me.
I have no problem on the OCR end; my problem is removing the background.
Initial Image:
After removing, the image should look like this:
Can you point me to an iOS example? Or is it possible to show me an iOS example that removes every color except black?
The best way to detect a card in a scene is to train a cascade classifier.
Training is not a small project; the number of sample images should be more than 10K.
Once you have the trained cascade classifier, you can detect the card quickly.
The detection is very fast on iOS, but the Tesseract recognition is not.
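For the detection step, a rough sketch of loading a trained cascade and running it with OpenCV from an Objective-C++ (.mm) file could look like this. The "card_cascade.xml" file is a hypothetical classifier you'd produce with your own training run, and the scaleFactor/minNeighbors values are common defaults worth tuning for your data:

    #import <opencv2/opencv.hpp>
    #import <Foundation/Foundation.h>

    // Returns rectangles for detected cards in a grayscale cv::Mat.
    static std::vector<cv::Rect> DetectCards(const cv::Mat &grayImage)
    {
        static cv::CascadeClassifier classifier;
        if (classifier.empty()) {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"card_cascade"
                                                             ofType:@"xml"];
            classifier.load(std::string(path.UTF8String));
        }

        std::vector<cv::Rect> cards;
        // scaleFactor 1.1, minNeighbors 3: typical starting values.
        classifier.detectMultiScale(grayImage, cards, 1.1, 3);
        return cards;
    }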
Are there any filters available in iOS to convert an image into a cartoon-like image, exactly like the picture above?
For a much faster solution than ImageMagick, you could use the GPUImageToonFilter from my GPUImage framework:
It combines Sobel edge detection with posterization of the image to give a good cartoon-like feel. As implemented in this framework, it's fast enough to run on realtime video from the iPhone's camera, and is probably at least an order of magnitude faster than something similar in ImageMagick. My framework's also a little easier to integrate with an iOS project than ImageMagick.
If you want more of an abstract look to the image, the GPUImageKuwaharaFilter converts images into an oil painting style, as I show in this answer.
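As a rough sketch of how the filter is typically applied to a still image (the asset name is made up, and convenience method names can vary between GPUImage versions, so check the headers you're linking against):

    #import "GPUImage.h"

    // Hedged sketch: apply GPUImageToonFilter to a still image.
    UIImage *inputImage = [UIImage imageNamed:@"photo.jpg"];
    GPUImageToonFilter *toonFilter = [[GPUImageToonFilter alloc] init];
    UIImage *cartoonImage = [toonFilter imageByFilteringImage:inputImage];

    // For live video, attach the same filter between a GPUImageVideoCamera and a GPUImageView:
    // [camera addTarget:toonFilter]; [toonFilter addTarget:filteredView]; [camera startCameraCapture];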
Try using ImageMagick for iOS: http://www.imagemagick.org/download/iOS/
Of course you'll need to spend several hours learning how to use ImageMagick on iOS.
But then you should also look at: http://www.fmwconcepts.com/imagemagick/cartoon/index.php
and maybe also on:
http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=11140&start=0&st=0&sk=t&sd=a
This Core Image filter section in the iOS dev library, possibly combined with the script referenced by Jonas and a little luck, might get you where you're going. Not sure, having never used either of these technologies.
I have an equalizer view with 10 bars in OpenGL ES which can light up and down. Now I'd like to drive this equalizer view from the background music that is playing in iOS.
Someone suggested what I need is a Fast Fourier Transform to transform the audio data into something else. But since there are so many audio visualizations floating around, my hope is that there is an open source library or anything that I could start with.
Maybe there are open source iOS projects which do audio visualization?
Yes.
You can try this Objective-C library, which I wrote for exactly this purpose. It gives you an interface for playing files from URLs and then getting real-time FFT and waveform data, so you can feed your OpenGL bars (or whatever graphics you're using to visualise the sound). It also tries to deliver very accurate results.
If you want to do it in Swift, you can take a look at this example, which is cleaner and also shows how to actually draw the equalizer view.
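If you'd rather roll your own, the underlying idea (which the library wraps) is just an FFT over each buffer of samples followed by binning the magnitudes into your 10 bars. Here's a hedged sketch using Apple's Accelerate framework, assuming you already receive 1024-sample mono float buffers from somewhere (e.g. an audio tap); linear binning is used for brevity, though log-spaced bands usually look better:

    #import <Accelerate/Accelerate.h>

    enum { kFrameSize = 1024, kLog2FrameSize = 10, kBarCount = 10 };

    // Turn one buffer of samples into kBarCount equalizer bar levels.
    static void ComputeBars(const float *samples, float *bars /* kBarCount entries */)
    {
        static FFTSetup setup = NULL;
        if (setup == NULL) setup = vDSP_create_fftsetup(kLog2FrameSize, kFFTRadix2);

        float real[kFrameSize / 2], imag[kFrameSize / 2];
        DSPSplitComplex split = { real, imag };

        // Pack the samples into split-complex form, run an in-place real FFT,
        // then take the squared magnitude of every frequency bin.
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, kFrameSize / 2);
        vDSP_fft_zrip(setup, &split, 1, kLog2FrameSize, FFT_FORWARD);

        float magnitudes[kFrameSize / 2];
        vDSP_zvmags(&split, 1, magnitudes, 1, kFrameSize / 2);

        // Average groups of adjacent bins into the bars driving the OpenGL view.
        const int binsPerBar = (kFrameSize / 2) / kBarCount;
        for (int bar = 0; bar < kBarCount; bar++) {
            float sum = 0.0f;
            vDSP_sve(magnitudes + bar * binsPerBar, 1, &sum, binsPerBar);
            bars[bar] = sum / binsPerBar;
        }
    }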
For every developer there comes a day to improve the user interface experience, because apps are evaluated mainly on how carefully the UI is crafted.
So I've looked around the web and found some PSDs to use as a starting point for designing my apps.
My question is: how do you transform a PSD prototype into a well-working app?
I don't understand how a mockup can help a developer build a UI...
Can someone clarify the situation for me?
Well, I'd be careful to make a distinction between the graphics an app uses and the actual user interface. Certainly the graphics are part of the UI, but the UI is so much more than that. Depending on how it is done, Photoshop mock-ups can range from simple graphics you can use for your interface to complex 'scenes' describing how the app functions. In the latter case, the mock-up can be useful for UI design; in the former case, it just gives you pretty images to use (which can certainly be useful).
But to more directly answer your question, most people take 'slices' (individual pieces) of the Photoshop image and export them as .png (or .jpg) images. If the .psd file doesn't already have the images 'sliced', look up 'photoshop image slicing' on Google. You can then import them into Xcode and use them as background images for the controls you want to use. Especially since iOS 5.0, images can be used for a lot of controls. Also, you'll probably want to make the image resizable with proper UIEdgeInsets, as shown below. This lets the image resize without pixelation by setting an area that can be tiled within the image.
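As a small sketch of that last point, a sliced export can be used as a stretchable control background like this ("button_bg.png", the inset values and myButton are all hypothetical; match them to your artwork):

    // Stretchable background from a sliced PSD export (iOS 5+).
    UIImage *background = [[UIImage imageNamed:@"button_bg.png"]
                           resizableImageWithCapInsets:UIEdgeInsetsMake(12.0, 12.0, 12.0, 12.0)];
    [myButton setBackgroundImage:background forState:UIControlStateNormal];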
My app will let users cut out things from photos. They'll be able to either select a photo already in their iPhone's photo library, or take a new one with the camera. From what I understand, UIImagePicker is the simplest way to accomplish picking a photo from the library or taking a new one. However, I also understand that it only provides basic image editing (zoom, crop). I want my image editing to allow for the creation of Bezier curves that, once all joined together, will cut out the enclosed area, saving it without the surrounding background.
The official Apple documentation on UIImagePicker suggests that the AV Foundation framework is required for providing custom image editing, as opposed to the basic zoom and crop. So my first questions are:
1. Is AV Foundation indeed what I want to use?
2. Will it get used in conjunction with UIImagePicker (i.e., UIImagePicker is used to select the photo or take a new one, and then my AV Foundation code takes over for the image editing)?
3. Can anyone offer good resources on getting started learning the code for this process?
My final question is about the actual Bezier curve generation/manipulation. It appears that the Core Graphics Framework has support for this, but there is also the UIBezierPath object, which is apparently some kind of wrapper for the Core Graphics tools I would otherwise use.
So my final question: will I want to use the UIBezierPath object, or does what I previously described require more fine-grained control that UIBezierPath can't provide, thereby forcing me to use the Core Graphics framework directly?
Thanks!
1. AV Foundation allows you to talk to the camera, to configure it in various ways, and to receive a live feed from it. So it's good for taking new pictures or movies, but not for selecting them from the camera roll or for editing them. You'd likely want to use AV Foundation to replace the image capture duties that UIImagePicker supplies. Probably you'll want to use a UIImagePicker with allowsEditing set to NO so as to be able to provide your own, entirely separate editing interface.
2. No, it's a different sort of task.
3. I'm unaware of any tutorials on this sort of thing, but the docs are pretty good. I've posted complete code for capturing a live feed from the camera in answers like this one; I'm not sure whether that's a more helpful way to see how some of the AV Foundation classes can be chained together.
What you'll probably end up doing in order to edit an image is starting with a UIImage, creating a CoreGraphics bitmap context (which is something you can draw to), doing some sort of compositing to that and then converting the result into an image and saving it back out to the camera roll.
UIBezierPath is a wrapper over the Core Graphics stuff, but will probably do what you want. addClip can set a defined path to be the new clipping path on the current context, or you can use the CGPath property if you need to go a bit further afield than UIKit's idea of a current context.
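To make that concrete, here is a hedged sketch of clipping a UIImage to a closed UIBezierPath and getting back a new image with everything outside the path transparent. The path is assumed to already be closed and expressed in the image's coordinate space:

    // Draw the source image through a bezier-path clip into a fresh bitmap context.
    UIImage *CutOutImageWithPath(UIImage *sourceImage, UIBezierPath *path)
    {
        UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, sourceImage.scale);
        [path addClip];                              // restrict drawing to the enclosed area
        [sourceImage drawAtPoint:CGPointZero];       // only the clipped region gets drawn
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }

You could then hand the result to UIImageWriteToSavedPhotosAlbum() to put it back in the camera roll.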
Look for the iPhone cookbook; maybe kickasstorrents still has it.
C07 has everything you need: camera, overlay, loading, picking, editing, snapping, hiking camera, saving docs, sending images, image scroller, thumbnails, masking, etc.