Image recognition in real time - Swift

I am searching for a way to achieve real-time image recognition dynamically: I have to scan an object (it can be a product) and show details about that object.
Is this possible using native frameworks like ARKit, or do I have to go with third-party libraries and more recent methodology?
Example: https://www.youtube.com/watch?v=GbplSdh0lGU
Can someone suggest a way?
Thanks.

Yes, it's possible by using Apple's native CoreML and Vision frameworks.
But you need to know about all the products in your library in advance.
You can read about it in this Medium post: CoreML Machine Learning on iOS.
Also, watch this WWDC 2017 video about Vision/CoreML frameworks.
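In rough terms, the pipeline is: grab camera frames (e.g. from an AVCaptureVideoDataOutput delegate), wrap your trained model in a VNCoreMLModel, and run a VNCoreMLRequest per frame. A minimal sketch, assuming a classification model you supply yourself (ProductClassifier is just a placeholder name for its generated class):

    import AVFoundation
    import CoreML
    import Vision

    // Minimal sketch: classify one camera frame with a Core ML model via Vision.
    // "ProductClassifier" is a placeholder for whatever .mlmodel you train or download.
    func classify(_ pixelBuffer: CVPixelBuffer) {
        guard let coreMLModel = try? ProductClassifier(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            // Take the top classification and use its identifier to look up product details.
            guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
            print("Detected \(best.identifier) (confidence \(best.confidence))")
        }

        // pixelBuffer would typically come from captureOutput(_:didOutput:from:).
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }

If you also want AR overlays anchored to the object, an ARSession's frames expose a capturedImage pixel buffer that can be fed into the same request; the classification step itself stays as above.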

Related

I am trying to develop a pitch detection feature in Flutter and need some suggestions regarding what kind of packages and libraries to use

I want a pitch detection feature where a user records his voice and the app processes the audio and reports the frequency of the voice. With this knowledge, a musician can tune their voice. I found a couple of good resources like ML5 and TensorFlow Lite, and I like the idea of them: I don't have to build my own dataset or models, and I can see many available models which are basic but get the job done. However, I don't know which one will be best for Flutter integration, or even whether they are compatible at all! Any relevant suggestion will be appreciated.

Which library should I use for face tracking on a captured image?

I am creating an application with face processing. How can I create face animation?
Is there any library I can use?
How do I track a face after capturing an image of it?
Please help.
As far as I am aware there is no completely free library to track facial expressions, which I think is what you need to produce animation.
However, there is a commercial library for iOS (and other platforms) here: http://www.image-metrics.com/livedriver/overview/
It is available under a trial license and also a free educational license. I believe it will do what you want.
Your other option is to develop your own facial-feature tracking system using something like OpenCV: http://opencv.org/
That's going to be a challenge, though.
Face detection is already included in Core Image (CI); see
http://www.bobmccune.com/2012/03/22/ios-5-face-detection-with-core-image/
If you want face recognition, you have to do something on your own, but there are some tutorials available - most of them using OpenCV.
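For reference, the CIDetector API from that post can also be used from Swift; a minimal sketch of detecting faces in a still image (detection only, not recognition):

    import CoreImage
    import UIKit

    // Minimal sketch: find face positions in a still image with Core Image's CIDetector.
    func detectFaces(in image: UIImage) {
        guard let ciImage = CIImage(image: image) else { return }

        let detector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let faces = detector?.features(in: ciImage) as? [CIFaceFeature] ?? []

        for face in faces {
            print("Face at \(face.bounds)")
            if face.hasLeftEyePosition { print("  left eye at \(face.leftEyePosition)") }
            if face.hasMouthPosition { print("  mouth at \(face.mouthPosition)") }
        }
    }

The eye and mouth positions are what you would feed into a tracking or animation pipeline; recognition (matching a face to a person) still has to be built on top, e.g. with OpenCV.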

Acoustic fingerprint code for iOS?

I've started looking into the subject of acoustic fingerprinting (http://en.wikipedia.org/wiki/Acoustic_fingerprint) for a pet project of mine on iOS, and I was wondering:
Are there any open-source libraries or source code for iOS that handle this?
Assuming I'm a veteran jack-of-all-trades coder, is it very problematic to implement this myself if there are no open-source versions?
Will the Accelerate DSP library in iOS be able to handle such a task?
Thanks
You may want to check out the EchoPrint CodeGen library by The Echo Nest. They even have a fully functional iOS code example.
You can find some additional links to open source audio fingerprinting related software in this MusicBrainz article, but AFAIK the EchoPrint library is the only one that has a license that is compatible with iOS apps.
Good Luck!
Not to my knowledge.
No problem for a veteran; it won't be easy, but it is achievable.
Never looked into it.
Even though it's in Java, this might be an interesting read.
Before doing anything, especially if you intend to sell on the App Store, be aware that these techniques/algorithms are patented. Read what happened to the writer of the blog post above.
Will the Accelerate DSP library in iOS be able to handle such a task?
No.
I also notice that you put the tag "voice recognition". Just to make sure: voice recognition has nothing to do with audio identification/acoustic fingerprinting!
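To put the Accelerate answer in context: vDSP only provides generic DSP primitives such as FFTs, not a fingerprinting algorithm, so the matching logic is still yours to write. If you did roll your own, the spectral-analysis step might look roughly like this sketch (the power-of-two frame assumption and the surrounding plumbing are illustrative, not taken from any library mentioned above):

    import Accelerate

    // Minimal sketch: magnitude spectrum of one audio frame using vDSP's real FFT.
    // Assumes the frame length is a power of two (e.g. 1024 samples).
    func magnitudeSpectrum(of samples: [Float]) -> [Float] {
        let log2n = vDSP_Length(log2(Float(samples.count)))
        guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
        defer { vDSP_destroy_fftsetup(setup) }

        let halfCount = samples.count / 2
        var real = [Float](repeating: 0, count: halfCount)
        var imag = [Float](repeating: 0, count: halfCount)
        var magnitudes = [Float](repeating: 0, count: halfCount)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)

                // Pack the real-valued input into the even/odd split layout vDSP expects.
                samples.withUnsafeBufferPointer { samplesPtr in
                    samplesPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: halfCount) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(halfCount))
                    }
                }

                // In-place forward real-to-complex FFT, then squared magnitude per frequency bin.
                vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
                vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(halfCount))
            }
        }
        return magnitudes
    }

Turning a sequence of such spectra into a robust, searchable fingerprint (peak picking, hashing, matching) is the hard part, which is why the EchoPrint library above is the more practical route.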

Image filters for iPhone SDK development

I am planning to develop an iPhone app which makes use of image filters like blurring, sharpening, etc. I noticed that there are a few approaches for this:
1. Use OpenGL ES. I even found example code on the Apple iPhone dev site. How easy is OpenGL for somebody who has never used it? Can the image filters be implemented using the OpenGL framework?
2. There is a Quartz demo posted on the Apple iPhone dev site as well. Has anybody used this framework for image processing? How does this approach compare to the OpenGL one?
3. Don't use the OpenGL or Quartz frameworks; instead, access the raw pixels of the image and do the manipulation myself.
4. Make use of a custom-built image processing library like this one. Do you know of any other libraries like it?
Can anybody provide insights/suggestions on which option is best? Your opinions are highly appreciated. Thanks!
Here is another alternative for image filtering. They provide lots of filters built on the Core Image framework.
http://www.binpress.com/app/photo-effects-sdk-for-ios/801
Quartz doesn't have access to Core Image on iPhone OS yet, so you can't use the Core Image filters like you do on the Mac.
I would go with a dedicated library. There's a lot of overhead in OpenGL ES you don't want to mess with if you're not using it for anything else.
If your app supports iOS 6, use Core Graphics and Core Image. Core Image contains many filters, and you can combine them to build other composite filters.
If you are not on iOS 6, you can use the GPUImage framework or ImageMagick.
The last option is to manipulate the pixel values yourself, but that requires implementing a filter algorithm to apply filters to the image.
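For reference, a minimal sketch of applying one of Core Image's built-in filters (a Gaussian blur; the radius value is arbitrary) to a UIImage, written in Swift even though the original thread predates it:

    import CoreImage
    import UIKit

    // Minimal sketch: apply a built-in Core Image filter (Gaussian blur) to a UIImage.
    func blurred(_ image: UIImage, radius: Double = 8.0) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIGaussianBlur") else { return nil }

        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(radius, forKey: kCIInputRadiusKey)

        guard let output = filter.outputImage else { return nil }

        // Render the filtered CIImage back into a CGImage/UIImage.
        let context = CIContext()
        guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }

Sharpening works the same way with a different filter name (e.g. "CISharpenLuminance"); chaining filters is just a matter of passing one filter's outputImage in as the next filter's input image.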

MPMoviePlayerController alternatives on iPhone?

I am looking for alternatives to the MPMoviePlayerController on the iPhone. As a video player, its functionality is very limited. According to the class reference, there is no way to get the current playback time or set a new time, for example; it's just play and stop.
Are there any middleware solutions out there for iPhone video playback that offer more functionality? CRI has something in development but it has not been released. I haven't been able to find anything else.
Thanks.
Keep in mind that even though a project is GPL, that does not mean you can't contact the authors about an LGPL option on the underlying code.
A possible roll-your-own solution would be to use OpenGL as a compositing surface for the video and obtain a behind-the-scenes library like FFmpeg if you need to process specific video types.
NeHe has an example of rendering AVIs to OpenGL: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=35
FFmpeg has recently been ported to the iPhone and is an LGPL-based product: http://geek.thinkunique.org/2008/03/05/ffmpeg-on-the-iphone/
(Note: There is some debate over the commercial use of LGPL on iPhone because the license references the phrase "dynamic" when referring to library linkage, which iPhone doesn't allow. I have not seen any project teams balk at their code being used on the iPhone statically, but you should contact the authors directly for clarification.)
Another (though GPL) version of an OpenGL video player is: http://code.google.com/p/glover/
What you're getting through a solution like this is basically a bypass of the iPhone/Mac/CALayer-specific technical details, leveraging an existing knowledge base of video through OpenGL, which, although not extensive, is still broadly supported.
If you are dealing with a specific video style, then you may want to see if a library is available for that specific video format direct from the vendor instead of using a multi-purpose tool like FFmpeg. Once you have the compositing working, the video can come from almost any library.
Barney
You could use AVPlayer. See the documentation.
You can then get the current playback time with currentTime and seek to a specified time with seekToTime:.
You have to direct the visual output of an AVPlayer instance to an AVPlayerLayer object (subclass of CALayer). See the first listing here.
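A minimal sketch of that approach (the URL and the seek target are placeholder values):

    import AVFoundation
    import UIKit

    // Minimal sketch: play a video with AVPlayer, read the current time, and seek.
    class PlayerViewController: UIViewController {
        let player = AVPlayer(url: URL(string: "https://example.com/video.mp4")!)

        override func viewDidLoad() {
            super.viewDidLoad()

            // Direct the player's visual output into an AVPlayerLayer (a CALayer subclass).
            let playerLayer = AVPlayerLayer(player: player)
            playerLayer.frame = view.bounds
            view.layer.addSublayer(playerLayer)

            player.play()
        }

        func logCurrentTime() {
            // currentTime() returns a CMTime; convert it to seconds for display.
            print("Current time: \(CMTimeGetSeconds(player.currentTime())) s")
        }

        func seekToTenSeconds() {
            // seek(to:) is the Swift spelling of seekToTime:.
            player.seek(to: CMTime(seconds: 10, preferredTimescale: 600))
        }
    }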
VLC has been ported to iPhone but not using the official SDK.