Determine Person from the Camera - iPhone

I need some help finding a person with the camera.
Is it possible to identify a person from a group, using the camera, provided he has the application installed on his device?
The basic concept is: when the user points the camera at a person, the app captures an image of that person, searches the application database to check whether that user is registered, and if so displays his information.
If this is possible and you have some references, please share them with me.
Thank you for reading my question.

Core Image has a new CIFaceFeature API to detect faces in real time (note: it detects where faces are, it does not recognize who they are). You can start with these examples for an overview:
- SquareCam (from Apple)
- iOS Facial Recognition
- Easy Face detection with Core Image
Then you have to design the logic to compare and store images.
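To make that concrete, here is a minimal Objective-C sketch of still-image detection with CIDetector; keep in mind this only finds faces, and matching them against registered users is logic (or a server-side service) you would have to build yourself:

```objc
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Detect faces in a still image with Core Image (iOS 5+).
// Returns the detected CIFaceFeature objects; recognition (whose
// face it is) is not part of this API.
NSArray *DetectFaces(UIImage *image) {
    CIImage *ciImage = [[CIImage alloc] initWithCGImage:image.CGImage];
    CIDetector *detector =
        [CIDetector detectorOfType:CIDetectorTypeFace
                           context:nil
                           options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];
    NSArray *features = [detector featuresInImage:ciImage];
    for (CIFaceFeature *face in features) {
        NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            NSLog(@"Eyes at %@ / %@",
                  NSStringFromCGPoint(face.leftEyePosition),
                  NSStringFromCGPoint(face.rightEyePosition));
        }
    }
    return features;
}
```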

Related

Using image markers to differentiate assets in Unity

I am very new to Augmented Reality software. I want to design a simple app. As part of this app, there will be a series of uniquely designed tags. These tags will be placed on some assets. In the application, I want to store some metadata for each asset. Imagine a DB table with fields like (asset id, name, var1, var2, ...) holding the asset metadata.
So, when the augmented reality app detects a unique image, it will show that asset's metadata over the marker. It is that simple. In summary, I want to know how I can use image markers to differentiate assets. Sorry if I am asking a very basic question.
Regards,
Ferda
First of all, your question is quite broad. How are you planning to implement this application? You first have to decide whether you will use an Augmented Reality SDK or your own computer vision techniques.
My suggestion would be to choose one SDK from ARCore, Vuforia, or ARKit, based on the devices and platform you want to target. I am not familiar with ARKit, but in both ARCore and Vuforia, augmented images or image targets are held in an image database, so you can get the id or name of any target you detect with your device. In conclusion, you can visualize specific assets for specific images.
In an ARCore Augmented Image database, every image has a name. In your code you can differentiate images using image.Name, then visualize the corresponding metadata over the marker.
Also, in both SDKs you can define your own database, but your images should not have repetitive features and should contain high-contrast regions.
Vuforia has a similar concept as well. Choosing between ARCore and Vuforia mostly depends on which devices you target and on the quality of image tracking; in my opinion Vuforia detects images better, and it is really fast at detecting them.
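The same name-based lookup exists on iOS. Purely as an illustration (this is not ARCore code), here is a minimal Objective-C sketch of the pattern in ARKit, where a detected reference image's name keys into your metadata table; assetMetadata is a hypothetical dictionary standing in for your DB:

```objc
#import <ARKit/ARKit.h>
#import <SceneKit/SceneKit.h>

// In your ARSCNViewDelegate (e.g. your view controller).
// Reference images are registered under a name; when one is detected,
// that name keys into your asset metadata.
- (void)renderer:(id<SCNSceneRenderer>)renderer
      didAddNode:(SCNNode *)node
       forAnchor:(ARAnchor *)anchor {
    if (![anchor isKindOfClass:[ARImageAnchor class]]) {
        return;
    }
    ARImageAnchor *imageAnchor = (ARImageAnchor *)anchor;
    NSString *markerName = imageAnchor.referenceImage.name;
    // Hypothetical lookup table loaded from your DB, e.g.
    // @{ @"asset_01" : @{ @"name" : @"Pump A", @"var1" : @42 } }
    NSDictionary *meta = self.assetMetadata[markerName];
    NSLog(@"Detected marker %@ -> %@", markerName, meta);
    // ...attach a node here that renders `meta` over the marker...
}
```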

Fixing an object when the camera opens - Unity AR

I'm trying to create an AR game in Unity for an educational project.
I want to create something like Pokémon GO: when the camera opens, the object is fixed somewhere in the real world and you have to search for it with the camera.
My problem is that ARCore and Vuforia ground detection (I don't want to use targets) are limited to only a few phone models, and the Kudan SDK didn't work when I tried it.
Can anyone give me a tool or a tutorial on how to do this? I just need ideas, or someone to tell me where to start.
Thanks in advance.
Plane detection is limited to only some phones at this time partly because older or less powerful phones cannot handle the required computing power.
If you want to make an app that has the largest reach, Vuforia is probably the way to go. Personally, I am not a fan of Vuforia, and I would suggest you use ARCore (and/or ARKit for iOS).
Since this is an educational tool and not a game, are you sure Unity is the way to go? I am sure you may be able to do it in Unity, but choosing the right platform for a project is important - just keep that in mind. You could make a native app instead.
If you want to work with ARCore and Unity (which is a great choice in general), here is the first in a series of tutorials that can get you started as a total beginner.
Let me know if you have other questions :)
You can use the phone's GPS data: when the user arrives at a specific place, you show the object. Search for "GPS-based Augmented Reality" on Google. You can also check this video: https://www.youtube.com/watch?v=X6djed8e4n0
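In Unity you would read Input.location and do the same check in C#, but to illustrate the core idea of GPS gating, here is a minimal Objective-C/CoreLocation sketch; the target coordinate, radius, and showHiddenObject method are all hypothetical:

```objc
#import <CoreLocation/CoreLocation.h>

// In your CLLocationManagerDelegate. When the user comes within
// `radius` meters of the target coordinate, reveal the object.
- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray *)locations {
    CLLocation *current = [locations lastObject];
    CLLocation *target = [[CLLocation alloc] initWithLatitude:48.8584
                                                    longitude:2.2945];
    CLLocationDistance radius = 30.0; // meters, tune to taste
    if ([current distanceFromLocation:target] < radius) {
        [self showHiddenObject]; // hypothetical: place/unhide the AR object
    }
}
```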

Best way to build a camera app on iPhone

I am thinking of building a camera application, with the ability to do image processing (adjust contrast, apply different image filters) while you are taking a picture or after the picture has been taken.
The app will also have drag-and-drop of icons.
At the end, you are able to export the edited images either to the camera roll or to app memory.
There are already many apps out there like this (Line Camera, etc.).
I am just wondering what the best way to build such an app is.
Can I build the app purely with the Objective-C iOS SDK, or do I need to build it with C++/cocos2d, etc.?
Thanks for your help!
Your question is very broad, so here is a broad answer...
Accessing the camera/photo library
First you'll need to access the camera using UIImagePickerController, either to take a new photo or to grab one from your photo library. You can read up on how to accomplish this in Camera Programming Topics for iOS.
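A minimal Objective-C sketch of that flow (falling back to the photo library when no camera is available, e.g. in the Simulator):

```objc
#import <UIKit/UIKit.h>

// In your view controller, which should conform to
// UIImagePickerControllerDelegate and UINavigationControllerDelegate.
- (void)presentCamera {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    BOOL hasCamera = [UIImagePickerController
        isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
    picker.sourceType = hasCamera
        ? UIImagePickerControllerSourceTypeCamera
        : UIImagePickerControllerSourceTypePhotoLibrary;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *photo = info[UIImagePickerControllerOriginalImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
    // Hand `photo` to your editing pipeline here.
}
```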
Image Manipulation
AviarySDK has much of this already built for you. It is very easy to set up and use in your apps. You can download their sample app for free from the App Store if you want to see what it can do. Check it out here: http://aviary.com/
Alternatively, read up on Core Image if you'd like to avoid third-party libraries. See the Core Image Programming Guide for more information.
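For example, a contrast adjustment with Core Image's built-in CIColorControls filter might look like this sketch:

```objc
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Adjust contrast with CIColorControls (1.0 leaves the image unchanged).
UIImage *AdjustContrast(UIImage *input, CGFloat contrast) {
    CIImage *ciInput = [[CIImage alloc] initWithCGImage:input.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:ciInput forKey:kCIInputImageKey];
    [filter setValue:@(contrast) forKey:kCIInputContrastKey];
    CIImage *ciOutput = filter.outputImage;
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgOutput = [context createCGImage:ciOutput
                                        fromRect:ciOutput.extent];
    UIImage *result = [UIImage imageWithCGImage:cgOutput];
    CGImageRelease(cgOutput);
    return result;
}
```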
There is absolutely no need for cocos2d, which is a game engine.
You can accomplish everything you mentioned using only Objective-C.
If you want real-time effects, you will need to dive into OpenGL; you can use GLKit if you target iOS 5 and above.
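Concretely, that means rendering through a GPU-backed CIContext rather than creating a UIImage per frame; a sketch of the setup:

```objc
#import <GLKit/GLKit.h>
#import <CoreImage/CoreImage.h>

// A CIContext backed by OpenGL ES renders filters on the GPU (iOS 5+),
// which is what makes live-preview filtering feasible.
static CIContext *MakeRealtimeContext(void) {
    EAGLContext *eaglContext =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    return [CIContext contextWithEAGLContext:eaglContext];
}
// Per camera frame: wrap the pixel buffer in a CIImage, apply your
// filters, and draw with -[CIContext drawImage:inRect:fromRect:]
// inside a GLKView's draw callback.
```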

iPhone User Interface steps online demo

I've designed the user interface of an iPhone app, and I wish to show an online demo of it consisting, for the moment, of a series of static images representing the main steps of the app.
In your opinion, what is the best way to do this simulation?
You know, something like a series of single web pages, optimized for mobile, each containing a single image linking to the next step; but I was wondering whether a more elegant and sophisticated solution exists, with transition effects for example, or other features.
I hope I was clear enough :)
Any help will be sincerely appreciated.
Thanks in advance for your attention.
This sounds like a good use for Briefs (Briefs App Website). It pretty much allows you to create an interface and step through it as if it were an application. I believe you'll need to have a developer account to run the app that reads the brief on your phone (since it couldn't be released in the App Store).
An alternative to static images would be to make a video. I use the iShowU video screen capture tool and set it to record the iPhone/iPad simulator window. I then run through the screens, type inputs, etc. In addition to recording the video, the program records my voice as I narrate the app's features.
As to transition effects, the video will capture whatever transition animations are in your program.
In the end you have a video that you could give your user, put on YouTube, or whatever.
You can do this easily and for free on AppDemoStore. You just have to upload the app screenshots and then add hotspots, which are used to navigate through the demo.
AppDemoStore also offers the sophisticated features you are asking for:
- iPhone-specific transition effects such as slide up/down/left/right, fade, and flip
- gesture icons for the hotspots
- text boxes and callouts
- multiple hotspots on a screen, in order to create a simulation of the app (and not just a linear demo)
Here's a sample demo: http://www.appdemostore.com/demo?id=1699008
Moreover, the demos created on AppDemoStore run in any browser and on any mobile device, and can be embedded in your web page or blog (as you would embed a YouTube video). With the free account, you can create up to 10 demos with unlimited screenshots and all the features listed above.
Regards,
Daniel

How to add a tag overlay to a photo in iOS, like Facebook

I was wondering if anyone has an idea of how the people-tagging feature works in Facebook's iPhone app, i.e. in the app you can touch the photo and then associate that touch point with a Facebook friend. Specifically, I was wondering whether this is as simple as associating coordinates on the image with a data object (a Facebook friend in this case), or whether they are doing some smarter image recognition in the background to work out which other areas of the photo may also belong to that person, i.e. whether the tag extends beyond the point touched on the screen. If the latter is the case, is anyone familiar with the techniques used?
Thanks in advance
Dave
I don't think they are using face-recognition algorithms on the iPhone, since that is processor-intensive, especially if you have hundreds of friends. If you want to do face recognition against the faces of the people you want to search for, you should do it on the server: after you take or import a photo, send it to your server, search for the faces there, and return JSON with the points of the detected faces and the data of the matched users. Then build your UI to present it on screen for the user.
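For the simpler scheme the question describes (just associating a touch point with a friend), a minimal sketch could store the tap normalized to the image size; PhotoTag and TagAtPoint are hypothetical names for illustration, not anything from Facebook's app:

```objc
#import <UIKit/UIKit.h>

// A tag is just a normalized point (0..1 on both axes, so it survives
// scaling of the displayed image) plus the tagged friend's id.
@interface PhotoTag : NSObject
@property (nonatomic) CGPoint normalizedPoint;
@property (nonatomic, copy) NSString *friendId;
@end

@implementation PhotoTag
@end

// Build a tag from a tap location inside the image view.
static PhotoTag *TagAtPoint(CGPoint pointInView, CGSize viewSize,
                            NSString *friendId) {
    PhotoTag *tag = [PhotoTag new];
    tag.normalizedPoint = CGPointMake(pointInView.x / viewSize.width,
                                      pointInView.y / viewSize.height);
    tag.friendId = friendId;
    return tag; // persist alongside the photo, render as an overlay
}
```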
Edit
If you want to use face recognition on the iPhone, try this: Face recognition iOS