How to implement a CMSampleBuffer for ML Kit face detection? - swift

Basically, I'm trying to create a simple real-time face detection iOS app that streams the user's face and tells them whether their eyes are closed. I'm following the Google tutorial here - https://firebase.google.com/docs/ml-kit/ios/detect-faces.
I'm on step 2 (Run the Face Detector), trying to create a VisionImage using a CMSampleBufferRef. I'm basically just copying the code from the tutorial, but when I do, there is no "sampleBuffer" variable to reference as shown there. I don't know what to do, as I really don't understand the CMSampleBuffer stuff.

ML Kit has a Quickstart app showing how to do that. Here is the code:
https://github.com/firebase/quickstart-ios/tree/master/mlvision
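
To clarify the part the tutorial skips: sampleBuffer is not something you create yourself; AVFoundation hands it to you. You set up an AVCaptureSession with an AVCaptureVideoDataOutput, make your view controller the sample buffer delegate, and AVFoundation then calls captureOutput(_:didOutput:from:) once per camera frame with a CMSampleBuffer, which is what you wrap in a VisionImage. A minimal sketch, assuming the FirebaseMLVision pod from the tutorial (the class name, queue label, and 0.4 eye-open threshold are placeholders, and orientation handling is simplified):

    import UIKit
    import AVFoundation
    import FirebaseMLVision

    class FaceDetectionViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

        private let session = AVCaptureSession()

        // Keep a strong reference to the detector while frames are processed.
        private lazy var faceDetector: VisionFaceDetector = {
            let options = VisionFaceDetectorOptions()
            options.performanceMode = .fast
            options.classificationMode = .all   // needed for eye-open probabilities
            return Vision.vision().faceDetector(options: options)
        }()

        override func viewDidLoad() {
            super.viewDidLoad()
            guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: .front),
                  let input = try? AVCaptureDeviceInput(device: camera),
                  session.canAddInput(input) else { return }
            session.addInput(input)

            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
            session.addOutput(output)
            session.startRunning()
        }

        // This is where "sampleBuffer" comes from: AVFoundation calls this
        // delegate method once per captured frame.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            let image = VisionImage(buffer: sampleBuffer)
            let metadata = VisionImageMetadata()
            metadata.orientation = .leftTop   // adjust for your device/camera orientation
            image.metadata = metadata

            faceDetector.process(image) { faces, error in
                guard error == nil, let faces = faces else { return }
                for face in faces where face.hasLeftEyeOpenProbability
                                     && face.hasRightEyeOpenProbability {
                    let eyesClosed = face.leftEyeOpenProbability < 0.4
                                  && face.rightEyeOpenProbability < 0.4
                    print(eyesClosed ? "Eyes closed" : "Eyes open")
                }
            }
        }
    }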

Related

How do I render over a tracked image on HoloLens?

I'd like the HoloLens to take in camera input and project an image over tracked images, but I can't find a concrete explanation of how to do this online. I'd like to avoid using Vuforia etc. for this.
I'm currently using AR Foundation's Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation#2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile; however, it doesn't seem to work very well on HoloLens.
Any help would be very appreciated, thanks!
AR Foundation is a Unity tool, and its 2D image tracking feature is not supported on HoloLens platforms for now. You can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library that supports image tracking on HoloLens. That sort of thing is possible with OpenCV, though: you can implement it yourself or use a third-party library.
Besides, if you are using HoloLens 2 and using a QR code as the tracked image is an acceptable option in your project, I recommend using Microsoft.MixedReality.QR to detect a QR code in the environment and get its coordinate system. For more information, see: QR code tracking

Reading a QR code and deploying an application using Unity and Hololens

I'm a beginner with the HoloLens and the Unity engine. Recently, I found an application that can read QR codes, which can encode information like URLs, images, mp3 audio, etc. I was wondering whether it is possible, by reading a QR code, to launch different applications built with the Unity engine. For example, reading one QR code with the HoloLens would run this game, and reading another QR code would run another game. I did not study game engineering or informatics, so I don't know if it is possible to implement this.
Probably you can, but you need to specify your requirements very carefully.
First of all, QR code reading is really an image-processing task. For image processing you need two things: a digital image and a processor (in your case, the HoloLens camera and the HoloLens processor).
Basically, when you open an application on your HoloLens that has QR-reading capabilities, it uses the HoloLens camera for the digital image, and then the algorithm in the application converts the QR code into data (for example a URL, or a path to an application on the HoloLens, etc.).
After that, the application uses that data to open other applications, or uses it within its own scope. What is allowed depends on the privileges the application has in the HoloLens operating system.
If I were you, I would not use Unity for the QR code reading. And if I did use Unity, I would not make different apps for different games; instead, after reading the QR code, I would just open a different part of the same application, as in the sketch below.
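
HoloLens code would be C#/UWP rather than Swift, but to make the decode-then-dispatch idea concrete, here is the same pipeline sketched in Swift using Core Image's CIDetector on iOS (the payload strings and scene names are hypothetical). The structure is the same on any platform: digital image in, decoding algorithm, data out, then branch on the data.

    import CoreImage
    import UIKit

    // Decode a QR code from an image, then branch on the payload:
    // digital image -> decoding algorithm -> data -> dispatch.
    func handleQRCode(in image: UIImage) {
        guard let ciImage = CIImage(image: image),
              let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                                        context: nil,
                                        options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        else { return }

        for feature in detector.features(in: ciImage) {
            guard let payload = (feature as? CIQRCodeFeature)?.messageString else { continue }
            // Dispatch step: the payload decides what to open. Rather than
            // shipping one app per game, a single app can route to a
            // different scene based on the payload (hypothetical names).
            switch payload {
            case "game-one": print("load scene for game one")
            case "game-two": print("load scene for game two")
            default:         print("unknown QR payload: \(payload)")
            }
        }
    }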

Is there any way in Unity to do number recognition?

I am trying to create an AR app which recognizes numbers. I was able to do text recognition using Vuforia, but for numbers I didn't find any such SDK. I am also ready to write the code in C#, but I am not sure where to start.

Best way to build a camera app on iPhone

I am thinking of building a camera application - with the ability to do image processing (adjust contrast, apply different image filters) while you are taking a picture or after the picture has been taken.
The app will also have drag-and-drop icons.
At the end, you are able to export the edited images either to the camera roll or to app memory.
There are already many apps like this out there (Line Camera, etc.).
I'm just wondering what the best way to build such an app is.
Can I build the app purely with Objective-C and the iOS SDK? Or do I need to build it with C++/cocos2d, etc.?
Thanks for your help!
Your question is very broad, so here is a broad answer...
Accessing the camera/photo library
First you'll need to access the camera using UIImagePickerController to either take a new photo or grab one from your photo library. You can read up on how to accomplish this here: Camera Programming Topics for iOS
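
For instance, here is a minimal sketch of the picker flow (shown in Swift; the same UIKit API is available from Objective-C, which is what the question asks about):

    import UIKit

    class CameraViewController: UIViewController,
                                UIImagePickerControllerDelegate,
                                UINavigationControllerDelegate {

        // Present the system camera UI (falls back to the photo library
        // where no camera is available, e.g. in the simulator).
        func presentCamera() {
            let picker = UIImagePickerController()
            picker.sourceType = UIImagePickerController.isSourceTypeAvailable(.camera)
                ? .camera : .photoLibrary
            picker.delegate = self
            present(picker, animated: true)
        }

        // Called when the user takes or picks a photo.
        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            picker.dismiss(animated: true)
            guard let image = info[.originalImage] as? UIImage else { return }
            // Hand the image to your editing pipeline here.
            print("Got image of size \(image.size)")
        }
    }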
Image Manipulation
AviarySDK has much of this already built for you. Very easy to set up and use in your apps. You can download their sample app for free in the app store if you want to see what it can do. Check it out here: http://aviary.com/
Alternatively, read up on Core Image if you'd like to avoid third-party libraries. See Core Image Programming Guide for more information.
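
As a small taste of Core Image (again sketched in Swift; the Objective-C calls are analogous), here is a contrast adjustment with the built-in CIColorControls filter:

    import CoreImage
    import UIKit

    // Apply a contrast adjustment using Core Image's CIColorControls filter.
    func adjustContrast(of image: UIImage, contrast: Float) -> UIImage? {
        guard let ciImage = CIImage(image: image),
              let filter = CIFilter(name: "CIColorControls") else { return nil }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        filter.setValue(contrast, forKey: kCIInputContrastKey)   // 1.0 = unchanged

        guard let output = filter.outputImage else { return nil }
        let context = CIContext()
        guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }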
There is absolutely no need for cocos2d, which is a game engine.
You can accomplish everything you mentioned using only Objective-C.
If you want real-time effects, you will need to dive into OpenGL. You can use GLKit if you target iOS 5 and above.

Augmented Reality Application in iOS

I am trying to create an iOS application with which we can convert a real-life object, e.g. a sofa or a table, into a 3D object using the iPhone's camera. The 3D object info could be saved in the database and displayed as an augmented reality object when the iPhone camera is pointed at some other part of the room.
I have searched the internet but couldn't find any info on where to get started with converting real-life objects to 3D objects for viewing as augmented reality objects.
Check the link below, where you will find an SDK and also sample code for implementing AR:
http://quickblox.com/developers/IOS
I think whichever way you go with this, it's going to be a huge task. However, I've had good results with similar goals using OpenCV.
It has an iOS SDK, but it is written in C++. Unfortunately, I don't think there's anything available that will let you achieve this in pure Objective-C or Swift.
You can go through the following links:
https://www.qualcomm.com/products/vuforia
http://www.t-immersion.com/ar-key-words/augmented-reality-sdk#
http://dev.metaio.com/sdk/
https://www.layar.com