Reading a QR code and deploying an application using Unity and HoloLens

I'm a beginner with HoloLens and the Unity engine. Recently I found an application that can read QR codes, which can encode information like URLs, images, MP3 audio, etc. I was wondering whether reading a QR code could be used to launch different kinds of applications built with the Unity engine. For example, reading one QR code with the HoloLens would run one game, and reading another QR code would run a different game. I didn't study game engineering or computer science, so I don't know whether this is possible to implement.

You probably can, but you need to specify your requirements very carefully.
First of all, QR code reading is really an image-processing task. For image processing you need two things: a digital image and a processor (in your case, the HoloLens camera and the HoloLens processor).
Basically, when you open an application on your HoloLens that has QR-reading capabilities, it uses the HoloLens camera to obtain the digital image, and the algorithm in the application then converts the QR code into data (for example, a URL or a path to an application on the HoloLens).
After that, the application uses that data to open other applications or within its own scope. What it can do depends on the privileges the application has in the HoloLens operating system.
If I were you, I would not use Unity for the QR code reading. And if I did use Unity, I would not build different apps for different games; instead, after reading the QR code, I would just open a different part of the same application.
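
To make the single-app idea concrete, here is a minimal sketch of that approach. It assumes the ZXing.Net barcode library and QR codes whose payload is simply a Unity scene name; both are illustrative assumptions, not part of the original answer:

    // Minimal sketch: decode a QR code from the device camera with ZXing.Net
    // and load the Unity scene named in the QR payload (assumption: each QR
    // code encodes a scene name such as "GameA" or "GameB").
    using UnityEngine;
    using UnityEngine.SceneManagement;
    using ZXing;

    public class QrSceneLauncher : MonoBehaviour
    {
        private WebCamTexture camTexture;
        private readonly IBarcodeReader reader = new BarcodeReader();

        void Start()
        {
            camTexture = new WebCamTexture();
            camTexture.Play();
        }

        void Update()
        {
            if (camTexture.width < 100) return; // camera not initialized yet

            // Grab the current camera frame and try to decode a QR code from it.
            var result = reader.Decode(camTexture.GetPixels32(), camTexture.width, camTexture.height);
            if (result != null)
            {
                camTexture.Stop();
                SceneManager.LoadScene(result.Text); // decoded text used as a scene name
            }
        }
    }

Each "game" then lives in its own scene of the same application, which sidesteps the question of launching separate apps entirely.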

Related

Does Agora.io for Unity provide these features?

I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview isn't really clear about how it differs from regular Agora video. I suspect it's overkill, but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, macOS, and Windows (non-UWP).
Regarding your data-streaming needs, Agora's RTM SDK is in the process of being ported to work within Unity. At the moment, the best way to send data using the Agora SDK is to use CreateDataStream to leverage Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1 KB per message and 30 KB/s overall, so I would be cautious about sending one every frame if you are running at a frame rate above 30 fps.
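
For illustration, here is a minimal sketch of sending the 7-float pose (position + quaternion, 28 bytes) over such a data stream. It assumes the Agora Unity SDK's agora_gaming_rtc namespace and a byte[]-based SendStreamMessage; exact signatures vary across SDK versions, so treat this as a sketch under those assumptions rather than a drop-in implementation:

    // Sketch: pack a position + orientation (7 floats = 28 bytes) and send it
    // over an Agora data stream. Assumes agora_gaming_rtc with a byte[]-based
    // SendStreamMessage; signatures differ between Agora SDK versions.
    using System;
    using UnityEngine;
    using agora_gaming_rtc;

    public class PoseSender : MonoBehaviour
    {
        private IRtcEngine engine;
        private int streamId;

        void Start()
        {
            engine = IRtcEngine.GetEngine("YOUR_APP_ID"); // placeholder app ID
            // (Channel join omitted; the stream only works inside a joined channel.)
            // reliable: false, ordered: false -- latest-update-wins semantics,
            // matching the low-latency, no-reliability requirement above.
            streamId = engine.CreateDataStream(false, false);
        }

        public void SendPose(Vector3 pos, Quaternion rot)
        {
            var buf = new byte[28];
            float[] values = { pos.x, pos.y, pos.z, rot.x, rot.y, rot.z, rot.w };
            Buffer.BlockCopy(values, 0, buf, 0, 28); // pack the 7 floats as bytes
            engine.SendStreamMessage(streamId, buf);
        }
    }

At 28 bytes per update, the payload is far below the 1 KB message limit; the per-second message budget, not the size, is the constraint to watch when sending every frame.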

How to implement a CMSampleBuffer for MLkit facial detection?

Basically, I'm trying to create a simple real-time facial recognition iOS app that streams the user's face and tells them whether their eyes are closed. I'm following the Google tutorial here - https://firebase.google.com/docs/ml-kit/ios/detect-faces.
I'm on step 2 (Run the Face Detector) and I'm trying to create a VisionImage using the CMSampleBufferRef. I'm basically just copying the code, and when I do, there is no reference to "sampleBuffer" as shown in the tutorial. I don't know what to do, as I really don't understand the CMSampleBuffer stuff.
ML Kit has a Quickstart app showing how to do that. Here is the code:
https://github.com/firebase/quickstart-ios/tree/master/mlvision

Object Recognition with Mixed Reality Capture (MRC)

We're using the HoloLens' locatable camera (in Unity) to perform a number of image recognition tasks. We'd like to utilize the mixed reality capture feature (MRC) available in the HoloLens developer portal so that we can demo our app, but MRC crashes because we're hogging the camera in Photo Mode.
Does anyone have a good workaround for this? We've had some ideas, but none of them are without large downsides.
Solution: Put your locatable camera in Video Mode so that you can share the video camera with MRC.
Downside: Video Mode only allows us to save the video to disk, but we need real-time access to the buffer in memory (the way Photo Mode gives us access) so that we can do our detection in real time.
Solution: Capture the video in a C++ plugin, and pass the frame bytes to Unity. This allows MRC to work as expected.
Downside: We lose the 'locatable' part of the 'locatable camera' as we no longer get access to the cameraSpaceToWorldSpace transformation matrix, which we are utilizing in our UI to locate our recognized objects in world space.
Sub-solution: recreate the locatable camera view's transformation matrix yourself.
Sub-downside: I don't have any insight into how Microsoft creates this transformation matrix. I imagine it involves some hardware complexities, such as accounting for lens distortions. If someone can guide me to how this matrix is created, that might be one solution.
Solution: Turn off object recognition while you create the MRC, then turn it back on when you're done recording.
Downside: Our recognition system runs in real time, n times per second. There would be no way to capture the recognitions on video.
We ended up creating a plugin for Unity that uses Microsoft's Media Foundation to get access to the video camera frames. We open-sourced it in case anyone else runs into this problem.
The plugin mimics Unity's VideoCapture class so that developers will be able to easily understand how to implement it.
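
As a rough illustration of how such a plugin is consumed from Unity, here is a minimal sketch. The type and member names (VideoCapture.CreateAsync, FrameSampleAcquired, TryGetCameraToWorldMatrix) are hypothetical, modeled on Unity's VideoCapture-style API rather than taken verbatim from the plugin:

    // Hypothetical consumption sketch for a VideoCapture-style frame plugin.
    // All plugin type and member names here are illustrative assumptions.
    using UnityEngine;

    public class FrameConsumer : MonoBehaviour
    {
        private byte[] frameBuffer;

        void Start()
        {
            // The plugin delivers each camera frame in memory, so detection can
            // run in real time while MRC still shares the video camera.
            VideoCapture.CreateAsync(capture =>
            {
                capture.FrameSampleAcquired += OnFrameSampleAcquired;
                capture.StartVideoModeAsync();
            });
        }

        void OnFrameSampleAcquired(VideoCaptureSample sample)
        {
            if (frameBuffer == null)
                frameBuffer = new byte[sample.dataLength];
            sample.CopyRawImageDataIntoBuffer(frameBuffer);

            // The camera-to-world matrix restores the "locatable" part of the
            // locatable camera, so recognized objects can be placed in world space.
            if (sample.TryGetCameraToWorldMatrix(out float[] camToWorld))
            {
                // ... run recognition on frameBuffer, then project results into
                // world space using camToWorld ...
            }
        }
    }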
Hopefully this is helpful to a few.

Is there any way in Unity for number recognition?

I am trying to create an AR app which recognizes numbers. I was able to do text recognition using Vuforia, but for numbers I didn't find any such SDK. I am also ready to write the code in C#, but I am not sure where to start.
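
One possible starting point (my assumption, not something stated in the question): run a general OCR engine such as Tesseract, which also comes up in the next answer, restricted to digits via its tessedit_char_whitelist variable. A minimal sketch using the Tesseract .NET wrapper, assuming English trained data in a ./tessdata folder and an input image captured elsewhere:

    // Sketch: recognize digits only, using the Tesseract .NET wrapper
    // (https://github.com/charlesw/tesseract). Assumes ./tessdata holds the
    // English trained data and frame.png is a captured image (hypothetical).
    using System;
    using Tesseract;

    class DigitReader
    {
        static void Main()
        {
            using var engine = new TesseractEngine(@"./tessdata", "eng", EngineMode.Default);
            // Restrict recognition to digit characters only.
            engine.SetVariable("tessedit_char_whitelist", "0123456789");

            using var img = Pix.LoadFromFile("frame.png");
            using var page = engine.Process(img);
            Console.WriteLine($"Digits: {page.GetText().Trim()}");
            Console.WriteLine($"Confidence: {page.GetMeanConfidence():P0}");
        }
    }

Hooking this up to live camera frames in Unity would mean converting each frame to a Pix (or bitmap) before calling Process; the whitelist restriction is the key part for numbers.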

iPhone camera used in app to scan (like RedLaser)

I am working on an app that requires the use of the camera to scan in text. Basically, without getting too detailed, I need to point the camera at something (for my purposes here, say a license plate) and have it somehow save the digits into a string within the app. I guess it's similar to Word Lens or RedLaser, where it doesn't actually take a picture; it just scans the view and returns information. I have not been able to find much about this, so any help on how to write this kind of code would be greatly appreciated!
This is not barcode scanning. This is called OCR (optical character recognition), and there are some free and open-source libraries available that do this.
For example, Tesseract is a complete OCR engine written in C++ (it has a C++ interface, so it can be easily used from within an iOS app).
Another option is GOCR, the GNU Optical Character Recognizer. It is intended as a standalone program (a command-line tool), but I've had success extracting its essential parts into a library (and I used it in an iOS project of mine as well).
OpenCV is a complete computer vision library. You can implement OCR using it as well; just search for the relevant documentation and tutorials.