Google Glass import ZXing

Cheers,
we got our hands on a Google Glass unit and are trying it out a bit. We wanted to create a barcode scanner using the ZXing library.
We imported these two classes: https://github.com/zxing/zxing/tree/master/android-integration/src/main/java/com/google/zxing/integration/android
and start the scan intent via:
import com.google.zxing.integration.android.IntentIntegrator;

// Launch the ZXing scan intent from this activity
IntentIntegrator integrator = new IntentIntegrator(MainActivity.this);
integrator.initiateScan();
but we get a scrambled camera image, as described here:
Glass camera preview display is garbled
We tried several fixes but were unable to get the ZXing library working with our project.
Best

The BarcodeEye project seems to have resolved this problem in their port of ZXing:
https://github.com/BarcodeEye/BarcodeEye
I used the above code as a base and abstracted some of it into an Android library, if that helps:
https://github.com/jaxbot/glass-barcode
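For what it's worth, the garbled preview on Glass is a known camera quirk, and the workaround used by BarcodeEye and similar projects (as far as I can tell) is to pin the preview FPS and preview size before starting the preview. A minimal sketch in Java, assuming you already hold the android.hardware.Camera instance that the scanner opens:

import android.hardware.Camera;

// Somewhere in the camera setup, before camera.startPreview():
// Glass advertises preview modes it cannot actually render, which is what
// produces the scrambled image, so we pin values it handles correctly.
Camera.Parameters params = camera.getParameters();
params.setPreviewFpsRange(30000, 30000); // 30 fps, in the x1000 units the API expects
params.setPreviewSize(640, 360);         // a preview size Glass handles correctly
camera.setParameters(params);
camera.startPreview();

The exact values may need adjusting for your XE software version, so treat this as a starting point rather than a guaranteed fix.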

Related

Face Swap and Face Detection/Recognition in Flutter

How does one go about creating a Face Swap mechanism in Flutter?
Can anyone point me in the right direction?
Thank you
You’ll probably need a good plugin to do all the hard work for you. I recommend Google’s ML Kit on Flutter, as it is the most popular way to run on-device ML with Flutter.
The face detection plugin is what you want. You would basically get the face oval shape with face contour detection and swap those shapes, and this can be done in real time with a given video input.
But keep in mind that the plugin is at v0.0.1. If you're aiming for production, you'd be better off doing it with Swift or Kotlin.
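Since the Flutter plugin is a thin wrapper around Google's native ML Kit SDK, it may help to see what the underlying face-contour call looks like on Android. This is only an orientation sketch in Java; frameBitmap and rotationDegrees are assumptions standing in for your own camera pipeline:

import android.graphics.Bitmap;
import android.graphics.PointF;

import java.util.List;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceContour;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

public class FaceOvalSketch {

    // Ask for full contours so we get the face oval needed for the swap.
    private final FaceDetector detector = FaceDetection.getClient(
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build());

    public void detectOval(Bitmap frameBitmap, int rotationDegrees) {
        InputImage image = InputImage.fromBitmap(frameBitmap, rotationDegrees);
        detector.process(image)
                .addOnSuccessListener(faces -> {
                    for (Face face : faces) {
                        FaceContour oval = face.getContour(FaceContour.FACE);
                        if (oval != null) {
                            // These oval points are what you would warp and blend
                            // between the two faces to do the actual swap.
                            List<PointF> points = oval.getPoints();
                        }
                    }
                })
                .addOnFailureListener(e -> e.printStackTrace());
    }
}

The swap itself (warping one oval onto the other and blending) is not something ML Kit does for you; that part is your own image processing.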
There are multiple ways to achieve this in Flutter, either in real time or with a delay of a few seconds.
You can use one of these packages.
OpenCV
TensorFlow
Google's ML Kit
You may not get good Flutter support from OpenCV and TensorFlow directly, but you can integrate the OpenCV/TensorFlow native libraries or SDKs for both Android and iOS and invoke them through platform channels (see the sketch after this answer).
There is also one more possible solution, but it will definitely have a delay: Python has great library and project support for this kind of ML work.
You can set up a Python service that is responsible for the face swapping; it takes input from the Flutter app (over a REST API or a socket) and returns the output image after the swap.
Some great face-swap projects are available on GitHub that you can look into.
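To make the platform-channel route above a bit more concrete, here is a rough sketch of the Android side in Java. The channel name "app.example/faceswap", the method name "swapFaces", and the runNativeFaceSwap stub are made up for illustration; the real work would be your OpenCV/TensorFlow integration:

import androidx.annotation.NonNull;

import io.flutter.embedding.android.FlutterActivity;
import io.flutter.embedding.engine.FlutterEngine;
import io.flutter.plugin.common.MethodChannel;

public class MainActivity extends FlutterActivity {

    // Hypothetical channel name; the Dart side must use the same string.
    private static final String CHANNEL = "app.example/faceswap";

    @Override
    public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) {
        super.configureFlutterEngine(flutterEngine);
        new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), CHANNEL)
                .setMethodCallHandler((call, result) -> {
                    if (call.method.equals("swapFaces")) {
                        // Illustrative argument names sent from the Dart side.
                        String sourcePath = call.argument("sourcePath");
                        String targetPath = call.argument("targetPath");
                        // Hand off to the native OpenCV/TensorFlow code and return
                        // the path of the swapped output image to Flutter.
                        result.success(runNativeFaceSwap(sourcePath, targetPath));
                    } else {
                        result.notImplemented();
                    }
                });
    }

    // Stub standing in for the actual OpenCV/TensorFlow face swap.
    private String runNativeFaceSwap(String sourcePath, String targetPath) {
        return targetPath;
    }
}

The same pattern applies to the Python/REST variant, except the handler would forward the request to your backend instead of calling native code.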

How do I render over a tracked image on Hololens?

I'd like the HoloLens to take in the camera feed and render an image over tracked images, but I can't seem to find a concrete way to do this online. I'd like to avoid using Vuforia etc. for this.
I'm currently using the AR Foundation Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation#2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile; however, it doesn't seem to work very well on HoloLens.
Any help would be very appreciated, thanks!
AR Foundation is a Unity tool, and its 2D image tracking feature is not supported on HoloLens platforms for now; you can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library that supports image tracking on HoloLens. But that sort of thing is possible with OpenCV: you can implement it yourself (a rough sketch follows below) or refer to third-party libraries.
Besides, if you are using HoloLens 2 and using a QR code as the tracked image is an acceptable option in your project, I recommend Microsoft.MixedReality.QR to detect a QR code in the environment and get its coordinate system; for more information, see: QR code tracking
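HoloLens apps are normally written in C#/Unity, so take the following only as an illustration of the do-it-yourself OpenCV approach mentioned above. It is a minimal sketch using OpenCV's Java bindings that matches ORB features of a reference image against a single camera frame; the file names are placeholders, and a real tracker would go on to compute a homography/pose from the matched points:

import java.util.Arrays;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class ImageMatchSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder files: the image you want to track and one camera frame.
        Mat reference = Imgcodecs.imread("tracked_image.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat frame = Imgcodecs.imread("camera_frame.png", Imgcodecs.IMREAD_GRAYSCALE);

        // ORB keypoints + binary descriptors for both images.
        ORB orb = ORB.create();
        MatOfKeyPoint refKeypoints = new MatOfKeyPoint();
        MatOfKeyPoint frameKeypoints = new MatOfKeyPoint();
        Mat refDescriptors = new Mat();
        Mat frameDescriptors = new Mat();
        orb.detectAndCompute(reference, new Mat(), refKeypoints, refDescriptors);
        orb.detectAndCompute(frame, new Mat(), frameKeypoints, frameDescriptors);

        // Brute-force Hamming matching is the usual pairing for ORB descriptors.
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(refDescriptors, frameDescriptors, matches);

        // Crude presence test: count close matches. A real implementation would
        // feed the matched point pairs into findHomography to place content over
        // the detected image.
        long good = Arrays.stream(matches.toArray()).filter(m -> m.distance < 40).count();
        System.out.println("Good matches: " + good);
    }
}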

How to make live barcode reader using flutter?

I am using the camera plugin (https://pub.dev/packages/camera). I want to take the image stream from it and use it to identify barcodes using Firebase ML Kit. Can I do that? I can't figure out how to take the image stream from the camera plugin and how to use that stream with ML Kit. Can anybody help me? I am creating a live reader; I don't want to take a picture with the camera plugin.
There are some already-built packages that you can use and enhance based on your use case, like barcode_scan.
However, I would encourage you to try a few things first and then turn to the community for help with code that you have tried but that didn't work.
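For reference, the Firebase/ML Kit Flutter plugins ultimately call Google's native barcode scanner, so the per-frame call on Android looks roughly like the Java sketch below. The awkward part with the camera plugin is converting its YUV CameraImage planes into bytes ML Kit accepts (NV21 here); that conversion is assumed to have happened already, and width/height/rotation are placeholders:

import com.google.mlkit.vision.barcode.BarcodeScanner;
import com.google.mlkit.vision.barcode.BarcodeScanning;
import com.google.mlkit.vision.common.InputImage;

public class LiveBarcodeSketch {

    private final BarcodeScanner scanner = BarcodeScanning.getClient();

    // Call this for every frame of the image stream; 'nv21' is assumed to be
    // the frame already converted from the camera plugin's YUV planes to NV21.
    public void scanFrame(byte[] nv21, int width, int height, int rotationDegrees) {
        InputImage image = InputImage.fromByteArray(
                nv21, width, height, rotationDegrees, InputImage.IMAGE_FORMAT_NV21);

        scanner.process(image)
                .addOnSuccessListener(barcodes ->
                        // Each detected barcode carries its decoded value.
                        barcodes.forEach(b -> System.out.println("Barcode: " + b.getRawValue())))
                .addOnFailureListener(e -> e.printStackTrace());
    }
}

On the Flutter side the equivalent is the camera plugin's startImageStream callback feeding the ML Kit plugin's barcode scanner; the flow is the same, just expressed in Dart.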

Augmented Reality - What do I need?

I have to build an app like this:
https://www.youtube.com/watch?v=vetDCkbQGM4
It should simply detect the cockpit of a car and show information, for example "this is the air conditioning", "this is the switch for the radio". The targets will be predefined. Basically, the app should detect everything and show information about it.
Can I realize this with Vuforia? Which framework is suitable for this task?
I hope you guys can help me.
Cheers!
Since your targets are pre-defined, the simplest solution would be to use ArUco markers to get 3D world positions/rotations through your user's camera feed (see the detection sketch after this answer).
See the AR Marker Detector in the Unity Asset Store for an example. Vuforia uses 'VuMarks' that are more intricate versions of this.
If you can't add computer-readable labels to the real world for your project, then you are talking about real-time object recognition. That is a much harder problem and not yet easily solvable in Unity as far as I know. It would require something like Google's Cloud Vision API. There is a Unity Cloud Vision project on GitHub, but I have no idea how well it works or what its capabilities are.
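If you go the marker route outside of a ready-made Unity asset, the raw detection step is straightforward in OpenCV. A minimal sketch using OpenCV's Java bindings is below; the class and constant names follow the ArUco API that ships in OpenCV 4.7+'s objdetect module (older builds expose the same functionality in the contrib aruco module), and the input file is a placeholder:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.ArucoDetector;
import org.opencv.objdetect.Dictionary;
import org.opencv.objdetect.Objdetect;

public class MarkerSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder frame; in the app this would be the live camera image.
        Mat frame = Imgcodecs.imread("cockpit_frame.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Detect markers from one of the predefined ArUco dictionaries.
        Dictionary dictionary = Objdetect.getPredefinedDictionary(Objdetect.DICT_6X6_250);
        ArucoDetector detector = new ArucoDetector(dictionary);

        List<Mat> corners = new ArrayList<>(); // one set of 4 corner points per marker
        Mat ids = new Mat();                   // marker IDs, one per detection
        detector.detectMarkers(frame, corners, ids);

        System.out.println("Detected " + corners.size() + " markers");
        // Each marker ID maps back to a predefined target ("air conditioning",
        // "radio switch", ...); position/rotation comes from feeding the corners
        // and your camera calibration into solvePnP, which is what the Unity
        // assets and Vuforia do for you behind the scenes.
    }
}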
Yes, it is possible; a quick search turns up different SDKs/frameworks and Unity Asset Store packages.
You can use the free Vuforia AR Starter Kit from the Asset Store to get your logic up and running, or you can use the free AR Toolkit. There are different kinds of tutorials available that show how to implement these packages.

Augmented Reality : Recognize hand written number?

I am trying to find a solution for this AR app, as the title says.
I want my app to recognize a number hand-written by the user.
The app will tell the user to write down, for example, the number 24 on a piece of paper and move the camera over the written number to see the 3D object.
This might be used for saving a birthday, a wedding date, etc.
For accuracy, the app's instructions will show the user a preview that says "please write the number 24 similar to this".
Each person's handwriting will differ, but at least we would not get curly 2s or a 4 with an open edge, etc.
So the app needs AR to recognize the number, or at least to read it approximately.
And the first question is: is such behavior doable, and is anyone familiar with a similar concept?
After searching similar apps, I found the "Ink Hunter" apps for tattoo previews; although those apps use symbols, not numbers, we can think of a number as a symbol as well.
Also, as this video (https://www.youtube.com/watch?v=9rXJcIE2Fcs) shows, each user draws the symbol in a different way and they still get it working.
I am using Unity3d and Vuforia.
Vuforia offers free sample Unity3D packages on its website, and there is one named "Text Recognition"; here's the tutorial link: https://www.youtube.com/watch?v=W3MK6nC5FWE
But unfortunately I couldn't make it work.
If someone has developed such functionality using these Vuforia sample projects, or has an alternative method, I need your help :)
thanks in advance moghes
Here's a tutorial our team created on text recognition using the Hololens and Vuforia with Unity: https://www.youtube.com/watch?v=WdMeHgD4fMY. In the first portion of the video, we show how to get text recognition working with just Vuforia and Unity - no Hololens required. For your application, just change the text to numbers.
I believe the biggest challenge you will have is the "hand-written" component. From our research, Vuforia prefers computer-generated, predefined font types.
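If Vuforia's text recognition turns out to be too restrictive for handwriting, one possible fallback (not something the answer above uses) is to run an on-device OCR pass yourself, for example with Google's ML Kit text recognizer, and only place the 3D content once the expected number is read. Handwriting accuracy still varies, so treat this Java sketch as an assumption to test rather than a proven fix; the frame and rotation are placeholders from whatever camera feed you use:

import android.graphics.Bitmap;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.text.Text;
import com.google.mlkit.vision.text.TextRecognition;
import com.google.mlkit.vision.text.TextRecognizer;
import com.google.mlkit.vision.text.latin.TextRecognizerOptions;

public class NumberReaderSketch {

    private final TextRecognizer recognizer =
            TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS);

    // 'frame' and 'rotationDegrees' are assumed to come from the AR camera feed.
    public void lookForNumber(Bitmap frame, int rotationDegrees, int expectedNumber) {
        InputImage image = InputImage.fromBitmap(frame, rotationDegrees);
        recognizer.process(image)
                .addOnSuccessListener((Text result) -> {
                    // Crude check: does the recognized text contain the expected digits?
                    if (result.getText().contains(String.valueOf(expectedNumber))) {
                        // Anchor and show the 3D object here.
                    }
                })
                .addOnFailureListener(e -> e.printStackTrace());
    }
}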