I'm trying to store and display the user's game score in a leaderboard using Google Play Game Services. I'm following this guide: https://developers.google.com/games/services/android/init. The issue arises when I import the BaseGameUtils library into my project: all the AndEngine classes then show an error (for example, "Camera cannot be resolved") and my imports also show errors. Also, there are two BaseGameActivity.java classes (one in the AndEngine jar and one in the BaseGameUtils library). How do I resolve this? Please help.
I resolved it in the following way:
1. Changed the name of BaseGameActivity (from the BaseGameUtils lib) to GoogleBaseGameActivity.
2. Made it extend AndEngine's BaseGameActivity.
3. In my main game activity I extend GoogleBaseGameActivity instead of the AndEngine one.
Object-oriented programming, guys :)
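In code, the resulting hierarchy looks roughly like the sketch below. The class names GoogleBaseGameActivity and MyGameActivity are just illustrative placeholders, and AndEngine's abstract callbacks (onCreateEngineOptions(), onCreateResources(), onCreateScene(), onPopulateScene()) are omitted for brevity:

// GoogleBaseGameActivity.java -- the renamed copy of BaseGameUtils' activity,
// now built on AndEngine's activity instead of android.app.Activity.
import org.andengine.ui.activity.BaseGameActivity;

public abstract class GoogleBaseGameActivity extends BaseGameActivity {
    // The sign-in / GoogleApiClient plumbing copied from BaseGameUtils lives here,
    // so a connected client is available for score submission, e.g. via
    // Games.Leaderboards.submitScore(getApiClient(), leaderboardId, score).
}

// MyGameActivity.java -- the actual game activity extends the combined class,
// so it inherits both AndEngine's lifecycle and the Play Games helpers.
public class MyGameActivity extends GoogleBaseGameActivity {
    // AndEngine overrides (onCreateEngineOptions(), onCreateResources(), ...) go here.
}

With this single inheritance chain the name clash disappears, because only AndEngine's class keeps the BaseGameActivity name.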
I'd like the HoloLens to take in the camera feed and project an image over tracked images, but I can't seem to find a concrete way to do this online. I'd like to avoid using Vuforia etc. for this.
I'm currently using the AR Foundation Tracked Image Manager (https://docs.unity3d.com/Packages/com.unity.xr.arfoundation#2.1/manual/tracked-image-manager.html) to achieve the same functionality on mobile; however, it doesn't seem to work very well on HoloLens.
Any help would be much appreciated, thanks!
AR Foundation is a Unity tool, and its 2D image tracking feature is not supported on the HoloLens platform for now. You can refer to this link to learn more about feature support per platform: Platform Support
Currently, Microsoft does not provide an official library to support image tracking on HoloLens. That sort of thing is possible with OpenCV, though: you can implement it yourself or use a third-party library.
Besides, if you are using HoloLens 2 and using a QR code as the tracking image is an acceptable option in your project, I recommend using Microsoft.MixedReality.QR to detect a QR code in the environment and get its coordinate system. For more information, see: QR code tracking
I have to build an app like this:
https://www.youtube.com/watch?v=vetDCkbQGM4
It should simply detect the cockpit of a car and show information, for example "this is the air conditioning", "this is the switch for the radio". The targets will be predefined. Basically, the app should detect everything and show information about it.
Can I realize this with Vuforia? Which framework is suitable for this task?
I hope you guys can help me.
Cheers!
Since your targets are predefined, the simplest solution would be to use ArUco markers to get 3D world positions/rotations through your user's camera feed.
See the AR Marker Detector in the Unity Asset Store for an example. Vuforia uses 'VuMarks', which are more intricate versions of this.
If you can't add computer-readable labels to the real world for your project, then you are talking about real-time object recognition. That is a much harder problem and not yet easily solvable in Unity, as far as I know. It would require something like Google's Cloud Vision API. There is a Unity Cloud Vision project on GitHub, but I have no idea how well it works or what its capabilities are.
Yes, it is possible; a quick search will turn up several options. There are different SDKs/frameworks and Unity Asset Store packages available.
You can use the free Vuforia AR Starter Kit from the Asset Store to get your logic up and running, or you can use the free AR Toolkit. There are various tutorials that show how to implement these packages.
I would like to know if it is possible to make a 3D model of an entire building on my college campus. If I can make a 3D model of each room and then somehow combine all the rooms into a full 3D building, it would be a great project for my senior internship. Please direct me to the correct information, or give me instructions on how to use the Project Tango device to create a full 3D building. Ultimately, I want to use the Project Tango device to conduct indoor mapping using augmented reality.
Project Tango can export your scanned meshes to .obj files, and programs like 3ds Max allow you to import several .obj files and combine them.
To create a mesh of a room with Project Tango and export the files, you can use the Constructor app.
I created an AR application in Unity3D using the Vuforia extension. It runs in the Unity editor and on a Samsung Galaxy S3, but when I run it on an Asus TF300TG tablet I don't see the 3D object; it doesn't find the marker region. Why? What should I do to make it work?
My settings:
EDIT:
I get an error message. Unity says:
Failed to get device caps (0xc0110001).
Well, from your question it is unclear what steps you've taken to get your output, but here's an answer that explains a complete guide to getting a simple application working using Vuforia's Unity extension.
Specifically, make sure that on the AR Camera you've set the Data Set Load Behaviour. Under it you SHOULD see the name of the data set from the .unitypackage you imported; check the name and make it Active.
Also, the Image Target Behaviour should be set to Predefined, and the data set name and image name must be selected.
For more details, please refer to the link above. Good luck. :)
Cheers,
We got our hands on a Google Glass unit and are trying it out a bit. We want to create a barcode scanner using the ZXing library.
We imported these two classes: https://github.com/zxing/zxing/tree/master/android-integration/src/main/java/com/google/zxing/integration/android
and started the intent via:
IntentIntegrator integrator = new IntentIntegrator(MainActivity.this);
integrator.initiateScan();
but we get a scrambled camera image, like here:
Glass camera preview display is garbled
We tried several fixes but were unable to get the ZXing library to work with our project.
Best
The BarcodeEye project seems to have resolved this problem in their port of ZXing:
https://github.com/BarcodeEye/BarcodeEye
I used the above code as a base and abstracted some of it into an Android library, if that helps:
https://github.com/jaxbot/glass-barcode
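For context, the garbled preview on Glass is usually worked around by forcing a supported preview size and a fixed 30 fps range before the preview starts, which is the sort of adjustment such ports make. Below is a minimal sketch of that fix; the class and method names are just for illustration, and the values are the ones commonly reported for Glass XE, so they may need tuning:

import java.io.IOException;
import android.hardware.Camera;
import android.view.SurfaceHolder;

public final class GlassCameraPreview {
    // Opens the camera, applies preview parameters that Glass is known to render
    // correctly, and starts the preview on the given surface.
    public static Camera startFixedPreview(SurfaceHolder holder) throws IOException {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPreviewSize(640, 360);          // a preview size Glass handles without garbling
        params.setPreviewFpsRange(30000, 30000);  // lock the preview to 30 fps (values are fps * 1000)
        camera.setParameters(params);
        camera.setPreviewDisplay(holder);
        camera.startPreview();
        return camera;
    }
}

If you embed a scanner directly (instead of firing the IntentIntegrator intent), applying these parameters to the camera used by the ZXing capture code should make the preview usable on Glass.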