Can both image targets and object targets be added to one single database in Unity Vuforia?

I am developing an Android app where I have to train my app to recognize two images and four objects. I created one single database on the Vuforia developer site, where I added all the image and object targets, and created the Unity package. Now neither the images nor the objects are getting recognized.

Probably the problem is the same for objects and images.
I think you should share some more info about what you're doing, as well as some meaningful code implementing it.
Without that, I would suggest:
verify that the database and trackables are loaded and active at runtime (see the sketch after this list)
if so, check in the console that the trackables are being tracked by Vuforia
if so, verify the code enabling your augmentations
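For the first step, here is a minimal sketch of loading and activating a database from code, assuming the classic Vuforia Unity API (class names have changed across SDK versions, and "MyDataBase" is a placeholder for your database's name). Note that the same ObjectTracker handles both Image Targets and Object Targets, so a single device database containing both should be fine:

using UnityEngine;
using Vuforia;

public class DataSetLoader : MonoBehaviour
{
    void Start()
    {
        // Wait until Vuforia has started before touching the trackers.
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(LoadDataSet);
    }

    void LoadDataSet()
    {
        // One ObjectTracker tracks both Image Targets and Object Targets.
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();

        if (dataSet.Load("MyDataBase")) // placeholder database name
        {
            tracker.Stop();
            if (!tracker.ActivateDataSet(dataSet))
                Debug.LogError("Could not activate dataset.");
            tracker.Start();
        }
        else
        {
            Debug.LogError("Could not load dataset.");
        }
    }
}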
Please confirm whether you have run through these steps already and what results you got. I can share some code and further tips once the issue is a little bit more specific.
Regards

Related

Maneuver issues in Turn By Turn Navigation with HERE maps in Flutter

We have to use HERE maps' turn-by-turn navigation feature in one of our Flutter applications. We have added billing in the developer account and created the necessary keys.
When we try the HERE map examples they provide, we get everything except the maneuver instructions that show the user when to turn right/left/go straight for some distance, etc.
I'm new to this and have no idea how to get these; we never get events on the listener. Am I missing something?
This is how it looks right now (screenshot): the instruction panel just shows "Updating...". I think we should be getting the progress in this listener, but we are not:
_visualNavigator.routeProgressListener = Navigation.RouteProgressListener((routeProgress) { });
Thanks in advance.
Please look into the provided example app. It shows how to get the maneuver actions.
Your screenshot shows a different app, so make sure everything works with the example app first. The app offers a simulation mode, which should work. If you run the example app with real GPS updates, you may need to go outside and move around to get location updates. This should also work.
If this still does not work, it could mean either that your device has an issue getting GPS locations (some iPads, for example, lack support) or that you have disabled location updates. You can cross-check this by trying the positioning_app example from the same repository, which shows how to get location updates.
A last point may be to clarify which events you get and which you miss: there are multiple event listeners providing real-time information during guidance. If you only have an issue with maneuvers, then most likely you can solve it by strictly following the code of the example app (see the sketch below).
Note that previous HERE SDK versions, prior to HERE SDK 4.13.0, provided only empty maneuver instruction texts during guidance when they were taken from the route instance. Make sure to take this information from the VisualNavigator instead.
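For reference, a minimal sketch of that pattern, based on the HERE SDK 4.x for Flutter example app (exact class and field names may differ between releases):

_visualNavigator.routeProgressListener =
    RouteProgressListener((RouteProgress routeProgress) {
  // The upcoming maneuvers of the current route; the first entry is
  // the next maneuver the user has to take.
  List<ManeuverProgress> nextManeuvers = routeProgress.maneuverProgress;
  if (nextManeuvers.isEmpty) return;
  ManeuverProgress next = nextManeuvers.first;

  // Take the maneuver data from the VisualNavigator, not from the Route,
  // since older SDK versions returned empty texts from the route instance.
  Maneuver? maneuver = _visualNavigator.getManeuver(next.maneuverIndex);
  if (maneuver == null) return;

  print('Next action: ${maneuver.action}, '
      'in ${next.remainingDistanceInMeters} m: ${maneuver.text}');
});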

HoloLens 2 Research Mode with Unreal - How?

I'm new here, so feel free to give tips where needed. I am running into trouble using the Unreal engine combined with the HoloLens 2.
I would like to access the special black/white cameras of the HoloLens, for tracking purposes. These are normally not accessible. However, they can be activated by using the “perceptionSensorsExperimental” capability. This should be possible, since it also works with Unity: https://github.com/doughtmw/HoloLensForCV-Unity
I have tried to add the capability in the Unreal project settings: Config\HoloLens\HoloLensEngine.ini -> "+RescapCapabilityList=perceptionSensorsExperimental". The project still builds as expected, but I noticed that it doesn't matter what I add here. Even something random like "+abcd=efgh" doesn't break the build.
However, if I add "+CapabilityList=perceptionSensorsExperimental", I get: Packaging (HoloLens): ERROR: The 'Name' attribute is invalid - The value 'perceptionSensorsExperimental' is invalid according to its datatype 'http://schemas.microsoft.com/appx/manifest/types:ST_Capability_Foundation' - The Enumeration constraint failed. I conclude: 1.) I'm making the changes in the right file. 2.) The right schema needs to be configured in order for "+RescapCapabilityList=perceptionSensorsExperimental" to work as expected.
My question is: how do I add the right schema to my Unreal project (like in the Unity example referenced above, which uses "http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities")? I cannot find any example, and I cannot find any proper place to put it - not in the settings, not in the xml/ini files. Clearly, I am missing something.
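For clarity, my understanding is that the packaged AppxManifest.xml needs to end up looking roughly like this, with the rescap namespace from the Unity example declared on the Package element (this is my assumed target state, not something Unreal generated for me):

<Package
    xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
    xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
    IgnorableNamespaces="rescap">
  <Capabilities>
    <rescap:Capability Name="perceptionSensorsExperimental" />
  </Capabilities>
</Package>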
Any thoughts are much appreciated!
Update: we released the HoloLens-ResearchMode-Unreal plugin.

What do you lose by ejecting a React app that was created using create-react-app?

I'm interested in using Hot Module Replacement with a newly created React app.
Facebook Incubator's create-react-app uses webpack 2, which can be configured to support HMR; however, in order to do so, one needs to "eject" the create-react-app project.
As the documentation points out, this is a "one way" operation and cannot be reversed.
If I'm to do this, I want to know what I might be giving up. I've been unable to locate any documentation that explains the potential drawbacks of ejecting.
The current configuration allows your project to get updates from the create-react-app core team; once you eject, you no longer get this.
It's kind of like pulling in Bootstrap CSS via a CDN as opposed to downloading the source code and injecting it directly into your project.
If you want more control over your webpack, there are ways to configure/customize it without ejecting:
https://www.npmjs.com/package/custom-react-scripts
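As an aside, once the dev server supports HMR (whether via ejecting or via custom-react-scripts), the opt-in itself is webpack's standard module.hot API rather than anything CRA-specific. A sketch for a typical src/index.js entry point (assuming a React 16-era setup with ReactDOM.render):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

const root = document.getElementById('root');
ReactDOM.render(<App />, root);

// Re-render with the updated App module on change
// instead of doing a full page reload.
if (module.hot) {
  module.hot.accept('./App', () => {
    const NextApp = require('./App').default;
    ReactDOM.render(<NextApp />, root);
  });
}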

Problems getting OSVR to initialise the HMD Display with Oculus DK2

I am using an Oculus DK2 (v0.8) and the OSVR SDK. I'm having a problem getting the HMD to run/display anything.
The Oculus samples and the OSVR samples do work, however, so the osvr_server seems to run fine.
My application itself renders a test scene just fine when not using a HMD.
I tried two approaches:
First, just creating an OSVR context and then a DisplayConfig object. This seems to work, but DisplayConfig::checkStartup() fails (I do this in a loop, calling update on the context while the checkStartup call is failing). I used OpenGLSample.cpp as a guide for this; the relevant pattern looks roughly like the sketch below.
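Sketch of that pattern, following OpenGLSample.cpp from the OSVR examples (simplified; the app identifier string is a placeholder):

#include <osvr/ClientKit/Context.h>
#include <osvr/ClientKit/Display.h>

int main() {
    osvr::clientkit::ClientContext context("com.example.hmdtest");

    // The display config may take a few update cycles to become ready.
    osvr::clientkit::DisplayConfig display(context);
    if (!display.valid()) {
        return -1; // no display config at all
    }
    while (!display.checkStartup()) {
        context.update(); // keep pumping the context until startup completes
    }

    // ... set up rendering and iterate eyes/surfaces via display.forEachEye(...)
    return 0;
}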
Second, I tried using a RenderManager, but the call to createRenderManager results in a crash within RenderManager.dll. I get the same crash whether I create the graphics library object myself or let the library create it.
I am quite stuck now: since the demos and examples do work, I have no idea where to look for the error on my side. Creating the context works, and querying interfaces as well, but the crash with createRenderManager is beyond me.
Does anyone have any hints or ideas what the problem could possibly be?
Regards and thanks in advance
pettersson
RenderManager should not crash during open. There have been a couple of bug fixes recently to avoid that happening, and the latest RenderManager binaries, libraries and header files are available with the SDK download from http://osvr.github.io/using/ along with updated copies of the example programs.
When something goes wrong in RenderManager, it usually reports that to standard error. We're moving that to a logging interface, but for now it should show up on the console. Posting an output of that as an issue at https://github.com/sensics/OSVR-RenderManager/issues is a good way to let the developers know that there is a problem. Of course, providing the same sort of information you provided here will be helpful as well.

Create an App within an App

I am being presented with a very interesting project. The task that I must complete is to figure out a way to allow a partner to be involved in an app without giving up their source code. The code will be included in the main bundle of the app, so it is not loaded dynamically. The partner has a fully functional app that needs to run in a window within the main app at the appropriate time. I know having the partners create a web app would be ideal, since it would be treated like a webpage, but I am more concerned with code that must be written natively in iOS.
My question is: what is the best way to go about solving this? In theory it is like an app within an app. Is there a way, if they gave me their .app file, to include it in the bundle and then run it when I catch a certain event? Should I have the partners put their code in a framework and then import it into the shell project? What is the best way to approach this problem?
If your second party doesn't want to provide you with the source code, why don't they compile it to object code and then let you simply link it into your app?
By the way, at least on official (non-jailbroken) iDevices, apps can't 'embed' or 'open' one another in such a way - you can open another app programmatically only if (1) it's a separate app and (2) it has a registered custom URL scheme associated with its bundle.
Is there a way if they gave up their .app file I can include this in the bundle and then run it when I catch a certain event?
No, you'll want to have them create a library instead. You can then include that library in your project.
Creating a library is as simple as:
Choose File->New...->Project... in Xcode.
Select the "Cocoa Touch Static Library" project template.
Add your code.
Build.
The result is a static library that you can add to your application(s). The library will contain the compiled code that you added, but doesn't include the source code. The library developer should provide whatever header files are necessary to use the code in the library.
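For illustration, the partner could ship a header like the following alongside the compiled libPartner.a; PartnerKit and its method are hypothetical names standing in for whatever API the partner actually exposes:

// PartnerKit.h - hypothetical header shipped with a hypothetical libPartner.a
#import <UIKit/UIKit.h>

@interface PartnerKit : NSObject
// Presents the partner's packaged UI from the host application.
+ (void)presentPartnerExperienceFromViewController:(UIViewController *)host;
@end

The host app links the library, imports the header, and calls +presentPartnerExperienceFromViewController: when the trigger event fires.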
An app within an app is possible; however, it requires a common data framework that allows one app to reference the same data without confusing the source and destination of the data.
Such a framework allows one app to interact with another app by referencing the same data.