Is there a way to keep the effects turned on even after the target is no longer visible? I am using Spark AR Studio and I am able to successfully look at a target image and start the effects. But when the image is removed from the Facebook camera view, the effects stop! Is there a way to keep showing the effects?
Try changing the position of the object. This keeps it in the scene and also keeps the effect running.
After I build my Unity project and deploy it to the HoloLens, I have the following problem:
The splash screen appears, followed by a debugging window at the bottom. In the background there is a white net. However, you can't see any game objects. I've tested a lot but haven't found a solution. Visual Studio does not display any error messages. Here is roughly what I've looked at:
These are my modules. I'm using Unity version 2019.4.22f1 and the MRTK Foundation Toolkit 2.7.2.
My build settings
My project settings
I tried placing the objects in the middle of the camera view and changing the colors.
MRTK settings (I haven't changed most of these)
Main camera settings
My scene
When I start the scene I get this error in the console. I don't know if this has anything to do with my problem.
I have two possible solutions (no guarantee):
1. You could spawn the objects on input directly in front of the camera, and add a Debug.Log("object in front of you"); so you can confirm the code runs and narrow down the issue (see the sketch below).
2. If that doesn't work, I would try testing different types of materials, like you do with HDRP.
If that does not work either, I probably can't help you out for now.
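A minimal sketch of the first suggestion, assuming a standard Unity setup (the class name SpawnDebugCube, the input check, and the one-meter offset are illustrative, not from the original post):

using UnityEngine;

// Spawn a cube one meter in front of the main camera on any input and log,
// so you can confirm on the device that the code path actually runs.
public class SpawnDebugCube : MonoBehaviour
{
    void Update()
    {
        if (Input.anyKeyDown)
        {
            Transform cam = Camera.main.transform;
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = cam.position + cam.forward * 1.0f;
            cube.transform.localScale = Vector3.one * 0.2f; // 20 cm cube
            Debug.Log("object in front of you");
        }
    }
}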
It seems like your GameObject is too far away and is being hidden behind the spatial mesh. Please make the spatial mesh invisible by setting the Display Option property of the Spatial Mesh Observer settings to None; this item can be found under the Spatial Awareness profile of the MRTK profile.
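If you would rather do this from code than in the profile, something like the following should work in MRTK 2.x (a sketch based on the documented spatial awareness API; verify against your MRTK version):

using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.SpatialAwareness;
using UnityEngine;

// Hide the spatial mesh at runtime by setting every mesh observer's
// display option to None (the code equivalent of the profile setting).
public class HideSpatialMesh : MonoBehaviour
{
    void Start()
    {
        if (CoreServices.SpatialAwarenessSystem is IMixedRealityDataProviderAccess access)
        {
            foreach (var observer in access.GetDataProviders<IMixedRealitySpatialAwarenessMeshObserver>())
            {
                observer.DisplayOption = SpatialAwarenessMeshDisplayOptions.None;
            }
        }
    }
}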
I'm using Vuforia with Unity and I'm trying to track an ImageTarget (the one with stones from the Vuforia site). The problem is that sometimes (in 20-30% of cases) it takes 10-20 seconds to recognize the ImageTarget, even if I hold it very clearly in front of the camera. Another problem: when the ImageTarget disappears from the scene and reappears, sometimes it works perfectly and sometimes not. Any suggestion why this is happening? (I'm new to Vuforia.)
If it's a default project without your own scripts, then maybe the quality of your camera is not good enough to track image targets correctly all the time. Vuforia also sometimes has a bug with loading image target resources, but if the image gets tracked without you reloading the app, it is probably just the quality of your camera.
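One thing worth trying alongside this (a sketch, assuming the classic Vuforia Unity API; continuous autofocus is not mentioned in the answer above but often speeds up recognition when the camera's default focus mode is poor):

using UnityEngine;
using Vuforia;

// Enable continuous autofocus once Vuforia has started; a blurry camera
// feed is a common cause of slow image target recognition.
public class ContinuousAutofocus : MonoBehaviour
{
    void Start()
    {
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
    }

    void OnVuforiaStarted()
    {
        CameraDevice.Instance.SetFocusMode(CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
    }
}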
I'm using this GitHub project to create order-independent transparency in my projects. It's working fine with a normal camera [no OIT] [OIT], but when I try to use it with a VR setup, it just doesn't render any object on the "transparent" layer that uses the above project. [VR no OIT] [VR not working with OIT]
On top of just not rendering anything marked as transparent, the left and right cameras become offset in a way that they shouldn't, so you get a disorienting effect as if your eyes are in the wrong place.
I'm using the Oculus SDK, but I don't think it's that. The same thing happens if I just use a camera that feeds to the Oculus headset.
Here is the Unity project, if you want to see for yourself.
Thank you
EDIT: I was also occasionally getting a weird effect where the spheres would render all black and remain centered on my left eye, while the rest of the scene was rendered upside down. The right eye rendered everything not marked as transparent correctly. I believe I was using the single-camera setting on the OVR camera rig when this happened, but I couldn't get it to produce the same error when I went back to record these errors.
Additionally, I am using Unity 2018.2.0f2 and Oculus SDK v1.30.1.
I'm working with the Google VR Unity SDK and I'm trying to create a VR application where users can switch between multiple environments (places). I want them to switch to a different environment just by pushing down the magnetic sensor of the Cardboard, pointing anywhere. My problem is that every link (like this one) I've found works with object selection. I've tried adding an Event Trigger to my Main Camera and adding a Mesh Collider to my building, but neither worked.
So, is it possible to listen for the magnetic sensor pushdown in the full scene without having to select an object?
Turns out it's simpler than I thought.
void Update() {
    if (Input.GetButtonDown("Fire1")) {
        // some code
    }
}
The thing is, Google VR removed magnetic button support in version 0.9.0, and I was using 1.0.3. So if you want to implement a trigger for the Cardboard's magnetic button, you need to use v0.8.5.
You could put up a Canvas attached to the camera in World Space, so that it always stays in the line of sight. Add a Button to the canvas at the location of the gaze input cursor, and you should always hit it when triggering.
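A rough sketch of that setup in code (the class and field names are illustrative, and the Canvas and Button would normally be wired up in the Inspector):

using UnityEngine;
using UnityEngine.UI;

// Parent a World Space canvas to the camera so it always stays in the line
// of sight, with a Button placed where the gaze cursor sits.
public class GazeCanvasSetup : MonoBehaviour
{
    public Canvas worldSpaceCanvas; // Render Mode set to World Space
    public Button gazeButton;       // child of the canvas, centered on the gaze cursor

    void Start()
    {
        worldSpaceCanvas.transform.SetParent(Camera.main.transform, false);
        worldSpaceCanvas.transform.localPosition = new Vector3(0f, 0f, 2f); // 2 m ahead
        worldSpaceCanvas.transform.localRotation = Quaternion.identity;

        gazeButton.onClick.AddListener(() => Debug.Log("Trigger pressed"));
    }
}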
If I create a blank scene, add a cube, attach an Animation Controller to it, then create a blank Animation, add it to the Animation Controller, and run the game in the editor, all is good.
If I then edit the Animation to spin the cube and run in the editor, all is still good.
My problem is that if I then push this to my attached Android device, the cube is not rendered, even though it is rendered in the editor.
The issue has something to do with the animation clip itself, as the cube will render on the Android device if the animation clip is empty.
If there is somewhere practical to upload the sample scene file, let me know and I'll do that.
It turns out this was a bug introduced in Unity 3D 5.3.0. I submitted it to Unity and they have just responded:
Thanks for reporting the issue. We have fixed this and the problem should not appear in the latest version (5.3.2p3). If you are still able to reproduce it on the latest version of Unity, please respond to this email.