This project is a combination of a few things I've found. The first is embedding webpages in a 3D Three.js scene:
http://adndevblog.typepad.com/cloud_and_mobile/2015/07/embedding-webpages-in-a-3d-threejs-scene.html
The second is the custom CSS3DRenderer stereo effect I found on Stack Overflow:
Three.js StereoEffect cannot be applied to CSS3DRenderer
The effect is almost exactly what I wanted, except that instead of simply re-drawing the output from one eye to the other, it loads two separate instances of the page, which rather defeats the point of going VR...
Any ideas? Here's the file:
https://drive.google.com/open?id=1UmXmdgyhZkbeuZlCrXFUXx-yYEKtLzFP
My goal was to render the Ace cloud editor in a VR environment using the stereo effect algorithm. It seems like it could be a fun new way to write code if you had a wireless keyboard/trackpad and a VR headset: the camera would be locked in one place, but the mirrored view is still needed for the lenses...
https://www.ebay.com/itm/Wireless-Bluetooth-Keyboard-with-Touchpad-for-iOS-Android-Smart-phone-Tablet-PC/112515393899?_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D1%26asc%3D20161006002618%26meid%3D50ca4e61c27345df85a8461fb1a0e6d5%26pid%3D100694%26rk%3D7%26rkt%3D30%26sd%3D222631353709&_trksid=p2385738.c100694.m4598
https://www.walmart.com/ip/ONN-Virtual-Reality-Headset-White/187088616?wmlspartner=wlpa&selectedSellerId=0
Related
I'm trying to create a ship simulator in Unreal Engine (4.27). I tried to use the Water plugin and its content as a starting point, to shorten the time needed to get something working.
I created an empty game project, loaded Water/Maps/WaterTestMap, then added an instance of Water/Blueprints/BP_BuoyancyExample above sea level and started the simulation... it just sinks like anything else. I tried modifying some of the Buoyancy Data parameters, but it seems that no forces are applied to the body at all.
I tried again on 4.26.2, using the Water plugin content, and there it seems to work. But if I create a floating cube Blueprint that mimics BP_BuoyancyExample (in the same level as above), it sinks every time… the only way to make it float is to use EditorCube as the static mesh.
I cannot work out where the fault is, but it really sounds like a wonderful bug…
I am experimenting with Unity's cameras for a school project. My plan was to change the way coordinates are projected onto the projection plane into a projection onto a sphere using spherical coordinates. But getting at the actual math behind the cameras has been a bit of a pain.
My first approach involved manipulating render textures, but in principle that won't work, because by that point the camera has already rendered the texture I would be modifying.
Next I tried to get into the code for the base camera, perhaps to make a copy of the camera that I could modify without touching the original, but then I ran into this, the code that sets all of the camera's parameters for the editor, where I saw references to a few .h files.
Where can I access these files? I found files with the same names, but they weren't related to Unity. They were also different from each other, which makes me think the file above isn't referring to some industry standard, though it might be.
Unity is generally considered to be made up of two parts: the managed front end and the unmanaged back end. The front-end code (written in C#) can be studied on GitHub here. The unmanaged code (written in C++) is proprietary and isn't freely available.
Unity is fairly modifiable, but there are a number of rules you have to follow.
A camera workaround might be to work with the Scriptable Render Pipeline (e.g. URP). But I'm not sure this actually addresses what you're trying to achieve.
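If the goal is just to get at the projection math without touching engine source, one place you can hook in from a script is Camera.projectionMatrix, which is publicly settable. Below is a minimal sketch of that approach (the component name and field of view are my own placeholders; this is an illustration, not the SRP route mentioned above), with the caveat that a 4x4 matrix can only express linear/projective maps, so a true spherical projection would still need a shader-based pass rather than just a matrix swap.

// Minimal sketch: overriding a Unity camera's projection from a script
// instead of editing engine source.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class CustomProjection : MonoBehaviour
{
    public float fieldOfView = 90f;   // placeholder value

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        // Start from a standard perspective matrix...
        Matrix4x4 p = Matrix4x4.Perspective(fieldOfView, cam.aspect,
                                            cam.nearClipPlane, cam.farClipPlane);
        // ...then modify it as needed. Note: this stays a linear/projective map;
        // a real spherical projection needs something like render-to-cubemap
        // plus a resampling shader (or a custom SRP pass).
        cam.projectionMatrix = p;
    }
}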
I'm tasked with developing an application that emulates augmented reality inside a virtual reality application. We are using Google Cardboard (Google VR), and we want to show camera images to the user (never mind the actual camera setup; assume I already have the images).
I'm wondering about ways to implement it. Some ideas I had:
Substituting the images rendered for each eye with my custom camera images.
Here I have the following problems: I don't know how to actually replace the images that are rendered to the screen, let alone per eye, and I don't know how to then show some models overlaid on top of the image (I would assume by using the stencil buffer?).
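A rough sketch of what idea 1 might look like in Unity's built-in pipeline, assuming separate left/right eye cameras (as in older Cardboard setups); the component and field names are placeholders. A command buffer blits the supplied image into the eye's render target before opaque geometry is drawn, so the regular 3D models end up composited on top of the image without any stencil tricks:

using UnityEngine;
using UnityEngine.Rendering;

// Sketch only: draw a per-eye background image behind the normal 3D scene.
[RequireComponent(typeof(Camera))]
public class EyeBackgroundImage : MonoBehaviour
{
    public Texture feedTexture;   // placeholder for the image you already have for this eye

    void OnEnable()
    {
        var cb = new CommandBuffer { name = "Eye background image" };
        // Copy the image into the currently active render target
        // before opaques are rendered for this camera.
        cb.Blit(feedTexture, BuiltinRenderTextureType.CurrentActive);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
    }
}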
Placing two planes in front of the camera with the custom images rendered onto them.
In this case I'm not sure about the comfort of the user experience, as the planes would most likely have to be placed very close to the cameras so that each eye only sees its own plane and not the other. That seems like it could strain your eyes, because they would be converging on something that is very close to you.
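And a sketch of idea 2, assuming two eye cameras and two user-defined layers named "LeftEyeOnly" and "RightEyeOnly" already exist in the project settings (the names and distance are placeholders). Each eye camera gets a textured quad parented in front of it, scaled to fill its view, and the culling masks hide the other eye's quad:

using UnityEngine;

// Sketch only: one full-view quad per eye camera, visible only to that eye.
public class PerEyeQuads : MonoBehaviour
{
    public Camera leftEye, rightEye;          // assumed separate eye cameras
    public Texture leftImage, rightImage;     // the images to show
    public float distance = 0.5f;             // metres in front of each eye

    void Start()
    {
        Setup(leftEye, leftImage, "LeftEyeOnly", "RightEyeOnly");
        Setup(rightEye, rightImage, "RightEyeOnly", "LeftEyeOnly");
    }

    void Setup(Camera eye, Texture image, string ownLayer, string otherLayer)
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.layer = LayerMask.NameToLayer(ownLayer);
        quad.transform.SetParent(eye.transform, false);
        quad.transform.localPosition = Vector3.forward * distance;
        // Scale the quad so it fills the eye's view at that distance.
        float h = 2f * distance * Mathf.Tan(eye.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localScale = new Vector3(h * eye.aspect, h, 1f);
        quad.GetComponent<Renderer>().material.mainTexture = image;
        // Hide the other eye's quad from this camera.
        eye.cullingMask &= ~(1 << LayerMask.NameToLayer(otherLayer));
    }
}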
Somehow I haven't found a project that tries to achieve something like this, especially with all the Windows Mixed Reality material polluting the search results.
You can use Vuforia's digital eyewear support; here is the documentation for it.
There is also a simple tutorial on YouTube.
I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough; the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D-printed objects stayed stationary, you could scan an ADF and should get good-quality tracking; as long as all the objects stay still there should only be a little drift.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (I'm unsure: is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, since users will mostly be looking at the AR tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
To change the cursor I am using this:
UnityEngine.Cursor.SetCursor(CursorTexture,
new Vector2(CursorTexture.width, CursorTexture.height) * 0.5f,
CursorMode.ForceSoftware);
I want to animate the cursor when something happens.
Is it possible to animate the cursor using Cursor.SetCursor?
You can do it the way LearnCocos2D says. The problem is that it will flicker a lot, and the other issue you will most likely run into is that the mouse pointer will be really sluggish. This is because software mouse pointers are not rendered by the hardware, so the pointer is always a couple of frames behind the user's actual input on the pointing device.
Also, for the animated texture to work in a web browser, you need to make sure you export the shaders you are using, for example by putting them in a Resources folder of your web player project, since many shaders are not included in the web build by default. It should work if you are using a standard diffuse shader, but a mouse pointer most likely uses transparency, so it may not work out of the box. You'll need to find the actual shader being used and export it manually for your build.
Unity really ought to support hardware-animated cursors, at least on PC, but sadly it doesn't...
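For completeness, here is a minimal sketch of the repeated-SetCursor approach described above (the frame textures, frame rate and component name are placeholder assumptions):

using System.Collections;
using UnityEngine;

// Sketch only: animate the software cursor by swapping its texture at a fixed rate.
public class AnimatedCursor : MonoBehaviour
{
    public Texture2D[] frames;            // cursor animation frames, assigned in the inspector
    public float framesPerSecond = 10f;

    IEnumerator Start()
    {
        int i = 0;
        while (true)
        {
            Texture2D frame = frames[i];
            // Hotspot in the middle of the texture, as in the snippet above.
            Cursor.SetCursor(frame,
                             new Vector2(frame.width, frame.height) * 0.5f,
                             CursorMode.ForceSoftware);
            i = (i + 1) % frames.Length;
            yield return new WaitForSeconds(1f / framesPerSecond);
        }
    }
}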