My university colleagues and I are developing a Virtual Reality project for university, using an Oculus headset, in which we create a scene where you can select, click, and drag different objects. You are supposed to stay stationary and move one of the controllers as if it were a mouse. However, we want to modify the behaviour of the controller so it better fits the 3D environment: when no object is selected, we want to interpolate the depth of the cursor according to the depth of the nearest objects. There is a paper we were shown in class that we are supposed to draw inspiration from, which achieved this kind of cursor behaviour with a normal mouse, but I can't seem to find any information on how they did it. Our final goal is to compare both ways of managing the scene and assess which one is better. We are using Unity with VRTK, as suggested by our professor, but we can't seem to access the code that defines how the pointer moves or behaves, and we are somewhat lost on where to go from here. Could someone help with this?
Here is the paper where they talk about it:
https://dl.acm.org/doi/pdf/10.1145/3491102.3501884
So far we have tried creating a simple scene, adding objects with different behaviours, and adding a controller instance, but we only seem to be able to modify the pointer's events, not its underlying movement behaviour.
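To make the goal concrete, this is roughly the behaviour we are picturing. It is a rough sketch of our own, not taken from the paper or from VRTK, and the transform references and the spherecast radius are placeholders:

```csharp
using UnityEngine;

// Rough sketch of the depth-adaptive cursor behaviour we are after.
// "pointerOrigin" (the controller transform) and "cursor" (the visual
// cursor object) are placeholders, not VRTK components.
public class DepthAdaptiveCursor : MonoBehaviour
{
    public Transform pointerOrigin;    // controller acting as the mouse
    public Transform cursor;           // visual cursor placed in the scene
    public float defaultDepth = 2f;    // depth used when nothing is nearby
    public float searchRadius = 0.5f;  // how far around the ray objects still count
    public float maxDistance = 20f;    // how far ahead we look
    public float smoothing = 10f;      // interpolation speed

    void Update()
    {
        Ray ray = new Ray(pointerOrigin.position, pointerOrigin.forward);
        float targetDepth = defaultDepth;

        // A spherecast lets objects near the ray (not only exactly under it)
        // influence the cursor depth.
        if (Physics.SphereCast(ray, searchRadius, out RaycastHit hit, maxDistance))
            targetDepth = hit.distance;

        // Interpolate the cursor depth instead of jumping to it.
        float currentDepth = Vector3.Distance(pointerOrigin.position, cursor.position);
        float newDepth = Mathf.Lerp(currentDepth, targetDepth, smoothing * Time.deltaTime);
        cursor.position = ray.GetPoint(newDepth);
    }
}
```

The idea is that the cursor drifts towards the depth of whatever is close to the ray instead of sitting at a fixed distance.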
Kind regards and thanks
I've just started my first Unity experience by porting a match-3 style game to Unity, and I want to be able to attach data to each available tile type specifying how likely that tile is to be dropped onto the board (and to be able to override that value for each level). There are a few match-3 tutorials floating around, but besides a tendency not to explain how to make things happen in Unity, I haven't found any that leverage the Tilemap, which seems a shame since it looks like it provides a fair bit of the necessary functionality. The only problem is that I don't see anywhere to attach data or a script to a base Tile, so I've been looking into whether going the Scriptable Tile / custom editor route would get me what I need; but then I run into the lack of instructions on how to get basic functionality into the custom editor (the ability to specify the underlying sprite seems rather fundamental, but I'm not finding anything).
Anyway, is the Scriptable Tile interface my best bet to get level-specific data attached to individual tiles, or should I be looking for a different way to get this data on there? Or should I just ditch the Tilemap and the functionality it does include out of the box altogether like all the tutorials seem to be doing?
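To make the question more concrete, this is roughly what I was imagining the Scriptable Tile route would look like. It is a sketch of my own, with a made-up class name and field, not something from a tutorial:

```csharp
using UnityEngine;
using UnityEngine.Tilemaps;

// Sketch of a custom tile asset that carries per-tile-type data.
// Tile already has a public sprite field, so only the extra data is added here.
[CreateAssetMenu(menuName = "Tiles/Weighted Tile")]
public class WeightedTile : Tile
{
    [Tooltip("Relative chance of this tile type being dropped onto the board.")]
    public float dropWeight = 1f;
}
```

Assets created this way can be painted with the Tile Palette like any built-in Tile, and each level could reference its own set of these assets to override the weights per level.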
I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object containing a 3D terrain and characters.
To recreate the 3D perspective and feel, I am using two projectors that will project onto a real-world object which exactly matches the Unity object. In order to do this I need to extract 2D views of the shape from Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each fed by a separate virtual camera in Unity, as covered in Unity's multi-display documentation.
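A minimal sketch of that setup, assuming two cameras in the scene and the projectors connected as additional displays (the camera references are placeholders):

```csharp
using UnityEngine;

// Sketch: activate the extra connected displays and route one camera to each.
// The two camera references are placeholders for the projector cameras.
public class ProjectorDisplays : MonoBehaviour
{
    public Camera projectorCameraA;
    public Camera projectorCameraB;

    void Start()
    {
        // Display 0 (the main display) is always active; additional
        // displays have to be activated explicitly.
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();

        projectorCameraA.targetDisplay = 0; // first projector
        projectorCameraB.targetDisplay = 1; // second projector
    }
}
```

Note that extra displays only become active in a built player; in the editor you can preview each camera through the Game view's target display dropdown.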
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo viewing, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: there's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, which might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors, with the projector built in, combined with special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea whether these devices are already on the market. However, I see the appeal of a simple projection mapping setup without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube.)
I am making a strategy game, and I need a tool that places objects on top of the terrain while I am dragging them in the Unity Editor during level design.
Basically I want to get a result like in this video:
https://www.youtube.com/watch?v=YI6F1x4pzpg
but I need it to work before I hit the Play button in Unity Editor.
Here is a tutorial
https://www.youtube.com/watch?v=gLtjPxQxJPk
where the author made a tool that snaps an object to the terrain height when a key is pressed. I need the same thing to happen automatically whenever I place an object over my terrain, and I want my tool to adjust the object's Y position automatically even while I am dragging it inside the editor.
Also just to clarify: I don't need grid snapping and I don't need this functionality during the gameplay. I just need to have a tool for my level design work.
Please give me a clue where to start with it.
Thanks!
There is an attribute you can apply to a class so that its regular event functions are also called while in edit mode: https://docs.unity3d.com/ScriptReference/ExecuteInEditMode.html
A simple approach would then be to apply this attribute to a dedicated component that regularly finds the relevant objects in the scene hierarchy, filters out the ones you want to snap, and enforces their Y position, as in the sketch below.
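A minimal sketch of that idea, assuming the objects to snap are marked with a (made-up) "Snappable" tag and the level uses a Unity Terrain:

```csharp
using UnityEngine;

// Runs in the editor as well, thanks to ExecuteInEditMode.
// Snaps every object tagged "Snappable" (a made-up tag) onto the
// active terrain's surface whenever the scene changes.
[ExecuteInEditMode]
public class TerrainSnapper : MonoBehaviour
{
    void Update()
    {
        Terrain terrain = Terrain.activeTerrain;
        if (terrain == null) return;

        foreach (GameObject go in GameObject.FindGameObjectsWithTag("Snappable"))
        {
            Vector3 pos = go.transform.position;
            // SampleHeight is relative to the terrain, so add the terrain's own Y.
            pos.y = terrain.SampleHeight(pos) + terrain.transform.position.y;
            go.transform.position = pos;
        }
    }
}
```

In edit mode, Update only runs when something in the scene changes, and dragging an object counts as a change, so this should be enough for level-design work.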
I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen people build AR-type examples where custom objects are placed in an area to interact with; another example would be directions where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting the data to place it within Unity in the correct location? I have an area set up, and the AR demo seems promising, but I don't want to place objects with a finger tap. What I am looking for is that when people walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the Mesh into my Unity Scene (I have to rotate it by 180° in the Y axis because of how I am saving the .obj files)
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them from scripts using Instantiate.
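For example (a minimal sketch; the prefab reference and the hard-coded position are placeholders):

```csharp
using UnityEngine;

// Minimal example of placing an object from code at a known position
// in the localised world space.
public class MarkerPlacer : MonoBehaviour
{
    public GameObject markerPrefab; // placeholder prefab reference

    void Start()
    {
        // Example values only; use whatever position/rotation you recorded.
        Instantiate(markerPrefab, new Vector3(1.5f, 0f, 2.0f), Quaternion.identity);
    }
}
```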
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.
First time posting on Stack, and everything looks promising so far! I have a somewhat complicated question, so I'll do my best to provide exact details of what I'd like to accomplish. I'm working with a third-person controller in Unity, and so far everything is going great. I've dabbled with basic up-and-down platforms; a little glitchy, but things work. Whenever my player runs across a mesh, I make sure the mesh collider is working and a Rigidbody is attached, set to Kinematic. Here's the kicker: my game has turning gears that the player can jump on. This works, except the player doesn't turn with the gear, which it should according to my gameplay. What would be the process for getting my character to interact with this animated mesh? I imagine it needs some sort of script which my nooby mind cannot fathom at this point in my Unity career. If anyone knows the solution, I would love some assistance; either way I'll keep plugging away at it. Thanks again!!
This is assuming that you're using the packages that ship with Unity3D, which it sounds like you are. After importing the Character Controllers package, you'll have a bunch of scripts in the Standard Assets\Character Controllers\Sources\Scripts folder in the Project view. There's a script in there called CharacterMotor.js; attach that to the same GameObject you're running ThirdPersonController on.
Essentially this script adds more interactivity between the character and the scene. There are several methods inside it that automatically move the character when it is in contact with a moving object (as long as the object has a collision mesh), basically by inheriting the object's velocity.
If your gear/cog wheel has a proper collision mesh set up, adding this script to your character should be all that you require.
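If the velocity inheritance alone doesn't make the character rotate with the gear, another common approach is to temporarily parent the character to whatever it is standing on. Here is a rough sketch, assuming a made-up "Gear" tag on the gear objects and a CharacterController on the player:

```csharp
using UnityEngine;

// Alternative approach: parent the player to the rotating gear while it is
// standing on it, so the player turns along with the gear. Attach to the player.
public class StickToGear : MonoBehaviour
{
    // Called whenever the CharacterController touches a collider while moving.
    void OnControllerColliderHit(ControllerColliderHit hit)
    {
        if (hit.gameObject.CompareTag("Gear"))
            transform.SetParent(hit.transform, true); // keep world position
        else
            transform.SetParent(null, true);          // detach on anything else
    }
}
```

Because the player becomes a child of the gear, it follows the gear's rotation, and it detaches again as soon as it touches anything else.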