I'm very new to Unreal Engine 4 and have been following an FPS guide online!
I currently have an AK and an M4 in the game and can switch between the two using 1 / 2 on the keypad. I set up the first aim-down-sights camera on the AK and it works well! However, if I equip the M4 and aim down sights, the camera is no longer in the correct spot and doesn't line up with the iron sights at all. So I added another camera called M4A1 ADS Camera, but I can't figure out how to switch to that camera when aiming down sights, and then back to the AK camera when using that weapon.
Is there a better way of doing this or any tutorials / tips to help with the process for the future?
If I want to try and answer your question directly, I'd say you should add a switch case or branches to check which weapon is equipped at the time.
But a better way to do this would be to add a camera to your weapon blueprint; then you could access the camera from the weapon directly (assuming you have a master weapon class). This way you configure one ADS camera per weapon and align it properly in its own blueprint.
You can use the "Set View Target with Blend" function to switch between your cameras; it also gives you control over the blend speed and other blending settings.
I know this is old, but even cleaner than Deimos's suggestion would be to have an ADS camera component on your character and attach it to a socket you create on each of your weapons. You can adjust the socket position and rotation on each weapon's skeleton, and then all you do from the character side is attach the camera to the weapon's socket any time you equip one.
Basically, I am working on a mixed-reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
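To illustrate, here is a rough sketch of the trigger check I have in mind (assuming the tracked hand objects end up with colliders and a "Hand" tag, which I would have to set up myself):
using UnityEngine;

// Attach to the virtual bounding box (a Collider with "Is Trigger" enabled).
// Note: trigger events only fire if at least one of the two objects
// involved also has a Rigidbody (kinematic is fine).
public class BoundingBoxTracker : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        // Assumes the hand objects are tagged "Hand" in the project.
        if (other.CompareTag("Hand"))
        {
            Debug.Log("Hand entered the bounding box at " + transform.position);
        }
    }
}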
I am also looking into using spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to the HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance are greatly appreciated. Thank you!
EDIT / UPDATE 11/30/2020:
Hernando commented below suggesting QR codes. Assume for this project that we are not allowed to use QR codes, and that we want orientation data as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation for finding it quickly with the HL2 device. I have seen the QR approach in multiple venues for VR LBE experiences like the one being described here; the QR code just sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you can possibly pair the device, and if the device has location information, it can possibly transmit where it is. Based on all of the above, this would be a custom solution and highly dependent on the controller's abilities if QR codes are out of the equation. I have seen some controller solutions start the user experience with something like touching the floor to get an initial reference point, or alternatively always picking up the gun from a specific location in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Are you allowed to attach multiple QR codes to the controller? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other such technologies, you will need an Azure service or a third-party library; for more information, see the Computer Vision documentation.
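As a rough sketch of the QR route (assuming the Microsoft.MixedReality.QR package is installed; resolving the code's pose from its SpatialGraphNodeId depends on your XR plugin and is only noted in a comment):
using Microsoft.MixedReality.QR;
using UnityEngine;

public class ControllerQRLocator : MonoBehaviour
{
    private QRCodeWatcher watcher;

    private async void Start()
    {
        // The app needs webcam permission for QR tracking.
        var access = await QRCodeWatcher.RequestAccessAsync();
        if (access != QRCodeWatcherAccessStatus.Allowed)
            return;

        watcher = new QRCodeWatcher();
        watcher.Added += (sender, args) =>
        {
            // args.Code.SpatialGraphNodeId identifies the code's coordinate
            // system; turn it into a Unity pose with your XR plugin
            // (e.g. SpatialGraphNode in Microsoft.MixedReality.OpenXR).
            Debug.Log("Found QR code: " + args.Code.Data);
        };
        watcher.Start();
    }

    private void OnDestroy()
    {
        watcher?.Stop();
    }
}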
I have a Unity project. It was developed for VR headset training. However, users feel strongly dizzy after playing the game. Now I want to use 3 monitors to replace the VR headset, so the users can look at the 3 monitors to drive. Is it a big effort to change the code to achieve this? What do I have to do so that the software runs on the monitors instead?
Actually it is quite simple:
See Unity Manual Multi-Display
In your scene, have 3 Camera objects and set each one's Camera.targetDisplay via the Inspector (the Inspector labels are 1-indexed: Display 1-8).
To make them follow the vehicle correctly, simply make them children of the vehicle object; then they are always rotated and moved along with it. Now position and rotate them according to your needs, relative to the vehicle.
In PlayerSettings → XR Settings (at the bottom), disable Virtual Reality Supported, since you do not want a VR HMD to move the cameras; they should be controlled only by the vehicle transform.
Then you also have to activate the corresponding displays (0-indexed, where index 0 is the default monitor, which is always active); in your case e.g.:
private void Start()
{
    // Display 0 (the primary monitor) is always active by default;
    // additional connected displays have to be activated explicitly.
    Display.displays[1].Activate();
    Display.displays[2].Activate();
}
I don't know exactly how the "second" or "third" connected monitor is defined, but I guess it should match the monitor numbering in the system display settings.
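If you prefer to do the whole setup from code rather than the Inspector, here is a minimal sketch (the three camera fields are assumptions you'd assign in the Inspector):
using UnityEngine;

public class MultiDisplaySetup : MonoBehaviour
{
    // Assign the three vehicle cameras in the Inspector.
    public Camera leftCamera;
    public Camera centerCamera;
    public Camera rightCamera;

    private void Start()
    {
        // In code, targetDisplay is 0-indexed (the Inspector shows "Display 1-8").
        centerCamera.targetDisplay = 0;
        leftCamera.targetDisplay = 1;
        rightCamera.targetDisplay = 2;

        // Display 0 is always active; the others must be activated explicitly.
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }
    }
}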
I need a solution for my problem, please.
I made a character in Unity 3D based on AR. The problem is that when I build the project to my Android phone, I can't find the character, or I have to turn the camera around to search for where he is.
I need to make this object appear in the center of my camera. How can I do this, please?
The question isn't the clearest, but I suggest attaching the camera to the GameObject you want to see in the editor.
This link is to a Unity tutorial:
https://unity3d.com/learn/tutorials/projects/2d-ufo-tutorial/following-player-camera
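In script form, the pattern from that tutorial is roughly this (a minimal sketch; put it on the camera and drag the object to follow into target):
using UnityEngine;

public class FollowCamera : MonoBehaviour
{
    public Transform target;  // the object to keep in view
    private Vector3 offset;

    private void Start()
    {
        // Remember the initial camera-to-target offset.
        offset = transform.position - target.position;
    }

    private void LateUpdate()
    {
        // LateUpdate runs after the target has moved this frame.
        transform.position = target.position + offset;
    }
}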
Just re-read your question; I skimmed over the AR part at first ... so you don't want to move the camera, rather you want to move the object.
Finding the middle of the screen in world space is what you're asking for.
// Camera.main is the camera tagged "MainCamera"; z = 100 is the distance from it.
Vector3 screenCenterInWorld = Camera.main.ViewportToWorldPoint(new Vector3(0.5f, 0.5f, 100f));
Use your camera to find a point in the world that is in the middle of your view. You have several options; ViewportToWorldPoint, for example, as shown above. The viewport calls (0.5, 0.5) the middle of the screen, and Z is the distance from the camera's origin point.
You could alternatively cast a ray from your camera through the center of the screen and find a point on the ground to put your character at ... I would need to know more about your world setup to help further.
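A rough sketch of that raycast idea, assuming the ground has a collider and Camera.main is your AR camera:
using UnityEngine;

public class PlaceAtScreenCenter : MonoBehaviour
{
    public Transform character;  // the object to place in front of the camera

    private void Update()
    {
        // Ray through the middle of the viewport (0.5, 0.5).
        Ray ray = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        // Drop the character wherever the ray hits world geometry.
        if (Physics.Raycast(ray, out RaycastHit hit, 100f))
        {
            character.position = hit.point;
        }
    }
}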
My old answer is about how to move a camera ... I will leave it here in case it's useful to you.
[Edit Old Answer]
The other answer will work and is nice and light, but it has its limitations as your project advances.
I recommend Cinemachine for camera work; in this case, a simple Cinemachine virtual camera with its 'Look At' (and optionally its 'Follow') set is what you want.
https://unity3d.com/learn/tutorials/topics/animation/using-cinemachine-getting-started
Cinemachine tutorial above. In short, Cinemachine works with the concept of 'virtual cameras': lightweight, easy-to-use behaviours that describe how to 'frame' a shot, e.g. what to look at, how to move, etc.
Your real camera gets a Cinemachine 'brain', which simply listens to these virtual cameras and works out what to do with the real camera to make the shot happen. Getting to grips with this system will greatly improve your camera work and massively simplify it.
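For illustration, a minimal sketch of doing this from code (normally you would just set 'Look At' / 'Follow' in the Inspector; assumes the Cinemachine package is installed):
using Cinemachine;
using UnityEngine;

public class VCamSetup : MonoBehaviour
{
    public Transform player;  // what the shot should frame

    private void Start()
    {
        // The virtual camera only describes the shot; the CinemachineBrain
        // on the real camera moves the real camera to match it.
        var vcam = gameObject.AddComponent<CinemachineVirtualCamera>();
        vcam.LookAt = player;
        vcam.Follow = player;
    }
}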
Things like simply attaching the camera to the player object work, but they have big limitations that end up biting you in the backside eventually.
Alternatively, you can write a script to transform the camera based on your own custom logic. The drawback here is the 'ball of code' problem ... at first the logic is simple, but as you want more and more specific shots framed up, it quickly turns into a spaghetti monster.
I am making a first-person shooter game using Unity 3D, which will be multiplayer in the future, so I want to use a full body for my FPS.
I am having a problem placing the camera for my FPS body. When I use only hands, it works great.
Can anyone tell me which approach I should use for this?
1. Should I use two cameras, one for showing only the hands and the player's weapon, and one for showing the rest of the view?
2. Or is there another way?
I am using the Unity3D engine for my game development.
Just draw the hands, because unless your game allows you to look down and see your feet, a whole-body approach might be a bit of a waste of computing resources.
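That said, if you do go with the two-camera route from your option 1, the usual Unity pattern is a layer plus culling masks; a minimal sketch, assuming a layer named "FirstPersonView" exists for the arms/weapon meshes:
using UnityEngine;

public class FpsCameraSetup : MonoBehaviour
{
    public Camera worldCamera;   // renders the scene
    public Camera weaponCamera;  // renders only the arms/weapon, drawn on top

    private void Start()
    {
        int fpMask = LayerMask.GetMask("FirstPersonView");

        worldCamera.cullingMask &= ~fpMask;                // hide first-person meshes from the world camera
        weaponCamera.cullingMask = fpMask;                 // show only them on the weapon camera
        weaponCamera.clearFlags = CameraClearFlags.Depth;  // keep the world image, clear only depth
        weaponCamera.depth = worldCamera.depth + 1;        // render after the world camera
    }
}
Splitting the cameras this way also keeps the weapon from clipping into walls, which is the usual reason FPS games do it.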
I'm building an iPhone Core Motion game demo and would like to have a virtual "room" around the user. The user would use the phone with Core Motion to "look around" the room through the phone. Attached is an example.
I'm not looking for anything fancy. Four solid-color panels for the walls and two panels for the floor and ceiling would do; pretty much a large cube with its middle at the user's location.
What is the quickest way for me to create a room with box geometry, putting the user in the middle? Can this be done with UIKit objects, or do I need to use OpenGL to render the panels? Maybe there's some kind of game engine that I can use for these purposes?
I would want to rotate the room in the future.
Thank you for your input!
You won't be able to create a 3-dimensional environment without using OpenGL in some form. The best way to get started is to follow a good OpenGL tutorial such as this one. You could even take that tutorial's cube and put the camera inside it, and voilà, instant room. You would just need to add view-rotation logic from Core Motion and you would be set.