Aiming down sights in both third person and first person view - unreal-engine4

I am making a game in UE4 in which you can switch between TPP and FPP. For aiming down sights I thought of having a separate camera with a separate FOV, but that's not working.
I tried using a Do Once node and it worked on its own, but I can't integrate it into my setup.

Related

Unity2D: Mirror Multiplayer - How to view an opponent's screen in a match 3 game

I'm making my own match 3 multiplayer game. The concept is to have two people face off against each other by swapping tiles to make a line of the same type. I want to introduce multiplayer by connecting two players together and allowing each person to see their opponent's screen, as well as syncing their moves. So far, I have a simple match 3 game (I created one using different tutorials, mainly this playlist) and followed a simple multiplayer tutorial (Mirror) for a player to host or be a client. My problem is that I have no idea how to show both players their opponent's screen. I even found an example of what I want the multiplayer mode in my game to be like. Can anyone point me in the right direction, please and thank you.
Additional information:
I'm using Mirror for multiplayer.
I created a network manager GameObject and added the necessary components to it. I also added the game pieces to the 'registered spawnable prefabs' list and created an empty GameObject called Player for the player prefab.
Each game piece has a Network Transform and Network Identity component attached.
The player prefab object has a camera child under it too.
This is what I want my game to look like:
Overall, I want the players to view each other's screens:
As you can see, both players are connected; what I want to do is to allow each player to see their opponent's screen. Does anyone have an idea of how I can do it?
Thank you! :)

How should I handle a project with multiple scenes?

I'm trying to make this game using the approach of multiple scenes to make things more modular.
In my current case I have an "Initialization" scene which holds some global state objects and the object that controls the state machine of all the scenes in the game.
As for the other scenes, for now I've divided them into just two kinds: the base scenes (which for now contain everything besides UI) and their UI scenes (which basically have a Canvas and all the UI elements and UI-related scripts).
The confusion in my mind is simple, though: as I tried to make the UI scenes as modular and independent as possible, there are a lot of points of interaction between a base scene and its UI scene.
For the sake of illustrating this question, please take this problem I'm facing right now: I have camera animations that should be played in response to user input in the UI (for example, the click of a button should trigger a specific camera animation). The thing is: that camera is not in the UI scene. The way I'm resolving this right now is by creating a ScriptableObject which holds events for the important UI actions; the events are raised in the UI scene and subscribed to anywhere else. The same can occur in the opposite direction: the UI scene needs to react to many actions that happen in other scenes.
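That event-channel idea is essentially a shared object that exposes events: one side raises them, the other subscribes, and neither scene references the other directly. A minimal, engine-agnostic sketch of that shape (shown in Swift for consistency with the other snippets on this page; in Unity the channel would be a ScriptableObject asset raising C# events, and the names here are made up):

// Hypothetical event channel shared by the UI scene and the base scene.
final class CameraEventChannel {
    private var handlers: [(String) -> Void] = []

    // The camera side subscribes for animation requests.
    func subscribe(_ handler: @escaping (String) -> Void) {
        handlers.append(handler)
    }

    // The UI side raises an event when, e.g., a button is clicked.
    func raise(animation name: String) {
        for handler in handlers { handler(name) }
    }
}

// Both sides hold a reference to the same channel instance; they never
// reference each other.
let channel = CameraEventChannel()
channel.subscribe { name in print("camera: play animation \(name)") }
channel.raise(animation: "ZoomToBoard")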
Considering that the "camera animation" problem I explained above can happen with many other objects, if there is not a better way to handle that, wouldn't splitting a game into multiple scenes be just too much work for the benefit of modularity? And with that I also ask: am I handling this problem the right way?
If you want to keep things consistent between scenes, there are a few ways to do it.
PlayerPrefs lets you persist simple values between scenes (and between sessions); I don't need to do a whole tutorial here, look it up.
DontDestroyOnLoad lets you take an object and keep it alive across scene loads for the whole game. If you want a consistent camera, you can use DontDestroyOnLoad on one of your cameras and just delete the others in the other scenes.

Better way to implement a 2D platformer rooms system in Unity

I'm currently making a 2D platformer game with Unity. I'm going to build the game world out of rooms (like in Hollow Knight and many other metroidvania games). So my first idea is to have each room as a separate prefab with a virtual camera and exits linked to other rooms in the scene, and to have several scenes (something like each scene containing a set of "thematic" rooms).
I have another idea, but I'm not sure if it's going to work properly in terms of performance. The idea is simple: have a single game scene and instantiate/destroy game rooms dynamically and seamlessly. So the game will have the current room and all adjacent rooms loaded (with some depth maybe, i.e. all adjacent rooms up to depth R); when the player changes room, some new rooms are instantiated and others destroyed. This feels like a good idea, because after creating the dynamic room system you can just concentrate on creating and linking rooms. But I'm afraid it can lead to some performance problems (e.g. the game freezes when the player moves from one room to another if there is a big enough room nearby). And I guess there can be a lot more unexpected problems.
So it's kind of an open-ended question. What do you think about this "dynamic" approach? Is it worth trying? If you have experience building similar games, what design approach did you use?
Typically, creating and destroying objects in-game is a no-go due to performance issues.
According to my high school game dev teacher, a better way to do it is to preload everything outside the camera and just move the needed resources into view as needed for a randomly generated scene.
If you're looking for a static scene, I would just preload everything that I need for that specific scene.
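A rough sketch of that preload-and-reposition idea, kept engine-agnostic (shown in Swift for consistency with the other snippets on this page; in Unity the pool would hold GameObjects and move their Transforms, and every name here is invented):

// Hypothetical room pool: every room is created up front (parked off-screen),
// then moved into view when needed instead of being instantiated/destroyed.
struct Position { var x: Double; var y: Double }

final class Room {
    let name: String
    var position = Position(x: -10_000, y: -10_000)   // parked off-screen
    init(name: String) { self.name = name }
}

final class RoomPool {
    private var rooms: [String: Room] = [:]

    // Preload every room once, e.g. during a loading screen.
    func preload(names: [String]) {
        for name in names { rooms[name] = Room(name: name) }
    }

    // "Activating" a room just moves it into view; no allocation at play time.
    func show(_ name: String, at position: Position) {
        rooms[name]?.position = position
    }

    // "Deactivating" a room parks it off-screen again.
    func hide(_ name: String) {
        rooms[name]?.position = Position(x: -10_000, y: -10_000)
    }
}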

How can I only allow entering my virtual scene from a portal?

I have an application which will render an augmented reality scene and a portal through which you can walk into the scene. The scene is occluded from view by a plane, but if you walk through that plane, you "bust" into the virtual environment.
I'm not looking for code, but rather help on how to approach this problem. I want to make it so that the only way you can enter the virtual scene is by walking through the doorway that I've created. I first thought about tracking the location of the camera and making sure that you're very close to the entrance before you cross over the threshold to enable rendering, but it seems like if I do it this way the user would not be able to see through the doorway before approaching/entering the virtual scene.
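For reference, the camera-tracking idea described in the question could be sketched roughly like this (ARKit/SceneKit; portalNode, doorWidth and doorHeight are placeholders for whatever your own portal setup uses, and in practice you would latch the result the moment a legitimate crossing is detected rather than re-evaluating it once the user is inside):

import ARKit
import SceneKit

// Hypothetical check: has the camera crossed the portal plane, and did the
// crossing happen inside the doorway opening rather than through a "wall"?
func isCrossingThroughDoorway(frame: ARFrame, portalNode: SCNNode,
                              doorWidth: Float, doorHeight: Float) -> Bool {
    // Camera position in world space (translation column of its transform).
    let cam = frame.camera.transform.columns.3
    let worldPosition = simd_float3(cam.x, cam.y, cam.z)

    // Express that position in the portal plane's local coordinate space.
    let local = portalNode.simdConvertPosition(worldPosition, from: nil)

    // Negative z means we are on the far ("inside") side of the portal plane;
    // which sign counts as inside depends on how the portal node is oriented.
    let crossedPlane = local.z < 0

    // Only count the crossing if it happened within the doorway opening.
    // (Compare against the previous frame's side of the plane to catch the
    // exact transition, then latch an "isInside" flag in your own state.)
    let withinDoorway = abs(local.x) < doorWidth / 2 && abs(local.y) < doorHeight / 2
    return crossedPlane && withinDoorway
}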
First, look at the Stack Overflow post "How to create a Portal effect in ARKit just using the SceneKit editor?" for how to make the portal itself.
The robust way to prevent users from passing through virtual walls is to give the virtual walls the same configuration as the real walls (wherever a physical wall is, a virtual wall exists too).
You also need object detection tools. For precise positioning of your virtual walls over the real physical walls, use the Core ML framework with a pre-trained, small-sized mlmodel, along with ARKit classes like ARImageTrackingConfiguration or ARWorldTrackingConfiguration.
In case you have no opportunity to build virtual walls in the same configuration as the real walls, you can make the user's iPhone vibrate when they collide with a virtual wall. Here's some code:
import AudioToolbox.AudioServices
// Trigger the standard system vibration (does nothing on devices without a vibration motor).
AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)
// Alternative call that plays the same vibration as an alert.
AudioServicesPlayAlertSound(kSystemSoundID_Vibrate)
Hope this helps.
There are a few methods I can think of off the top of my head.
Make it so that when a person walks through a wall, the whole screen goes blank except for a message telling them that they need to back away out of the wall, and maybe an arrow to tell them what direction to move.
Make it so that bumping into a wall shifts the entire scene.
Do a combo of the two and ask them if they’d like to shift the scene when they run far into a wall.

Which design pattern to use for a 2D iPhone game?

To give a little background about the game: falling items float down from the top, and the objective is to flick/slide another object to hit them. If an item hits the ground you lose a life; you gain points for hitting falling items.
Here is where I'm a little confused. In O'Reilly's iPhone Game Development, they suggest having the AppDelegate inherit from a game state machine object and putting the main game loop in the AppDelegate. Nothing about MVC.
I was going to use MVC. I have all the objects identified for the models, and was going to use one controller to update each model and its corresponding view. Then I'd have a navigation controller in the AppDelegate and push certain controllers (play, instructions, stats) from the home screen, and have the game loop run in my gameViewController. I am using Chipmunk as a physics engine, by the way.
This is my first game, so I'm a little confused. I would greatly appreciate any advice on how to proceed. I would like to get the object-oriented design right from the start before jumping into code.
I don't think MVC is really what you want here. MVC could apply to your overall application state - i.e. a view for the menu, a view for the gameboard, etc. It doesn't fit well WITHIN the gameplay - at least just thinking off the top of my head.
Take a look at this post on gameDev. Lots of useful patterns from people smarter about this than I.
https://gamedev.stackexchange.com/questions/4157/what-are-some-programming-design-patterns-that-are-useful-in-game-development
My MVC goes something as follows. Each Game Object that is created is just a single Model: plain data with no logic attached. When the object is created it also gets a Brain, or controller, attached to it. Each created Brain is added to the Brain List. The Brain List updates each Brain, and the Brains change the Models.
To show something on screen, the Brain adds the Model to the Scene. The Scene keeps a list of all the Models it is rendering. The Scene is also updated from the game loop. Each update, the Scene looks at each Model; any Model without a View is given one (a new View is created based on the data in the Model). The Scene then tracks the View until the Model's data says it no longer needs it.
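A compressed sketch of that arrangement (the field names and the integer viewID standing in for a real view are invented purely for illustration):

// Model: plain data, no logic.
final class Model {
    var x: Double = 0
    var wantsView = false
    var viewID: Int?          // set by the Scene once a View has been created
}

// Brain: the controller that mutates its Model every tick.
final class Brain {
    let model: Model
    init(model: Model) { self.model = model }
    func update(dt: Double) {
        model.x += 10 * dt    // whatever logic this object needs
        model.wantsView = true
    }
}

// Scene: gives any Model that wants one a View, and tracks it until the
// Model's data says it no longer needs it.
final class Scene {
    private var models: [Model] = []
    private var nextViewID = 0
    func add(_ model: Model) { models.append(model) }
    func update() {
        for model in models where model.wantsView && model.viewID == nil {
            model.viewID = nextViewID       // "create a view" from the model's data
            nextViewID += 1
        }
        models.removeAll { !$0.wantsView }  // drop views that are no longer needed
    }
}

// Game loop order, as described below: Brains first, then the Scene.
let model = Model()
let brains = [Brain(model: model)]
let scene = Scene()
scene.add(model)
brains.forEach { $0.update(dt: 1.0 / 60.0) }
scene.update()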
When I have been working on the iPhone I like to break the game loop out onto its own thread. Those folks over at O'Reilly are pretty smart though so take what I've got to say with a grain of salt.
[NSThread detachNewThreadSelector:@selector(GameLoop:) toTarget:self withObject:nil];
The game loop itself then first updates the Brains (or "controller list"), then the Scene (or "view list").
The final piece that ties it all together is the input. For iPhone I use a full-screen view. In the view's touchesBegan and touchesEnded I generate Events which I pass off to the InputManager. The InputManager then sends events to different Models as needed.
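Roughly, that input layer looks like the following (the InputEvent type and the InputManager body are placeholders for whatever the real implementation does with the events):

import UIKit

// Placeholder event and manager types; the real InputManager routes events
// to whichever Models/Brains care about them.
enum InputEvent {
    case touchDown(CGPoint)
    case touchUp(CGPoint)
}

final class InputManager {
    func send(_ event: InputEvent) {
        // Dispatch to interested models/brains here.
    }
}

// Full-screen view that turns raw touches into game events.
final class GameInputView: UIView {
    let inputManager = InputManager()

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        inputManager.send(.touchDown(touch.location(in: self)))
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        inputManager.send(.touchUp(touch.location(in: self)))
    }
}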
Do you not consider that game state machine to be a kind of data model? I don't have the O'Reilly book you mention, but the description you give sounds to me very much like MVC.
The main point of MVC is to separate an application's content from the way that content is represented on the screen. The "model" in MVC doesn't have to consist of dumb data objects that you read from a file or a web server... it could just as easily be a simulation, a connection to another device, etc. The way I think of it is that the model is the part you'd keep if you were going to throw out the app's GUI and replace it with a script, a command-line interface, or maybe a web service. A game state machine could certainly fit that description.
It's not uncommon in an iOS app to have the application delegate instantiate the model. You then have view controllers that know how to talk to the model and translate the data that it provides into something that can be displayed in the view(s). If some of the data that the model provides are graphic elements like textures or meshes, that's okay... those are the data that the game operates on, after all.
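As a concrete (and purely illustrative) sketch of that arrangement, with GameStateMachine standing in for whatever your model or simulation actually is:

import UIKit

// The "model": a game state machine / simulation with no UI knowledge at all.
final class GameStateMachine {
    enum State { case menu, playing, gameOver }
    private(set) var state: State = .menu
    func startGame() { state = .playing }
}

final class GameViewController: UIViewController {
    var game: GameStateMachine?   // translates model state into what the views display
}

@main
final class AppDelegate: UIResponder, UIApplicationDelegate {
    // The application delegate instantiates and owns the model...
    let game = GameStateMachine()
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        // ...and hands it to the view controller that knows how to present it.
        let root = GameViewController()
        root.game = game
        window = UIWindow(frame: UIScreen.main.bounds)
        window?.rootViewController = root
        window?.makeKeyAndVisible()
        return true
    }
}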