How does the level system work in Unreal Engine 4?

In a story-driven game, how can I manage levels? For example, the player goes to a specific location, a cutscene plays, and then the next level starts. How can I place levels more efficiently?

You can use level streaming.
When you want a level to load, call LoadStreamLevel; to unload it, call UnloadStreamLevel.
Level Streaming Guide
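For illustration, here is a minimal C++ sketch of the same idea; the Blueprint nodes named above map to matching UGameplayStatics functions. The level names and the OnNextLevelLoaded callback are hypothetical, not from the answer above, and both sublevels are assumed to already be listed in the persistent level's Levels window:

```cpp
// A minimal sketch, not from the original answer: the Blueprint nodes above
// map to matching UGameplayStatics functions in C++. "Level1", "Level2", and
// the OnNextLevelLoaded callback are hypothetical names.
#include "Kismet/GameplayStatics.h"

void StartNextLevel(UObject* WorldContext)
{
    FLatentActionInfo LoadInfo;
    LoadInfo.CallbackTarget = WorldContext;
    LoadInfo.ExecutionFunction = TEXT("OnNextLevelLoaded"); // a UFUNCTION on the caller
    LoadInfo.UUID = 1;      // any value unique among in-flight latent actions
    LoadInfo.Linkage = 0;

    // Stream the next level in; the callback fires once it is loaded and visible.
    UGameplayStatics::LoadStreamLevel(WorldContext, TEXT("Level2"),
        /*bMakeVisibleAfterLoad=*/true, /*bShouldBlockOnLoad=*/false, LoadInfo);

    // Stream the previous level out once the player has moved on.
    FLatentActionInfo UnloadInfo;
    UnloadInfo.UUID = 2;
    UGameplayStatics::UnloadStreamLevel(WorldContext, TEXT("Level1"),
        UnloadInfo, /*bShouldBlockOnUnload=*/false);
}
```

The latent-action callback is what lets you start the next cutscene or move the player only once the streamed level is actually in memory.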

Related

What is the proper workflow for creating city landscapes for use with RealityKit

I'm reading up on RealityKit and trying to create a city landscape.
I watched this video from Apple about Reality Composer and downloaded the associated project:
https://developer.apple.com/videos/play/wwdc2019/605
My initial goal is to create a city street with tall buildings and a user-controlled character that can walk around the streets and perform tasks.
I've played with Reality Composer, but it doesn't seem like the tool for creating complex landscapes or characters for this use case (I could be wrong); it seems more like a prototyping tool for quick proofs of concept.
I'm assuming there are tools, such as Sketch, for creating and opening USDZ files (I tried googling and searching, but nothing substantial came up).
What is the appropriate workflow for this type of app (game) development?
I would recommend one of two options:
A. Programmatically add and control models within the ARView. This will require a decent knowledge of Swift, a lot of looking around for examples, and reading the RealityKit docs.
B. Switch over to Unity. Unity would be a lot easier to work with and is designed for games (which is what it sounds like you want to build). As a bonus, your game/app will be cross-platform.

How do Unity WorldAnchors work on the HoloLens?

I'm currently building a HoloLens application and have a feature in-mind that requires holograms to be dynamically created, placed, and to persist between sessions. Those holograms don't need to be shared between devices.
I've had a nightmare trying to find (working) implementations and documentation for Unity WorldAnchors, with Azure Spatial Anchors seeming to stomp out most traces of them. Thankfully I've gotten past that and have managed to implement WorldAnchors using the older HoloToolkit, since documentation for WorldAnchors in the newer MRTK seems to have disappeared as well.
MY QUESTION (because I am unable to find any docs for it) is how do WorldAnchors work?
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
Does anybody know the answer or where I might look (beyond the limited Unity and MS Docs for this matter) to find out implementation details?
Thank you.
I'd hazard a guess that it's based on spatial mapping, which presents the limitation that if you have 2 identical rooms or objects that move in the original room, the anchor/s is/are going to be lost.
We won't divulge the internal implementation details of the World Anchor, but we can state that it is not currently based on GPS on either HoloLens v1 or HoloLens v2. Currently, the World Anchor uses the data in the spatial map for placement. The key underlying piece is that anchors rely on spatial scanning, and the scanning can use Wi-Fi to improve its speed and accuracy; see these two references: 1 & 2
What I'd LIKE to hear is that it's some magical management of transforms, which means my app has an understanding of its change in real-world location between uses even if the app is launched from a different location each time.
It is certainly possible for two identical rooms with the exact same layout to trick the mapping into thinking it is the same room. We document that here:
https://learn.microsoft.com/en-us/windows/mixed-reality/coordinate-systems#headset-tracks-incorrectly-due-to-identical-spaces-in-an-environment

How would I use a second input device in Maya to affect controls separately from the mouse?

Not sure if I'm in the right place, but I'm not having much luck finding anything out. What I want to try to do is create a plugin for Autodesk software (namely Maya) that allows a secondary input device to control things like the viewport camera. Basically the same concept as the 3Dconnexion SpaceNavigator, but using a different input device.
Any help is appreciated
The Maya API samples include an example of how to connect external devices. You can find it in the Maya application directory under `devkit/mocap`, which includes a C++ project that uses the Maya mocap API to output continuous rotation values based on the system clock. I've seen this used to add support for joysticks and game controllers:
http://download.autodesk.com/global/docs/maya2014/en_us/index.html?url=files/Motion_Capture_Animation_Server_.htm,topicNumber=d30e260341
You'd want to replace the clock part, of course, with something that spits out controller values you care about.
The Maya side is handled by scripts that connect the incoming "mocap" data to different scene elements. There used to be a generic UI for this, but nowadays you have to do it all in script:
http://download.autodesk.com/global/docs/maya2014/en_us/index.html?url=files/Motion_Capture_Animation_Server_.htm,topicNumber=d30e260341
I'm not too up on the current state of the art, but some googling should show you how to attach device inputs to the scene.
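To make the "replace the clock part" step concrete, here is a sketch of the device-polling half only, written against the Linux joystick interface (`<linux/joystick.h>`). This is not the Maya mocap API: `sendToMaya()` is a hypothetical stand-in for the serving code in the `devkit/mocap` sample, which in the original example computes its rotation values from the system clock instead.

```cpp
// Device-polling sketch using the Linux joystick API (<linux/joystick.h>).
// This is NOT the Maya mocap API: sendToMaya() is a hypothetical stand-in
// for the serving code in the devkit/mocap sample.
#include <fcntl.h>
#include <unistd.h>
#include <linux/joystick.h>
#include <cstdio>

// Hypothetical: forward one normalized axis value to wherever the devkit
// sample writes its per-channel output for Maya.
static void sendToMaya(int axis, float value)
{
    std::printf("axis %d -> %+.3f\n", axis, value);
}

int main()
{
    int fd = open("/dev/input/js0", O_RDONLY); // first joystick/game controller
    if (fd < 0) { std::perror("open /dev/input/js0"); return 1; }

    js_event e;
    while (read(fd, &e, sizeof(e)) == sizeof(e)) {
        if ((e.type & ~JS_EVENT_INIT) == JS_EVENT_AXIS) {
            // Raw axis values are signed 16-bit; normalize to [-1, 1].
            sendToMaya(e.number, e.value / 32767.0f);
        }
    }
    close(fd);
    return 0;
}
```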

Best practices for implementing levels in cocos2d games

I'm making a simple cocos2d adventure game, but have no clue how to implement any sort of levels. I've searched for tutorials, but can't find any.
Is there anything I can use to figure out levels in cocos2D?
Thanks
There are so many ways to implement levels in a cocos2d game. I think a straightforward way is to:
Model your levels first. Decide what should be stored in a level's data model. Typically you will have at least two kinds of data (see the sketch after this list):
Player data (generated at run time, e.g. score, the character's current location, etc.)
Level data (e.g. what's on the screen in this level, the rule to pass the level, etc.). This data could be either fixed or dynamic: if the levels are designed by the developer, as in Angry Birds, you can store this data in external configuration files and load them on demand; if the levels are generated dynamically according to some rules, then the rules themselves should be stored in the data model.
Design a general game-play layer that can be initialized from an instance of the data model above. The layer class controls the presentation of the level and is responsible for handling user input.
If your levels share some global data, make another shared data model to manage it (e.g. total score, achievements, player's name, etc.). Create a shared instance of this class and manage its data via your game-play layer.
You could also consider a more advanced approach, such as using scripts (e.g. Lua) to implement the levels.
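As a rough, engine-agnostic sketch of the data split described in the list above (written in C++ in the style of cocos2d-x; all names are illustrative and none of this is cocos2d API):

```cpp
// Engine-agnostic sketch of the data split described in the list above.
// All names are illustrative; none of this is cocos2d API.
#include <string>
#include <vector>

// Level data: fixed per level, loadable from an external configuration file.
struct LevelData {
    int levelId = 0;
    std::vector<std::string> entities; // what's on the screen in this level
    int scoreToPass = 0;               // the rule to pass this level
};

// Player data: generated at run time while the level is being played.
struct PlayerData {
    int score = 0;
    float x = 0.0f, y = 0.0f;          // character's current location
};

// Shared data that outlives any single level.
struct GameState {
    int totalScore = 0;
    std::string playerName;
    static GameState& shared() { static GameState s; return s; }
};

// The general game-play layer, initialized from a LevelData instance. In
// cocos2d proper this would subclass a layer class and handle touch input.
class GameplayLayer {
public:
    explicit GameplayLayer(const LevelData& data) : level(data) {}
    void onLevelComplete() { GameState::shared().totalScore += player.score; }
private:
    LevelData  level;  // drives what the layer presents
    PlayerData player; // mutated as the player acts
};
```

The payoff of the split is that one GameplayLayer class can present any level your configuration files describe, rather than needing one hard-coded scene class per level.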
You mentioned not being able to find any tutorials. I agree that finding free online tutorials for cocos2d can be challenging; I ran into the same problem when I started learning it. I recommend grabbing a book on cocos2d, such as Learning Cocos2D. There is so much to the API that you will have a very hard time creating even a rudimentary game without any tutorials or guidance, unless you have a lot of prior programming experience.

Performance difference between OpenGL and UIKit for a map-based application

I am currently working on a navigation-based app that uses third-party maps, and I use a lot of tiling. Though the maps are very high resolution, I estimate an average of six tiles, each 256 × 256 pixels, loaded at a time. I might refresh the tiles about once every five minutes. I am currently using a UIScrollView + tiles...
Should I really switch to OpenGL? I am hesitant to use OpenGL because all the zoom functions and scrolling would have to be hand-coded.
Could someone suggest what performance difference I would see?
Thanks
I don't have any Apple development experience, but I think this falls under the more general practice of avoiding premature optimization. I'd say go ahead and implement it as simply and naturally as possible. If you (or the app store overlords) find that it has performance issues, then it might be worth the extra investment of time.
Be sure to structure your code with modularity in mind. Develop your own simple interface that separates displaying the GUI from the core program logic. Then, once you decide to switch, you should be able to develop an OpenGL replacement that drops in behind the same interface.
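For scale, a back-of-envelope check supports starting simple: assuming 32-bit RGBA, six 256 × 256 tiles come to about 6 × 256 × 256 × 4 B ≈ 1.5 MB, refreshed only every few minutes, which is a light load for UIKit. Here is a minimal C++ sketch of the drop-in interface idea; all names are illustrative:

```cpp
// Sketch of the "simple interface" idea; all names are illustrative. The
// core navigation logic depends only on TileView, so a UIKit-backed
// implementation can later be swapped for an OpenGL one without touching it.
#include <cstdio>
#include <memory>

struct Tile { int x, y, zoom; };

// The interface the core program logic sees.
class TileView {
public:
    virtual ~TileView() = default;
    virtual void showTile(const Tile& t) = 0;
    virtual void setViewport(double centerX, double centerY, int zoom) = 0;
};

// Today's implementation: would wrap UIScrollView + tile layers.
class ScrollViewTileView : public TileView {
public:
    void showTile(const Tile& t) override { std::printf("UIKit tile %d,%d\n", t.x, t.y); }
    void setViewport(double, double, int) override { /* scroll/zoom the UIScrollView */ }
};

// Tomorrow's drop-in replacement: would render the same tiles with OpenGL.
class GLTileView : public TileView {
public:
    void showTile(const Tile& t) override { std::printf("GL tile %d,%d\n", t.x, t.y); }
    void setViewport(double, double, int) override { /* update the GL camera */ }
};

// Core logic: knows nothing about which backend is in use.
void refreshVisibleTiles(TileView& view)
{
    for (int i = 0; i < 6; ++i)            // ~6 tiles visible on average
        view.showTile({i % 3, i / 3, 15});
}

int main()
{
    std::unique_ptr<TileView> view = std::make_unique<ScrollViewTileView>();
    refreshVisibleTiles(*view);            // swap in GLTileView here if profiling demands it
}
```

Because the core logic only sees the interface, swapping the UIScrollView-backed view for the OpenGL one is a one-line change at the construction site.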