Is it possible for the player to make a polygon in-game in Unreal Engine 4.27? - unreal-engine4

I am creating an app to enable neighborhood design using Unreal Engine (for specific reasons). For this, I want the user to be able to delineate the polygon boundary within which they would like to build their neighborhood, but I am unable to find references on how a polygon can be created by a user in-game.
I am using Unreal Engine 4.27 with the Cesium for Unreal plugin to display the 3D world.
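One plausible direction, sketched below in UE4 C++: trace the player's clicks against the terrain and feed the collected points into a UProceduralMeshComponent. This is a minimal illustration only, not a tested solution; APolygonDrawer, PolyMesh, and BoundaryPoints are hypothetical names, and it assumes the ProceduralMeshComponent module is enabled and that the Cesium terrain blocks the Visibility channel.

    // Assumed header (sketch):
    //   UCLASS() class APolygonDrawer : public AActor {
    //     GENERATED_BODY()
    //     UPROPERTY() UProceduralMeshComponent* PolyMesh;
    //     TArray<FVector> BoundaryPoints;
    //     void AddBoundaryPoint(APlayerController* PC);
    //     void RebuildPolygon();
    //   };
    #include "GameFramework/Actor.h"
    #include "GameFramework/PlayerController.h"
    #include "ProceduralMeshComponent.h"

    void APolygonDrawer::AddBoundaryPoint(APlayerController* PC)
    {
        FHitResult Hit;
        // Trace from the cursor into the world; requires bShowMouseCursor
        // and geometry that blocks ECC_Visibility.
        if (PC && PC->GetHitResultUnderCursor(ECC_Visibility, false, Hit))
        {
            BoundaryPoints.Add(Hit.ImpactPoint);
            RebuildPolygon();
        }
    }

    void APolygonDrawer::RebuildPolygon()
    {
        if (BoundaryPoints.Num() < 3)
        {
            return;
        }

        // Convert the clicked world positions to the actor's local space.
        TArray<FVector> Vertices;
        for (const FVector& Point : BoundaryPoints)
        {
            Vertices.Add(GetTransform().InverseTransformPosition(Point));
        }

        // Naive fan triangulation: fine for convex outlines; concave
        // boundaries need proper ear clipping.
        TArray<int32> Triangles;
        for (int32 i = 1; i + 1 < Vertices.Num(); ++i)
        {
            Triangles.Add(0);
            Triangles.Add(i);
            Triangles.Add(i + 1);
        }

        TArray<FVector> Normals;
        TArray<FVector2D> UV0;
        TArray<FLinearColor> Colors;
        TArray<FProcMeshTangent> Tangents;
        PolyMesh->CreateMeshSection_LinearColor(
            0, Vertices, Triangles, Normals, UV0, Colors, Tangents, false);
    }

The same flow is reproducible in Blueprints via "Get Hit Result Under Cursor" and the Procedural Mesh component, and with Cesium for Unreal the traced points can presumably be converted to geographic coordinates through the plugin's georeference actor.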

Related

Unreal Engine 4 game using phone sensor measurements

While preparing to develop a mobile phone game, we designed a game that responds to the player's movement, using the sensors built into the smartphone as a control device. When implementing the game in Unreal Engine 4 with UE4duino, I tried to get the phone's acceleration and velocity values using the functions and Blueprints that have been created, but I don't know how.
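A note on plugins: UE4duino is aimed at serial communication with Arduino boards, so it may not be the right tool here; UE4 exposes device motion directly. A minimal C++ sketch (ReadMotion is an illustrative helper name; the same data is available in Blueprints as the "Get Input Motion State" node):

    // UE4 C++ sketch: poll the device's motion sensors from any place that
    // can reach the player controller (e.g., a Pawn's Tick).
    #include "GameFramework/PlayerController.h"

    void ReadMotion(APlayerController* PC)
    {
        if (!PC)
        {
            return;
        }
        FVector Tilt, RotationRate, Gravity, Acceleration;
        // Only populated on devices with motion hardware (iOS/Android).
        PC->GetInputMotionState(Tilt, RotationRate, Gravity, Acceleration);
        UE_LOG(LogTemp, Log, TEXT("Acceleration: %s"), *Acceleration.ToString());
    }

Velocity is not reported directly by the hardware; integrating acceleration gives a rough value, but it drifts quickly in practice.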

Spatial Mapping in Unity for HoloLens 1

Good morning,
I am trying to install the Mixed Reality Toolkit for HoloLens 1. I need to do spatial mapping in Unity, and I would like to use a "Spatial Mapping" prefab which should be available after configuring Unity with the MRTK tool. Unfortunately, I don't see the prefab. I enabled "SpatialPerception" in the Player configuration and simply imported "Microsoft Mixed Reality Toolkit Foundation" into my project from the MRTK tool. How can I access the Spatial Mapping prefab, please?
Thank you.
To use spatial mapping in the app, you should enable the Spatial Awareness system in the MixedRealityToolkit profile and register spatial observers to provide mesh data. There is no Spatial Mapping "prefab" in MRTK. Here is a step-by-step guide showing how to do that: Spatial awareness getting started.

Is there a way to perform multiple 3D object recognition in Unity using Vuforia?

I'm using the Vuforia scanner to detect and recognize a 3D object. It works well with one object, but now I want to recognize multiple objects at once. I have gone through many links, but they only speak of multiple image targets, not 3D objects. If not with Vuforia, is there any other SDK that can do this?
I messed with object recognition once but I'm pretty sure the databases are basically the "same" as 2D image target databases. That is, you can tell Vuforia to load more than one of them and they'll run simultaneously. I don't have Vuforia installed at the moment, but I know the setting is in the main script attached to the camera (you have to fiddle with it when creating your project in the first place to get it to use something other than the sample targets).
There is, however, a limit on how many different targets Vuforia will recognize at once (IIRC it's something really small, like 2 or 3). So be aware of this when planning your project.
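For reference, in the classic native C++ SDK (pre-Vuforia Engine 10) that limit was raised through a hint; in Unity the equivalent setting lives in the Vuforia configuration inspector ("Max Simultaneous Tracked Objects"). A hedged sketch against that era's native API:

    // Native Vuforia C++ sketch: allow two object targets to be tracked at
    // once. Must be called after Vuforia initialization.
    #include <Vuforia/Vuforia.h>

    void AllowTwoObjectTargets()
    {
        // The SDK enforces a small documented maximum for object targets
        // (2 at the time); requests above it are rejected.
        Vuforia::setHint(Vuforia::HINT_MAX_SIMULTANEOUS_OBJECT_TARGETS, 2);
    }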

ARCore in Unity vs Sceneform features/use cases?

The way I understand it is that there are several environments that support ARCore, and Unity and the Sceneform SDK are two of the options.
I was wondering how they differ from each other, besides one being in Java and the other in C#. Why would someone choose one over the other, aside from language preference?
Thank you
Sceneform empowers Android developers to work with ARCore without learning 3D graphics and OpenGL. It includes a high-level scene graph API, a realistic physically based renderer, an Android Studio plugin for importing, viewing, and building 3D assets, and easy integration into ARCore that makes it straightforward to build AR apps. See this video from Google I/O '18.
Whereas ARCore in Unity uses three key capabilities to integrate virtual content with the real world as seen through your phone's camera:
Motion tracking allows the phone to understand and track its position relative to the world.
Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical and angled surfaces like the ground, a coffee table or walls.
Light estimation allows the phone to estimate the environment's current lighting conditions.
ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.
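To make "light estimation" concrete, the sketch below queries it through the ARCore NDK's C API, which is the same capability that Unity's ARCore SDK and Sceneform wrap at a higher level. Session and frame setup are omitted; it assumes an already-configured ArSession and an ArFrame updated for the current camera image.

    // ARCore NDK sketch: read the pixel-intensity light estimate for the
    // current frame; the value can drive virtual lighting.
    #include <arcore_c_api.h>

    float GetPixelIntensity(const ArSession* session, const ArFrame* frame)
    {
        ArLightEstimate* estimate = nullptr;
        ArLightEstimate_create(session, &estimate);
        ArFrame_getLightEstimate(session, frame, estimate);

        ArLightEstimateState state = AR_LIGHT_ESTIMATE_STATE_NOT_VALID;
        ArLightEstimate_getState(session, estimate, &state);

        float intensity = 1.0f; // neutral default if the estimate is invalid
        if (state == AR_LIGHT_ESTIMATE_STATE_VALID)
        {
            ArLightEstimate_getPixelIntensity(session, estimate, &intensity);
        }

        ArLightEstimate_destroy(estimate);
        return intensity;
    }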

What are good ways to load Kinect input into Unity?

I'd like to design a simple tower defense game, with the twist that every input is done via the Kinect. I want to give the player the option to build a real maze and project the minions onto it via a projector ("beamer").
The input from the Kinect should mainly be range data and color data. I'm at the beginning of the project, and so far I have only found Kinect Fusion, which seems to have the functionality I need.
Can you suggest any other options that I should take a look at?
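If Kinect Fusion turns out to be heavier than needed: for plain range and color data, the Kinect for Windows SDK v2 native API may be enough, wrapped as a Unity native plugin (or used via the official Kinect for Windows Unity package). A minimal sketch for the depth stream, assuming a Kinect v2 and the official SDK; GrabDepthFrame is an illustrative name:

    // Kinect v2 native C++ sketch: grab one depth frame. Wrapped in a
    // native plugin, a buffer like this can be handed to Unity (e.g.,
    // uploaded into a Texture2D on the managed side).
    #include <Kinect.h>
    #include <vector>

    std::vector<UINT16> GrabDepthFrame()
    {
        std::vector<UINT16> depth;
        IKinectSensor* sensor = nullptr;
        if (FAILED(GetDefaultKinectSensor(&sensor)) || !sensor)
        {
            return depth;
        }
        sensor->Open();

        IDepthFrameSource* source = nullptr;
        IDepthFrameReader* reader = nullptr;
        if (SUCCEEDED(sensor->get_DepthFrameSource(&source)) && source &&
            SUCCEEDED(source->OpenReader(&reader)) && reader)
        {
            IDepthFrame* frame = nullptr;
            // Often fails until the sensor has streamed a few frames;
            // real code should poll in a loop or use frame-arrived events.
            if (SUCCEEDED(reader->AcquireLatestFrame(&frame)))
            {
                UINT capacity = 0;
                UINT16* buffer = nullptr;
                frame->AccessUnderlyingBuffer(&capacity, &buffer); // 512x424, mm
                depth.assign(buffer, buffer + capacity);
                frame->Release();
            }
            reader->Release();
        }
        if (source)
        {
            source->Release();
        }
        sensor->Close();
        sensor->Release();
        return depth;
    }

The color stream works the same way through IColorFrameSource, so both inputs you mention are covered without bringing in the full Fusion reconstruction pipeline.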