Unreal Engine - Let player import their own 3D models/meshes into the game - unreal-engine4

Similar to what you could do in Second Life, I'd like to build an online multiplayer game with Unreal Engine where the player can import his own 3D models into the game (create his own environment) and position them the way he wants, import his own textures, etc.
The other online players would then see the newly imported 3D model the same way its owner does.
Is this doable with UE4/UE5? If not, which game engine should I be looking into?
Thank you!

This task requires two separate things: importing mesh data from a file, and displaying the mesh data.
For importing mesh data, it's possible to use assimp, the open-source, cross-platform 3D file import library. You need to compile it for the platforms you need (the library's documentation is good and describes how to do that in detail). Then you need to include the binaries/headers in the engine: there is help for that here (and elsewhere on the internet, just search "include third-party library in Unreal").
Assimp can import many file types and gives you arrays of vertex positions, UVs and normals, as well as triangle data. This data then needs to be displayed.
For displaying mesh data, you can use the Procedural Mesh Component, which is included in the engine, or a free open-source alternative with several improvements, the Runtime Mesh Component.
Those components have functions such as CreateMeshSection, which take those same arrays of vertices/UVs/etc. as input parameters.
The specifics of the implementation depend on your needs; the documentation of these libraries will help.
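To make that concrete, here is a minimal sketch of the whole path, assuming assimp has already been compiled and linked into your project and that you already have a UProceduralMeshComponent to hand. The function name, the single-mesh handling and the lack of materials/error handling are just for illustration:

```cpp
// Minimal sketch: load the first mesh in a file with assimp and feed it to a
// UProceduralMeshComponent. Assumes assimp is already compiled and linked into
// the module; the function name is illustrative, and materials, collision and
// error handling are kept to a bare minimum.
#include "ProceduralMeshComponent.h"
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

void LoadFileIntoProcMesh(const FString& FilePath, UProceduralMeshComponent* ProcMesh)
{
    Assimp::Importer Importer;
    const aiScene* Scene = Importer.ReadFile(TCHAR_TO_UTF8(*FilePath),
        aiProcess_Triangulate | aiProcess_GenSmoothNormals | aiProcess_FlipUVs);
    if (!Scene || Scene->mNumMeshes == 0 || !ProcMesh)
    {
        return;
    }

    const aiMesh* Mesh = Scene->mMeshes[0];

    TArray<FVector> Vertices;
    TArray<FVector> Normals;
    TArray<FVector2D> UVs;
    TArray<int32> Triangles;

    for (unsigned int i = 0; i < Mesh->mNumVertices; ++i)
    {
        const aiVector3D& V = Mesh->mVertices[i];
        Vertices.Add(FVector(V.x, V.y, V.z));

        const aiVector3D& N = Mesh->mNormals[i];   // present thanks to GenSmoothNormals
        Normals.Add(FVector(N.x, N.y, N.z));

        if (Mesh->HasTextureCoords(0))
        {
            const aiVector3D& UV = Mesh->mTextureCoords[0][i];
            UVs.Add(FVector2D(UV.x, UV.y));
        }
    }

    for (unsigned int f = 0; f < Mesh->mNumFaces; ++f)
    {
        const aiFace& Face = Mesh->mFaces[f];      // 3 indices after aiProcess_Triangulate
        Triangles.Add(Face.mIndices[0]);
        Triangles.Add(Face.mIndices[1]);
        Triangles.Add(Face.mIndices[2]);
    }

    ProcMesh->CreateMeshSection(0, Vertices, Triangles, Normals, UVs,
        TArray<FColor>(), TArray<FProcMeshTangent>(), /*bCreateCollision=*/true);
}
```

For the multiplayer part of your question, note that the imported arrays also have to be sent to the other clients (through replication or your own networking) so that every machine can build the same mesh section locally.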

Related

Object Placement serialisation with Unity and Hololens spatial mapping
I'm currently working on an application for Hololens 1 with Unity 2019.4 and MRTK 2.4.0.
I want to know if there is a way to import a spatial mapping into Unity (I have already made a scan of the environment) and use this spatial mapping to place an object with the Unity editor at a specific location (like on a table in the spatial mapping).
And then, when the application starts, the HoloLens recognizes the spatial environment and its place within it, and places all the objects according to the user's position.
So my question is: what modules or behaviours do I need to implement to achieve the behaviour described above?
I hope I have been clear; I don't want to use Azure Spatial Anchors or place objects while the application is running.
Thanks
As an alternative to Azure Spatial Anchors, you can achieve similar functionality by transferring WorldAnchors. You can refer to this link to learn how to use them: Local anchor transfers in Unity.
Besides, you also need to instantiate a prefab at runtime and manage the state of the GameObject after importing those anchors.

How would I use a second input device in maya to affect controls separately to the mouse?

Not sure if I'm in the right place, but I'm not having much luck finding anything out. What I want to try to do is create a plugin for Autodesk software (namely Maya) that allows a secondary input device to control things like the viewport camera. Basically the same concept as the 3Dconnexion SpaceNavigator, but using a different input device.
Any help is appreciated
The Maya API samples include an example of how to connect external devices. You can find an example in the Maya application directory in `devkit/mocap`, which includes a C++ project that uses the Maya Mocap API to output continuous rotation values based on the system clock. I've seen this used to add support for joysticks and game controllers:
http://download.autodesk.com/global/docs/maya2014/en_us/index.html?url=files/Motion_Capture_Animation_Server_.htm,topicNumber=d30e260341
You'd want to replace the clock part, of course, with something that spits out controller values you care about.
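As a rough illustration of just the "spits out controller values" part (not the Maya-specific plumbing, which the devkit sample covers), reading a game controller's axes on Linux via the kernel joystick API could look like the sketch below; the device path and the scaling are assumptions:

```cpp
// Rough illustration only: read raw axis values from a game controller via the
// Linux joystick API. In a real plugin you would feed values like these into
// the Maya mocap server loop from the devkit/mocap sample, instead of the
// system-clock-derived rotations it ships with. The device path is an assumption.
#include <fcntl.h>
#include <unistd.h>
#include <linux/joystick.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/input/js0", O_RDONLY);   // first joystick device (assumed)
    if (fd < 0) { std::perror("open joystick"); return 1; }

    js_event e;
    while (read(fd, &e, sizeof(e)) == (ssize_t)sizeof(e))
    {
        if ((e.type & ~JS_EVENT_INIT) == JS_EVENT_AXIS)
        {
            // Axis values are signed 16-bit; scale to roughly -90..90 degrees
            // before handing them to whatever channel drives the viewport camera.
            float degrees = (e.value / 32767.0f) * 90.0f;
            std::printf("axis %d -> %.2f\n", e.number, degrees);
        }
    }
    close(fd);
    return 0;
}
```

On Windows or macOS you would use the platform's own joystick/HID API instead, but the idea of turning raw axis values into rotation channels for the mocap server stays the same.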
The Maya side is handled by scripts that connect incoming "mocap" data to different scene elements. There used to be a generic UI for it, but nowadays you have to do it all in script; the Motion Capture Animation Server documentation linked above covers that side as well.
I'm not too up on the current state of the art, but some googling should show you how to attach device inputs to the scene.

Generate a real time 3D (mesh) model in Unity using Kinect

I'm currently developing an application with the initial goal of obtaining, in real time, a 3D model of the environment "seen" by a Kinect device. This information would be later on used for projection mapping but that's not an issue, for the moment.
There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and the model generation has to be in real-time (or close to it).
After a long research on this topic, I came up with several possible (?) architectures:
1) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL for this step), then into a Mesh, and then export it into Unity for further work (see the rough sketch after this list).
2) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), export it into Unity and then convert it into a Mesh.
3) Use KinectFusion, which already has the option of creating a Mesh model, and (somehow) automatically load the created Mesh model into Unity.
4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh using Unity.
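For reference, this is roughly the PCL part I have in mind for options 1 and 2, just as a sketch; filling the input cloud from the Kinect depth frame and all tuning parameters are placeholders:

```cpp
// Sketch of the PCL part of options 1/2: point cloud -> triangle mesh -> .obj.
// Filling the input cloud from a Kinect depth frame and all tuning parameters
// are placeholders and would need real values.
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d.h>
#include <pcl/surface/gp3.h>
#include <pcl/io/obj_io.h>

pcl::PolygonMesh meshFromCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    // Estimate normals (the greedy triangulation needs oriented points).
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);

    pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*cloud, *normals, *cloudWithNormals);

    // Greedy projection triangulation; radius and neighbour limits need tuning
    // for Kinect-resolution data if this is to run anywhere near real time.
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
    tree2->setInputCloud(cloudWithNormals);
    pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
    gp3.setSearchRadius(0.05);
    gp3.setMu(2.5);
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(cloudWithNormals);
    gp3.setSearchMethod(tree2);

    pcl::PolygonMesh mesh;
    gp3.reconstruct(mesh);
    pcl::io::saveOBJFile("frame.obj", mesh);   // a format Unity can import
    return mesh;
}
```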
Quite honestly, I'm kind of lost here; my main issue is that the real-time requirement, together with having to integrate several software components, makes this a tricky problem. I don't know which, if any, of these solutions are viable, and the information/tutorials on these issues aren't exactly abundant like they are, for example, for skeleton tracking.
Any sort of help would be greatly appreciated.
Regards,
Nuno
Sorry, I might not be providing a solution for real-time mesh creation within Unity, but the process discussion here was interesting enough for me to reply.
In the hard science fiction novel Memories with Maya there is a discussion of exactly such a scenario:
“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”
“Nothing new in that,” I said.
“Yeah, but look above us.”
I tilted my head up. The crude shape of the Quad came into view.
“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”
“So it's mapping video texture over live geometry? Cool,” I said.
“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”
“All you do is block out the live world with the cross polarizers?”
“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”
“The resolution has improved,” I said.
“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”
“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.
“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.”
Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.
“This is uber cool. Everyone looks so real.”
“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”
“Ahhh! Yes.”
“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...
The work that you're thinking of doing in this area, and this use of a live mesh, goes far beyond projection mapping for events, for sure!
Wishing you the best on the project, and I will be following your updates.
Some of the science behind the story is on www.dirrogate.com if the topic interests you.
Kind Regards.
I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save the file automatically and import it into Unity to generate a mesh automatically. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple sensors. Also, since Fusion is already pre-written, there is not much code you will have to write.
Here is an example of a mesh from Fusion with one camera:
I do want you to notice how many vertices there are though... This could cause performance problems later on.
Good luck!

OpenCV IOS real-time template matching

I'd like to create an app (on iPhone) which does this:
I have a template image (a logo or any object) and I'd like to find it in the camera view, put a layer on the place where it is found, and track it!
It is markerless AR with OpenCV!
I have read some docs, books and Q&As here, but sadly without much luck so far.
Actually, I'd like to create something like this or something like this.
If anyone can send me some source code or a really useful step-by-step tutorial, I'd really be happy!!!
Thank you!
Implementing this is not trivial - it involves Augmented Reality combined with template matching and 3D rendering.
A rough outline:
Use some sort of stable feature extraction to obtain features from the input video stream (e.g. see FAST in OpenCV).
Combine these features and back-project to estimate the camera parameters and pose. (See Camera Calibration for a discussion, but note that this usually requires a calibration pattern such as a checkerboard.)
Use template matching to scan the image for patches of your target image, then use the features and camera parameters to determine the pose of the object.
Apply the camera and object transforms forward and render the replacement image into the scene.
Implementing all this will require much research and hard work!
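To give a flavour of steps 1 and 3 only, using OpenCV's C++ API (the same calls you would make from Objective-C++ on iOS), a fragment might look like the sketch below; the threshold values are arbitrary, and the pose estimation and rendering steps are not shown:

```cpp
// Fragment covering only steps 1 and 3 of the outline: FAST keypoints on the
// current (grayscale) camera frame, and a normalized template-matching scan
// for the target logo. Pose estimation and rendering the overlay are not
// shown, and the thresholds are arbitrary.
#include <opencv2/opencv.hpp>
#include <vector>

void findLogo(const cv::Mat& frameGray, const cv::Mat& logoGray)
{
    // Step 1: stable features from the video frame (FAST corner detector).
    std::vector<cv::KeyPoint> keypoints;
    cv::FAST(frameGray, keypoints, /*threshold=*/20);

    // Step 3: slide the logo over the frame and take the best match score.
    cv::Mat scores;
    cv::matchTemplate(frameGray, logoGray, scores, cv::TM_CCOEFF_NORMED);
    double bestScore;
    cv::Point bestLoc;
    cv::minMaxLoc(scores, nullptr, &bestScore, nullptr, &bestLoc);

    if (bestScore > 0.8)   // crude acceptance threshold
    {
        cv::Rect match(bestLoc, logoGray.size());
        // 'match' is where the overlay would be anchored for this frame;
        // the keypoints would feed the pose estimation in steps 2 and 4.
    }
}
```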
There are a few articles on the web you might find useful:
Simple Augmented Reality for OpenCV
A minimal library for Augmented Reality
AR with NyARToolkit
You might like to investigate some of the AR libraries and frameworks available. Wikipedia has a good list:
AR Software
Notable is Qualcomm's toolkit, which is not FLOSS but appears highly capable.

OpenGL ES and real world development

I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me) and I have read around a fair bit, with lots of success at rendering basic models, some basic lighting and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
How do people get 3D models from their favorite 3D modeling tool into the iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use. Is this what people do every time? Or do the "big" tooling suites (3DS, Maya, etc...) have exporting features?
Say I have my model in a nice .h file, and all the vertices, texture coordinates, etc. are lined up; how do I make my model (say of a basic person) walk? Or, to be more general, how do you animate "part" of a model (legs only, turn the head, etc...)? Do they need to be a massive mash-up of many different tiny models, or can you pre-bake animations these days "into" models (somehow)?
Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc... are done programmatically.
I hope these are not stupid questions, and in actual fact the app that I'm trying to work out how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give, I appreciate all types of response big or small :)
Cheers,
Mark
I'll try to explain why the answer to this question will always be vague.
OpenGL ES is very low level. It's all about pushing triangles to the screen and filling pixels, and basically nothing else.
What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, and handling animations, textures, sound, maybe networking, physics, etc.
These parts are the "game engine".
Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source and in-house-built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best...
Workflow and tools used vary a lot from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-)
For the hobbyist and indie dev, there are several cheap and open-source engines you can use, of differing maturity and amounts of documentation/support. Same there: you have to look around until you find one you like.
On top of the game engine, you write your game code, which uses the engine (and any other libraries you might need) to create whatever game it is you want to make.
Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones. (DirectX, for example, comes with a .x file format supported right away.) This is because OpenGL acts at a somewhat lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format, though, if you only need geometry; a sketch of that idea follows. Things get more complex when you also want to take animation and shading into account. Take a look at Collada for that sort of thing.
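To make the "roll your own format for geometry" point concrete, here is a sketch that reads a tiny OBJ-like subset (positions and triangle-only faces) into a flat array ready for glVertexPointer/glDrawArrays. Real OBJ files (v/vt/vn face syntax, quads, negative indices) need more handling than this:

```cpp
// Sketch of "roll your own geometry format": parse a tiny OBJ-like subset
// ("v x y z" and triangle-only "f i j k" lines, 1-based indices) into a flat
// float array that can be handed straight to glVertexPointer/glDrawArrays.
// Real OBJ files (v/vt/vn face syntax, quads, negative indices) need more work.
#include <cstdio>
#include <vector>

struct SimpleMesh {
    std::vector<float> positions;   // x,y,z per triangle corner, already de-indexed
};

bool loadSimpleObj(const char* path, SimpleMesh& out)
{
    std::FILE* f = std::fopen(path, "r");
    if (!f) return false;

    std::vector<float> verts;       // raw "v" lines
    char line[256];
    while (std::fgets(line, sizeof(line), f))
    {
        float x, y, z;
        unsigned a, b, c;
        if (std::sscanf(line, "v %f %f %f", &x, &y, &z) == 3)
        {
            verts.push_back(x); verts.push_back(y); verts.push_back(z);
        }
        else if (std::sscanf(line, "f %u %u %u", &a, &b, &c) == 3)
        {
            const unsigned idx[3] = { a - 1, b - 1, c - 1 };   // OBJ is 1-based
            for (int v = 0; v < 3; ++v)
                for (int k = 0; k < 3; ++k)
                    out.positions.push_back(verts[idx[v] * 3 + k]);
        }
    }
    std::fclose(f);
    return !out.positions.empty();
}

// Drawing with OpenGL ES 1.x then looks roughly like:
//   glEnableClientState(GL_VERTEX_ARRAY);
//   glVertexPointer(3, GL_FLOAT, 0, mesh.positions.data());
//   glDrawArrays(GL_TRIANGLES, 0, mesh.positions.size() / 3);
```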
Again, animation can be done in several ways. Characters are often animated with skeletal animation. Have a look at the cal3d library as a starting point for this; the sketch below shows the basic idea behind skinning.
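cal3d's own API is not shown here; this is just the core idea behind skeletal animation (linear blend skinning), with simplified matrix/vector types:

```cpp
// Core idea behind skeletal animation (linear blend skinning), independent of
// any particular library: each vertex is moved by a weighted blend of the
// matrices of the bones that influence it. Matrix/vector types are simplified.
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };                       // column-major 4x4

Vec3 transformPoint(const Mat4& M, const Vec3& p)   // assumes w = 1
{
    return { M.m[0]*p.x + M.m[4]*p.y + M.m[8]*p.z  + M.m[12],
             M.m[1]*p.x + M.m[5]*p.y + M.m[9]*p.z  + M.m[13],
             M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14] };
}

struct Influence { int bone; float weight; };       // weights per vertex sum to 1

// skinMatrices[b] = currentBonePose[b] * inverseBindPose[b], updated every
// frame by the animation system; the mesh itself stays in bind pose.
Vec3 skinVertex(const Vec3& bindPosePosition,
                const std::vector<Influence>& influences,
                const std::vector<Mat4>& skinMatrices)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (const Influence& inf : influences)
    {
        const Vec3 t = transformPoint(skinMatrices[inf.bone], bindPosePosition);
        result.x += inf.weight * t.x;
        result.y += inf.weight * t.y;
        result.z += inf.weight * t.z;
    }
    return result;
}
```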
You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygon counts and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
Write or use a model loading library. Or use an existing graphics library; this will have routines to load models/textures already.
Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices etc. for you.
No, artists create art and programmers create engines.
This is a link to my favourite graphics engine.
Hope that helps