I'm trying to figure out how to load models and textures at run-time in Unity3D in the most efficient way. So far I've read about glTF formats and the Draco API, and currently I've implemented a simple procedure which works really slowly. I'm using BestHTTP, and for textures I'm doing this:
var texFoundPath = Directory.GetFiles(outFolder, texFileName, SearchOption.AllDirectories).Single();
Debug.Log("Loading texture from " + texFoundPath);
var texBytes = File.ReadAllBytes(texFoundPath);
Texture2D tex = new Texture2D(2, 2); // size is a placeholder; LoadImage resizes it
tex.LoadImage(texBytes);
And I'm using ObjImporter for importing the objects:
var gameObject = ObjImporter.Import(objStr, mtlStr, textureHashtable);
At run-time, models that are 3-5 MB take 3-5 seconds to load, which is very slow and not suitable for anything. Bundling them locally instead would result in a 200-300 MB APK if I have around 100 models.
So currently I'm looking for a way to do this efficiently, and would love your help. I think it would be best to load the models with the Draco API, but I'm not sure how to create the plugin that will communicate with that API.
1 MB per second isn't a bad download speed. Without prediction, really the only options are to lower the resolution of your textures to make them smaller, or to compress your texture files into "chunks" and download each chunk as needed, decompress it, and apply it to your models, but that would probably still take a good amount of time.
I think what you really want is some prediction logic that loads "bundles" of assets before they are actually needed and stores them in memory for later use.
For instance, if the player is 200 meters away from a house that needs to be downloaded before viewing, and the player's view distance is 100 meters, then start downloading the house. By the time the house is within viewing distance of the player, it will be downloaded and ready to use.
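A minimal sketch of that idea in Unity C#, using UnityWebRequestAssetBundle as a stand-in for whatever download path you actually use (the distances, bundleUrl and the "House" asset name are placeholders, not from the question):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Starts downloading a bundle once the player gets within prefetchDistance,
// so it is cached in memory before the target enters the actual view distance.
public class BundlePrefetcher : MonoBehaviour
{
    public Transform player;
    public Transform target;              // e.g. the house from the example
    public string bundleUrl;              // placeholder URL for the asset bundle
    public float prefetchDistance = 200f; // start downloading here
    public float viewDistance = 100f;     // bundle must be ready by here

    private AssetBundle cachedBundle;
    private bool downloadStarted;

    void Update()
    {
        float distance = Vector3.Distance(player.position, target.position);

        if (!downloadStarted && distance <= prefetchDistance)
        {
            downloadStarted = true;
            StartCoroutine(DownloadBundle(bundleUrl));
        }

        if (cachedBundle != null && distance <= viewDistance)
        {
            // The bundle is already in memory, so there is no download hitch here.
            var prefab = cachedBundle.LoadAsset<GameObject>("House");
            Instantiate(prefab, target.position, Quaternion.identity);
            enabled = false;
        }
    }

    private IEnumerator DownloadBundle(string url)
    {
        var request = UnityWebRequestAssetBundle.GetAssetBundle(url);
        yield return request.SendWebRequest();
        cachedBundle = DownloadHandlerAssetBundle.GetContent(request);
    }
}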
I am loading a 300 MB 3D model with extension .dae, converted into .scn, with 4.4 million vertices and 1.5 million polygons. It is a 3D model of a building, created in 3ds Max by an artist, and I load it like so:
let sceneToLoad = SCNScene(named: "art.scnassets/building1.scn")!
(It is loaded into a default SCNView viewer in the app so that the user can view it, rotate it, etc., via SCNView.allowsCameraControl = true.)
The app crashes immediately when it reaches that line, with only the runtime message "unexpectedly found nil while unwrapping an Optional value".
Memory usage does not go up at all when it reaches that line, suggesting it refuses to read the file and crashes instead. The 3D model loads perfectly and can be viewed and rotated in the Xcode SceneKit editor. When I point the code to a smaller 3D model it works fine, and even when I remove the model's SCNNode inside the same "building1.scn" file and replace it with a smaller SCNNode of some other random object, it also loads fine.
I have not found anything similar on SO: in other similar answers iOS at least tries to load the model even if it's huge, but in none of them does it crash immediately on a nil value.
I have tried all the workarounds I could think of: removing the file and adding it again, loading it as .dae in its original form, loading the scene without force-unwrapping and unwrapping later when searching for a node. Nothing works; it always crashes in the same way. The same thing happens when I try to load it in an ARKit scene: it crashes at the line that simply loads the file.
Has anyone come across this, or does anyone know of a workaround?
Many thanks
When you load a 3D model with 1.5M polygons into SceneKit/ARKit, RealityKit, or AR Quick Look, it will always fail. A reasonable polygon count per 3D model should be no greater than 10K (with a UV texture at a maximum resolution of 2K x 2K, or a regular texture at 1K x 1K), and the maximum polygon count per 3D scene should be no greater than 100K. You have exceeded this "unspoken" AR limit by a factor of 15.
Game engines and AR frameworks like SceneKit, RealityKit and AR Quick Look are incapable of rendering such a huge number of polygons at a 60 fps frame rate on an iOS device (even most desktop computers fail to do this). The best solution for ARKit/RealityKit applications is to use optimized low-poly models. The preferred format for working with AR on mobile platforms is Pixar's USDZ. A USDZ file is an uncompressed, unencrypted zip archive of a USD file.
Look at this low-poly model from TurboSquid. It has just 5K polygons, and it looks fine, doesn't it?
P.S.
You can convert obj, fbx or abc into usdz using command line tools. Read about it HERE.
I have a large 3D model in .obj format with a large 20000x20000 texture (it was generated by photogrammetry). I need to keep all the texture detail but want to use it in the Unity game engine. Unity only supports textures up to 8192x8192, so I think I need to break the model into smaller pieces and generate smaller textures (below 8192) from the original file.
How can I do this so I get separate textures for each piece, i.e. so that each piece doesn't use the same large texture? I have access to 3ds Max.
This is not the website to get those kinds of answers. Autodesk has its own websites for asking these types of questions.
I want to create a hologram, captured with a Kinect, that is displayed on a HoloLens. But it's very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that,
The process works, but it is very slow. I want the hologram to be fluid. A lot of time is wasted on:
taking depth and RGB data from the Kinect
exporting the data to .obj, .png and .mtl files
downloading the files in Unity as frequently as possible (a rough sketch of this step follows the list)
rendering the files
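For the download step, this is roughly what fetching the three files with Unity's legacy WWW class can look like; the URL is a placeholder, and the hand-off to Runtime OBJ Importer is left out because its exact API varies between versions:

using System.Collections;
using UnityEngine;

// Downloads the .obj, .mtl and .png produced by the Processing sketch.
// The server path below is a placeholder for the local XAMPP URL.
public class FrameDownloader : MonoBehaviour
{
    public string baseUrl = "http://localhost/frames/latest"; // placeholder

    IEnumerator Start()
    {
        WWW objRequest = new WWW(baseUrl + ".obj");
        WWW mtlRequest = new WWW(baseUrl + ".mtl");
        WWW texRequest = new WWW(baseUrl + ".png");

        // Wait for all three downloads to finish.
        yield return objRequest;
        yield return mtlRequest;
        yield return texRequest;

        string objText = objRequest.text;
        string mtlText = mtlRequest.text;
        Texture2D texture = texRequest.texture;

        // Hand the strings and texture to Runtime OBJ Importer here
        // (the loading call is omitted, as its API depends on the version used).
        Debug.Log("Downloaded frame: " + objText.Length + " chars of OBJ data");
    }
}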
I am thinking of streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have some pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts are using Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud to a format like .ply.
You can find more info on writing to a .ply file in Processing in this answer.
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 x 480 = 307,200 points) x 3)
if you stream both the depth (11-bit 640x480) and RGB (8-bit 640x480) images, that will be even more data.
One option might be to only send the vertices that have a valid depth, and to skip points overall (e.g. send every 3rd point). In terms of sending the data, you can try OSC.
Once you get the points into Unity, you should be able to render them as a point cloud.
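One common way to do that is to build a mesh with point topology from the received vertices. A minimal sketch, assuming the Vector3[] comes from whatever OSC receiver you end up using:

using UnityEngine;

// Renders an array of points as a single mesh using point topology.
// Attach to a GameObject with a MeshFilter and a MeshRenderer
// (the material should use a shader that draws points, e.g. a simple unlit one).
[RequireComponent(typeof(MeshFilter))]
public class PointCloudRenderer : MonoBehaviour
{
    private Mesh mesh;

    void Awake()
    {
        mesh = new Mesh();
        // Allows more than 65k vertices (e.g. 307,200 Kinect points).
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        GetComponent<MeshFilter>().mesh = mesh;
    }

    // Call this whenever a new frame of points arrives (e.g. from OSC).
    public void UpdatePoints(Vector3[] points)
    {
        int[] indices = new int[points.Length];
        for (int i = 0; i < points.Length; i++)
            indices[i] = i;

        mesh.Clear();
        mesh.vertices = points;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
    }
}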
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one so far, but from a quick search I see there are options like this one (very dated).
You'll need to do a bit of research to see what Kinect v1 depth-streaming libraries are already out there, then test and see which works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.
How can I scale up the size of my world/level to include more gameobjects without causing lag for the player?
I am creating an asset for the asset store. It is a random procedural world generator. There is only one major problem: world size.
I can't figure out how to scale up the worlds to have more objects/tiles.
I have generated worlds up to 2000x500 tiles, but it lags very badly.
The maximum sized world that will not affect the speed of the game is around 500x200 tiles.
I have generated worlds of the same size with smaller blocks: 1/4th the size (it doesn't affect how many tiles you can spawn)
I would like to create a world at least the size of 4200x1200 blocks without lag spikes.
I have looked at object pooling (it doesn't seem like it can help me that much)
I have looked at LoadLevelAsync (I don't really know how to use this, and rumor is that you need Unity Pro, which I do not have)
I have tried setting chunks active or inactive based on player position (this caused more lag than just leaving the blocks alone)
Additional Information:
The terrain is split up into chunks. It is 2D, and I have box colliders on all solid tiles/blocks. Players can dig and place blocks. I am not worried about the amount of time it takes for the level to load initially, but rather about the smoothness of the game while playing it: no lag spikes during play.
question on Unity Forums
If you're storing each tile as an individual GameObject, don't. Use a texture atlas and "tile data" to generate the look of each chunk whenever it is dug into or a tile is placed on it.
Also make sure to disable, or even delete, any chunks not within the visible range of the player. Object pooling will help significantly here if you can work out the maximum number of chunks that will ever be needed at once, and just recycle chunks as they go off screen.
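A rough sketch of that chunk-recycling idea, assuming each chunk is spawned from a prefab that carries the mesh components (the names and pool size are made up for the example):

using System.Collections.Generic;
using UnityEngine;

// Keeps a fixed pool of chunk GameObjects and recycles them as the
// player moves, instead of constantly instantiating and destroying chunks.
public class ChunkPool : MonoBehaviour
{
    public GameObject chunkPrefab;   // prefab with mesh filter/renderer/collider
    public int poolSize = 32;        // maximum chunks ever needed at once

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        for (int i = 0; i < poolSize; i++)
        {
            GameObject chunk = Instantiate(chunkPrefab, transform);
            chunk.SetActive(false);
            pool.Enqueue(chunk);
        }
    }

    // Take a chunk out of the pool when it scrolls into range.
    public GameObject Acquire(Vector3 position)
    {
        GameObject chunk = pool.Dequeue();
        chunk.transform.position = position;
        chunk.SetActive(true);
        return chunk;
    }

    // Put a chunk back when it scrolls out of range.
    public void Release(GameObject chunk)
    {
        chunk.SetActive(false);
        pool.Enqueue(chunk);
    }
}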
DETAILS:
There is a lot to talk about for optimal generation, so I'm going to post this link (http://studentgamedev.blogspot.co.uk/2013/08/unity-voxel-tutorial-part-1-generating.html). It shows you how to do it in 3D space, but the principles are essentially the same, if not a little easier, in 2D space. The following is just a rough outline of what might be involved; going down this path will result in huge benefits, but will require a lot of work to get there. I've included all the benefits at the bottom of the answer.
Each tile can be a simple struct with fields like int id, Vector2 texturePos and bool visible in its simplest form. You can then store these tiles in a two-dimensional array within each chunk; to make them even more memory efficient, you could store each texturePos once elsewhere in the program and write a method to look up a texturePos by id.
When you make a change to this two-dimensional array, representing either the addition or removal of a tile, you update the chunk, which is the actual GameObject used to represent the tiles. By iterating over the tile data stored in the chunk, you can generate a mesh of vertices based on the position of each tile in the array. If visible is false, simply don't generate any vertices for it.
This mesh alone could be used as a collider, but it won't look like anything on its own. You will also need to generate UV coordinates, which happen to be the texturePos values. When Unity then displays the mesh, it will display the specific regions of the texture atlas defined by the UV coordinates of the mesh.
This has the benefit of significantly fewer GameObjects, better texture batching for Unity, less memory usage, faster random access for any tile since there is no MonoBehaviour overhead, and a genuine plethora of additional benefits.
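As a concrete illustration of the tile struct and per-chunk mesh generation described above, here is a condensed sketch; the field names, chunk size and atlas layout are made up for the example, and the linked tutorial goes into far more detail:

using System.Collections.Generic;
using UnityEngine;

// Minimal tile data: an id, a position in the texture atlas, and visibility.
public struct Tile
{
    public int id;
    public Vector2 texturePos; // cell coordinates of this tile in the atlas, e.g. (1, 2)
    public bool visible;
}

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class Chunk : MonoBehaviour
{
    public const int Size = 16;        // tiles per chunk side
    public const float TileUV = 0.25f; // atlas is assumed to be 4x4 tiles

    public Tile[,] tiles = new Tile[Size, Size];

    // Rebuild the chunk mesh after a tile is added or removed.
    public void RebuildMesh()
    {
        var vertices = new List<Vector3>();
        var uvs = new List<Vector2>();
        var triangles = new List<int>();

        for (int x = 0; x < Size; x++)
        {
            for (int y = 0; y < Size; y++)
            {
                Tile tile = tiles[x, y];
                if (!tile.visible)
                    continue; // no geometry for empty/hidden tiles

                int start = vertices.Count;

                // One quad per visible tile.
                vertices.Add(new Vector3(x,     y,     0));
                vertices.Add(new Vector3(x + 1, y,     0));
                vertices.Add(new Vector3(x + 1, y + 1, 0));
                vertices.Add(new Vector3(x,     y + 1, 0));

                // UVs pick the tile's cell out of the texture atlas.
                Vector2 uv = tile.texturePos * TileUV;
                uvs.Add(uv);
                uvs.Add(uv + new Vector2(TileUV, 0));
                uvs.Add(uv + new Vector2(TileUV, TileUV));
                uvs.Add(uv + new Vector2(0, TileUV));

                // Two triangles per quad, wound to face the 2D camera.
                triangles.AddRange(new[] { start, start + 2, start + 1,
                                           start, start + 3, start + 2 });
            }
        }

        Mesh mesh = new Mesh();
        mesh.SetVertices(vertices);
        mesh.SetUVs(0, uvs);
        mesh.SetTriangles(triangles, 0);
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}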
I have already asked this question on scientific computing and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft body deformation. The processes that govern the agents' movement are complex and so the entire process requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU) I do not want the visualisation to run in real time, rather the simulation should output a video file after it is finished.
(I'm not looking for awesome, AAA video-game-quality graphics; in addition, the movement code will take up enough CPU time that I don't want to slow the application down further by adding heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together. I have done this in 2D, but I don't know how this would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen.
Write some sort of telemetry data to a file, indicating each agent's position, deformation, etc. for each time interval, and after the simulation has finished use it as input to an OpenGL/Direct3D program.
This problem MUST have been solved before; there's plenty of visualisation in HPC.
In summary: how does one easily render to a video in an offline manner (very basic graphics, not Toy Story; I just need 3D blobs) without impacting performance in a big way?
My idea would be to store the different states/positions of the vertices as individual frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, which is an intermediate format for 3D scenes based on XML, so it can easily be parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, like COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya, ...).