I want to create a hologram, captured with a Kinect and displayed on a HoloLens, but the process is very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the .obj doesn't accept points, so I had to draw little triangles instead. I save the .obj, .png, and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that: the process works, but it is very slow, and I want the hologram to be fluid. A lot of time is spent on:
taking depth and RGB data from the Kinect
exporting the data to .obj, .png, and .mtl files
downloading the files in Unity as frequently as possible
rendering the files
I'm thinking about streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have any pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts use Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through it and store each frame's vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud to a format like .ply.
You can find more info on writing to a .ply file in Processing in this answer.
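Since the rest of this thread ends up in Unity, here is what writing an ASCII .ply point cloud looks like as a minimal C# sketch (the header lines are the standard ASCII .ply layout; the PlyWriter name, vertex list, and output path are just placeholders):

using System.Collections.Generic;
using System.Globalization;
using System.IO;
using UnityEngine;

public static class PlyWriter
{
    // Writes a point cloud (positions only) as an ASCII .ply file.
    public static void Write(string path, List<Vector3> points)
    {
        using (var w = new StreamWriter(path))
        {
            // Standard ASCII .ply header.
            w.WriteLine("ply");
            w.WriteLine("format ascii 1.0");
            w.WriteLine("element vertex " + points.Count);
            w.WriteLine("property float x");
            w.WriteLine("property float y");
            w.WriteLine("property float z");
            w.WriteLine("end_header");

            // One "x y z" line per vertex; invariant culture so decimals use '.'.
            foreach (var p in points)
                w.WriteLine(string.Format(CultureInfo.InvariantCulture, "{0} {1} {2}", p.x, p.y, p.z));
        }
    }
}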
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 × 480 = 307,200) × 3)
if you stream both depth (11bit 640x480) and RGB (8bit 640x480) images that will be even more data.
One option might be to only send the vertices that have a valid depth, and to skip points overall (e.g. send only every 3rd point). In terms of sending the data, you can try OSC.
Once you get the points into Unity, you should be able to render them as a point cloud.
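As a starting point, here is a minimal sketch of rendering an array of points by building a Mesh with point topology (where the points come from, e.g. OSC or UDP, is up to your receiving code; here they're just a public field):

using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class PointCloudRenderer : MonoBehaviour
{
    public Vector3[] points;

    // Call after the points array has been filled/updated.
    public void Rebuild()
    {
        var mesh = new Mesh();
        // A full 640x480 cloud has 307,200 vertices, past the 16-bit index limit.
        mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        mesh.vertices = points;

        // One index per vertex, drawn as points instead of triangles.
        var indices = new int[points.Length];
        for (int i = 0; i < indices.Length; i++)
            indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        GetComponent<MeshFilter>().mesh = mesh;
    }
}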
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one thus far, but doing a quick search I see there are options like this one (very dated).
You'll need to do a bit of research to see what Kinect v1 depth streaming libraries are already out there, then test and see what works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.
I did read that Unity supports wav loop-point metadata (e.g. https://stackoverflow.com/a/53934779/525873). However, we have not found any official docs/release notes that confirm this. Loop points (set with Wavosaur in my case) still appear to be ignored. We are on Unity 2018.2.17f1.
We know there are other options to make audio clips loop, but using wav loop points would be ideal. Has anyone been able to get wav loop points to work in Unity?
Many thanks!
I might be wrong, but I don't think looping anything other than 'the whole file' is natively supported. You can, however, achieve it by filling the audio buffer manually (using MonoBehaviour.OnAudioFilterRead).
Please keep in mind that this happens on the managed side, so it might be a little bit expensive, especially if you need to do resampling.
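A minimal sketch of that approach, assuming the clip's sample rate matches the output rate (no resampling) and the clip is imported so GetData works (e.g. Decompress On Load); the loopStartSample/loopEndSample values are hypothetical, since you'd have to read the real loop points from the wav's 'smpl' chunk yourself:

using UnityEngine;

[RequireComponent(typeof(AudioSource))] // leave the AudioSource's clip empty
public class LoopPointPlayer : MonoBehaviour
{
    public AudioClip clip;
    public int loopStartSample = 44100;  // assumed: loop starts 1 s in (44.1 kHz)
    public int loopEndSample = 132300;   // assumed: loop ends 3 s in

    float[] samples;  // the whole clip, interleaved
    int channels;
    int position;     // current playback position, in frames

    void Start()
    {
        channels = clip.channels;
        samples = new float[clip.samples * channels];
        clip.GetData(samples, 0);
        // Depending on the Unity version, the source may need to be "playing"
        // (even with no clip) for the filter chain to run.
        GetComponent<AudioSource>().Play();
    }

    // Runs on the audio thread; fills the output buffer manually.
    void OnAudioFilterRead(float[] data, int outChannels)
    {
        int frames = data.Length / outChannels;
        for (int f = 0; f < frames; f++)
        {
            if (position >= loopEndSample)
                position = loopStartSample; // jump back to the loop start

            for (int c = 0; c < outChannels; c++)
            {
                // Clamp in case clip/output channel counts differ.
                int srcChannel = c < channels ? c : channels - 1;
                data[f * outChannels + c] = samples[position * channels + srcChannel];
            }
            position++;
        }
    }
}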
I have downloaded a .raw file of depth data from this website:
3D Video Download
In order to get a depth data image, I wrote a script in Unity as below:
However, this is the texture I got.
How can I get the depth data texture as below?
RAW is not a standardized format; while most of the variants are pretty easy to read (there's rarely any compression), it might not be just one call to LoadRawTextureData.
I am assuming you have tried other texture formats than PVRTC_RGBA4 and they all failed?
First off, if you have the resolution of your image and the file size, you can try to guess the format. For depth it's common to use 8-bit or 16-bit values; if you need 16-bit, you take two bytes and do
(a << 8) | b
or
a * 256 + b
But sometimes there's another operation required (e.g. for 18-bit formats).
Once you have your values, getting the texture is as easy as calling SetPixel enough times; see the sketch below.
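For example, a minimal sketch assuming a headerless 16-bit little-endian .raw file; the path and 640x480 resolution are assumptions (the file size should equal width * height * 2 bytes, which is how you verify the guess):

using System.IO;
using UnityEngine;

public class RawDepthLoader : MonoBehaviour
{
    public string path = "depth.raw"; // hypothetical location
    public int width = 640;           // assumed resolution
    public int height = 480;

    void Start()
    {
        byte[] bytes = File.ReadAllBytes(path);
        var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int i = (y * width + x) * 2;
                // Two bytes per pixel; swap the operands if the file is big-endian.
                int depth = (bytes[i + 1] << 8) | bytes[i];
                float v = depth / 65535f; // normalize to 0..1 for display
                tex.SetPixel(x, y, new Color(v, v, v));
            }
        }
        tex.Apply();

        // Assumes this GameObject has a renderer/material to show the result on.
        GetComponent<Renderer>().material.mainTexture = tex;
    }
}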
I'm trying to figure out how to load models and textures at run-time in Unity3D as efficiently as possible. So far I've read about glTF formats and the Draco API, and currently I've implemented a simple procedure that is really slow. I'm using BestHTTP, and for textures I'm doing this:
// Find the texture file anywhere under the output folder.
var texFoundPath = Directory.GetFiles(outFolder, texFileName, SearchOption.AllDirectories).Single();
Debug.Log("Loading texture from " + texFoundPath);
// Read the raw bytes and let Unity decode the image; LoadImage resizes the texture.
var texBytes = File.ReadAllBytes(texFoundPath);
Texture2D tex = new Texture2D(2, 2);
tex.LoadImage(texBytes);
And I'm using ObjImporter for importing the objects:
var gameObject = ObjImporter.Import(objStr, mtlStr, textureHashtable);
So at run-time, models that are 3-5 MB take 3-5 seconds to load, which is very slow and not suitable for anything. Bundling them locally instead would result in a 200-300 MB APK if I have around 100 models.
So currently I'm looking for a way to do this efficiently, and would love your help. I think it would be best to load the models with the Draco API, but I'm not sure how to create the plugin that will communicate with that API.
1 MB per second isn't a bad download speed. Without prediction, really your only option is to lower the resolution of your textures to make them smaller, compress your texture files into "chunks", download each "chunk" as needed, uncompress it, and apply it to the models; but that would probably still take a good amount of time.
I think what you really want is to have some prediction logic that loads "bundles" of assets before they are actually needed, and stores them in memory for later use.
For instance, if the player is 200 meters away from a house that needs to be downloaded before viewing, and the players view distance is 100 meters, then start downloading the house. By the time the house is within viewing distance of the player, it will be downloaded and ready to use.
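A minimal sketch of that idea, with the bundle URL and the distances as placeholder assumptions:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class AssetPrefetcher : MonoBehaviour
{
    public Transform player;
    public string bundleUrl = "http://example.com/house.bundle"; // hypothetical
    public float prefetchDistance = 200f; // start downloading here
    public float viewDistance = 100f;     // must be ready by here

    AssetBundle bundle;
    bool requested;

    void Update()
    {
        float d = Vector3.Distance(player.position, transform.position);
        if (!requested && d < prefetchDistance)
        {
            requested = true;
            StartCoroutine(Download());
        }
        // By the time d < viewDistance, the bundle should already be in memory.
    }

    IEnumerator Download()
    {
        using (var req = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl))
        {
            yield return req.SendWebRequest();
            if (!req.isNetworkError && !req.isHttpError)
                bundle = DownloadHandlerAssetBundle.GetContent(req);
        }
    }
}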
Mapbox provides a kind of map tile, mapbox.mapbox-terrain-v2, which is stored in PBF format and saved with an .mvt suffix. The height data is represented by contours (lines).
I want to generate terrain with a satellite texture and this height data in Unity3D. How can I convert this PBF data to a height map (one pixel per height value)?
Here is an example:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.jpg?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the mvt file
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.mvt?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the document of Mapbox:
https://www.mapbox.com/vector-tiles/mapbox-terrain/
https://www.mapbox.com/vector-tiles/specification/
Mapbox has built a Unity3D package: MapBox-Unity-SDK
SDK here: https://www.mapbox.com/unity/
Just click download.
This is an asset you can open in Unity directly.
Launch Unity3D, go to Assets > Import Package > Custom Package and select your downloaded file.
It will unpack some files and folders, in which you will find some example scene files to help you.
The current vector terrain layer isn't designed to be turned into a heightmap: we've processed terrain into elevation contours and lines, so turning those back into raw data would be difficult (much like doing the opposite: we do a lot of processing because it would also be difficult to transfer raw data and derive visual data).
A new and improved vector terrain model that supports your use case is on the way, but we've also introduced RGB-encoded terrain, which was actually designed specifically to address cases like Unity: decoding the RGB-encoded elevation tiles tends to be much simpler in software.
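For reference, decoding one of those tiles in Unity boils down to a few lines. The formula height = -10000 + (R * 256 * 256 + G * 256 + B) * 0.1 is from Mapbox's Terrain-RGB documentation; the texture being readable (Read/Write enabled, uncompressed) is an assumption:

using UnityEngine;

public static class TerrainRgbDecoder
{
    // Converts a Terrain-RGB tile into a 2D array of elevations in meters.
    public static float[,] Decode(Texture2D tile)
    {
        Color32[] pixels = tile.GetPixels32();
        var heights = new float[tile.height, tile.width];

        for (int y = 0; y < tile.height; y++)
        {
            for (int x = 0; x < tile.width; x++)
            {
                Color32 c = pixels[y * tile.width + x];
                // Elevation in meters, at 0.1 m resolution.
                heights[y, x] = -10000f + (c.r * 256f * 256f + c.g * 256f + c.b) * 0.1f;
            }
        }
        return heights;
    }
}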
I have already asked this question on scientific computing and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft-body deformation. The processes that govern the agents' movement are complex, so the whole simulation requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU) I do not want the visualisation to run in real time, rather the simulation should output a video file after it is finished.
(I'm not looking for awesome AAA video-game-quality graphics; besides, the movement code will take up enough CPU time as it is, so I don't want to slow the application down further with heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together - I have done this in 2D, but I don't know how this would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen.
Write some sort of telemetry data to a file, recording each agent's position, deformation, etc. for each time interval, and then, after the simulation has finished, use it as input to an OpenGL/Direct3D program.
This problem MUST have been solved before - there's plenty of visualisation in HPC.
In summary: How does one easily render to a video in an offline manner (very basic graphics not toy story - I just need 3D blobs) without impacting performance in a big way?
My idea would be to store the different states/positions of the vertices as individual frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, an XML-based intermediate format for 3D scenes, so it can be easily parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, like COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya ...).