How can I read a .raw file in Unity? - unity3d

I have downloaded a .raw file of depth data from this website.
3D Video Download
In order to get a depth data image, I wrote a script in Unity as below:
However, this is the texture I got.
How can I get the depth data texture shown below?

RAW is not a standardized format; while most of the variants are pretty easy to read (there's rarely any compression), it might not be just one call to LoadRawTextureData.
I am assuming you have tried texture formats other than PVRTC_RGBA4 and they all failed?
First off, if you know the resolution of your image and the file size, you can try to guess the format; for depth data it's common to use 8-bit or 16-bit values. If you need 16-bit values, take two bytes and combine them with
a << 8 | b
or, equivalently,
a * 256 + b
But sometimes another operation is required (e.g. for bit depths that aren't byte-aligned).
Once you have your values, getting the texture is as easy as calling SetPixel enough times
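For example, here's a minimal sketch, assuming a headerless file of 16-bit big-endian depth samples; the file name, width and height are placeholders you'd replace with the values for your actual file (swap the two bytes if the data turns out to be little-endian):

    // Minimal sketch: load a headerless 16-bit raw depth file and display it as a grayscale texture.
    // Width, height, byte order and file name are assumptions -- adjust them to match your file.
    using System.IO;
    using UnityEngine;

    public class RawDepthLoader : MonoBehaviour
    {
        public string path = "depth.raw"; // placeholder file name
        public int width = 640;           // placeholder resolution
        public int height = 480;

        void Start()
        {
            byte[] bytes = File.ReadAllBytes(path);
            var tex = new Texture2D(width, height, TextureFormat.RGBA32, false);

            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    int i = (y * width + x) * 2;              // two bytes per 16-bit sample
                    int depth = bytes[i] << 8 | bytes[i + 1]; // big-endian; use bytes[i + 1] << 8 | bytes[i] for little-endian
                    float v = depth / 65535f;                 // normalize to 0..1 (rescale by your sensor's max depth if the image looks too dark)
                    tex.SetPixel(x, y, new Color(v, v, v, 1f));
                }
            }

            tex.Apply();
            GetComponent<Renderer>().material.mainTexture = tex;
        }
    }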

Related

Fast movie creation using MATLAB and ffmpeg

I have some time series data that I would like to turn into movies. The data could be 2D (about 500x10000) or 3D (500x500x10000). For 2D data, the movie frames are simply line plots made with plot, and for 3D data we can use surf, imagesc, contour, etc. We then create a video file from these frames in MATLAB and compress it using ffmpeg.
To do this fast, one would try not to render all the images to the display, nor save the data to disk and then read it back again during the process. Usually one would use getframe or VideoWriter to create a movie in MATLAB, but they seem to get tricky if one tries not to display the figures on screen. Some even suggest plotting in hidden figures, then saving them as .png images to disk, then compressing them with ffmpeg (e.g. with the x265 encoder into .mp4). However, saving the output of imagesc on my iMac took 3.5 s the first time, then 0.5 s afterwards. I also find it not fast enough to save so many files to disk only to ask ffmpeg to read them all again. One could hardcopy the data as this suggests, but I am not sure whether it works regardless of the plotting method (e.g. plot, surf, etc.), nor how one would transfer the data over to ffmpeg with minimal disk access.
This is similar to this question, but immovie is too slow. This post is similar, but advocates writing images to disk and then reading them back (slow I/O).
Maybe what you're trying to do is convert your data into an image by doing the same kind of operation that surf, imagesc or contour does, and then write it to a file directly; that would keep all the data in memory until writing is needed.
I have a little experience with real images that might also apply here:
I found that calling imshow took a lot of time, but changing the CData of an existing image object created by imshow took around 5 ms. So maybe you could set up a figure using whichever function you like, and then update the underlying CData, XData, YData, etc. so that the figure updates in the same fashion?
Best of luck!

Kinect v1 with Processing to .obj to unity [closed]

I want to create a hologram that is exported via the Kinect to a HoloLens, but it's very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the .obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that:
The process works, but it is very slow. I want the hologram to be fluid. A lot of time is wasted on:
taking the depth and RGB data from the Kinect
exporting the data to .obj, .png and .mtl files
downloading the files in Unity as frequently as possible
rendering the files
I'm thinking of streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have some pointers to help me?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to somehow stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempts use Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud in a format like .ply.
You can find more info on writing a .ply file in Processing in this answer.
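That linked answer is Processing-specific; as a rough sketch of the same idea in C# (so it could be called from a Unity script), where the output path and the point array come from your own capture code, an ASCII .ply writer can be as simple as:

    // Minimal sketch: write an array of points to an ASCII .ply file.
    // The caller supplies the output path and the points; no normals or colors are written.
    using System.Globalization;
    using System.IO;
    using System.Text;
    using UnityEngine;

    public static class PlyWriter
    {
        public static void Write(string path, Vector3[] points)
        {
            var sb = new StringBuilder();
            sb.AppendLine("ply");
            sb.AppendLine("format ascii 1.0");
            sb.AppendLine("element vertex " + points.Length);
            sb.AppendLine("property float x");
            sb.AppendLine("property float y");
            sb.AppendLine("property float z");
            sb.AppendLine("end_header");

            foreach (var p in points)
            {
                // Invariant culture so decimals are written with '.' regardless of locale
                sb.AppendLine(string.Format(CultureInfo.InvariantCulture, "{0} {1} {2}", p.x, p.y, p.z));
            }

            File.WriteAllText(path, sb.ToString());
        }
    }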
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 x 480 = 307,200 points) * 3 coordinates)
if you stream both the depth (11-bit, 640x480) and RGB (8-bit, 640x480) images, that will be even more data.
One option might be to send only the vertices that have a valid depth, and to skip points overall (e.g. send every 3rd point). In terms of sending the data, you can try OSC.
Once you get the points into Unity, you should be able to render them as a point cloud.
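For example, here's a minimal sketch of the Unity side, assuming you've already parsed the received points into a Vector3 array (the class and method names are made up, and you'll need a material whose shader can draw points):

    // Minimal sketch: render an array of points as a point cloud using MeshTopology.Points.
    using UnityEngine;

    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class PointCloudRenderer : MonoBehaviour
    {
        public void SetPoints(Vector3[] points)
        {
            var mesh = new Mesh();
            // 32-bit indices allow more than ~65k points (Unity 2017.3+);
            // on older versions, split the cloud into several smaller meshes instead.
            mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
            mesh.vertices = points;

            var indices = new int[points.Length];
            for (int i = 0; i < indices.Length; i++) indices[i] = i;
            mesh.SetIndices(indices, MeshTopology.Points, 0);

            GetComponent<MeshFilter>().mesh = mesh;
        }
    }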
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one so far, but from a quick search I see there are options like this one (though it looks quite dated).
You'll need to do a bit of research to see what Kinect v1 depth-streaming libraries are already out there, then test and see what works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received data in Unity.

How to convert `mapbox.mapbox-terrain-v2` tiles to heightmap tiles?

Mapbox provides a kind of map tile, mapbox.mapbox-terrain-v2, which is stored in PBF format and saved with the .mvt suffix. The height data is represented by contour lines.
I want to generate terrain with a satellite texture and this height data in Unity3D. How can I convert this PBF data to a heightmap (one pixel per height value)?
Here is an example:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.jpg?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the .mvt file:
https://api.mapbox.com/v4/mapbox.mapbox-terrain-v2/12/1171/1566.mvt?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
And the Mapbox documentation:
https://www.mapbox.com/vector-tiles/mapbox-terrain/
https://www.mapbox.com/vector-tiles/specification/
Mapbox has built a Unity3D package: the Mapbox Unity SDK.
SDK here: https://www.mapbox.com/unity/
Just click download.
This is an asset you can open in Unity directly.
Launch Unity3D, go to Menu > Assets > Import Package > Custom Package
and select your downloaded file.
It will unpack some files and folders, and among them you will find some example scene files to help you.
The current vector terrain layer isn't designed to be turned into a heightmap: we've processed terrain into elevation contours and lines, so turning those back into raw data would be difficult (much like doing the opposite: we do a lot of processing because it would also be difficult to transfer raw data and derive visual data).
A new and improved vector terrain model that supports your use case is on the way, but we've also introduced RGB terrain, which was designed specifically to address cases like Unity: decoding the RGB-encoded elevation tiles tends to be much simpler in software.
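For reference, the Terrain-RGB tiles pack elevation into the red, green and blue channels, and the documented decoding formula is height = -10000 + (R * 256 * 256 + G * 256 + B) * 0.1, in meters. Here's a minimal Unity-side sketch, assuming you've already downloaded a tile into a readable Texture2D (the class name is illustrative):

    // Minimal sketch: decode a Mapbox Terrain-RGB tile into elevations in meters.
    // Assumes the tile has been loaded into a readable (non-compressed) Texture2D.
    using UnityEngine;

    public static class TerrainRgbDecoder
    {
        public static float[,] Decode(Texture2D tile)
        {
            int w = tile.width, h = tile.height;
            var heights = new float[h, w];
            Color32[] pixels = tile.GetPixels32();

            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < w; x++)
                {
                    Color32 c = pixels[y * w + x];
                    // height = -10000 + (R * 256 * 256 + G * 256 + B) * 0.1
                    heights[y, x] = -10000f + (c.r * 256f * 256f + c.g * 256f + c.b) * 0.1f;
                }
            }

            return heights;
        }
    }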

faster imread with matlab for images with small portion of data

I'm trying to track objects in separate frames of a video. If I do background subtraction before storing the images, the image files will be much smaller (about one fifth the size), so I was wondering whether I can also read these images faster, since most of the pixels are zero. Still, a plain imread didn't make any difference.
I also tried the PixelRegion option to load only the locations of the objects, but that didn't work either, since there are about ten objects in each frame.
It may be faster to store the frames as a video file, rather than individual images. Then you can read them using vision.VideoFileReader from the Computer Vision System Toolbox.

How can I correctly display a png with a gradient using OpenGL ES on the iPhone?

I've tried using pngs with gradients as textures in my OpenGL ES based iPhone game. The gradients are not drawn correctly. How can I fix this?
By "not drawn correctly" I mean the gradients are not smooth and seem to degrade to sections of a particular color rather a smooth transition.
The basic problem is having too few bits of RGB in your texture or (less likely) your frame buffer. The PNG file won't be used directly by the graphics hardware -- it must be converted into some internal format. I do not know the OpenGL ES API, but presumably you either hand it the .png file directly or you first do some kind of conversion step and hand the converted data to OpenGL ES. In either case, consult the relevant documentation to ensure that the internal format used has sufficient depth. For example, a 256-color palettized image would be sufficient, as would a 24-bit RGB or 32-bit RGBA format. I strongly suspect your PNG is being converted to RGB15 or RGB16, which have only 5 or 6 bits per color component -- not nearly enough to display a smooth gradient.