Background
Hello everybody, I am a beginner in Unity and I am planning on building a small mobile game. Because size is such a critical factor on small handheld devices, I have searched for ways to reduce the size of the resulting APK/IPA file.
One solution I have encountered is to use proprietary compression algorithms by providers like https://kraken.io/.
I was wondering if it is possible to replace Unity's built-in image compression with API calls to https://kraken.io/.
Question
Is it possible to write custom build scripts that loop over the resulting build and replace the respective image files with the ones output by Kraken?
Could anyone point me in the right direction on how one would achieve this? Any suggestions or help will be much appreciated.
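For reference, the hook itself would presumably be a [PostProcessBuild] callback along the lines of the sketch below (the file scan and the Kraken upload are placeholders, not a working integration). Note that for APK/IPA builds the textures have already been converted and packed into Unity's own asset archives at that point, so a simple file swap would only ever touch loose files such as StreamingAssets:

    using System.IO;
    using UnityEditor;
    using UnityEditor.Callbacks;
    using UnityEngine;

    public static class PostBuildImageHook
    {
        // Runs after every build. For APK/IPA builds, pathToBuiltProject points at the
        // packaged archive, so textures inside it cannot simply be swapped on disk.
        [PostProcessBuild]
        public static void OnPostprocessBuild(BuildTarget target, string pathToBuiltProject)
        {
            string buildDir = Path.GetDirectoryName(pathToBuiltProject);

            // Only loose files (e.g. StreamingAssets copied next to the build) could be
            // replaced this way.
            foreach (string file in Directory.GetFiles(buildDir, "*.png", SearchOption.AllDirectories))
            {
                Debug.Log($"Candidate for external recompression: {file}");
                // A Kraken-style HTTP API call would go here (not shown).
            }
        }
    }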
You can reduce disk usage by enabling Use Crunch Compression on your textures. This behaves similarly to what Kraken does. One limitation is that textures must have power-of-two dimensions.
Here's a screenshot of a texture from Kraken with similar results when it comes to file size.
Note that the pixel count is lower in Unity due to the power-of-two (PoT) conversion.
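If you want to flip that setting on many textures at once rather than clicking through each one, an editor script along these lines should do it (a sketch, assuming a Unity version that exposes crunchedCompression on TextureImporter, i.e. 2017.3+; the menu path and quality value are just examples):

    using UnityEditor;

    public static class EnableCrunch
    {
        [MenuItem("Tools/Enable Crunch Compression On All Textures")]
        public static void Run()
        {
            foreach (string guid in AssetDatabase.FindAssets("t:Texture2D"))
            {
                string path = AssetDatabase.GUIDToAssetPath(guid);
                var importer = AssetImporter.GetAtPath(path) as TextureImporter;
                if (importer == null) continue;

                importer.crunchedCompression = true;   // same toggle as "Use Crunch Compression" in the inspector
                importer.compressionQuality = 50;      // illustrative quality value, 0-100
                importer.SaveAndReimport();
            }
        }
    }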
Although file size is important, there are other aspects to keep in mind when building your game: keeping draw calls down by using automatic or manual batching and loading assets on demand.
Addressable Assets is a way to externalize and manage game assets to handle dependencies for on-demand loading and content updates.
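For example, loading a prefab on demand through Addressables looks roughly like this (assuming the Addressables package is installed and the asset has been given the address "Enemy", which is just an illustrative name):

    using UnityEngine;
    using UnityEngine.AddressableAssets;
    using UnityEngine.ResourceManagement.AsyncOperations;

    public class EnemySpawner : MonoBehaviour
    {
        // "Enemy" is an illustrative address assigned in the Addressables Groups window.
        void Start()
        {
            Addressables.LoadAssetAsync<GameObject>("Enemy").Completed += OnLoaded;
        }

        void OnLoaded(AsyncOperationHandle<GameObject> handle)
        {
            if (handle.Status == AsyncOperationStatus.Succeeded)
                Instantiate(handle.Result);
        }
    }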
For batching, it depends on the type of game you are creating. For 2D you can use sprite atlasing. TexturePacker is also a good solution I have used in the past.
hth.
I'm actually doing a project with Microsoft's HoloLens. The problem is that the HoloLens has limited memory, so I can only make a spatial mapping of a room and not of a building, because it can't remember the whole building. I had an idea: maybe I can create several objects and assemble them? But nobody talks about this... Do you think it's possible?
Thanks for reading me.
Y.P
Since you don't have a compass, you could establish some convention to help. For example, you could start the scanning with a voice command (and stop it with another one), and decide to only start scanning when you're facing north. Then it would be easy to know the orientation of each room. What may be harder is getting the angle exactly right: your head might be off by a few degrees, and you may have to apply some "magic" (post-processing) to correct it.
Or placing QR codes on a wall (printer paper + scotch tape) and using something like Vuforia can help you avoid this orientation problem altogether (you would get the QR code’s orientation which would match that of the wall).
You can also simplify the scanned mesh and convert it to planes. That way you can remember simpler objects instead of the raw spatial mapping mesh. (Search for the SurfaceToPlanes script in the Holographic Academy tutorials).
Scanning, the first layer (the HoloLens trying to reason about its environment), is an unstoppable process. There is no API for starting or stopping it, and as far as I know it also slowly consumes more and more memory. The only things you can do are deleting space (i.e. deleting holograms) or covering the sensors, but that's at the OS/hardware level, not the app level, which is presumably what you want.
Layer two, which is probably what you're talking about, is starting and stopping the spatial reconstruction process, where that raw spatial data is processed into a low-poly mesh (aka spatial mapping). This process can be started and stopped, for example through Unity's SpatialMappingCollider and SpatialMappingRenderer components if you use Unity.
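A minimal sketch of that second layer, assuming Unity 2017+ where these components live under UnityEngine.XR.WSA (as far as I recall, the freezeUpdates flag is the intended switch for pausing updates while keeping the meshes gathered so far; wiring the methods to voice commands is up to you):

    using UnityEngine;
    using UnityEngine.XR.WSA;   // Unity 2017+; older versions used UnityEngine.VR.WSA

    public class MappingToggle : MonoBehaviour
    {
        public SpatialMappingRenderer mappingRenderer;
        public SpatialMappingCollider mappingCollider;

        // Hook these up to voice commands (e.g. via a KeywordRecognizer) to start/stop
        // pulling in new surfaces while keeping the meshes generated so far.
        public void StopMapping()
        {
            mappingRenderer.freezeUpdates = true;
            mappingCollider.freezeUpdates = true;
        }

        public void StartMapping()
        {
            mappingRenderer.freezeUpdates = false;
            mappingCollider.freezeUpdates = false;
        }
    }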
Finally, the third layer is extracting objects/segments from that spatial mapping mesh into primitives, like that SurfaceToPlanes script. You can also fully control when that happens.
There has been great confusion, especially due to the renaming sprees in the MixedRealityToolkit (overuse of the word Scanning) and Unity (SpatialAnchor to WorldAnchor, etc.) and misleading tutorials that use a lot of colloquialisms instead of crisp terminology.
Theory aside: if you want the HoloLens to think of your entire building as one continuous space in terms of the first layer, you're out of luck. It was designed for a living room, and there is a lot of voodoo involved in making it work stably even in 30x30 meter facilities. You probably want to rely on disjoint "islands" with specific detection anchors to identify where you are, or rely on markers and coordinates relative to them.
Cheers
tl;dr;
How can a software developer who gets a very highly detailed 3D model quickly and easily optimize it for mobile apps, so he can focus his time and energy on developing the app logic?
I think it's a pretty common use case and there may be a tool for this already out somewhere.
Long story
I have a 3D model (Collada) of a machine. This model, created by the machine's engineering team, contains a lot of minute details that are essential for building the machine hardware.
Now, I am developing a mobile app with Unity that needs to render this machine along with 10 other machines in a single scene. The thing I like about the available models is that they look exactly like the real thing. At the same time, I am not interested in the internals; the external shell is enough for me. I have no contact with the 3D modelling team (let's assume I downloaded the model from some archive), and hence can't ask them to make any changes for me. The model is all I have. I am on my own.
There are two problems I am facing
How to get rid of the interiors of the models?
How to get rid of the high-resolution details in the external shell that the human eye can't detect on a mobile phone?
To give a sense of the scale, the real equipment and hence the model can be as big as 100 ft. (30 m.) while these will never occupy more than a 5 inch HD display. The size of the models ranges from 50MB to 400MB. The entire scene hence can go up to 2GB. Each model has nearly 300k faces.
The other challenge I am facing is that I am a software developer familiar with code; my familiarity with 3D modelling tools is very limited, and I would like to keep it that way :) I can play around with these tools, but I don't want to start spending half my time in them.
I have tried Blender's decimate modifier, but the results aren't good: the detail loss is uniform instead of being targeted at the interiors. I don't want to spend time going through each mesh and deleting them manually.
Also, for some reason, when I import a model exported from Blender into Unity, it looks horrible (some faces/polygons that I can see in Blender don't show up in Unity), even with zero decimation.
I find it hard to accept that the manual process is the only way. I feel that with today's technology this would be simple to automate. The steps as I see them are (a rough sketch of the detection step follows the list):
Detect polygons that aren't directly reachable by any exterior raycast. If required, I can define the set of raycast origin points (14 may be enough), basically camera locations.
Delete these polygons
Detect polygons with dimensions less than a threshold
Delete these polygons
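A rough Unity-side sketch of that detection step, assuming the model gets a non-convex MeshCollider (sampling density and ray length are arbitrary): cast rays from each exterior point towards the mesh and collect the triangle indices that are ever hit; triangles never hit from any viewpoint become deletion candidates. Rebuilding the mesh without those triangles is a separate step not shown here.

    using System.Collections.Generic;
    using UnityEngine;

    public class ExteriorVisibilityScan : MonoBehaviour
    {
        public MeshCollider target;           // the machine's shell, with a non-convex MeshCollider
        public Transform[] exteriorPoints;    // ~14 camera positions around the model
        public int raysPerPoint = 10000;      // illustrative sampling density

        public HashSet<int> FindVisibleTriangles()
        {
            var visible = new HashSet<int>();
            Bounds bounds = target.bounds;

            foreach (Transform origin in exteriorPoints)
            {
                for (int i = 0; i < raysPerPoint; i++)
                {
                    // Aim at random points inside the bounds so the whole surface gets sampled.
                    Vector3 targetPoint = new Vector3(
                        Random.Range(bounds.min.x, bounds.max.x),
                        Random.Range(bounds.min.y, bounds.max.y),
                        Random.Range(bounds.min.z, bounds.max.z));

                    Vector3 dir = (targetPoint - origin.position).normalized;
                    if (target.Raycast(new Ray(origin.position, dir), out RaycastHit hit, 1000f))
                        visible.Add(hit.triangleIndex);   // triangles never added here are interior candidates
                }
            }
            return visible;
        }
    }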
Blender-to-Unity models can have slight problems if you don't export them the right way. How to do that is outside my field, as I am also a developer and personally prefer 3ds Max.
What I would recommend is to do exactly what you don't want to do; it is the easiest way. Select the faces (just drag and select) and then delete them all (the inside faces, that is). You should be able to hide the outer shell in any proper 3D program; just google how to do it if it's too complicated.
If you want to delete smaller details on the outside, do exactly the same: just select the polygons and delete them. I wouldn't recommend using a built-in tool, because most of those tend to take the whole object and reduce or increase its polygon count wholesale, depending on which tool you use.
In the end, next time just try to get into the program. As a programmer I dislike having to use 3D modelling software as well, but it is part of the job, so put some effort in and just learn the tools. It's less work than it seems.
Edit: As for the tools you are asking for, those do not really exist. You don't normally take a high-poly model and turn it into a low-poly model for a mobile game; instead you usually get a 3D artist to make a low-poly model. The fact that you have no communication with the team is a bit odd, but so be it. I'd recommend either getting in touch with them or, like I said before, putting the effort in and learning a 3D program. What you want to do literally sounds like click, drag, select and then press delete to remove some polygons you wouldn't see anyway.
-Lars
with vcglib
vcglib may work for you; see its sample for simplifying a PLY 3D file. It can also be applied to many other 3D file formats such as STL and OBJ. As vcglib is a C++ library, you can write a simple program that uses it to simplify your STL model. This method works on an OS without X, such as Ubuntu Server. You can refer to my question "Failed to simplify 3D models with vcglib, Assertion `0' failed" for how to use vcglib to simplify a PLY 3D file.
with meshlabserver
If you want to do the automatic simplification on an OS with X, or on Windows or macOS, it's much easier: you can use meshlabserver (MeshLab is also built on vcglib). You can run a command like the one below, where PLYmesher_script.mlx is the filter script; you can write this file yourself or generate it with MeshLab (refer here).
meshlabserver -i ./option-0000.ply -o ./meshed.ply -m vc vn -s scripts/PLYmesher_script.mlx
I'm currently developing an application with the initial goal of obtaining, in real time, a 3D model of the environment "seen" by a Kinect device. This information would be later on used for projection mapping but that's not an issue, for the moment.
There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and the model generation has to be in real-time (or close to it).
After a long research on this topic, I came up with several possible (?) architectures:
1) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), then a Mesh and then export it into Unity for further work.
2) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), export it into Unity and then convert it into a Mesh.
3) Use KinectFusion, which already has the option of creating a Mesh model, and (somehow) automatically load the created Mesh model into Unity.
4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh inside Unity (a minimal sketch of the Unity-side mesh construction follows the list).
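For options 2 and 4, the Unity-side piece boils down to building a Mesh from vertex and triangle arrays. A minimal sketch, with illustrative names, assuming you already obtain positions and triangle indices from the depth data by whatever means:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class DepthMeshBuilder : MonoBehaviour
    {
        MeshFilter meshFilter;

        void Awake()
        {
            meshFilter = gameObject.AddComponent<MeshFilter>();
            gameObject.AddComponent<MeshRenderer>();
        }

        // Called each time a new frame of reconstructed geometry is available.
        public void UpdateMesh(Vector3[] vertices, int[] triangles)
        {
            var mesh = new Mesh { indexFormat = IndexFormat.UInt32 }; // allows more than 65k vertices
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.RecalculateNormals();
            meshFilter.mesh = mesh;
        }
    }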
Quite honestly, I'm kind of lost here. My main issue is that the real-time requirement, combined with being forced to integrate several software components, makes this a tricky problem. I don't know which, if any, of these solutions are viable, and information/tutorials on these issues aren't exactly as abundant as they are for, say, skeleton tracking.
Any sort of help would be greatly appreciated.
Regards,
Nuno
Sorry, I might not be providing a solution for real-time mesh creation within Unity, but the process discussion here was interesting enough for me to reply.
In the hard science novel Memories with Maya - there is discussion of exactly such a scenario:
"“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”
“Nothing new in that,” I said.
“Yeah, but look above us.”
I tilted my head up. The crude shape of the Quad came into view.
“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”
“So it's mapping video texture over live geometry? Cool,” I said.
“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”
“All you do is block out the live world with the cross polarizers?”
“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”
“The resolution has improved,” I said.
“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”
“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.
“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.”
Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.
“This is uber cool. Everyone looks so real.”
“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”
“Ahhh! Yes.”
“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...
The work that you're thinking of doing in this area, and this use of a live mesh goes far beyond Projection Mapping for events- for sure!
Wishing you the best on the project, and I will be following your updates.
Some of the science behind the story is on www.dirrogate.com if the topic interests you.
Kind Regards.
I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save it automatically and import it into Unity to generate a mesh automatically. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple sensors. Also, since Fusion is already pre-written, there is not much code you will have to write.
Here is an example of a mesh from Fusion with one camera:
I do want you to notice how many vertices there are though... This could cause performance problems later on.
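If you go the automatic route (save an .obj from the Fusion sample, then pull it into Unity), a very stripped-down runtime loader could look like the sketch below. It only handles "v" lines and triangular "f" faces, ignores normals/UVs/negative indices, and uses a 32-bit index format because Fusion meshes easily exceed Unity's default 65k-vertex limit.

    using System.Collections.Generic;
    using System.Globalization;
    using System.IO;
    using UnityEngine;
    using UnityEngine.Rendering;

    public static class SimpleObjLoader
    {
        // Minimal .obj reader: vertex positions and triangular faces only.
        public static Mesh Load(string path)
        {
            var vertices = new List<Vector3>();
            var triangles = new List<int>();

            foreach (string line in File.ReadLines(path))
            {
                string[] parts = line.Split(' ');
                if (parts[0] == "v")
                {
                    vertices.Add(new Vector3(
                        float.Parse(parts[1], CultureInfo.InvariantCulture),
                        float.Parse(parts[2], CultureInfo.InvariantCulture),
                        float.Parse(parts[3], CultureInfo.InvariantCulture)));
                }
                else if (parts[0] == "f")
                {
                    // "f 1/1/1 2/2/2 3/3/3" -> take the vertex index before the first slash
                    for (int i = 1; i <= 3; i++)
                        triangles.Add(int.Parse(parts[i].Split('/')[0]) - 1); // .obj indices are 1-based
                }
            }

            var mesh = new Mesh { indexFormat = IndexFormat.UInt32 }; // Fusion meshes easily exceed 65k vertices
            mesh.SetVertices(vertices);
            mesh.SetTriangles(triangles, 0);
            mesh.RecalculateNormals();
            return mesh;
        }
    }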
Good luck!
For my Windows Phone Mango app, I want to overlay a heatmap on Bing Maps, and a tile overlay seems the best way to do it. I've been having trouble finding any good documentation or code samples to work from. It seems like most people point the tile source at a web service. I'd rather render the heatmap on the phone itself - is that possible?
One of the main reasons to use tilelayers to represent data on a map is that the computation and rendering involved in creating the layer is performed in advance, generally as a one-off or infrequent task. Then, at runtime, the only work the client needs to do is to retrieve the pre-rendered tile images from the server and display them straight on the map, which is a simple, low-resource activity.
Rendering tiles can be a resource-intensive task, both in terms of processing and memory usage - for example, I can only render about 3 tiles per second on a quad-core desktop machine with 8 GB of RAM. Even if it's technically possible to create the tiles dynamically on a handheld device, the performance is almost certainly going to be unacceptable for any user. You've also got the question of how you're going to store the data from which the layer is created. Since you're talking about plotting a heatmap, I'm guessing you have a reasonably large dataset of points - did you envisage these being stored locally on the device or retrieved over the network? (Either creates different problems.)
Basically, while it may be theoretically possible to create tile layers dynamically on the client, doing so would negate almost any benefit of using tile layers in the first place, which is why you probably won't find any code samples explaining how to do it. Perhaps you could explain why you'd rather create the heatmap on the phone?
It's pretty easy to create a server-side tile renderer using .NET or PHP that renders and serves tile images to a Bing Maps client, or you can use an existing map rendering library such as mapnik.org or geoserver.org.
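For reference, the Bing Maps tile addressing itself is straightforward. The quadkey for a given tile X/Y and zoom level (as described in Microsoft's Bing Maps Tile System article) can be computed like this, and it's what a server-side renderer would typically use to name and look up the tiles it serves:

    using System.Text;

    public static class TileSystem
    {
        // Converts tile X/Y coordinates at a given level of detail into a Bing Maps quadkey.
        public static string TileXYToQuadKey(int tileX, int tileY, int levelOfDetail)
        {
            var quadKey = new StringBuilder();
            for (int i = levelOfDetail; i > 0; i--)
            {
                char digit = '0';
                int mask = 1 << (i - 1);
                if ((tileX & mask) != 0)
                {
                    digit++;
                }
                if ((tileY & mask) != 0)
                {
                    digit++;
                    digit++;
                }
                quadKey.Append(digit);
            }
            return quadKey.ToString();
        }
    }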
I'm currently working on a small iPhone game, and am porting the 3d engine I've started to develop for the Mac to the iPhone. This is all going very well, and all functionality of the Mac engine is now present on the iPhone. The engine was by no means finished, but now at least I have basic resource management, a scene graph and a construction to easily animate and move objects around.
A screenshot of what I have now: http://emle.nl/forumpics/site/planes_grid.png. The little plane is a test object I've made several years ago for a game I was making then. It's not related to the game I'm developing now, but the 3d engine and its facilities are, of course.
Now I've come to the topic of materials: the description of which textures, lights, etc. belong to a renderable object. This means a lot of OpenGL client state and glEnable/glDisable calls for every object. What way would you suggest to minimise these state changes?
Currently I'm sorting by material, since objects with the same material don't need any changes at all. I've created a class called RenderState that caches the current OpenGL state and only applies the members that are different when a different material is selected. Is this a workable solution, or will it grow beyond control when the engine matures and more and more state needs to be cached?
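Roughly, the idea is the following (a trimmed-down sketch with only a few states, written in C# for brevity since the structure carries straight over to C++/Objective-C++; the actual GL calls are only indicated in comments because they depend on the binding used):

    public class RenderState
    {
        public bool Blending;
        public bool DepthTest;
        public uint Texture0;
    }

    public class RenderStateCache
    {
        RenderState current = new RenderState();

        // Apply only the members that differ from the cached state.
        public void Apply(RenderState wanted)
        {
            if (wanted.Blending != current.Blending)
            {
                // glEnable(GL_BLEND) / glDisable(GL_BLEND) would go here.
                current.Blending = wanted.Blending;
            }
            if (wanted.DepthTest != current.DepthTest)
            {
                // glEnable(GL_DEPTH_TEST) / glDisable(GL_DEPTH_TEST) would go here.
                current.DepthTest = wanted.DepthTest;
            }
            if (wanted.Texture0 != current.Texture0)
            {
                // glBindTexture(GL_TEXTURE_2D, wanted.Texture0) would go here.
                current.Texture0 = wanted.Texture0;
            }
        }
    }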
A bit of advice: just write the code you need for your game. Don't spend time writing a generalised rendering engine, because it's more than likely you won't need it. If you end up writing another game, extract the useful bits into an engine at that point. This will be much quicker.
If the number of states in OpenGL ES is as high as in the standard version, it will become difficult to manage at some point.
Also, if you really want to minimize state changes, you might need some kind of state-sorting concept, so that drawables with similar states are rendered together without a lot of glEnable/glDisable calls between them. However, this can be difficult to manage even on PC hardware (imagine state-sorting thousands of drawables), and blindly changing the state might actually be cheaper, depending on the OpenGL implementation.
For a comparison, here's the approach taken by OpenSceneGraph:
Basically, every node in the scene graph has its own stateset, which stores the material properties, states, etc. The nice thing is that statesets can be shared by multiple nodes. This way, the rendering backend can just sort the drawables by their stateset pointers (not the contents of the statesets!) and render nodes with the same stateset together. This offers a nice trade-off, since the backend isn't bothered with managing individual OpenGL states yet can achieve nearly minimal state changes, if the scene graph is built accordingly.
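A sketch of that sorting/grouping step, assuming each drawable simply holds a reference to a shared StateSet object (names are illustrative): grouping by the reference rather than by the contents keeps the comparison cheap and ensures the state is applied once per group.

    using System.Collections.Generic;
    using System.Linq;

    public class StateSet { /* shared material properties: textures, blend mode, etc. */ }

    public class Drawable
    {
        public StateSet State;
        public void Draw() { /* issue the actual draw call */ }
    }

    public static class SimpleRenderer
    {
        public static void Render(IEnumerable<Drawable> drawables)
        {
            // Group by StateSet identity (reference equality), not by comparing contents.
            foreach (var group in drawables.GroupBy(d => d.State))
            {
                ApplyState(group.Key);          // state is applied once per group
                foreach (var d in group)
                    d.Draw();
            }
        }

        static void ApplyState(StateSet state)
        {
            // The glEnable/glDisable/glBindTexture calls for this state set go here.
        }
    }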
What I suggest in your case is to do a lot of testing before settling on a solution. Whatever you do, I'm sure you will need some kind of abstraction over OpenGL state.