I'm making an OpenGL game for iPod/iPhone.
At startup I load all the textures I need at once. At first the loading times were small, but as I kept developing and adding new textures the loading times have been increasing, to the point of taking many seconds before the game starts.
Recently a new problem appeared: when I build the game on the device, it takes too long and the game quits. The app is installed correctly and I can test it, but never while connected to Xcode. Sometimes the app even quits when too many elements are drawn on screen.
Right now I use 6 texture files, about 2 MB in total.
Is there a way to create a loading screen or something similar?
What other measures can I take to solve these issues?
If you're decoding PNG files at startup using Core Graphics, I would suggest using PVRTexTool to create PVR data files instead. The contents of PVR files can be uploaded directly to OpenGL; no need to use Core Graphics to decode them.
PVRTexTool can also do neat stuff like generate mipmaps (another thing you might want to avoid at startup time) and encode to compressed formats (reducing your texture size will help too).
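To get a feel for the savings, here's a rough back-of-the-envelope sketch (Python, purely illustrative) comparing uncompressed 32-bit RGBA with PVRTC1 at 4 bits per pixel, one of the formats PVRTexTool can encode:

```python
# Texture memory for a 1024x1024 image: uncompressed 32-bit RGBA
# vs. PVRTC1 4bpp (one of the formats PVRTexTool produces).
def rgba8888_bytes(w, h):
    return w * h * 4      # 4 bytes per pixel

def pvrtc1_4bpp_bytes(w, h):
    return w * h // 2     # 4 bits = half a byte per pixel

print(rgba8888_bytes(1024, 1024))     # 4194304 bytes (4 MB)
print(pvrtc1_4bpp_bytes(1024, 1024))  # 524288 bytes (512 KB): 8x smaller
```

Beyond the 8x memory saving, the compressed data is what gets read from flash and handed straight to OpenGL, so load time shrinks too.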
Besides encoding your textures as PVR textures, there are a few more solutions.
One is to defer texture loading to a later point. Let your app bring up its user interface and maybe show a progress bar to the user while you're loading your textures. This will stop iPhoneOS from killing your app.
You'll probably also need to look into what kind of textures you are creating. Some formats are much more expensive than others to create from your PNGs.
As a last resort you could save your textures as uncompressed raw textures. This will make your app larger but cut down loading time.
Related
I currently work on a 2D project on Unity (for mobiles/tablets), where I download a few heavy pictures by using UnityWebRequestTexture.GetTexture(url), which I then store using File.WriteAllBytes(filePath, webRequest.downloadHandler.data).
Later in the project, I need to show the downloaded picture on a RawImage element. To avoid blocking the app I read the bytes via FileStream on a separate Thread, which works well.
But when I call (from the main thread) ((Texture2D) myRawImage.texture).LoadImage(loadedBytes), my Profiler shows a huge peak, and while most devices handle it correctly, on iPad Mini I often get a MemoryWarning at this very point, which, when repeated, leads to the app being force-closed by iOS.
For info, the downloaded image's size varies depending on the device requesting it. We initially intended to display 2048x1536 pictures that way on iPad Mini, but we were getting a MemoryWarning on each display. We're now using 750x1334, but we still get MemoryWarnings from time to time.
My question then is: is there a better way to do the "LoadImage" step? I believe the conversion from byte[] to Texture2D is where my memory problem lies, but I haven't found any workaround.
Or alternatively: is there a better way to download/store the image? If the image is in the app's resources from the beginning, I don't seem to have any trouble displaying it on the same device. Maybe there is a way to download the texture while applying to it the same compression that assets receive when the app is built?
Using Texture2D.Compress() after loading the texture compresses the texture by at least 90%.
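For a sense of scale, here's a rough sketch of why LoadImage spikes and what compressing afterwards buys you. The numbers are illustrative and assume Compress() yields DXT5 at roughly 1 byte per pixel; the exact format depends on the platform and Unity version:

```python
# LoadImage decodes the PNG bytes into a full uncompressed RGBA32
# texture, which is where the transient memory peak comes from.
def rgba32_bytes(w, h):
    return w * h * 4   # 4 bytes per pixel, no mipmaps

def dxt5_bytes(w, h):
    return w * h       # ~1 byte per pixel after Compress() (assumed DXT5)

print(rgba32_bytes(2048, 1536))  # 12582912 bytes (~12 MB) during the spike
print(dxt5_bytes(2048, 1536))    # 3145728 bytes (~3 MB) once compressed
```

Note that Compress() only shrinks the resident texture after the fact; the decode spike itself still happens, which is why smaller source images also help.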
Reference
At present, one of my quiz games requires many images of flowers, based on the questions asked.
In total there are 250+ images, with a resolution of 512x435 each.
On top of that, the game's other textures are loaded at the same time.
So when the game screen that shows all these images is opened, the app crashes on the spot.
I am testing this on iPhone devices. How do I handle this many textures? I am stuck at this point.
Here is an overview of the flower textures.
I display all of these in a grid view, so everything appears on one scrollable screen.
I hope you now understand my point of view.
You need to reduce the number of these images that are loaded at once. Try keeping most of the images' rendering components disabled via some sort of managing script while they are not visible, rather than simply putting them off camera, and moderate the number that are rendered at once. If this does not work, the problem might be that the total size of the images is bigger than the amount of RAM your device has. Even the iPhone X has only 3 GB of RAM, so check whether your images in total are bigger than or close to the 1-3 GB range; Unity Remote 5 is probably storing these images in RAM or temporary memory. It is always a good idea to compress your images when working with a mobile device anyway. Try putting them into a texture atlas or lowering their quality until the iPhone can handle it; you should never rely on mobile devices to render tons of images at once.
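A quick way to run that size check is a back-of-the-envelope estimate (illustrative, assuming 32-bit RGBA; older OpenGL ES versions may also pad non-power-of-two images up to the next power of two):

```python
# Rough in-memory footprint of 250 flower textures at 512x435.
def texture_bytes(w, h, bytes_per_pixel=4):
    return w * h * bytes_per_pixel

count = 250
raw = count * texture_bytes(512, 435)     # images as authored
padded = count * texture_bytes(512, 512)  # if padded to power-of-two

print(raw // 1024 // 1024, "MB raw")      # 212 MB
print(padded // 1024 // 1024, "MB padded")  # 250 MB
```

Either way, that is a lot to hold resident at once on an older iPhone, which is consistent with the on-the-spot crash.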
Wow. That's a lot of flowers....
Okay, if you need all of those textures and you're only displaying them in the editor, I'd group them into folders.
Even just sorting the blue from the red into human-readable directories will help a lot, as you won't load ALL of your images at once.
If you are loading all of those textures at runtime (which I don't think you would, as Unity optimizes your built executable), I would recommend against loading all of the images at once, because you will use a lot of RAM, especially if those images are 512px or larger.
Within the application you could show 25 at a time on screen, swiping right to go to the next 25 or back to the previous. This is how I solved a similar problem in Unity a while back.
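The paging idea above can be sketched like this (language-agnostic Python sketch; names are illustrative):

```python
# Only one "page" of 25 textures is active at a time; swiping
# moves the window over the full list.
PAGE_SIZE = 25

def page(items, page_index, page_size=PAGE_SIZE):
    start = page_index * page_size
    return items[start:start + page_size]

flowers = [f"flower_{i:03d}" for i in range(250)]
print(len(page(flowers, 0)))   # 25 items on the first page
print(page(flowers, 9)[0])     # flower_225 starts the last page
```

On a swipe you would unload (or disable the renderers of) the outgoing page before loading the incoming one, so at most 25 textures are resident at once.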
I need to create a very short background animation for the entire iPhone screen. I am trying to figure out whether I should use a video or animate a series of PNG files.
Does anybody know the advantage of using video as an app background instead of a series of PNG files animated inside a UIImageView? I heard that video can be compressed to a really small size and will look better than animated PNGs.
Thanks in advance.
I really don't think using a video rather than a series of PNG is a good idea. You'll make your life more complicated for a few kilobytes saved, if you even save any.
It is of course highly dependent on what exactly you are trying to animate, and where you try to save space. For example, if you try to save memory in the binary and if you have your PNGs (already well compressed format) in a zip that you unzip on the fly, most similarities will be factored out by the zip algorithm. If you're trying to save space in memory during the game itself (not in the binary), then this doesn't count. However, loading up the video library binaries has a serious chance to clutter your memory more than the few PNGs will.
This project does the job with JPG; it should be very easy to change that to PNG ;)
I have thousands of png files that I am loading using libpng and then creating OpenGLES textures out of for use in the application. Doing this is causing a huge load time lag on the iPhone. Is there any way to speed up load time?
What I've typically done is build an object that handles lazy loading of my textures through a manager. At startup I register my known textures and resolve the file-system attributes I will need later, to save on simple IO at load time; then, as textures are needed, I pull them in.
To speed things up I have a batch load mechanism too, where I say "load this array of images and return them to me." This is simply to remove the overhead of repeated method calls. Indeed, my single load solution is just a simple wrapper around my batch load.
This way, at startup I cache my bookkeeping (object creation, file-system attribute discovery, etc.) but defer heavy work until necessary. As textures are requested at run time, faults are triggered that fill them in from storage to texture memory. If I'm loading a scene with many textures known beforehand, I load the set of very common textures in a prefetch, but defer relatively uncommonly seen textures to runtime.
In practice this tends to work due to the probabilities involved: forcing the load at start time guarantees you pay for all textures at once, whereas loading them sparsely makes the expected user-visible latency drop off, weighted by load latency times the probability of a texture being needed within some window of time from start. If you optimize your start-time prefetch to load as few textures as possible, you decrease your expected UI latencies dramatically.
Additionally, you may want to consider using NSURLConnection:connectionWithRequest:delegate: for loading your textures from storage. It is asynchronous, so you can ask it to load your largest textures asynchronously and your smaller ones synchronously, taking advantage of IO/CPU idle time during file-system fetches and texture decompression (large files take long to load, while small files load fast and can be deserialized at the same time). You should test this, though, since the iPhone may not handle asynchronous file-system access well.
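The lazy-loading-manager-with-prefetch scheme described above might look roughly like this (a language-agnostic sketch in Python; the loader callback stands in for the real file IO and GL upload, and all names are illustrative):

```python
# Lazy texture manager: cheap bookkeeping at startup, textures
# faulted in on first use, with a batch prefetch for common ones.
class TextureManager:
    def __init__(self, loader):
        self.loader = loader   # function: path -> texture object
        self.registry = {}     # name -> file-system info cached at startup
        self.cache = {}        # name -> loaded texture

    def register(self, name, path):
        # Done at startup: no pixel data is read here.
        self.registry[name] = path

    def get(self, name):
        # Fault the texture in on first use, then serve from cache.
        if name not in self.cache:
            self.cache[name] = self.loader(self.registry[name])
        return self.cache[name]

    def prefetch(self, names):
        # Batch load to avoid repeated per-call overhead.
        return [self.get(n) for n in names]

loads = []
mgr = TextureManager(lambda path: loads.append(path) or f"tex:{path}")
mgr.register("grass", "grass.pvr")
mgr.register("rock", "rock.pvr")
mgr.prefetch(["grass"])   # common texture loaded up front
mgr.get("grass")          # cached: no second load
print(loads)              # ['grass.pvr']; "rock" not loaded yet
```

The single-item get is just a degenerate batch, mirroring the "single load wraps batch load" design described above.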
You can place several textures in one big texture (e.g. 1024x1024). This requires some recalculation of the texcoords, though. Also try to keep the texture parts close to the actual resolution used when drawing; i.e., if a texture will fill half the screen height (240px), it's good enough to use a 256x256 texture.
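The texcoord recalculation is just a division by the atlas size; a minimal sketch (assuming a square atlas, and a top-left pixel origin, which is an assumption you may need to flip for OpenGL's bottom-left convention):

```python
# Convert a sub-image's pixel rectangle inside an atlas into
# normalized texture coordinates.
def texcoords(x, y, w, h, atlas_size=1024):
    s0, t0 = x / atlas_size, y / atlas_size
    s1, t1 = (x + w) / atlas_size, (y + h) / atlas_size
    return (s0, t0, s1, t1)

# A 256x256 sprite stored at pixel (512, 0) in a 1024x1024 atlas:
print(texcoords(512, 0, 256, 256))   # (0.5, 0.0, 0.75, 0.25)
```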
Then, a more advanced method (not tested) could be to concatenate all files (registering their lengths and positions) and then use mmap() to access this png-db file. Then use NSData to feed the texture-loading function, if it is based on UIImage. The good thing about using mmap() like this is that you don't open and close a lot of files, and access to the data is handled with the help of the OS VM manager.
[Note: yeah, the iPhone's native PNG loader requires some PNG mangling... you might need a non-native reader then, or concatenate mangled files instead.]
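The concatenate-plus-mmap idea might be prototyped like this (Python standing in for the C mmap(2) call; the packing format with an (offset, length) table is an assumption, not a fixed format):

```python
# Pack several files into one blob with an (offset, length) table,
# then mmap the blob and slice out one entry without per-file opens.
import mmap
import os
import tempfile

def pack(files):  # files: {name: bytes}
    table, blob, pos = {}, b"", 0
    for name, data in files.items():
        table[name] = (pos, len(data))
        blob += data
        pos += len(data)
    return table, blob

files = {"a.png": b"AAAA", "b.png": b"BBBBBBBB"}  # stand-in contents
table, blob = pack(files)

path = os.path.join(tempfile.mkdtemp(), "textures.db")
with open(path, "wb") as f:
    f.write(blob)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    off, length = table["b.png"]
    print(mm[off:off + length])   # b'BBBBBBBB'; the VM manager pages this in
    mm.close()
```

In the real app the table would be built offline and shipped alongside the blob, and each slice would be wrapped in NSData for the texture loader.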
Do you know what order you're going to need your thousands of images in? Perhaps you can only load a few hundred of them when your app starts, and then load more on a background thread as you go.
Combining small images into fewer larger textures will also be a good idea, again only if there's some pattern about what images are used together.
I am not sure what the problem is. My app runs fine on the simulator, but when I try to run it on the iPhone it crashes, with or without debugging, with signal "0". I am using Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the Library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information, please say so and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high; I really can't do anything about reducing the number of pictures being loaded. I was using UIImage before, but it was giving me really low frame rates, and I read on this site that I should use OpenGL ES for higher frame rates.
Also, a sub-question: is there any way to avoid using UIImage to load the PNG file before using the provided Texture2D class to create the texture for OpenGL ES drawing? Is there some function in OpenGL ES that will create a texture straight from a PNG file?
thousands of textures? really? how many of them are on the screen at one time? perhaps you can load only some of them at a time, or, if they're small, combine them into fewer larger textures.
the general guideline I've heard is that you are limited to 24MB of texture memory.
there's nothing built into OpenGLES that loads from disk, but you can use a file parser like stb_image to do it yourself.
I tried loading ten texture pieces of 2048x2048 pixels each.
The texture memory exceeded 24 MB, but the iPhone 3GS was able to load and render them.
I also recommend stb_image or the SOIL texture loader.
(The SOIL library uses stb_image internally.)