I'm running into problems uploading a VertexBuffer to the context.
The buffer is ~200,000 items long and the upload takes over 15 seconds; the player apparently has difficulty pushing it to the context.
Has anyone experienced this? Any solutions?
I'm trying to upload the vector in chunks, so far with no success.
Update:
Apparently the problem is not the upload but something else. The data is created from JavaScript, since this is a customized Three.js fallback. It works great for small scenes but slows down exponentially for bigger projects.
Right now I am also investigating ExternalInterface communication speed.
Thanks!
Uploading with a ByteArray is faster, by the way, though performance has changed quite a lot from player version to player version. There is a benchmark comparing ByteArray and Vector at http://jacksondunstan.com/articles/1617; try it with your GPU. You can also upload with the async flag, or from a separate worker thread.
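The chunking the question mentions is language-agnostic; here is a minimal sketch in Python of splitting a large vertex array into fixed-size slices so no single upload call stalls the player (the names and chunk size are hypothetical, not from the original code):

```python
def chunked(data, chunk_size):
    """Yield (offset, slice) pairs covering a flat vertex list."""
    for start in range(0, len(data), chunk_size):
        yield start, data[start:start + chunk_size]

# Upload ~200,000 items in pieces instead of one giant call.
vertices = list(range(200_000))
for offset, chunk in chunked(vertices, 65_536):
    # In ActionScript this would be something like
    # vertexBuffer.uploadFromVector(chunk, offset, len(chunk))
    pass
```

In ActionScript the same loop shape applies; the offset argument to the upload call is what lets each chunk land in the right place in the buffer.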
I am generating a video stream in real time, and I've got it as a series of bitmaps in memory.
I'd like to stream these bitmaps over the network using libvlc, but I wasn't able to find the right functions in the API (all streaming functions expect a file or other source).
I even thought of emulating a capture device, but that seemed too convoluted, so I'd rather ask.
My question is, what do I now have to do with these bitmaps to be able to use libvlc to stream them?
I found a question which appears to be solving the same issue.
Another suggestion, with significantly less overhead, is to "emulate" a file with named pipes, i.e. FIFOs.
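A minimal sketch of the named-pipe idea in Python on a POSIX system (the frame bytes here are placeholders for encoded bitmap data; in practice libvlc would be the reader, pointed at the FIFO path as if it were a file):

```python
import os
import tempfile
import threading

# Create a named pipe; to consumers it looks like an ordinary file path.
fifo_path = os.path.join(tempfile.mkdtemp(), "stream.fifo")
os.mkfifo(fifo_path)

frames = [b"frame-%d" % i for i in range(3)]  # stand-ins for encoded frames

def writer():
    # Opening a FIFO for writing blocks until a reader opens it too.
    with open(fifo_path, "wb") as f:
        for frame in frames:
            f.write(frame)

t = threading.Thread(target=writer)
t.start()
# A real consumer (e.g. libvlc given fifo_path) would read here instead.
with open(fifo_path, "rb") as f:
    received = f.read()
t.join()
```

The producer writes frames as fast as it generates them; the FIFO gives you file semantics without ever touching disk.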
I want to find out which part of my game consumes the most Mono memory in Unity, because all I see in the profiler is a chunk labeled "Mono" or something like that, and I do not know what it is.
I have already checked the textures in the Unity profiler and I do not have a problem with them (as stated, all I saw was the Mono chunk with high memory usage). The problem must be some kind of memory leak rather than a memory spike, because if I play the game long enough, my Galaxy Grand Duo crashes with black textures, which means too much memory is being used. In the profiler, after playing the game multiple times, only ManagedHeap and Mono Domain show up with large chunks of memory.
More information about which platform you are developing for, which tools you are already using, and why you are profiling your memory in the first place would be helpful. Without that information I can only suggest the following:
1) Unity Memory Profiler
I would recommend starting with the memory profiling tools included with the Unity3D editor. You can find out more about these tools here: http://docs.unity3d.com/Manual/ProfilerMemory.html
It sounds like you are already doing this, since you have narrowed it down to the "Mono" item in the profiler. This is good: you now know that it is one of your scripts that is consuming the memory.
Make sure you are using the Advanced View. The advanced view of the Unity Profiler will give you more information about which scripts are utilizing your memory.
2) Textures
When it comes to Unity and memory I always start with the textures. It seems like every time I do something with dynamic loading or modifying of textures I end up creating a memory leak. Take a look at your scripts, particularly any that load textures, and try temporarily disabling this logic. Does it help your memory issue?
3) Observe and Optimize
If you aren't able to locate any scripts that seem like they could be causing an issue, I would try observing your game and locating the point where you see your memory spike. Try to identify what logic is running at this point in time. Disable individual scripts and run your scene again. Did this reduce the memory usage? Repeat this process until you locate the script or scripts responsible for your spike. Once you find these scripts you can try re-factoring them until you get the results you are looking for.
I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this solution.
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone help? Is there another way to display the waveform more quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code so it processes the asset without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (this may have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, I read each CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReader). You can configure the queue to give you enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to use too much memory by making the buffer too big, or to make it so big that it causes issues with playback. I do not know how long processing the waveform will take, so I will test on my older iPods and an iPhone 4 before trying it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
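Whatever the reader plumbing looks like, the per-sample work for drawing a waveform is usually just a min/max reduction per display bucket. A sketch of that reduction in Python (the sample list stands in for the PCM data a copyNextSampleBuffer loop would hand you; in the real app this is the part worth writing in plain C):

```python
def waveform_peaks(samples, buckets):
    """Reduce raw PCM samples to one (min, max) pair per display bucket."""
    size = max(1, len(samples) // buckets)
    peaks = []
    for i in range(0, len(samples), size):
        window = samples[i:i + size]
        peaks.append((min(window), max(window)))
    return peaks
```

Each (min, max) pair becomes one vertical line of the waveform, so the amount of data you keep is proportional to the display width, not the track length.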
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio
A game I'm working on has several XML files it uses to manage sprite animations. Currently, when I create an instance of a sprite, I load the file into an XDocument once and keep it in a cache, so if I need it again I can just grab what is already in memory.
I do this very often in-game as I create animated sprites and such, going through its definitions like so:
var definitions = doc.Document.Descendants(name);
foreach (var animationDefinition in definitions)
{
    // build the sprite's animation from the definition's attributes
}
So my question is: is this acceptable on a mobile phone, say iPhone 3GS/iPhone 4/Windows Phone 7/Android? I use MonoTouch on Android and iPhone, while WP7 has its own .NET runtime.
The reason I ask is that currently I don't load that many animated sprites, but as I add more and more I'm worried it will start hurting performance. I figure it is better to change my design now than wait and suffer at a later date.
Thanks for any help!
I would simply test which is faster: reading it again and again using a SAX parser, or storing it in memory using DOM.
It may also make sense to save the data read from your XML file in something like an array/vector/class, so you don't need to parse the XML file over and over again.
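The parse-once-then-reuse idea can be sketched as follows (Python for brevity; the C# equivalent would cache a dictionary of pre-extracted definitions instead of re-walking the XDocument, and the key scheme here is only illustrative):

```python
import xml.etree.ElementTree as ET

_definitions = {}  # (source, element name) -> list of parsed attribute dicts

def load_definitions(source, xml_text, name):
    """Parse the XML once per (source, name); later calls hit the cache."""
    key = (source, name)
    if key not in _definitions:
        root = ET.fromstring(xml_text)
        _definitions[key] = [el.attrib for el in root.iter(name)]
    return _definitions[key]
```

The point is that the cache holds the already-extracted plain data, so repeated sprite creation never touches the XML tree again.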
Essentially, what you are asking is how many is "too many". To find a "practical" limit for performance, I would suggest progressively increasing the number of sprites and noting when the application slows down, then keeping a safety factor of, say, 1.5 to 3. Also consider how many sprites you have in total to begin with: if all of them can be kept in memory while respecting all other constraints, then go for it. Customers won't mind a slightly longer loading time for a game, as long as it runs smoothly once inside. Otherwise, just test things out.
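The increase-until-it-slows-down procedure above can be sketched like this (measure_fps is a hypothetical stand-in for actually running your scene with n sprites and recording the frame rate):

```python
def practical_sprite_limit(measure_fps, target_fps=30, safety=2.0):
    """Double the sprite count until the frame rate drops below target,
    then back off by a safety factor (1.5 to 3, as suggested above)."""
    n = 1
    while measure_fps(2 * n) >= target_fps:
        n *= 2
    return int(n / safety)
```

Doubling keeps the number of test runs logarithmic in the final count, which matters when each measurement means relaunching the scene on a device.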
I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use vertex buffer objects (VBOs). Before I do, I would like to make sure it will be worth it. As far as I know, the only reason to create VBOs is to move a chunk of data onto the graphics card so it doesn't need to be fetched from system RAM when used. The iPhone, however, does not have any dedicated video RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen conflicting opinions around the internet, and Apple certainly wants devs to use them, so there's probably still a reason; I just wanted to see if anyone on SO had an opinion to add.
I saw no performance improvement on an iPhone 3G. I moved a bunch of stuff to VBOs, but eventually backed it out as it made it more difficult for me to pursue other performance gains. It's not the quick 25% performance increase that I was hoping for.
I've read somewhere that it can make a difference on the newer hardware (3GS), but I don't have references to back that up.
It depends. (Sorry.)
Rob didn't see an improvement for his setup, but here is an interesting post that did see a large improvement.
The main reason VBOs exist is the presence of static data in 3D models. The first bottleneck you encounter is the slowness of copying data to video memory every frame (via the glBegin/glEnd block, which is unavailable on OpenGL ES, or via glVertexPointer, glBufferData and friends).
Imagine the old "flying toaster" screensaver. All the toasters are static (only their positions change), so why waste resources copying them every frame from CPU memory to the GPU? Copy them once into a buffer and draw them with a single command. And, depending on how you do animations, even the animated toasters can be described in a static fashion.
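The copy-once argument can be made concrete with a mock driver (this is not real GL; the byte counter just makes the bandwidth difference between per-frame uploads and a static buffer visible):

```python
class MockGL:
    """Counts client-to-GPU copies to contrast the two upload strategies."""
    def __init__(self):
        self.bytes_copied = 0

    def upload(self, data):
        self.bytes_copied += len(data)

def draw_without_vbo(gl, mesh, frames):
    for _ in range(frames):       # client-array path: re-copy every frame
        gl.upload(mesh)

def draw_with_static_vbo(gl, mesh, frames):
    gl.upload(mesh)               # glBufferData(GL_STATIC_DRAW): copy once
    for _ in range(frames):       # subsequent draws read GPU-side memory
        pass
```

For a 1 KB mesh drawn over 60 frames, the first path copies 60 KB across the bus and the second copies 1 KB, which is exactly the saving the toaster example describes.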
My first 2D game started without VBOs. When I changed to VBOs, there was no difference (like Rob saw). But when I refactored to use more static buffers, FPS went from 20 to 40. Since my goal was to reach 30, I was satisfied. I had some ideas to refactor even further, leaving everything static, but I don't have time now (the game is in review, and the next one is on the way).