Tips on getting the best GPU performance while using a PNG texture atlas? - swift

Hello, I'm a new developer and I have a spider solitaire app on the App Store. When I profile my app, the GPU usage is reported as "very high." I've been researching a solution off and on for months, but haven't found one yet.
The spider variant of solitaire uses 104 cards, all of which can potentially be on the scene at the same time, and they are all PNGs. From my research, I'm fairly sure the PNGs are the reason for the performance hit.
The reason I want PNGs over less resource-intensive file formats is to replicate the rounded corners of a playing card. I've tried using shape masks, but that just made performance worse. Beyond that, I've seen references to "tiling" on SO, but haven't been able to parse that information.
I've sized the card images correctly, and I'm getting 60 fps about 99% of the time, but of course I'm concerned about energy consumption. I should mention that the game is 2D SpriteKit.
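For context, here's a stripped-down sketch of the kind of atlas setup I mean, with the rounded corners baked into each PNG's alpha channel (the atlas name "Cards" and the texture names are placeholders):

    import SpriteKit

    // Pack all 104 card PNGs (rounded corners baked into the alpha channel)
    // into one atlas so SpriteKit can batch the cards into a few draw calls
    // instead of binding one texture per card.
    let cardAtlas = SKTextureAtlas(named: "Cards")

    func makeCardNode(named name: String) -> SKSpriteNode {
        // Every sprite shares the atlas texture; no shape masks needed.
        return SKSpriteNode(texture: cardAtlas.textureNamed(name))
    }

    // Preloading up front is one suggestion I've seen, so PNG decoding
    // doesn't happen mid-game.
    cardAtlas.preload {
        // Safe to present the scene and deal the cards here.
    }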
Is this just a "suck it up, you made a costly design decision" situation, or is there a better way? Is caching a thing here, or weak references maybe? Please be nice; I'm not very experienced with this, but I love it.
Thank you very much to anyone who can provide any resources!

Related

Apple Vision Image Registration

I want to stitch images together to make a spherical panorama in an iOS app. I tried doing it with OpenCV, but that turned out to be a waste of time since it almost always crashes when I try to stitch photos of the ceiling or the floor. It also consumes a lot of CPU and memory.
I just discovered, while going through Apple's documentation, that Apple Vision has image-registration capabilities. After spending hours and hours on it, though, I couldn't figure out how to use it. The documentation is terrible, and there are no usage examples at all.
All I really need is a tutorial, a demo, or a function that stitches two or more images, and I can make my way from there. Any help would be extremely appreciated, since my job depends on it.
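In case it helps anyone landing here, below is a minimal sketch of the Vision registration call as I understand it. It computes only the translation that aligns one CGImage onto another (Vision also has VNHomographicImageRegistrationRequest for perspective alignment); warping and blending the panorama are still up to you, and the function name and error type are my own:

    import Vision
    import CoreGraphics

    enum RegistrationError: Error { case noObservation }

    // Sketch (iOS 11+): ask Vision for the translation that aligns
    // `floating` onto `reference`.
    func registrationTransform(reference: CGImage,
                               floating: CGImage) throws -> CGAffineTransform {
        // The handler holds the reference image; the request targets
        // the image that should be moved onto it.
        let request = VNTranslationalImageRegistrationRequest(targetedCGImage: floating,
                                                              options: [:])
        let handler = VNImageRequestHandler(cgImage: reference, options: [:])
        try handler.perform([request])
        guard let observation = request.results?.first
                as? VNImageTranslationAlignmentObservation else {
            throw RegistrationError.noObservation
        }
        return observation.alignmentTransform
    }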

Better options than Vuforia for Extended Tracking?

A friend and I are trying to develop an app with a very unique and strange concept. I have a Rubik's Cube where all the stickers have been replaced with QR codes, so that to the naked eye they are impossible to distinguish from each other, but through a QR-code reader (and a fair bit of patience) you can scan the stickers one by one to reveal which color each in fact is, and use this system to ultimately solve the cube. The idea was well received in the cubing community, and I am now looking to expand on it by making an app that recognizes each sticker, tracks it using a tracking plugin, and virtually colors it in, so that through the lens of my smartphone the stickers have colors. I have so far made a simple version of the app using the Vuforia tracking plugin, and it has produced really promising results (screenshot from said app).
My problem, though, is that Vuforia only allows 2-3 stickers to be tracked and painted at a time; while I've had it read 6 stickers before, that required me to hold the cube very still. I am developing a brand new set of stickers that aren't QR codes but rather random patterns that will hopefully be easier for the app to recognize, but what I really need is a tracking plugin that allows more stickers to be tracked at once. Vuforia is, to my understanding, the best free tracking plugin out there, but are there any other tracking plugins that can produce the results I'm seeking? Optimally, I would like to track 27 stickers at once, as that's the maximum number of stickers visible on a Rubik's Cube at a time, but if that's too much, 9 stickers, the number on each face, would do just fine too.
Can anyone with more knowledge of tracking and extended tracking help me? I didn't make this app, only the stickers, cube, and concept, so I don't know how to use the plugins. Would it be possible to use several plugins at once? I am willing to pay for a license for a better plugin if that's what it takes. If there are better ways to approach my idea, I am also very open to hearing them. Thank you very much for your time.

Is Deconvolution possible for video in iOS?

I want to film a batter swinging at a baseball, but the bat is blurry. The video is 30 fps.
Through research I have found that deconvolution seems to be the way to minimize motion blur, but I have no idea whether or how I can implement it in post-processing in my iOS app.
I was hoping someone could point me in the right direction: how to apply a deconvolution algorithm on iOS, what I might need to do, or whether it is even possible. I imagine it takes some processing power.
Any suggestions at all are welcome...
Thanks, this is driving me crazy...
After a lot of research and talks with developers about deconvolution on iOS (thanks to Brad Larson for taking the time to give me detailed information), I am confident that it is not possible and/or not worth the time. Even if the hardware could handle the computations (no guarantee), it would be EXTREMELY slow and consume much of the device's battery. I have also been told it could take months to implement the algorithms... if it is possible at all.
Here is the response I received from Apple...
Deconvolution algorithms are generally difficult to implement and can be very computationally intensive. I suggest you start with a simple sharpening technique. Depending on the amount of motion blur in your video, it might just suffice.
The sharpen filters, including CISharpenLuminance and CIUnsharpMask, are now available in iOS 6, so it is moderately easy to test them out.
Core Image Filter Reference
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
There is also Core Image sample code from this year's WWDC session 511, "Core Image Techniques"; it's called "Attempt3". This sample demonstrates best practices for applying CIFilters to live video taken by the iPhone/iPad camera. You may download the session video from the following page: https://developer.apple.com/videos/wwdc/2012/.
Just wanted to pass this information along.
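To make that suggestion concrete, here's a minimal Swift sketch of the unsharp-mask route on a single frame; the radius and intensity values are guesses you'd have to tune, and for video you'd apply this per frame:

    import CoreImage
    import UIKit

    // Sharpen one frame with Core Image's CIUnsharpMask filter.
    func sharpened(_ image: UIImage) -> UIImage? {
        guard let input = CIImage(image: image),
              let filter = CIFilter(name: "CIUnsharpMask") else { return nil }
        filter.setValue(input, forKey: kCIInputImageKey)
        filter.setValue(2.5, forKey: kCIInputRadiusKey)      // illustrative value
        filter.setValue(0.8, forKey: kCIInputIntensityKey)   // illustrative value
        guard let output = filter.outputImage else { return nil }
        // Creating a CIContext per call is wasteful; reuse one for video.
        let context = CIContext()
        guard let cgImage = context.createCGImage(output, from: output.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }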

Possible to get more than 20K+ triangles at 35fps on iPhone 3GS?

I'm programming a new engine for iOS, and I'm at the point where I can test how much power I can get out of it.
My code is written in C++, and the engine is built in a highly efficient manner: streaming, batch rendering, frustum culling, occlusion culling, fast memory managers, etc. However, the results don't meet my expectations, and I'm wondering whether anyone has been able to get more out of their iPhone.
Right now I'm rendering only the geometry with textures, and the best I get is about 20K+ triangles rendered at ~35 fps on my iPhone 3GS.
Is this the maximum the iPhone 3GS can do, or has anyone done better?
P.S. I'm not using triangle strips yet, so I know there is about a ~5 fps improvement in there.
As far as the maximum possible performance of the 3GS is concerned, take a look here:
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro11&D=Apple%20iPhone%203G%20S&testgroup=lowlevel
Well, I did more research on this. I was already aware of the 7M triangles/sec figure, but that's just a raw number that doesn't take triangle filling into account.
So, to make sure there wasn't a big bottleneck in my code, I downloaded the Oolong engine and did some comparisons; the speeds were fairly similar.
Oolong engine (running the San Angeles demo), Core Animation instrument results:
5k to 14k triangles: ~60 fps
20k to 25k triangles: ~40 fps
25k to 30k triangles: ~30 fps
I'm getting very much the same results in terms of speed.

Quartz 2D vs OpenGL ES Learning Curve

I have been developing iPhone applications for a couple of months. I would like to know your views, from your own perspective, on the Quartz vs. OpenGL ES 1.x/2.0 learning curve. My questions are:
*I am a wannabe game developer, so is it a good idea to develop in Quartz first and then move on to OpenGL ES, or does it not make a difference?
*Can you please share your experiences from when you had a similar question?
Thanks :)
Quartz 2D is not suitable for game development, IMHO. It is a software-rendering API and won't give you realtime rendering speed. It's good for drawing charts or vector text with shadows, or for blending several images together. Just not for games. Unless you want to make a game where a few images move against a monochrome background, and even in that case I doubt it will be really smooth on older devices. I've seen some games obviously coded with Quartz. A pitiful sight.
Sooner or later you'll end up using OpenGL ES or a game framework built on top of it. I recommend checking out cocos2d, the SIO2 engine, or the examples from the SDK.
With careful programming it is possible to make an OpenGL ES game with parallax scrolling and a relatively small number of objects run at 60 FPS even on 2nd-generation devices. Tiny Wings is an example of such a game. And maintaining a stable 30 FPS is not a problem at all.
I skipped Quartz and went right to OpenGL ES. I started with a 2D sprite based game. Thought it was pretty easy.
The key is having a good example to look at. I used the Lunar Lander clone (Crash Lander), but I don't think that's easy to find anymore. Maybe someone who has done it recently knows of a better, newer example that uses current best practices.
I'm in the same boat as you describe, although I have no programming background (and I don't know what your background is either). Currently, I am learning to code as I learn the various APIs that are available. I'm an Objective-C guy going backwards to the C-based Quartz API, and it's a bit of a challenge. Luckily, Programming in Objective-C 2.0 by S. Kochan has a great chapter on the underlying C features to keep you afloat.
I have taken a couple of stabs at OpenGL ES, and I have to say that, from a conceptual standpoint, I'm not ready for it. The Quartz 2D API is a bit easier to learn conceptually because it's very easy to get up and running with a few commands. Right now, I'm at the point where I can define shapes and draw point-to-point images without too much trouble.
OpenGL ES is going to be something in my future, but it takes such an enormous amount of code to configure the drawing view, set up buffers, etc. If you are familiar with everything the code is doing, then it's a bit easier. From a learning perspective, however, Quartz is an easier way to get going quickly.
Resources I'm using: the aforementioned book, and an anemic collection of blogs containing tutorials, which are limited at best. At this point, make an appointment with the Apple docs and get cozy, because they're about the best (free) material out there, and exhaustive. With that said, I'd love for someone to prove me wrong on this site by posting a great learning resource, but that's about it. Good luck.
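To illustrate the "up and running with a few commands" point, here's a minimal Swift sketch of drawing a shape with Quartz via the modern UIGraphicsImageRenderer wrapper; the sizes and colors are arbitrary:

    import UIKit

    // Render a rounded rectangle into a UIImage with Quartz (Core Graphics).
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 200, height: 200))
    let image = renderer.image { context in
        let cg = context.cgContext
        cg.setFillColor(UIColor.systemBlue.cgColor)
        cg.setStrokeColor(UIColor.black.cgColor)
        cg.setLineWidth(2)
        let rect = CGRect(x: 20, y: 20, width: 160, height: 160)
        let path = CGPath(roundedRect: rect, cornerWidth: 12, cornerHeight: 12,
                          transform: nil)
        cg.addPath(path)
        cg.drawPath(using: .fillStroke)
    }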
I have been looking into the fundamental differences so I can decide between OpenGL (ES), Quartz, or a hybrid. The good news is that a hybrid is an option. Clearly Quartz is easier to master for object-oriented programming, and Apple's answer appears to be that OpenGL "...is ideal for immersive types of applications..."
http://developer.apple.com/library/ios/#DOCUMENTATION/General/Conceptual/Devpedia-CocoaApp/DrawingModel.html
I don't want to limit the category to games, as I believe any game UX can be applied to a business app, a productivity app, entertainment viewing, etc. By the same token, I fully expect the technology (both h/w and s/w) to advance enough to make either a viable choice.