I'm trying to create a relatively simple game with 2D graphics that has somewhat complicated animations. I tried to develop the game with the basic Core Graphics and Core Animation libraries, but the animation performance of the game objects is not satisfactory (i.e. jerky/delayed).
So I am now looking into Cocos2D, which looked very promising until I checked into its high-resolution support for the iPhone 4. This doesn't seem to work very well at all, but then again, I've just started to look into it.
The one thing the documentation seems to say about the issue is to simply use the CCDirector to set the scale factor, which can be done like so:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    [[CCDirector sharedDirector] setContentScaleFactor:[[UIScreen mainScreen] scale]];
}
However, this doesn't give the behavior I expect. It simply doubles the number of pixels to work with, meaning I have to take the 'double pixels' factor into account in all my code.
In the Core Graphics/Core Animation libraries the coordinate space we work with doesn't change. Behind the curtains everything is simply doubled to fill the iPhone 4's high-res screen, and the @2x images are used where available. This actually works very well, and the game logic need not take the device's resolution into account. Cocos2D seems to have implemented this pretty badly, making me not want to get on board the Cocos2D train.
I found some other topics on the subject but there were no real answers that didn't involve workarounds. Is this being fixed? Am I doing it wrong or is Cocos2D simply not high res ready? Are there any alternatives to Cocos2D which can give me what I need?
Sorry for the shameless self-promotion but I guess it can help you out. I've written two small blog posts on high res in cocos2d recently and put them up on our website.
The first one goes into detail as to how to get cocos2d up and running including a proposed fix for how to implement the auto-loading behaviour of regular vs high-res images.
Additionally, there is a second post trying to go into some details on the difference between points and pixels. UIKit is entirely based on points which is why you do not have to re-work all your coordinates. In contrast to UIKit, cocos2d works based on pixels and is therefore resolution-dependent. You can provide cocos2d with the means to work with points rather than pixels through some rather simple conversions. I've implemented those as categories for CCDirector and CCNode to make them easy to work with. Obviously, the methods in these categories are not the only ones one should "pointify". Specifying sizes on CCLabel or any positions on CCMoveBy, for example, are pixel-based and thus would have to be reworked also ...
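For illustration, a conversion category along those lines might look like the sketch below. The selector name is my own, not necessarily the one from the blog posts; the point is simply to multiply point-based coordinates by the screen scale before handing them to cocos2d.

#import "cocos2d.h"

// A minimal sketch of the "pointify" category idea (the selector name is
// mine, not necessarily Benjamin's actual API). It converts a point-based
// position into the pixel-based one cocos2d expects.
@interface CCNode (PointsSketch)
- (void)setPositionInPoints:(CGPoint)pointPosition;
@end

@implementation CCNode (PointsSketch)
- (void)setPositionInPoints:(CGPoint)pointPosition
{
    // On an iPhone 4 the scale is 2.0; on older devices it is 1.0.
    CGFloat scale = 1.0f;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        scale = [[UIScreen mainScreen] scale];
    self.position = ccp(pointPosition.x * scale, pointPosition.y * scale);
}
@end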
Have a look at the blog at: http://apptech.next-munich.com/ (I would've directly linked to the articles but as a new user I can only post a single link in an answer).
Hope this helps and gets you started!
I'm in the process of figuring out a pretty simple solution to this. My progress is posted here: http://www.cocos2d-iphone.org/forum/topic/8511#post-50361
Basically, you scale the texture source rects in CCSprite, but nothing else. That way you get to keep using your same 320x480 coordinate system (a.k.a. a points system), but you get high-res graphics.
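To give a rough idea of the approach (the actual patch is in the forum post above, and a real version also has to compensate the sprite's content size so it still occupies the same number of points on screen):

#import "cocos2d.h"

// Rough illustration only, not the exact patch from the forum post: a
// CCSprite subclass that scales just the texture source rect, so the rest
// of the game keeps thinking in 320x480 points.
@interface HiResSprite : CCSprite
@end

@implementation HiResSprite
- (void)setTextureRect:(CGRect)rect
{
    CGFloat scale = 1.0f;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        scale = [[UIScreen mainScreen] scale];
    // Sample a scale-times-larger region of the @2x texture. A complete
    // implementation would also adjust the resulting content size.
    [super setTextureRect:CGRectMake(rect.origin.x * scale,
                                     rect.origin.y * scale,
                                     rect.size.width * scale,
                                     rect.size.height * scale)];
}
@end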
You should combine my technique with point #2 from Benjamin's blog, so you don't run into memory problems: http://apptech.next-munich.com/2010/07/high-res-using-cocos2d-on-ios-4.html
Cocos2D uses the UIImage methods imageWithContentsOfFile: and initWithContentsOfFile: which, despite what the documentation says, don't correctly return the @2x versions of images on the iPhone 4.
This is a known bug and apparently Apple are working on it.
However, some people believe this is correct behaviour, since you are passing a path to a specific file rather than the name of a file and letting the system decide which one to use. In that case, I have proposed that the documentation be updated to reflect this fact, if it is true. We'll see how Apple respond.
Related
I'm working on an iPhone game that involves only two dimensional, translation-based animation of just one object. This object is subclassed from UIView and drawn with Quartz-2D. The translation is currently put into effect by an NSTimer that ticks each frame and tells the UIView to change its location.
However, there is some fairly complex math that goes behind determining where the UIView should move during the next frame. Testing the game on the iOS simulator works fine, but when testing on an iPhone it definitely seems to be skipping frames.
My question is this: is my method of translating the view frame by frame simply a bad method? I know OpenGL is more typically used for games, but it seems a shame to set up OpenGL for such a simple animation. Nonetheless, is it worth the hassle?
It's hard to say without knowing what kind of complex math is going on to calculate the translations. Using OpenGL for this only makes sense if the GPU is really the bottleneck. I would suspect that this is not the case, but you have to test which parts are causing the skipped frames.
Generally, UIView and CALayer are implemented on top of OpenGL, so animating the translation of a UIView already makes use of the GPU.
As an aside, using CADisplayLink instead of NSTimer would probably be better for a game loop.
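For illustration, a minimal CADisplayLink-driven loop might look like this; moveViewForElapsedTime: stands in for your existing per-frame update method:

#import <QuartzCore/QuartzCore.h>

// A minimal CADisplayLink-driven loop (iOS 3.1+, link against QuartzCore).
// Unlike NSTimer, the callback fires in sync with the display refresh,
// which usually gives smoother animation.
- (void)startGameLoop
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)tick:(CADisplayLink *)link
{
    // link.duration is the display's seconds-per-frame; using it keeps the
    // movement math frame-rate independent even when frames are dropped.
    [self moveViewForElapsedTime:link.duration]; // placeholder for your update method
}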
The problem with the iPhone Simulator is that it has access to the same resources as your Mac: your Mac's RAM, video card, and so on. What I would suggest is opening Instruments (which comes with the iPhone SDK) and using the Core Animation template to have a look at how your resources are being managed. You could also look at Allocations to see if something is hogging RAM. The CPU sampler could help too.
tl;dr The iPhone Simulator uses your Mac's RAM and graphics card. Try looking at the sequence in Instruments to see if there's some lag.
I'm working on an iPhone app that I'm going to be demo'ing to a live audience soon.
I'd really like to demo the app live over VGA to a projector, rather than show screenshots.
I bought a VGA adapter for iPhone, and have adapted Rob Terrell's TVOutManager to suit my needs. Unfortunately, the frame rate after testing on my television at home just isn't that good - even on an iPhone 4 (perhaps 4-5 frames per second, it varies).
I believe the reason for this slowness is that the main routine I'm using to capture the device's screen (which is then being displayed on an external display) is UIGetScreenImage(). This routine, which is no longer allowed to be part of shipping apps, is actually quite slow. Here's the code I'm using to capture the screen (FYI mirrorView is a UIImageView):
CGImageRef cgScreen = UIGetScreenImage();                     // private API: grabs the whole framebuffer
self.mirrorView.image = [UIImage imageWithCGImage:cgScreen];  // hand it to the mirroring image view
CGImageRelease(cgScreen);                                     // UIGetScreenImage() returns a retained image
Is there a faster method I can use to capture the iPhone's screen and achieve a better frame rate (shooting for 20+ fps)? It doesn't need to pass Apple's app review - this demo code won't be in the shipping app. If anyone knows of any faster private APIs, I'd really appreciate the help!
Also, the above code is being executed by a repeating NSTimer which fires every 1.0/desiredFrameRate seconds (currently every 0.1 seconds). I'm wondering whether wrapping those calls in a block and using GCD or an NSOperationQueue might be more efficient than having the NSTimer invoke my updateTVOut Objective-C method that currently contains those calls. I'd appreciate some input on that too; some searching suggests that Objective-C message sending is somewhat slow compared to other operations.
Finally, as you can see above, the CGImageRef that UIGetScreenImage() returns is turned into a UIImage, and that UIImage is passed to a UIImageView, which is probably resizing the image on the fly. I'm wondering if the resizing might be slowing things down even more. Any ideas on how to do this faster?
Have you looked at Apple's recommended alternatives to UIGetScreenImage? From the "Notice regarding UIGetScreenImage()" thread:
Applications using UIGetScreenImage() to capture images from the camera should instead use AVCaptureSession and related classes in the AV Foundation Framework. For an example, see Technical Q&A QA1702, "How to capture video frames from the camera as images using AV Foundation". Note that use of AVCaptureSession is supported in iOS4 and above only.
Applications using UIGetScreenImage() to capture the contents of interface views and layers should instead use the -renderInContext: method of CALayer in the QuartzCore framework. For an example, see Technical Q&A QA1703, "Screen capture in UIKit applications".
Applications using UIGetScreenImage() to capture the contents of OpenGL ES based views and layers should instead use the glReadPixels() function to obtain pixel data. For an example, see Technical Q&A QA1704, "OpenGL ES View Snapshot".
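For the second case (capturing views and layers), a minimal QA1703-style sketch might look like this. Note that renderInContext: will not capture OpenGL ES content, UIGraphicsBeginImageContextWithOptions requires iOS 4, and it may well still be too slow for 20+ fps mirroring:

// Sketch of the QA1703-style capture: render the key window's layer
// into a bitmap context.
- (UIImage *)captureScreen
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];
    // A scale of 0.0 means "use the device's scale" (iOS 4 and later).
    UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0.0);
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}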
New solution: get an iPad 2 and mirror the output! :)
I don't know how fast this is, but it's worth a try ;)
CGImageRef screenshot = [[UIApplication sharedApplication] _createDefaultImageSnapshot];
[myVGAView.layer setContents:(id)screenshot];
where _createDefaultImageSnapshot is a private API. (Since it's for a demo... it's OK, I suppose.)
and myVGAView is a normal UIView.
If you get CGImageRefs, then just pass them to the contents of a layer; it's lighter and should be a little bit faster (but just a little bit ;) )
I haven't got the solution you want (simulating video mirroring), but you can move your views to the external display. This is what I do, and there is no appreciable impact on the frame rate. Obviously, since the view is no longer on the device's screen, you can no longer directly interact with it or see it.

If you have something like a game controlled with the accelerometer, this shouldn't be a problem; something touch-based will require some work, though. What I do is have an alternative view on the device while the primary view is external. For me this is a 2D control view to "command" the normal 3D view. If you have a game, you could perhaps create an alternative input view to control it (virtual buttons/joystick, etc.). It really depends on what you have as to how best to work around it.
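For reference, here's roughly what the external-display setup looks like with the public UIScreen API (iOS 3.2+); gameView stands in for whatever view you already render into:

// Attach a second UIWindow to the external screen (public API, iOS 3.2+).
// gameView is a placeholder for your existing game/content view.
if ([[UIScreen screens] count] > 1) {
    UIScreen *external = [[UIScreen screens] objectAtIndex:1];
    // Pick one of the modes the adapter/TV reports; the last is usually
    // the highest resolution.
    external.currentMode = [external.availableModes lastObject];

    UIWindow *externalWindow = [[UIWindow alloc] initWithFrame:external.bounds];
    externalWindow.screen = external;      // must be set before showing the window
    [externalWindow addSubview:gameView];
    externalWindow.hidden = NO;
}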
Not having jailbroken a device myself I can't say for sure, but I am under the impression that a jailbroken device can essentially enable video mirroring (like they use at the Apple demos...). If true, that is likely your easiest route if all you want is a demo.
I'd like to have a water effect on a background layer in my app. The effect doesn't need to react to touch or anything - it just needs to wave an image a little bit.
CCWaves3D seems OK, but leaves nasty black artifacts around the edges when I run it; CCShaky3D does the same. CCLiquid brings my app down from 20fps to 5fps.
Is there any other effect I might want to try out? Or perhaps I'm using the current effects in a wrong way?
id shaky = [CCShaky3D actionWithRange:4 shakeZ:NO grid:ccg(15,10) duration:4];
id liquid = [CCLiquid actionWithSize:ccg(15,10) duration:1];
id wave = [CCWaves3D actionWithWaves:18 amplitude:80 grid:ccg(15,10) duration:10];
Bonus question - where can I find any good documentation for the cocos2d effects? I found the default cocos2d docs utterly useless and wasted a couple of hours googling before asking this question :/
I have noticed performance issues when building/running in debug mode. Have you tried to build/run in release mode? Also, are you experiencing this on the device and not just on the simulator?
Unfortunately, I have not found alternate documentation specifically for the cocos2d effects. Here are a few links to posts and sites I have gathered, covering many different resources: tutorials, tools for making tile-map games, using Zwoptex for making sprite sheets, using Vertex Helper for making a vertices plist file for Box2D/Chipmunk collision detection (instead of just rectangles), and sites for images and sounds:
Cocos2d Resources
Need 2D iPhone graphics designed
http://www.learn-cocos2d.com/knowledge-base/tutorial-professional-cocos2d-xcode-project-template/
I have found Ray's tutorials especially helpful, along with the test applications included with cocos2d.
Happy coding!
I have a set of tiled image collections created via Microsoft's Deep Zoom Composer, and a Silverlight application that currently consumes them for display via MultiScaleImage. It's all working pretty well. I'd just like to get some experience with iPad programming, and I have a couple of ideas for some iPad applications. All of my ideas rely on being able to display/manipulate these tiled image sets on the iPad.
I just picked up an iMac to facilitate this. I'm not seeing any Objective-C/Cocoa Touch libraries for this, though, so I'm assuming I will have to roll my own. (I saw the Seadragon Ajax component, which is pretty slick, but I'm dealing with collections here, which it doesn't support. I would also like to build this as a native application, just to get the experience.)
The only open-source project I found for displaying/manipulating the tiled image sets was OpenZoom, a Flash component. I'm not too familiar with ActionScript either (Python, Java, C#, and C are the only languages I have really used), but briefly inspecting the code I didn't really have any issues with it, and I can probably use it for hints on how to swap the tiles in and out, etc. But, as I'm pretty new to Objective-C/Cocoa Touch, some pointers in the right direction would be appreciated.
1) Are there any other projects out there I am missing, or is OpenZoom my best bet for some reference?
2) Should I be trying to do this display in the UIKit framework, or should I do it as an OpenGL display?
3) Any other suggestions/pointers that I didn't think to ask.
I have just been working on a few apps that rely on tiling large images to allow for deep zooming. I found a couple of examples, but the best and most useful for me was Apple's "PhotoScroller" sample code.
It relies on CATiledLayer to handle the tiling. The result is an extremely smooth and responsive interface, even with very large images, and it's not too complex. (A little complex, but not too bad.)
So to answer your question directly:
PhotoScroller Code
QuartzCore Framework (which is part of the SDK)
There is a great, free little Mac app for slicing images into tiles that I use a lot: "Tilen".
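If you want the flavor of it before digging into PhotoScroller, a bare-bones CATiledLayer-backed view looks something like this (tileForRect: is a hypothetical stand-in for your own tile-loading code):

#import <QuartzCore/QuartzCore.h>

// Bare-bones tiled view in the spirit of PhotoScroller. UIKit calls
// drawRect: once per visible tile (on a background thread), so you only
// ever load the tiles that are actually on screen.
@interface TiledImageView : UIView
- (UIImage *)tileForRect:(CGRect)rect; // placeholder: your tile-loading code
@end

@implementation TiledImageView

+ (Class)layerClass
{
    return [CATiledLayer class]; // back the view with a tiling layer
}

- (void)drawRect:(CGRect)rect
{
    // rect covers a single tile; fetch and draw just that tile image.
    [[self tileForRect:rect] drawInRect:rect];
}

- (UIImage *)tileForRect:(CGRect)rect
{
    // Stub: map rect to one of your pre-sliced tile files and load it here.
    return nil;
}

@end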
In the WWDC 2010 source samples, under iOS, there is a project in the ScrollView Suite called Tiling. It corresponds to WWDC10 session 104. It is probably the best image tiling example out there.
You can take a look at the way the RouteMe library does this: the dynamic loading of higher-resolution tiles, panning, etc. https://github.com/route-me/route-me
I can't believe nobody has told you about UIScrollView; it is designed for this very purpose! (Think Google Maps, which uses it.)
Check out the class reference...
UIScrollView
The delegate method you require is the following:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
This allows you to check the zoom level, offset etc and then provide a view for zooming. This way you can maintain your 'position' within the tiled landscape independently of the graphics used to represent it.
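A minimal setup might look something like this; tiledView is an assumed property holding your tiled content view, and the view controller is the scroll view's delegate:

// Hook the tiled content view into a zooming scroll view.
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.scrollView.delegate = self;
    self.scrollView.minimumZoomScale = 0.25f;
    self.scrollView.maximumZoomScale = 4.0f;
    self.scrollView.contentSize = self.tiledView.bounds.size;
    [self.scrollView addSubview:self.tiledView];
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.tiledView; // the view the scroll view scales while pinching
}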
Don't roll your own UIScrollView, no need to!
Take a look at CATiledLayer. See my answer to a similar question here: Drawing in CATiledLayer with CoreGraphics CGContextDrawImage
What are some suggested "paths" for getting better at drawing in code in Cocoa? I think at this point, that's my biggest weakness. Is drawing in code something general, or Cocoa-specific?
Thanks!
- Jason
The best way is probably practice. Try drawing some simple things at first: a calendar (basically a grid), a custom button, or a digital clock.
It's also worth noting that a lot of 'custom' controls are made from images, so not that much of the drawing is done in code -- the only thing the code does is stitch those images together.
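For example, the classic image-stitching trick is a cap-stretchable background image; the asset name and button variable here are assumptions:

// Stretch a small PNG's middle while keeping its 8px end caps crisp,
// then use it as a button background (button.png and myButton are assumed).
UIImage *cap = [UIImage imageNamed:@"button.png"];
UIImage *stretchable = [cap stretchableImageWithLeftCapWidth:8 topCapHeight:0];
[myButton setBackgroundImage:stretchable forState:UIControlStateNormal];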
You might want to look at Opacity, a drawing app for OS X (I'm not affiliated with these folks, just discovered the app a few days ago). What sets Opacity apart from other drawing apps is that it can create Quartz code directly from your drawings. Naturally, the generated code is not perfect but in the few days I've been trying this app I've found it to be quite helpful in understanding how to use Quartz more effectively.
Drawing in code is needed for creating custom controls no matter which UI toolkit you pick. It certainly has its advantages: for example, the application/framework you are building is really lightweight come production time, because there are a lot fewer resources (images/fonts/etc.) to worry about.
Also, if a problem arises, changing drawing code is a lot easier than redoing code and images together.
If you are doing Cocoa drawing, start by looking at the source code of BGHudAppKit and reading Apple's Cocoa Drawing Guide.
I'm in the same boat as you; I'd like to learn more about drawing code.
It's a big document, but the Quartz 2D Programming Guide on the developer website seems like a good place to start. It introduces graphics contexts and paths and includes plenty of images.
There's also a book referenced in that document, Programming with Quartz: 2D and PDF Graphics in Mac OS X, which looks good. The APIs for iPhone and OS X are almost identical, so there's no problem using a Mac OS X book.
So I'd suggest starting with the Apple documentation (you don't need to read past the section on CGLayer drawing), trying some sample code, and figuring out how it works. Then move on to either that book or more sample code on the web. Good luck!
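As a concrete first exercise in the spirit of that guide, here's a small sketch that fills a rounded rectangle in a UIView's drawRect: (the iPhone flavor; on the Mac you'd grab the context from NSGraphicsContext instead):

// Fill a rounded rectangle with Quartz path primitives.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 0.2f, 0.4f, 0.8f, 1.0f);

    CGRect box = CGRectInset(self.bounds, 10.0f, 10.0f);
    CGFloat radius = 12.0f;

    // Build the rounded-rectangle path by hand with arcs.
    CGContextMoveToPoint(ctx, CGRectGetMinX(box) + radius, CGRectGetMinY(box));
    CGContextAddArcToPoint(ctx, CGRectGetMaxX(box), CGRectGetMinY(box),
                           CGRectGetMaxX(box), CGRectGetMaxY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMaxX(box), CGRectGetMaxY(box),
                           CGRectGetMinX(box), CGRectGetMaxY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMinX(box), CGRectGetMaxY(box),
                           CGRectGetMinX(box), CGRectGetMinY(box), radius);
    CGContextAddArcToPoint(ctx, CGRectGetMinX(box), CGRectGetMinY(box),
                           CGRectGetMaxX(box), CGRectGetMinY(box), radius);
    CGContextClosePath(ctx);
    CGContextFillPath(ctx);
}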