I am developing an iPhone application where
I need to apply to a UIImageView the animation that Mac OS X uses when minimizing and maximizing windows.
I need an animation on an image that continuously gives a pulse effect (i.e. continuous zoom in/out). (PS: I have done this using two images, but I want to know whether it's possible with a single image.)
The image must start at full screen size with alpha 0 and then move to the center of the screen, increasing alpha with each step until it reaches its original position.
Any help would be really appreciated.
Thanks
This is not predefined, but check out these WWDC 2010 videos for information on how to do it:
Session 424 - Core Animation in Practice, Part 1
Session 425 - Core Animation in Practice, Part 2
This animation is quite complex. I think it is done by manipulating the CGAffineTransform of the window. It can be implemented the old-school way, with a timer updating the transformation matrix manually, or you could maybe even use Core Animation. I think there is an animatable value function for this.
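The continuous pulse part, at least, needs only a single image: one repeating, autoreversing scale animation on the view's layer. Here is a minimal sketch, assuming QuartzCore is linked and imageView is your existing UIImageView (the scale factor and duration are illustrative):

    // Continuous pulse on a single UIImageView: scale up, autoreverse
    // back down, repeat forever. No second image is needed.
    CABasicAnimation *pulse = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
    pulse.fromValue = [NSNumber numberWithFloat:1.0f];   // original size
    pulse.toValue = [NSNumber numberWithFloat:1.2f];     // 20% larger
    pulse.duration = 0.5;
    pulse.autoreverses = YES;          // zoom back out again
    pulse.repeatCount = HUGE_VALF;     // repeat indefinitely
    [imageView.layer addAnimation:pulse forKey:@"pulse"];

The fly-in effect could be built the same way: animate transform.scale from a large value down to 1.0 while animating opacity from 0 to 1, combined in a CAAnimationGroup.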
This is for Unity3D 4.3+
I have a ridiculously large background I wish to use for a 2D scroller game. The background is 10 times the width of a landscape device (10240 x 1024). (The basic loop background goes behind that and is not an issue.)
I understand I can cut the background into 10 images of 1024 x 1024 each (basic sprites), but I'm unsure of the best approach going forward...
One way is to pre-load all the background sprites and then simply scroll them all, but that would take too much memory.
However, keeping in mind this is aimed at mobiles and tablets, isn't it possible to load/unload the background as the player progresses? Like this: initially load 2 background images (bg-1 and bg-2).
Once the camera has passed bg-1, unload bg-1 and load bg-3. Then, when the player passes bg-2, unload bg-2 and load bg-4, and so on. Thus only 2 bg images are loaded at a time.
The player cannot go backwards, which helps in this scenario.
Any thoughts on the best approach?
Thank you.
You can use the Resources.Load function to load assets dynamically. Or just load them all into a list and reference them from there.
How do you create filled flat circles (say, white) that build out from each other, randomly overlapping to fill the iPhone screen, but ease-dissolve so that the screen never gets filled, while a touch is held? On touch release they stop being created, continue easing their transparency to 0, and are cleared from memory.
An example in code would be killer.
What you're describing shouldn't be too hard to do, but it would take some time to actually code. I'd be surprised but impressed if anyone were to make an example for you.
You could probably figure this out for yourself and learn some pretty good coding lessons on the way.
In order to do what you want, you need a pretty good handle on some concepts such as these (a rough sketch follows the list):
(Cocos2d)
Touch Detection
Drawing Shapes
Random Numbers
Opacity
Timed Fade Animation (Your "Ease-Dissolve")
and, most important of all:
**memory management!**
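For illustration, here is a minimal sketch of the core mechanic using plain UIKit/Core Animation rather than Cocos2d (the class name, spawn interval, and size range are all assumptions): while a touch is held, a timer spawns white circles at random positions, and each circle ease-dissolves and then removes itself from memory.

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface CircleView : UIView
    @property (nonatomic, strong) NSTimer *spawnTimer;
    @end

    @implementation CircleView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        // Spawn a new circle every 0.1 s while the finger is down.
        self.spawnTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                           target:self
                                                         selector:@selector(spawnCircle)
                                                         userInfo:nil
                                                          repeats:YES];
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        [self.spawnTimer invalidate];   // stop creating circles on release
        self.spawnTimer = nil;
    }

    - (void)spawnCircle {
        CGFloat r = 20.0f + arc4random_uniform(40);   // random radius
        CGFloat x = arc4random_uniform((u_int32_t)self.bounds.size.width);
        CGFloat y = arc4random_uniform((u_int32_t)self.bounds.size.height);

        CAShapeLayer *circle = [CAShapeLayer layer];
        circle.path = [UIBezierPath bezierPathWithOvalInRect:
                          CGRectMake(x - r, y - r, 2 * r, 2 * r)].CGPath;
        circle.fillColor = [UIColor whiteColor].CGColor;
        [self.layer addSublayer:circle];

        // Ease-dissolve: animate opacity to 0, then clear the layer from memory.
        [CATransaction begin];
        [CATransaction setCompletionBlock:^{ [circle removeFromSuperlayer]; }];
        CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
        fade.fromValue = [NSNumber numberWithFloat:1.0f];
        fade.toValue = [NSNumber numberWithFloat:0.0f];
        fade.duration = 1.5;
        fade.timingFunction = [CAMediaTimingFunction
            functionWithName:kCAMediaTimingFunctionEaseOut];
        circle.opacity = 0.0f;                        // final (model) value
        [circle addAnimation:fade forKey:@"opacity"]; // overrides the implicit action
        [CATransaction commit];
    }

    @end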
I want to create a 360 degree turntable showing lots of pictures (12, 24 or 36), controlling the rotation with touch events (like that example, but coded natively for an iOS app).
The simplest idea is to load the specific UIImage that corresponds to the touch position.
Any ideas what the best practice for that is? Is there a chance Core Animation could make such an image turntable faster? Any other hints on that? Are there any other known projects where I can get some help on that?
Thanks for your time and hints in the right direction.
Here's another example, an iPad app for the Audi A8.
From the first example it becomes obvious that the objects have actually been photographed for each angle of rotation. This is the really tricky part. You will need a tripod and a camera with remote control, and if possible also a rotational platter to keep angles consistent.
Implementation is relatively straightforward. As you guessed, you just track the touch positions and, depending on delta to the last touch position, show the appropriate image.
Well, you could just reuse the HTML/CSS/JS from that same example: load it in a UIWebView in your app, with your site embedded as a resource.
Subclass UIImageView, load an array of your frames, handle touch movement along one screen axis, and change the active image accordingly. Don't forget to loop your images. :)
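A minimal sketch of that approach (the frame count, image names like "car_00.png", and the 10-point step size are assumptions; this tracks horizontal movement, so swap in the y coordinate if you prefer a vertical drag):

    #import <UIKit/UIKit.h>

    // Maps finger movement to one of N photographed rotation angles.
    @interface TurntableView : UIImageView {
        NSArray *_frames;       // the 12/24/36 rotation photos
        NSInteger _frameIndex;
        CGFloat _lastX;
    }
    @end

    @implementation TurntableView

    - (id)initWithFrame:(CGRect)frame {
        if ((self = [super initWithFrame:frame])) {
            self.userInteractionEnabled = YES;  // UIImageView disables this by default
            NSMutableArray *frames = [NSMutableArray array];
            for (NSUInteger i = 0; i < 24; i++) {    // assuming 24 angles
                NSString *name = [NSString stringWithFormat:@"car_%02lu.png",
                                  (unsigned long)i];
                [frames addObject:[UIImage imageNamed:name]];
            }
            _frames = frames;
            self.image = [_frames objectAtIndex:0];
        }
        return self;
    }

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        _lastX = [[touches anyObject] locationInView:self].x;
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGFloat x = [[touches anyObject] locationInView:self].x;
        // One frame step per ~10 points of movement; tune for rotation speed.
        NSInteger steps = (NSInteger)((x - _lastX) / 10.0);
        if (steps != 0) {
            _frameIndex = (_frameIndex + steps) % (NSInteger)[_frames count];
            if (_frameIndex < 0) _frameIndex += [_frames count];  // loop around
            self.image = [_frames objectAtIndex:_frameIndex];
            _lastX = x;
        }
    }

    @end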
What is the easiest and most practical way to implement smooth frame-based animation on the iPhone? I know that image sequences are easy to do, but if I have, say, 30 images flashing per second (30 fps) for 5 seconds, will it freeze up? Are there massive memory implications with this method?
I'm designing a game that will be mostly static, but there will be animations for some of the user actions. I'm hoping that OpenGL (etc.) won't be necessary.
I'd recommend looking at the Cocos2d-iPhone open source game engine.
Using an AtlasSprite, it should be relatively straightforward to achieve the result you want.
This question needs to be more clear. Are you trying to display a sprite based animation? Or animate an image along a path? Or both?
If you have a series of images to animate, UIImageView's animationImages property is an easy way to go about it, although it is probably not efficient enough for a full-scale game.
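For example, the 30 fps, 5-second sequence the question describes might look like this (image names and the view's frame are placeholders); note that every frame is held in memory at once, which is the main memory implication:

    // Load 30 frames and play them at 30 fps, looping for 5 seconds total.
    NSMutableArray *frames = [NSMutableArray array];
    for (NSUInteger i = 1; i <= 30; i++) {
        NSString *name = [NSString stringWithFormat:@"frame_%02lu.png",
                          (unsigned long)i];
        [frames addObject:[UIImage imageNamed:name]];
    }

    UIImageView *imageView = [[UIImageView alloc]
        initWithFrame:CGRectMake(0, 0, 128, 128)];
    imageView.animationImages = frames;    // all frames decoded and retained up front
    imageView.animationDuration = 1.0;     // 30 frames over 1 s = 30 fps
    imageView.animationRepeatCount = 5;    // 5 loops = 5 seconds
    [imageView startAnimating];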
In my (puzzle) game the pieces are drawn on-screen using a CALayer for each piece. There are 48 pieces (in an 8x6 grid), each piece being 48x48 pixels. I'm not sure if this is just too many layers, but if this isn't the best solution I don't know what is, because redrawing the whole display using Quartz2D every frame doesn't seem like it would be any faster.
Anyway, the images for the pieces come from one big PNG file that has 24 frames of animation for 10 different states (so measures 1152 x 480 pixels) and the animation is done by setting the contentsRect property of each CALayer as I move it.
This actually seems to work pretty well with up to 7 pieces tracking a touch point in the window, but the weird thing is that when I initially start moving the pieces, for the first 0.5 seconds or so it's very jerky, as if the CPU is doing something else, but after that it tracks and updates the screen at 40+ FPS (according to Instruments).
So does anyone have any ideas what could account for that initial jerkiness?
The only theory I could come up with is it's decompressing bits of the PNG file into a temporary location and then discarding them after the animation has stopped, in which case is there a way to stop Core Animation doing that?
I could obviously split the PNG file up into 10 pieces, but I'm not convinced that would help as they'd all (potentially) still need to be in memory at once.
EDIT: OK, as described in the comment to the first answer, I've split the image up into ten pieces that are now 576 x 96, so as to fit in with the constraints of the hardware. It's still not as smooth as it should be though, so I've put a bounty on this.
EDIT2: I've linked one of the images below. Essentially the user's touch is tracked and the offset from the start of the tracking is calculated (pieces can only move horizontally or vertically, and only one place at a time). Then one of the images is selected as the content of the layer (depending on what type of piece it is and whether it's moving horizontally or vertically), and the contentsRect property is set to choose one 48x48 frame from the larger image with something like this:
    layer.position = newPos;                      // follow the touch
    layer.contents = (id)BallImg[imgNum];         // pick the source sprite sheet
    layer.contentsRect = CGRectMake((1.0/12.0) * (float)(frame % 12),  // column (0-11)
                                    0.5 * (float)(frame / 12),         // row (0-1)
                                    1.0/12.0, 0.5);  // one 48x48 cell, in unit coordinates
btw. My theory about it decompressing the source image afresh each time wasn't right. I wrote some code to copy the raw pixels from the decoded PNG file into a fresh CGImage when the app loads, and it didn't make any difference.
Next thing I'll try is copying each frame into a separate CGImage which will get rid of the ugly contentsRect calculation at least.
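A sketch of that idea, assuming the 576x96 sheets described above (the file name is a placeholder, and this is MRC-era code to match the question; under ARC you'd need a __bridge cast): slice each sheet into 24 separate 48x48 CGImages up front, so layer.contents can simply be swapped per frame.

    // Slice a 576x96 sprite sheet (12 columns x 2 rows of 48x48 frames)
    // into individual CGImages.
    NSMutableArray *frames = [NSMutableArray arrayWithCapacity:24];
    CGImageRef sheet = [UIImage imageNamed:@"pieces.png"].CGImage;
    for (NSUInteger i = 0; i < 24; i++) {
        CGRect cell = CGRectMake(48.0 * (i % 12), 48.0 * (i / 12), 48.0, 48.0);
        CGImageRef slice = CGImageCreateWithImageInRect(sheet, cell);
        [frames addObject:(id)slice];   // the array retains the CGImageRef
        CGImageRelease(slice);
    }
    // Later, when a piece moves, no contentsRect math is needed:
    // layer.contents = [frames objectAtIndex:frame];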
EDIT3: Further back-to-basics investigation points to this being a problem with touch tracking, and not a problem with Core Animation at all. I found a basic sample app that tracks touches, commented out the code that actually causes the screen to redraw, and the NSLog() output shows exactly the same problem I've been experiencing: a long-ish delay between the touchesBegan event and the first touchesMoved event.
2009-06-05 01:22:37.209 TouchDemo[234:207] Begin Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.432 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.448 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.464 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.480 TouchDemo[234:207] Touch ID 0 Tracking with image 2
Typical gap between touchesMoved events is 20 ms. The gap between touchesBegan and the first touchesMoved is ten times that. And that's with no computation or screen updating at all, just the NSLog call. Sigh. I guess I'll open this up as a separate question.
I don't think it's a memory issue; I'm thinking that it has to do with the inefficiency of having that large of an image in terms of Core Animation.
Core Animation can't use it natively, as it exceeds the maximum texture size on the GPU (1024x1024). I would break it up some; individual images might give you the best performance, but you'll have to test to find out.
IIRC, UIImageView does its animating by setting successive individual images, so if it's good enough for Apple…
When it is about performance, I definitely recommend using Shark (even over Instruments). In the Time Profile you can see what the bottlenecks in your app are, even if it is something in Apple's code. I used it a lot while developing my iPhone app that uses OpenGL and Core Animation.
Have you tried using CATiledLayer yet?
It is optimized for this type of work.
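A minimal sketch of the CATiledLayer setup (the sizes and the hosting view controller are assumptions, and whether it suits the moving-pieces case is worth profiling): the layer calls its delegate to draw each tile on demand as tiles become visible.

    // In a view controller: host a CATiledLayer sized to the 8x6 board.
    CATiledLayer *tiledLayer = [CATiledLayer layer];
    tiledLayer.frame = CGRectMake(0, 0, 384, 288);   // 8 x 6 pieces of 48x48
    tiledLayer.tileSize = CGSizeMake(48, 48);        // one tile per piece
    tiledLayer.delegate = self;                      // implements drawLayer:inContext:
    [self.view.layer addSublayer:tiledLayer];
    [tiledLayer setNeedsDisplay];

    // Delegate callback: invoked once per tile, possibly on a background thread.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
        CGRect tileRect = CGContextGetClipBoundingBox(ctx);
        // Draw only the content that intersects tileRect here.
    }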