iPhone short animation: video or image sequence?

I have read several posts on both topics, but I haven't seen anyone compare them so far.
Suppose I just want a full-screen animation without any transparency, etc., just a couple of seconds (1-2 s) when the app starts. Does anyone know how "video" compares to a "sequence of images" (320x480 @ 30 fps) on the iPhone, regarding performance etc.?

I think there are a few points to think about here.
Size of the animation, as pointed out above. You could try a frame rate of 15 images per second, which would be 45 images for 3 seconds. That is quite a lot of data.
The video would be compressed, as mentioned before, in H.264 (Baseline Profile Level 3.0) format or MPEG-4 Part 2 video (Simple Profile) format, which means it's going to be reasonably small.
I think you will need to go for video because:
1. 45 full-screen PNG images are going to require a lot of RAM. I don't think this is going to work that well.
2. Lastly, you will need to add the Media Player framework, which will have to be loaded into memory, and this is going to increase your load times.
MY ADVICE: It sounds like the animation is a bit superfluous to the app. I hate apps that take ages to load, and this is only going to increase your app's startup time. If you can avoid doing this, then don't do it. Make your app fast. If you can do this at some other time after load, then that is fine.

The video will be a lot more compressed than a sequence of images, because video compression takes previous frame data into account to reduce bitrate. It will take more power to decode; however, the iPhone has hardware for that, and the OS has APIs that use this hardware, so I wouldn't feel bad about making use of them.
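
For reference, here is a minimal sketch of playing a short bundled clip with the Media Player framework mentioned above; the "intro.mp4" file name and the introPlayer property are placeholders for your own asset and bookkeeping:

    #import <MediaPlayer/MediaPlayer.h>

    // Plays a short bundled clip over the whole window at launch.
    // "intro.mp4" is a hypothetical asset name; introPlayer is a hypothetical
    // strong property that keeps the controller alive during playback.
    - (void)playIntroMovie
    {
        NSURL *url = [[NSBundle mainBundle] URLForResource:@"intro" withExtension:@"mp4"];
        MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:url];
        player.controlStyle = MPMovieControlStyleNone; // no transport controls for a splash clip
        player.view.frame = self.window.bounds;
        [self.window addSubview:player.view];
        self.introPlayer = player;

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(introFinished:)
                                                     name:MPMoviePlayerPlaybackDidFinishNotification
                                                   object:player];
        [player play];
    }

    - (void)introFinished:(NSNotification *)note
    {
        [self.introPlayer.view removeFromSuperview];
        [[NSNotificationCenter defaultCenter] removeObserver:self
                                                        name:MPMoviePlayerPlaybackDidFinishNotification
                                                      object:self.introPlayer];
        self.introPlayer = nil;
    }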

Do not overlook the possibility of rendering the sequence in real time.
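
If you go the real-time route, one way (just a sketch, with hypothetical displayLink, animationView and currentTime properties) is to drive a custom view's drawing from a CADisplayLink:

    #import <QuartzCore/QuartzCore.h>

    // Drives a custom view's drawing at roughly 30 fps.
    // animationView is a hypothetical UIView subclass whose -drawRect:
    // renders the frame corresponding to its currentTime property.
    - (void)startAnimation
    {
        self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        self.displayLink.frameInterval = 2; // every 2nd vsync on a 60 Hz display, about 30 fps
        [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link
    {
        self.animationView.currentTime += link.duration * link.frameInterval;
        [self.animationView setNeedsDisplay]; // triggers -drawRect: for the new time
    }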

Reduced quality OpenGL ES screenshots (iPhone)

I'm currently using this method from Apple to take screenshots of my OpenGL ES iPhone game. The screenshots look great. However taking a screenshot causes a small stutter in the game play (which otherwise runs smoothly at 60 fps). How can I modify the method from Apple to take lower quality screenshots (hence eliminating the stutter caused by taking the screenshot)?
Edit #1: the end goal is to create a video of the game play using AVAssetWriter. Perhaps there's a more efficient way to generate the CVPixelBuffers referenced in this SO post.
What is the purpose of the recording?
If you want to replay a sequence on the device, you can look into saving the object positions, etc., instead, and redrawing the sequence in 3D. This also makes it possible to replay sequences from other view positions.
If you want to show the gameplay on, e.g., YouTube or elsewhere, you can look into recording the gameplay with another device/camera, or recording some gameplay running in the simulator using screen-capture software such as ScreenFlow.
The Apple method uses glReadPixels(), which just pulls all the data across from the display buffer and probably triggers sync barriers, etc., between the GPU and the CPU. You can't make that part faster or lower resolution.
Are you doing this to create a one-off video, or do you want the user to be able to trigger this behavior in the production code? If the former, you could do all sorts of trickery to speed it up: render everything at a smaller size, don't present at all and just capture frames from a recording of the input data fed into the game, or other such tricks, or, going even further, run the whole simulation at half speed to get all the frames.
I'm less helpful if you need an actual in-game function for this. Perhaps someone else will be.
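
To illustrate the "render at a smaller size" idea from the previous answer, here is a rough OpenGL ES 2.0 sketch that draws into a quarter-resolution offscreen framebuffer and reads that back instead of the full display buffer (the sizes and buffer names are placeholders, and error checking is omitted):

    // Create a small offscreen FBO (160x240 instead of 320x480).
    GLuint fbo = 0, colorRenderbuffer = 0;
    const GLint w = 160, h = 240;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // Draw the scene into the small buffer.
    glViewport(0, 0, w, h);
    // ... issue the normal draw calls here ...

    // Read back far fewer bytes than a full-screen glReadPixels would.
    GLubyte *pixels = (GLubyte *)malloc(w * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // ... hand `pixels` to the CVPixelBuffer / AVAssetWriter code, then free(pixels) ...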
If all else fails, get one of these:
http://store.apple.com/us/product/MC748ZM/A
Then convert that composite video output to digital through some sort of external capture device.
I've done this before, when I converted VHS movies to DVD a long time ago.

iOS frame by frame animation, by script

There are a few SO questions regarding frame-by-frame animation (such as "frame by frame animation" and other similar questions); however, I feel mine is different, so here goes.
This is partially a design question from someone with very little iOS experience.
I'm not sure "frame by frame" is the correct description of what I want to do, so let me describe it. Basically, I have a "script" of an animated movie and I'd like to play this script.
This script is a JSON file which describes a set of scenes. In each scene there are a few elements, such as a background image, a list of actors with their positions, and a background sound clip. Further, for each actor and background there's an image file that represents it. (It's a bit more complex - each actor has a "behavior", such as how it blinks, how it talks, etc.) So my job is to follow the given script, referencing actors and backgrounds, and with every frame place the actors in their designated positions, draw the correct background and play the sound file.
The movie may be paused, scrubbed forward or backward similar to youtube's movie player functionality.
Most of the questions I've seen which refer to frame-by-frame animation have different requirements than I do (I'll list some more requirements later). They usually suggest to use animationImages property of a UIImageView. This is fine for animating a button or a checkbox but they all assume there's a short and predefined set of images that need to be played.
If I were to go with animationImages I'd have to pre-create all the images up front and my pure guess is that it won't scale (think about 30fps for one minute, you get 60*30=1800 images. Plus the scrub and pause/play abilities seem challenging in this case).
So I'm looking for the right way to do this. My instinct, and I'm learning more as I go, is that there are probably three or four main ways to achieve this.
1. By using Core Animation and defining "keypoints" and animated transitions between those keypoints. For example, if an actor needs to be at point A at time t1 and point B at time t2, then all I need to do is animate what's in between. I've done something similar in ActionScript in the past; it was nice, but it was particularly challenging to implement the scrub action and keep everything in sync, so I'm not a big fan of the approach. Imagine that you have to implement a pause in the middle of an animation, or scrub to the middle of an animation. It's doable but not pleasant.
2. Set a timer for, say, 30 times a second, and on every tick consult the model (the model is the script JSON file along with the description of the actors and the backgrounds) and draw what needs to be drawn at that time, using Quartz 2D's API and drawRect. This is probably the simplest approach, but I don't have enough experience to tell how well it's going to work on different devices, probably CPU-wise; it all depends on the amount of calculation I need to make on each tick and the amount of effort it takes iOS to draw everything. I don't have a hunch.
3. Similar to 2, but using OpenGL to draw. I prefer 2 because the API is easier, but perhaps resource-wise OpenGL is more suitable.
4. Use a game framework such as cocos2d, which I've never used before but which seems to solve more or less similar problems. They seem to have a nice API, so I'd be happy if I could find all my requirements answered by it.
On top of the requirements I've just described (play a movie given its "script" file and a description of the actors, backgrounds and sounds), there's another set of requirements:
The movie needs to be played in full-screen mode or partial-screen mode (where the rest of the screen is dedicated to other controls).
I'm starting with the iPhone, but naturally an iPad version should follow.
I'd like to be able to create a thumbnail of this movie for local phone use (display it in a gallery in my application). The thumbnail may just be the first frame of the movie.
I want to be able to "export" the result as a movie, something that could be easily uploaded to youtube or facebook.
So the big question here is whether any of the suggested 1-4 implementations I have in mind (or others you might suggest) can somehow export such a movie.
If all four fail on the movie-export task, then I have an alternative in mind: use a server which runs ffmpeg and which accepts a bundle of all the movie images (I'd have to draw them on the phone and upload them to the server in sequence), and the server would then compile all the images with their soundtrack into a single movie.
Obviously, to keep things simple, I'd prefer to do this server-less, i.e. be able to export the movie from the iPhone, but if that's too much to ask, then the last requirement would be at least being able to export the set of all images (keyframes in the movie) so I can bundle them and upload them to a server.
The length of the movie is supposed to be one or two minutes. I hope the question wasn't too long and that it's clear...
Thanks!
Well-written question. For your video export needs, check out AVFoundation (available as of iOS 4). If I were going to implement this, I'd try #1 or #4. I think #1 might be the quickest to just try out, but that's probably because I don't have any experience with cocos2d. I think you will be able to pause and scrub Core Animation: check out the CAMediaTiming protocol that CALayer adopts.
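
For the pause/scrub part, the usual CAMediaTiming trick looks roughly like this (a sketch along the lines of Apple's pause/resume recipe, not specific to your script format):

    #import <QuartzCore/QuartzCore.h>

    // Pause all animations currently running on `layer`.
    - (void)pauseLayer:(CALayer *)layer
    {
        CFTimeInterval pausedTime = [layer convertTime:CACurrentMediaTime() fromLayer:nil];
        layer.speed = 0.0;
        layer.timeOffset = pausedTime;
    }

    // Resume animations previously paused with -pauseLayer:.
    - (void)resumeLayer:(CALayer *)layer
    {
        CFTimeInterval pausedTime = layer.timeOffset;
        layer.speed = 1.0;
        layer.timeOffset = 0.0;
        layer.beginTime = 0.0;
        layer.beginTime = [layer convertTime:CACurrentMediaTime() fromLayer:nil] - pausedTime;
    }

    // Scrubbing: while speed is 0, setting timeOffset jumps straight to that
    // point in the layer's animation timeline.
    - (void)scrubLayer:(CALayer *)layer toTime:(CFTimeInterval)time
    {
        layer.speed = 0.0;
        layer.timeOffset = time;
    }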
Ran, you do have a number of options. You are not going to find a "complete solution", but it will be possible to make use of existing libraries in order to skip a bunch of implementation and performance issues. You can of course try to build this whole thing in OpenGL, but my advice is that you go with another approach. What I suggest is that you render the entire "video" frame by frame on the device based on your JSON settings. That basically comes down to setting up your scene elements and then determining the positions of each element for times [0, 1, 2], where each number indicates a frame at some frame rate (15, 20, or 24 FPS would be more than enough).
First off, please have a look at my library for non-trivial iOS animations; in it you will find a class named AVOfflineComposition that does the "comp items and save to a file on disk" step. Obviously, this class does not do everything you need, but it is a good starting point for the basic logic of creating a comp of N elements and writing the results out to a video file. The point of creating a comp is that all of your code that reads settings and places objects at a specific spot in the comp can be run in an offline mode, and the result you get at the end is a video file. Compare this to all the details involved with maintaining all these elements in memory and then going forward more quickly or slowly depending on how quickly everything is running.
The next step will be to create one audio file that is the length of the "movie" of all the comped frames, and have it include any sounds at specific times. This basically means mixing the audio at runtime and saving the results to an output file, so that the results are easy to play with AVAudioPlayer. You can have a look at some very simple PCM mixer code that I wrote for this type of thing. But you might want to consider a more complete audio engine like theamazingaudioengine.
Once you have an audio file and a movie file, these can be played together and kept in sync quite easily using the AVAnimatorMedia class. Take a look at this AVSync example for source code that shows tightly synced playback of the audio and the movie.
Your last requirement can be implemented with the AVAssetWriterConvertFromMaxvid class; it implements logic that will read a .mvid movie file and write it as an H.264-encoded video using the H.264 encoder hardware on the iPhone or iPad. With this code, you will not need to write an ffmpeg-based server module. That would not work well anyway, because it would take too long to upload all the uncompressed video to your server; you need to compress the video to H.264 before it can be uploaded or emailed from the app.
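
If you do end up rendering the frames yourself, the on-device export can also be done directly with AVAssetWriter; here is a rough sketch (error handling omitted; the 320x480 size and the renderFrame:into: drawing callback are placeholders for your own comp code):

    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>
    #import <CoreVideo/CoreVideo.h>

    // Writes frameCount pre-rendered frames to an H.264 QuickTime movie.
    // Run this off the main thread; renderFrame:into: is a hypothetical
    // method that draws frame i into the supplied CVPixelBuffer.
    - (void)exportMovieToURL:(NSURL *)outputURL frameCount:(NSInteger)frameCount fps:(int32_t)fps
    {
        AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                         fileType:AVFileTypeQuickTimeMovie
                                                            error:NULL];
        NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                         AVVideoWidthKey  : @320,
                                         AVVideoHeightKey : @480 };
        AVAssetWriterInput *input =
            [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                               outputSettings:videoSettings];
        NSDictionary *bufferAttrs = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
                                       (id)kCVPixelBufferWidthKey  : @320,
                                       (id)kCVPixelBufferHeightKey : @480 };
        AVAssetWriterInputPixelBufferAdaptor *adaptor =
            [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                             sourcePixelBufferAttributes:bufferAttrs];
        [writer addInput:input];
        [writer startWriting];
        [writer startSessionAtSourceTime:kCMTimeZero];

        for (NSInteger i = 0; i < frameCount; i++) {
            CVPixelBufferRef buffer = NULL;
            CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &buffer);
            [self renderFrame:i into:buffer]; // hypothetical drawing step

            while (!input.readyForMoreMediaData) {
                [NSThread sleepForTimeInterval:0.01]; // wait until the writer can accept more data
            }
            [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, fps)];
            CVPixelBufferRelease(buffer);
        }

        [input markAsFinished];
        [writer finishWriting]; // synchronous; newer SDKs prefer finishWritingWithCompletionHandler:
    }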

Updating saved images for Retina Display

I have an iPhone app that, among other things, allows users to store photos. When a new photo is added to the app's data store, I cache a thumbnail version of the image so that the photo thumbnail grids load in a reasonable amount of time.
The problem is that these thumbnails look great on a pre-Retina Display screen, but they look a little blurry on RD displays. It's not so bad that the images are unusable, but I would really like to be able to get the full benefit of Retina Display for images users saved with older versions of my app.
The problem is that re-creating all these thumbnails takes way too long. In my tests, it took about a minute and a half to re-encode a sample database to high-res thumbnails (admittedly a large one) on my iPhone 4. It will be even worse on older hardware.
How can I get around this? Doing a one-time migration seems out of the question, given the performance results above. Another option is shrinking the thumbnails lazily (i.e. as they're displayed on-screen) and then saving them to the database at that point; screens full of old images will be sluggish the first time they're viewed, and then snappier after that.
Are there other approaches to consider? Anyone else faced this problem?
I don't like the idea of trying to convert the images.
Users will quickly get impatient and say your app is buggy and takes ages to load.
I think you can solve the situation without any re-processing of the full-sized images.
On older hardware you would not have a Retina display (so there's no need to upsize the images), and if they have a Retina display then they have a fast iPhone or iPod.
I would suggest you solve the problem graphically, in how you display the thumbnail images: instead of filling the whole thumbnail slot, put a border around the image and show it at its true resolution (don't upscale it), or show 4 images where you normally show 1 (since the Retina screen has 4x the pixels).
Instead of resampling the original massive image, you could do a bicubic upsample of the thumbnail, making it 4x the size. This will make it slightly blurry, but it should look better than the iPhone's own scaling, which will look really bad. The upsample would be ultra fast, as it's working with a small image.
I cannot help you out on the upsampling itself, but there will be some code out there somewhere.
Cheers, John.
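
A minimal Core Graphics sketch of the upsample idea above; it only touches the small thumbnail, never the full-size original (Core Graphics doesn't document its exact interpolation algorithm, so treat "bicubic" as an approximation):

    #import <UIKit/UIKit.h>

    // Returns a 2x-scaled copy of a thumbnail using high-quality interpolation.
    // Cheap, because it only reads and writes small images.
    UIImage *UpscaledThumbnail(UIImage *thumb)
    {
        CGSize size = CGSizeMake(thumb.size.width * 2.0, thumb.size.height * 2.0);
        UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
        CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
        [thumb drawInRect:CGRectMake(0.0, 0.0, size.width, size.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }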
Screens full of old images will be sluggish the first time they're viewed, and then snappier after that.
It doesn't have to be sluggish.
It's a bit of a pain, but you can do most of your processing in a background thread. Set the thread priority to something low (like 0.1) to avoid making the UI too slow. The easiest way to do this is to set up an NSOperation for each image you need to convert and add them to an NSOperationQueue with maxConcurrentOperationCount=1.
If writes are not atomic, then in -applicationDidEnterBackground: or -applicationWillTerminate: (or in something listening for the corresponding notifications), do something like [queue cancelAllOperations]; for (NSOperation *operation in [queue operations]) { [operation setThreadPriority:1]; } [queue waitUntilAllOperationsAreFinished];. You get about 10 seconds or so, which should be enough for the image conversion to finish writing to disk (and thus avoid half-written files). For added protection, check [operation isCancelled] immediately before the write if it might take longer than 10 seconds. Obviously, in -applicationWillEnterForeground:, you should restart the conversion (remembering that some of the images have already been converted).
Concurrency issues are fun to track down...
(Note that [data writeToFile:path atomically:YES] isn't sufficient — it's likely to leave temporary files lying around if the app is killed during the write. I'd recommend storing thumbnails in Core Data if you can, but that might be out of the question for existing apps.)
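
For reference, a rough sketch of the queue setup described above; Photo, photosNeedingNewThumbnails, regenerateRetinaThumbnail and saveThumbnail: are hypothetical stand-ins for your own model and resizing code:

    // Serial, low-priority queue that regenerates thumbnails one at a time
    // without stalling the main thread.
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    queue.maxConcurrentOperationCount = 1;

    for (Photo *photo in photosNeedingNewThumbnails) {          // hypothetical model objects
        NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
            UIImage *thumb = [photo regenerateRetinaThumbnail]; // hypothetical: resize from the original
            [photo saveThumbnail:thumb];                        // hypothetical: ideally an atomic write
        }];
        op.threadPriority = 0.1; // keep the UI responsive
        [queue addOperation:op];
    }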

iPhone. I need to touch every fullscreen pixel at 30fps. Doable?

I am interested in doing some image-hacking apps. To get a better sense of expected performance can someone give me some idea of the overhead of touching each pixel at fullscreen resolution?
Typical use case: the user pulls a photo out of the Photo Album, selects a visual effect and - unlike a Photoshop filter - gestural manipulation of the device drives the effect in realtime.
I'm just looking for ballpark performace numbers here. Obviously, the more compute intensive my effect the more lag I can expect.
Cheers,
Doug
You will need to know OpenGL well to do this. The iPhone OpenGL ES hardware has a distinct advantage over many desktop systems in that there is only one place for memory - so textures don't really need to be 'uploaded to the card'. There are ways to access the memory of a texture pretty well directly.
The 3GS has a much faster OpenGL stack than the 3G; you will need to try it on the 3GS or the equivalent iPod touch.
Also compile and run the GLImageProcessing example code.
One thing that will make a big difference is whether you do this at device resolution or at the resolution of the photo itself. Typically, photos transferred from iTunes are scaled to 640x480 (twice the number of pixels of the 320x480 screen). Pictures from the camera roll will be larger than that - up to 3 Mpix for 3GS photos.
I've only played around with this a little bit, but doing it the obvious way - i.e. a CGImage backed by an array in your code - you could see in the range of 5-10 FPS. If you want something more responsive than that, you'll have to come up with a more-creative solution. Maybe map the image as textures on a grid of points, and render with OpenGL?
Look up FaceGoo in the App Store. That's an example of an app that uses a straightforward OpenGL rendering loop to do something similar to what you're talking about.
Not doable, not with the current APIs and a generic image filter. Currently you can only access the screen through OpenGL or higher abstractions and OpenGL is not much suited to framebuffer operations. (Certainly not the OpenGL ES implementation on iPhone.) If you change the image every frame you have to upload new textures, which is too expensive. In my opinion the only solution is to do the effects on the GPU, using OpenGL operations on the texture.
My answer is to just wait a little until they get rid of the OpenGL ES 1.x-only devices and finally bring Core Image over to the iPhone SDK.
With fragment shaders this is very doable on the newer devices.
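
To give a flavour of the fragment-shader route, here is a tiny OpenGL ES 2.0 fragment shader that touches every pixel of a texture each frame, with a uniform you could drive from a gesture (the shader-loading boilerplate and the vTexCoord varying from the vertex shader are assumed):

    // Per-pixel brightness effect; update uAmount from a pan/pinch gesture each frame.
    static NSString *const kFragmentShader = @""
        "precision mediump float;                               \n"
        "varying vec2 vTexCoord;                                \n"
        "uniform sampler2D uTexture;                            \n"
        "uniform float uAmount;                                 \n"
        "void main()                                            \n"
        "{                                                      \n"
        "    vec4 color = texture2D(uTexture, vTexCoord);       \n"
        "    gl_FragColor = vec4(color.rgb * uAmount, color.a); \n"
        "}                                                      \n";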
I'm beginning to think the only way to pull this off is to write a suite of vertex/fragment shaders and do it all in OpenGL ES 2.0. I'd prefer not to incur the restriction of limiting the app to the iPhone 3GS and later, but I think that's the only viable way to go here.
I was really hoping there was some CoreGraphics approach that would work but that does not appear to be the case.
Thanks,
Doug

What shows better performance? Playing a movie clip, or animating an image sequence with UIImageView?

Specs: about 320 x 270 px, 5 seconds. I don't know exactly how many images are needed for a fluid animation, but let's assume 30.
What would be the best way to play this back? As a movie file in some kind of QuickTime view (if available), or as an animated image sequence with UIImageView? I'm not sure, but I believe loading 30 images per second is nearly impossible on the iPod touch. Any ideas?
In general, a movie will have applied some compression and possibly even used a lossy compression. This means the processor would need to work harder but it has a lot less memory to read. The CPU is a fast resource. Compared to the CPU, memory is slow. Thus a compressed movie would (logically) have the better performance.
In practice, it could depend on a lot of factors, although movies do tend to be better optimized for animations. With a slow CPU and extremely fast memory, a multi-image animation might just be faster. Also, it depends on how you store those many images. But in 99% of all situations, movies will have better performance.
Well, I guess it depends... If it's supposed to be static, then a movie is the most appropriate way: it's hardware accelerated, and it's easy to write the code for using it. If you plan to modify the animation and reuse it, you could load and modify images, or load a bunch of them in succession, but I imagine that's quite a coding overkill for the task.
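
For comparison, the UIImageView route looks roughly like this; note that every frame is decoded and held in memory for the lifetime of the animation, which is where the memory cost comes from (the frame_NN.png names are placeholders):

    #import <UIKit/UIKit.h>

    // Plays 30 bundled frames over 5 seconds in a UIImageView.
    NSMutableArray *frames = [NSMutableArray array];
    for (int i = 1; i <= 30; i++) {
        NSString *name = [NSString stringWithFormat:@"frame_%02d.png", i]; // hypothetical file names
        [frames addObject:[UIImage imageNamed:name]];
    }

    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 270)];
    imageView.animationImages = frames;
    imageView.animationDuration = 5.0;  // seconds for one full pass
    imageView.animationRepeatCount = 1; // play once
    [imageView startAnimating];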
Why would you not want to use a video file? If you don't have very specific reasons, I would recommend just using the standard video playback functions provided by Apple.
This gives you several advantages:
Even if your proposed method ran fluidly, it certainly wouldn't be as well optimised for the graphics chip and would therefore use up more battery.
It's very easy to implement, whereas your idea would be rather complicated, and you'd spend a lot of time on a probably less important part of your app.
It won't introduce any new bugs and is most likely a lot better tested than your custom solution.
Check out this to get started:
http://developer.apple.com/iphone/library/navigation/index.html?section=Topics&topic=Audio%20%26%20Video