When running in the Simulator from Xcode I can see that the frame rate is about 60 fps.
I have also read that the human eye can only distinguish roughly 10-12 separate frames per second.
My question is: suppose I want to replace an image on the iPhone as fast as I can, for example cycling through the number images again and again. What is the maximum speed that can be achieved without any lag (sure, my eye can't see it, but anyway)? And what would happen if I set it to run faster than that maximum? Will it just jump over some numbers?
What does it mean when I see 60 fps? Why not 30? Can we see the difference?
Are there any docs or specs that describe the exact screen specifications?
Thanks.
As for whether fps is noticeable, that depends on the person and the circumstances. The human visual system is not easily defined! For example, the bigger the screen, the greater the need for a high frame rate - so one could argue that 30 fps may be enough for a phone, but if you hold the display close to your eyes and your eyes are tracking a moving object, it is certainly possible to see the difference between, for example, 30 and 60 fps.
In fact, there is no fixed number of fps a person can see. If you imagine a giant screen and your eyeballs tracking a small baseball traveling from one end of that screen to the other, the image of the baseball "projected" onto your retina will jitter back and forth with a frequency that depends on the frame rate, a phenomenon called strobing.
Even if the frequency of this jitter is higher than the so-called "critical flicker frequency", above which your eyes can no longer perceive the flicker, you will still see a blurring of the ball on your retina that depends on the frame rate, and in this situation you may easily be able to distinguish between 30 fps and 60 fps and even higher. The higher the fps, the sharper the ball will appear. Ask any hardcore PC gamer whether 30 fps is good enough!
The iPhone 4S, iPhone 5, and newer devices can comfortably update the screen at 60 frames per second; the iPhone 4 and older devices often can't sustain that and effectively run closer to 30 fps. If your code tries to update frames faster than the device can refresh the screen, it will simply skip frames.
As for whether fps is noticeable, that depends on the person. I cannot stand a framerate lower than 30 and I can clearly tell the difference between 30 and 60. Some people can't tell the difference. Some people prefer 30 to 60. It really depends on who you ask. But in general, the higher the framerate, the smoother the animations will be.
To answer your first question (and possibly your second, if I understand what you are asking), here is a little background:
FPS describes how many frames your GPU can output each second. Refresh rate is typically measured in Hz and represents how often the screen can refresh. If your FPS is higher than your refresh rate, the screen simply won't display all the frames you produce. This can lead to tearing, which can be resolved by using vsync. Vsync synchronizes your FPS with the refresh rate to resolve the issue.
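On iOS the practical way to get vsync-style behaviour is CADisplayLink, which calls you back in step with the screen refresh. A minimal sketch, assuming a preloaded numberImages array and an imageView/currentIndex property (all hypothetical):

- (void)startCycling
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tick:)];
    // One callback per screen refresh (60 Hz panel -> 60 fps); 2 would halve it to 30 fps.
    // You cannot be called back faster than the display refreshes.
    link.frameInterval = 1;
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)tick:(CADisplayLink *)link
{
    // Swap to the next number image once per refresh.
    self.imageView.image = [self.numberImages objectAtIndex:self.currentIndex];
    self.currentIndex = (self.currentIndex + 1) % [self.numberImages count];
}

If you drive the swap from your own timer that fires faster than the refresh rate, the extra images are simply never composited, which is the frame skipping mentioned above.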
Does this help?
Related
I need to rotate a full-size photo (about 8 MB) by an arbitrary angle, as fast as possible, on an iPhone (4s and up). The code to do so with Core Image is easy enough, but not fast: it takes about 1.5 seconds on a 4s. Please note that the purpose of this rotation is further image processing in memory, NOT display on the screen.
Is there any hope of getting this down to sub-second, perhaps using the DSP (via the Accelerate framework) or OpenGL (keeping in mind that we have to copy the bits into and out of whatever buffer we are using)? If this is hopeless, then we have other (but more complicated) ways to tackle the job. I have not written OpenGL code before and want some assurance that this will actually work before I spend significant time on it!
Thank you,
Ken
Since you have it running at 1.5 s with no hardware acceleration, I'd say it's safe to assume you can get it under a second with OpenGL.
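If you want to try the Accelerate route before committing to OpenGL, vImage has an arbitrary-angle rotate for 8-bit ARGB data. A rough sketch, assuming you have already decoded the photo into an ARGB8888 buffer and allocated a same-size destination (corners that rotate outside the frame get the background colour):

#import <Accelerate/Accelerate.h>

// Hypothetical helper: rotate decoded ARGB8888 pixels by an arbitrary angle.
vImage_Error RotatePhoto(void *srcPixels, void *dstPixels,
                         vImagePixelCount width, vImagePixelCount height,
                         size_t rowBytes, float angleInRadians)
{
    vImage_Buffer src = { srcPixels, height, width, rowBytes };
    vImage_Buffer dst = { dstPixels, height, width, rowBytes };
    Pixel_8888 background = { 0, 0, 0, 0 };  // fill colour for uncovered corners
    // Add kvImageHighQualityResampling to the flags for better quality at some speed cost.
    return vImageRotate_ARGB8888(&src, &dst, NULL, angleInRadians,
                                 background, kvImageBackgroundColorFill);
}

Worth benchmarking against the Core Image version before writing any OpenGL; vImage runs on the CPU but is heavily vectorised.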
I have a need to measure a room (if possible) from within an iPhone application, and I'm looking for some ideas on how I can achieve this. Extreme accuracy is not important, but accuracy down to say 1 foot would be good. Some ideas I've had so far are:
Walk around the room and measure using GPS. Unlikely to be anywhere near accurate enough, particularly for iPod touch users (which have no GPS at all).
Emit sounds from the speaker and measure how long the echoes take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Anyone have any other ideas?
You could stand in one corner and throw the phone against the far corner. The phone could begin measurement at a certain point of acceleration and end measurement at deceleration.
1) Set the iPhone down on the floor at one wall, with its base against the wall.
2) Mark a line where the iPhone ends at the top.
3) Pick the iPhone up and move its base to the line you just drew.
4) Repeat steps 1->3 until you reach the other wall.
5) Multiply the number of lines it took to reach the other wall by the length of the iPhone to get the final measurement.
=)
I remember seeing programs for realtors that involved holding a reference object up in a picture. The program would identify the reference object and other flat surfaces in the image and calculate dimensions from that. It was intended for measuring the exterior of houses. It could follow connected walls that it could assume were at right angles.
Instead of shipping with a reference object, as those programs did, you might be able to use a few common household objects like a piece of printer paper. Let the user pick from a list of common objects what flat item they are holding up to the wall.
Detecting the edges of walls, and of the reference object, is some tricky pattern recognition, followed by some tricky math to convert the found edges to planes. Still better than throwing your phone at the far wall, though.
Emit sounds from the speaker and measure how long the echoes take to return. There are some apps out there that do this already, such as PocketMeter. I suspect this would not be user friendly, and more gimmicky than practical.
Au contraire, mon frère.
This is the most user friendly, not to mention accurate, way of measuring the dimensions of a room.
PocketMeter measures the distance to one wall with an accuracy of half an inch.
If you use the same formulas to measure distance, but have the person stand near a corner of the room (so that the distances to the walls, floor, and ceiling are all different), you should be able to calculate all three measurements (length, width, and height) with one sonar pulse.
Edited, because of the comment, to add:
In an ideal world, you would get 6 pulses, one from each of the surfaces. However, we don't live in an ideal world. Here are some things you'll have to take into account:
The sound pulse causes the iPhone to vibrate. The iPhone microphone picks up this vibration.
The type of floor (carpet, wood, tile) will affect the time that the sound travels to the floor and back to the device.
The sound reflects off of more than one surface (wall) and returns to the iPhone.
If I had to guess, because I've done something similar in the past, you're going to have to emit a multi-frequency tone, made up of a low frequency, a medium frequency, and a high frequency. You'll have to perform a fast Fourier Transform on the sound wave you receive to pick out the frequencies that you transmitted.
Now, I don't want to discourage you. The calculations can be done. However, it's going to take some work. After all, PocketMeter has been at it for a while, and they only measure the distance to one wall.
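To make the basic sonar arithmetic concrete, here is a rough sketch, assuming you have already recorded the microphone into a float buffer and know the sample index at which the chirp was emitted. The echo detection here is a deliberately naive peak search; a real implementation would correlate against the emitted chirp and untangle the overlapping reflections listed above.

#include <math.h>

static double EchoDistanceMeters(const float *samples, int sampleCount,
                                 int emitIndex, double sampleRate)
{
    const double speedOfSound = 343.0;                  // m/s at room temperature
    int start = emitIndex + (int)(0.005 * sampleRate);  // skip ~5 ms while the chirp dies down
    int peakIndex = start;
    for (int i = start; i < sampleCount; i++) {
        if (fabsf(samples[i]) > fabsf(samples[peakIndex]))
            peakIndex = i;
    }
    double roundTrip = (peakIndex - emitIndex) / sampleRate;  // seconds out and back
    return speedOfSound * roundTrip / 2.0;                    // halve it: one-way distance
}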
I think an easier way to do this would be to use the Pythagorean theorem. Most rooms are 8 or 10 feet tall, and if the user can guess the height accurately, you can use the camera to do some analysis and crunch the numbers. (You might need some clever way to detect the angle.)
How to do it
I expect 5 points off of your bottom line for this ;)
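To make the geometry in this answer concrete: if the ceiling height above the camera is known (or guessed) and the user tilts the phone until the top of the far wall sits at the centre of the camera view, the tilt angle (available from the accelerometer) gives the horizontal distance. A hypothetical sketch of just the math:

#include <math.h>

// The wall, the floor, and the line of sight form a right triangle,
// so distance = height / tan(tilt).
static double DistanceToWallMeters(double ceilingHeightAboveCameraMeters, double tiltRadians)
{
    return ceilingHeightAboveCameraMeters / tan(tiltRadians);
}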
Let me see if this helps. Take an object of known length, place it against the wall, and use the iPhone to take a picture of the wall along with the object. Now get the ratio of the wall width to the object width from the image. Since you know the width of the object, you can easily calculate the width of the wall. Repeat this for each wall and you will have the room measurements.
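The arithmetic for this ratio method is simple, since the reference object lies in the same plane as the wall (names hypothetical):

// Wall width = object width x (wall pixels / object pixels), valid because both lie in the same plane.
static double WallWidthMeters(double objectWidthMeters,
                              double objectWidthPixels,
                              double wallWidthPixels)
{
    return objectWidthMeters * (wallWidthPixels / objectWidthPixels);
}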
Your users could measure a known distance by pacing it off, and thereby calibrate the length of their pace. Then they could enter the distance of each wall in paces, and the phone would convert it to feet. This would probably be very convenient, and would probably be accurate to within 10%.
If they need more accurate readings, give them the option of entering a measurement from a tape measure.
This answer is somewhat similar to Jitendra's answer, but the method he suggests will only work where you can fit the whole wall in a single shot.
Get an object of known size and photograph it held against the far wall, with the iPhone held against the opposite wall (two people or some Blu-Tack needed). Then you can calculate the distance between the walls from the size of the object (in pixels) in the photo. You could supply a PDF so that a printed document becomes the object of known size, and put a 2D barcode on it so the iPhone can pick it up automatically.
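A rough sketch of the underlying pinhole-camera estimate, assuming the object is photographed flat-on; focalLengthPixels would have to be calibrated once per device, e.g. by photographing the same object at a known distance (all names hypothetical):

// Apparent size shrinks linearly with distance:
// distance = realWidth * focalLengthPixels / pixelWidth.
static double DistanceToWallMetersFromObject(double objectWidthMeters,
                                             double objectWidthPixels,
                                             double focalLengthPixels)
{
    return objectWidthMeters * (focalLengthPixels / objectWidthPixels);
}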
When the user wants to measure something, they take a picture of it, but you actually capture two separate images, one with the flash on and one with the flash off. Then you can analyze the lighting differences and the flash reflection between the two images to estimate the scale of the scene. I guess this will only work for close, not-too-shiny objects.
But that's about the only other way I can think of to deduce scale from an image without any fixed reference objects.
When I drag a finger across the screen, there seems to be a limit to how quickly touchesMoved gets called. It seems that the limit is around 50/second. What controls this limit, and is it possible to change this?
I'm guessing this is controlled at a very low level and I'm stuck with it, but perhaps not. I'd love to have higher time resolution for these touch events.
Yes, I think the iPhone samples touches roughly 60 times per second. When you spend time in touchesMoved, that number decreases relative to how much time you use for processing the touch.
It takes computational resources to handle touchesMoved, especially if you do a lot in your touch handler, so you are going to hit an upper bound at some point, assuming it is not capped.
My guess is that there is no cap, and you are just hitting the upper bound of what the iPhone can handle while it is also trying to draw your application.
Just a guess though.
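An easy way to see the sampling interval for yourself is to log the gap between consecutive touchesMoved events using the event timestamps (a quick-and-dirty sketch; the static only works for a single tracked view):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    static NSTimeInterval lastTimestamp = 0;
    if (lastTimestamp > 0) {
        // event.timestamp is in seconds since boot; the gap is the touch sampling interval.
        NSLog(@"gap since previous touchesMoved: %.1f ms",
              (event.timestamp - lastTimestamp) * 1000.0);
    }
    lastTimestamp = event.timestamp;
}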
While searching for an answer to this question, I ran into a rather uncomfortable finding.
It seems that rendering nothing but a clear (glClearColor() plus glClear()) at 60 fps pushes the iPhone to 27% renderer utilization.
That means that doing hardly anything at all - only refreshing the screen - makes the iPhone use more than a quarter of its render capacity.
Is this to be expected?
The PowerVR should hit at least around 270 megapixels/second, according to the documentation. As unwind correctly stated below, 480×320 at 60 fps equals about 9.2 megapixels/second, putting the total performance at around 40 megapixels/second, which is suspicious.
This just means that you should design your rendering to fill all pixels every frame with actual content, so you don't need to clear the framebuffer at all. That is, at least, the classic "solution" to the bottleneck of clearing: don't do it.
In typical first-person engines, for instance, this is achieved by rendering a skybox and a ground "plane", that always cover the entire viewport.
I haven't read up on the details of the iPhone's rendering subsystem, but it does seem to indicate a very low fill rate. 480×320 at 60 fps equals about 9.2 megapixels/second, putting the total performance at around 40 megapixels/second. Sounds suspicious.
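A sketch of what "don't clear" looks like in an OpenGL ES draw loop, assuming the skybox and ground plane genuinely cover every pixel (the draw methods here are hypothetical; measure before and after, since on tile-based GPUs a clear can sometimes be effectively free):

- (void)drawFrame
{
    // glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // full clear: pays for every pixel
    glClear(GL_DEPTH_BUFFER_BIT);    // depth only; color is overwritten by the scene anyway

    [self drawSkybox];               // hypothetical; fills the whole viewport
    [self drawGroundPlane];          // hypothetical
    [self drawScene];                // hypothetical

    [self presentFramebuffer];       // hypothetical; e.g. presentRenderbuffer: on the EAGLContext
}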
In my (puzzle) game the pieces are drawn on-screen using a CALayer for each piece. There are 48 pieces (in an 8x6 grid), with each piece being 48x48 pixels. I'm not sure if this is just too many layers, but I don't know what a better solution would be, because redrawing the whole display using Quartz 2D every frame doesn't seem like it would be any faster.
Anyway, the images for the pieces come from one big PNG file that has 24 frames of animation for 10 different states (so it measures 1152x480 pixels), and the animation is done by setting the contentsRect property of each CALayer as I move it.
This actually seems to work pretty well with up to 7 pieces tracking a touch point in the window, but the weird thing is that when I initially start moving the pieces, for the first half a second or so it's very jerky, as if the CPU is doing something else, but after that it'll track and update the screen at 40+ fps (according to Instruments).
So does anyone have any ideas what could account for that initial jerkiness?
The only theory I could come up with is it's decompressing bits of the PNG file into a temporary location and then discarding them after the animation has stopped, in which case is there a way to stop Core Animation doing that?
I could obviously split the PNG file up into 10 pieces, but I'm not convinced that would help as they'd all (potentially) still need to be in memory at once.
EDIT: OK, as described in the comment to the first answer, I've split the image up into ten pieces that are now 576 x 96, so as to fit in with the constraints of the hardware. It's still not as smooth as it should be though, so I've put a bounty on this.
EDIT2: I've linked one of the images below. Essentially the user's touch is tracked and the offset from the start of the tracking is calculated (pieces can only move horizontally or vertically, and only one place at a time). Then one of the images is selected as the content of the layer (depending on what type of piece it is and whether it's moving horizontally or vertically), and the contentsRect property is set to choose one 48x48 frame from the larger image, with something like this:
layer.position = newPos;
layer.contents = (id)BallImg[imgNum];
// The sheet is 12 columns x 2 rows of 48x48 frames (576x96), so each frame is 1/12 wide and 1/2 tall.
layer.contentsRect = CGRectMake((1.0/12.0) * (float)(frame % 12),  // column
                                0.5 * (float)(frame / 12),         // row
                                1.0/12.0, 0.5);                    // frame size in unit coordinates
BTW, my theory about it decompressing the source image afresh each time wasn't right. I wrote some code to copy the raw pixels from the decoded PNG file into a fresh CGImage when the app loads, and it didn't make any difference.
Next thing I'll try is copying each frame into a separate CGImage which will get rid of the ugly contentsRect calculation at least.
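For what it's worth, a sketch of that per-frame slicing using CGImageCreateWithImageInRect (pre-ARC memory management assumed, and 'sheet' is the decoded 576x96 CGImage for one piece type):

NSMutableArray *frames = [NSMutableArray arrayWithCapacity:24];
for (int frame = 0; frame < 24; frame++) {
    CGRect rect = CGRectMake((frame % 12) * 48, (frame / 12) * 48, 48, 48);
    CGImageRef frameImage = CGImageCreateWithImageInRect(sheet, rect);
    [frames addObject:(id)frameImage];   // the array retains it (MRC); use __bridge under ARC
    CGImageRelease(frameImage);
}
// At animation time: layer.contents = [frames objectAtIndex:frame]; no contentsRect needed.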
EDIT3: Further back-to-basics investigation points to this being a problem with touch tracking and not a problem with Core Animation at all. I found a basic sample app that tracks touches, commented out the code that actually causes the screen to redraw, and the NSLog() output shows exactly the same problem I've been experiencing: a long-ish delay between the touchesBegan and the first touchesMoved events.
2009-06-05 01:22:37.209 TouchDemo[234:207] Begin Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.432 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.448 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.464 TouchDemo[234:207] Touch ID 0 Tracking with image 2
2009-06-05 01:22:37.480 TouchDemo[234:207] Touch ID 0 Tracking with image 2
The typical gap between touchesMoved events is 20 ms. The gap between touchesBegan and the first touchesMoved is ten times that. And that's with no computation or screen updating at all, just the NSLog call. Sigh. I guess I'll open this up as a separate question.
I don't think it's a memory issue; I'm thinking it has to do with the inefficiency of Core Animation handling an image that large.
Core Animation can't use it natively, as it exceeds the maximum texture size on the GPU (1024x1024). I would break it up some; individual images might give you the best performance, but you'll have to test to find out.
IIRC, UIImageView does its animating by setting successive individual images, so if it's good enough for Apple….
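For reference, the UIImageView route looks roughly like this, assuming you've pre-sliced the sheet into 24 individual 48x48 UIImages (the 'frames' array here is hypothetical):

UIImageView *pieceView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 48, 48)];
pieceView.animationImages = frames;   // NSArray of 24 UIImage frames
pieceView.animationDuration = 1.0;    // whole cycle in one second
pieceView.animationRepeatCount = 0;   // 0 means repeat forever
[pieceView startAnimating];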
When it comes to performance, I definitely recommend using Shark (even over Instruments). In the 'Time Profile' you can see what the bottlenecks in your app are, even if they're in Apple's code. I used it a lot while developing my iPhone app that uses OpenGL and Core Animation.
Have you tried using CATiledLayer yet?
It is optimized for this type of work.
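A minimal sketch of a CATiledLayer setup, in case it helps (sizes taken from the 8x6 grid of 48x48 pieces in the question; the drawing itself is left as a stub):

CATiledLayer *tiled = [CATiledLayer layer];
tiled.frame = CGRectMake(0, 0, 8 * 48, 6 * 48);
tiled.tileSize = CGSizeMake(48, 48);
tiled.delegate = self;               // must implement drawLayer:inContext:
[self.view.layer addSublayer:tiled];

// Tiles are drawn on demand, off the main thread:
// - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
// {
//     CGRect tile = CGContextGetClipBoundingBox(ctx);
//     // draw whichever piece frame falls inside 'tile'
// }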