How to scale down a Cocos2D image repeatedly - iPhone

I'm repeatedly shrinking an image by a small amount (and then rendering it to a new full-sized image), and the result is that a stripe down the middle is not being shrunk. I'm assuming this has to do with the resize method cocos2d uses. If I increase the amount I scale the image down by, the resize is too fast, and if I decrease the shrink amount, the bar down the middle gets even bigger. The following code is called 60 times a second, and the picture below shows the result. So... any suggestions on how to get rid of the bar?
[mySprite setScaleX:rtt.scaleX - .05];

I wasn't quite sure what you meant, but did you mean you're calling this line 60 times a second?
[mySprite setScaleX:rtt.scaleX - .05];
If so, then your sprite's scale will become negative in about a third of a second (60 calls per second × 0.05 per call is a drop of 3.0 in scale per second, starting from 1.0)...

Every time you manipulate an image, you lose information.
A better approach would be to always resize from the original, and just change the resize amount each time, rather than continually resizing the result of the last resize operation.
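For example, a rough sketch of that idea (mySprite is the sprite from the question; shrinkFactor and originalScaleX are hypothetical variables you would keep around yourself):
// keep a running factor and always apply it to the sprite's original scale
shrinkFactor -= 0.05f;                                // change the amount, not the result of the last resize
if (shrinkFactor < 0.0f) shrinkFactor = 0.0f;         // don't let the scale go negative
[mySprite setScaleX:originalScaleX * shrinkFactor];   // originalScaleX captured once at setup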

I'm new to the cocos2d engine, so I hope this helps. If you're shrinking an image, I would suggest using CCScaleBy. You can try something like this...
CCScaleBy *scaleAction = [CCScaleBy actionWithDuration:0.01f scaleX:0.95f scaleY:1.0f];
This will scale your sprite down by 5% each time it's run. Then you can have the sprite replaced by the new image when it reaches what you would consider its smallest pixel point. The duration may need to be played with, but I thought this would help.
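To actually apply it, you would run the action on the sprite, creating a fresh action for each step; roughly like this, where mySprite is just a placeholder for your sprite:
// each run shrinks the sprite by another 5% on X
[mySprite runAction:[CCScaleBy actionWithDuration:0.01f scaleX:0.95f scaleY:1.0f]];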

Related

Changing a UIView's transformation/rotation/scale WITHOUT animation

I need to change the size of my images according to their distance from the center of the screen. If an image is close to the middle, it should be given a scale of 1, and the further it is from the center, the closer its scale should be to zero, according to some function.
Since the user is panning the screen, I need a way to change the scale of the images (UIViews). This is not a classic animation where I know how to define an animation sequence exactly, mostly because of timing issues (due to system performance, I don't know how long the animation will last), so I need to simply change the scale in a single step (no timed animations).
That way, every time the function gets called during panning, all the images should update easily.
Is there a way to do that?
You could directly apply a CGAffineTransform to your UIImageView, i.e.:
CGAffineTransform trans = CGAffineTransformMakeScale(1.0,1.0);
imageView.transform = trans;
Of course you can change the values and/or use other CGAffineTransforms; this should get you on your way though.
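For instance, a rough sketch of tying the scale to the distance from the screen centre (the falloff function here is just an illustrative choice, not anything prescribed):
// compute a 1.0 -> 0.0 scale factor based on distance from the screen centre
CGPoint screenCenter = CGPointMake(CGRectGetMidX(self.view.bounds), CGRectGetMidY(self.view.bounds));
CGFloat dx = imageView.center.x - screenCenter.x;
CGFloat dy = imageView.center.y - screenCenter.y;
CGFloat distance = sqrtf(dx * dx + dy * dy);
CGFloat falloffRadius = CGRectGetWidth(self.view.bounds) / 2.0f;   // arbitrary radius for the falloff
CGFloat scale = MAX(0.05f, 1.0f - distance / falloffRadius);       // clamp so the scale never hits zero
imageView.transform = CGAffineTransformMakeScale(scale, scale);    // applied immediately, no animation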
Hope it helps !

sequence of images of my sprite hurt performance

So I have a sprite that I create on the screen every second. This sprite is a sequence of 20 images. I would like to know whether this can hurt performance, and if so, how I can reduce the impact on performance. Thank you :) Sorry for my English, I'm French :/
I've worked with sprites before, and yes, the more you have on the screen, the lower your performance will be. The "sequence of 20 images" part is what worries me. Instead of using 20 separate images, look into something called a spritesheet.
A spritesheet is where all your images (an animation in your case, right?) are in one file, and you keep some parameters stored, like:
-How big is one frame?
-How many frames?
-X and Y positions.
Example: if I have a 5-frame animation and each image is 20x100 pixels, I would put them all in one image file side by side, making the image file 100x100, and then draw each portion of this spritesheet on the screen in sequential order.
So my parameters would be:
-SizePerFrame = (20, 100)
-TotalSizeOfImage = (100, 100)
-FramesTotal = 5
-x y of First frame = (0,0)
So I would draw the portion from (0,0) to (20,100) for the first frame, (20,0) to (40,100) for the second, and so on.
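In cocos2d terms, pulling one frame out of such a sheet might look roughly like this (the file name spritesheet.png and the frame maths follow the 5-frame example above and are only illustrative):
// load the 100x100 sheet once, then cut out one 20x100 frame by its index
CCTexture2D *sheetTexture = [[CCTextureCache sharedTextureCache] addImage:@"spritesheet.png"];
int frameIndex = 2;                                           // third frame of the animation
CGRect frameRect = CGRectMake(frameIndex * 20, 0, 20, 100);
CCSprite *frameSprite = [CCSprite spriteWithTexture:sheetTexture rect:frameRect];
[self addChild:frameSprite];                                  // assuming self is a CCLayer or other CCNode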
Hope this makes sense

UIView animated crossfade causes alpha to drop below 1.0?

I'm crossfading between two views using UIView animation. I've noticed the following surprising fact:
If I have (say) two identical views in an identical place, and I animate a crossfade between them (e.g. animate the alpha from 0.0 to 1.0 on one while going from 1.0 to 0.0 on the other, in the same animation), the visible result dips slightly below opaque during the animation. This is a noticeable artifact and can be verified by putting some other view behind the crossfaded views: it becomes visible briefly during the animation before being obscured again.
I would expect (using any animation timing curve) that perfectly paired 0->1 and 1->0 alpha transitions would always add up to a net alpha of 1.0, and that in this test situation, I should never see any visible change in alpha, yet I do.
Any idea what's going on here? I could hack around this for a "fix", but I'm mostly interested in what I'm missing conceptually in the blending.
Thanks!
Two stacked views with alphas adding up to 1.0 don't do what you think they do: the transparencies are multiplied, not added.
Let's take it a chunk at a time. Here's a background, shining through 100%:
bg
|======>
|======>
|======>
|======>
Now let's add another view on top at 50% opacity. That means it lets 50% of the background through:
bg 50%
|===|===>
|===|
|===|===>
|===|
What if we have another 50% view on top?
bg 50% 50%
|===|===|===>
|===| |
|===|===|
|===| |
Another 50% of the stuff behind is passed through. This means that 50% × 50% = 25% of the background layer will still be showing through.
Now, what do you really want to do? You want the new view to appear smoothly, with an increasing amount of it showing through the old view. So just stack the two views and fade out the top one, but leave the bottom one at 100% opacity the whole time. Otherwise you'll be showing some of the background through during the animation.
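In code, that fix might look something like this (a sketch using the block-based UIView animation API; oldView and newView are placeholder names, with newView sitting underneath oldView in the same superview):
// newView stays fully opaque underneath; only the old view on top fades out
newView.alpha = 1.0;
oldView.alpha = 1.0;
[UIView animateWithDuration:0.3 animations:^{
    oldView.alpha = 0.0;                      // nothing behind the pair ever shows through
} completion:^(BOOL finished) {
    [oldView removeFromSuperview];
}];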
This is just a guess with no basis in actual fact, but since alpha is expressed as a CGFloat, I would advise against trying to assume anything about it adding up to 1.0 exactly, given the difficulty representing floating points with that type of precision. They likely add up to .99 or so, causing this artifact.

iPhone - detecting if a pointer is inside a coordinate area

I have an object that can be moved across the screen with the finger. This object is an image, a small image, like a thumbnail.
On the screen I have a background image on which 10 rectangles are drawn. These rectangles are part of the background image. The background image is dumb, just a UIImageView.
I have 10 sounds I want to play every time the thumbnail passes over one of the 10 areas, represented by the 10 rectangles on the background. Each area has its own sound.
All I have is the size of the translating thumbnail and its coordinates (like origin, center, width and height). I have the origin (x and y) coordinates in realtime.
The point is: how do I detect whether the translating thumbnail is over one of the 10 squares, within a certain tolerance (for example ±10 pixels), and discover which area it is?
The problem: since I have the origin coordinates in real time, I can always run a loop to check whether this value is inside one of the 10 rectangles, but this is CPU intensive because the loop will run for every pixel the thumbnail scrolls.
Any other ideas on how to do that?
Thanks for any help.
You could poll for the coordinates at a predefined interval instead of constantly.
The idea behind this is to set off a (say, one-second) timer in the main loop. When the timer finishes, it fires an event in which you can query the current location. Then use that value to check which rectangle it's in.
I would use a timer to fire a method which checks them every .2-.5 seconds:
[NSTimer scheduledTimerWithTimeInterval:0.2 target:self selector:@selector(checkPointInRects) userInfo:nil repeats:YES];
Use touchesBegan/Moved/Ended to cache the current touch location and refer to it in the checkPointInRects method. You can use CGRectContainsPoint to determine whether the point lies in any given rectangle.
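A minimal sketch of that check (rects, currentTouchPoint and playSoundForArea: are placeholder names for whatever you already have in your controller):
- (void)checkPointInRects {
    // currentTouchPoint is cached in touchesBegan/Moved/Ended
    for (int i = 0; i < 10; i++) {
        // grow each rect by 10 points in every direction to get the +/- 10 pixel tolerance
        CGRect hitRect = CGRectInset(rects[i], -10.0f, -10.0f);
        if (CGRectContainsPoint(hitRect, currentTouchPoint)) {
            [self playSoundForArea:i];        // hypothetical helper that plays that area's sound
            break;
        }
    }
}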

Why is scaling down a UIImage from the camera so slow?

Resizing a camera UIImage returned by the UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.
[Update: last call for creative ideas here! My next option is to go ask Apple, I guess.]
Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024x1024 textured quads onto the screen in 1/60th of a second, so there really should be a way of resizing a 2048x1536 image down to 640x480 in a lot less than 1.5 seconds.
So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be swizzled in some fashion that the GPU can't help with?
My best guess is that it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way that it might be possible to convince the system to give me the data quickly, even if it's in the wrong format, and I'll deal with it myself later?
As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.
So, the question: is there some alternative drawing method besides just using CGBitmapContextCreate and CGContextDrawImage that takes more advantage of the GPU?
Something to investigate: if I start with a UIImage of the same size that's not from the image picker, is it just as slow? Apparently not...
Update: Matt Long found that it only takes 30 ms to resize the image you get back from the picker in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I'm using takePicture to take pictures programmatically. I see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 sec to 1.3 sec, at a cost in image quality--but that's still far from the speed the GPU should be capable of!
Last update before the week runs out: user refulgentis did some profiling which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, very bizarre.
It seems that you have made several assumptions here that may or may not be true. My experience is different from yours. This method seems to take only 20-30 ms on my 3GS when scaling a photo snapped from the camera to 0.31 of the original size with a call to:
CGImageRef scaled = CreateScaledCGImageFromCGImage([image CGImage], 0.31);
(I get 0.31 by taking the width scale, 640.0/2048.0, by the way)
I've checked to make sure the image is the same size you're working with. Here's my NSLog output:
2009-12-07 16:32:12.941 ImagePickerThing[8709:207] Info: {
UIImagePickerControllerCropRect = NSRect: {{0, 0}, {2048, 1536}};
UIImagePickerControllerEditedImage = <UIImage: 0x16c1e0>;
UIImagePickerControllerMediaType = "public.image";
UIImagePickerControllerOriginalImage = <UIImage: 0x184ca0>;
}
I'm not sure why there is a difference, and I can't answer your question as it relates to the GPU; however, I would consider 1.5 seconds versus 30 ms a very significant difference. Maybe compare the code in that blog post to what you are using?
Best Regards.
Use Shark, profile it, figure out what's taking so long.
I have to work a lot with MediaPlayer.framework and when you get properties for songs on the iPod, the first property request is insanely slow compared to subsequent requests, because in the first property request MobileMediaPlayer packages up a dictionary with all the properties and passes it to my app.
I'd be willing to bet that there is a similar situation occurring here.
EDIT: I was able to do a time profile in Shark of both Matt Long's UIImagePickerControllerEditedImage situation and the generic UIImagePickerControllerOriginalImage situation.
In both cases, a majority of the time is taken up by CGContextDrawImage. In Matt Long's case, the UIImagePickerController takes care of this in between the user capturing the image and the image entering 'edit' mode.
Normalizing the time spent in CGContextDrawImage to 100%: CGContextDelegateDrawImage then takes 100%, then ripc_DrawImage (from libRIP.A.dylib) takes 100%, and then ripc_AcquireImage (which looks like it decompresses the JPEG, spending most of its time in _cg_jpeg_idct_islow, vec_ycc_bgrx_convert, decompress_onepass and sep_upsample) takes 93% of the time. Only 7% of the time is actually spent in ripc_RenderImage, which I assume is the actual drawing.
I have had the same problem and banged my head against it for a long time. As far as I can tell, the first time you access the UIImage returned by the image picker, it's just slow. As an experiment, try timing any two operations with the UIImage--e.g., your scale-down, and then UIImageJPEGRepresentation or something. Then switch the order. When I've done this in the past, the first operation gets a time penalty. My best hypothesis is that the memory is still on the CCD somehow, and transferring it into main memory to do anything with it is slow.
When you set allowsImageEditing=YES, the image you get back is resized and cropped down to about 320x320. That makes it faster, but it's probably not what you want.
The best speedup I've found is:
CGContextSetInterpolationQuality(context, kCGInterpolationLow)
on the context you get back from CGBitmapContextCreate, before you do CGContextDrawImage.
The problem is that your scaled-down images might not look as good. However, if you're scaling down by an integer factor--e.g., 1600x1200 to 800x600--then it looks OK.
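Putting those pieces together, the draw step might look roughly like this (a sketch of the CGBitmapContextCreate route with the low-quality interpolation applied; source and targetSize are placeholder names):
// scale 'source' down to targetSize in a bitmap context, using cheap interpolation
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
                                             (size_t)targetSize.width, (size_t)targetSize.height,
                                             8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextSetInterpolationQuality(context, kCGInterpolationLow);    // the speedup discussed above
CGContextDrawImage(context,
                   CGRectMake(0, 0, targetSize.width, targetSize.height),
                   [source CGImage]);
CGImageRef scaledRef = CGBitmapContextCreateImage(context);
UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
CGImageRelease(scaledRef);
CGContextRelease(context);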
Here's a GitHub project that I've used, and it seems to work well. The usage is pretty clean as well: one line of code.
https://github.com/AliSoftware/UIImage-Resize
DO NOT USE CGBitmapContextCreate in this case! I spent almost a week in the same situation you are in. Performance will be absolutely terrible and it will eat up memory like crazy. Use UIGraphicsBeginImageContext instead:
// create a new image context of the desired size
UIGraphicsBeginImageContext(desiredImageSize);
CGContextRef c = UIGraphicsGetCurrentContext();
// clear the new context
CGContextClearRect(c, CGRectMake(0, 0, desiredImageSize.width, desiredImageSize.height));
// draw the source image into the (smaller) destination rect
CGContextDrawImage(c, rect, [image CGImage]);
// grab the result and close the context
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
In the above example (from my own image resize code), "rect" is significantly smaller than the image. The code above runs very fast, and should do exactly what you need.
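For what it's worth, desiredImageSize and rect in that snippet would typically just come from the target size you want, e.g. (values only for illustration):
// e.g. shrink a 2048x1536 camera image down to 640x480
CGSize desiredImageSize = CGSizeMake(640, 480);
CGRect rect = CGRectMake(0, 0, desiredImageSize.width, desiredImageSize.height);
Note that CGContextDrawImage uses Quartz's flipped coordinate system, so depending on your setup you may prefer [image drawInRect:rect], which draws the right way up in a UIKit image context.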
I'm not entirely sure why UIGraphicsBeginImageContext is so much faster, but I believe it has something to do with memory allocation. I've noticed that this approach requires significantly less memory, implying that the OS has already allocated space for an image context somewhere.