I'm using the QuartzImage class from one of the demo projects, and what I was trying to achieve is a simple frame display unit that draws a 320x480 image every 1/10th of a second, so my "frame rate" should be 10 frames per second.
The QuartzImage demo class has a drawInContext method that draws a CGImageRef using CGContextDrawImage(). I measured how long it takes to complete, and it averages around 200 ms:
2011-03-24 11:12:33.350 QuartzDemo[3159:207] drawInContext took 0.19105 secs
-(void)drawInContext:(CGContextRef)context
{
CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
CGRect imageRect;
imageRect.origin = CGPointMake(0.0, 0.0);
imageRect.size = CGSizeMake(320.0f, 480.0f);
CGContextDrawImage(context, imageRect, image);
CFAbsoluteTime end = CFAbsoluteTimeGetCurrent();
NSLog(#"drawInContext took %2.5f secs", end - start);
}
Can anyone explain why it's taking that long, and whether there is any way to improve the performance? 200 ms seems much longer than it should take.
UPDATES
I tried @Brad Larson's suggestion, but I'm not seeing much of a performance improvement.
The updated version uses my own view class:
@interface FDisplay : UIView {
    CALayer *imgFrame;
    NSInteger frameNum;
}
@end
In my class implementation:
- (id)initWithFrame:(CGRect)frame {
............
frameNum = 0;
NSString *file = [NSString stringWithFormat:@"frame%d", frameNum];
UIImage *img = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:file ofType:@"jpg"]];
imgFrame = [CALayer layer];
CGFloat nativeWidth = CGImageGetWidth(img.CGImage);
CGFloat nativeHeight = CGImageGetHeight(img.CGImage);
CGRect startFrame = CGRectMake(0.0, 0.0, nativeWidth, nativeHeight);
imgFrame.contents = (id)img.CGImage;
imgFrame.frame = startFrame;
CALayer *l = [self layer];
[l addSublayer:imgFrame];
}
I have an NSTimer firing every 0.1 seconds that calls my refresh method:
NSString *file = [NSString stringWithFormat:@"frame%d", frameNum];
UIImage *img = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:file ofType:@"jpg"]];
frameNum++;
if (frameNum>100)
frameNum = 0;
[CATransaction begin];
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
[imgFrame setContents:(id)img.CGImage];
[CATransaction commit];
end = CFAbsoluteTimeGetCurrent();
NSLog(#"(%d)refresh took %2.5f secs", [[self subviews] count],end - start);
I think I got everything right, but the frame rate is still way too low:
refresh took 0.15960 secs
Using Quartz to draw out images is about the slowest way you could do this, due to the way that content is presented to the screen.
As a step up, you could use a UIImageView and swap out its hosting image for each frame. An even faster approach might be to use a CALayer and set its contents property to a CGImageRef representing each image frame. You may need to disable implicit animations using a CATransaction for this to be as fast as it can be. If I recall, I was able to get 320x480 images to be drawn at over 15 FPS on an original iPhone using the latter method.
Finally, for optimal performance you could set up an OpenGL ES display with a rectangle that filled the screen and supply the images as textures to be rendered on that rectangle. This would require significantly more code, but it would be extremely fast.
I have been porting my game from Android to iPhone and I was shocked by the performance of Quartz.
The same code on Android is 10x faster than on iPhone. I have a few DrawImages, and a bunch of line draws and beziers.
It's so slow on iOS, especially on the iPhone 4, where the processor struggles to keep up with the Retina display resolution.
My game performed perfectly at 60 fps on almost any Android device.
I tried several approaches for rendering (always avoiding OpenGL). I started by drawing everything every frame. Then I started rendering as much as I could before the game loop, using UIImages. Now I'm trying CALayers. Although I get a steady 60 FPS on the iPhone 3GS and 4S, the most I can get on the iPhone 4 is 45 FPS.
My goal is to use the Google Street View API to display a full-fledged, scrollable panoramic street view image to the user. Basically, the API provides me with many images where I can vary the direction, height, zoom, location, etc. I can retrieve all of these and hope to stitch them together and view them. The first question is: do you know of any resources that demo a full, working Google Street View implementation, where a user can swipe around to move the street view, just like the old iOS 5 Maps Street View that I'm sure we all miss?
If not, I will basically be downloading hundreds of photos that differ in vertical and horizontal direction. Is there a library, API, resource, or method that can stitch all these photos together into a big panorama so the user can swipe to view it on the tiny iPhone screen?
Thanks to everyone!
I threw together a quick implementation to do a lot of this as a demo for you. There are some excellent open source libraries out there that make an amateur version of StreetView very simple. You can check out my demo on GitHub: https://github.com/ocrickard/StreetViewDemo
You can use the heading and pitch parameters from the Google StreetView API to generate tiles. These tiles could be arranged in a UIScrollView as both Bilal and Geraud.ch suggest. However, I really like the JCTiledScrollView because it contains a pretty nice annotation system for adding pins on top of the images like Google does, and its datasource/delegate structure makes for some very straight forward image handling.
The meaty parts of my implementation follow:
- (UIImage *)tiledScrollView:(JCTiledScrollView *)scrollView imageForRow:(NSInteger)row column:(NSInteger)column scale:(NSInteger)scale
{
float fov = 45.f / scale;
float heading = fmodf(column*fov, 360.f);
float pitch = (scale - row)*fov;
if(lastRequestDate) {
while(fabsf([lastRequestDate timeIntervalSinceNow]) < 0.1f) {
//spin until at least 0.1 seconds have passed since the last request
}
}
lastRequestDate = [NSDate date];
int resolution = (scale > 1.f) ? 640 : 200;
NSString *path = [NSString stringWithFormat:@"http://maps.googleapis.com/maps/api/streetview?size=%dx%d&location=40.720032,-73.988354&fov=%f&heading=%f&pitch=%f&sensor=false", resolution, resolution, fov, heading, pitch];
NSError *error = nil;
NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:path] options:0 error:&error];
if(error) {
NSLog(#"Error downloading image:%#", error);
}
UIImage *image = [UIImage imageWithData:data];
//Distort image using GPUImage
{
//This is where you should try to transform the image. I messed around
//with the math for awhile, and couldn't get it. Therefore, this is left
//as an exercise for the reader... :)
/*
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
GPUImageTransformFilter *stillImageFilter = [[GPUImageTransformFilter alloc] init];
[stillImageFilter forceProcessingAtSize:image.size];
//This is actually based on some math, but doesn't work...
//float xOffset = 200.f;
//CATransform3D transform = [ViewController rectToQuad:CGRectMake(0, 0, image.size.width, image.size.height) quadTLX:-xOffset quadTLY:0 quadTRX:(image.size.width+xOffset) quadTRY:0.f quadBLX:0.f quadBLY:image.size.height quadBRX:image.size.width quadBRY:image.size.height];
//[(GPUImageTransformFilter *)stillImageFilter setTransform3D:transform];
//This is me playing guess and check...
CATransform3D transform = CATransform3DIdentity;
transform.m34 = fabsf(pitch) / 60.f * 0.3f;
transform = CATransform3DRotate(transform, pitch*M_PI/180.f, 1.f, 0.f, 0.f);
transform = CATransform3DScale(transform, 1.f/cosf(pitch*M_PI/180.f), sinf(pitch*M_PI/180.f) + 1.f, 1.f);
transform = CATransform3DTranslate(transform, 0.f, 0.1f * sinf(pitch*M_PI/180.f), 0.f);
[stillImageFilter setTransform3D:transform];
[stillImageSource addTarget:stillImageFilter];
[stillImageFilter prepareForImageCapture];
[stillImageSource processImage];
image = [stillImageFilter imageFromCurrentlyProcessedOutput];
*/
}
return image;
}
Now, in order to get the full 360 degree, infinite scrolling effect Google has, you would have to do some trickery in the observeValueForKeyPath method where you observe the contentOffset of the UIScrollView. I've started implementing this, but did not finish it. The idea is that when the user reaches either the left or right side of the view, the contentOffset property of the scrollView is pushed to the opposite side of the scrollView. If you can get the content to align properly, and you set up the contentSize just right, this should work.
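To make that wrap-around idea more concrete, here is a minimal sketch (not part of the demo project) of what the contentOffset observation might look like, assuming the scroll view's content is padded with one duplicated page on each side:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"contentOffset"]) {
        UIScrollView *scrollView = (UIScrollView *)object;
        CGFloat pageWidth = scrollView.bounds.size.width;
        CGPoint offset = scrollView.contentOffset;
        // When the user scrolls into a duplicated edge page, silently jump
        // to the equivalent offset on the other side of the content.
        if (offset.x <= 0.0f) {
            offset.x += scrollView.contentSize.width - 2.0f * pageWidth;
            scrollView.contentOffset = offset;
        } else if (offset.x >= scrollView.contentSize.width - pageWidth) {
            offset.x -= scrollView.contentSize.width - 2.0f * pageWidth;
            scrollView.contentOffset = offset;
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}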
Finally, I should note that the Google StreetView system has a limit of 10 images/second, so you have to throttle your requests or the device's IP address will be blacklisted for a certain amount of time (my home internet is now blocked from StreetView requests for the next few hours because I didn't understand this at first).
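If you want to avoid the busy-wait in the tile delegate method above, a sleep-based throttle is one alternative; this sketch assumes the tile requests arrive on a background queue (so sleeping doesn't block the UI) and reuses the same lastRequestDate ivar:
// Sleep just long enough that requests stay under ~10 per second.
NSTimeInterval elapsed = lastRequestDate ? -[lastRequestDate timeIntervalSinceNow] : 1.0;
if (elapsed < 0.1) {
    [NSThread sleepForTimeInterval:(0.1 - elapsed)];
}
lastRequestDate = [NSDate date];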
You need to use a UIScrollView, set its clipsToBounds property to YES, add all the images to the scroll view, and set its contentOffset according to the images.
You should use the method described in this post: http://mobiledevelopertips.com/user-interface/creating-circular-and-infinite-uiscrollviews.html
You will have to build some kind of factory that transforms the images to the right size for your viewport (iPhone/iPad), and then add some buttons that can be tapped to go to the next place.
Unfortunately, if you want to go to a globe version (instead of a tube one), I think you'll need to go full OpenGL to display the images on that 3D surface.
So I've got a basic drawing app in progress that allows me to draw lines. I draw to an off-screen bitmap and then present the image in drawRect:. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I took the code and adapted it from this tutorial: http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu . As you can see in the comments, people are also saying it's too slow, but the author hasn't responded.
So how can I speed it up? Or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m.
-(id)initWithCoder:(NSCoder *)aDecoder {
if ((self=[super initWithCoder:aDecoder])) {
[self setUpBuffer];
}
return self;
}
-(void)setUpBuffer {
CGContextRelease(offscreenBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width*4, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}
-(void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
CGContextBeginPath(offscreenBuffer);
CGContextMoveToPoint(offscreenBuffer, coordA.x,coordA.y);
CGContextAddLineToPoint(offscreenBuffer, coordB.x,coordB.y);
CGContextSetLineWidth(offscreenBuffer, thickness);
CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
CGContextStrokePath(offscreenBuffer);
}
- (void)drawRect:(CGRect)rect {
CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
UIImage *image =[[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);
[image drawInRect:self.bounds];
}
It works perfectly on the simulator but not on the device; I imagine that has something to do with processor speed.
I'm using ARC.
I tried to fix your code; however, as you only seem to have posted half of it, I couldn't get it working (copy-pasting it results in lots of errors, let alone being able to performance-tune it).
However there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is to use -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means only the little rect that changed gets redrawn; for an iPad 3 with 1024*768*4 pixels, redrawing everything is a lot of work. Reducing that to about 20*20 or less for each frame will massively improve performance.
CGRect rect;
rect.origin.x = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;
[self setNeedsDisplayInRect:rect];
Another big improvement is to only draw the CGPath for the current touch (which you do). However, you then also draw the saved/cached image in drawRect:, so it is redrawn every frame. A better approach is to make the draw view transparent and put a UIImageView behind it. UIImageView is the best way to display images on iOS.
- DrawView (1 finger)
-drawRect:
- BackgroundView (the image of the old touches)
-self.image
The draw view itself then only ever draws the current touch, and only the part that changes each time. When the user lifts their finger, you can cache that stroke to a UIImage, draw it over the background UIImageView's current image, and set imageView.image to the new image.
That final combining step involves drawing two full-screen images into an off-screen CGContext, which will cause lag if done on the main thread; instead it should be done on a background thread and the result pushed back to the main thread (see the sketch after the outline below).
* touch starts *
- DrawView : draw current touch
* touch ends *
- 'background thread' : combine backgroundView.image and DrawView.drawRect
* thread finished *
send resulting UIImage to main queue and set backgroundView.image to it;
Clear DrawView's current path that is now in the cache;
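As a rough illustration of the combine step, something along these lines could work; backgroundImageView, renderCurrentStrokeInContext:, and clearCurrentStroke are hypothetical names, not from the question:
- (void)commitCurrentStroke
{
    UIImage *existing = self.backgroundImageView.image;
    CGSize size = self.bounds.size;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Combine the old cached image and the just-finished stroke off the main thread.
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
        [existing drawInRect:CGRectMake(0, 0, size.width, size.height)];
        [self renderCurrentStrokeInContext:UIGraphicsGetCurrentContext()];
        UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            self.backgroundImageView.image = combined;
            [self clearCurrentStroke];
        });
    });
}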
All of this combined can make a very smooth 60 fps drawing app. However, views are not updated as quickly as we'd like, so the drawing looks jagged when moving the finger quickly. This can be improved by using UIBezierPaths instead of CGPaths:
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint);
-[UIBezierPath addQuadCurveToPoint:mid controlPoint:lastPoint];
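Putting that together, a touchesMoved: handler might look roughly like this (a sketch only, where path is assumed to be a UIBezierPath ivar):
static CGPoint midPoint(CGPoint a, CGPoint b) {
    return CGPointMake((a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint lastPoint = [touch previousLocationInView:self];
    // Curve to the midpoint, using the previous sample as the control point,
    // which rounds off the corners between successive touch samples.
    [self.path addQuadCurveToPoint:midPoint(currentPoint, lastPoint) controlPoint:lastPoint];
    [self setNeedsDisplayInRect:CGRectUnion(
        CGRectMake(currentPoint.x - 20, currentPoint.y - 20, 40, 40),
        CGRectMake(lastPoint.x - 20, lastPoint.y - 20, 40, 40))];
}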
The reason it is slow is that every frame you are creating a bitmap and trying to draw it.
You asked for better ways of doing it? Have you looked at the Apple sample code for a drawing app on iOS? If you don't like that, you can always use cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, I suppose you should consider using a background thread for all the hard work of image rendering and the main thread for UI updates only, i.e.:
static UIImage *imageBuffer = nil; // written on the background queue, read in drawRect:
- (UIImage *)drawSomeImage
{
UIGraphicsBeginImageContext(self.bounds.size);
// draw image with CoreGraphics
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
- (void)updateUI
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// prepare image on background thread
imageBuffer = [self drawSomeImage];
dispatch_async(dispatch_get_main_queue(), ^{
// calling drawRect with prepared image
[self setNeedsDisplay];
});
});
}
- (void)drawRect:(CGRect)rect
{
// draw image buffer on current context
[imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better, switch to a UIImageView. That way you can get rid of the -drawRect: method entirely and just update the UIImageView's image property when the image is ready, as in the sketch below.
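A minimal sketch of that UIImageView variant, assuming an imageView ivar replaces the custom drawRect: view:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *rendered = [self drawSomeImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        // No drawRect: override needed; UIImageView composites the bitmap on the GPU.
        self.imageView.image = rendered;
    });
});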
Well, I think you need to change your logic. You may get some very good ideas from this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
If you feel you don't have time to work through it, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use a CGLayer for caching your paths; read the docs, it's great for optimization.
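A minimal sketch of that CGLayer idea (assuming a CGLayerRef ivar named cachedLayer, created lazily the first time drawRect: runs): strokes are drawn once into the layer, and drawRect: only blits it.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (cachedLayer == NULL) {
        cachedLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    }
    CGContextDrawLayerInRect(context, self.bounds, cachedLayer);
}

- (void)strokeFrom:(CGPoint)a to:(CGPoint)b   // hypothetical helper called from touch handling
{
    if (cachedLayer == NULL) return; // layer is created on the first drawRect: pass
    CGContextRef layerContext = CGLayerGetContext(cachedLayer);
    CGContextSetLineWidth(layerContext, 3.0);
    CGContextSetLineCap(layerContext, kCGLineCapRound);
    CGContextMoveToPoint(layerContext, a.x, a.y);
    CGContextAddLineToPoint(layerContext, b.x, b.y);
    CGContextStrokePath(layerContext);
    [self setNeedsDisplayInRect:CGRectMake(MIN(a.x, b.x) - 5, MIN(a.y, b.y) - 5,
                                           fabs(a.x - b.x) + 10, fabs(a.y - b.y) + 10)];
}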
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code. After all, when you touch the screen and draw something, you otherwise need to redraw the entire image, which is a very heavy operation. If you like the way Pixelate performs, here's how I did it:
1) Split the image into n x m tiles, so I could change those values to obtain bigger or smaller tiles. In the worst-case scenario (the user taps at the intersection of 4 tiles) you have to redraw those 4 tiles, not the entire image.
2) Make a 3-dimensional matrix that stores the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value depending on whether you are using PNGs or JPGs) of the first pixel of the first tile.
3) Get the location the user pressed and calculate the tiles that need to be modified.
4) Modify the values in the matrix and update the tiles that need updating.
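A hypothetical sketch of step 3, mapping the touch point (plus a brush radius) to the range of tiles that needs redrawing; tileWidth, tileHeight, numCols, numRows, and redrawTileAtRow:column: are placeholder names:
int firstCol = MAX(0, (int)floorf((point.x - brushRadius) / tileWidth));
int lastCol  = MIN(numCols - 1, (int)floorf((point.x + brushRadius) / tileWidth));
int firstRow = MAX(0, (int)floorf((point.y - brushRadius) / tileHeight));
int lastRow  = MIN(numRows - 1, (int)floorf((point.y + brushRadius) / tileHeight));
for (int row = firstRow; row <= lastRow; row++) {
    for (int col = firstCol; col <= lastCol; col++) {
        [self redrawTileAtRow:row column:col]; // only these tiles get re-rendered
    }
}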
NOTE: This most certainly isn't the best option. It's just an alternative. I mentioned it because I think it is close to what you have right now. And it worked for me on an iPhone 3GS. If you are targeting >= iPhone 4 , you should be more than ok.
Regards,
George
The method you've suggested is way too inefficient, because creating the image every time you move your finger is inappropriate.
If it's just paths that you need to draw, keep a CGMutablePathRef as a member variable,
and in drawRect: just add the specified points using the CGPath functions and stroke the path.
More importantly, when refreshing the view, call setNeedsDisplayInRect: passing only the area that you need to draw. I hope this works for you.
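A rough sketch of that approach, with path as a CGMutablePathRef ivar (names and line styling are just for illustration):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint p = [[touches anyObject] locationInView:self];
    if (path == NULL) {
        path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, p.x, p.y);
    } else {
        CGPathAddLineToPoint(path, NULL, p.x, p.y);
    }
    // Only invalidate the small area around the new point.
    [self setNeedsDisplayInRect:CGRectMake(p.x - 20, p.y - 20, 40, 40)];
}

- (void)drawRect:(CGRect)rect
{
    if (path == NULL) return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, path);
    CGContextSetLineWidth(context, 3.0);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextStrokePath(context);
}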
I'm trying to draw some shadows in a rect. The shadow image itself is about 1px * 26px.
Here are three approaches I've thought of for drawing the image in the view:
//These methods are called in drawRect:
/* Method 1 */
[self.upperShadow drawInRect:rectHigh]; //upperShadow is UIImage
[self.lowerShadow drawInRect:rectLow];
/* Method 2 */
CALayer *shadowTop = [CALayer layer];
shadowTop.frame = rectHigh;
shadowTop.contents = (__bridge id)topShadow; //topShadow is CGImage
[self.layer addSublayer:shadowTop];
CALayer *shadowLow = [CALayer layer];
shadowLow.frame = rectLow;
shadowLow.contents = (__bridge id)lowShadow;
[self.layer addSublayer:shadowLow];
/* Method 3 */
UIImageView *tShadow = [[UIImageView alloc] initWithFrame:rectHigh];
UIImageView *bShadow = [[UIImageView alloc] initWithFrame:rectLow];
tShadow.image = self.upperShadow;
bShadow.image = self.lowerShadow;
tShadow.contentMode = UIViewContentModeScaleToFill;
bShadow.contentMode = UIViewContentModeScaleToFill;
[self addSubview:tShadow];
[self addSubview:bShadow];
I'm curious which of these is better, when it comes to performance in drawing and animation. From my benchmarking it seems that the layers are faster to draw. Here are some benchmarking stats:
drawInRect: took 0.00054 secs
CALayers took 0.00006 secs
UIImageView took 0.00017 secs
The view that contains these shadows is going to have a view above it that will be animated (the shadow view itself is not). Anything that would degrade animation performance should be avoided. Any thoughts on the three methods?
If the shadows are static, then the best way is to use two UIImageViews. It's even smarter than CALayer about how to deal with static images (though I don't know if that's going to make a difference here), and will otherwise have the same benefits as CALayer, such as having all compositing done on the GPU instead of on the CPU (as your Method 1 will require).
I have a scroll view loaded with 3 view controllers. Each view controller draws its layers with the code below
(there is more than that, but I pulled the rest out to check whether it would help). I still get very choppy scrolling.
Any help?
shani
CALayer *sublayer = [CALayer layer];
sublayer.backgroundColor = [Helper cardBackGroundColor:card].CGColor;
sublayer.shadowOffset = CGSizeMake(0, 3);
sublayer.shadowRadius = 5.0;
sublayer.shadowColor = [UIColor blackColor].CGColor;
sublayer.shadowOpacity = 0.8;
sublayer.frame = CGRectInset(self.view.layer.frame, 20, 20);
sublayer.borderColor = [UIColor blackColor].CGColor;
sublayer.borderWidth = 2.0;
sublayer.cornerRadius = 10.0;
[self.view.layer addSublayer:sublayer];
Drawing things with CALayer often yields poor performance. We usually use a stretchable image to get adequate performance. When you think about it, it makes sense to render the content beforehand rather than using the iPhone's limited processing power to render it in real time.
It's possible that you can get adequate performance from CALayer, but drawing a PNG will probably still be faster, which also saves battery life.
EDIT: So here's an example to explain the concept.
This code actually replaced a CALayer drawing that was too slow.
UIImageView *shadow = [[UIImageView alloc] initWithFrame:frame];
shadow.image = [[UIImage imageNamed:@"shadow.png"] stretchableImageWithLeftCapWidth:16.0 topCapHeight:16.0];
[contentView addSubview:shadow];
[shadow release];
shadow.png is 34 by 34 pixels and contains a shadowed square. Thanks to the stretchable image it's possible to resize the square without stretching the shadow. For more information about this I would suggest reading the documentation for stretchableImageWithLeftCapWidth:topCapHeight:. Also Google will help you find guides on how to work with stretchable images. If you have more questions I'll be happy to answer them.
You have a mask (assuming you set masksToBounds = YES somewhere) and a shadow on this layer. Both cause an offscreen rendering pass.
Please watch WWDC 2010 Session 425 - Core Animation in Practice, Part 2, which you can find here:
http://developer.apple.com/videos/wwdc/2010/
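One common mitigation, as a hedged sketch applied to the sublayer above: give the layer an explicit shadowPath so Core Animation doesn't have to derive the shadow from the layer's contents in an offscreen pass, and consider rasterizing the layer if its appearance rarely changes.
sublayer.shadowPath = [UIBezierPath bezierPathWithRoundedRect:sublayer.bounds
                                                 cornerRadius:sublayer.cornerRadius].CGPath;
// If the card's appearance is static, rasterize it once instead of every frame.
sublayer.shouldRasterize = YES;
sublayer.rasterizationScale = [UIScreen mainScreen].scale;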
I have a view with a UIImageView and a UIImage set on it. How do I sharpen or blur the image using Core Graphics?
Apple has a great sample program called GLImageProcessing that includes a very fast blur/sharpen effect using OpenGL ES 1.1 (meaning it works on all iPhones, not just the 3GS).
If you're not fairly experienced with OpenGL, the code may make your head hurt.
Going down the OpenGL route felt like insane overkill for my needs (blurring a touched point on an image). Instead, I implemented a simple blurring process that takes a touch point, creates a rect containing that touch point, samples the image at that point, and then redraws the sample image upside down on top of the source rect several times, slightly offset and with slightly different opacity. This produces a pretty nice poor man's blur effect without an insane amount of code and complexity. Code follows:
- (UIImage*)imageWithBlurAroundPoint:(CGPoint)point {
CGRect bnds = CGRectZero;
UIImage* copy = nil;
CGContextRef ctxt = nil;
CGImageRef imag = self.CGImage;
CGRect rect = CGRectZero;
CGAffineTransform tran = CGAffineTransformIdentity;
int indx = 0;
rect.size.width = CGImageGetWidth(imag);
rect.size.height = CGImageGetHeight(imag);
bnds = rect;
UIGraphicsBeginImageContext(bnds.size);
ctxt = UIGraphicsGetCurrentContext();
// Cut out a sample out the image
CGRect fillRect = CGRectMake(point.x - 10, point.y - 10, 20, 20);
CGImageRef sampleImageRef = CGImageCreateWithImageInRect(self.CGImage, fillRect);
// Flip the image right side up & draw
CGContextSaveGState(ctxt);
CGContextScaleCTM(ctxt, 1.0, -1.0);
CGContextTranslateCTM(ctxt, 0.0, -rect.size.height);
CGContextConcatCTM(ctxt, tran);
CGContextDrawImage(UIGraphicsGetCurrentContext(), rect, imag);
// Restore the context so that the coordinate system is restored
CGContextRestoreGState(ctxt);
// Cut out a sample image and redraw it over the source rect
// several times, shifting the opacity and the positioning slightly
// to produce a blurred effect
for (indx = 0; indx < 5; indx++) {
CGRect myRect = CGRectOffset(fillRect, 0.5 * indx, 0.5 * indx);
CGContextSetAlpha(ctxt, 0.2 * indx);
CGContextScaleCTM(ctxt, 1.0, -1.0);
CGContextDrawImage(ctxt, myRect, sampleImageRef);
}
CGImageRelease(sampleImageRef); // release the cropped sample; CGImageCreateWithImageInRect returns a +1 image
copy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return copy;
}
What you really need are the image filters in the Core Image API. Unfortunately, Core Image is not supported on the iPhone (unless that changed recently and I missed it). Be careful here, as, IIRC, the filters are available in the Simulator but not on the device.
AFAIK there is no other way to do it properly with the native libraries, although I've sort of faked a blur before by creating an extra layer over the top which is a copy of what's below, offset by a pixel or two and with a low alpha value. For a proper blur effect, though, the only way I've been able to do it is offline in Photoshop or similar.
Would be keen to hear if there is a better way too, but to my knowledge that is the situation currently.
Have a look at the following libraries:
https://github.com/coryleach/UIImageAdjust
https://github.com/esilverberg/ios-image-filters
https://github.com/cmkilger/CKImageAdditions