UIAlertView Rendering Error - iPhone

I have been working on an app for a couple of months now, but have finally run into an issue that I can't solve myself, and can't find anything on the internet to help.
I am using several normal UIAlertViews in my app. Some have 2 buttons, some have 3 buttons, and a couple have 2 buttons and a text field; however, all have the same issue. When I call [someAlertView show], the alert view appears as normal, but then its graphics context suddenly seems to get corrupted, as you can see from the screenshot.
This happens on both iPhone and iPad simulators (both 5.0 and 5.1), and happens on an iPad and iPhone4S device as well.
The image showing through is whatever happens to be behind the alertView.
The alert still works: I can click the buttons and type in the text field, and when it dismisses, the delegate methods are called correctly and everything goes back to normal. When the alertView appears again, the same thing happens.
The view behind the alert is a custom UIScrollView subclass with a content size of approximately 4000 by 1000 pixels and a UIImage as the background. The PNG file is mostly transparent, so it is only about 80 kB, and the phone has no issues rendering it - the scroll view remains fully responsive and is not slow.
It also has a CADisplayLink timer attached to it as part of the subclass. I have tried disabling this just before the alertView is shown, but it makes no difference, so I doubt that is the issue.
This app is a partial rewrite of one I made for a university project, and that one could display UIAlertViews over the top of a scrollView of the same size and subclass without issue. The difference between this app and that one is that in my old app, I had subclassed UIAlertView to add extra things such as a pickerView, however I decided that I didn't like the way it looked so moved everything out of the alert and am just sticking with a standard UIAlertView.
This is how the alertView in the screenshot is called:
- (IBAction)loadSimulation:(id)sender {
    importAlert = [[UIAlertView alloc] initWithTitle:@"Load Simulation" message:@"Enter Filename:" delegate:self cancelButtonTitle:@"Cancel" otherButtonTitles:@"Load", nil];
    [importAlert setAlertViewStyle:UIAlertViewStylePlainTextInput];
    [importAlert showPausingSimulation:self.simulationView]; //Calling [importAlert show]; makes no difference.
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
        [self hideOrganiser]; //Not an issue as the problem occurs on iPad as well.
    }
}
This is the UIAlertView category that adds the ability to stop the scroll view's CADisplayLink:
@interface UIAlertView (pauseDisplayLink)
- (void)showPausingSimulation:(UILogicSimulatorView*)simulationView;
@end

@implementation UIAlertView (pauseDisplayLink)
- (void)showPausingSimulation:(UILogicSimulatorView *)simulationView {
    [simulationView stopRunning];
    [simulationView removeDisplayLink]; //displayLink needs to be removed from the run loop, otherwise it will keep going in the background and get corrupted.
    [self show];
}
@end
I get no memory warnings when this happens, so I am doubtful it is due to lack of resources.
Has anyone come across an issue like this before? If you need further information I can try to provide it, but I am limited in what code I can post. Any help would be appreciated; I've been trying to solve this for two weeks and can't figure it out.
Edit:
It appears that it is not the AlertView at all (or rather it is not just the alertView), as the problem goes away when I remove the scroll view behind it, so there must be some issue between the two. This is the code for my UIScrollView subclass:
.h file:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@class ECSimulatorController;
@interface UILogicSimulatorView : UIScrollView {
    CADisplayLink *displayLink;
    NSInteger _updateRate;
    ECSimulatorController* _hostName;
}
@property (nonatomic) NSInteger updateRate;
@property (nonatomic, strong) ECSimulatorController* hostName;
- (void) removeDisplayLink;
- (void) reAddDisplayLink;
- (void) displayUpdated:(CADisplayLink*)timer;
- (void) startRunning;
- (void) stopRunning;
- (void) refreshRate:(NSInteger)rate;
- (void) setHost:(id)host;
- (void)setMinimumNumberOfTouches:(NSInteger)touches;
- (void)setMaximumNumberOfTouches:(NSInteger)touches;
@end
.m file:
#import "UILogicSimulatorView.h"
#import "ECSimulatorController.h"
#import <QuartzCore/QuartzCore.h>
#implementation UILogicSimulatorView
#synthesize updateRate = _updateRate;
#synthesize hostName = _hostName;
- (void)reAddDisplayLink {
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode]; //allows the display link to be re-added to the run loop after having been removed.
}
- (void)removeDisplayLink {
[displayLink removeFromRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode]; //allows the display link to be removed from the Run loop without deleting it. Removing it is essential to prevent corruption between the games and the simulator as both use CADisplay link, and only one can be in the run loop at a given moment.
}
- (void)startRunning {
[self refreshRate:self.updateRate];
[displayLink setPaused:NO];
}
- (void)refreshRate:(NSInteger)rate {
if (rate > 59) {
rate = 59; //prevent the rate from being set too an undefined value.
}
NSInteger frameInterval = 60 - rate; //rate is the number of frames to skip. There are 60FPS, so this converts to frame interval.
[displayLink setFrameInterval:frameInterval];
}
- (void)stopRunning {
[displayLink setPaused:YES];
}
- (void)displayUpdated:(CADisplayLink*)timer {
//call the function that the snakeController host needs to update
[self.hostName updateStates];
}
- (void)setHost:(ECSimulatorController*)host;
{
self.hostName = host; //Host allows the CADisplay link to call a selector in the object which created this one.
}
- (id)initWithFrame:(CGRect)frame
{
//Locates the UIScrollView's gesture recogniser
if(self = [super initWithFrame:frame])
{
[self setMinimumNumberOfTouches:2];
displayLink = [CADisplayLink displayLinkWithTarget:self selector:#selector(displayUpdated:)]; //CADisplayLink will update the logic gate states.
self.updateRate = 1;
[displayLink setPaused:YES];
}
return self;
}
- (void)setMinimumNumberOfTouches:(NSInteger)touches{
for (UIGestureRecognizer *gestureRecognizer in [self gestureRecognizers])
{
if([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]])
{
//Changes the minimum number of touches to 'touches'. This allows the UIPanGestureRecogniser in the object which created this one to work with one finger.
[(UIPanGestureRecognizer*)gestureRecognizer setMinimumNumberOfTouches:touches];
}
}
}
- (void)setMaximumNumberOfTouches:(NSInteger)touches{
for (UIGestureRecognizer *gestureRecognizer in [self gestureRecognizers])
{
if([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]])
{
//Changes the maximum number of touches to 'touches'. This allows the UIPanGestureRecogniser in the object which created this one to work with one finger.
[(UIPanGestureRecognizer*)gestureRecognizer setMaximumNumberOfTouches:touches];
}
}
}
#end

Well, I have managed to come up with a solution to this. Really it is probably just masking the issue rather than finding the root cause, but at this point I will take it.
First some code:
@interface UIView (ViewCapture)
- (UIImage*)captureView;
- (UIImage*)captureViewInRect:(CGRect)rect;
@end

@implementation UIView (ViewCapture)
- (UIImage*)captureView {
    return [self captureViewInRect:self.frame];
}
- (UIImage*)captureViewInRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:context];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}
@end
- (void)showPausingSimulation:(UILogicSimulatorView *)simulationView {
    [simulationView stopRunning];
    UIView* superView = simulationView.superview;
    CGPoint oldOffset = simulationView.contentOffset;
    for (UIView* subview in simulationView.subviews) {
        //offset subviews so they appear when content offset is (0,0)
        CGRect frame = subview.frame;
        frame.origin.x -= oldOffset.x;
        frame.origin.y -= oldOffset.y;
        subview.frame = frame;
    }
    simulationView.contentOffset = CGPointZero; //set the offset to (0,0)
    UIImage* image = [simulationView captureView]; //Capture the frame of the scrollview
    simulationView.contentOffset = oldOffset; //restore the old offset
    for (UIView* subview in simulationView.subviews) {
        //Restore the original positions of the subviews
        CGRect frame = subview.frame;
        frame.origin.x += oldOffset.x;
        frame.origin.y += oldOffset.y;
        subview.frame = frame;
    }
    [simulationView setHidden:YES];
    UIImageView* imageView = [[UIImageView alloc] initWithFrame:simulationView.frame];
    [imageView setImage:image];
    [imageView setTag:999];
    [superView addSubview:imageView];
    [imageView setHidden:NO];
    superView = nil;
    imageView = nil;
    image = nil;
    [self show];
}
- (void)dismissUnpausingSimulation:(UILogicSimulatorView *)simulationView {
    UIView* superView = simulationView.superview;
    UIImageView* imageView = (UIImageView*)[superView viewWithTag:999];
    [imageView removeFromSuperview];
    imageView = nil;
    superView = nil;
    [simulationView setHidden:NO];
    [simulationView startRunning];
}
Then I modified the dismiss delegate method in my class to include this line:
- (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
    [alertView dismissUnpausingSimulation:self.simulationView];
    ...
When the alert view is called, but before it is shown, I need to hide the simulator to prevent it corrupting the alert. However, just hiding it is ugly, because then all that is visible behind the alert is an empty view.
To fix this, I first make a UIImage from the simulator view's graphics context. I then create a UIImageView with the same frame as the simulator and set the UIImage as its image.
I then hide the simulator view (curing the alert issue) and add my new UIImageView to the simulator's superview. I also set the tag of the image view so I can find it later.
When the alert dismisses, the image view is then recovered based on its tag, and removed from its superview. The simulator is then unhidden.
The result is that the rendering issue is gone.

I know it's too late for an answer to this question, but lately I experienced this very same issue.
My Case:
I added a couple of custom UIViews with background images and some controls to the scroll view, with a shadow effect. I had also set the shadowOffset.
The Solution:
After some step-by-step analysis, I found that setting the shadowOpacity caused the rendering problem for me. When I commented out that line of code, the UIAlertView went back to its normal appearance.
More:
To make sure, I created a new project mimicking the original UI with shadowOpacity, but it didn't cause the rendering problem as I expected. So I am not sure about the root cause; for me it was setShadowOpacity.
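For reference, this is a rough sketch of the kind of layer shadow setup being described; the view name and values here are hypothetical, and the shadowOpacity line is the one that had to be commented out in my case:
#import <QuartzCore/QuartzCore.h>
// cardView is a hypothetical custom subview added to the scroll view
cardView.layer.shadowColor = [UIColor blackColor].CGColor;
cardView.layer.shadowOffset = CGSizeMake(0.0, 2.0);
cardView.layer.shadowRadius = 3.0;
//cardView.layer.shadowOpacity = 0.8; // commenting this out restored the UIAlertView's normal appearance for me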

Related

iPhone Camera App - how to stay in Camera view?

I've implemented the camera inside my app with the default showsCameraControls = YES. The issue I am having is that when the user confirms the image is a keeper, the camera is dismissed with [self.delegate didFinishWithCamera]. I would like to remain in the camera view until the user is done taking photos. Without [self.delegate didFinishWithCamera], the app hangs after the user confirms they want to keep the photo and never returns to the live camera feed. How do I remain in the camera view? Your help is appreciated!
@implementation PHFPhotoOverlayVC
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        self.imagePickerController = [[UIImagePickerController alloc] init];
        self.imagePickerController.delegate = self;
    }
    return self;
}
- (void)setupImagePicker:(UIImagePickerControllerSourceType)sourceType
{
    self.imagePickerController.sourceType = sourceType;
    if (sourceType == UIImagePickerControllerSourceTypeCamera)
    {
        self.imagePickerController.mediaTypes =
            [NSArray arrayWithObjects:(NSString *) kUTTypeImage, nil];
        self.imagePickerController.showsCameraControls = YES;
#if false
        if ([[self.imagePickerController.cameraOverlayView subviews] count] == 0)
        {
            CGRect overlayViewFrame = self.imagePickerController.cameraOverlayView.frame;
            CGRect newFrame = CGRectMake(0.0,
                                         CGRectGetHeight(overlayViewFrame) -
                                         self.view.frame.size.height - 10.0,
                                         CGRectGetWidth(overlayViewFrame),
                                         self.view.frame.size.height + 10.0);
            self.view.frame = newFrame;
            [self.imagePickerController.cameraOverlayView addSubview:self.view];
        }
#endif
    }
}
#pragma mark -
#pragma mark UIImagePickerControllerDelegate
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    self.imagePickerController.mediaTypes =
        [NSArray arrayWithObjects:(NSString *) kUTTypeImage, nil];
    self.imagePickerController.showsCameraControls = YES;
    UIImage *image = [info valueForKey:UIImagePickerControllerOriginalImage];
    if (self.delegate)
        [self.delegate didTakePicture:image];
    [self.delegate didFinishWithCamera];
}
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
{
    [self.delegate didFinishWithCamera];
}
@end
In order to achieve this, you will have to implement your own method for taking a picture, since this default button causes the view to dismiss itself.
Obviously, you can do this by setting showsCameraControls=NO, but then you'd have to recreate all the other functionality that you still want.
I've never done this, but it should be possible to leave showsCameraControls=YES and use the cameraOverlayView to simply layer an identical "Take Picture" button over the existing one. If you do that, you just need to have that button call the method -takePicture on the instance of your UIImagePickerController.
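If it helps, here is a rough, untested sketch of that idea; the overlay frame values are guesses and takeOverlayPicture is a hypothetical action method:
// e.g. in setupImagePicker:, build a transparent overlay with a duplicate shutter button
UIView *overlay = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
overlay.backgroundColor = [UIColor clearColor];
UIButton *shutterButton = [UIButton buttonWithType:UIButtonTypeCustom];
shutterButton.frame = CGRectMake(110.0, 390.0, 100.0, 60.0); // hypothetical - position it over the stock shutter button
[shutterButton addTarget:self action:@selector(takeOverlayPicture) forControlEvents:UIControlEventTouchUpInside];
[overlay addSubview:shutterButton];
self.imagePickerController.cameraOverlayView = overlay;

// Hypothetical action method - captures a photo without dismissing the picker.
- (void)takeOverlayPicture {
    [self.imagePickerController takePicture]; // imagePickerController:didFinishPickingMediaWithInfo: is still called
}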
Hopefully that's enough, feel free to comment and ask for any clarification.
You're going to need to implement a custom camera - check out Apple's SquareCam example
You'll be using AVFoundation and will be able to implement any kind of custom behavior when a photo is taken (including just staying on that camera page) - you'll even have to save it to the Photo Library yourself (the SquareCam example shows you how to do this). This also means that if you need the crop/resize controls, you'll have to create them yourself, as well as a picker view (grid gallery view) if you want the user to be able to review their photos after taking them.
It's probably around intermediate level stuff - took me a few days to implement, but if your app is in any way focused on photos, this is definitely the way to go. Gives you complete control of the camera UI and behavior.
Oh, and yeah you'll have to implement the flash button, shutter button, shutter effect, tap to focus, and anything else yourself too. AVFoundation basically gives you a straight pipe to the camera lens (figuratively).
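For a sense of what's involved, a bare-bones sketch of the AVFoundation route using the still-image APIs of that era might look roughly like this (names are illustrative, error handling omitted):
#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetPhoto;

NSError *error = nil;
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input) [session addInput:input];

AVCaptureStillImageOutput *stillOutput = [[AVCaptureStillImageOutput alloc] init];
[session addOutput:stillOutput];

// The preview layer stays on screen the whole time - nothing is dismissed after a shot.
AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.frame = self.view.bounds;
[self.view.layer addSublayer:preview];
[session startRunning];

// Later, from your own shutter button:
AVCaptureConnection *connection = [stillOutput connectionWithMediaType:AVMediaTypeVideo];
[stillOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *err) {
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
    UIImage *image = [UIImage imageWithData:jpegData];
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, NULL); // saving to the photo library is up to you
}];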

Custom UIButton drawing - stays depressed after touchesEnded

I've made a custom zero-image, Core Graphics drawn UIButton based on Ray Wenderlich's Tutorial (the only modifications are to CoolButton's drawRect: method, altered drawing code), and it works great most of the time. However, sometimes when I click it for a short period of time, it stays in a depressed state and doesn't return to normal.
From here, the only way to get it back to a normal state is via a long press. Simply clicking means it stays depressed.
Another thing to note is that I've hooked Touch Up Inside up to a chain of a few long methods - I don't think it would take more than 0.1 seconds to complete. I've even used dispatch_async in the @selector that is hooked up to Touch Up Inside, so there shouldn't be a delay in the UI updating, I think.
I've put an NSLog in the drawRect: which fires 3 times per button press usually, and it varies what UIControlState the button is in for each press:
For some short presses, it goes Highlighted, Highlighted, Normal
For longer presses, it's Highlighted, Normal, Normal.
However, for very short presses, it only fires twice, Highlighted -> Highlighted.
When it's a long press to get it back to Normal, it goes H, N, N.
This has been puzzling me for a while, and I haven't been able to work out why short presses only fire drawRect: twice, or why touchesEnded: doesn't seem to call drawRect:. Perhaps touchesEnded: isn't firing?
I really hope someone can help.
If you really want to generate the button images at runtime, generate them when the button is loaded. Then add them for different states using
[button setImage:btnImage forState:UIControlStateNormal];
You can always turn a view into an image using the following: http://pastie.org/244916
Really though, I'd recommend just making images beforehand. If you don't want to get Photoshop, there are plenty of alternatives. The upcoming Pixelmator update looks pretty suave, and it's ridiculously cheap!
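If you do generate the images at runtime, the view-to-image step is essentially the same renderInContext: trick as the captureView category earlier on this page. A rough sketch, where templateView is a hypothetical view styled to look like the button:
#import <QuartzCore/QuartzCore.h>

UIGraphicsBeginImageContextWithOptions(templateView.bounds.size, NO, 0.0);
[templateView.layer renderInContext:UIGraphicsGetCurrentContext()]; // draw the styled view into the image context
UIImage *btnImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[button setImage:btnImage forState:UIControlStateNormal]; // repeat with a differently styled view for UIControlStateHighlighted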
Well, well, that was easy. 0.15 second delay works, 0.1 doesn't.
As others said, you can simply generate an image and use it as the background (if the image is static). There is no need for Photoshop; you can generate your image once, take a snapshot, then cut the image out (with Anteprima) and save it as a PNG file :)
However, since you said the button is connected to some long methods, this may be why the button stays pressed: if you are not calling those methods in the background, the button will stay pressed until all the tasks have finished. As a test, try connecting the button to a single simple method (say, an NSLog) and check whether it stays pressed. If not, I suggest detaching your methods to the background.
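A quick sketch of that suggestion using GCD; doHeavyWork and updateUI are hypothetical stand-ins for the long methods currently wired to Touch Up Inside:
- (IBAction)buttonTapped:(id)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self doHeavyWork]; // long-running work runs off the main thread, so the button can redraw immediately
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateUI]; // UI updates go back on the main thread
        });
    });
}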
I too ran into a similar problem with a UIButton appearing to stick in one state when I customized the drawRect method. My hunch is that the state of the button changes more than once before drawRect is called. So I just added a custom property that is only set by methods called during touchDown and touchUpInside events. I had to dispatch threads to do this because of other operations that were holding up the main thread, causing a delay in the redrawHighlighted method.
It may be a little messy, but here's what worked for me:
// CustomButton.h
@interface CustomButton : UIButton
@property (atomic) BOOL drawHighlighted;
@end

// CustomButton.m
@implementation CustomButton
@synthesize drawHighlighted = _drawHighlighted;
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        _drawHighlighted = NO;
        [self addTarget:self action:@selector(redrawHighlighted) forControlEvents:UIControlEventTouchDown];
        [self addTarget:self action:@selector(redrawNormal) forControlEvents:UIControlEventTouchUpInside];
        [self addTarget:self action:@selector(redrawNormal) forControlEvents:UIControlEventTouchDragExit];
    }
    return self;
}
- (void)drawRect:(CGRect)frame
{
    // draw button here
}
- (void)redrawNormal
{
    [NSThread detachNewThreadSelector:@selector(redrawRectForNormal) toTarget:self withObject:nil];
}
- (void)redrawHighlighted
{
    [NSThread detachNewThreadSelector:@selector(redrawRectForHighlighted) toTarget:self withObject:nil];
}
- (void)redrawRectForNormal
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    _drawHighlighted = NO;
    [self setNeedsDisplay];
    [pool release];
}
- (void)redrawRectForHighlighted
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    _drawHighlighted = YES;
    [self setNeedsDisplay];
    [pool release];
}
@end
Another cheap but effective alternative I've found is to create a drawRect: that doesn't do any highlighting, and simply alter the button subclass's alpha property when the user interacts with the button.
For example:
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    self.alpha = 0.3f;
    return [super beginTrackingWithTouch:touch withEvent:event];
}
- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    self.alpha = 1.0f;
    [super endTrackingWithTouch:touch withEvent:event];
}
- (void)cancelTrackingWithEvent:(UIEvent *)event
{
    self.alpha = 1.0f; // restore full alpha when tracking is cancelled
    [super cancelTrackingWithEvent:event];
}
- (void)drawRect:(CGRect)rect
{
    // Your drawing code here sans highlighting
}

how to display UIActivityIndicatorView BEFORE rotation begins

I'd like to display an activity indicator BEFORE the work undertaken by willAnimateRotationToInterfaceOrientation:duration: begins. Most of the time in my app, this work is quickly completed and there would be no need for an activity indicator, but occasionally (first rotation, i.e. before I have cached data, when working with a large file) there can be a noticeable delay. Rather than re-architect my app to cope with this uncommon case, I'd rather just show the UIActivityIndicatorView while the app generates a cache and updates the display.
The problem is (or seems to be) that the display is not updated between the willRotateToInterfaceOrientation:duration: and the willAnimateRotationToInterfaceOrientation:duration: methods. So asking iOS to show the UIActivityIndicatorView in the willRotate method doesn't actually affect the display until after the willAnimateRotation method.
The following code illustrates the issue. When run, the activity indicator appears only very briefly and AFTER the simulateHardWorkNeededToGetDisplayInShapeBeforeRotation method has completed.
Am I missing something obvious? And if not, any smart ideas as to how I could work around this issue?
Update: While suggestions about farming the heavy lifting off to another thread etc. are generally helpful, in my particular case I kind of do want to block the main thread to do my lifting. In the app, I have a tableView all of whose heights need to be recalculated. When - which is not a very common use case or I wouldn't even be considering this approach - there are very many rows, all the new heights are calculated (and then cached) during a [tableView reloadData]. If I farm the lifting off and let the rotate proceed, then after the rotate and before the lifting, my tableView hasn't been re-loaded. In the portrait to landscape case, for example, it doesn't occupy the full width. Of course, there are other workarounds, e.g. building a tableView with just a few rows prior to the rotate and then reloading the real one over that etc.
Example code to illustrate the issue:
@implementation ActivityIndicatorViewController
@synthesize activityIndicatorView = _pgActivityIndicatorView;
@synthesize label = _pgLabel;
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
}
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration;
{
    NSLog(@"willRotate");
    [self showActivityIndicatorView];
}
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration;
{
    NSLog(@"willAnimateRotation");
    [self simulateHardWorkNeededToGetDisplayInShapeBeforeRotation];
}
- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation;
{
    NSLog(@"didRotate");
    [self hideActivityIndicatorView];
}
- (void)simulateHardWorkNeededToGetDisplayInShapeBeforeRotation;
{
    NSLog(@"Starting simulated work");
    NSDate* date = [NSDate date];
    while (fabs([date timeIntervalSinceNow]) < 2.0)
    {
        //
    }
    NSLog(@"Finished simulated work");
}
- (void)showActivityIndicatorView;
{
    NSLog(@"showActivity");
    if (![self activityIndicatorView])
    {
        UIActivityIndicatorView* activityIndicatorView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
        [self setActivityIndicatorView:activityIndicatorView];
        [[self activityIndicatorView] setCenter:[[self view] center]];
        [[self activityIndicatorView] startAnimating];
        [[self view] addSubview:[self activityIndicatorView]];
    }
    // in shipping code, an animation with delay would be used to ensure no indicator would show in the good cases
    [[self activityIndicatorView] setHidden:NO];
}
- (void)hideActivityIndicatorView;
{
    NSLog(@"hideActivity");
    [[self activityIndicatorView] setHidden:YES];
}
- (void)dealloc;
{
    [_pgActivityIndicatorView release];
    [super dealloc];
}
- (void)viewDidLoad;
{
    UILabel* label = [[UILabel alloc] initWithFrame:CGRectMake(50.0, 50.0, 0.0, 0.0)];
    [label setText:@"Activity Indicator and Rotate"];
    [label setTextAlignment:UITextAlignmentCenter];
    [label sizeToFit];
    [[self view] addSubview:label];
    [self setLabel:label];
    [label release];
}
@end
The app doesn't update the screen to show the UIActivityIndicatorView until the main run loop regains control. When a rotation event happens, the willRotate... and willAnimateRotation... methods are called in one pass through the main run loop. So you block on the hard work method before displaying the activity indicator.
To make this work, you need to push the hard work over to another thread. I would put the call to the hard work method in the willRotate... method, and have it call back to this view controller when the work is completed so the view can be updated. I would show the activity indicator in the willAnimateRotation... method. I wouldn't bother with a didRotateFrom... method. I recommend reading the Threaded Programming Guide.
Edit in response to a comment: You can effectively block user interaction by having the willAnimateRotation... method put a non-functioning interface on screen, such as a view displaying a dark overlay and the UIActivityIndicatorView. Then, when the heavy lifting is done, this overlay is removed and the interface becomes active again. The drawing code will then have the opportunity to properly add and animate the activity indicator.
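A rough sketch of that division of labour using GCD rather than manual threads; rotationOverlay and rebuildCache are hypothetical names, and the edge case where the background work finishes before the overlay is added is ignored here:
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toOrientation duration:(NSTimeInterval)duration {
    // Kick the heavy lifting onto a background queue, then update the UI when it finishes.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self rebuildCache]; // hypothetical heavy-lifting method
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.rotationOverlay removeFromSuperview]; // re-enable the interface
            [self.tableView reloadData];
        });
    });
}
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toOrientation duration:(NSTimeInterval)duration {
    // Block interaction with a dimming overlay plus spinner until the work above completes.
    self.rotationOverlay = [[UIView alloc] initWithFrame:self.view.bounds];
    self.rotationOverlay.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.5];
    UIActivityIndicatorView *spinner = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
    spinner.center = CGPointMake(CGRectGetMidX(self.rotationOverlay.bounds), CGRectGetMidY(self.rotationOverlay.bounds));
    [spinner startAnimating];
    [self.rotationOverlay addSubview:spinner];
    [self.view addSubview:self.rotationOverlay];
}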
More digging (first in Matt Neuberg's Programming iPhone 4, then in a helpful Stack Overflow question on forcing Core Animation to run its thread) and I have a solution that seems to be working well. Both Neuberg and Apple issue strong caution about this approach because of the potential for unwelcome side effects. In testing so far, it seems to be OK for my particular case.
Changing the code above as follows implements the change. The key addition is [CATransaction flush], forcing the UIActivityIndicatorView to start displaying even though the run loop won't end until after the willAnimateRotationToInterfaceOrientation:duration: method completes.
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration;
{
    NSLog(@"willRotate");
    [self showActivityIndicatorView];
    [CATransaction flush]; // this starts the animation right away, w/o waiting for end of the run loop
}
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration;
{
    NSLog(@"willAnimateRotation");
    [self simulateHardWorkNeededToGetDisplayInShapeBeforeRotation];
    [self hideActivityIndicatorView];
}
- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromInterfaceOrientation;
{
    NSLog(@"didRotate");
}
Try performing your work on a second thread after showing the activity view.
[self showActivityIndicatorView];
[self performSelector:@selector(simulateHardWorkNeededToGetDisplayInShapeBeforeRotation) withObject:nil afterDelay:0.01];
Either execute the heavy lifting in a background thread and post the results to the foreground thread to update the UI (UIKit is only thread safe since iOS 4.0):
[self performSelectorInBackground:@selector(simulateHardWorkNeededToGetDisplayInShapeBeforeRotation) withObject:nil]
Or you can schedule the heavy lifting method to be executed after the rotation has taken place:
[self performSelector:@selector(simulateHardWorkNeededToGetDisplayInShapeBeforeRotation) withObject:nil afterDelay:0.4]
But these are only hacks; the real solution is to have proper background processing if your UI needs heavy processing to get updated, whether in portrait or landscape. NSOperation and NSOperationQueue are a good place to start.
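A minimal sketch of the NSOperationQueue route; workQueue and rebuildCache are hypothetical names:
// e.g. in viewDidLoad:
self.workQueue = [[NSOperationQueue alloc] init]; // hypothetical property holding the background queue

// when the heavy lifting is needed:
[self.workQueue addOperationWithBlock:^{
    [self rebuildCache]; // hypothetical heavy work, runs off the main thread
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self.tableView reloadData]; // UI updates back on the main thread
        [self hideActivityIndicatorView];
    }];
}];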

iOS: How to access the `UIKeyboard`?

I want to get a pointer reference, UIKeyboard *keyboard, to the keyboard on screen so that I can add a transparent subview to it, covering it completely, to achieve the effect of disabling the UIKeyboard without hiding it.
In doing this, can I assume that there's only one UIKeyboard on the screen at a time? I.e., is it a singleton? Where's the method [UIKeyboard sharedInstance]? Brownie points if you implement that method via a category. Or, even more brownie points if you convince me why it's a bad idea to assume only one keyboard and give me a better solution.
Try this:
// my func
- (void)findKeyboard {
    // Locate non-UIWindow.
    UIWindow *keyboardWindow = nil;
    for (UIWindow *testWindow in [[UIApplication sharedApplication] windows]) {
        if (![[testWindow class] isEqual:[UIWindow class]]) {
            keyboardWindow = testWindow;
            break;
        }
    }
    // Locate UIKeyboard.
    UIView *foundKeyboard = nil;
    for (UIView *possibleKeyboard in [keyboardWindow subviews]) {
        // iOS 4 sticks the UIKeyboard inside a UIPeripheralHostView.
        if ([[possibleKeyboard description] hasPrefix:@"<UIPeripheralHostView"]) {
            possibleKeyboard = [[possibleKeyboard subviews] objectAtIndex:0];
        }
        if ([[possibleKeyboard description] hasPrefix:@"<UIKeyboard"]) {
            foundKeyboard = possibleKeyboard;
            break;
        }
    }
}
How about using -[UIApplication beginIgnoringInteractionEvents]?
Also, another trick to get the view containing the keyboard is to initialize a dummy view with CGRectZero and set it as the inputAccessoryView of your UITextField or UITextView. Then get its superview. Still, such shenanigans are private/undocumented, but I've heard of apps doing that and getting accepted anyhow. I mean, how else would Instagram be able to make their comment keyboard interactive (dismiss on swipe) like the Messages keyboard?
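A sketch of that trick; once the keyboard has appeared, the accessory view's superview (or its window) is the undocumented hierarchy that hosts the keyboard, so treat this as fragile:
UIView *probe = [[UIView alloc] initWithFrame:CGRectZero]; // invisible dummy accessory view
textField.inputAccessoryView = probe; // textField is whatever field brings up the keyboard
// ... later, after the keyboard has been shown (e.g. in a UIKeyboardDidShowNotification handler):
UIView *keyboardHost = probe.superview; // the private container the keyboard lives in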
I found that developerdoug's answer wasn't working on iOS 7, but by modifying things slightly I managed to get access to what I needed. Here's the code I used:
- (UIView*)findKeyboard
{
    UIView *keyboard = nil;
    for (UIWindow* window in [UIApplication sharedApplication].windows)
    {
        for (UIView *possibleKeyboard in window.subviews)
        {
            if ([[possibleKeyboard description] hasPrefix:@"<UIPeripheralHostView"])
            {
                keyboard = possibleKeyboard;
                break;
            }
        }
    }
    return keyboard;
}
From what I could make out, in iOS 7 the keyboard is composed of a UIPeripheralHostView containing two subviews: a UIKBInputBackdropView (which provides the blur effect on whatever's underneath the keyboard) and a UIKeyboardAutomatic (which provides the character keys). Manipulating the UIPeripheralHostView seems to be equivalent to manipulating the entire keyboard.
Disclaimer: I have no idea whether Apple will accept an app that uses this technique, nor whether it will still work in future SDKs.
Be aware, Apple has made it clear that applications which modify private view hierarchies without explicit approval beforehand will be rejected. Take a look in the Apple Developer Forums for various developers' experience on the issue.
If you're just trying to disable the keyboard (prevent it from receiving touches), you might try adding a transparent UIView that is the full size of the screen for the current orientation. If you add it as a subview of the main window, it might work. Apple hasn't made any public method of disabling the keyboard that I'm aware of - you might want to use one of your support incidents with Apple, maybe they will let you in on the solution.
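A quick sketch of that blocker-view idea; note the keyboard actually lives in its own window, so covering the key window may not intercept its touches - this is only a starting point:
UIWindow *keyWindow = [UIApplication sharedApplication].keyWindow;
UIView *blocker = [[UIView alloc] initWithFrame:keyWindow.bounds];
blocker.backgroundColor = [UIColor clearColor]; // transparent, but still hit-testable
blocker.userInteractionEnabled = YES; // swallows touches aimed at views underneath
[keyWindow addSubview:blocker]; // remove it later to re-enable input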
For an app I am currently developing I am using a really quick and easy method:
Add this in the header file:
// Add in the interface
UIWindow * _window;
// Add as a property
@property (strong, nonatomic) IBOutlet UIView * _keyboard;
Then add this code at the bottom of the keyboardWillShow function:
- (void)keyboardWillShow:(NSNotification *)notification {
    .... // other keyboard will show code //
    _window = [UIApplication sharedApplication].windows.lastObject;
    [NSTimer scheduledTimerWithTimeInterval:0.05
                                     target:self
                                   selector:@selector(allocateKeyboard)
                                   userInfo:nil
                                    repeats:NO];
}
This code detects when the keyboard is raised and then stores the current window. I then added a timer to grab the keyboard, as there were some issues when grabbing it immediately.
- (void)allocateKeyboard {
    if (!_keyboard) {
        if (_window.subviews.count) {
            // The keyboard is always the 0th subview
            _keyboard = _window.subviews[0];
        }
    }
}
We now have a reference to the keyboard, which gives you direct "access" to it as the question asks.
Hope this helps
Under iOS 8 it appears you have to jump down the chain more than in the past. The following works for me to get the keyboard, although with custom keyboards available and such I wouldn't rely on this working unless you're running in a controlled environment.
- (UIView *)findKeyboard {
    for (UIWindow* window in [UIApplication sharedApplication].windows) {
        UIView *inputSetContainer = [self viewWithPrefix:@"<UIInputSetContainerView" inView:window];
        if (inputSetContainer) {
            UIView *inputSetHost = [self viewWithPrefix:@"<UIInputSetHostView" inView:inputSetContainer];
            if (inputSetHost) {
                UIView *kbinputbackdrop = [self viewWithPrefix:@"<_UIKBCompatInput" inView:inputSetHost];
                if (kbinputbackdrop) {
                    UIView *theKeyboard = [self viewWithPrefix:@"<UIKeyboard" inView:kbinputbackdrop];
                    return theKeyboard;
                }
            }
        }
    }
    return nil;
}
- (UIView *)viewWithPrefix:(NSString *)prefix inView:(UIView *)view {
    for (UIView *subview in view.subviews) {
        if ([[subview description] hasPrefix:prefix]) {
            return subview;
        }
    }
    return nil;
}

Customizing UIPickerView

I have a requirement where UIPickerView should be customized. The picker view should look something like this:
The application which has customized the pickerView similarly is:
http://itunes.apple.com/us/app/convert-the-unit-calculator/id325758140?mt=8
I have tried removing the default picker view selection bar by resetting the showsSelectionIndicator property of the UIPickerView and adding an overlay view. But the problem is that the overlay view should be transparent so that the wheel behind it is visible, whereas the other application somehow does it even though its selection bar is not transparent.
Any ideas on how to achieve this feat?
Thanks and Regards,
Raj
You're going to have to write your own from scratch on this one. UIPickerView isn't customizable. At. All. Sucks, but that's how it is. I'd start out creating a UITableView, layering a frame around it, and trying to mimic UIPickerView.
I think you can print the subviews under the picker and modify them.
The UIPickerView creates its subviews after the data is first loaded.
With performSelector:withObject:afterDelay:, you can remove them or insert whatever you need.
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self refreshClock];
    [timePicker_ performSelector:@selector(leakSelf) withObject:nil afterDelay:0];
    [self performSelector:@selector(setupTimePicker) withObject:nil afterDelay:0];
}
- (void)setupTimePicker {
    [[timePicker_ subviewOfClass:NSClassFromString(@"_UIPickerViewTopFrame")] removeFromSuperview];
    CGRect frame = timePicker_.frame;
    frame.size.width += 20;
    frame.origin.x -= 10;
    timePicker_.frame = frame;
}

@implementation UIView (UIViewDebugTool)
- (void)leakSubview:(UIView*)subroot atDepth:(NSUInteger)dep {
    DLog( @"trace sub view[%u] %@", dep, [subroot description] );
    CALayer* layer = subroot.layer;
    for( CALayer* l in layer.sublayers ) {
        DLog( @"layer[%u] %@", dep, l );
    }
    for( UIView* v in subroot.subviews ) {
        [self leakSubview:v atDepth:dep+1];
    }
}
- (void)leakSelf {
    NSUInteger dep = 0;
    [self leakSubview:self atDepth:dep];
}
@end