I have a UIButton that I'm adding dynamically using content parsed from an XML file (it's also getting cached).
The first time I run the app, the button's action isn't getting called - but its image and everything else loads just fine. The second time I run the app, the button works.
Any clue on why the button's action doesn't get called the first time I run the app?
- (void)fetchHeader
{
[[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];
// Initiate the request...
channel1 = [[FeedStore sharedStore] fetchFeaturedHeaderWithCompletion:
^(RSSChannel *obj, NSError *err) {
if(!err) {
[[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
// Set our channel to the merged one
channel1 = obj;
RSSItem *d = [[channel1 items] objectAtIndex:0];
RSSItem *c = [[channel1 items] objectAtIndex:1];
NSString *param = [d photoURL]; // the URL from the XML
NSString *param1 = [c photoURL]; // the URL from the XML
featured1 = [[UIButton alloc] init];
[featured1 addTarget:self action:@selector(featuredButtonPress:) forControlEvents:UIControlEventTouchUpInside];
[featured1 setFrame:CGRectMake(18, 20, 123, 69)];
[featured1 setImageWithURL:[NSURL URLWithString:param] placeholderImage:[UIImage imageNamed:@"featuredheaderbg.png"]];
featured1.tag = 1;
[[self view] addSubview:featured1];
}
}];
}
The issue was that a UIView was covering up the button below it. Because the view was transparent, I didn't realize it was covering anything up.
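For anyone hitting the same thing, here is a minimal sketch of the fix (overlayView is just a placeholder name for whatever transparent view sits on top of the button):
// Either let touches pass straight through the covering view...
overlayView.userInteractionEnabled = NO;
// ...or make sure the button ends up above it after being added:
[[self view] bringSubviewToFront:featured1];
// A quick way to see what is actually receiving the touch at the button's location:
UIView *hit = [[self view] hitTest:CGPointMake(30, 30) withEvent:nil];
NSLog(@"hit view: %@", hit);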
I wrote a SpriteKit app last year targeting 10.10 (Yosemite). Everything ran fine, but when I upgraded to El Capitan this year it freezes in one particular spot. It's a tough problem to diagnose because there is a lot of code, so I'll try to be as descriptive as possible. I've also created a YouTube screen recording of the issue.
App's Purpose
The app is basically a leaderboard that I created for a tournament at the school that I teach at. When the app launches, it goes to the LeaderboardScene scene and displays the leaderboard.
The app stays in this scene for the rest of the time. The sword that says "battle" is a button. When it is pressed it creates an overlay and shows the two students that will be facing each other in video form (SKVideoNode).
The videos play continuously and the user of the app eventually clicks on whichever student wins that match and then the overlay is removed from the scene and the app shows the leaderboard once again.
Potential Reasons For High CPU
Playing video: Normally the overlay shows video, but I also created an option where still images are loaded instead of video just in case I had a problem. Whether I load images or video, the CPU usage is super high.
Here's some of the code that is most likely causing this issue:
LeaderboardScene.m
//when the sword button is pressed it switches to the LB_SHOW_VERSUS_SCREEN state
-(void) update:(NSTimeInterval)currentTime {
switch (_leaderboardState) {
...
case LB_SHOW_VERSUS_SCREEN: { //Case for "Versus Screen" overlay
[self showVersusScreen];
break;
}
case LB_CHOOSE_WINNER: {
break;
}
default:
break;
}
}
...
//sets up the video overlay
-(void) showVersusScreen {
//doesn't allow the matchup screen to pop up until the producer FLASHING actions are complete
if ([_right hasActions] == NO) {
[self addChild:_matchup]; //_matchup is an object from the Matchup.m class
NSArray *producers = @[_left, _right];
[_matchup createRound:_round WithProducers:producers VideoType:YES]; //creates the matchup with VIDEO
//[_matchup createRound:_round WithProducers:producers VideoType:NO]; //creates the matchup without VIDEO
_leaderboardState = LB_CHOOSE_WINNER;
}
}
Matchup.m
//more setting up of the overlay
-(void) createRound:(NSString*)round WithProducers:(NSArray*)producers VideoType:(bool)isVideoType {
SKAction *wait = [SKAction waitForDuration:1.25];
[self loadSoundsWithProducers:producers];
[self runAction:wait completion:^{ //resets the overlay
_isVideoType = isVideoType;
[self removeAllChildren];
[self initBackground];
[self initHighlightNode];
[self initOutline];
[self initText:round];
if (_isVideoType)
[self initVersusVideoWithProducers:producers]; //this is selected
else
[self initVersusImagesWithProducers:producers];
[self animationSequence];
_currentSoundIndex = 0;
[self playAudio];
}];
}
...
//creates a VersusSprite object which represents each of the students
-(void) initVersusVideoWithProducers:(NSArray*)producers {
Producer *left = (Producer*)[producers objectAtIndex:0];
Producer *right = (Producer*)[producers objectAtIndex:1];
_leftProducer = [[VersusSprite alloc] initWithProducerVideo:left.name LeftSide:YES];
_leftProducer.name = left.name;
_leftProducer.zPosition = 5;
_leftProducer.position = CGPointMake(-_SCREEN_WIDTH/2, _SCREEN_HEIGHT/3);
[self addChild:_leftProducer];
_rightProducer = [[VersusSprite alloc] initWithProducerVideo:right.name LeftSide:NO];
_rightProducer.name = right.name;
_rightProducer.zPosition = 5;
_rightProducer.xScale = -1;
_rightProducer.position = CGPointMake(_SCREEN_WIDTH + _SCREEN_WIDTH/2, _SCREEN_HEIGHT/3);
[self addChild:_rightProducer];
}
VersusSprite.m
-(instancetype) initWithProducerVideo:(NSString*)fileName LeftSide:(bool)isLeftSide {
if (self = [super init]) {
_isVideo = YES;
_isLeftSide = isLeftSide;
self.name = fileName;
[self initVideoWithFileName:fileName]; //creates videos
[self addProducerLabel];
}
return self;
}
...
//creates the videos for the VersusSprite
-(void) initVideoWithFileName:(NSString*)fileName {
NSArray *paths = NSSearchPathForDirectoriesInDomains (NSDesktopDirectory, NSUserDomainMask, YES);
NSString *desktopPath = [paths objectAtIndex:0];
NSString *resourcePath = [NSString stringWithFormat:@"%@/vs", desktopPath];
NSString *videoPath = [NSString stringWithFormat:@"%@/%@.mp4", resourcePath, fileName];
NSURL *fileURL = [NSURL fileURLWithPath:videoPath];
AVPlayer *avPlayer = [[AVPlayer alloc] initWithURL:fileURL];
_vid = [SKVideoNode videoNodeWithAVPlayer:avPlayer];
//[_vid setScale:1];
[self addChild:_vid];
[_vid play];
avPlayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:[avPlayer currentItem]];
}
//used to get the videos to loop
- (void)playerItemDidReachEnd:(NSNotification *)notification {
AVPlayerItem *p = [notification object];
[p seekToTime:kCMTimeZero];
}
UPDATE
The issue has been identified and is very specific to my project, so it probably won't help anyone else unfortunately. When clicking on the "sword" icon that says "Battle", the scene gets blurred and then the overlay is put on top of it. The blurring occurs on a background thread as you'll see below:
[self runAction:[SKAction waitForDuration:1.5] completion:^{
[self blurSceneProgressivelyToValue:15 WithDuration:1.25];
}];
I'll have to handle the blur in another way or just remove it altogether.
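For what it's worth, one alternative I'm considering is to let SpriteKit apply a one-shot blur on the main thread via an SKEffectNode instead of blurring progressively on a background thread. This is just a sketch, not my original blurSceneProgressivelyToValue:WithDuration: code, and _contentNode is a placeholder for whatever node holds the leaderboard content:
SKEffectNode *blurNode = [SKEffectNode node];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:@15 forKey:@"inputRadius"];
blurNode.filter = blurFilter;
blurNode.shouldEnableEffects = NO;  // keep the scene sharp until the overlay appears
[self addChild:blurNode];
[blurNode addChild:_contentNode];   // re-parent the existing leaderboard content
// When the versus overlay is shown:
blurNode.shouldEnableEffects = YES; // rendered by SpriteKit itself, no background-thread work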
In my app, I have a button which, when pressed, lets you watch a YouTube video (a movie trailer) within the app, without launching Safari. Below you can see a code snippet. This code works perfectly fine under iOS 5. However, in iOS 6, the UIButton in findButtonInView is always nil. Any ideas what might be the reason?
youtubeWebView.delegate = self;
youtubeWebView.backgroundColor = [UIColor clearColor];
NSString* embedHTML = #" <html><head> <style type=\"text/css\"> body {background-color: transparent; color: white; }</style></head><body style=\"margin:0\"><embed id=\"yt\" src=\"%#?version=3&app=youtube_gdata\" type=\"application/x-shockwave-flash\"width=\"%0.0f\" height=\"%0.0f\"></embed></body></html>";
NSURL *movieTrailer;
if(tmdbMovie) movieTrailer = tmdbMovie.trailer;
else movieTrailer = [NSURL URLWithString:movie.trailerURLString];
NSString *html = [NSString stringWithFormat:embedHTML,
movieTrailer,
youtubeWebView.frame.size.width,
youtubeWebView.frame.size.height];
[youtubeWebView loadHTMLString:html baseURL:nil];
[self addSubview:youtubeWebView];
- (void)webViewDidFinishLoad:(UIWebView *)_webView {
isWatchTrailerBusy = NO;
[manager displayNetworkActivityIndicator:NO];
//stop the activity indicator and enable the button
UIButton *b = [self findButtonInView:_webView];
//TODO this returns null in case of iOS 6, redirect to the youtube app
if(b == nil) {
NSURL *movieTrailer;
if(tmdbMovie) {
movieTrailer = tmdbMovie.trailer;
} else {
movieTrailer = [NSURL URLWithString:movie.trailerURLString];
}
[[UIApplication sharedApplication] openURL:movieTrailer];
} else {
[b sendActionsForControlEvents:UIControlEventTouchUpInside];
}
}
- (UIButton *)findButtonInView:(UIView *)view {
UIButton *button = nil;
if ([view isMemberOfClass:[UIButton class]]) {
return (UIButton *)view;
}
if (view.subviews && [view.subviews count] > 0) {
for (UIView *subview in view.subviews) {
button = [self findButtonInView:subview];
if (button) return button;
}
}
return button;
}
Apple changed how YouTube videos are handled in iOS6. I also was using the findButtonInView method but that no longer works.
I've discovered the following seems to work (untested in iOS < 6):
- (void)viewDidLoad
{
[super viewDidLoad];
self.webView.mediaPlaybackRequiresUserAction = NO;
}
- (void)playVideo
{
self.autoPlay = YES;
// Replace #"y8Kyi0WNg40" with your YouTube video id
[self.webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:[NSString stringWithFormat:#"%#%#", #"http://www.youtube.com/embed/", #"y8Kyi0WNg40"]]]];
}
// UIWebView delegate
- (void)webViewDidFinishLoad:(UIWebView *)webView {
if (self.autoPlay) {
self.autoPlay = NO;
[self clickVideo];
}
}
- (void)clickVideo {
[self.webView stringByEvaluatingJavaScriptFromString:@"\
function pollToPlay() {\
var vph5 = document.getElementById(\"video-player\");\
if (vph5) {\
vph5.playVideo();\
} else {\
setTimeout(pollToPlay, 100);\
}\
}\
pollToPlay();\
"];
}
dharmabruce's solution works great on iOS 6, but in order to make it work on iOS 5.1, I had to substitute the JavaScript click() with playVideo(), and I had to set the UIWebView's mediaPlaybackRequiresUserAction to NO.
Here's the modified code:
- (void)viewDidLoad
{
[super viewDidLoad];
self.webView.mediaPlaybackRequiresUserAction = NO;
}
- (void)playVideo
{
self.autoPlay = YES;
// Replace @"y8Kyi0WNg40" with your YouTube video id
[self.webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:[NSString stringWithFormat:@"%@%@", @"http://www.youtube.com/embed/", @"y8Kyi0WNg40"]]]];
}
// UIWebView delegate
- (void)webViewDidFinishLoad:(UIWebView *)webView {
if (self.autoPlay) {
self.autoPlay = NO;
[self clickVideo];
}
}
- (void)clickVideo {
[self.webView stringByEvaluatingJavaScriptFromString:@"\
function pollToPlay() {\
var vph5 = document.getElementById(\"video-player-html5\");\
if (vph5) {\
vph5.playVideo();\
} else {\
setTimeout(pollToPlay, 100);\
}\
}\
pollToPlay();\
"];
}
I have solved the problem with the dharmabruce and JimmY2K solutions, namely that the video only plays the first time, as mentioned by Dee.
Here is the code(including event when a video ends):
- (void)embedYouTube:(NSString *)urlString frame:(CGRect)frame {
videoView = [[UIWebView alloc] init];
videoView.frame = frame;
videoView.delegate = self;
[videoView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:urlString]]];// this format: http://www.youtube.com/embed/xxxxxx
[self.view addSubview:videoView];
//https://github.com/nst/iOS-Runtime-Headers/blob/master/Frameworks/MediaPlayer.framework/MPAVController.h
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playbackDidEnd:)
name:@"MPAVControllerItemPlaybackDidEndNotification" // @"MPAVControllerPlaybackStateChangedNotification"
object:nil];
}
- (void)webViewDidFinishLoad:(UIWebView *)webView {
//http://stackoverflow.com/a/12504918/860488
[videoView stringByEvaluatingJavaScriptFromString:@"\
var intervalId = setInterval(function() { \
var vph5 = document.getElementById(\"video-player\");\
if (vph5) {\
vph5.playVideo();\
clearInterval(intervalId);\
} \
}, 100);"];
}
- (void)playbackDidEnd:(NSNotification *)note
{
[videoView removeFromSuperview];
videoView.delegate = nil;
videoView = nil;
}
This is untested and I'm not sure if it is the problem here, because I don't know the URL you're using. But I stumbled upon the following:
As of iOS 6, embedded YouTube URLs in the form of http://www.youtube.com/watch?v=oHg5SJYRHA0 will no longer work. These URLs are for viewing the video on the YouTube site, not for embedding in web pages. Instead, the format that should be used is described here: https://developers.google.com/youtube/player_parameters.
from http://thetecherra.com/2012/09/12/ios-6-gm-released-full-changelog-inside/
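For illustration, converting a watch URL into the embed form is straightforward. The video id here is just the one from the example above, and this naive split assumes v= is the only query parameter:
NSString *watchURL = @"http://www.youtube.com/watch?v=oHg5SJYRHA0";
NSString *videoID = [[watchURL componentsSeparatedByString:@"v="] lastObject];
NSString *embedURL = [NSString stringWithFormat:@"http://www.youtube.com/embed/%@", videoID];
// embedURL is now http://www.youtube.com/embed/oHg5SJYRHA0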
I'm working on this same general problem and I'll share the approach I've come up with. I have some remaining kinks to work out for my particular situation, but the general approach may work for you, depending on your UI requirements.
If you structure your HTML embed code so that the YouTube player covers the entire bounds of the UIWebView, that UIWebView effectively becomes a big play button - a touch anywhere on its surface will make the video launch into the native media player. To create your button, simply cover the UIWebView with a UIView that has userInteractionEnabled set to NO. Does it matter what size the UIWebView is in this situation? Not really, since the video will open into the native player anyway.
If you want the user to perceive an area of your UI as being where the video is going to "play", then you can grab the thumbnail for the video from YouTube and position that wherever. If need be, you could put a second UIWebView behind that view, so if the user touches that view, the video would also launch - or just spread the other UIWebView out so that it is "behind" both of these views.
If you post a screenshot of your UI, I can make some more specific suggestions, but the basic idea is to turn the UIWebView into a button by putting a view in front of it that has user interaction disabled.
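A minimal sketch of that layering (buttonFrame, embedHTML and thumbnailImage are placeholders for your own values):
// The web view loads the embed HTML and fills the whole tappable area.
UIWebView *trailerWebView = [[UIWebView alloc] initWithFrame:buttonFrame];
[trailerWebView loadHTMLString:embedHTML baseURL:nil];
[self.view addSubview:trailerWebView];
// A passive overlay that draws the "button" (e.g. the video thumbnail) but
// lets every touch fall through to the web view underneath it.
UIImageView *thumbnailView = [[UIImageView alloc] initWithFrame:buttonFrame];
thumbnailView.image = thumbnailImage;
thumbnailView.userInteractionEnabled = NO;
[self.view addSubview:thumbnailView];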
I've found that the code below works for both iOS 5 & iOS 6. In my example, I have a text field that contains the URL to the video. I first convert the formats to the same starting string 'http://m...'. Then I check to see if the format is the old format with 'watch?v=' and replace it with the new format. I've tested this on an iPhone with iOS 5 and in the simulators for iOS 5 & iOS 6:
-(IBAction)loadVideoAction {
[activityIndicator startAnimating];
NSString *urlString = textFieldView.text;
urlString = [urlString stringByReplacingOccurrencesOfString:@"http://www.youtube" withString:@"http://m.youtube"];
urlString = [urlString stringByReplacingOccurrencesOfString:@"http://m.youtube.com/watch?v=" withString:@""];
urlString = [NSString stringWithFormat:@"%@%@", @"http://www.youtube.com/embed/", urlString];
[webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:urlString]]];
}
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
[activityIndicator stopAnimating];
}
I believe the problem is that when webViewDidFinishLoad is triggered, the UIButton has not been added to the view yet. I implemented a small delay to find the button after the callback returns, and it works for me on iOS 5 and iOS 6. The drawback is that there is no guarantee the UIButton is ready to be found after the delay.
- (void)autoplay {
UIButton *b = [self findButtonInView:webView];
[b sendActionsForControlEvents:UIControlEventTouchUpInside];
}
- (void)webViewDidFinishLoad:(UIWebView *)_webView {
[self performSelector:@selector(autoplay) withObject:nil afterDelay:0.3];
}
I had to use the URL http://www.youtube.com/watch?v=videoid; this is the only way it would work for me.
- (void) performTapInView: (UIView *) view {
BOOL found = NO;
if ([view gestureRecognizers] != nil) {
for (UIGestureRecognizer *gesture in [view gestureRecognizers]) {
if ([gesture isKindOfClass: [UITapGestureRecognizer class]]) {
if ([gesture.view respondsToSelector: @selector(_singleTapRecognized:)]) {
found = YES;
[gesture.view performSelector: @selector(_singleTapRecognized:) withObject: gesture afterDelay: 0.07];
break;
}
}
}
}
if (!found) {
for (UIView *v in view.subviews) {
[self performTapInView: v];
}
}
}
#pragma mark - UIWebViewDelegate methods
- (void) webViewDidFinishLoad: (UIWebView *) webView {
if (findWorking) {
return;
}
findWorking = YES;
[self performTapInView: webView];
}
I am asking this question regarding my questions Display photolibrary images in an effectual way iPhone and Highly efficient UITableView "cellForRowIndexPath" method to bind the PhotoLibrary images.
So I would like to request that answers are not duplicated to this one without reading the below details :)
Let's come to the issue,
I have researched my above-mentioned issue in detail, and I have found the documentation about operation queues here.
So I have created one sample application to display seven photo-library images using operation queues through ALAsset blocks.
Here are the sample application details.
Step 1:
In the NSOperationalQueueViewController viewDidLoad method, I have retrieved all the photo-gallery ALAsset URLs in to an array named urlArray.
Step 2:
After all the URLs are added to the urlArray, the if(group != nil) condition becomes false in assetGroupEnumerator, so I create an NSOperationQueue, create seven UIImageViews in a for loop, create an instance of my NSOperation subclass with the corresponding image view and URL for each one, and add them to the NSOperationQueue.
See my NSOperation subclass here.
See my implementation (VierwController) class here.
Let's come to the issue.
It is not displaying all seven images consistently. Some of the images are missing, and which ones are missing changes each run (one time the sixth and seventh don't display, another time only the second and third). The console log displays Could not find photo pic number. However, the URLs are logged properly.
You can see the log details here.
Are there any mistakes in my classes?
Also, when going through the above-mentioned operation queue documentation, I read about NSBlockOperation. Do I need to implement NSBlockOperation instead of NSOperation while dealing with ALAsset blocks?
The NSBlockOperation description says
A class you use as-is to execute one or more block objects
concurrently. Because it can execute more than one block, a block
operation object operates using a group semantic; only when all of the
associated blocks have finished executing is the operation itself
considered finished.
How can I implement the NSBlockOperation with ALAsset block regarding my sample application?
I have gone through the Stack Overflow question Learning NSBlockOperation. However, I didn't get any idea of how to implement NSBlockOperation with an ALAsset block!
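For reference, here is the basic NSBlockOperation pattern as I understand it so far (not yet tied to ALAsset; queue here is just an existing NSOperationQueue):
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    // first block of work
}];
[operation addExecutionBlock:^{
    // a second block; the operation finishes only when all of its blocks have run
}];
[queue addOperation:operation];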
This is a tutorial about how to access all images from the iPhone Photo Library using the ALAsset Library and show them on a UIScrollView like the iPhone Simulator does.
First of all add AssetsLibrary.framework to your project.
Then, in your viewController.h file, add the #import <AssetsLibrary/AssetsLibrary.h> header.
This is your viewController.h file
#import <UIKit/UIKit.h>
#import <AssetsLibrary/AssetsLibrary.h>
#import "AppDelegate.h"
@interface ViewController : UIViewController <UIScrollViewDelegate>
{
ALAssetsLibrary *assetsLibrary;
NSMutableArray *groups;
ALAssetsGroup *assetsGroup;
// I will show all images on `UIScrollView`
UIScrollView *myScrollView;
UIActivityIndicatorView *activityIndicator;
NSMutableArray *assetsArray;
// Will handle thumbnail of images
NSMutableArray *imageThumbnailArray;
// Will handle original images
NSMutableArray *imageOriginalArray;
UIButton *buttonImage;
}
-(void)displayImages;
-(void)loadScrollView;
@end
And this is your viewController.m file -
viewWillAppear:
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@implementation ViewController
- (void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
assetsArray = [[NSMutableArray alloc]init];
imageThumbnailArray = [[NSMutableArray alloc]init];
imageOriginalArray = [[NSMutableArray alloc]init];
myScrollView = [[UIScrollView alloc]initWithFrame:CGRectMake(0.0, 0.0, 320.0, 416.0)];
myScrollView.delegate = self;
myScrollView.contentSize = CGSizeMake(320.0, 416.0);
myScrollView.backgroundColor = [UIColor whiteColor];
activityIndicator = [[UIActivityIndicatorView alloc]initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
activityIndicator.center = myScrollView.center;
[myScrollView addSubview:activityIndicator];
[self.view addSubview:myScrollView];
[activityIndicator startAnimating];
}
viewDidAppear:
-(void)viewDidAppear:(BOOL)animated
{
if (!assetsLibrary) {
assetsLibrary = [[ALAssetsLibrary alloc] init];
}
if (!groups) {
groups = [[NSMutableArray alloc] init];
}
else {
[groups removeAllObjects];
}
ALAssetsLibraryGroupsEnumerationResultsBlock listGroupBlock = ^(ALAssetsGroup *group, BOOL *stop) {
//NSLog(#"group %#",group);
if (group) {
[groups addObject:group];
//NSLog(#"groups %#",groups);
} else {
//Call display Images method here.
[self displayImages];
}
};
ALAssetsLibraryAccessFailureBlock failureBlock = ^(NSError *error) {
NSString *errorMessage = nil;
switch ([error code]) {
case ALAssetsLibraryAccessUserDeniedError:
case ALAssetsLibraryAccessGloballyDeniedError:
errorMessage = #"The user has declined access to it.";
break;
default:
errorMessage = #"Reason unknown.";
break;
}
};
[assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:listGroupBlock failureBlock:failureBlock];
}
And this is the displayImages: method body:
-(void)displayImages
{
// NSLog(#"groups %d",[groups count]);
for (int i = 0 ; i< [groups count]; i++) {
assetsGroup = [groups objectAtIndex:i];
if (!assetsArray) {
assetsArray = [[NSMutableArray alloc] init];
}
else {
[assetsArray removeAllObjects];
}
ALAssetsGroupEnumerationResultsBlock assetsEnumerationBlock = ^(ALAsset *result, NSUInteger index, BOOL *stop) {
if (result) {
[assetsArray addObject:result];
}
};
ALAssetsFilter *onlyPhotosFilter = [ALAssetsFilter allPhotos];
[assetsGroup setAssetsFilter:onlyPhotosFilter];
[assetsGroup enumerateAssetsUsingBlock:assetsEnumerationBlock];
}
//Separate the thumbnail and original images
for(int i=0;i<[assetsArray count]; i++)
{
ALAsset *asset = [assetsArray objectAtIndex:i];
CGImageRef thumbnailImageRef = [asset thumbnail];
UIImage *thumbnail = [UIImage imageWithCGImage:thumbnailImageRef];
[imageThumbnailArray addObject:thumbnail];
ALAssetRepresentation *representation = [asset defaultRepresentation];
CGImageRef originalImage = [representation fullResolutionImage];
UIImage *original = [UIImage imageWithCGImage:originalImage];
[imageOriginalArray addObject:original];
}
[self loadScrollView];
}
Now you have two arrays: imageThumbnailArray and imageOriginalArray.
Use imageThumbnailArray for display on the UIScrollView, so your scrolling will not be slow, and use imageOriginalArray for an enlarged preview of the image.
The loadScrollView method below shows how to lay the images out on the UIScrollView like the iPhone Simulator does.
#pragma mark - LoadImages on UIScrollView
-(void)loadScrollView
{
float horizontal = 8.0;
float vertical = 8.0;
for(int i=0; i<[imageThumbnailArray count]; i++)
{
if((i%4) == 0 && i!=0)
{
horizontal = 8.0;
vertical = vertical + 70.0 + 8.0;
}
buttonImage = [UIButton buttonWithType:UIButtonTypeCustom];
[buttonImage setFrame:CGRectMake(horizontal, vertical, 70.0, 70.0)];
[buttonImage setTag:i];
[buttonImage setImage:[imageThumbnailArray objectAtIndex:i] forState:UIControlStateNormal];
[buttonImage addTarget:self action:@selector(buttonImagePressed:) forControlEvents:UIControlEventTouchUpInside];
[myScrollView addSubview:buttonImage];
horizontal = horizontal + 70.0 + 8.0;
}
[myScrollView setContentSize:CGSizeMake(320.0, vertical + 78.0)];
[activityIndicator stopAnimating];
[activityIndicator removeFromSuperview];
}
And here you can find which image button has been clicked -
#pragma mark - Button Pressed method
-(void)buttonImagePressed:(id)sender
{
NSLog(#"you have pressed : %d button",[sender tag]);
}
Hope this tutorial helps you and the many other users searching for the same thing. Thank you!
You have a line in your DisplayImages NSOperation subclass where you update the UI (DisplayImages.m line 54):
self.imageView.image = topicImage;
This operation queue is running on a background thread, and we know that you should only update the state of the UI on the main thread. Since setting the image of an image view is definitely updating the UI, this can be fixed simply by wrapping the call with:
dispatch_async(dispatch_get_main_queue(), ^{
self.imageView.image = topicImage;
});
This puts an asynchronous call on the main queue to update the UIImageView with the image. It's asynchronous so your other tasks can be scheduled in the background, and it's safe as it is running on the main queue - which is the main thread.
Is it just me, or has the action sheet on <img> tags been disabled in UIWebView? In Safari, for example, when you want to save an image locally, you touch and hold on the image to get an action sheet shown. But it's not working in my custom UIWebView. I mean, it is still working for <a> tags, i.e., when I touch and hold on HTML links, an action sheet shows up. But not for the <img> tags.
I've tried things like putting img { -webkit-touch-callout: inherit; } in css, which didn't work. On the other hand, when I double-tap and hold on the images, a copy-balloon shows up.
So the question is, has the default action sheet callout for <img> tags been disabled for UIWebView? If so, is there a way to re-enable it? I've googled around and seen many Q&As on how to disable it in UIWebView, so is it just me who isn't seeing the popup?
Thanks in advance!
Yes, Apple has disabled this feature (among others) in UIWebViews and kept it for Safari only.
However, you can recreate it yourself by extending this tutorial: http://www.icab.de/blog/2010/07/11/customize-the-contextual-menu-of-uiwebview/.
Once you've finished the tutorial you'll want to add a few extras so you can actually save images (which the tutorial doesn't cover).
I added an extra notification called @"TapAndHoldShortNotification" that fires after 0.3 seconds and calls a method containing just the disable-callout code (to prevent both the default menu and your own menu popping up while the page is still loading; a little bug fix).
Also, to detect images you'll need to extend JSTools.js; here's mine with the extra functions.
function MyAppGetHTMLElementsAtPoint(x,y) {
var tags = ",";
var e = document.elementFromPoint(x,y);
while (e) {
if (e.tagName) {
tags += e.tagName + ',';
}
e = e.parentNode;
}
return tags;
}
function MyAppGetLinkSRCAtPoint(x,y) {
var tags = "";
var e = document.elementFromPoint(x,y);
while (e) {
if (e.src) {
tags += e.src;
break;
}
e = e.parentNode;
}
return tags;
}
function MyAppGetLinkHREFAtPoint(x,y) {
var tags = "";
var e = document.elementFromPoint(x,y);
while (e) {
if (e.href) {
tags += e.href;
break;
}
e = e.parentNode;
}
return tags;
}
Now you can detect the user clicking on images and actually find out the image's URL they are clicking on, but we need to change the -(void)openContextualMenuAt: method to provide the extra options.
Again here's mine (I tried to copy Safari's behaviour for this):
- (void)openContextualMenuAt:(CGPoint)pt{
// Load the JavaScript code from the Resources and inject it into the web page
NSString *path = [[NSBundle mainBundle] pathForResource:@"JSTools" ofType:@"js"];
NSString *jsCode = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
[webView stringByEvaluatingJavaScriptFromString:jsCode];
// get the Tags at the touch location
NSString *tags = [webView stringByEvaluatingJavaScriptFromString:
[NSString stringWithFormat:@"MyAppGetHTMLElementsAtPoint(%i,%i);",(NSInteger)pt.x,(NSInteger)pt.y]];
NSString *tagsHREF = [webView stringByEvaluatingJavaScriptFromString:
[NSString stringWithFormat:@"MyAppGetLinkHREFAtPoint(%i,%i);",(NSInteger)pt.x,(NSInteger)pt.y]];
NSString *tagsSRC = [webView stringByEvaluatingJavaScriptFromString:
[NSString stringWithFormat:@"MyAppGetLinkSRCAtPoint(%i,%i);",(NSInteger)pt.x,(NSInteger)pt.y]];
UIActionSheet *sheet = [[UIActionSheet alloc] initWithTitle:nil delegate:self cancelButtonTitle:nil destructiveButtonTitle:nil otherButtonTitles:nil];
selectedLinkURL = @"";
selectedImageURL = @"";
// If an image was touched, add image-related buttons.
if ([tags rangeOfString:#",IMG,"].location != NSNotFound) {
selectedImageURL = tagsSRC;
if (sheet.title == nil) {
sheet.title = tagsSRC;
}
[sheet addButtonWithTitle:#"Save Image"];
[sheet addButtonWithTitle:#"Copy Image"];
}
// If a link is pressed add image buttons.
if ([tags rangeOfString:#",A,"].location != NSNotFound){
selectedLinkURL = tagsHREF;
sheet.title = tagsHREF;
[sheet addButtonWithTitle:#"Open"];
[sheet addButtonWithTitle:#"Copy"];
}
if (sheet.numberOfButtons > 0) {
[sheet addButtonWithTitle:#"Cancel"];
sheet.cancelButtonIndex = (sheet.numberOfButtons-1);
[sheet showInView:webView];
}
[selectedLinkURL retain];
[selectedImageURL retain];
[sheet release];
}
(NOTE: selectedLinkURL and selectedImageURL are declared in the .h file so they can be accessed throughout the class, for saving or opening the link later.)
So far we've just been going back over the tutorial's code making changes, but now we will move into what the tutorial doesn't cover (it stops before actually mentioning how to handle saving the images or opening the links).
To handle the users choice we now need to add the actionSheet:clickedButtonAtIndex: method.
-(void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex{
if ([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"Open"]){
[webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:selectedLinkURL]]];
}
else if ([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"Copy"]){
[[UIPasteboard generalPasteboard] setString:selectedLinkURL];
}
else if ([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"Copy Image"]){
[[UIPasteboard generalPasteboard] setString:selectedImageURL];
}
else if ([[actionSheet buttonTitleAtIndex:buttonIndex] isEqualToString:@"Save Image"]){
NSOperationQueue *queue = [NSOperationQueue new];
NSInvocationOperation *operation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(saveImageURL:) object:selectedImageURL];
[queue addOperation:operation];
[operation release];
}
}
This checks what the user wants to do and handles /most/ of them; only the "Save Image" operation needs another method to handle it. For the progress I used MBProgressHUD.
Add an MBProgressHUD *progressHud; to the interface declaration in the .h and set it up in the init method (of whatever class you're handling the webview from).
progressHud = [[MBProgressHUD alloc] initWithView:self.view];
progressHud.customView = [[[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Tick.png"]] autorelease];
progressHud.opacity = 0.8;
[self.view addSubview:progressHud];
[progressHud hide:NO];
progressHud.userInteractionEnabled = NO;
And the -(void)saveImageURL:(NSString*)url; method will actually save it to the image library.
(A better way would be to do the download through an NSURLRequest and update the progress HUD in MBProgressHUDModeDeterminate to reflect how long it'll actually take to download, but this is a more hacked-together implementation than that; a rough sketch of that approach follows the helper methods below.)
-(void)saveImageURL:(NSString*)url{
[self performSelectorOnMainThread:@selector(showStartSaveAlert) withObject:nil waitUntilDone:YES];
UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:[NSData dataWithContentsOfURL:[NSURL URLWithString:url]]], nil, nil, nil);
[self performSelectorOnMainThread:@selector(showFinishedSaveAlert) withObject:nil waitUntilDone:YES];
}
-(void)showStartSaveAlert{
progressHud.mode = MBProgressHUDModeIndeterminate;
progressHud.labelText = #"Saving Image...";
[progressHud show:YES];
}
-(void)showFinishedSaveAlert{
// Set custom view mode
progressHud.mode = MBProgressHUDModeCustomView;
progressHud.labelText = #"Completed";
[progressHud performSelector:#selector(hide:) withObject:[NSNumber numberWithBool:YES] afterDelay:0.5];
}
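For completeness, here is a rough sketch of the determinate-progress variant mentioned above. This is not the code I actually used; it assumes an NSMutableData *imageData ivar and a long long expectedLength ivar, and that the connection is started on the main thread so the delegate callbacks arrive there:
-(void)saveImageURLWithProgress:(NSString*)url{
    progressHud.mode = MBProgressHUDModeDeterminate;
    progressHud.labelText = @"Saving Image...";
    [progressHud show:YES];
    imageData = [[NSMutableData alloc] init];
    [NSURLConnection connectionWithRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:url]] delegate:self];
}
- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response{
    expectedLength = [response expectedContentLength];
}
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data{
    [imageData appendData:data];
    if (expectedLength > 0)
        progressHud.progress = (float)[imageData length] / (float)expectedLength;
}
- (void)connectionDidFinishLoading:(NSURLConnection *)connection{
    UIImageWriteToSavedPhotosAlbum([UIImage imageWithData:imageData], nil, nil, nil);
    [imageData release];
    [self showFinishedSaveAlert];
}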
And of course add [progressHud release]; to the dealloc method.
Hopefully this shows you how to add some of the options to a webView that Apple left out.
Of course you can add more things to this, like a "Read Later" option for Instapaper or an "Open In Safari" button.
(Looking at the length of this post, I'm seeing why the original tutorial left out the final implementation details.)
Edit: (updated with more info)
I was asked about the detail I glossed over at the top, the @"TapAndHoldShortNotification", so this is clarifying it.
This is my UIWindow subclass; it adds the second notification to cancel the default selection menu (because when I tried the tutorial it showed both menus).
- (void)tapAndHoldAction:(NSTimer*)timer {
contextualMenuTimer = nil;
UIView* clickedView = [self hitTest:CGPointMake(tapLocation.x, tapLocation.y) withEvent:nil];
while (clickedView != nil) {
if ([clickedView isKindOfClass:[UIWebView class]]) {
break;
}
clickedView = clickedView.superview;
}
if (clickedView) {
NSDictionary *coord = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:tapLocation.x],@"x",
[NSNumber numberWithFloat:tapLocation.y],@"y",nil];
[[NSNotificationCenter defaultCenter] postNotificationName:@"TapAndHoldNotification" object:coord];
}
}
- (void)tapAndHoldActionShort:(NSTimer*)timer {
UIView* clickedView = [self hitTest:CGPointMake(tapLocation.x, tapLocation.y) withEvent:nil];
while (clickedView != nil) {
if ([clickedView isKindOfClass:[UIWebView class]]) {
break;
}
clickedView = clickedView.superview;
}
if (clickedView) {
NSDictionary *coord = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:tapLocation.x],@"x",
[NSNumber numberWithFloat:tapLocation.y],@"y",nil];
[[NSNotificationCenter defaultCenter] postNotificationName:@"TapAndHoldShortNotification" object:coord];
}
}
- (void)sendEvent:(UIEvent *)event {
NSSet *touches = [event touchesForWindow:self];
[touches retain];
[super sendEvent:event]; // Call super to make sure the event is processed as usual
if ([touches count] == 1) { // We're only interested in one-finger events
UITouch *touch = [touches anyObject];
switch ([touch phase]) {
case UITouchPhaseBegan: // A finger touched the screen
tapLocation = [touch locationInView:self];
[contextualMenuTimer invalidate];
contextualMenuTimer = [NSTimer scheduledTimerWithTimeInterval:0.8 target:self selector:@selector(tapAndHoldAction:) userInfo:nil repeats:NO];
NSTimer *myTimer;
myTimer = [NSTimer scheduledTimerWithTimeInterval:0.2 target:self selector:@selector(tapAndHoldActionShort:) userInfo:nil repeats:NO];
break;
case UITouchPhaseEnded:
case UITouchPhaseMoved:
case UITouchPhaseCancelled:
[contextualMenuTimer invalidate];
contextualMenuTimer = nil;
break;
}
} else { // Multiple fingers are touching the screen
[contextualMenuTimer invalidate];
contextualMenuTimer = nil;
}
[touches release];
}
The notification is then handled like this:
// in -viewDidLoad
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(stopSelection:) name:@"TapAndHoldShortNotification" object:nil];
- (void)stopSelection:(NSNotification*)notification{
[webView stringByEvaluatingJavaScriptFromString:@"document.documentElement.style.webkitTouchCallout='none';"];
}
It's only a little change, but it fixes the annoying little bug where two menus appear (the standard one and yours).
Also, you could easily add iPad support by sending the touch location as the notification fires and then showing the UIActionSheet from that point, though this was written before the iPad so it doesn't include support for that.
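For what it's worth, a rough sketch of that iPad support inside openContextualMenuAt: could look like this (pt is the touch point the notification already carries):
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
    // Anchor the sheet at the touch point instead of sliding it up from the bottom.
    [sheet showFromRect:CGRectMake(pt.x, pt.y, 1, 1) inView:webView animated:YES];
} else {
    [sheet showInView:webView];
}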
After struggling non-stop for two or three days on this problem, it seems like the position is computed relative to the UIWebView's top-left corner (I am programming for iOS 7).
So, to make this work, when you get the position in the controller where your WebView is (I'll put a snippet of my code below), don't add the scroll offset.
SNIPPET - ContextualMenuAction:
- (void)contextualMenuAction:(NSNotification*)notification {
// Load javascript
[self loadJavascript];
// Initialize the coordinates
CGPoint pt;
pt.x = [[[notification object] objectForKey:@"x"] floatValue];
pt.y = [[[notification object] objectForKey:@"y"] floatValue];
// Convert point from window to view coordinate system
pt = [self.WebView convertPoint:pt fromView:nil];
// Get PAGE and UIWEBVIEW dimensions
CGSize pageDimensions = [self.WebView documentSize];
CGSize webviewDimensions = self.WebView.frame.size;
/***** If the page is in MOBILE version *****/
if (webviewDimensions.width == pageDimensions.width) {
}
/***** If the page is in DESKTOP version *****/
else {
// convert point from view to HTML coordinate system
CGSize viewSize = [self.WebView frame].size;
// Contains the portion of the page visible in the webview (depending on the zoom)
CGSize windowSize = [self.WebView windowSize];
CGFloat factor = windowSize.width / viewSize.width;
CGFloat factorHeight = windowSize.height / viewSize.height;
NSLog(#"factor: %f", factor);
pt.x = pt.x * factor; // ** logically, we would add the offset **
pt.y = pt.y * factorHeight; // ** logically, we would add the offset **
}
NSLog(#"x: %f and y: %f", pt.x, pt.y);
NSLog(#"WINDOW: width: %f height: %f", [self.WebView windowSize].width, [self.WebView windowSize].height);
NSLog(#"DOCUMENT: width: %f height: %f", pageDimensions.width, pageDimensions.height);
[self openContextualMenuAt:pt];
}
SNIPPET - in openContextualMenuAt:
To load the correct JS function:
- (void)openContextualMenuAt:(CGPoint)pt {
// Load javascript
[self loadJavascript];
// get the Tags at the touch location
NSString *tags = [self.WebView stringByEvaluatingJavaScriptFromString:[NSString stringWithFormat:@"getHTMLTagsAtPoint(%li,%li);",(long)pt.x,(long)pt.y]];
...
}
SNIPPET - in JSTools.js:
This is the function I use to get the element touched
function getHTMLTagsAtPoint(x,y) {
var tags = ",";
var element = document.elementFromPoint(x,y);
while (element) {
if (element.tagName) {
tags += element.tagName + ',';
}
element = element.parentNode;
}
return tags;
}
SNIPPET - loadJavascript
I use this one to inject my JS code in the webview
-(void)loadJavascript {
[self.WebView stringByEvaluatingJavaScriptFromString:
[NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"JSTools" ofType:@"js"] encoding:NSUTF8StringEncoding error:nil]];
}
This part (everything I did to override the default UIActionSheet) is HEAVILY (should I say completely) based on
this post.
@Freerunning's answer is complete (I did almost everything he said in my other classes, as in the post my code is based on); the snippets I posted are just to show you more completely how my code looks.
Hope this helps! ^^
First of all thanks to Freerunnering for the great solution!
But you can do this with a UILongPressGestureRecognizer instead of a custom long-press recognizer. This makes things a bit easier to implement:
In the view controller containing the webView:
Add UIGestureRecognizerDelegate to your ViewController
let mainJavascript = "function MyAppGetHTMLElementsAtPoint(x,y) { var tags = \",\"; var e = document.elementFromPoint(x,y); while (e) { if (e.tagName) { tags += e.tagName + ','; } e = e.parentNode; } return tags; } function MyAppGetLinkSRCAtPoint(x,y) { var tags = \"\"; var e = document.elementFromPoint(x,y); while (e) { if (e.src) { tags += e.src; break; } e = e.parentNode; } return tags; } function MyAppGetLinkHREFAtPoint(x,y) { var tags = \"\"; var e = document.elementFromPoint(x,y); while (e) { if (e.href) { tags += e.href; break; } e = e.parentNode; } return tags; }"
func viewDidLoad() {
...
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(CustomViewController.longPressRecognizerAction(_:)))
self.webView.scrollView.addGestureRecognizer(longPressRecognizer)
longPressRecognizer.delegate = self
...
}
func longPressRecognizerAction(sender: UILongPressGestureRecognizer) {
if sender.state == UIGestureRecognizerState.Began {
let tapPostion = sender.locationInView(self.webView)
let tags = self.webView.stringByEvaluatingJavaScriptFromString("MyAppGetHTMLElementsAtPoint(\(tapPostion.x),\(tapPostion.y));")
let href = self.webView.stringByEvaluatingJavaScriptFromString("MyAppGetLinkHREFAtPoint(\(tapPostion.x),\(tapPostion.y));")
let src = self.webView.stringByEvaluatingJavaScriptFromString("MyAppGetLinkSRCAtPoint(\(tapPostion.x),\(tapPostion.y));")
print("tags: \(tags)\nhref: \(href)\nsrc: \(src)")
// handle the results, for example with an UIDocumentInteractionController
}
}
// Without this function, the customLongPressRecognizer would be replaced by the original UIWebView LongPressRecognizer
func gestureRecognizer(gestureRecognizer: UIGestureRecognizer, shouldRecognizeSimultaneouslyWithGestureRecognizer otherGestureRecognizer: UIGestureRecognizer) -> Bool {
return true
}
And that's it!
I've used a recipe from the iPhone Developer's Cookbook called ModalAlert in order to get some text from a user; however, when the alert is shown, the keyboard and buttons are frozen. Here is the code for the modal alert.
+(NSString *) textQueryWith: (NSString *)question prompt: (NSString *)prompt button1: (NSString *)button1 button2:(NSString *) button2
{
// Create alert
CFRunLoopRef currentLoop = CFRunLoopGetCurrent();
ModalAlertDelegate *madelegate = [[ModalAlertDelegate alloc] initWithRunLoop:currentLoop];
UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:question message:@"\n" delegate:madelegate cancelButtonTitle:button1 otherButtonTitles:button2, nil];
// Build text field
UITextField *tf = [[UITextField alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 260.0f, 30.0f)];
tf.borderStyle = UITextBorderStyleRoundedRect;
tf.tag = TEXT_FIELD_TAG;
tf.placeholder = prompt;
tf.clearButtonMode = UITextFieldViewModeWhileEditing;
tf.keyboardType = UIKeyboardTypeAlphabet;
tf.keyboardAppearance = UIKeyboardAppearanceAlert;
tf.autocapitalizationType = UITextAutocapitalizationTypeWords;
tf.autocorrectionType = UITextAutocorrectionTypeNo;
tf.contentVerticalAlignment = UIControlContentVerticalAlignmentCenter;
// Show alert and wait for it to finish displaying
[alertView show];
while (CGRectEqualToRect(alertView.bounds, CGRectZero));
// Find the center for the text field and add it
CGRect bounds = alertView.bounds;
tf.center = CGPointMake(bounds.size.width / 2.0f, bounds.size.height / 2.0f - 10.0f);
[alertView addSubview:tf];
[tf release];
// Set the field to first responder and move it into place
[madelegate performSelector:@selector(moveAlert:) withObject:alertView afterDelay: 0.7f];
// Start the run loop
CFRunLoopRun();
// Retrieve the user choices
NSUInteger index = madelegate.index;
NSString *answer = [[madelegate.text copy] autorelease];
if (index == 0) answer = nil; // assumes cancel in position 0
[alertView release];
[madelegate release];
return answer;
}
Thanks!
You should probably check whether a UITextField's userInteractionEnabled property defaults to YES or NO.
// Put the modal alert inside a new thread. This happened to me before, and this is how I fixed it.
- (void)SomeMethod {
[NSThread detachNewThreadSelector:@selector(CheckCurrentPuzzle) toTarget:self withObject:nil];
}
-(void) CheckCurrentPuzzle {
NSAutoreleasePool *pool2 = [[NSAutoreleasePool alloc] init];
// code that should be run in the new thread goes here
if ([gameBoard AreAllCellsFilled]) {
if ([gameBoard FilledWithoutWin]) {
//only show this message once per puzzle
if (![currentPuzzle showedRemovalMessage]) {
NSArray *buttons = [NSArray arrayWithObject:@"Yes"];
if ([ModalAlert ask:@"blah blah blah" withTitle:@"Incomplete Puzzle" withCancel:@"No" withButtons:buttons] == 1) {
NSLog(@"Remove The Incorrect Cells");
[gameBoard RemoveIncorrect];
} else {
[gameSounds.bloop2 play];
}
}
} else {
if ([gameBoard IsBoardComplete]) {
[self performSelectorOnMainThread:@selector(WINNER) withObject:nil waitUntilDone:false];
}
}
}
[pool2 release];
}
-(void) WINNER {
//ladies and gentlemen, we have a winner
}
I had a problem similar to this in my educational game QPlus. It bugged me because I had the "exact" same code in two related apps, and they did not have the bug. It turned out that the bug was because the selector method was not declared in the header file. I am working in Xcode 4.2.
Details below:
In .m:
tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(emailLabelPressed)];
tapRecognizer.numberOfTapsRequired = 1;
[aLabel addGestureRecognizer:tapRecognizer];
[aLabel setUserInteractionEnabled:YES];
And later in the .m:
- (void)emailLabelPressed {
//details
}
That works just fine in the simulator, but on an actual device the email interface presented modally will not edit. You can send or save as draft but no editing.
Then add this to the .h file:
- (void)emailLabelPressed;
And voila, it works on the device. Of course this was the difference with the related apps - they both had the method declared in the header file. I would classify this as an iOS bug, but being such a novice developer I wouldn't presume to know.
Based on this, you may want to verify that your selector method moveAlert: is declared in your header file.
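For example, in this case that would just mean adding something like the following to the ModalAlertDelegate header (the exact signature depends on how moveAlert: is defined in the .m):
// ModalAlertDelegate.h
- (void)moveAlert:(UIAlertView *)alertView;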
Enjoy,
Damien