How to display two UIImages one over another - iPhone

I have to display two images, one below the other. The images should look as if one is overlapping the other (more like a 3D image). I am using (I must use) the drawRect: method to display the images. I am including the code snippet that I'm using. Can anyone guide me regarding this? Your inputs would help me go a long way. Thank you.
Here coverRect contains an image, and UIImage *s is also an image:
if (columnIndex == 1) {
    coverRect = CGRectMake(41, 77, 120, 150);
    textRect = CGRectMake(31, 190, 120, 15);
    if (rowIndex != 0 && currentlyInEditingMode == NO) {
        UIImage *s = [UIImage imageNamed:@"tray_center.png"];
        [s drawInRect:CGRectMake(0, 0, s.size.width, s.size.height)];
    }
}

Use the following API:
drawInRect:(CGRect)rect blendMode:(CGBlendMode)blendMode alpha:(CGFloat)alpha
For the image on top, set the alpha value according to the opacity you require, and use 1.0 as the alpha value for the image underneath.
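For example, a minimal drawRect: sketch along these lines (the image names and the offset are hypothetical, reusing coverRect from the question):
- (void)drawRect:(CGRect)rect {
    // Image underneath, fully opaque
    UIImage *bottom = [UIImage imageNamed:@"tray_center.png"];
    [bottom drawInRect:coverRect blendMode:kCGBlendModeNormal alpha:1.0];
    // Image on top, semi-transparent so the one underneath shows through
    UIImage *top = [UIImage imageNamed:@"cover.png"];
    [top drawInRect:CGRectOffset(coverRect, 10.0, 10.0) blendMode:kCGBlendModeNormal alpha:0.6];
}
Offsetting the top image slightly while letting the bottom one show through is what produces the overlapped, pseudo-3D look.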

Related

UIImage Frame Null in NSMutableArray

I am trying to display star-rating images, and I have three star images: a full star, a half star, and an unselected, greyed-out star.
I have an array which holds the stars: for a rating of 4.5, it would hold 4 selected and 1 half. I am adding the same star objects into the array so that I do not have to create multiple instances of the stars. I have just three instances, and in my calculations I am just using addObject with those three different images, as follows:
for (int i = 0; i < ratingCount; i++) {
    if (rating >= 1)
        [self.imageViews addObject:self.selected];
    else
        [self.imageViews addObject:self.halfSelected];
    rating--;
}
I am having an issue drawing these images. In a subsequent loop, I am trying to draw them out as follows:
for (int i = 0; i < self.imageViews.count; ++i) {
    UIImageView *imageView = [self.imageViews objectAtIndex:i];
    imageView.frame = CGRectMake(i * (5 + imageWidth), 0, imageView.frame.size.width, imageView.frame.size.height);
    [self.view addSubview:imageView];
}
This crashes because imageView.frame is coming out as null. When I debugged it, it printed null <0x00000000>. Why is the frame coming out as null? The images are not printing as null, and I know that they are added to the array properly.
When I remove the imageView.frame assignment, I instead get [UIImage superview]: unrecognized selector sent to instance 0x8644590.
The images are instantiated using imageNamed in the init method. Would that cause an issue? Do they get deallocated early? That should print null when trying to inspect the image in the debugger using po, so I don't think that is the issue.
You're confusing a UIImage with a UIImageView. Your array contains images, which do not have frames, since they are not views.
You need a separate set of image views, which have the images assigned to them.
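A minimal sketch of that fix, keeping the names from the question (only full and half stars handled, as in the original loop):
for (int i = 0; i < ratingCount; i++) {
    UIImage *starImage = (rating >= 1) ? self.selected : self.halfSelected;
    // Each slot needs its own UIImageView; a view can only live in one superview at a time
    UIImageView *imageView = [[UIImageView alloc] initWithImage:starImage];
    [self.imageViews addObject:imageView];
    rating--;
}
The three shared UIImage instances are fine to reuse; it's only the views that must be distinct, because adding a view to a new superview removes it from its previous one.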

Averaging multiple UIImages

I have been searching for this answer for a while, but I haven't been able to find it.
I would like to average the pixels of 30 UIImages. To do so, I would like to use Quartz 2D instead of looping over all the pixels of all the images. It occurred to me that, in order to paint 30 images together, I should just set the alpha of each of them to 1/30. Then, after painting one on top of the other, I would get the desired effect.
The desired formula is: destPx = (img[0].px + ... + img[29].px) / 30
I have tried to achieve it using an image context and blending the images together, with no luck:
UIGraphicsBeginImageContext(CGSizeMake(sz.width, sz.height));
for (int i = 0; i < 30; i++) {
    UIImage *img = [self.delegate requestImage:self at:i];
    CGPoint coord = [self.delegate requestTranslation:self at:i];
    [img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
}
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
How could I get an averaged image of many UIImages?
I have also tried building the image with many sublayers, but I also get washed-out images.
Thanks!
Try changing the following:
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1/30];
to
[img drawAtPoint:coord blendMode:kCGBlendModeNormal alpha:1.0/30.0];
1/30 (an integer division) == 0, so you'll be drawing the images completely transparent. Adding the .0 makes it clear that you want a floating-point (CGFloat) division.

UIImage from UIView: higher than on-screen resolution?

I've got a UIView which I'm rendering to a UIImage via the typical UIGraphicsBeginImageContextWithOptions method, using a scale of 2.0 so the image output will always be the "retina display" version of what would show up onscreen, regardless of the user's actual screen resolution.
The UIView I'm rendering contains both images and text (UIImages and UILabels).  The image is appearing in the rendered UIImage at its full resolution, and looks great.  But the UILabels appear to have been rasterized at a 1.0 scale and then upscaled to 2.0, resulting in blurry text.
Is there something I'm doing wrong, or is there some way to get the text to render nice and crisp at the higher scale level?  Or is there some way to do this other than using the scaling parameter of UIGraphicsBeginImageContextWithOptions that would have better results?   Thanks!
The solution is to change the label's layer contentsScale to 2 before you draw it, then set it back immediately thereafter. I just coded up a project to verify it, and it's working just fine producing a 2x image on a normal Retina phone (simulator). [If you have a public place I can put it, let me know.]
EDIT: the extended code walks the subviews and any container UIViews to set/unset the scale:
- (IBAction)snapShot:(id)sender
{
    [self changeScaleforView:snapView scale:2];
    UIGraphicsBeginImageContextWithOptions(snapView.bounds.size, snapView.opaque, 2);
    [snapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageDisplay.image = img; // contentsScale
    imageDisplay.contentMode = UIViewContentModeScaleAspectFit;
    [self changeScaleforView:snapView scale:1];
}
- (void)changeScaleforView:(UIView *)aView scale:(CGFloat)scale
{
    [aView.subviews enumerateObjectsUsingBlock:^void(UIView *v, NSUInteger idx, BOOL *stop)
    {
        if ([v isKindOfClass:[UILabel class]]) {
            // labels get their layer scale bumped
            v.layer.contentsScale = scale;
        } else if ([v isKindOfClass:[UIImageView class]]) {
            // v.layer.contentsScale = scale; won't work for image views
            // if the image is not "@2x", you could subclass UIImageView and set the name of the @2x
            // version on it as a property, then here you would set that imageNamed as the image, then undo it later
        } else if ([v isMemberOfClass:[UIView class]]) {
            // container view: recurse into it
            [self changeScaleforView:v scale:scale];
        }
    }];
}
Try rendering to an image at double size, and then create the scaled image:
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
// Do stuff
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
newImage = [UIImage imageWithCGImage:[newImage CGImage] scale:2.0 orientation:UIImageOrientationUp];
Where:
size = realSize * scale;
I have been struggling with much the same oddities in the context of text-view-to-PDF rendering. I found out that there are some documented properties on the CALayer objects which make up the view. Maybe setting the rasterizationScale of the relevant (sub)layer(s) helps.
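A speculative sketch of that idea (untested in this context; shouldRasterize and rasterizationScale are documented CALayer properties, and label here is a hypothetical UILabel):
// Ask the label's backing layer to rasterize at 2x before rendering the snapshot
label.layer.shouldRasterize = YES;
label.layer.rasterizationScale = 2.0;
As with the contentsScale approach above, you would presumably set these before calling renderInContext: and restore them afterwards.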

Retina display of images - iPhone 3 to 4

I have developed a tile-game application for the iPhone 3.
In it, I take an image from my resources and divide it into a number of tiles using the CGImageCreateWithImageInRect(originalImage.CGImage, frame); function.
It works great on all iPhones, but now I want it to work on Retina displays as well.
So, as per this link, I have created another image at double the current image's size and renamed it by adding the suffix @2x. But the problem is that it takes only the upper half of the Retina image. I think that's because of the frame I set when using CGImageCreateWithImageInRect. So what should be done to make this work?
Any kind of help will be really appreciated.
Thanks in advance...
The problem is likely that the @2x image scale is only automatically set up properly for certain initializers of UIImage... Try loading your UIImages using code like this from Tasty Pixel. The entry at that link talks more about this issue.
Using the UIImage+TPAdditions category from the link, you'll implement it like so (after making sure that the images and their @2x counterparts are in your project):
NSString *baseImagePath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
NSString *myImagePath = [baseImagePath stringByAppendingPathComponent:@"myImage.png"]; // note no need to add @2x.png here
UIImage *myImage = [UIImage imageWithContentsOfResolutionIndependentFile:myImagePath];
Then you should be able to use CGImageCreateWithImageInRect(myImage.CGImage, frame);
Here's how I got it to work in an app I did:
// this is a method that takes a UIImage and slices it into GridSize * GridSize (here 16) tiles
#define GridSize 4
- (void)sliceImage:(UIImage *)image {
    CGSize imageSize = [image size];
    CGSize square = CGSizeMake(imageSize.width / GridSize, imageSize.height / GridSize);
    CGFloat scaleMultiplier = [image scale];
    square.width *= scaleMultiplier;
    square.height *= scaleMultiplier;
    CGFloat scale = ([self frame].size.width / GridSize) / square.width;
    CGImageRef source = [image CGImage];
    if (source != NULL) {
        for (int r = 0; r < GridSize; ++r) {
            for (int c = 0; c < GridSize; ++c) {
                CGRect slice = CGRectMake(c * square.width, r * square.height, square.width, square.height);
                CGImageRef sliceImage = CGImageCreateWithImageInRect(source, slice);
                if (sliceImage) {
                    // we have a tile (as a CGImageRef) from the source image
                    // do something with it
                    CFRelease(sliceImage);
                }
            }
        }
    }
}
The trick is using the -[UIImage scale] property to figure out how big of a rect you should be slicing.
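If you then need each tile back as a UIImage at the correct point size, one way (a sketch, not part of the original answer) is to hand the source image's scale back to UIImage inside the "do something with it" branch:
// Wrap the CGImageRef tile, preserving the source image's scale factor
UIImage *tile = [UIImage imageWithCGImage:sliceImage
                                    scale:[image scale]
                              orientation:UIImageOrientationUp];
imageWithCGImage:scale:orientation: is available from iOS 4, which is also when Retina scale factors were introduced.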

UITableViewCell has square corners with image

I have a grouped UITableView that contains several cells (just standard UITableViewCells), all of which are of UITableViewCellStyleSubtitle style. Bueno. However, when I insert images into them (using the provided imageView property), the corners on the left side become square.
Example Image http://files.lithiumcube.com/tableView.png
The code being used to assign the values into the cell is:
cell.textLabel.text = currentArticle.descriptorAndTypeAndDifferentiator;
cell.detailTextLabel.text = currentArticle.stateAndDaysWorn;
cell.imageView.image = currentArticle.picture;
and currentArticle.picture is a UIImage (also the pictures, as you can see, display just fine with the exception of the square corners).
It displays the same on my iPhone 3G, in the iPhone 4 simulator and in the iPad simulator.
What I'm going for is something similar to the UITableViewCells that Apple uses in its iTunes app.
Any ideas about what I'm doing wrong?
Thanks,
-Aaron
cell.imageView.layer.cornerRadius = 16; // 16 is just a guess
cell.imageView.clipsToBounds = YES;
This will round the UIImageView so it does not draw over the cell. It will also round all the corners of all your images, but that may be OK.
Otherwise, you will have to add your own image view that will just round the one corner. You can do that by setting up a clip region in drawRect: before calling super. Or just add your own image view that is not so close to the left edge.
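A rough sketch of that clip-region idea (hypothetical class and property names; note that UIImageView never calls drawRect: on its subclasses, so a plain UIView subclass that draws the image itself is the safer host):
// A view that draws its image with only the top-left corner rounded
@interface OneCornerImageView : UIView
@property (nonatomic, retain) UIImage *image;
@end
@implementation OneCornerImageView
- (void)drawRect:(CGRect)rect {
    UIBezierPath *clip = [UIBezierPath bezierPathWithRoundedRect:self.bounds
                                               byRoundingCorners:UIRectCornerTopLeft
                                                     cornerRadii:CGSizeMake(12.0, 12.0)];
    [clip addClip]; // everything drawn after this is clipped to the rounded path
    [self.image drawInRect:self.bounds];
}
@end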
You can add a category on UIImage and include this method:
// Return the image, but with rounded corners. Useful for masking an image
// being used in a UITableView which has the grouped style
- (UIImage *)imageWithRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
    // We need to create a CGPath to set a clipping context
    CGRect aRect = CGRectMake(0.f, 0.f, self.size.width, self.size.height);
    CGPathRef clippingPath = [UIBezierPath bezierPathWithRoundedRect:aRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, radius)].CGPath;
    // Begin drawing.
    // Starting a context with a scale of 0.0 uses the current device scale, so this doesn't
    // unnecessarily drop resolution on a Retina display.
    // Use UIGraphicsBeginImageContext(aRect.size) instead for pre-iOS 4 compatibility.
    UIGraphicsBeginImageContextWithOptions(aRect.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, clippingPath);
    CGContextClip(context);
    [self drawInRect:aRect];
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Then when you're configuring your cells, in the table view controller, call something like:
if ( *SINGLE_ROW* ) {
    // We need to clip to both corners
    cell.imageView.image = [image imageWithRoundedCorners:(UIRectCornerTopLeft | UIRectCornerBottomLeft) radius:radius];
} else if (indexPath.row == 0) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerTopLeft radius:radius];
} else if (indexPath.row == *NUMBER_OF_ITEMS* - 1) {
    cell.imageView.image = [image imageWithRoundedCorners:UIRectCornerBottomLeft radius:radius];
} else {
    cell.imageView.image = image;
}
but replace SINGLE_ROW etc. with real logic to determine whether you've got a single row in a section, or whether it's the last row. One thing to note here is that I've found (experimentally) that the corner radius for a grouped-style table is 12, which works perfectly in the simulator, but not on an iPhone. I've not been able to test it on a non-Retina device. A radius of 30 looks good on the iPhone 4 (so I'm wondering if this is an image-scale thing, as the images I'm using come from the Address Book and so don't have an implied scale factor). Therefore, I've got some code before this that modifies the radius...
CGFloat radius = GroupStyleTableCellCornerRadius;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2) {
    // iPhone 4
    radius = GroupStyleTableCellCornerRadiusForRetina;
}
Hope that helps.