Resizing an image for a bar - iPhone

I am resizing an image to fit it in a bar. I need to resize it because the image is stored in a database and is used by the same app on Android too. The thing is, this code works well on iOS 6.1, but on 5.0 and 5.1 I get a black image in the bar. The code is the following:
CGSize newSize=CGSizeMake(320,44);
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.navigationController.navigationBar setBackgroundImage:newImage forBarMetrics:UIBarMetricsDefault];
Any ideas about this wrong behavior?
Thanks

Use the following method to resize an image to a specific height and width:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;
    // Scale by the smaller ratio so the whole image fits inside the target size.
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    // Scale 0.0 uses the main screen's scale, so the result is retina-friendly.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns a new image scaled to fit the size you specify while preserving the aspect ratio. Hope this helps.
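For reference, a minimal usage sketch for the navigation bar case (ImageHelper is only a placeholder for whichever class you put the method on, and image is assumed to hold the picture loaded from the database):
UIImage *barImage = [ImageHelper resizeImage:image withWidth:320 withHeight:44];
[self.navigationController.navigationBar setBackgroundImage:barImage forBarMetrics:UIBarMetricsDefault];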

Related

How to set the UITableViewCell image frame size?

I use small images, such as 26×27 or 15×28, but the image gets stretched. How do I avoid that and center-align all the images?
You can center your image inside the image view using UIViewContentModeCenter.
In code:
imageView.contentMode=UIViewContentModeCenter;
Or set the content mode in the nib.
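For example, a minimal sketch of applying this in tableView:cellForRowAtIndexPath: (the cell identifier and image name are placeholders):
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *cellId = @"Cell"; // placeholder identifier
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:cellId];
    }
    cell.imageView.image = [UIImage imageNamed:@"thumb.png"];     // placeholder image name
    cell.imageView.contentMode = UIViewContentModeCenter;         // draw at original size, centered
    cell.imageView.clipsToBounds = YES;
    return cell;
}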
Use the following method to get an image with the height and width you specify:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;
    // Scale by the smaller ratio so the whole image fits inside the target size.
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    // Scale 0.0 uses the main screen's scale, so the result is retina-friendly.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns a new image scaled to the size you specify while preserving the aspect ratio.
I think you want:
imageView.contentMode = UIViewContentModeScaleAspectFill;

How to merge two images without setting alpha?

I am a new iPhone developer.
I want to merge two images into one image in a UIImageView without setting alpha.
This is my code. It works using alpha, but I want to do it without setting alpha.
My code:
- (UIImage *)maskingImage:(UIImage *)image
{
    CGSize sizeR = CGSizeMake(200, 220);
    // UIImage *textureImage = [UIImage imageNamed:@"tt.png"];
    UIImage *textureImage = imgView2.image;

    // First pass: render the texture image at the target size.
    UIGraphicsBeginImageContextWithOptions(sizeR, YES, textureImage.scale);
    [textureImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
    UIImage *bottomImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // end the first context before starting the second

    // Second pass: draw the bottom image, then blend the upper image on top.
    UIImage *upperImage = image;
    CGSize newSize = sizeR;
    UIGraphicsBeginImageContext(newSize);
    [bottomImage drawInRect:CGRectMake(0.0, 0.0, 200, 220)];
    [upperImage drawInRect:CGRectMake(0.0, 0.0, 200, 220) blendMode:kCGBlendModeNormal alpha:0.5];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Thanks in advance.
You can draw the two images side by side into one context without any alpha blending:
UIGraphicsBeginImageContext(YOUR_SIZE);
// FIRST IMAGE
[FIRST_IMAGE drawInRect:CGRectMake(0, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
// SECOND IMAGE
[SECOND_IMAGE drawInRect:CGRectMake(YOUR_SIZE_WIDTH/2, 0, YOUR_SIZE_WIDTH/2, YOUR_SIZE_HEIGHT)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Use this function:
- (UIImage *)mergeImage:(UIImage *)imageA
              withImage:(UIImage *)imageB
               strength:(float)strength
                      X:(float)x
                      Y:(float)y
{
    UIGraphicsBeginImageContextWithOptions(CGSizeMake([imageA size].width, [imageA size].height), NO, 0.0);
    [imageA drawAtPoint:CGPointMake(0, 0)];
    [imageB drawAtPoint:CGPointMake(x, y)
              blendMode:kCGBlendModeNormal // you can play with this
                  alpha:strength];         // 0 - 1
    UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergedImage;
}
Here x and y specify where the second image is placed on top of the first.
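A minimal usage sketch (assuming the method above lives in the same class; imageA, imageB, and imageView are placeholders, and strength 1.0 draws the second image fully opaque, which matches the "without alpha" case):
UIImage *merged = [self mergeImage:imageA withImage:imageB strength:1.0 X:50 Y:50];
imageView.image = merged;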
I just faced the same problem, and this is the solution that worked for me:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[ image1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[image2 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Try this; it worked like a charm for me. I hope it solves your problem too.
To draw the second image without any transparency, pass alpha 1.0:
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];

Blending UIImage with alpha value

How do I blend two images by changing the alpha of only one image, so that the upper image is slightly transparent and appears drawn on top of the background image?
You can do it through the image view's alpha property, for example:
imageView.alpha = 0.8f; // the upper image view is slightly transparent
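A minimal sketch of this view-level approach (bottomImageView and topImageView are placeholders for two stacked UIImageViews):
bottomImageView.image = [UIImage imageNamed:@"bottomImage.png"];
topImageView.image = [UIImage imageNamed:@"topImage.png"];
topImageView.alpha = 0.8f; // only the upper view is made slightly transparent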
Detailed code:
UIImage *bottomImage = [UIImage imageNamed:@"bottomImage.png"];
UIImage *topImage = [UIImage imageNamed:@"topImage.png"];
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[topImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Convert a rectangular image to a square image using Objective-C

I am working on an image gallery that has thumbnails of different sizes. I want to convert these rectangular thumbnails to squares so that they all appear the same size. I don't mind cropping off the extra portion, but I am not sure how to do it. Can anyone please help me?
Thanks
Pankaj
You need to use CGImageCreateWithImageInRect, passing in the image and the required bounds:
CGImageRef imageRef = CGImageCreateWithImageInRect([anImage CGImage], requiredBounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
I have added this to a UIImage category (UIImage+Resize) in the following post, you can download the source code as well - Categories example
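For the square case specifically, here is a minimal sketch of a centered square crop (the method name is made up, and it assumes the image orientation is up; CGImageCreateWithImageInRect works in pixels, hence the scale factor):
- (UIImage *)squareImageFromImage:(UIImage *)anImage
{
    CGFloat side = MIN(anImage.size.width, anImage.size.height);
    // Largest centered square, converted from points to pixels.
    CGRect cropRect = CGRectMake((anImage.size.width - side) / 2.0 * anImage.scale,
                                 (anImage.size.height - side) / 2.0 * anImage.scale,
                                 side * anImage.scale,
                                 side * anImage.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([anImage CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef
                                                scale:anImage.scale
                                          orientation:anImage.imageOrientation];
    CGImageRelease(imageRef);
    return croppedImage;
}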
Well, if you use a UIImageView to display your images (which I am more than sure you do), you can set its contentMode property to UIViewContentModeScaleAspectFill. This should 'crop' your image to the boundaries of the UIImageView. If your image goes outside the boundaries of the UIImageView, make sure clipsToBounds is also set to YES.
Let me know if that helps.
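A minimal sketch (imageView is a placeholder for your square thumbnail view):
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.clipsToBounds = YES; // crop anything that falls outside the square frame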
I'm using the following method. The inputs are the UIImage to scale and the size of the UIImageView's frame that holds it. It works when the frame's height and width are equal.
One important thing: I keep the image's aspect ratio; I don't stretch the image to cover the full square. If you want to do that, change the drawInRect: line to [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)]; and remove the if-else.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    CGFloat scaleRatio;
    if (image.size.width > image.size.height) {
        scaleRatio = image.size.height / image.size.width;
    } else {
        scaleRatio = image.size.width / image.size.height;
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    if (image.size.width > image.size.height) {
        // Landscape: full width, scaled height, centered vertically.
        [image drawInRect:CGRectMake(0, (newSize.height / 2) - (newSize.height * scaleRatio / 2), newSize.width, newSize.height * scaleRatio)];
    } else {
        // Portrait (or square): full height, scaled width, centered horizontally.
        [image drawInRect:CGRectMake((newSize.width / 2) - (newSize.width * scaleRatio / 2), 0, newSize.width * scaleRatio, newSize.height)];
    }
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
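A minimal usage sketch (assuming the method above lives in the same class and imageView has a square frame; originalImage is a placeholder):
UIImage *squareThumb = [self imageWithImage:originalImage scaledToSize:imageView.frame.size];
imageView.image = squareThumb;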

UIGraphicsBeginImageContext with parameters

I am taking a screenshot in my application, and it works.
Now I want to take the screenshot starting from specific x and y coordinates. Is that possible?
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *aImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can translate the context before rendering the layer, so everything is drawn with an offset:
UIGraphicsBeginImageContext(self.view.bounds.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, -40); // <-- shift everything up by 40px when drawing
[self.view.layer renderInContext:c];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If you're using a newer retina display device, your code should factor in the resolution by using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size,YES,2.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, -40); // <-- shift everything up by 40px when drawing.
[self.view.layer renderInContext:c];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will render the retina display image context.
Just do something like this; it's way easier than all those complex calculations:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions([view bounds].size, NO, [[UIScreen mainScreen] scale]);
    [[view layer] renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
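A minimal usage sketch (SnapshotHelper is just a placeholder for whichever class holds the method above):
UIImage *snapshot = [SnapshotHelper imageWithView:self.view];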
Here is the Swift version:
// Capture screen
func capture() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, false, UIScreen.mainScreen().scale)
    self.view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Swift version:
UIGraphicsBeginImageContext(self.view.bounds.size)
let context: CGContextRef = UIGraphicsGetCurrentContext()!
CGContextTranslateCTM(context, 0, -40) // <-- shift everything up by 40px when drawing
self.view.layer.renderInContext(context)
let viewImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil)