Retina display of images - iPhone 3 to 4 - iphone

I have developed a tile-game application for the iPhone 3.
It takes an image from my resources and divides it into a number of tiles using the CGImageCreateWithImageInRect(originalImage.CGImage, frame) function.
It works great on all iPhones, but now I want it to work on Retina displays too.
So, as per this link, I created another image at double the size of the current image and renamed it by adding the @2x suffix. The problem is that it only picks up the upper half of the Retina image. I think that's because of the frame I set when using CGImageCreateWithImageInRect. What should be done to make this work?
Any kind of help will be really appreciated.
Thanks in advance...

The problem is likely that the @2x image scale is only set up automatically for certain UIImage initializers. Try loading your UIImages using code like this from Tasty Pixel. The entry at that link talks more about this issue.
Using the UIImage+TPAdditions category from the link, you'd implement it like so (after making sure that the images and their @2x counterparts are in your project):
NSString *baseImagePath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
NSString *myImagePath = [baseImagePath stringByAppendingPathComponent:@"myImage.png"]; // note: no need to append @2x here
UIImage *myImage = [UIImage imageWithContentsOfResolutionIndependentFile:myImagePath];
Then you should be able to use CGImageCreateWithImageInRect(myImage.CGImage, frame);

Here's how I got it to work in an app I did:
#define GridSize 4

// This method takes a UIImage and slices it into GridSize * GridSize (16) tiles
- (void)sliceImage:(UIImage *)image {
    CGSize imageSize = [image size];
    CGSize square = CGSizeMake(imageSize.width / GridSize, imageSize.height / GridSize);
    // UIImage reports its size in points; convert the tile size to pixels
    CGFloat scaleMultiplier = [image scale];
    square.width *= scaleMultiplier;
    square.height *= scaleMultiplier;
    // ratio between the on-screen tile size and the source tile size in pixels
    CGFloat scale = ([self frame].size.width / GridSize) / square.width;
    CGImageRef source = [image CGImage];
    if (source != NULL) {
        for (int r = 0; r < GridSize; ++r) {
            for (int c = 0; c < GridSize; ++c) {
                CGRect slice = CGRectMake(c * square.width, r * square.height, square.width, square.height);
                CGImageRef sliceImage = CGImageCreateWithImageInRect(source, slice);
                if (sliceImage) {
                    // we have a tile (as a CGImageRef) from the source image
                    // do something with it
                    CFRelease(sliceImage);
                }
            }
        }
    }
}
The trick is using the -[UIImage scale] property to figure out how big a rect you should be slicing.
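One detail worth spelling out: CGImageCreateWithImageInRect works in pixels, while UIImage reports its size in points. A hedged sketch of the conversion, assuming a point-based tileFrame (a hypothetical rect for one tile):

CGFloat s = image.scale; // 2.0 on Retina, 1.0 otherwise
// scale the point-based rect up to pixel coordinates
CGRect pixelRect = CGRectMake(tileFrame.origin.x * s, tileFrame.origin.y * s,
                              tileFrame.size.width * s, tileFrame.size.height * s);
CGImageRef tileRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
// hand the scale back so the tile displays at the correct point size
UIImage *tile = [UIImage imageWithCGImage:tileRef scale:s orientation:image.imageOrientation];
CGImageRelease(tileRef);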

Related

How do I save a section of my screen to the user's images (in Swift)?

I want my user to be able to upload some images into a little square, and then I want all of them to be saved as one image on the user's iPhone.
I'm basically making an app that combines the user's pictures beside each other (there are a ton of apps like that, but I want to learn how they work) and then saves the whole thing as an image on their phone.
Save all your images in an array (arrImages) and use the following method to merge them side by side:
- (UIImage *)mergeImages:(NSArray *)arrImages {
    CGFloat width = 2024;  // set the width of the merged image
    CGFloat height = 2024; // set the height of the merged image
    CGSize mergedImageSize = CGSizeMake(width, height);
    CGFloat x = 0;
    UIGraphicsBeginImageContext(mergedImageSize);
    for (UIImage *img in arrImages) {
        // each image gets an equal horizontal slice, drawn beside the previous one
        CGRect rect = CGRectMake(x, 0, width / arrImages.count, height);
        [img drawInRect:rect];
        x += width / arrImages.count;
    }
    // returns an image based on the contents of the current bitmap-based graphics context
    UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mergedImage;
}
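To then save the merged result into the user's photo library, as the question asks, one option is UIImageWriteToSavedPhotosAlbum; a minimal sketch (pass a target/selector instead of nil/NULL if you want a completion callback):

UIImage *merged = [self mergeImages:arrImages];
// writes the image to the user's Saved Photos album
UIImageWriteToSavedPhotosAlbum(merged, nil, NULL, NULL);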

Image cropping in UIImageView in iOS [duplicate]

Possible Duplicate:
Cropping a UIImage
I have one UIImageView. I need to crop some region of that UIImageView, store it as a UIImage, and then pass that saved image to another view.
There are quite a few ways to crop images using the Core Graphics functions; the most basic one would be:
CGRect cropRect = CGRectMake(0, 0, 100, 100);
CGImageRef croppedImg = CGImageCreateWithImageInRect(yourUIImage.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedImg];
CGImageRelease(croppedImg); // the created CGImage is returned with a +1 retain count
Also, please check the tutorial below:
How to Crop an Image (UIImage) On iOS
This might help. It's mostly copy-paste from my code.
I have a big UIImageView with the original image: mImageViewCropper.
Then I have a semi-transparent view,
then a smaller UIImageView overlaid on top: mImageViewCropperSmallWindow (it looks like the cropper in Instagram).
On a pinch or pan gesture, I resize the small image view and then call the following function, which loads into the small image view the corresponding cropped image from the big picture.
- (void)refreshImageInMImageViewCropperSmallWindow
{
    // we need these offsets because of a possible aspect-ratio mismatch (black stripes)
    double imageBeginX = 0;
    double imageEndX = 0;
    double imageBeginY = 0;
    double imageEndY = 0;
    CGSize imageSize = mImageViewCropper.image.size;
    imageBeginX = mImageViewCropper.frame.size.width / 2 - imageSize.width / 2;
    imageEndX = mImageViewCropper.frame.size.width / 2 + imageSize.width / 2;
    imageBeginY = mImageViewCropper.frame.size.height / 2 - imageSize.height / 2;
    imageEndY = mImageViewCropper.frame.size.height / 2 + imageSize.height / 2;
    CGRect smallFrame = mImageViewCropperSmallWindow.frame;
    UIImage *croppedImage = [mImageViewCropper.image crop:CGRectMake(smallFrame.origin.x - imageBeginX,
                                                                     smallFrame.origin.y - imageBeginY,
                                                                     smallFrame.size.width,
                                                                     smallFrame.size.height)];
    mImageViewCropperSmallWindow.image = croppedImage;
}
The method is far from perfect, but it's a starting point.
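Note that crop: isn't a stock UIImage method; it comes from a category (see the duplicate's answers). A minimal sketch of such a category, built on CGImageCreateWithImageInRect and accounting for Retina scale, might look like this:

@implementation UIImage (Crop)
- (UIImage *)crop:(CGRect)rect {
    // CGImageCreateWithImageInRect works in pixels, so convert the rect from points
    rect = CGRectMake(rect.origin.x * self.scale, rect.origin.y * self.scale,
                      rect.size.width * self.scale, rect.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
@end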

Core Graphics RGB data issue

I am trying to do pixel-by-pixel image filters using Core Graphics (breaking a CGImage into unsigned integers using CFData).
When I try to create an image with the processed data, however, the resulting image comes out with significantly different colors.
I commented out the entire loop where I actually alter the pixels' RGB values, and nothing changes either.
When I initialize the UIImage I use in the filter, I do a resize using drawInRect: with UIGraphicsBeginImageContext() on an image taken from the camera.
When I remove the resize step and set my image directly from the camera, the filters seem to work just fine. Here's the code where I initialize the image I'm using (from inside didFinishPickingImage):
self.editingImage is a UIImageView and self.editingUIImage is a UIImage.
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    self.didAskForImage = YES;
    UIGraphicsBeginImageContext(self.editingImage.frame.size);
    float prop = image.size.width / image.size.height;
    float left, top, width, height;
    if (prop < 1) {
        height = self.editingImage.frame.size.height;
        width = (height / image.size.height) * image.size.width;
        left = (self.editingImage.frame.size.width - width) / 2;
        top = 0;
    } else {
        width = self.editingImage.frame.size.width;
        height = (width / image.size.width) * image.size.height;
        top = (self.editingImage.frame.size.height - height) / 2;
        left = 0;
    }
    [image drawInRect:CGRectMake(left, top, width, height)];
    self.editingUIImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.editingImage.image = self.editingUIImage;
    [self.contrastSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [self.brightnessSlider addTarget:self action:@selector(doImageFilter:) forControlEvents:UIControlEventValueChanged];
    [picker dismissModalViewControllerAnimated:YES];
    picker = nil;
}
This resizes the image just the way I need it as far as positioning goes.
Here's the image filtering function; I've taken the actual loop contents out because they're irrelevant.
- (void)doImageFilter:(id)sender {
    CGImageRef src = self.editingUIImage.CGImage;
    CFDataRef dta = CGDataProviderCopyData(CGImageGetDataProvider(src));
    UInt8 *pixData = (UInt8 *)CFDataGetBytePtr(dta);
    int dtaLen = CFDataGetLength(dta);
    for (int i = 0; i < dtaLen; i += 3) {
        // the loop
    }
    CGContextRef ctx = CGBitmapContextCreate(pixData,
                                             CGImageGetWidth(src),
                                             CGImageGetHeight(src),
                                             8,
                                             CGImageGetBytesPerRow(src),
                                             CGImageGetColorSpace(src),
                                             kCGImageAlphaPremultipliedLast);
    CGImageRef newCG = CGBitmapContextCreateImage(ctx);
    UIImage *newImage = [UIImage imageWithCGImage:newCG];
    CGContextRelease(ctx);
    CFRelease(dta);
    CGImageRelease(newCG);
    self.editingImage.image = newImage;
}
(Before/after screenshots omitted: the image looks correct at first, but the colors are visibly shifted after doImageFilter runs.)
As mentioned before, this only happens when I use the resize method shown above.
Really stumped on this one, been researching it all day... any help is very appreciated!
Cheers
Update: I've examined all the image objects' color spaces and they're all kCGColorSpaceDeviceRGB. Pretty stumped on this one, guys. I'm pretty sure something is going wrong when I break the image into unsigned integers, but I'm not sure what. Anyone?
Your problem is on the last line:
ctx = CGBitmapContextCreate(pixData,
                            CGImageGetWidth(src),
                            CGImageGetHeight(src),
                            8,
                            CGImageGetBytesPerRow(src),
                            CGImageGetColorSpace(src),
                            kCGImageAlphaPremultipliedLast);
You're making an assumption about the alpha and the component ordering of the data of the source image, which is apparently not correct. You should get that from the source image via CGImageGetBitmapInfo(src).
To avoid issues like this one, if you're starting with an arbitrary CGImage and you want to manipulate the bytes of the bitmap directly, it is best to make a CGBitmapContext in a format that you specify yourself (not directly taken from the source image). Then, draw your source image into the bitmap context; CG will convert the image's data into your bitmap context's format, if necessary. Then get the data from the bitmap context and manipulate it.
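A minimal sketch of that suggestion (variable names are illustrative): render src into a bitmap context whose byte format you choose yourself, then manipulate the bytes the context owns.

size_t width = CGImageGetWidth(src);
size_t height = CGImageGetHeight(src);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// a format we control: 8 bits/component, 4 bytes/pixel, premultiplied RGBA
CGContextRef bmCtx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                           colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// CG converts the image's data into the context's format as it draws
CGContextDrawImage(bmCtx, CGRectMake(0, 0, width, height), src);
UInt8 *pixels = CGBitmapContextGetData(bmCtx);
for (size_t i = 0; i < width * height * 4; i += 4) {
    // pixels[i] = R, pixels[i+1] = G, pixels[i+2] = B, pixels[i+3] = A
}
CGImageRef filtered = CGBitmapContextCreateImage(bmCtx);
CGContextRelease(bmCtx);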

How to Define UIImageView size as UIImage resolution?

I have a scenario in which I get images via a web service, and the images all come in different resolutions. I want to know each image's resolution so I can size the UIImageView accordingly and prevent my images from getting blurred.
For example, if the image's resolution is 326 pixels/inch, the image view should be sized so the image can be displayed fully without any blur.
UIImage *img = [UIImage imageNamed:@"foo.png"];
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
UIImageView *imgView = [[UIImageView alloc] initWithFrame:rect];
[imgView setImage:img];
An image's size IS its resolution.
Your problem might be the Retina display!
Check for a Retina display and, if present, make the UIImageView's width/height twice smaller (so that each UIImageView point consists of four UIImage pixels on a Retina display).
How to check for a Retina display:
https://stackoverflow.com/a/7607087/894671
How to check the image size (without actually loading the image into memory):
NSString *mFullPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject]
                       stringByAppendingPathComponent:@"imageName.png"];
NSURL *imageFileURL = [NSURL fileURLWithPath:mFullPath];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
if (imageSource == NULL)
{
    // Error loading image ...
}
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:NO], (NSString *)kCGImageSourceShouldCache, nil];
CFDictionaryRef imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, (CFDictionaryRef)options);
NSNumber *mImgWidth;
NSNumber *mImgHeight;
if (imageProperties)
{
    // loaded image width
    mImgWidth = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelWidth);
    // loaded image height
    mImgHeight = (NSNumber *)CFDictionaryGetValue(imageProperties, kCGImagePropertyPixelHeight);
    CFRelease(imageProperties);
}
if (imageSource != NULL)
{
    CFRelease(imageSource);
}
So - for example:
UIImageView *mImgView = [[UIImageView alloc] init];
[mImgView setImage:[UIImage imageNamed:@"imageName.png"]];
[[self view] addSubview:mImgView];
if ([UIScreen instancesRespondToSelector:@selector(scale)])
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    if (scale > 1.0)
    {
        // iPhone Retina screen
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue] / 2, [mImgHeight intValue] / 2)];
    }
    else
    {
        // iPhone screen
        [mImgView setFrame:CGRectMake(0, 0, [mImgWidth intValue], [mImgHeight intValue])];
    }
}
Hope that helps!
You can get the image size using the following code. First calculate the downloaded image's size, then make the image view according to that.
UIImage *yourImage = [UIImage imageNamed:@"image.png"];
CGFloat width = yourImage.size.width;
CGFloat height = yourImage.size.height;
Hope this will help you.
UIImage *oldImage = [UIImage imageWithContentsOfFile:imagePath]; // or you can load from a URL with NSURL
CGSize imgSize = [oldImage size];
imgview.frame = CGRectMake(10, 10, imgSize.width, imgSize.height);
[imgview setImage:oldImage];
100% working ....
To solve this problem, we need to take the device's display resolution into account.
For example, suppose you have an image with a resolution of 326 ppi, which is the same as the iPhone 4, iPhone 4S, and iPod touch 4th gen. You can then simply use the solutions suggested by @Nit and @Peko. But for other devices (or for images with a different resolution on those devices) you will need some math to calculate a size that displays well.
Now suppose you have a 260 ppi image (with dimensions W x H) and you wish to display it on an iPhone 4S. Since the information it contains per inch is less than the display resolution of the iPhone, we need to scale the image down by multiplying its dimensions by 260/326, so the image view size to use is:
imageViewWidth = W*(260/326);
imageViewHeight = H*(260/326);
In general:
resizeFactor = imageResolution/deviceDisplayResolution;
imageViewWidth = W*resizeFactor;
imageViewHeight = H*resizeFactor;
Here I am assuming that when we set an image in an image view and resize it, no pixels are added to or removed from the image.
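As a hedged sketch of that arithmetic; imagePPI, imagePixelWidth, and imagePixelHeight are hypothetical values you would obtain from the image's metadata (e.g. via CGImageSourceCopyPropertiesAtIndex as shown earlier):

CGFloat imagePPI = 260.0;   // hypothetical: read from the image's metadata
CGFloat screenPPI = 326.0;  // iPhone 4/4S display density
CGFloat resizeFactor = imagePPI / screenPPI;
// scale the pixel dimensions so the on-screen density matches the display
imageView.frame = CGRectMake(0, 0,
                             imagePixelWidth * resizeFactor,
                             imagePixelHeight * resizeFactor);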
Let the UIImageView do the work by utilizing the contentMode property to do your image resizing for you.
You probably want to display your UIImageView with a static size (the frame property) that represents the maximum size of the image you want to display, and let the images resize within that frame according to their own particular requirements (overall size, aspect ratio, etc.). You can let the UIImageView do the heavy lifting of dealing with different-sized images by mastering the contentMode property. It has many different settings, one of which is UIViewContentModeScaleAspectFit, which will downsize your image as necessary to fit within the UIImageView; if the image is smaller, it will simply be displayed centered. You can play with the settings to get the results you want.
Note that with this approach there is nothing special you need to do to deal with scaling issues associated with a Retina display.
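A brief sketch of that setup (the 300x300 frame and downloadedImage are illustrative placeholders):

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)]; // maximum display size
imageView.contentMode = UIViewContentModeScaleAspectFit; // downsizes to fit; centers smaller images
imageView.image = downloadedImage; // any resolution
[self.view addSubview:imageView];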
As per the requirement you stated in the question body, I believe you need not change the UIImageView's size.
The image can be displayed fully without any blur using this line of code:
imageView.contentMode = UIViewContentModeScaleAspectFit;

Upsize (explode) CGImageRef like iPhone simulator on iPad. 1px to 2px

I would like to upsize (explode) an image by duplicating pixels: 1 px -> 2 px.
I know how to read the individual pixels (RGBA):
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(myImage));
UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i = 0; i < length; i += 4) {
    int r = i;
    int g = i + 1;
    int b = i + 2;
    int a = i + 3;
}
How do I go about upsizing the image? I tried copying pixels in a linear fashion, but I ended up with two copies of the image side by side.
Create a CGBitmapContext at the destination size, then draw the image into it.
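A minimal sketch of that approach, assuming myImage is the source CGImageRef from the snippet above: drawing into a double-sized bitmap context with interpolation disabled makes Core Graphics duplicate each pixel instead of smoothing them.

size_t w = CGImageGetWidth(myImage);
size_t h = CGImageGetHeight(myImage);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
// destination is 2x in each dimension, 8 bits/component, RGBA
CGContextRef ctx = CGBitmapContextCreate(NULL, w * 2, h * 2, 8, w * 2 * 4,
                                         cs, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(cs);
// kCGInterpolationNone = nearest neighbour, so each source pixel becomes a 2x2 block
CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
CGContextDrawImage(ctx, CGRectMake(0, 0, w * 2, h * 2), myImage);
CGImageRef doubled = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);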
I maintain a very easy-to-use library for iOS called ANImageBitmapRep. The ANImageBitmapRep class allows you to easily scale images in a number of different ways, including the most basic:
UIImage *myImage = [UIImage imageNamed:@"myImage.png"];
ANImageBitmapRep *irep = [ANImageBitmapRep imageBitmapRepWithImage:myImage];
[irep setSize:BMPointMake(width, height)];
UIImage *scaled = [irep image];
This will achieve what you are looking to do without having to deal with tons of bitmap contexts and Core Graphics. I will also note that every ANImageBitmapRep is backed by a CGContextRef, which can easily be retrieved using the context method. If you manually change the context, just run [myBitmap setNeedsUpdate:YES].
You can also use the setSizeFittingFrame: and setSizeFillingFrame: methods for scaling that will never take the image out of proportion.