Detect cropping rectangle for UIImage with transparent background - iPhone

What is the strategy to take a UIImage with a transparent background and determine the smallest rectangle to crop to so that only the visible image data is left (along with the extra transparent background if the image data is not rectangular of course)?
I have found lots of information on cropping a UIImage to a CGRect, plenty of cropping view controllers that require user intervention, and several open source libraries with image processing classes and categories (including MGImageUtilities and NYXImagesKit), but nothing yet that solves my particular problem.
My current app is targeting iOS 5.0, so compatibility with that would be optimal.
EDIT: By the way, I am hoping there is a better way than brute force: scanning every pixel of the image data, row by row and column by column, looking for the edge boundaries.

Have you had a chance to see https://gist.github.com/spinogrizz/3549921?
It looks like it's exactly what you need.
Just so it's not lost, here is a copy and paste from that page:
- (UIImage *)imageByTrimmingTransparentPixels {
    int rows = self.size.height;
    int cols = self.size.width;
    int bytesPerRow = cols * sizeof(uint8_t);

    if (rows < 2 || cols < 2) {
        return self;
    }

    //allocate array to hold the alpha channel
    uint8_t *bitmapData = calloc(rows * cols, sizeof(uint8_t));

    //create an alpha-only bitmap context
    CGContextRef contextRef = CGBitmapContextCreate(bitmapData, cols, rows, 8, bytesPerRow, NULL, kCGImageAlphaOnly);

    //draw our image on that context
    CGImageRef cgImage = self.CGImage;
    CGRect rect = CGRectMake(0, 0, cols, rows);
    CGContextDrawImage(contextRef, rect, cgImage);

    //sum all non-transparent pixels in every row and every column
    uint16_t *rowSum = calloc(rows, sizeof(uint16_t));
    uint16_t *colSum = calloc(cols, sizeof(uint16_t));

    //enumerate through all pixels
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            if (bitmapData[row * bytesPerRow + col]) { //found a non-transparent pixel
                rowSum[row]++;
                colSum[col]++;
            }
        }
    }

    //initialize crop insets and enumerate the row/column sums until we find a non-empty row or column
    UIEdgeInsets crop = UIEdgeInsetsMake(0, 0, 0, 0);

    for (int i = 0; i < rows; i++) { //top
        if (rowSum[i] > 0) {
            crop.top = i;
            break;
        }
    }

    for (int i = rows - 1; i >= 0; i--) { //bottom (start at rows-1; the gist's `i = rows` reads past the end of rowSum)
        if (rowSum[i] > 0) {
            crop.bottom = MAX(0, rows - i - 1);
            break;
        }
    }

    for (int i = 0; i < cols; i++) { //left
        if (colSum[i] > 0) {
            crop.left = i;
            break;
        }
    }

    for (int i = cols - 1; i >= 0; i--) { //right (same off-by-one fix as above)
        if (colSum[i] > 0) {
            crop.right = MAX(0, cols - i - 1);
            break;
        }
    }

    CGContextRelease(contextRef);
    free(bitmapData);
    free(colSum);
    free(rowSum);

    if (crop.top == 0 && crop.bottom == 0 && crop.left == 0 && crop.right == 0) {
        //no cropping needed
        return self;
    }
    else {
        //calculate the new crop bounds
        rect.origin.x += crop.left;
        rect.origin.y += crop.top;
        rect.size.width -= crop.left + crop.right;
        rect.size.height -= crop.top + crop.bottom;

        //crop it
        CGImageRef newImage = CGImageCreateWithImageInRect(cgImage, rect);

        //convert back to UIImage, releasing the intermediate CGImage to avoid a leak
        UIImage *result = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        return result;
    }
}

Here it is in Swift 4:
extension UIImage {
    func cropAlpha() -> UIImage {
        guard let cgImage = self.cgImage else { return self }

        let width = cgImage.width
        let height = cgImage.height
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bytesPerPixel = 4
        let bytesPerRow = bytesPerPixel * width
        let bitsPerComponent = 8
        let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
              let ptr = context.data?.assumingMemoryBound(to: UInt8.self) else {
            return self
        }

        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

        var minX = width
        var minY = height
        var maxX = 0
        var maxY = 0

        // Scan every pixel (starting at 0, not 1, so the first row and column are not skipped)
        for x in 0..<width {
            for y in 0..<height {
                let i = bytesPerRow * y + bytesPerPixel * x
                let a = CGFloat(ptr[i + 3]) / 255.0
                if a > 0 {
                    if x < minX { minX = x }
                    if x > maxX { maxX = x }
                    if y < minY { minY = y }
                    if y > maxY { maxY = y }
                }
            }
        }

        // +1 so the bounding rect includes the last non-transparent row/column
        let rect = CGRect(x: CGFloat(minX), y: CGFloat(minY), width: CGFloat(maxX - minX + 1), height: CGFloat(maxY - minY + 1))
        guard let croppedImage = cgImage.cropping(to: rect) else { return self }
        return UIImage(cgImage: croppedImage, scale: self.scale, orientation: self.imageOrientation)
    }
}

Refer to the Core Image slides from WWDC; they address this directly.

Related

CIImage with alpha from sourceFrame of a video

I am trying to get a CIImage with alpha from a video that has an alpha channel.
The video is encoded in HEVC with Alpha.
When I do this:
if let sourcePixel = request.sourceFrame(byTrackID: 3) {
    let image = CIImage(cvPixelBuffer: sourcePixel)
}
the alpha in the image is 1 (opaque).
If I test a pixel value from the relevant plane of sourcePixel, I get the correct alpha according to the value in the video.
if let sourcePixel = request.sourceFrame(byTrackID: 3) {
    let results = CVPixelBufferLockBaseAddress(sourcePixel, .readOnly)
    if let baseAddress = CVPixelBufferGetBaseAddressOfPlane(sourcePixel, 0) {
        //Get width and height of buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(sourcePixel)
        let buffer = baseAddress.assumingMemoryBound(to: UInt8.self)
        var r = buffer[0]
        var g = buffer[1]
        var b = buffer[2]
        var a = buffer[3]
        print("value: (\(r),\(g),\(b),\(a))")
    }
}
I tried to get the bitmap of sourcePixel and create the CIImage from the bitmap,
but I didn't manage; I got an image that looks like all the values are zero.
if let sourcePixel = request.sourceFrame(byTrackID: layerInstruction.trackID) {
    print(sourcePixel)
    let results = CVPixelBufferLockBaseAddress(sourcePixel, .readOnly)
    if let baseAddress = CVPixelBufferGetBaseAddressOfPlane(sourcePixel, 0) {
        let width = CVPixelBufferGetWidth(sourcePixel)
        let height = CVPixelBufferGetHeight(sourcePixel)
        let bitmapData = Data(bytesNoCopy: baseAddress, count: width * height * 4, deallocator: .none)
        let ciImage = CIImage(bitmapData: bitmapData, bytesPerRow: width * 4, size: CGSize(width: width, height: height), format: .ABGR8, colorSpace: CGColorSpaceCreateDeviceRGB())
    }
}
Is using the bitmap the right approach? If yes, what am I doing wrong? If not, what should I use?
And another related question: if I print the sourceFrame, I get
<CVPixelBuffer 0x282458fa0 width=1080 height=1920 pixelFormat=420f iosurface=0x281778830 planes=2 poolName=22133:decode_1>.
<Plane 0 width=1080 height=1920 bytesPerRow=1088>.
<Plane 1 width=540 height=960 bytesPerRow=1088>
<attributes={
PixelFormatDescription = {
BitsPerComponent = 8;
ComponentRange = FullRange;
ContainsAlpha = 0;
ContainsGrayscale = 0;
ContainsRGB = 0;
ContainsYCbCr = 1;
FillExtendedPixelsCallback = {length = 24, bytes = 0x00000000000000001060248b010000000000000000000000};
IOSurfaceCoreAnimationCompatibility = 1;
IOSurfaceCoreAnimationCompatibilityHTPCOK = 1;
IOSurfaceOpenGLESFBOCompatibility = 1;
IOSurfaceOpenGLESTextureCompatibility = 1;
OpenGLESCompatibility = 1;
PixelFormat = 875704422;
Planes = (
{
BitsPerBlock = 8;
BlackBlock = {length = 1, bytes = 0x00};
},
{
BitsPerBlock = 16;
BlackBlock = {length = 2, bytes = 0x8080};
HorizontalSubsampling = 2;
VerticalSubsampling = 2;
}
);
};
} propagatedAttachments={
AlphaChannelMode = StraightAlpha;
CVFieldCount = 1;
CVImageBufferChromaLocationBottomField = Left;
CVImageBufferChromaLocationTopField = Left;
CVImageBufferColorPrimaries = "ITU_R_709_2";
CVImageBufferTransferFunction = "ITU_R_709_2";
CVImageBufferYCbCrMatrix = "ITU_R_709_2";
CVPixelAspectRatio = {
HorizontalSpacing = 1;
VerticalSpacing = 1;
};
} nonPropagatedAttachments={.
Why does the video have two planes?
Why does it say that it contains no alpha?

Convert for loop from Objective-C

I am trying to convert some old Objective-C code into Swift, but I cannot find a way to make this for loop work without errors. How could this be converted into Swift?
for var y = 0; y < columns; y++ { //C-style for statement is deprecated and will be removed in a future version of Swift
    xPos = 0.0
    for var x = 0; x < rows; x++ { //C-style for statement is deprecated and will be removed in a future version of Swift
        var rect: CGRect = CGRectMake(xPos, yPos, width, height)
        var cImage: CGImageRef = CGImageCreateWithImageInRect(image.CGImage, rect)!
        var dImage: UIImage = UIImage(CGImage: cImage)
        var imageView: UIImageView = UIImageView(frame: CGRectMake(x * width, y * height, width, height))
        imageView.image = dImage
        imageView.layer.borderColor = UIColor.blackColor().CGColor
        imageView.layer.borderWidth = 1.0
        //self.view!.addSubview(imageView)
        arrayImages.append(dImage)
        xPos += width
    }
    yPos += height
}
In your case, this should do it.
for y in 0..<columns {
    for x in 0..<rows {
        // do what you want with x and y
    }
}
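If it helps, the tiling logic itself converts the same way. Below is a sketch of just the rect computation with Swift ranges (`tileRects` and its parameters are illustrative names, not from the question; the UIKit cropping and image-view code stays as in the original):

```swift
import Foundation

// Sketch: compute the tile rects the original C-style loops produced,
// using Swift ranges. Outer loop over columns, inner over rows, exactly
// as in the Objective-C code.
func tileRects(rows: Int, columns: Int, tileWidth: CGFloat, tileHeight: CGFloat) -> [CGRect] {
    var rects: [CGRect] = []
    var yPos: CGFloat = 0.0
    for _ in 0..<columns {
        var xPos: CGFloat = 0.0
        for _ in 0..<rows {
            rects.append(CGRect(x: xPos, y: yPos, width: tileWidth, height: tileHeight))
            xPos += tileWidth
        }
        yPos += tileHeight
    }
    return rects
}
```

Each rect can then be fed to `CGImage.cropping(to:)` inside the loop, as in the original code.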

How to know the image size after applying aspect fit for the image in a UIImageView

I am loading an image into an image view with the content mode 'Aspect Fit'. I need to know the size to which my image is being scaled. Please help.
Why not use the OS function AVMakeRectWithAspectRatioInsideRect?
I wanted to use AVMakeRectWithAspectRatioInsideRect() without including the AVFoundation framework.
So I've implemented the following two utility functions:
CGSize CGSizeAspectFit(CGSize aspectRatio, CGSize boundingSize)
{
    float mW = boundingSize.width / aspectRatio.width;
    float mH = boundingSize.height / aspectRatio.height;

    if (mH < mW)
        boundingSize.width = boundingSize.height / aspectRatio.height * aspectRatio.width;
    else if (mW < mH)
        boundingSize.height = boundingSize.width / aspectRatio.width * aspectRatio.height;

    return boundingSize;
}

CGSize CGSizeAspectFill(CGSize aspectRatio, CGSize minimumSize)
{
    float mW = minimumSize.width / aspectRatio.width;
    float mH = minimumSize.height / aspectRatio.height;

    if (mH > mW)
        minimumSize.width = minimumSize.height / aspectRatio.height * aspectRatio.width;
    else if (mW > mH)
        minimumSize.height = minimumSize.width / aspectRatio.width * aspectRatio.height;

    return minimumSize;
}
Edit: Optimized below by removing duplicate divisions.
CGSize CGSizeAspectFit(const CGSize aspectRatio, const CGSize boundingSize)
{
    CGSize aspectFitSize = CGSizeMake(boundingSize.width, boundingSize.height);
    float mW = boundingSize.width / aspectRatio.width;
    float mH = boundingSize.height / aspectRatio.height;

    if (mH < mW)
        aspectFitSize.width = mH * aspectRatio.width;
    else if (mW < mH)
        aspectFitSize.height = mW * aspectRatio.height;

    return aspectFitSize;
}

CGSize CGSizeAspectFill(const CGSize aspectRatio, const CGSize minimumSize)
{
    CGSize aspectFillSize = CGSizeMake(minimumSize.width, minimumSize.height);
    float mW = minimumSize.width / aspectRatio.width;
    float mH = minimumSize.height / aspectRatio.height;

    if (mH > mW)
        aspectFillSize.width = mH * aspectRatio.width;
    else if (mW > mH)
        aspectFillSize.height = mW * aspectRatio.height;

    return aspectFillSize;
}
End of edit
This takes a given size (first parameter) and maintains its aspect ratio. It then fills the given bounds (second parameter) as much as possible without violating the aspect ratio.
Using this to answer the original question:
// Using aspect fit, scale the image (size) to the image view's size.
CGSize sizeBeingScaledTo = CGSizeAspectFit(theImage.size, theImageView.frame.size);
Note how the image determines the aspect ratio, while the image view determines the size to be filled.
Feedback is very welcome.
Please see @Paul-de-Lange's answer instead of this one
I couldn't find anything in an easily accessible variable that had this, so here is the brute force way:
- (CGSize)aspectScaledImageSizeForImageView:(UIImageView *)iv image:(UIImage *)im {
    float x, y;
    float a, b;
    x = iv.frame.size.width;
    y = iv.frame.size.height;
    a = im.size.width;
    b = im.size.height;

    if (x == a && y == b) {          // image fits exactly, no scaling required
        // return iv.frame.size;
    }
    else if (x > a && y > b) {       // image fits completely within the imageview frame
        if (x - a > y - b) {         // image height is limiting factor, scale by height
            a = y / b * a;
            b = y;
        } else {                     // image width is limiting factor, scale by width
            b = x / a * b;
            a = x;
        }
    }
    else if (x < a && y < b) {       // image is wider and taller than image view
        if (a - x > b - y) {         // height is limiting factor, scale by height
            a = y / b * a;
            b = y;
        } else {                     // width is limiting factor, scale by width
            b = x / a * b;
            a = x;
        }
    }
    else if (x < a && y > b) {       // image is wider than view, scale by width
        b = x / a * b;
        a = x;
    }
    else if (x > a && y < b) {       // image is taller than view, scale by height
        a = y / b * a;
        b = y;
    }
    else if (x == a) {
        a = y / b * a;
        b = y;
    } else if (y == b) {
        b = x / a * b;
        a = x;
    }

    return CGSizeMake(a, b);
}
This simple function will calculate the size of the image after aspect fit:
Swift 5.1
extension UIImageView {
    var imageSizeAfterAspectFit: CGSize {
        var newWidth: CGFloat
        var newHeight: CGFloat

        guard let image = image else { return frame.size }

        if image.size.height >= image.size.width {
            newHeight = frame.size.height
            newWidth = (image.size.width / image.size.height) * newHeight

            if newWidth > frame.size.width {
                let diff = frame.size.width - newWidth
                newHeight = newHeight + diff / newHeight * newHeight
                newWidth = frame.size.width
            }
        } else {
            newWidth = frame.size.width
            newHeight = (image.size.height / image.size.width) * newWidth

            if newHeight > frame.size.height {
                let diff = frame.size.height - newHeight
                newWidth = newWidth + diff / newWidth * newWidth
                newHeight = frame.size.height
            }
        }

        return .init(width: newWidth, height: newHeight)
    }
}
Objective-C:
- (CGSize)imageSizeAfterAspectFit:(UIImageView *)imgview {
    float newwidth;
    float newheight;
    UIImage *image = imgview.image;

    if (image.size.height >= image.size.width) {
        newheight = imgview.frame.size.height;
        newwidth = (image.size.width / image.size.height) * newheight;

        if (newwidth > imgview.frame.size.width) {
            float diff = imgview.frame.size.width - newwidth;
            newheight = newheight + diff / newheight * newheight;
            newwidth = imgview.frame.size.width;
        }
    }
    else {
        newwidth = imgview.frame.size.width;
        newheight = (image.size.height / image.size.width) * newwidth;

        if (newheight > imgview.frame.size.height) {
            float diff = imgview.frame.size.height - newheight;
            newwidth = newwidth + diff / newwidth * newwidth;
            newheight = imgview.frame.size.height;
        }
    }

    NSLog(@"image after aspect fit: width=%f height=%f", newwidth, newheight);

    //adapt UIImageView size to image size
    //imgview.frame = CGRectMake(imgview.frame.origin.x + (imgview.frame.size.width - newwidth) / 2, imgview.frame.origin.y + (imgview.frame.size.height - newheight) / 2, newwidth, newheight);

    return CGSizeMake(newwidth, newheight);
}
Swift 3 Human Readable Version
extension UIImageView {

    /// Find the size of the image, once the parent imageView has been given a contentMode of .scaleAspectFit
    /// Querying the image.size returns the non-scaled size. This helper property is needed for accurate results.
    var aspectFitSize: CGSize {
        guard let image = image else { return CGSize.zero }

        var aspectFitSize = CGSize(width: frame.size.width, height: frame.size.height)
        let newWidth: CGFloat = frame.size.width / image.size.width
        let newHeight: CGFloat = frame.size.height / image.size.height

        if newHeight < newWidth {
            aspectFitSize.width = newHeight * image.size.width
        } else if newWidth < newHeight {
            aspectFitSize.height = newWidth * image.size.height
        }

        return aspectFitSize
    }

    /// Find the size of the image, once the parent imageView has been given a contentMode of .scaleAspectFill
    /// Querying the image.size returns the non-scaled, vastly too large size. This helper property is needed for accurate results.
    var aspectFillSize: CGSize {
        guard let image = image else { return CGSize.zero }

        var aspectFillSize = CGSize(width: frame.size.width, height: frame.size.height)
        let newWidth: CGFloat = frame.size.width / image.size.width
        let newHeight: CGFloat = frame.size.height / image.size.height

        if newHeight > newWidth {
            aspectFillSize.width = newHeight * image.size.width
        } else if newWidth > newHeight {
            aspectFillSize.height = newWidth * image.size.height
        }

        return aspectFillSize
    }
}
I also wanted to calculate the height after the aspect ratio is applied, so that I could calculate the height of a table view cell. A little math does it:
ratio = width / height
and the height would become
height = width / ratio
So the code snippet would be:
UIImage *img = [UIImage imageNamed:@"anImage"];
float aspectRatio = img.size.width / img.size.height;
float requiredHeight = self.view.bounds.size.width / aspectRatio;
For Swift, use the code below:
func imageSizeAspectFit(imgview: UIImageView) -> CGSize {
    var newwidth: CGFloat
    var newheight: CGFloat
    let image: UIImage = imgview.image!   // use the passed-in image view (was a stray reference to imgFeed)

    if image.size.height >= image.size.width {
        newheight = imgview.frame.size.height
        newwidth = (image.size.width / image.size.height) * newheight
        if newwidth > imgview.frame.size.width {
            let diff: CGFloat = imgview.frame.size.width - newwidth
            newheight = newheight + diff / newheight * newheight
            newwidth = imgview.frame.size.width
        }
    }
    else {
        newwidth = imgview.frame.size.width
        newheight = (image.size.height / image.size.width) * newwidth
        if newheight > imgview.frame.size.height {
            let diff: CGFloat = imgview.frame.size.height - newheight
            newwidth = newwidth + diff / newwidth * newwidth
            newheight = imgview.frame.size.height
        }
    }

    print(newwidth, newheight)
    //adapt UIImageView size to image size
    return CGSize(width: newwidth, height: newheight)
}
And call the function:
imgFeed.sd_setImageWithURL(NSURL(string: "Your image URL"))
self.imageSizeAspectFit(imgview: imgFeed)
Maybe it does not fit your case, but this simple approach solved my problem in a similar case:
UIImageView *imageView = [[UIImageView alloc] initWithImage:bigSizeImage];
[imageView sizeToFit];
After the image view executes sizeToFit, querying imageView.frame.size gives you the new image view size that fits the new image size.
Swift 4:
The frame for an .aspectFit image is:
import AVFoundation
let x: CGRect = AVMakeRect(aspectRatio: myImage.size, insideRect: sampleImageView.frame)
+ (UIImage *)CreateAResizeImage:(UIImage *)Img ThumbSize:(CGSize)ThumbSize
{
    float actualHeight = Img.size.height;
    float actualWidth = Img.size.width;

    if (actualWidth == actualHeight) {
        actualWidth = ThumbSize.width;
        actualHeight = ThumbSize.height;
    }

    float imgRatio = actualWidth / actualHeight;
    float maxRatio = ThumbSize.width / ThumbSize.height; //320.0/480.0;

    if (imgRatio != maxRatio) {
        if (imgRatio < maxRatio) {
            imgRatio = ThumbSize.height / actualHeight; //480.0 / actualHeight;
            actualWidth = imgRatio * actualWidth;
            actualHeight = ThumbSize.height; //480.0;
        }
        else {
            imgRatio = ThumbSize.width / actualWidth; //320.0 / actualWidth;
            actualHeight = imgRatio * actualHeight;
            actualWidth = ThumbSize.width; //320.0;
        }
    }
    else {
        actualWidth = ThumbSize.width;
        actualHeight = ThumbSize.height;
    }

    CGRect rect = CGRectMake(0, 0, (int)actualWidth, (int)actualHeight);
    UIGraphicsBeginImageContext(rect.size);
    [Img drawInRect:rect];
    UIImage *NewImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return NewImg;
}
Swift 3 UIImageView extension:
import AVFoundation
extension UIImageView {
    var imageSize: CGSize {
        if let image = image {
            return AVMakeRect(aspectRatio: image.size, insideRect: bounds).size
        }
        return CGSize.zero
    }
}
This single line can do this job
CGSize sizeInView = AVMakeRectWithAspectRatioInsideRect(imgViewFake.image.size, imgViewFake.bounds).size;
The accepted answer is incredibly complicated and fails for some edge cases. I think this solution is much more elegant:
- (CGSize)sizeOfImage:(UIImage *)image inAspectFitImageView:(UIImageView *)imageView
{
    UKAssert(imageView.contentMode == UIViewContentModeScaleAspectFit, @"Image View must use contentMode = UIViewContentModeScaleAspectFit");

    CGFloat imageViewWidth = imageView.bounds.size.width;
    CGFloat imageViewHeight = imageView.bounds.size.height;
    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGFloat scaleFactor = MIN(imageViewWidth / imageWidth, imageViewHeight / imageHeight);

    return CGSizeMake(image.size.width * scaleFactor, image.size.height * scaleFactor);
}
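For reference, the same min-scale idea reads naturally in Swift too. A sketch (the function name is illustrative; on Apple platforms `CGSize` comes from CoreGraphics, but Foundation suffices here):

```swift
import Foundation

// Sketch: aspect-fit is just "scale by the smaller of the two ratios".
// aspectFitSize(of:in:) is an illustrative name, not an API from the answer above.
func aspectFitSize(of imageSize: CGSize, in boundsSize: CGSize) -> CGSize {
    let scale = min(boundsSize.width / imageSize.width,
                    boundsSize.height / imageSize.height)
    return CGSize(width: imageSize.width * scale,
                  height: imageSize.height * scale)
}
```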
Here is my solution for the same problem:
https://github.com/alexgarbarev/UIImageView-ImageFrame
Advantages:
UIViewContentMode modes supported
Can query for scales and for rect separately
Can ask about image frame right from UIImageView
Here's my AVFoundation-less solution.
First here's a CGSize extension for calculating a size that would fit another size:
extension CGSize
{
    func sizeThatFitsSize(_ aSize: CGSize) -> CGSize
    {
        let width = min(self.width * aSize.height / self.height, aSize.width)
        return CGSize(width: width, height: self.height * width / self.width)
    }
}
So the solution to the OP's problem comes down to:
let resultSize = image.size.sizeThatFitsSize(imageView.bounds.size)
Also here's another extension for fitting a rect within another rect (it utilizes the above CGSize extension):
extension CGRect
{
    func rectThatFitsRect(_ aRect: CGRect) -> CGRect
    {
        let sizeThatFits = self.size.sizeThatFitsSize(aRect.size)
        let xPos = (aRect.size.width - sizeThatFits.width) / 2
        let yPos = (aRect.size.height - sizeThatFits.height) / 2
        return CGRect(x: xPos, y: yPos, width: sizeThatFits.width, height: sizeThatFits.height)
    }
}
Swift 5 Extension
extension CGSize {
    func aspectFit(to size: CGSize) -> CGSize {
        let mW = size.width / self.width
        let mH = size.height / self.height
        var result = size
        if mH < mW {
            result.width = size.height / self.height * self.width
        } else if mW < mH {
            result.height = size.width / self.width * self.height
        }
        return result
    }

    func aspectFill(to size: CGSize) -> CGSize {
        let mW = size.width / self.width
        let mH = size.height / self.height
        var result = size
        if mH > mW {
            result.width = size.height / self.height * self.width
        } else if mW > mH {
            result.height = size.width / self.width * self.height
        }
        return result
    }
}
I'm using the following in Swift:
private func CGSizeAspectFit(aspectRatio: CGSize, boundingSize: CGSize) -> CGSize
{
    var aspectFitSize = boundingSize
    let mW = boundingSize.width / aspectRatio.width
    let mH = boundingSize.height / aspectRatio.height

    if mH < mW {
        aspectFitSize.width = mH * aspectRatio.width
    }
    else if mW < mH {
        aspectFitSize.height = mW * aspectRatio.height
    }

    return aspectFitSize
}

private func CGSizeAspectFill(aspectRatio: CGSize, minimumSize: CGSize) -> CGSize
{
    var aspectFillSize = minimumSize
    let mW = minimumSize.width / aspectRatio.width
    let mH = minimumSize.height / aspectRatio.height

    if mH > mW {
        aspectFillSize.width = mH * aspectRatio.width
    }
    else if mW > mH {
        aspectFillSize.height = mW * aspectRatio.height
    }

    return aspectFillSize
}
I'm using it like this:
let aspectSize = contentMode == .ScaleAspectFill ? CGSizeAspectFill(oldSize,minimumSize: newSize) : CGSizeAspectFit(oldSize, boundingSize: newSize)
let newRect = CGRect( x: (newSize.width - aspectSize.width)/2, y: (newSize.height - aspectSize.height)/2, width: aspectSize.width, height: aspectSize.height)
CGContextSetFillColorWithColor(context,IOSXColor.whiteColor().CGColor)
CGContextFillRect(context, CGRect(origin: CGPointZero,size: newSize))
CGContextDrawImage(context, newRect, cgImage)
If you know only the width of the image view and the height of the image is dynamic, you need to scale the image's height according to the given width to remove the white space above and below your image. Use the following method to scale the height of the image according to a standard width such as the screen width.
- (UIImage *)imageWithImage:(UIImage *)sourceImage scaledToWidth:(float)i_width
{
    float oldWidth = sourceImage.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = sourceImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [sourceImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return newImage;
}
And call it from your cellForRowAtIndexPath: method like this:
UIImage *img = [dictImages objectForKey:yourImageKey]; // loaded the image
cell.imgView.image = [self imageWithImage:img scaledToWidth:self.view.frame.size.width];
Swift 4 version
extension CGSize {

    enum AspectMode {
        case fit
        case fill
    }

    enum Orientation {
        case portrait
        case landscape
    }

    func aspectCorrectSizeToFit(targetSize: CGSize, aspectMode: AspectMode = .fill) -> CGSize {
        switch aspectMode {
        case .fill: return aspectFill(targetSize: targetSize)
        case .fit: return aspectFit(targetSize: targetSize)
        }
    }

    var orientation: Orientation {
        if height >= width { return .portrait }
        else { return .landscape }
    }

    func aspectFit(targetSize: CGSize) -> CGSize {
        let wRatio = targetSize.width / width
        let hRatio = targetSize.height / height
        let scale = min(wRatio, hRatio)
        return applying(CGAffineTransform(scaleX: scale, y: scale))
    }

    func aspectFill(targetSize: CGSize) -> CGSize {
        let wRatio = targetSize.width / width
        let hRatio = targetSize.height / height
        let scale = max(wRatio, hRatio)
        return applying(CGAffineTransform(scaleX: scale, y: scale))
    }
}
The above-mentioned methods never give the required values. As aspect fit maintains the aspect ratio, we just need simple math to calculate the values.
Detect the aspect ratios:
CGFloat imageViewAspectRatio = backgroundImageView.bounds.size.width / backgroundImageView.bounds.size.height;
CGFloat imageAspectRatio = backgroundImageView.image.size.width / backgroundImageView.image.size.height;
CGFloat mulFactor = imageViewAspectRatio / imageAspectRatio;
Get the new values:
CGFloat newImageWidth = mulFactor * backgroundImageView.bounds.size.width;
CGFloat newImageHeight = mulFactor * backgroundImageView.bounds.size.height;

Can I find out what color is at a certain point of a UIImage? [duplicate]

I have a UIImage (Cocoa Touch). From that, I'm happy to get a CGImage or anything else you'd like that's available. I'd like to write this function:
- (int)getRGBAFromImage:(UIImage *)image atX:(int)xx andY:(int)yy {
    // [...]
    // What do I want to read about to help
    // me fill in this bit, here?
    // [...]

    int result = (red << 24) | (green << 16) | (blue << 8) | alpha;
    return result;
}
FYI, I combined Keremk's answer with my original outline, cleaned up the typos, generalized it to return an array of colors, and got the whole thing to compile. Here is the result:
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)x andY:(int)y count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    for (int i = 0; i < count; ++i)
    {
        CGFloat alpha = ((CGFloat) rawData[byteIndex + 3]) / 255.0f;
        // Un-premultiply: normalize to 0..1 first, then divide by alpha (guarding against alpha == 0)
        CGFloat divisor = (alpha > 0) ? (255.0f * alpha) : 255.0f;
        CGFloat red   = ((CGFloat) rawData[byteIndex])     / divisor;
        CGFloat green = ((CGFloat) rawData[byteIndex + 1]) / divisor;
        CGFloat blue  = ((CGFloat) rawData[byteIndex + 2]) / divisor;
        byteIndex += bytesPerPixel;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
One way of doing it is to draw the image into a bitmap context that is backed by a given buffer, for a given color space (in this case RGB). Note that this copies the image data to that buffer, so you want to cache the buffer rather than repeating this operation every time you need pixel values.
See below as a sample:
// First get the image into your data buffer
CGImageRef image = [myUIImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), image); // note: the image argument was missing
CGContextRelease(context);

// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
red = rawData[byteIndex];
green = rawData[byteIndex + 1];
blue = rawData[byteIndex + 2];
alpha = rawData[byteIndex + 3];
Apple's Technical Q&A QA1509 shows the following simple approach:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}
Use CFDataGetBytePtr to get to the actual bytes (and various CGImageGet* methods to understand how to interpret them).
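To sketch that interpretation step in Swift (assuming, for illustration, a 32-bit-per-pixel image; a real implementation must consult the image's bitmap info first, since the data provider hands back bytes in whatever format the image was stored in, and `pixelBytes(of:x:y:)` is an illustrative name):

```swift
import CoreGraphics

// Sketch only: reads the 4 bytes of one pixel from the image's backing data.
// Assumes 32 bits per pixel; the component order (RGBA vs BGRA etc.) and
// premultiplication still depend on the image's CGBitmapInfo/CGImageAlphaInfo.
func pixelBytes(of image: CGImage, x: Int, y: Int) -> (UInt8, UInt8, UInt8, UInt8)? {
    guard let data = image.dataProvider?.data,
          let ptr = CFDataGetBytePtr(data),
          image.bitsPerPixel == 32 else { return nil }
    // bytesPerRow may include padding, so never compute the offset as width * 4
    let offset = y * image.bytesPerRow + x * (image.bitsPerPixel / 8)
    return (ptr[offset], ptr[offset + 1], ptr[offset + 2], ptr[offset + 3])
}
```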
I couldn't believe that there is not one single correct answer here. There is no need to allocate pointers, and the unmultiplied values still need to be normalized. To cut to the chase, here is the correct version for Swift 4. For a UIImage, just use .cgImage.
extension CGImage {
    func colors(at: [CGPoint]) -> [UIColor]? {
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bytesPerPixel = 4
        let bytesPerRow = bytesPerPixel * width
        let bitsPerComponent = 8
        let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

        guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo),
              let ptr = context.data?.assumingMemoryBound(to: UInt8.self) else {
            return nil
        }

        context.draw(self, in: CGRect(x: 0, y: 0, width: width, height: height))

        return at.map { p in
            let i = bytesPerRow * Int(p.y) + bytesPerPixel * Int(p.x)
            let a = CGFloat(ptr[i + 3]) / 255.0
            // Un-premultiply, guarding against division by zero for fully transparent pixels
            let d = a > 0 ? a : 1
            let r = (CGFloat(ptr[i]) / d) / 255.0
            let g = (CGFloat(ptr[i + 1]) / d) / 255.0
            let b = (CGFloat(ptr[i + 2]) / d) / 255.0
            return UIColor(red: r, green: g, blue: b, alpha: a)
        }
    }
}
The reason you have to draw/convert the image first into a buffer is because images can have several different formats. This step is required to convert it to a consistent format you can read.
NSString *path = [[NSBundle mainBundle] pathForResource:@"filename" ofType:@"jpg"];
UIImage *img = [[UIImage alloc] initWithContentsOfFile:path];
CGImageRef image = [img CGImage];
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
const unsigned char *buffer = CFDataGetBytePtr(data);
Here is a SO thread where @Matt renders only the desired pixel into a 1x1 context by displacing the image so that the desired pixel aligns with the one pixel in the context.
Swift 5 version
The answers given here are either outdated or incorrect because they don't take into account the following:
The pixel size of the image can differ from its point size that is returned by image.size.width/image.size.height.
There can be various layouts used by pixel components in the image, such as BGRA, ABGR, ARGB etc. or may not have an alpha component at all, such as BGR and RGB. For example, UIView.drawHierarchy(in:afterScreenUpdates:) method can produce BGRA images.
Color components can be premultiplied by the alpha for all pixels in the image and need to be divided by alpha in order to restore the original color.
For memory optimization used by CGImage, the size of a pixel row in bytes can be greater than the mere multiplication of the pixel width by 4.
The code below is to provide a universal Swift 5 solution to get the UIColor of a pixel for all such special cases. The code is optimized for usability and clarity, not for performance.
public extension UIImage {
var pixelWidth: Int {
return cgImage?.width ?? 0
}
var pixelHeight: Int {
return cgImage?.height ?? 0
}
func pixelColor(x: Int, y: Int) -> UIColor {
assert(
0..<pixelWidth ~= x && 0..<pixelHeight ~= y,
"Pixel coordinates are out of bounds")
guard
let cgImage = cgImage,
let data = cgImage.dataProvider?.data,
let dataPtr = CFDataGetBytePtr(data),
let colorSpaceModel = cgImage.colorSpace?.model,
let componentLayout = cgImage.bitmapInfo.componentLayout
else {
assertionFailure("Could not get a pixel of an image")
return .clear
}
assert(
colorSpaceModel == .rgb,
"The only supported color space model is RGB")
assert(
cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
"A pixel is expected to be either 4 or 3 bytes in size")
let bytesPerRow = cgImage.bytesPerRow
let bytesPerPixel = cgImage.bitsPerPixel/8
let pixelOffset = y*bytesPerRow + x*bytesPerPixel
if componentLayout.count == 4 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2],
dataPtr[pixelOffset + 3]
)
var alpha: UInt8 = 0
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgra:
alpha = components.3
red = components.2
green = components.1
blue = components.0
case .abgr:
alpha = components.0
red = components.3
green = components.2
blue = components.1
case .argb:
alpha = components.0
red = components.1
green = components.2
blue = components.3
case .rgba:
alpha = components.3
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
// If chroma components are premultiplied by alpha and the alpha is `0`,
// keep the chroma components to their current values.
if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
let invUnitAlpha = 255/CGFloat(alpha)
red = UInt8((CGFloat(red)*invUnitAlpha).rounded())
green = UInt8((CGFloat(green)*invUnitAlpha).rounded())
blue = UInt8((CGFloat(blue)*invUnitAlpha).rounded())
}
return .init(red: red, green: green, blue: blue, alpha: alpha)
} else if componentLayout.count == 3 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2]
)
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgr:
red = components.2
green = components.1
blue = components.0
case .rgb:
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
return .init(red: red, green: green, blue: blue, alpha: UInt8(255))
} else {
assertionFailure("Unsupported number of pixel components")
return .clear
}
}
}
public extension UIColor {
convenience init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
self.init(
red: CGFloat(red)/255,
green: CGFloat(green)/255,
blue: CGFloat(blue)/255,
alpha: CGFloat(alpha)/255)
}
}
public extension CGBitmapInfo {
enum ComponentLayout {
case bgra
case abgr
case argb
case rgba
case bgr
case rgb
var count: Int {
switch self {
case .bgr, .rgb: return 3
default: return 4
}
}
}
var componentLayout: ComponentLayout? {
guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
let isLittleEndian = contains(.byteOrder32Little)
if alphaInfo == .none {
return isLittleEndian ? .bgr : .rgb
}
let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
if isLittleEndian {
return alphaIsFirst ? .bgra : .abgr
} else {
return alphaIsFirst ? .argb : .rgba
}
}
var chromaIsPremultipliedByAlpha: Bool {
let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
}
}
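A minimal usage sketch of the extension above (the image name here is illustrative; it assumes an image of that name exists in the bundle):

```swift
import UIKit

// Sketch: sample the color at pixel (10, 20) of an already-loaded image.
let image = UIImage(named: "example")! // assumed to exist in the bundle
let color = image.pixelColor(x: 10, y: 20)

// Read back the normalized components for inspection.
var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
color.getRed(&r, green: &g, blue: &b, alpha: &a)
print("r: \(r), g: \(g), b: \(b), a: \(a)")
```

Note that `pixelColor(x:y:)` works in pixel coordinates, so on a Retina image a point coordinate must be multiplied by the image scale first.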
UIImage is a wrapper; the bytes are in a CGImage or CIImage
According to the Apple Reference on UIImage, the object is immutable and you have no access to the backing bytes. While it is true that you can access the CGImage data if you populated the UIImage with a CGImage (explicitly or implicitly), it will return NULL if the UIImage is backed by a CIImage, and vice-versa.
Image objects do not provide direct access to their underlying image
data. However, you can retrieve the image data in other formats for
use in your app. Specifically, you can use the cgImage and ciImage
properties to retrieve versions of the image that are compatible with
Core Graphics and Core Image, respectively. You can also use the
UIImagePNGRepresentation(_:) and UIImageJPEGRepresentation(_:_:)
functions to generate an NSData object containing the image data in
either the PNG or JPEG format.
Common tricks for getting around this issue
As stated your options are
UIImagePNGRepresentation or JPEG
Determine if image has CGImage or CIImage backing data and get it there
Neither of these is a particularly good trick if you want output that isn't ARGB, PNG, or JPEG data, and the data isn't already backed by a CIImage.
My recommendation, try CIImage
While developing your project it might make more sense for you to avoid UIImage altogether and pick something else. UIImage, as an Obj-C image wrapper, is so often backed by CGImage that we take it for granted. CIImage tends to be a better wrapper format in that you can use a CIContext to get out the format you desire without needing to know how it was created. In your case, getting the bitmap would be a matter of calling
- render:toBitmap:rowBytes:bounds:format:colorSpace:
As an added bonus you can start doing nice manipulations to the image by chaining filters onto the image. This solves a lot of the issues where the image is upside down or needs to be rotated/scaled etc.
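As a sketch of what that render call looks like in Swift (the function name and the RGBA8 buffer layout here are illustrative choices, not part of the original answer):

```swift
import CoreImage

// Sketch: render a CIImage into a raw RGBA8 buffer, one known format
// regardless of how the image was originally created.
func rgba8Bytes(of ciImage: CIImage) -> [UInt8] {
    let context = CIContext()
    let bounds = ciImage.extent.integral
    let rowBytes = Int(bounds.width) * 4 // 4 bytes per RGBA8 pixel
    var buffer = [UInt8](repeating: 0, count: rowBytes * Int(bounds.height))
    buffer.withUnsafeMutableBytes { ptr in
        context.render(ciImage,
                       toBitmap: ptr.baseAddress!,
                       rowBytes: rowBytes,
                       bounds: bounds,
                       format: .RGBA8,
                       colorSpace: CGColorSpaceCreateDeviceRGB())
    }
    return buffer
}
```

Because you name the format explicitly, there is no per-image layout detection to do afterwards.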
Building on Olie's and Algal's answers, here is an updated version for Swift 3:
public func getRGBAs(fromImage image: UIImage, x: Int, y: Int, count: Int) -> [UIColor] {
var result = [UIColor]()
// First get the image into your data buffer
guard let cgImage = image.cgImage else {
print("Could not get CGImage from UIImage")
return []
}
let width = cgImage.width
let height = cgImage.height
let colorSpace = CGColorSpaceCreateDeviceRGB()
let rawdata = calloc(height*width*4, MemoryLayout<CUnsignedChar>.size)
let bytesPerPixel = 4
let bytesPerRow = bytesPerPixel * width
let bitsPerComponent = 8
let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue
guard let context = CGContext(data: rawdata, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
print("CGContext creation failed")
return result
}
context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
// Now rawdata contains the image data in the RGBA8888 pixel format.
var byteIndex = bytesPerRow * y + bytesPerPixel * x
for _ in 0..<count {
let alpha = CGFloat(rawdata!.load(fromByteOffset: byteIndex + 3, as: UInt8.self)) / 255.0
// Un-premultiply each component (guarding against division by zero),
// then normalize to the 0...1 range UIColor expects.
let red = alpha > 0 ? CGFloat(rawdata!.load(fromByteOffset: byteIndex, as: UInt8.self)) / alpha / 255.0 : 0
let green = alpha > 0 ? CGFloat(rawdata!.load(fromByteOffset: byteIndex + 1, as: UInt8.self)) / alpha / 255.0 : 0
let blue = alpha > 0 ? CGFloat(rawdata!.load(fromByteOffset: byteIndex + 2, as: UInt8.self)) / alpha / 255.0 : 0
byteIndex += bytesPerPixel
let aColor = UIColor(red: red, green: green, blue: blue, alpha: alpha)
result.append(aColor)
}
free(rawdata)
return result
}
Swift
To access the raw RGB values of a UIImage in Swift 5, use the underlying CGImage and its dataProvider:
import UIKit
let image = UIImage(named: "example.png")!
guard let cgImage = image.cgImage,
let data = cgImage.dataProvider?.data,
let bytes = CFDataGetBytePtr(data) else {
fatalError("Couldn't access image data")
}
assert(cgImage.colorSpace?.model == .rgb)
let bytesPerPixel = cgImage.bitsPerPixel / cgImage.bitsPerComponent
for y in 0 ..< cgImage.height {
for x in 0 ..< cgImage.width {
let offset = (y * cgImage.bytesPerRow) + (x * bytesPerPixel)
let components = (r: bytes[offset], g: bytes[offset + 1], b: bytes[offset + 2])
print("[x:\(x), y:\(y)] \(components)")
}
print("---")
}
https://www.ralfebert.de/ios/examples/image-processing/uiimage-raw-pixels/
Based on different answers but mainly on this, this works for what I need:
UIImage *image1 = ...; // The image from where you want a pixel data
int pixelX = ...; // The X coordinate of the pixel you want to retrieve
int pixelY = ...; // The Y coordinate of the pixel you want to retrieve
uint32_t pixel1; // Where the pixel data is to be stored
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, CGImageGetWidth(image1.CGImage), CGImageGetHeight(image1.CGImage)), image1.CGImage);
CGContextRelease(context1);
CGColorSpaceRelease(colorSpace); // avoid leaking the color space
As a result of these lines, you will have the pixel in AARRGGBB format, with alpha always set to FF, in the 4-byte unsigned integer pixel1.
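The same 1x1-context trick can be sketched in Swift (the function name is illustrative; the packed byte order of the returned UInt32 depends on the context's byte-order flags, so inspect the individual bytes rather than assuming a layout):

```swift
import UIKit

// Sketch: draw the image offset so the wanted pixel lands in a 1x1 context.
func packedPixel(of image: UIImage, x: Int, y: Int) -> UInt32? {
    guard let cgImage = image.cgImage else { return nil }
    var pixel: UInt32 = 0
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(data: &pixel, width: 1, height: 1,
                                  bitsPerComponent: 8, bytesPerRow: 4,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
        return nil
    }
    // Negative origin shifts the image so pixel (x, y) falls on the context's
    // single pixel; only that one pixel is ever rendered.
    context.draw(cgImage, in: CGRect(x: -x, y: -y,
                                     width: cgImage.width, height: cgImage.height))
    return pixel
}
```

This avoids copying the whole bitmap when you only need one pixel.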

resize and crop image centered

So currently I'm trying to crop and resize a picture to fit it into a specific size without losing the aspect ratio.
A small image to show what I mean:
I played a bit with vocaro's categories, but they don't work with PNGs and have problems with GIFs. Also the image doesn't get cropped.
Does anyone have a suggestion on how best to do this resizing, or perhaps a link to an existing library/category/whatever?
Thanks for all hints!
P.S.: Does iOS provide a built-in "select an excerpt" control, so that I get the right ratio and only have to scale?
This method will do what you want and is a category of UIImage for ease of use. I went with resize then crop, you could switch the code around easily enough if you want crop then resize. The bounds checking in the function is purely illustrative. You might want to do something different, for example center the crop rect relative to the outputImage dimensions but this ought to get you close enough to make whatever other changes you need.
@implementation UIImage( resizeAndCropExample )
- (UIImage *) resizeToSize:(CGSize) newSize thenCropWithRect:(CGRect) cropRect {
CGContextRef context;
CGImageRef imageRef;
CGSize inputSize;
UIImage *outputImage = nil;
CGFloat scaleFactor, width;
// resize, maintaining aspect ratio:
inputSize = self.size;
scaleFactor = newSize.height / inputSize.height;
width = roundf( inputSize.width * scaleFactor );
if ( width > newSize.width ) {
scaleFactor = newSize.width / inputSize.width;
newSize.height = roundf( inputSize.height * scaleFactor );
} else {
newSize.width = width;
}
UIGraphicsBeginImageContext( newSize );
context = UIGraphicsGetCurrentContext();
// added 2016.07.29, flip image vertically before drawing:
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, newSize.height);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, CGRectMake(0, 0, newSize.width, newSize.height), self.CGImage);
// // alternate way to draw
// [self drawInRect: CGRectMake( 0, 0, newSize.width, newSize.height )];
CGContextRestoreGState(context);
outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
inputSize = newSize;
// constrain crop rect to legitimate bounds
if ( cropRect.origin.x >= inputSize.width || cropRect.origin.y >= inputSize.height ) return outputImage;
if ( cropRect.origin.x + cropRect.size.width >= inputSize.width ) cropRect.size.width = inputSize.width - cropRect.origin.x;
if ( cropRect.origin.y + cropRect.size.height >= inputSize.height ) cropRect.size.height = inputSize.height - cropRect.origin.y;
// crop
if ( ( imageRef = CGImageCreateWithImageInRect( outputImage.CGImage, cropRect ) ) ) {
outputImage = [[[UIImage alloc] initWithCGImage: imageRef] autorelease];
CGImageRelease( imageRef );
}
return outputImage;
}
@end
I came across the same issue in one of my applications and developed this piece of code:
+ (UIImage*)resizeImage:(UIImage*)image toFitInSize:(CGSize)toSize
{
UIImage *result = image;
CGSize sourceSize = image.size;
CGSize targetSize = toSize;
BOOL needsRedraw = NO;
// Check if width of source image is greater than width of target image
// Calculate the percentage of change in width required and update it in toSize accordingly.
if (sourceSize.width > toSize.width) {
CGFloat ratioChange = (sourceSize.width - toSize.width) * 100 / sourceSize.width;
toSize.height = sourceSize.height - (sourceSize.height * ratioChange / 100);
needsRedraw = YES;
}
// If the height still falls short of the target, scale up to the target
// height and widen the width by the same proportion, so the image fills
// the target size in both dimensions.
if (toSize.height < targetSize.height) {
CGFloat ratioChange = (targetSize.height - toSize.height) * 100 / targetSize.height;
toSize.height = targetSize.height;
toSize.width = toSize.width + (toSize.width * ratioChange / 100);
needsRedraw = YES;
}
// To redraw the image
if (needsRedraw) {
UIGraphicsBeginImageContext(toSize);
[image drawInRect:CGRectMake(0.0, 0.0, toSize.width, toSize.height)];
result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
// Return the result
return result;
}
You can modify it according to your needs.
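If the goal is a standard aspect-fit, the percentage arithmetic above condenses to taking the minimum of the two width/height ratios. A minimal sketch with plain Doubles (the function name is illustrative):

```swift
// Sketch: compute an aspect-fit size, i.e. the largest size that fits
// inside `target` while preserving `source`'s aspect ratio.
func aspectFitSize(source: (w: Double, h: Double),
                   target: (w: Double, h: Double)) -> (w: Double, h: Double) {
    let scale = min(target.w / source.w, target.h / source.h)
    return (source.w * scale, source.h * scale)
}

// A 400x265 image fit into a 200x200 box scales by 0.5 to 200x132.5.
let fitted = aspectFitSize(source: (400, 265), target: (200, 200))
```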
Had the same task for a preview image in a gallery. For a fixed crop area (an Image in SwiftUI, a Canvas rect in Kotlin) I want to crop the central content of the image, fitting the crop area to the larger of the image's sides (see the explanation below).
Here are solutions for
Swift
func getImageCropped(srcImage : UIImage, sizeToCrop : CGSize) -> UIImage{
let ratioImage = Double(srcImage.cgImage!.width ) / Double(srcImage.cgImage!.height )
let ratioCrop = Double(sizeToCrop.width) / Double(sizeToCrop.height)
let cropRect: CGRect = {
if(ratioCrop > 1.0){
// crop LAND -> fit image HORIZONTALLY
let widthRatio = CGFloat(srcImage.cgImage!.width) / CGFloat(sizeToCrop.width)
var cropWidth = Int(sizeToCrop.width * widthRatio)
var cropHeight = Int(sizeToCrop.height * widthRatio)
var cropX = 0
var cropY = srcImage.cgImage!.height / 2 - cropHeight / 2
// [L1] [L2] : OK
if(ratioImage > 1.0) {
// image LAND
// [L3] : OK
if(cropHeight > srcImage.cgImage!.height) {
// [L4] : Crop-Area exceeds Image-Area > change crop orientation to PORTrait
let heightRatio = CGFloat(srcImage.cgImage!.height) / CGFloat(sizeToCrop.height)
cropWidth = Int(sizeToCrop.width * heightRatio)
cropHeight = Int(sizeToCrop.height * heightRatio)
cropX = srcImage.cgImage!.width / 2 - cropWidth / 2
cropY = 0
}
}
return CGRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
}
else if(ratioCrop < 1.0){
// crop PORT -> fit image VERTICALLY
let heightRatio = CGFloat(srcImage.cgImage!.height) / CGFloat(sizeToCrop.height)
var cropWidth = Int(sizeToCrop.width * heightRatio)
var cropHeight = Int(sizeToCrop.height * heightRatio)
var cropX = srcImage.cgImage!.width / 2 - cropWidth / 2
var cropY = 0
// [P1] [P2] : OK
if(ratioImage < 1.0) {
// image PORT
// [P3] : OK
if(cropWidth > srcImage.cgImage!.width) {
// [L4] : Crop-Area exceeds Image-Area > change crop orientation to LANDscape
let widthRatio = CGFloat(srcImage.cgImage!.width) / CGFloat(sizeToCrop.width)
cropWidth = Int(sizeToCrop.width * widthRatio)
cropHeight = Int(sizeToCrop.height * widthRatio)
cropX = 0
cropY = srcImage.cgImage!.height / 2 - cropHeight / 2
}
}
return CGRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
}
else {
// CROP CENTER SQUARE
var fitSide = 0
var cropX = 0
var cropY = 0
if (ratioImage > 1.0){
// crop LAND -> fit image HORIZONTALLY !!!!!!
fitSide = srcImage.cgImage!.height
cropX = srcImage.cgImage!.width / 2 - fitSide / 2
}
else if (ratioImage < 1.0){
// crop PORT -> fit image VERTICALLY
fitSide = srcImage.cgImage!.width
cropY = srcImage.cgImage!.height / 2 - fitSide / 2
}
else{
// ImageArea and CropArea are both square
fitSide = srcImage.cgImage!.width
}
return CGRect(x: cropX, y: cropY, width: fitSide, height: fitSide)
}
}()
let imageRef = srcImage.cgImage!.cropping(to: cropRect)
let cropped : UIImage = UIImage(cgImage: imageRef!, scale: srcImage.scale, orientation: srcImage.imageOrientation)
return cropped
}
and
Kotlin
data class RectCrop(val x: Int, val y: Int, val width: Int, val height: Int)
fun getImageCroppedShort(srcBitmap: Bitmap, sizeToCrop: Size):Bitmap {
val ratioImage = srcBitmap.width.toFloat() / srcBitmap.height.toFloat()
val ratioCrop = sizeToCrop.width.toFloat() / sizeToCrop.height.toFloat()
// 1. choose fit size
val cropRect: RectCrop =
if(ratioCrop > 1.0){
// crop LAND
val widthRatio = srcBitmap.width.toFloat() / sizeToCrop.width.toFloat()
var cropWidth = (sizeToCrop.width * widthRatio).toInt()
var cropHeight= (sizeToCrop.height * widthRatio).toInt()
var cropX = 0
var cropY = srcBitmap.height / 2 - cropHeight / 2
if(ratioImage > 1.0) {
// image LAND
if(cropHeight > srcBitmap.height) {
val heightRatio = srcBitmap.height.toFloat() / sizeToCrop.height.toFloat()
cropWidth = (sizeToCrop.width * heightRatio).toInt()
cropHeight = (sizeToCrop.height * heightRatio).toInt()
cropX = srcBitmap.width / 2 - cropWidth / 2
cropY = 0
}
}
RectCrop(cropX, cropY, cropWidth, cropHeight)
}
else if(ratioCrop < 1.0){
// crop PORT
val heightRatio = srcBitmap.height.toFloat() / sizeToCrop.height.toFloat()
var cropWidth = (sizeToCrop.width * heightRatio).toInt()
var cropHeight= (sizeToCrop.height * heightRatio).toInt()
var cropX = srcBitmap.width / 2 - cropWidth / 2
var cropY = 0
if(ratioImage < 1.0) {
// image PORT
if(cropWidth > srcBitmap.width) {
val widthRatio = srcBitmap.width.toFloat() / sizeToCrop.width.toFloat()
cropWidth = (sizeToCrop.width * widthRatio).toInt()
cropHeight = (sizeToCrop.height * widthRatio).toInt()
cropX = 0
cropY = srcBitmap.height / 2 - cropHeight / 2
}
}
RectCrop(cropX, cropY, cropWidth, cropHeight)
}
else {
// CROP CENTER SQUARE
var fitSide = 0
var cropX = 0
var cropY = 0
if (ratioImage > 1.0){
fitSide = srcBitmap.height
cropX = srcBitmap.width/ 2 - fitSide / 2
}
else if (ratioImage < 1.0){
fitSide = srcBitmap.width
cropY = srcBitmap.height / 2 - fitSide / 2
}
else{
fitSide = srcBitmap.width
}
RectCrop(cropX, cropY, fitSide, fitSide)
}
return Bitmap.createBitmap(
srcBitmap,
cropRect.x,
cropRect.y,
cropRect.width,
cropRect.height)
}
An explanation for those who want to understand the algorithm. The main idea: stretch the crop area proportionally(!) until its biggest side fits the image. There is one unacceptable case (L4 and P4), when the crop area exceeds the image area. In that case there is only one option: change the fit direction and stretch the crop area along the other side.
On the scheme I didn't center the crop (to make the idea easier to see), but both of these solutions do center it. Here is the result of getImageCropped:
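The branching in both functions above collapses to a single aspect-fill computation: the centered crop rect is the crop shape scaled by min(imageW/cropW, imageH/cropH). A condensed sketch with plain Doubles (the function name is illustrative):

```swift
// Sketch: largest crop-shaped rect that fits inside the image, centered.
func centeredCropRect(imageW: Double, imageH: Double,
                      cropW: Double, cropH: Double)
    -> (x: Double, y: Double, w: Double, h: Double) {
    // Scaling by the smaller ratio guarantees the rect never exceeds
    // the image in either dimension (the L4/P4 cases fall out for free).
    let scale = min(imageW / cropW, imageH / cropH)
    let w = cropW * scale
    let h = cropH * scale
    return ((imageW - w) / 2, (imageH - h) / 2, w, h)
}

// Example: a 400x265 image with a 400x200 crop area keeps full width
// and centers a 200-pixel-tall band vertically.
let r = centeredCropRect(imageW: 400, imageH: 265, cropW: 400, cropH: 200)
```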
This SwiftUI code provides images above to test:
var body: some View {
// IMAGE LAND
let ORIG_NAME = "image_land.jpg"
let ORIG_W = 400.0
let ORIG_H = 265.0
// > crop Land
let cropW = 400.0
let cropH = 200.0
// > crop Port
// let cropW = 50.0
// let cropH = 265.0
// > crop Center Square
// let cropW = 265.0
// let cropH = 265.0
// IMAGE PORT
// let ORIG_NAME = "image_port.jpg"
// let ORIG_W = 350.0
// let ORIG_H = 500.0
// > crop Land
// let cropW = 350.0
// let cropH = 410.0
// > crop Port
// let cropW = 190.0
// let cropH = 500.0
// > crop Center Square
// let cropW = 350.0
// let cropH = 350.0
let imageOriginal = UIImage(named: ORIG_NAME)!
let imageCropped = self.getImageCropped(srcImage: imageOriginal, sizeToCrop: CGSize(width: cropW, height: cropH))
return VStack{
HStack{
Text("ImageArea \nW:\(Int(ORIG_W)) \nH:\(Int(ORIG_H))").font(.body)
Text("CropArea \nW:\(Int(cropW)) \nH:\(Int(cropH))").font(.body)
}
ZStack{
Image(uiImage: imageOriginal)
.resizable()
.opacity(0.4)
Image(uiImage: imageCropped)
.resizable()
.frame(width: CGFloat(cropW), height: CGFloat(cropH))
}
.frame(width: CGFloat(ORIG_W), height: CGFloat(ORIG_H))
.background(Color.black)
}
}
The Kotlin solution works identically. Trust me)