I wrote an NSImage extension to allow me to take random samples of an image. I would like those samples to retain the same quality as the original image. However, they appear to be aliased or slightly blurry. Here's an example - the original drawn on the right and a random sample on the left:
I'm playing around with this in SpriteKit at the moment. Here's how I create the original image:
let bg = NSImage(imageLiteralResourceName: "ref")
let tex = SKTexture(image: bg)
let sprite = SKSpriteNode(texture: tex)
sprite.position = CGPoint(x: size.width/2, y:size.height/2)
addChild(sprite)
And here's how I create the sample:
let sample = bg.sample(size: NSSize(width: 100, height: 100))
let sampletex = SKTexture(image: sample!)
let samplesprite = SKSpriteNode(texture: sampletex)
samplesprite.position = CGPoint(x: 60, y: size.height/2)
addChild(samplesprite)
Here's the NSImage extension (and randomNumber func) that creates the sample:
extension NSImage {
/// Returns the height of the current image.
var height: CGFloat {
return self.size.height
}
/// Returns the width of the current image.
var width: CGFloat {
return self.size.width
}
func sample(size: NSSize) -> NSImage? {
// Take a random sample of the image at the requested size.
let source = self
// Make sure that we are within a suitable range
var checkedSize = size
checkedSize.width = floor(min(checkedSize.width,source.size.width * 0.9))
checkedSize.height = floor(min(checkedSize.height, source.size.height * 0.9))
// Get random points for the crop.
let x = randomNumber(range: 0...(Int(source.width) - Int(checkedSize.width)))
let y = randomNumber(range: 0...(Int(source.height) - Int(checkedSize.height)))
// Create the cropping frame.
var frame = NSRect(x: x, y: y, width: Int(checkedSize.width), height: Int(checkedSize.height))
// let ref = source.cgImage.cropping(to:frame)
guard let ref = source.cgImage(forProposedRect: &frame, context: nil, hints: nil) else { return nil }
let rep = NSBitmapImageRep(cgImage: ref)
// Create a new image with the new size
let img = NSImage(size: checkedSize)
// Set a graphics context
img.lockFocus()
defer { img.unlockFocus() }
// Fill in the sample image
if rep.draw(in: NSRect(x: 0, y: 0, width: checkedSize.width, height: checkedSize.height),
from: frame,
operation: .copy,
fraction: 1.0,
respectFlipped: false,
hints: [.interpolation: NSImageInterpolation.high.rawValue]) {
// Return the cropped image.
return img
}
// Return nil in case anything fails.
return nil
}
}
func randomNumber(range: ClosedRange<Int> = 0...100) -> Int {
let min = range.lowerBound
let max = range.upperBound
return Int(arc4random_uniform(UInt32(1 + max - min))) + min
}
I've tried this about 10 different ways and the results always seem to be a slightly blurry sample. I even checked for smudges on my screen. :)
How can I create a sample of an NSImage that retains the exact qualities of the section of the original source image?
Switching the interpolation mode to NSImageInterpolation.none was apparently sufficient in this case.
It's also important to handle the draw destination rect correctly. Since cgImage(forProposedRect:...) may change the proposed rect, you should use a destination rect that's based on it. You should basically use a copy of frame that's offset by (-x, -y) so it's relative to (0, 0) instead of (x, y).
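For reference, a minimal sketch of the adjusted draw call inside sample(size:) - the only changes from the code above are the interpolation hint and the offset destination rect (x and y are the random crop origin computed earlier):
// Offset the destination so it's relative to (0, 0) rather than (x, y)
let destination = frame.offsetBy(dx: CGFloat(-x), dy: CGFloat(-y))
if rep.draw(in: destination,
from: frame,
operation: .copy,
fraction: 1.0,
respectFlipped: false,
hints: [.interpolation: NSImageInterpolation.none.rawValue]) {
return img
}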
I'm doing attention-based saliency and need to pass an image to the request. When the contentMode is scaleAspectFill, the result of the request is not correct, because I use the full image (not just the part visible on screen).
I'm trying to crop the UIImage, but this method doesn't crop correctly:
let newImage = cropImage(imageToCrop: imageView.image, toRect: imageView.frame)
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
return nil
}
let cropped: UIImage = UIImage(cgImage: imageRef)
return cropped
}
How can I make the saliency request use only the visible part of the image (which changes when the contentMode changes)?
If I understand your goal correctly...
Suppose we have this 640 x 360 image:
and we display it in a 240 x 240 image view, using .scaleAspectFill...
It looks like this (the red outline is the image view frame):
and, with .clipsToBounds = true:
we want to generate this new 360 x 360 image (that is, we want to keep the original image resolution... we don't want to end up with a 240 x 240 image):
To crop the visible portion of the image, we need to calculate the scaled rect, including the offset:
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
return nil
}
let cropped: UIImage = UIImage(cgImage: imageRef)
return cropped
}
func myCrop(imgView: UIImageView) -> UIImage? {
// get the image from the imageView
guard let img = imgView.image else { return nil }
// image view rect
let vr: CGRect = imgView.bounds
// image size -- we need to account for scale
let imgSZ: CGSize = CGSize(width: img.size.width * img.scale, height: img.size.height * img.scale)
let viewRatio: CGFloat = vr.width / vr.height
let imgRatio: CGFloat = imgSZ.width / imgSZ.height
var newRect: CGRect = .zero
// calculate the rect that needs to be clipped from the full image
if viewRatio > imgRatio {
// the view has a wider aspect ratio than the image
// so the top and bottom of the image will be clipped
let f: CGFloat = imgSZ.width / vr.width
let h: CGFloat = vr.height * f
newRect.origin.y = (imgSZ.height - h) * 0.5
newRect.size.width = imgSZ.width
newRect.size.height = h
} else {
// the view has a narrower aspect ratio than the image
// so the left and right of the image will be clipped
let f: CGFloat = imgSZ.height / vr.height
let w: CGFloat = vr.width * f
newRect.origin.x = (imgSZ.width - w) * 0.5
newRect.size.width = w
newRect.size.height = imgSZ.height
}
return cropImage(imageToCrop: img, toRect: newRect)
}
and call it like this:
if let croppedImage = myCrop(imgView: theImageView) {
// do something with the new image
}
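To tie this back to the saliency part of the question, here is a sketch (assuming iOS 13+ Vision APIs) of feeding the cropped image to an attention-based saliency request, so the request only sees the visible portion:
import Vision

if let croppedImage = myCrop(imgView: theImageView),
   let cgImage = croppedImage.cgImage {
    // run saliency on just the visible portion of the image
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
    // request.results?.first is the VNSaliencyImageObservation
}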
I'm writing a SplitMirror filter, like the one in the Apple Motion app or Snapchat lenses, using UIGraphics, for use with a real-time camera feed or video processing. With a single image request it works well, as in the attached image in the question, but not for multiple filtering requests. I think the code must be changed from UIGraphics to Core Image and a CIContext for better performance and less memory usage, like CIFilters, but I don't know how to do it.
I tried several ways to convert it, but I got stuck on merging the left and right halves. This can probably be done with the CICategoryCompositeOperation filters, but I have no idea which one fits this case, so I need some help with this issue.
Filter code using UIGraphics:
//
// SplitMirror filter.swift
// Image Editor
//
// Created by Coder ACJHP on 25.02.2022.
//
// SnapChat & Tiktok & Motion App like image filter
// Inspired from 'https://support.apple.com/tr-tr/guide/motion/motn169f94ea/mac'
// Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilter(processingImage image: UIImage) -> UIImage {
// Image size
let imageSize = image.size
// Left half
let leftHalfRect = CGRect(
origin: .zero,
size: CGSize(
width: imageSize.width/2,
height: imageSize.height
)
)
// Right half
let rightHalfRect = CGRect(
origin: CGPoint(
x: imageSize.width - (imageSize.width/2).rounded(),
y: 0
),
size: CGSize(
width: imageSize.width - (imageSize.width/2).rounded(),
height: imageSize.height
)
)
// Split image into two parts
guard let cgRightHalf = image.cgImage?.cropping(to: rightHalfRect) else { return image }
// Flip right side to be used as left side
let flippedLeft = UIImage(cgImage: cgRightHalf, scale: image.scale, orientation: .upMirrored)
let unFlippedRight = UIImage(cgImage: cgRightHalf, scale: image.scale, orientation: image.imageOrientation)
UIGraphicsBeginImageContextWithOptions(imageSize, false, image.scale)
// Make sure the context is ended even if getting the image fails
defer { UIGraphicsEndImageContext() }
flippedLeft.draw(at: leftHalfRect.origin)
unFlippedRight.draw(at: rightHalfRect.origin)
guard let splitMirroredImage = UIGraphicsGetImageFromCurrentImageContext() else { return image }
return splitMirroredImage
}
Here is what I tried with Core Image:
// Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilterCIImageVersion(processingImage image: UIImage) -> CIImage? {
guard let ciImageCopy = CIImage(image: image) else { return image.ciImage }
// Image size
let imageSize = ciImageCopy.extent.size
let imageRect = CGRect(origin: .zero, size: imageSize)
// Left half
let leftHalfRect = CGRect(
origin: .zero,
size: CGSize(
width: imageSize.width/2,
height: imageSize.height
)
)
// Right half
let rightHalfRect = CGRect(
origin: CGPoint(
x: imageSize.width - (imageSize.width/2).rounded(),
y: 0
),
size: CGSize(
width: imageSize.width - (imageSize.width/2).rounded(),
height: imageSize.height
)
)
// Split image into two parts
let cgRightHalf = ciImageCopy.cropped(to: rightHalfRect)
context.draw(cgRightHalf.oriented(.upMirrored), in: leftHalfRect, from: imageRect)
context.draw(cgRightHalf, in: rightHalfRect, from: imageRect)
// I'm stuck here
// Merge two images into one
// Here I don't know which filter can be used for the merge op
// CICategoryCompositeOperation filters may fit
}
I think you are on the right track.
You can create the left half from the right half by applying transformations to it using let leftHalf = rightHalf.transformed(by: transformation). The transformation should mirror it and translate it to the correct position, i.e., next to the right half.
You can then combine the two into one image using let result = leftHalf.composited(over: rightHalf) and render that result using a CIContext.
After getting the right idea from Frank Schlegel's answer, I rewrote the filter code and it now works well.
// Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilterCIImageVersion(processingImage image: UIImage) -> CIImage? {
guard let ciImageCopy = CIImage(image: image) else { return image.ciImage }
// Image size
let imageSize = ciImageCopy.extent.size
// Right half
let rightHalfRect = CGRect(
origin: CGPoint(
x: imageSize.width - (imageSize.width/2).rounded(),
y: 0
),
size: CGSize(
width: imageSize.width - (imageSize.width/2).rounded(),
height: imageSize.height
)
)
// Split image into two parts
let ciRightHalf = ciImageCopy.cropped(to: rightHalfRect)
// Make transform to move right part to left
let transform = CGAffineTransform(translationX: -rightHalfRect.size.width, y: -rightHalfRect.origin.y)
// Create left part and apply transform then flip it
let ciLeftHalf = ciRightHalf.transformed(by: transform).oriented(.upMirrored)
// Merge two images into one
return ciLeftHalf.composited(over: ciRightHalf)
}
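One detail the snippet leaves out: the returned CIImage still has to be rendered. A minimal sketch, assuming a single shared CIContext (creating one per frame is expensive, especially for video):
let ciContext = CIContext() // create once and reuse

func renderedUIImage(from ciImage: CIImage) -> UIImage? {
    // Core Image is lazy; this is where the filter chain actually executes
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}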
I have the following scene example where I can crop an image based on the selection (red square).
That square has a dynamic height and width - based on this, I want to use the selected height and width to crop what is inside of the red square.
The function that I am using for cropping is from Apple developer and looks like this:
func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage?
{
let imageViewScale = max(inputImage.size.width / viewWidth,
inputImage.size.height / viewHeight)
// Scale cropRect to handle images larger than shown-on-screen size
let cropZone = CGRect(x:cropRect.origin.x * imageViewScale,
y:cropRect.origin.y * imageViewScale,
width:cropRect.size.width * imageViewScale,
height:cropRect.size.height * imageViewScale)
// Perform cropping in Core Graphics
guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to:cropZone)
else {
return nil
}
// Return image to UIImage
let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
return croppedImage
}
Now. I want to use the given Height and Width to crop that selection.
let croppedImage = cropImage(image!, toRect: CGRect(x:?? , y:?? , width: ??, height: ??), viewWidth: ??, viewHeight: ??)
What should I fill in these parameters in order to crop the image based on the above dynamic selection?
Ok, since you just have the width and height of the cropping shape, you'll need to calculate the x and y yourself.
First, let's consider these information:
// let's pretend this is a sample of size that your crop tool provides to you
let cropSize = CGSize(width: 120, height: 260)
Next, you'll need to obtain the display size (width and height) of your image. The display size here is the size of the image view's frame, not the size of the image itself.
// again, lets pretend it's just a frame size of your image
let imageSize = CGSize(width: 320, height: 480)
With this info, you can obtain the x and y necessary to compose a CGRect, and then pass it to whatever cropping function you prefer.
let x = (imageSize.width - cropSize.width) / 2
let y = (imageSize.height - cropSize.height) / 2
So now, you can create a rectangle to crop your image like this:
let cropRect = CGRect(x: x, y: y, width: cropSize.width, height: cropSize.height)
With cropRect you can use on both cropping or cropImage functions mentioned in your question.
Ok, let's assume that your image is in imageView, which is located somewhere on your screen. The rect is a variable where your selected frame (relative to imageView.frame) is stored. So the result is:
let croppedImage = cropImage(image!, toRect: rect, viewWidth: imageView.frame.width, viewHeight: imageView.frame.height)
I've used the info from all of your answers, and especially @matt's comment, and this is the final solution.
Using the input values that my red square returned, I've adapted the original Crop function to this one:
func cropImage(_ inputImage: UIImage, width: Double, height: Double) -> UIImage?
{
let imsize = inputImage.size
let ivsize = UIScreen.main.bounds.size
var scale : CGFloat = ivsize.width / imsize.width
if imsize.height * scale < ivsize.height {
scale = ivsize.height / imsize.height
}
let croppedImsize = CGSize(width:height/scale, height:width/scale)
let croppedImrect =
CGRect(origin: CGPoint(x: (imsize.width-croppedImsize.width)/2.0,
y: (imsize.height-croppedImsize.height)/2.4),
size: croppedImsize)
let r = UIGraphicsImageRenderer(size:croppedImsize)
let croppedIm = r.image { _ in
inputImage.draw(at: CGPoint(x:-croppedImrect.origin.x, y:-croppedImrect.origin.y))
}
return croppedIm
}
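It can then be called with the red square's reported size, for example (values hypothetical):
let croppedImage = cropImage(image!, width: 120, height: 260)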
I am trying to get an image drawn of a CALayer containing a number of sublayers positioned at specific points, but at the moment it does not honour the zPosition of the sublayers when I use CALayer.render(in:). It works fine on screen, but when rendering to PDF it seems to render them in the order they were created.
These sublayers are positioned (x, y, angle) on the drawing layer.
One solution seems to be to override the render(in:) method on the drawing layer, which seems to work, except the rendering of the sublayers is in the incorrect position: they are all in the bottom left corner (0, 0) and not rotated correctly.
override func render(in ctx: CGContext) {
if let layers:[CALayer] = self.sublayers {
let orderedLayers = layers.sorted(by: {
$0.zPosition < $1.zPosition
})
for v in orderedLayers {
v.render(in: ctx)
}
}
}
If I don't override this method then they are positioned correctly but just in the wrong zPosition - i.e. ones that should be at the bottom (zPosition = 0) are at the top.
What am I missing here? It seems I need to position the sublayers correctly somehow in the render(in:) function?
How do I do this? These sublayers have already been positioned on screen, and all I am trying to do is generate an image of the drawing. This is done using the following function.
func createPdfData() -> Data? {
DebugLog("")
let scale: CGFloat = 1
let mWidth = drawingLayer.frame.width * scale
let mHeight = drawingLayer.frame.height * scale
var cgRect = CGRect(x: 0, y: 0, width: mWidth, height: mHeight)
let documentInfo = [kCGPDFContextCreator as String:"MakeSpace(www.xxxx.com)",
kCGPDFContextTitle as String:"Layout Image",
kCGPDFContextAuthor as String:GlobalVars.shared.appUser?.username ?? "",
kCGPDFContextSubject as String:self.level?.imageCode ?? "",
kCGPDFContextKeywords as String:"XXXX, Layout"]
let data = NSMutableData()
guard let pdfData = CGDataConsumer(data: data),
let ctx = CGContext(consumer: pdfData, mediaBox: &cgRect, documentInfo as CFDictionary) else {
return nil
}
ctx.beginPDFPage(nil)
ctx.saveGState()
ctx.scaleBy(x: scale, y: scale)
self.drawingLayer.render(in: ctx)
ctx.restoreGState()
ctx.endPDFPage()
ctx.closePDF()
return data as Data
}
This is what I ended up doing - and it seems to work.
class ZOrderDrawingLayer: CALayer {
override func render(in ctx: CGContext) {
if let layers:[CALayer] = self.sublayers {
let orderedLayers = layers.sorted(by: {
$0.zPosition < $1.zPosition
})
for v in orderedLayers {
ctx.saveGState()
// Translate and rotate the context using the sublayers
// size, position and transform (angle)
let w = v.bounds.width/2
let ww = w*w
let h = v.bounds.height/2
let hh = h*h
let c = sqrt(ww + hh)
let theta = asin(h/c)
let angle = atan2(v.transform.m12, v.transform.m11)
let x = c * cos(theta+angle)
let y = c * sin(theta+angle)
ctx.translateBy(x: v.position.x-x, y: v.position.y-y)
ctx.rotate(by: angle)
v.render(in: ctx)
ctx.restoreGState()
}
}
}
}
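A usage sketch, assuming the drawing layer is built in code (the setup below is hypothetical); createPdfData() needs no changes, since it already calls render(in:) on the drawing layer, which now dispatches to the override:
let drawingLayer = ZOrderDrawingLayer()
drawingLayer.frame = CGRect(x: 0, y: 0, width: 612, height: 792)

let sublayer = CALayer()
sublayer.bounds = CGRect(x: 0, y: 0, width: 100, height: 40)
sublayer.position = CGPoint(x: 200, y: 300)
sublayer.transform = CATransform3DMakeRotation(.pi / 6, 0, 0, 1)
sublayer.zPosition = 2 // now honored when rendering to PDF
drawingLayer.addSublayer(sublayer)
Note that the trigonometry in the override assumes sublayers use the default centered anchorPoint and only z-axis rotation.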
Simply put, if I have an image I and another image J, I want to take the RGB value at a position I(t, s) and assign that pixel to J(t, s). How might I do this in Core Image, or using a custom kernel?
This seems like it might not be an easy thing to do, considering the way Core Image works. However, I was wondering maybe there was a way to extract the value of the pixel at (t,s), create an image K as large as J with just that pixel, and then overlay J with K only at that one point. Just an idea, hopefully there's a better way.
If you want to set just one pixel, you can create a small color kernel that compares a passed target coordinate with the current coordinate and colors the output appropriately. For example:
let kernel = CIColorKernel(source:
"kernel vec4 setPixelColorAtCoord(__sample pixel, vec2 targetCoord, vec4 targetColor)" +
"{" +
"   return int(targetCoord.x) == int(destCoord().x) && int(targetCoord.y) == int(destCoord().y) ? targetColor : pixel; " +
"}"
)
Then, given an image, let's say a solid red:
let image = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
.cropped(to: CGRect(x: 0, y: 0, width: 640, height: 640))
If we want to set the pixel at 500, 100 to blue:
let targetCoord = CIVector(x: 500, y: 100)
let targetColor = CIColor(red: 0, green: 0, blue: 1)
The following will create a CIImage named final with a single blue pixel in a sea of red:
let args: [Any] = [image, targetCoord, targetColor]
let final = kernel?.apply(extent: image.extent, arguments: args)
If you want to draw more than one pixel, this may not be the best solution though.
Simon
I created a useful helper for handling images by pixel. It basically creates an RGBA image context, copies the image into it (so that we can work with the image data even if it's a JPEG or something), and gets its raw data buffer. This is the class I made:
public final class RGBAPixels {
    static let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue
    static let colorSpace = CGColorSpaceCreateDeviceRGB()
    public typealias Pixel = (r: UInt8, g: UInt8, b: UInt8, a: UInt8)
    let context: CGContext
    let pointer: UnsafeMutablePointer<Pixel>
    public let width: Int
    public let height: Int
    public let range: (x: Range<Int>, y: Range<Int>)
    /// Generates an image from the current pixels.
    public var cgImage: CGImage? {
        return context.makeImage()
    }
    public init?(cgImage: CGImage) {
        width = cgImage.width
        height = cgImage.height
        range = (0..<width, 0..<height)
        guard let context = CGContext(
            data: nil, width: width, height: height,
            bitsPerComponent: 8, bytesPerRow: MemoryLayout<Pixel>.stride * width,
            space: RGBAPixels.colorSpace, bitmapInfo: RGBAPixels.bitmapInfo),
            let data = context.data
        else { return nil }
        self.context = context
        // Drawing into the context decodes the image (JPEG, PNG, ...) into raw RGBA
        let rect = CGRect(origin: .zero, size: CGSize(width: width, height: height))
        context.draw(cgImage, in: rect)
        pointer = data.bindMemory(to: Pixel.self, capacity: width * height)
    }
    public subscript (x: Int, y: Int) -> Pixel {
        get {
            assert(range.x ~= x && range.y ~= y, "Pixel position (\(x), \(y)) is out of the bounds \(range)")
            return pointer[y * width + x]
        }
        set {
            assert(range.x ~= x && range.y ~= y, "Pixel position (\(x), \(y)) is out of the bounds \(range)")
            pointer[y * width + x] = newValue
        }
    }
}
It is usable by itself, but you might want some more convenience:
public protocol Image {
    var cgImage: CGImage? { get }
}
public extension Image {
    var pixels: RGBAPixels? {
        return cgImage.flatMap { RGBAPixels(cgImage: $0) }
    }
    func copy(modifying: (RGBAPixels) -> Void) -> CGImage? {
        return pixels.flatMap { pixels in
            modifying(pixels)
            return pixels.cgImage
        }
    }
}
extension CGImage: Image {
    public var cgImage: CGImage? { return self }
}
#if os(iOS)
import class UIKit.UIImage
extension UIImage: Image {}
extension RGBAPixels {
    public var image: UIImage? {
        return cgImage.map { UIImage(cgImage: $0) }
    }
}
#elseif os(macOS)
import class AppKit.NSImage
extension NSImage: Image {
    public var cgImage: CGImage? {
        var rect = CGRect(origin: .zero, size: size)
        return cgImage(forProposedRect: &rect, context: nil, hints: nil)
    }
}
extension RGBAPixels {
    public var image: NSImage? {
        return cgImage.flatMap { img in
            NSImage(cgImage: img, size: CGSize(width: width, height: height))
        }
    }
}
#endif
It is usable like this:
let image = UIImage(named: "image")!
let pixels = image.pixels!
// Add a red square
for x in 10..<15 {
for y in 10..<15 {
pixels[x, y] = (255, 0, 0, 255)
}
}
let modifiedImage = pixels.image
// Copy the image, while changing the pixel at (10, 10) to green
let otherImage = UIImage(named: "image")!.copy { $0[10, 10] = (0, 255, 0, 255) }
let a = UIImage(named: "image")!.pixels!
let b = UIImage(named: "image")!.pixels!
// Set the pixel at (10, 10) of a to the pixel at (20, 20) of b
a[10, 10] = b[20, 20]
let result = a.image!
I force-unwrapped for the demonstration; don't actually do this in your app.
The implementation is about as fast as it can get on the CPU. If you need to modify lots of images in more complicated ways than just copying pixels, you may want to use Core Image instead.
I made this work with both macOS's NSImage and iOS's UIImage.