watchOS 5 - what is the "Graphic Rectangular" complication image size? - apple-watch

I'm looking at this list of complication images for watchOS 5 by Apple; it mentions the template for the Graphic Rectangular complication on the 44mm watch being 342px × 108px (171pt × 54pt @2x).
I tried sending a 342×108 image, and it is too large - it appears the default scaling mode is "center". I also tried 171×54, and it is too small and blurry - other images I display on the Apple Watch are much crisper.
What is the correct size/scale for the Graphic Rectangular watchOS 5 complication? Is it possible for the app or WatchKit extension to query the rectangle available for the complication?
var image: UIImage = UIImage()
let fileManager = FileManager.default
do {
    let fileURL = try //...URL of complication file
    let data = try Data(contentsOf: fileURL)
    image = UIImage(data: data) ?? UIImage()
} catch {
    image = UIImage(named: "placeholder") ?? UIImage()
}
let textProvider = CLKSimpleTextProvider(text: SessionDelegater.title)
template.textProvider = textProvider
template.imageProvider = CLKFullColorImageProvider(fullColorImage: image)

Partial workaround - manually recreate the image from its CGImage and assign a scale factor of 2:
var image: UIImage = UIImage()
do {
    let fileURL = try FileManager.fileURL("complication")
    let data = try Data(contentsOf: fileURL)
    image = UIImage(data: data) ?? UIImage()
    if let cgImage = image.cgImage {
        image = UIImage(cgImage: cgImage, scale: 2, orientation: image.imageOrientation)
    }
} catch {
    print(error)
    image = UIImage(named: "image1") ?? UIImage()
}
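A possible refinement (untested sketch, not from the original post): rather than hard-coding a scale of 2, the WatchKit extension can read the device's actual screen scale from WKInterfaceDevice, which also partially answers the "can the extension query this?" question (screen scale and bounds are queryable):

import WatchKit

// Sketch: rebuild the UIImage using the watch's reported screen scale instead of a
// hard-coded 2, so a 342x108 pixel asset reports an intrinsic size of 171x54 points.
let screenScale = WKInterfaceDevice.current().screenScale
if let cgImage = image.cgImage {
    image = UIImage(cgImage: cgImage, scale: screenScale, orientation: image.imageOrientation)
}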

Related

Why does the image rotate 90 degrees after let cimg = CIImage(image: self.img)?

I'm processing an image with Apple Core Image. When I translate a UIImage to a CIImage, the image seems to rotate 90 degrees. The main code is like the following:
struct PhotoDetailView: View {
    @Binding var img: UIImage
    var body: some View {
        print(self.img.size)
        let cimg = CIImage(image: self.img)
        print(cimg?.extent.size)
        let context = CIContext()
        let filter = CIFilter(name: "CIPhotoEffectMono")!
        filter.setValue(cimg, forKey: kCIInputImageKey)
        let result = filter.outputImage!
        print(result.extent.size)
print output:
(3024.0, 4032.0)
Optional((4032.0, 3024.0))
(4032.0, 3024.0)
You should keep the orientation info of the image:
let orientation = self.img.imageOrientation
let cimg = CIImage(image: self.img)
let context = CIContext()
...
let img2 = UIImage(cgImage: cgImage!, scale: 1.0, orientation: orientation)
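For reference, a complete version of that round trip might look like the sketch below (the function name and the CIPhotoEffectMono filter are just placeholders; the point is re-attaching the original orientation and scale when wrapping the rendered CGImage):

import UIKit
import CoreImage

func applyMono(to image: UIImage) -> UIImage? {
    // Remember the orientation before dropping down to CIImage/CGImage,
    // because the rendered CGImage no longer carries it.
    let orientation = image.imageOrientation
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    // Re-attach the original scale and orientation.
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: orientation)
}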

How do I crop a JPEG image from/to a URL, Swift, macOS

I want to do a rectangular crop of a JPEG image. I have the following code that will create a duplicate image. It uses an NSImage. I do not know how to create a cropped image.
func crop(index: Int) {
    let croppedImageUrl = ...
    let imageUrl = ...
    // Create a cropped image.
    let data = try? Data(contentsOf: imageUrl)
    let image = NSImage(data: data!)
    let tiffRepresentation = (image?.tiffRepresentation)!
    let bitmap = NSBitmapImageRep(data: tiffRepresentation)
    let representation = bitmap?.representation(using: NSBitmapImageRep.FileType.jpeg, properties: [:])
    do {
        try representation?.write(to: croppedImageUrl, options: [.withoutOverwriting])
    } catch let error as NSError {
        print(error.localizedDescription)
    }
}
Something like...
func crop(nsImage: NSImage, rect: CGRect) -> NSImage {
    let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)!.cropping(to: rect)!
    let size = NSSize(width: rect.width, height: rect.height)
    return NSImage(cgImage: cgImage, size: size)
}
Sorry, I haven't compiled this code fragment, but the general method worked in my code. It's probably better done as an extension on NSImage, if that is possible.
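For illustration, that same idea wrapped as an NSImage extension could look like this (untested sketch; it returns nil instead of force-unwrapping when the backing CGImage can't be produced or cropped):

import AppKit

extension NSImage {
    // Note: the rect is in the image's pixel coordinate space.
    func cropped(to rect: CGRect) -> NSImage? {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil)?.cropping(to: rect) else {
            return nil
        }
        return NSImage(cgImage: cgImage, size: NSSize(width: rect.width, height: rect.height))
    }
}

Usage would then be something like image.cropped(to: CGRect(x: 0, y: 0, width: 200, height: 100)).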
This may help you crop the image:
func crop() -> UIImage? {
    let imageUrl = URL(string: "imageUrl")!
    let data = try! Data(contentsOf: imageUrl)
    let image = UIImage(data: data)!
    // Crop rectangle: a centered square using the shorter side
    let width = min(image.size.width, image.size.height)
    let size = CGSize(width: width, height: width)
    // Negative offset so the center of the image lands inside the context
    let startPoint = CGPoint(x: -(image.size.width - width) / 2, y: -(image.size.height - width) / 2)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.draw(in: CGRect(origin: startPoint, size: image.size))
    let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return croppedImage
}

Why does this Swift UIImage function overflow my memory?

I'm building an app for iPhone using Swift 4. I have a couple of test filters. Both work fine through the camera's output, but when I'm creating an array of images with the more complex one, my memory overflows to catastrophic proportions and crashes my app.
I'm calling the function below in a loop, which overflows my memory:
func rotateHue2(with ciImage: CIImage,
                rotatedByHue deltaHueRadians: CGFloat,
                orientation: UIImageOrientation?,
                screenWidth: CGFloat,
                screenHeight: CGFloat) -> UIImage {
    let sourceCore = ciImage
    let transBG = UIImage(color: .clear, size: CGSize(width: screenWidth, height: screenHeight))
    let transBGCI = CIImage(cgImage: (transBG?.cgImage)!)
    // Part 1
    let gradientPoint0Pos: [CGFloat] = [0, 0]
    let inputPoint0Vector = CIVector(values: gradientPoint0Pos, count: gradientPoint0Pos.count)
    var gradientPoint1Pos: [CGFloat]
    if orientation == nil {
        gradientPoint1Pos = [0, screenWidth * 2]
    } else {
        gradientPoint1Pos = [screenHeight * 2, 0]
    }
    let inputPoint1Vector = CIVector(values: gradientPoint1Pos, count: gradientPoint1Pos.count)
    let gradientFilter = CIFilter(name: "CISmoothLinearGradient")
    gradientFilter?.setDefaults()
    gradientFilter?.setValue(inputPoint0Vector, forKey: "inputPoint0")
    gradientFilter?.setValue(inputPoint1Vector, forKey: "inputPoint1")
    gradientFilter?.setValue(CIColor.clear, forKey: "inputColor0")
    gradientFilter?.setValue(CIColor.black, forKey: "inputColor1")
    let gradient = gradientFilter?.outputImage?
        .cropped(to: sourceCore.extent)
    let hue1 = sourceCore
        .applyingFilter("CIHueAdjust", parameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians])
        .cropped(to: sourceCore.extent)
    let alphaMaskBlend1 = CIFilter(name: "CIBlendWithAlphaMask",
                                   withInputParameters: [kCIInputImageKey: hue1,
                                                         kCIInputBackgroundImageKey: transBGCI,
                                                         kCIInputMaskImageKey: gradient!])?.outputImage?
        .cropped(to: sourceCore.extent)
    // Part 2
    let hue2 = sourceCore
        .applyingFilter("CIHueAdjust", parameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians + 1.5707])
        .cropped(to: sourceCore.extent)
    let blendedMasks = hue2
        .applyingFilter(compositeOperationFilters[compositeOperationFiltersIndex],
                        parameters: [kCIInputImageKey: alphaMaskBlend1!,
                                     kCIInputBackgroundImageKey: hue2])
        .cropped(to: sourceCore.extent)
    // Convert the filter output back into a UIImage.
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(blendedMasks, from: blendedMasks.extent)
    var result: UIImage? = nil
    if orientation != nil {
        result = UIImage(cgImage: resultRef!, scale: 1.0, orientation: orientation!)
    } else {
        result = UIImage(cgImage: resultRef!)
    }
    return result!
}
Each image is resized down to 1280 or 720 wide depending on the phone's orientation. Why does this give me a memory warning when my other image filter works fine?
Just for kicks, here's the other one that doesn't make it crash:
func rotateHue(with ciImage: CIImage,
               rotatedByHue deltaHueRadians: CGFloat,
               orientation: UIImageOrientation?,
               screenWidth: CGFloat,
               screenHeight: CGFloat) -> UIImage {
    // Create a Core Image version of the image.
    let sourceCore = ciImage
    // Apply a CIHueAdjust filter
    let hueAdjust = CIFilter(name: "CIHueAdjust")
    hueAdjust?.setDefaults()
    hueAdjust?.setValue(sourceCore, forKey: "inputImage")
    hueAdjust?.setValue(deltaHueRadians, forKey: "inputAngle")
    let resultCore = CIFilter(name: "CIHueAdjust",
                              withInputParameters: [kCIInputImageKey: sourceCore,
                                                    kCIInputAngleKey: deltaHueRadians])?.outputImage?
        .cropped(to: sourceCore.extent)
    // Convert the filter output back into a UIImage.
    let context = CIContext(options: nil)
    let resultRef = context.createCGImage(resultCore!, from: (resultCore?.extent)!)
    var result: UIImage? = nil
    if orientation != nil {
        result = UIImage(cgImage: resultRef!, scale: 1.0, orientation: orientation!)
    } else {
        result = UIImage(cgImage: resultRef!)
    }
    return result!
}
The first thing you should do is move your CIContext out of the function and make it as global as possible. Creating it is a major use of memory.
A lesser issue: why are you cropping five times per image? This probably isn't the cause, but it "feels" wrong to me. A CIImage isn't an image - it's much closer to a "recipe".
Chain things more tightly - let the input of the next filter be the output of the prior one. Crop when finished. And most of all, create as few CIContexts as possible.
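To make that concrete, here is a rough sketch of the shape being suggested (the filter chain is simplified and is not the asker's exact pipeline): one long-lived CIContext, filters chained output-to-input, and a single crop/render at the end.

// Created once (e.g. as a property), reused for every frame.
let sharedContext = CIContext(options: nil)

func rotateHueChained(with ciImage: CIImage, rotatedByHue deltaHueRadians: CGFloat) -> UIImage? {
    // Each step takes the previous step's output as its input image.
    let step1 = ciImage.applyingFilter("CIHueAdjust", parameters: [kCIInputAngleKey: deltaHueRadians])
    let step2 = step1.applyingFilter("CIHueAdjust", parameters: [kCIInputAngleKey: deltaHueRadians + CGFloat.pi / 2])
    // Crop once, only when the recipe is complete, then render with the shared context.
    let finalImage = step2.cropped(to: ciImage.extent)
    guard let cgImage = sharedContext.createCGImage(finalImage, from: finalImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}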

Convert UIImage to grayscale keeping image quality

I have this extension (originally found in Objective-C and converted to Swift 3) to get the same UIImage but grayscaled:
public func getGrayScale() -> UIImage {
    let imgRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).rawValue)
    context?.draw(self.cgImage!, in: imgRect)
    let imageRef = context!.makeImage()
    let newImg = UIImage(cgImage: imageRef!)
    return newImg
}
I can see the gray image, but its quality is pretty bad... The only thing I can see that's related to the quality is bitsPerComponent: 8 in the context constructor. However, looking at Apple's documentation, it shows that iOS only supports 8 bpc... So why can't I improve the quality?
Try the code below.
Note: the code has been updated and the error fixed...
Code tested in Swift 3.
originalImage is the image that you are trying to convert.
Answer 1:
var context = CIContext(options: nil)
Update: CIContext is the Core Image component that handles rendering; all Core Image processing is done in a CIContext. This is somewhat similar to a Core Graphics or OpenGL context. More info is available in the Apple docs.
func Noir() {
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: originalImage.image!), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}
You may also consider the following filters, which can produce a similar effect:
CIPhotoEffectMono
CIPhotoEffectTonal
Improved answer:
Answer 2: auto-adjust the input image before applying the Core Image filter
var context = CIContext(options: nil)

func Noir() {
    // Auto adjustment of the input image
    var inputImage = CIImage(image: originalImage.image!)
    let options: [String: AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage!.autoAdjustmentFilters(options: options)
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    let cgImage = context.createCGImage(inputImage!, from: inputImage!.extent)
    self.originalImage.image = UIImage(cgImage: cgImage!)
    // Apply noir filter
    let currentFilter = CIFilter(name: "CIPhotoEffectTonal")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}
Note: if you want to see better results, you should test your code on a real device, not in the simulator...
A Swift 4.0 extension that returns an optional UIImage to avoid any potential crashes down the road.
import UIKit

extension UIImage {
    var noir: UIImage? {
        let context = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
           let cgImage = context.createCGImage(output, from: output.extent) {
            return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        }
        return nil
    }
}
To use this:
let image = UIImage(...)
let noirImage = image.noir // noirImage is an optional UIImage (UIImage?)
Joe's answer as a UIImage extension for Swift 4, working correctly for different scales:
extension UIImage {
    var noir: UIImage {
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")!
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        let output = currentFilter.outputImage!
        let cgImage = context.createCGImage(output, from: output.extent)!
        let processedImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        return processedImage
    }
}
I'd use CoreImage, which may keep the quality.
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    // convert UIImage to CIImage and set as input
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")
    // get output CIImage, render as CGImage first to retain proper UIImage scale
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
    return UIImage(cgImage: cgImage!)
}
Depending on how you use this code, you may want to create the CIContext outside of it for performance reasons.
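For example (an illustrative sketch, not part of the original answer), the context could be stored once and reused:

// Shared, long-lived context; creating a CIContext on every call is expensive.
let sharedCIContext = CIContext()

func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    filter?.setValue(CIImage(image: image), forKey: "inputImage")
    let ciOutput = filter?.outputImage
    // Reuse the shared context instead of creating a new one per call.
    let cgImage = sharedCIContext.createCGImage(ciOutput!, from: ciOutput!.extent)
    return UIImage(cgImage: cgImage!)
}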
Here's a category in Objective-C. Note that, critically, this version takes scale into consideration.
- (UIImage *)grayscaleImage {
    return [self imageWithCIFilter:@"CIPhotoEffectMono"];
}

- (UIImage *)imageWithCIFilter:(NSString *)filterName {
    CIImage *unfiltered = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:filterName];
    [filter setValue:unfiltered forKey:kCIInputImageKey];
    CIImage *filtered = [filter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimage = [context createCGImage:filtered fromRect:CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale)];
    // Do not use initWithCIImage because that renders the filter each time the image is displayed. This causes slow scrolling in tableviews.
    UIImage *image = [[UIImage alloc] initWithCGImage:cgimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgimage);
    return image;
}
All the above solutions rely on CIImage, while a UIImage will often have a CGImage as its underlying image, not a CIImage. That means you have to convert the underlying image into a CIImage at the beginning and convert it back to a CGImage at the end (if you don't, constructing a UIImage with a CIImage will effectively do it for you).
Although that's probably OK for many use cases, the conversion between CGImage and CIImage is not free: it can be slow, and it can create a big memory spike while converting.
So I want to mention a completely different solution that doesn't require converting the image back and forth. It uses Accelerate, and it's perfectly described by Apple here.
Here's a playground example that demonstrates both methods.
import UIKit
import Accelerate

extension CIImage {
    func toGrayscale() -> CIImage? {
        guard let output = CIFilter(name: "CIPhotoEffectNoir", parameters: [kCIInputImageKey: self])?.outputImage else {
            return nil
        }
        return output
    }
}

extension CGImage {
    func toGrayscale() -> CGImage {
        guard let format = vImage_CGImageFormat(cgImage: self),
              // The source image buffer
              var sourceBuffer = try? vImage_Buffer(
                  cgImage: self,
                  format: format
              ),
              // The 1-channel, 8-bit vImage buffer used as the operation destination.
              var destinationBuffer = try? vImage_Buffer(
                  width: Int(sourceBuffer.width),
                  height: Int(sourceBuffer.height),
                  bitsPerPixel: 8
              ) else {
            return self
        }
        // Declare the three coefficients that model the eye's sensitivity
        // to color.
        let redCoefficient: Float = 0.2126
        let greenCoefficient: Float = 0.7152
        let blueCoefficient: Float = 0.0722
        // Create a 1D matrix containing the three luma coefficients that
        // specify the color-to-grayscale conversion.
        let divisor: Int32 = 0x1000
        let fDivisor = Float(divisor)
        var coefficientsMatrix = [
            Int16(redCoefficient * fDivisor),
            Int16(greenCoefficient * fDivisor),
            Int16(blueCoefficient * fDivisor)
        ]
        // Use the matrix of coefficients to compute the scalar luminance by
        // returning the dot product of each RGB pixel and the coefficients
        // matrix.
        let preBias: [Int16] = [0, 0, 0, 0]
        let postBias: Int32 = 0
        vImageMatrixMultiply_ARGB8888ToPlanar8(
            &sourceBuffer,
            &destinationBuffer,
            &coefficientsMatrix,
            divisor,
            preBias,
            postBias,
            vImage_Flags(kvImageNoFlags)
        )
        // Create a 1-channel, 8-bit grayscale format that's used to
        // generate a displayable image.
        guard let monoFormat = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 8,
            colorSpace: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
            renderingIntent: .defaultIntent
        ) else {
            return self
        }
        // Create a Core Graphics image from the grayscale destination buffer.
        guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else {
            return self
        }
        return result
    }
}
To test, I used a full-size version of this image.
let start = Date()
var prev = start.timeIntervalSinceNow * -1
func info(_ id: String) {
    print("\(id)\t: \(start.timeIntervalSinceNow * -1 - prev)")
    prev = start.timeIntervalSinceNow * -1
}
info("started")
let original = UIImage(named: "Golden_Gate_Bridge_2021.jpg")!
info("loaded UIImage(named)")
let cgImage = original.cgImage!
info("original.cgImage")
let cgImageToGreyscale = cgImage.toGrayscale()
info("cgImage.toGrayscale()")
let uiImageFromCGImage = UIImage(cgImage: cgImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(cgImage)")
let ciImage = CIImage(image: original)!
info("CIImage(image: original)!")
let ciImageToGreyscale = ciImage.toGrayscale()!
info("ciImage.toGrayscale()")
let uiImageFromCIImage = UIImage(ciImage: ciImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(ciImage)")
The results (in seconds):
CGImage method took about 1 sec. total:
original.cgImage : 0.5257829427719116
cgImage.toGrayscale() : 0.46222901344299316
UIImage(cgImage) : 0.1819549798965454
CIImage method took about 7 sec. total:
CIImage(image: original)! : 0.6055610179901123
ciImage.toGrayscale() : 4.969912052154541
UIImage(ciImage) : 2.395193934440613
When saving images as JPEG to disk, the one created with CGImage was also 3 times smaller than the one created with CIImage (5 MB vs. 17 MB). The quality was good on both images.
As per Joe's answer, we easily converted the original to B&W. To get back to the original image, refer to this code:
var context = CIContext(options: nil)
var startingImage: UIImage = UIImage()

func Noir() {
    startingImage = imgView.image!
    var inputImage = CIImage(image: imgView.image!)!
    let options: [String: AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage.autoAdjustmentFilters(options: options)
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage!
    }
    let cgImage = context.createCGImage(inputImage, from: inputImage.extent)
    self.imgView.image = UIImage(cgImage: cgImage!)
    // Filter logic
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    imgView.image = processedImage
}

func Original() {
    imgView.image = startingImage
}

How to copy a UIImage?

I have a UIImageView with a UIImage. I want to assign a copy of this picture to two variables. Based on what the user is doing, the image should be manipulated. The problem is, the image is the same in both variables. I guess it's because they are passed by reference.
let image1 = imageView.image!
let image2 = imageView.image!
How do I get two separate copies of this image?
What I want to achieve: Just crop the one image, keep the other like the original.
let imageLeft = googleImageView.image!
let imageRef = CGImageCreateCopy(googleImageView.image!.CGImage)
let imageRight = UIImage(CGImage: imageRef!, scale: googleImageView.image!.scale, orientation: googleImageView.image!.imageOrientation)
if translation.x < 0 {
    let scale = imageLeft.scale
    let newRect = CGRectMake(0, 0, (imageLeft.size.width + translation.x) * scale, imageLeft.size.height * scale)
    let imageRef = CGImageCreateWithImageInRect(imageLeft.CGImage, newRect)
    if let croppedImage = imageRef {
        googleImageView.image = UIImage(CGImage: croppedImage, scale: scale, orientation: imageLeft.imageOrientation)
    }
}
print("left image: \(imageLeft) right image \(imageRight)")
The code above prints to the console:
left image: <UIImage: 0x7fd020dca070>, {111, 167}
right image <UIImage: 0x7fd020dda430>, {111, 167}
left image: <UIImage: 0x7fd020df9ba0>, {110, 167}
right image <UIImage: 0x7fd020d45670>, {110, 167}
... and so on. So, BOTH images get a new size. Only the left image should get cropped.
As @Aggressor suggested, you can copy your image from the imageView this way:
let newCgIm = CGImageCreateCopy(imageView.image?.CGImage)
let newImage = UIImage(CGImage: newCgIm, scale: imageView.image!.scale, orientation: imageView.image!.imageOrientation)
There is a new copy() function on CGImage that comes with Swift 3 and is also compatible with Swift 4, so you can use it this way:
guard let cgImage = imageView.image?.cgImage?.copy() else {
    return
}
let newImage = UIImage(cgImage: cgImage,
                       scale: imageView.image!.scale,
                       orientation: imageView.image!.imageOrientation)
Solution for Swift 5 or better:
Add this extension to your code
extension UIImage {
    func clone() -> UIImage? {
        guard let originalCgImage = self.cgImage, let newCgImage = originalCgImage.copy() else {
            return nil
        }
        return UIImage(cgImage: newCgImage, scale: self.scale, orientation: self.imageOrientation)
    }
}
then clone the image to get a new image object and not a reference:
let image1 = imageView.image!
let image2 = image1.clone()
Note that cloning the image increases memory consumption.
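Tying this back to what the question wants (crop one copy, keep the other untouched), a hedged sketch using the clone() extension above could look like this (the crop rect is illustrative and is in pixel coordinates):

if let original = imageView.image,
   let copy = original.clone(),
   let croppedCG = copy.cgImage?.cropping(to: CGRect(x: 0, y: 0, width: 100, height: 100)) {
    let croppedCopy = UIImage(cgImage: croppedCG, scale: copy.scale, orientation: copy.imageOrientation)
    // `original` still holds the full image; only `croppedCopy` is cropped.
}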