We used to initialize a CVPixelBufferRef like this:
CVPixelBufferRef pxbuffer = NULL;
But Swift has no NULL, so we tried the following; of course, Xcode wants the variable initialized before we can use it:
let pxbuffer: CVPixelBufferRef
But how?
In Objective-C we created the buffer like this, but as explained above, when converting to Swift we got stuck on the first line.
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, picGenislik,
frameHeight, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
Use CVPixelBufferCreate(_:_:_:_:_:_:) to create the object
Here is some demo code; I hope it helps. Also read Using Legacy C APIs with Swift.
var keyCallBacks = kCFTypeDictionaryKeyCallBacks
var valueCallBacks = kCFTypeDictionaryValueCallBacks
let empty = CFDictionaryCreate(kCFAllocatorDefault, nil, nil, 0, &keyCallBacks, &valueCallBacks)
let attributes = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                           1,
                                           &keyCallBacks,
                                           &valueCallBacks)
CFDictionarySetValue(attributes,
                     Unmanaged.passUnretained(kCVPixelBufferIOSurfacePropertiesKey).toOpaque(),
                     Unmanaged.passUnretained(empty!).toOpaque())
let width = 10
let height = 12
var pixelBuffer: CVPixelBuffer? = nil
let status: CVReturn = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attributes, &pixelBuffer)
Here is another way to do it (Swift 3):
var pixelBuffer: UnsafeMutablePointer<CVPixelBuffer?>!
if pixelBuffer == nil {
    pixelBuffer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: MemoryLayout<CVPixelBuffer?>.size)
}
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attributes, pixelBuffer)
and then you can access the buffer object like this:
pixelBuffer.pointee
EDIT:
pixelBuffer = UnsafeMutablePointer<CVPixelBuffer?>.allocate(capacity: MemoryLayout<CVPixelBuffer?>.size)
This creates a memory leak if you don't manually deallocate the pointer when you're done with it. To avoid this, change your code to the following:
var pixelBuffer : CVPixelBuffer? = nil
CVPixelBufferCreate(kCFAllocatorDefault, cgimg.width, cgimg.height, kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
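For safety, you can also check the returned CVReturn before using the buffer. A minimal sketch (the 640×480 dimensions are placeholders, not values from the question):

```swift
import CoreVideo

var pixelBuffer: CVPixelBuffer? = nil
let status = CVPixelBufferCreate(kCFAllocatorDefault, 640, 480,
                                 kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
    fatalError("CVPixelBufferCreate failed with status \(status)")
}
// `buffer` is a non-optional CVPixelBuffer managed by ARC; no manual deallocation needed.
```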
func getCVPixelBuffer(_ image: CGImage) -> CVPixelBuffer? {
    let imageWidth = Int(image.width)
    let imageHeight = Int(image.height)
    let attributes: [NSObject: AnyObject] = [
        kCVPixelBufferCGImageCompatibilityKey: true as AnyObject,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true as AnyObject
    ]
    var pxbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        imageWidth,
                        imageHeight,
                        kCVPixelFormatType_32ARGB,
                        attributes as CFDictionary?,
                        &pxbuffer)
    if let _pxbuffer = pxbuffer {
        let flags = CVPixelBufferLockFlags(rawValue: 0)
        CVPixelBufferLockBaseAddress(_pxbuffer, flags)
        let pxdata = CVPixelBufferGetBaseAddress(_pxbuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: imageWidth,
                                height: imageHeight,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(_pxbuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
        if let _context = context {
            _context.draw(image, in: CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight))
        } else {
            CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
            return nil
        }
        CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
        return _pxbuffer
    }
    return nil
}
I'm using the method below to add drawings to a pixel buffer, then append it to an AVAssetWriterInputPixelBufferAdaptor.
It works on my Mac mini (macOS 12 beta 7), but the drawingHandler draws nothing on my MacBook (macOS 11.5.2).
Is there anything wrong with this code?
#if os(macOS)
import AppKit
#else
import UIKit
#endif
import CoreMedia
extension CMSampleBuffer {
    func pixelBuffer(drawingHandler: ((CGRect) -> Void)? = nil) -> CVPixelBuffer? {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(self) else {
            return nil
        }
        guard let drawingHandler = drawingHandler else {
            return pixelBuffer
        }
        guard CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly) == kCVReturnSuccess else {
            return pixelBuffer
        }
        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bitsPerComponent = 8
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let imageByteOrderInfo = CGImageByteOrderInfo.order32Little
        let imageAlphaInfo = CGImageAlphaInfo.premultipliedFirst
        if let ctx = CGContext(data: data,
                               width: width,
                               height: height,
                               bitsPerComponent: bitsPerComponent,
                               bytesPerRow: bytesPerRow,
                               space: colorSpace,
                               bitmapInfo: imageByteOrderInfo.rawValue | imageAlphaInfo.rawValue) {
            // Push
            #if os(macOS)
            let graphCtx = NSGraphicsContext(cgContext: ctx, flipped: false)
            NSGraphicsContext.saveGraphicsState()
            NSGraphicsContext.current = graphCtx
            #else
            UIGraphicsPushContext(ctx)
            #endif
            let rect = CGRect(x: 0, y: 0, width: width, height: height)
            drawingHandler(rect)
            // Pop
            #if os(macOS)
            NSGraphicsContext.restoreGraphicsState()
            #else
            UIGraphicsPopContext()
            #endif
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        return pixelBuffer
    }
}
Change the lock flags. You are drawing into the buffer, so you must not lock it with .readOnly:
let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard CVPixelBufferLockBaseAddress(pixelBuffer, lockFlags) == kCVReturnSuccess else {
    return pixelBuffer
}
// ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, lockFlags)
I'm trying to modify the pixel buffer from a live AVFoundation video feed and stream it through OpenTok's API. But whenever I feed it through OpenTok's consumeFrame, it crashes.
I am doing this so I can apply live video effects (filters, stickers, etc.). I have tried converting CGImage->CVPixelBuffer with different methods, but nothing works.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !capturing || videoCaptureConsumer == nil {
        return
    }
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        print("Error acquiring sample buffer")
        return
    }
    guard let videoInput = videoInput else {
        print("Capturer does not have a valid input")
        return
    }
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    videoFrame.clearPlanes()
    videoFrame.timestamp = time
    let height = UInt32(CVPixelBufferGetHeight(imageBuffer))
    let width = UInt32(CVPixelBufferGetWidth(imageBuffer))
    if width != captureWidth || height != captureHeight {
        updateCaptureFormat(width: width, height: height)
    }
    // This is where I convert CVImageBuffer -> CIImage, modify it, turn it into CGImage, then CGImage -> CVPixelBuffer
    guard let finalImage = makeBigEyes(imageBuffer) else { return }
    CVPixelBufferLockBaseAddress(finalImage, CVPixelBufferLockFlags(rawValue: 0))
    videoFrame.format?.estimatedCaptureDelay = 10
    videoFrame.orientation = .left
    videoFrame.clearPlanes()
    videoFrame.planes?.addPointer(CVPixelBufferGetBaseAddress(finalImage))
    delegate?.finishPreparingFrame(videoFrame)
    videoCaptureConsumer!.consumeFrame(videoFrame)
    CVPixelBufferUnlockBaseAddress(finalImage, CVPixelBufferLockFlags(rawValue: 0))
}
And here is my CGImage->CVPixelBuffer method:
func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess else {
        return nil
    }
    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.translateBy(x: 0, y: image.size.height)
    context?.scaleBy(x: 1.0, y: -1.0)
    UIGraphicsPushContext(context!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}
I get this error on the first frame:
*** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSConcretePointerArray pointerAtIndex:]: attempt to access pointer at index 1 beyond bounds 1'
If you made it this far, thank you for reading. I've been stuck on this issue a while now, so any kind of pointer would be greatly appreciated. Thanks.
Since you are converting the camera (NV12) frame to RGB, you need to set pixelFormat to OTPixelFormatARGB on videoFrame.format.
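For example (a sketch only; the property names assume the OpenTok iOS SDK's OTVideoFormat class and may differ by SDK version):

```swift
// Sketch: tell OpenTok the planes now contain ARGB data rather than NV12.
if let format = videoFrame.format {
    format.pixelFormat = .ARGB
    format.imageWidth = UInt32(CVPixelBufferGetWidth(finalImage))
    format.imageHeight = UInt32(CVPixelBufferGetHeight(finalImage))
}
```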
I use the vImageBoxConvolve_ARGB8888 function in order to blur a UIImage. The code is shown below:
public func blur(_ size: Int) -> UIImage! {
    let boxSize = size - (size % 2) + 1
    let image = self.cgImage
    let inProvider = image?.dataProvider
    let height = vImagePixelCount((image?.height)!)
    let width = vImagePixelCount((image?.width)!)
    let rowBytes = image?.bytesPerRow
    var inBitmapData = inProvider?.data
    let inData = UnsafeMutableRawPointer(mutating: CFDataGetBytePtr(inBitmapData))
    var inBuffer = vImage_Buffer(data: inData, height: height, width: width, rowBytes: rowBytes!)
    let outData = malloc((image?.bytesPerRow)! * (image?.height)!)
    var outBuffer = vImage_Buffer(data: outData, height: height, width: width, rowBytes: rowBytes!)
    var error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, nil, 0, 0, UInt32(boxSize), UInt32(boxSize), nil, vImage_Flags(kvImageEdgeExtend))
    error = vImageBoxConvolve_ARGB8888(&outBuffer, &inBuffer, nil, 0, 0, UInt32(boxSize), UInt32(boxSize), nil, vImage_Flags(kvImageEdgeExtend))
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, nil, 0, 0, UInt32(boxSize), UInt32(boxSize), nil, vImage_Flags(kvImageEdgeExtend))
    inBitmapData = nil
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: outBuffer.data, width: Int(outBuffer.width), height: Int(outBuffer.height), bitsPerComponent: 8, bytesPerRow: outBuffer.rowBytes, space: colorSpace, bitmapInfo: (image?.bitmapInfo.rawValue)!, releaseCallback: { (ptr1, ptr2) in
    }, releaseInfo: outData)!
    var imageRef = context.makeImage()
    let bluredImage = UIImage(cgImage: imageRef!)
    imageRef = nil
    free(outData)
    context.flush()
    context.synchronize()
    return bluredImage
}
The vImageBoxConvolve_ARGB8888 function accepts a background color parameter (the 8th parameter), which is of type UnsafePointer<UInt8>. Here the parameter is nil, but I want to set it to red. I have no idea how to do that. If someone could give any tips I would appreciate it. Thanks in advance.
You can do it like so. The background color is four ARGB8888 bytes (alpha, red, green, blue), and you can pass a Swift array directly where an UnsafePointer<UInt8> is expected (constructing UnsafePointer from an array literal would leave a dangling pointer):
let red: [UInt8] = [0xFF, 0xFF, 0x00, 0x00] // A, R, G, B
var error = vImageBoxConvolve_ARGB8888(&inBuffer,
                                       &outBuffer,
                                       nil,
                                       0,
                                       0,
                                       UInt32(boxSize),
                                       UInt32(boxSize),
                                       red,
                                       vImage_Flags(kvImageBackgroundColorFill))
Bear in mind that the flags kvImageBackgroundColorFill and kvImageEdgeExtend are mutually exclusive, so you can't pass kvImageBackgroundColorFill + kvImageEdgeExtend in the flags argument.
I am trying to convert the raw image data to jpeg in swift. But unfortunately, the jpeg image created is skewed. Please find below the code used for the conversion.
let rawPtr = (rawImgData as NSData).bytes
let mutableRawPtr = UnsafeMutableRawPointer(mutating: rawPtr)
let context = CGContext(data: mutableRawPtr,
                        width: 750,
                        height: 1334,
                        bitsPerComponent: 32,
                        bytesPerRow: (8 * 750)/8,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
let imageRef = context!.makeImage()
let imageRep = NSBitmapImageRep(cgImage: imageRef!)
let finalData = imageRep.representation(using: .jpeg,
                                        properties: [NSBitmapImageRep.PropertyKey.compressionFactor: 0.5])
Here's the converted jpeg image
Any help or pointers would be greatly appreciated. Thanks in advance!
Finally figured out the answer
var width: size_t?
var height: size_t?
let bitsPerComponent: size_t = 8
var bytesPerRow: size_t?
var bitmapInfo: UInt32
if #available(OSX 10.12, *) {
    bitmapInfo = UInt32(CGImageAlphaInfo.premultipliedFirst.rawValue) | UInt32(CGImageByteOrderInfo.order32Little.rawValue)
} else {
    bitmapInfo = UInt32(CGImageAlphaInfo.premultipliedFirst.rawValue)
}
let colorSpace = CGColorSpaceCreateDeviceRGB()
do {
    let streamAttributesFuture = self.bitmapStream?.streamAttributes()
    streamAttributesFuture?.onQueue(.main, notifyOfCompletion: { (streamAttributesFut) in
    })
    try streamAttributesFuture?.await()
    let dict = streamAttributesFuture?.result
    width = (dict?.attributes["width"] as! Int)
    height = (dict?.attributes["height"] as! Int)
    bytesPerRow = (dict?.attributes["row_size"] as! Int)
} catch let frameError {
    print("frame error = \(frameError)")
    return
}
let rawPtr = (data as NSData).bytes
let mutableRawPtr = UnsafeMutableRawPointer(mutating: rawPtr)
let context = CGContext(data: mutableRawPtr, width: width!, height: height!, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow!, space: colorSpace, bitmapInfo: bitmapInfo)
if context == nil {
    return
}
let imageRef = context?.makeImage()
let scaledWidth = Int(Double(width!) / 1.5)
let scaledHeight = Int(Double(height!) / 1.5)
let resizeCtx = CGContext(data: nil, width: scaledWidth, height: scaledHeight, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow!, space: colorSpace, bitmapInfo: bitmapInfo)
resizeCtx?.draw(imageRef!, in: CGRect(x: 0, y: 0, width: scaledWidth, height: scaledHeight))
resizeCtx?.interpolationQuality = .low
let resizedImgRef = resizeCtx?.makeImage()
let imageRep = NSBitmapImageRep(cgImage: resizedImgRef!)
let finalData = imageRep.representation(using: .jpeg, properties: [NSBitmapImageRep.PropertyKey.compressionFactor: compression])
Try this; pass whatever imageName you want, and save it in any directory. (Note: the original snippet built the file name with "\(imageName.append(".JPG"))", but append(_:) mutates in place and returns Void, so the interpolation produced "()"; use string concatenation instead.)
func convertToJPEG(image: UIImage, imageName: String) {
    if let jpegImage = UIImageJPEGRepresentation(image, 1.0) {
        do {
            // Convert and write to a temporary file
            let tempDirectoryURL = URL(fileURLWithPath: NSTemporaryDirectory(), isDirectory: true)
            let newFileName = imageName + ".JPG"
            let targetURL = tempDirectoryURL.appendingPathComponent(newFileName)
            try jpegImage.write(to: targetURL, options: [])
        } catch {
            print(error.localizedDescription)
            print("FAILED")
        }
    } else {
        print("FAILED")
    }
}
I am trying to get Apple's sample Core ML Models that were demoed at the 2017 WWDC to function correctly. I am using the GoogLeNet to try and classify images (see the Apple Machine Learning Page). The model takes a CVPixelBuffer as an input. I have an image called imageSample.jpg that I'm using for this demo. My code is below:
var sample = UIImage(named: "imageSample")?.cgImage
let bufferThree = getCVPixelBuffer(sample!)
let model = GoogLeNetPlaces()
guard let output = try? model.prediction(input: GoogLeNetPlacesInput(sceneImage: bufferThree!)) else {
    fatalError("Unexpected runtime error.")
}
print(output.sceneLabel)
I am always getting the unexpected runtime error in the output rather than an image classification. My code to convert the image is below:
func getCVPixelBuffer(_ image: CGImage) -> CVPixelBuffer? {
    let imageWidth = Int(image.width)
    let imageHeight = Int(image.height)
    let attributes: [NSObject: AnyObject] = [
        kCVPixelBufferCGImageCompatibilityKey: true as AnyObject,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true as AnyObject
    ]
    var pxbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        imageWidth,
                        imageHeight,
                        kCVPixelFormatType_32ARGB,
                        attributes as CFDictionary?,
                        &pxbuffer)
    if let _pxbuffer = pxbuffer {
        let flags = CVPixelBufferLockFlags(rawValue: 0)
        CVPixelBufferLockBaseAddress(_pxbuffer, flags)
        let pxdata = CVPixelBufferGetBaseAddress(_pxbuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: imageWidth,
                                height: imageHeight,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(_pxbuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
        if let _context = context {
            _context.draw(image, in: CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight))
        } else {
            CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
            return nil
        }
        CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
        return _pxbuffer
    }
    return nil
}
I got this code from a previous StackOverflow post (last answer here). I recognize that the code may not be correct, but I have no idea how to do this myself. I believe this is the section that contains the error. The model calls for the following type of input: Image<RGB,224,224>
You don't need to do a bunch of image mangling yourself to use a Core ML model with an image — the new Vision framework can do that for you.
import Vision
import CoreML

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try? handler.perform([request]) // perform(_:) throws

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
The WWDC17 session on Vision should have a bit more info — it's tomorrow afternoon.
You can use pure Core ML, but you should resize the image to 224×224 first:
DispatchQueue.global(qos: .userInitiated).async {
    // Resnet50 expects a 224 x 224 image, so we should resize and crop the source image
    let inputImageSize: CGFloat = 224.0
    let minLen = min(image.size.width, image.size.height)
    let resizedImage = image.resize(to: CGSize(width: inputImageSize * image.size.width / minLen, height: inputImageSize * image.size.height / minLen))
    let cropedToSquareImage = resizedImage.cropToSquare()
    guard let pixelBuffer = cropedToSquareImage?.pixelBuffer() else {
        fatalError()
    }
    guard let classifierOutput = try? self.classifier.prediction(image: pixelBuffer) else {
        fatalError()
    }
    DispatchQueue.main.async {
        self.title = classifierOutput.classLabel
    }
}
// ...
extension UIImage {
    func resize(to newSize: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newSize.width, height: newSize.height), true, 1.0)
        self.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func cropToSquare() -> UIImage? {
        guard let cgImage = self.cgImage else {
            return nil
        }
        var imageHeight = self.size.height
        var imageWidth = self.size.width
        if imageHeight > imageWidth {
            imageHeight = imageWidth
        } else {
            imageWidth = imageHeight
        }
        let size = CGSize(width: imageWidth, height: imageHeight)
        let x = ((CGFloat(cgImage.width) - size.width) / 2).rounded()
        let y = ((CGFloat(cgImage.height) - size.height) / 2).rounded()
        let cropRect = CGRect(x: x, y: y, width: size.height, height: size.width)
        if let croppedCgImage = cgImage.cropping(to: cropRect) {
            return UIImage(cgImage: croppedCgImage, scale: 0, orientation: self.imageOrientation)
        }
        return nil
    }

    func pixelBuffer() -> CVPixelBuffer? {
        let width = self.size.width
        let height = self.size.height
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(width),
                                         Int(height),
                                         kCVPixelFormatType_32ARGB,
                                         attrs,
                                         &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }
        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(resultPixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pixelData,
                                      width: Int(width),
                                      height: Int(height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        context.translateBy(x: 0, y: height)
        context.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return resultPixelBuffer
    }
}
You can find the expected input image size in the .mlmodel file.
A demo project that uses both pure CoreML and Vision variants you can find here: https://github.com/handsomecode/iOS11-Demos/tree/coreml_vision/CoreML/CoreMLDemo
If the input is a UIImage rather than a URL, and you want to use VNImageRequestHandler, you can use CIImage:
func updateClassifications(for image: UIImage) {
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else { return }
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
}
From Classifying Images with Vision and Core ML
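One note on the snippet above: CGImagePropertyOrientation has no built-in initializer that takes a UIImage.Orientation; Apple's sample defines a small mapping extension, sketched here:

```swift
import UIKit
import ImageIO

extension CGImagePropertyOrientation {
    /// Maps UIKit's image orientation onto the EXIF-style
    /// CGImagePropertyOrientation that Vision expects.
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up:            self = .up
        case .down:          self = .down
        case .left:          self = .left
        case .right:         self = .right
        case .upMirrored:    self = .upMirrored
        case .downMirrored:  self = .downMirrored
        case .leftMirrored:  self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default:    self = .up
        }
    }
}
```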