I am trying to combine Core ML and ARKit in my project, using the Inception v3 model provided on Apple's website.
I am starting from the standard ARKit template (Xcode 9 beta 3).
Instead of instantiating a new camera session, I reuse the session that has been started by the ARSCNView.
At the end of my view controller's setup, I write:
sceneView.session.delegate = self
I then extend my ViewController to conform to the ARSessionDelegate protocol (its methods are optional):
// MARK: ARSessionDelegate
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        do {
            let prediction = try self.model.prediction(image: frame.capturedImage)
            DispatchQueue.main.async {
                if let prob = prediction.classLabelProbs[prediction.classLabel] {
                    self.textLabel.text = "\(prediction.classLabel) \(String(describing: prob))"
                }
            }
        }
        catch let error as NSError {
            print("Unexpected error ocurred: \(error.localizedDescription).")
        }
    }
}
At first I tried that code, but then noticed that Inception requires a pixel buffer of type Image<RGB, 299, 299>.
Although not recommended, I thought I would just resize my frame and then try to get a prediction from it. I am resizing using this function (taken from https://github.com/yulingtianxia/Core-ML-Sample):
func resize(pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let imageSide = 299
    var ciImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
    let transform = CGAffineTransform(scaleX: CGFloat(imageSide) / CGFloat(CVPixelBufferGetWidth(pixelBuffer)),
                                      y: CGFloat(imageSide) / CGFloat(CVPixelBufferGetHeight(pixelBuffer)))
    ciImage = ciImage.transformed(by: transform).cropped(to: CGRect(x: 0, y: 0, width: imageSide, height: imageSide))
    let ciContext = CIContext()
    var resizeBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, imageSide, imageSide, CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &resizeBuffer)
    ciContext.render(ciImage, to: resizeBuffer!)
    return resizeBuffer
}
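For reference, the resize is wired into the delegate above roughly like this (a simplified sketch of the call site):

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Resize the ARKit camera buffer to 299x299 before handing it to Core ML.
    guard let resizedBuffer = resize(pixelBuffer: frame.capturedImage) else { return }
    do {
        let prediction = try self.model.prediction(image: resizedBuffer)
        DispatchQueue.main.async {
            self.textLabel.text = prediction.classLabel
        }
    } catch {
        print("Prediction failed: \(error.localizedDescription)")
    }
}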
Unfortunately, this is not enough to make it work. This is the error that is caught:
Unexpected error ocurred: Input image feature image does not match model description.
2017-07-20 AR+MLPhotoDuplicatePrediction[928:298214] [core]
Error Domain=com.apple.CoreML Code=1
"Input image feature image does not match model description"
UserInfo={NSLocalizedDescription=Input image feature image does not match model description,
NSUnderlyingError=0x1c4a49fc0 {Error Domain=com.apple.CoreML Code=1
"Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)"
UserInfo={NSLocalizedDescription=Image is not expected type 32-BGRA or 32-ARGB, instead is Unsupported (875704422)}}}
Not sure what I can do from here.
If anyone has a better suggestion for combining the two, I'm all ears.
Edit: I also tried the resizePixelBuffer method from YOLO-CoreML-MPSNNGraph suggested by @dfd; the error is exactly the same.
Edit 2: I changed the pixel format to kCVPixelFormatType_32BGRA (not the same format as the pixelBuffer passed into resizePixelBuffer).
let pixelFormat = kCVPixelFormatType_32BGRA // line 48
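The equivalent change in my own resize function above is in the buffer creation (a sketch; the rest of the function is unchanged):

// Render into a BGRA buffer instead of reusing the camera's native format,
// since the model's image input only accepts 32-BGRA or 32-ARGB.
var resizeBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, imageSide, imageSide,
                    kCVPixelFormatType_32BGRA, nil, &resizeBuffer)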
I do not get the error anymore. But as soon as I try to make a prediction, the AVCaptureSession stops. It seems I am running into the same issue Enric_SA describes on the Apple developer forums.
Edit 3: I tried implementing rickster's solution. It works well with Inception v3. I then wanted to try a feature observation (VNClassificationObservation). At this time it is not working with Tiny YOLO; the bounding boxes are wrong. Trying to figure it out.
Don't process images yourself to feed them to Core ML. Use Vision. (No, not that one. This one.) Vision takes an ML model and any of several image types (including CVPixelBuffer), automatically gets the image to the right size, aspect ratio, and pixel format for the model to evaluate, and then gives you the model's results.
Here's a rough skeleton of the code you'd need:
var request: VNRequest!

func setup() {
    // Force-try for brevity; handle the error properly in real code.
    let model = try! VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
    request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
}

func classifyARFrame() {
    guard let frame = session.currentFrame else { return }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up, // fix based on your UI orientation
                                        options: [:])
    try? handler.perform([request])
}

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
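To hook this into ARKit, something along these lines should work from the session delegate (a sketch; you will probably also want to throttle it so Vision is not run on all 60 frames per second):

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Vision handles resizing and pixel-format conversion of the camera buffer,
        // so the frame can be passed through untouched.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .up,
                                            options: [:])
        try? handler.perform([request])
    }
}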
See this answer to another question for some more pointers.
Related
My use case is that I want to recognize various gestures of a hand (the first hand) seen by the camera. I am able to find body anchors, hand anchors, and poses. See my video here.
I am trying to use the previous SIMD3 position information to work out what kind of gesture was performed. I did see the example posted by Apple that shows pinching to write virtually, but I am not sure a buffer is the right solution for something like this.
A specific example of what I am trying to do is detect a swipe, long-press, or tap, as if the user were wearing a pair of AR glasses (made by Apple one day). To clarify, I want to raycast from my hand and perform a gesture on an Entity or Anchor.
Here is a snippet for those of you who want to know how to get hand and body poses:
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let capturedImage = frame.capturedImage
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: capturedImage,
                                                    orientation: .right,
                                                    options: [:])
    let handPoseRequest = VNDetectHumanHandPoseRequest()
    //let bodyPoseRequest = VNDetectHumanBodyPoseRequest()
    do {
        try imageRequestHandler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first else {
            return
        }
        // Get points for thumb and index finger.
        let thumbPoints = try observation.recognizedPoints(.thumb)
        let indexFingerPoints = try observation.recognizedPoints(.indexFinger)
        let pinkyFingerPoints = try observation.recognizedPoints(.littleFinger)
        let ringFingerPoints = try observation.recognizedPoints(.ringFinger)
        let middleFingerPoints = try observation.recognizedPoints(.middleFinger)
        self.detectHandPose(handObservations: observation)
    } catch {
        print("Failed to perform image request.")
    }
}
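As for the gesture classification itself, the direction I am leaning toward is keeping a small buffer of recent index-fingertip positions and classifying the net displacement over a short window. A rough, untested sketch of what detectHandPose might look like (the window size and swipe threshold are arbitrary placeholders):

// Recent index-fingertip positions in Vision's normalized image space.
private var recentTips: [CGPoint] = []
private let gestureWindow = 15               // roughly half a second of frames
private let swipeThreshold: CGFloat = 0.25   // normalized horizontal travel

func detectHandPose(handObservations observation: VNHumanHandPoseObservation) {
    guard let tip = try? observation.recognizedPoint(.indexTip),
          tip.confidence > 0.5 else { return }

    recentTips.append(tip.location)
    if recentTips.count > gestureWindow { recentTips.removeFirst() }
    guard recentTips.count == gestureWindow,
          let first = recentTips.first, let last = recentTips.last else { return }

    // Classify the net displacement over the window; the directions may need
    // adjusting for the .right buffer orientation used above.
    let dx = last.x - first.x
    let dy = last.y - first.y
    if abs(dx) > swipeThreshold && abs(dx) > abs(dy) * 2 {
        print(dx > 0 ? "swipe right" : "swipe left")
        recentTips.removeAll()
    }
}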
There is a problem with QR code generation using the following simple code:
override func viewDidLoad() {
    super.viewDidLoad()
    let image = generateQRCode(from: "Hacking with Swift is the best iOS coding tutorial I've ever read!")
    imageView.image = image
}

func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    if let filter = CIFilter(name: "CIQRCodeGenerator") {
        filter.setValue(data, forKey: "inputMessage")
        let transform = CGAffineTransform(scaleX: 5.3, y: 5.3)
        if let output = filter.outputImage?.transformed(by: transform) {
            return UIImage(ciImage: output)
        }
    }
    return nil
}
This code produces the following image:
But when magnifying any corner marker, we can see differences in border thickness:
I.e., not every scale value produces a correct final image. How can this be fixed?
The behavior you show is expected whenever you use a non-integer scale, such as 5.3. If having consistent marker widths is something you care about, use only integer scales, such as 5 or 6.
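For example, if the final size has to fit a particular view, you can compute the scale from the target size and round it down to a whole number (a sketch; targetSide is whatever pixel size you need):

func generateQRCode(from string: String, targetSide: CGFloat) -> UIImage? {
    guard let data = string.data(using: .ascii),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let output = filter.outputImage else { return nil }

    // A whole-number scale keeps every module the same number of pixels wide.
    let scale = max(1, floor(targetSide / output.extent.width))
    let scaled = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    return UIImage(ciImage: scaled)
}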
I am using Swift to detect squares in an image, and I can't seem to get it to detect them. It detects rectangles sometimes, so I'm not sure whether this is the correct approach. I am new to Swift and image detection, so if there is something else I should be doing to detect squares, I would greatly appreciate being pointed in the right direction.
What I have found in searches are issues around detecting squares/rectangles under perspective. I'm not sure whether that is the problem or just my lack of knowledge of Swift and image detection.
Test image
lazy var rectangleDetectionRequest: VNDetectRectanglesRequest = {
    let rectDetectRequest = VNDetectRectanglesRequest(completionHandler: self.handleDetectedRectangles)
    // Customize & configure the request to detect only certain rectangles.
    rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
    rectDetectRequest.minimumConfidence = 0.6 // Be confident.
    rectDetectRequest.minimumAspectRatio = 0.3 // height / width
    return rectDetectRequest
}()

fileprivate func handleDetectedRectangles(request: VNRequest?, error: Error?) {
    if let nsError = error as NSError? {
        self.presentAlert("Rectangle Detection Error", error: nsError)
        return
    }
    // Since handlers are executing on a background thread, explicitly send draw calls to the main thread.
    DispatchQueue.main.async {
        guard let drawLayer = self.pathLayer,
              let results = request?.results as? [VNRectangleObservation] else {
            return
        }
        self.draw(rectangles: results, onImageWithBounds: drawLayer.bounds)
        drawLayer.setNeedsDisplay()
    }
}
I have also changed minimumAspectRatio to 1.0, which from the information I have found should correspond to a square, and it still did not give the expected results.
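Based on the documentation, one thing I am considering is constraining the request to near-square aspect ratios and loosening the corner tolerance, something like this (untested sketch; the exact tolerances are guesses, and aspect ratio here means shorter side over longer side, so 1.0 is a perfect square):

lazy var squareDetectionRequest: VNDetectRectanglesRequest = {
    let request = VNDetectRectanglesRequest(completionHandler: self.handleDetectedRectangles)
    request.maximumObservations = 16
    request.minimumConfidence = 0.6
    // Leave a little slack below 1.0 because perspective skews the detected quad.
    request.minimumAspectRatio = 0.85
    request.maximumAspectRatio = 1.0
    // Allow corners to deviate a few degrees from 90.
    request.quadratureTolerance = 10
    return request
}()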
I am working with the Vision framework in iOS 13 and am trying to achieve the following tasks:
1. Take an image (in this case, a CIImage) and locate all faces in the image using Vision.
2. Crop each face into its own CIImage (I'll call this a "face image").
3. Filter each face image using a CoreImage filter, such as a blur or comic book effect.
4. Composite the face image back over the original image, thereby creating effects that only apply to the face.
A better example of this would be the end goal of taking a live camera feed from an AVCaptureSession and blurring every face in the video frame, then compositing the blurred faces back over the original image for saving.
I almost have this working, save for the fact that there seems to be a coordinate/translation issue. For example, when I test this code and move my face, the "blurred" section goes in the wrong direction (if I turn my face right, the box goes left; if I look up, the box goes down). While I think this may have something to do with mirroring on the front-facing camera, I can't seem to figure out what I should try next:
func drawFaceBox(bufferImage: CIImage, observations: [VNFaceObservation]) -> CVPixelBuffer? {
    // The filter
    let blur = CIFilter(name: "CICrystallize")
    // The unfiltered image, prepared for filtering
    var filteredImage = bufferImage
    // Find and crop each face
    if !observations.isEmpty {
        for face in observations {
            let faceRect = VNImageRectForNormalizedRect(face.boundingBox,
                                                        Int(bufferImage.extent.size.width),
                                                        Int(bufferImage.extent.size.height))
            let croppedFace = bufferImage.cropped(to: faceRect)
            blur?.setValue(croppedFace, forKey: kCIInputImageKey)
            blur?.setValue(10.0, forKey: kCIInputRadiusKey)
            if let blurred = blur?.value(forKey: kCIOutputImageKey) as? CIImage {
                compositorCIFilter?.setValue(blurred, forKey: kCIInputImageKey)
                compositorCIFilter?.setValue(filteredImage, forKey: kCIInputBackgroundImageKey)
                if let output = compositorCIFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
                    filteredImage = output
                }
            }
        }
    }
    // Convert image to CVPixelBuffer and return. This part works fine.
}
Any thoughts on how I can composite the blurred face image(s) back to their original position with accuracy? Or any other approach to only filter part of the original CIImage to avoid this issue altogether/save processing? Thanks!
I believe this issue stems from an orientation problem earlier in the pipeline (specifically, in the output of the sample buffers from the camera, which is where the Vision task was instantiated). I have updated my didOutputSampleBuffer code like so:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    ...
    // Setup the current device orientation
    let curDeviceOrientation = UIDevice.current.orientation
    // Handle the image property orientation
    //let orientation = self.exifOrientation(from: curDeviceOrientation)
    // Setup the image request handler
    //let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: CGImagePropertyOrientation(rawValue: UInt32(1))!)
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    // Setup the completion handler
    let completion: VNRequestCompletionHandler = { request, error in
        let observations = request.results as! [VNFaceObservation]
        // Draw faces
        DispatchQueue.main.async {
            // HANDLE FACES
            self.drawFaceBoxes(for: observations)
        }
    }
    // Setup the image request
    let request = VNDetectFaceRectanglesRequest(completionHandler: completion)
    // Handle the request
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}
As noted, I have commented out the let orientation = ... and the first let handler = ..., which was using the orientation. By removing the reference to the orientation, I seem to have removed any issue with orientation in the Vision calculations.
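If you do want to keep passing an explicit orientation (for example to support device rotation), a mirrored mapping for the front camera along these lines is what I would try (a sketch; verify the cases against your own UI orientations):

func exifOrientation(from deviceOrientation: UIDeviceOrientation) -> CGImagePropertyOrientation {
    // Mapping for the front camera, whose buffers are mirrored.
    switch deviceOrientation {
    case .portraitUpsideDown:
        return .rightMirrored
    case .landscapeLeft:
        return .downMirrored
    case .landscapeRight:
        return .upMirrored
    default:
        return .leftMirrored
    }
}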
I'm using live camera output to update a CIImage on an MTKView. My main issue is that I see a large, unexpected performance difference: an older iPhone gets better CPU performance than a newer one, even though every setting I've come across is the same on both.
This is a lengthy post, but I decided to include these details since they could be important to the cause of this problem. Please let me know what else I can include.
Below, I have my captureOutput function with two debug bools that I can turn on and off while running. I used this to try to determine the cause of my issue.
applyLiveFilter - a bool controlling whether or not the CIImage is manipulated with a CIFilter.
updateMetalView - a bool controlling whether or not the MTKView's CIImage is updated.
// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    /*
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented out code
     (without captureOutputMTLOptions) does not have
     significant impact.
    */
    guard let texture: MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else {
        return
    }

    var cameraImage: CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity
    transform = transform.scaledBy(x: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)
    cameraImage = cameraImage.transformed(by: transform)

    /*
    // old non-Metal way of getting the ciimage from the cvPixelBuffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    var cameraImage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)
    */

    var orientation = UIImage.Orientation.right
    if isFrontCamera {
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage,
                                                     orientation: orientation,
                                                     currentCameraRes: currentCameraRes!)
    }

    DispatchQueue.main.async {
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }
}
Below is a chart of results between both phones toggling the different combinations of bools discussed above:
Even with the Metal view's CIImage not updating and no filters being applied, the iPhone XS's CPU usage is 2% higher than the iPhone 6S Plus's. That isn't significant overhead, but it makes me suspect that the camera capture somehow differs between the devices.
My AVCaptureSession's preset is set identically between both phones
(AVCaptureSession.Preset.hd1280x720)
The CIImage created from captureOutput is the same size (extent)
between both phones.
Are there any AVCaptureDevice settings I need to set manually on these two phones, including activeFormat properties, to make them the same between devices?
The settings I have now are:
if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()
        captureDevice.isSubjectAreaChangeMonitoringEnabled = true
        captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice.unlockForConfiguration()
    } catch {
        // Handle errors here
        print("There was an error focusing the device's camera")
    }
}
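One thing I have been considering is pinning both devices to an explicit activeFormat instead of relying only on the session preset, roughly like this (untested sketch; it assumes a 1280x720, 30 fps format exists on both devices):

func pinFormat(on device: AVCaptureDevice) throws {
    // Pick the first 1280x720 format that supports 30 fps on both devices.
    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports30fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 30 }
        return dims.width == 1280 && dims.height == 720 && supports30fps
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    // Lock the frame rate so neither device ramps it up or down on its own.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
    device.unlockForConfiguration()
}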
My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.
class MetalImageView: MTKView {
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var textureCache: CVMetalTextureCache?
    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue = { [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext = { [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())

        if super.device == nil {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false
        enableSetNeedsDisplay = true
        isPaused = true
        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage? {
        didSet {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard var image = image,
              let targetTexture: MTLTexture = currentDrawable?.texture else {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let customDrawableSize: CGSize = drawableSize
        let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y
        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)

        commandBuffer?.present(currentDrawable!)
        commandBuffer?.commit()
    }
}
My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are set up below:
func setupCameraAndMic() {
    let backCamera = AVCaptureDevice.default(for: AVMediaType.video)

    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil &&
        captureSession!.canAddInput(videoInput) {

        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }

        captureSession!.addInput(videoInput)
        setDeviceFrameRateForCurrentFilter(device: backCamera)

        stillImageOutput = AVCapturePhotoOutput()

        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)

            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]

            videoOutput.alwaysDiscardsLateVideoFrames = true

            if captureSession!.canAddOutput(videoOutput) {
                captureSession!.addOutput(videoOutput)
            }

            captureSession!.startRunning()
        }
    }

    setDefaultFocusAndExposure()
}
The video and mic are recorded on two separate streams. Details on the microphone and recording video have been left out since my focus is performance of live camera output.
UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project
Off the top of my head, you are not comparing apples with apples. Even though you are running the 2.49 GHz A12 against the 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, there are several features of the XS's camera that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
Sorry for the lack of sources; I tried to find metrics on the CPU cost of those features, but I couldn't. Unfortunately for your needs, that information is not relevant for marketing when they are selling it as the best camera ever in a smartphone.
They are selling it as the best processor as well. We don't know what would happen if you used the XS camera with an A9 processor; it would probably crash, but we will never know...
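If you want to rule those camera features out, you could try forcing the plain wide-angle camera and turning off the configurable extras on both devices, something like this (a rough sketch; availability of each switch varies by device and iOS version, and videoOutput is the data output from your setup code):

// Force the single wide-angle camera rather than the dual/virtual device.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
    do {
        try device.lockForConfiguration()
        // Turn off automatic video HDR where the format supports it.
        if device.activeFormat.isVideoHDRSupported {
            device.automaticallyAdjustsVideoHDREnabled = false
            device.isVideoHDREnabled = false
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}

// Disable video stabilization on the data output's connection.
if let connection = videoOutput.connection(with: .video),
   connection.isVideoStabilizationSupported {
    connection.preferredVideoStabilizationMode = .off
}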
P.S. Are your metrics for the whole processor or only for the cores in use? For the whole processor, you also need to consider other tasks the devices may be executing; per core, it is 21% of 200% against 39% of 600%, i.e. roughly 10.5% versus 6.5% of total CPU capacity, so the XS is actually using a smaller share of its processor.