Error when running coreml in the background: Error computing NN outputs error - swift

I'm running an mlmodel converted from Keras on an iPhone 6. The predictions often fail with the error Error computing NN outputs. Does anyone know what could be the cause and whether there is anything I can do about it?
do {
    return try model.prediction(input1: input)
} catch let err {
    fatalError(err.localizedDescription) // Error computing NN outputs error
}
EDIT: I tried Apple's sample project and that one works in the background, so it seems the problem is specific to either our project or our model type.

I got the same error myself at similar "seemingly random" times. A bit of debug tracing established that it was caused by the app sometimes trying to load its Core ML model after it was sent to the background, then crashing or freezing when brought back to the foreground.
The message Error computing NN outputs error was preceded by:
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (IOAF code 6)
I didn't need (or want) the model to be used while the app was in the background, so I detected when the app was going into / out of the background, set a flag, and used a guard statement before attempting to call the model.
Detect when the app is going into the background using applicationWillResignActive in AppDelegate.swift and set a Bool flag, e.g. appInBackground = true. See this for more info: Detect iOS app entering background
Detect when the app re-enters the foreground using applicationDidBecomeActive in the same AppDelegate.swift file, and reset the flag: appInBackground = false.
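A minimal sketch of that flag handling in AppDelegate.swift (appInBackground is just a simple global here; any shared state would work):

import UIKit

var appInBackground = false // checked by the guard before each model call

class AppDelegate: UIResponder, UIApplicationDelegate {

    func applicationWillResignActive(_ application: UIApplication) {
        // App is about to move to the background: stop calling the model.
        appInBackground = true
    }

    func applicationDidBecomeActive(_ application: UIApplication) {
        // App is active again: model calls are allowed.
        appInBackground = false
    }
}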
Then in the function where you call the model, just before calling model, use a statement such as:
guard appInBackground == false else { return } // new line to add
guard let model = try? VNCoreMLModel(for: modelName.model) else { fatalError("could not load model") } // original line to load model
I doubt this is the most elegant solution, but it worked for me.
I haven't established why the attempt to load the model in background only happens sometimes.
In the Apple example you link to, it looks like their app only ever calls the model in response to a user input, so it will never try to load the model when in background. Hence the difference in my case ... and possibly yours as well?

In the end it was enough for us to set the usesCPUOnly flag. Using the GPU in the background seems to be prohibited on iOS; Apple actually wrote about this in their documentation as well. To specify this flag we couldn't use the generated model class anymore but had to call the raw Core ML classes instead. I can imagine this changing in a future version, however. The snippet below is taken from the generated model class, but with the MLPredictionOptions added.
let options = MLPredictionOptions()
options.usesCPUOnly = true // Can't use GPU in the background
// Copied from the generated model class
let input = model_input(input: mlMultiArray)
let output = try generatedModel.model.prediction(from: input, options: options)
let result = model_output(output: output.featureValue(for: "output")!.multiArrayValue!).output
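On newer SDKs a similar effect should also be achievable once at load time rather than per prediction, via MLModelConfiguration and its computeUnits option (iOS 12+). A hedged sketch, where modelURL stands in for the compiled model's location in the bundle:

import CoreML

let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuOnly // keep all inference off the GPU

// modelURL is a placeholder for the compiled .mlmodelc in the app bundle.
let rawModel = try MLModel(contentsOf: modelURL, configuration: configuration)
let output = try rawModel.prediction(from: model_input(input: mlMultiArray))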


How to get "Area under aircraft unsuitable for landing" message?

When I initiate auto-landing in the DJI Fly app I sometimes get the message quoted in the title ("Area under aircraft unsuitable for landing"), especially under bad lighting conditions.
Now, in my own code, when I call DJIFlightController.startLandingWithCompletion, the drone does not land and the completion block executes without any error.
My question is: how can I intercept the equivalent of DJI's error message shown above? Which code is relevant for that?
EDIT 1:
I am also checking if a landing confirmation is needed with the following code:
func observeConfirmLanding() {
    guard let confirmLandingKey = DJIFlightControllerKey(param: DJIFlightControllerParamConfirmLanding) else { return }
    DJISDKManager.keyManager()?.startListeningForChanges(on: confirmLandingKey, withListener: self) { (oldValue: DJIKeyedValue?, newValue: DJIKeyedValue?) in
        DispatchQueue.main.async {
            if let oldBoolValue = oldValue?.boolValue,
               let newBoolValue = newValue?.boolValue,
               oldBoolValue != newBoolValue {
                self.landingConfirmationNeeded = newBoolValue
                self.logger.debug("Landing confirmation is needed")
            }
        }
    }
}
It never enters the closure.
As I understand it, the landing confirmation might be needed at a height of 0.3 m, but in my case the landing process gets interrupted at heights greater than 0.3 m, e.g. already at 2 m or 1.5 m.
EDIT 2:
I have changed the surface below the drone in my basement by adding a bright carpet with a distinct pattern. This improves the overall stability of the drone and, even more importantly, the drone now just lands without being interrupted. I no longer get the warning message in the DJI Fly app.
When I check isLandingConfirmationNeeded the way Brien suggests in his comment, I finally get true when testing this in the simulator:
extension FlightControllerObserver: DJIFlightControllerDelegate {
    func flightController(_ fc: DJIFlightController, didUpdate state: DJIFlightControllerState) {
        if landingConfirmationNeeded != state.isLandingConfirmationNeeded {
            landingConfirmationNeeded = state.isLandingConfirmationNeeded
        }
    }
}
But, when I test this in my basement (flight mode "OPTI") and outside (flight mode "GPS") the drone just lands without waiting for any confirmation.
While I learned a lot, it is still a mystery to me which class in the DJI Mobile SDK is responsible for "throwing" that warning message.
If the completion handler completes without any error, you might need to check whether isLandingConfirmationNeeded in DJIFlightControllerState is set to true. If that's the case, you will need to call confirmLandingWithCompletion.
Looking at the documentation, this sounds relevant to your experience (a sketch of the combined flow follows after the quote):
- (void)confirmLandingWithCompletion:(DJICompletionBlock)completion
Confirms continuation of the landing action. When the clearance between the aircraft and the ground is less than 0.3 m, the aircraft will pause landing and wait for the user's confirmation. Use isLandingConfirmationNeeded in DJIFlightControllerState to check whether confirmation is needed. It is supported by flight controller firmware 3.2.0.0 and above.
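A hedged sketch of how the check and the confirmation call could fit together, reusing the delegate from the question; the Swift spelling of confirmLandingWithCompletion: as a trailing-closure call is an assumption about the SDK's Swift mapping, not verified:

extension FlightControllerObserver: DJIFlightControllerDelegate {
    func flightController(_ fc: DJIFlightController, didUpdate state: DJIFlightControllerState) {
        // When the aircraft pauses (clearance below 0.3 m) and asks for confirmation, confirm it.
        guard state.isLandingConfirmationNeeded else { return }
        // Swift name assumed for -confirmLandingWithCompletion:
        fc.confirmLanding { error in
            if let error = error {
                print("confirmLanding failed: \(error.localizedDescription)")
            }
        }
    }
}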
The landingProtectionState property of the DJIVisionControlState class could be a good place to look for the cause of that error message. One of the potential states that sounds relevant is
DJIVisionLandingProtectionStateNotSafeToLand -> Landing area is not flat
enough to be considered safe for landing. The aircraft should be moved
to an area that is more flat and an auto-land should be attempted
again or the user should land the aircraft manually.
Also, within DJI's documentation there is an article about flight control that talks about landing protection and forcing a landing. I couldn't see any functions in the SDK to force a landing.
You should be aware that the Fly app does not use the SDK internally; it uses the middle layer directly.
You often get different behavior when using the SDK compared to the app, and some functions are not available at all.
I usually disable it completely, I want it to land when I say so :-)
(exit_landing_ground_not_smooth_enable g_config.landing.exit_landing_ground_not_smooth_enable)

bypassing exc_breakpoint crash to continue program execution

While testing my iOS app (it's a workout app), it crashed (EXC_BREAKPOINT) as it was trying to save the workout data.
The crash was an index-out-of-range issue: the array count is one less than the number of workout seconds. (I should have started the seconds counter from 1 instead of 0.)
for i in 0...seconds {
    let data = "\(i),\(dataArray.powerGenY[i-1]),\(dataArray.powerGenYAlt[i-1])\n"
    do {
        try data.appendToURL(fileURL: fileURL)
    }
    catch {
        print("Could not write data to file")
    }
}
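For reference, a guarded version of the same loop (just a sketch; seconds, dataArray, fileURL and appendToURL are all taken from the question) would have avoided the crash in the first place:

if seconds > 0 {
    for i in 1...seconds {
        // Bail out before any index can run past the end of either array.
        guard i - 1 < dataArray.powerGenY.count,
              i - 1 < dataArray.powerGenYAlt.count else { break }
        let data = "\(i),\(dataArray.powerGenY[i - 1]),\(dataArray.powerGenYAlt[i - 1])\n"
        do {
            try data.appendToURL(fileURL: fileURL)
        } catch {
            print("Could not write data to file")
        }
    }
}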
Anyway, the error dropped me into LLDB. Is there any way I can use LLDB to bypass this error and continue execution?
Having worked out for an hour, I wasn't prepared to have this crash take my data along with it. Since the crash dropped me into LLDB, I wanted to see if there was any way to salvage the data by stepping over / bypassing / changing the value of i so that program execution could continue.
Initially I tried
(lldb) po i = 3327
error: <EXPR>:3:1: error: cannot assign to value: 'i' is immutable
i = 3327
^
but it won't let me change the value (i is immutable).
Then I tried thread jump -l 1, but it spewed an error about the target line being outside of the current function.
(lldb) th j -l 29
error: CSVExport.swift:29 is outside the current function.
Finally, going through this site https://www.inovex.de/blog/lldb-patch-your-code-with-breakpoints/ and trying a few things, the one that helped was thread return:
The mentioned disadvantage of thread jump can be avoided by using a
different technique to change control flow behaviour. Instead of
manipulating the affected lines of code directly the idea is to
manipulate other parts of the program which in turn results in the
desired behaviour. For the given example this means changing the
return value of can_pass() from 0 to 1. Which, of course, can be done
through LLDB. The command to use is thread just like before, but this
time with the subcommand return to prematurely return from a stack
frame, thereby short-circuiting its execution.
Executing thread return 1 did the trick: it returned 1 from the frame that hit the index-out-of-range issue, and execution then continued at the next line of code.

Using CoreML to classify NSImages

I'm trying to work with Xcode CoreML to classify images that are simply single digits or letters. To start out I'm just using .png images of digits. Using the Create ML tool, I built an image classifier (NOT including any Vision support stuff) and provided a set of about 300 training images and a separate set of 50 testing images. When I run this model, it trains and tests successfully and generates a model. Still within the tool, I access the model and feed it another set of 100 images to classify. It works properly, identifying 98 of them correctly.
Then I created a Swift sample program to access the model (from the macOS single view template); it's set up to accept a dropped image file, call the model's prediction method, and print the result. The problem is that the model expects an object of type CVPixelBuffer and I'm not sure how to properly create this from NSImage. I found some reference code and incorporated it, but when I actually drag my classification images to the app it's only about 50% accurate. So I'm wondering if anyone has experience with this type of model. It would be nice if there were a way to look at the Create ML source code to see how it processes a dropped image when predicting from the model.
The code for processing the image and invoking model prediction method is:
// initialize the model
mlModel2 = MLSample() // MLSample is the model generated by the Create ML tool and imported into the project

// prediction logic for the image
// (included in a func)
//
let fimage = NSImage(contentsOfFile: fname) // fname is obtained from the dropped file
do {
    let fcgImage = fimage?.cgImage(forProposedRect: nil, context: nil, hints: nil)
    let imageConstraint = mlModel2?.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
    let featureValue = try MLFeatureValue(cgImage: fcgImage!, constraint: imageConstraint!, options: nil)
    let pxbuf = featureValue.imageBufferValue
    let mro = try mlModel2?.prediction(image: pxbuf!)
    if mro != nil {
        let mroLbl = mro!.classLabel
        let mroProb = mro!.classLabelProbs[mroLbl] ?? 0.0
        print(String(format: "M2 MLFeature: %@ %5.2f", mroLbl, mroProb))
        return
    }
}
catch {
    print(error.localizedDescription)
}
return
There are several ways to do this.
The easiest is what you're already doing: create an MLFeatureValue from the CGImage object.
My repo CoreMLHelpers has a different way to convert CGImage to CVPixelBuffer.
A third way is to get Xcode 12 (currently in beta). The automatically-generated class now accepts images instead of just CVPixelBuffer.
In cases like this it's useful to look at the image that Core ML actually sees. You can use the CheckInputImage project from https://github.com/hollance/coreml-survival-guide to verify this (it's an iOS project but easy enough to port to the Mac).
If the input image is correct, and you still get the wrong predictions, then probably the image preprocessing options on the model are wrong. For more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/
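As a quick sanity check on the preprocessing side, it can also help to print what the model expects for its image input; a small sketch against the question's mlModel2 (the "image" input name is taken from the code above):

if let constraint = mlModel2?.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint {
    // The dropped NSImage ends up scaled to exactly this size and pixel format;
    // a mismatch in how that scaling/cropping happens is a common source of accuracy loss.
    print("expected size: \(constraint.pixelsWide) x \(constraint.pixelsHigh)")
    print("expected pixel format: \(constraint.pixelFormatType)")
}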

How to set AVAudioEngine input and output devices (swift/macos)

I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
var outputDeviceID: AudioDeviceID = xxx
let result: OSStatus = AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &outputDeviceID, UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result != 0 {
    print("error setting output device \(result)")
    return
}
However if I initialize the audio input (with let input = engine.inputNode) then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK since, if I avoid changing the output device, I can hear the microphone and the audio file, and if I change the output device but don't initialize the inputNode, the file plays to the specified destination.
In addition to this, I have been trying to change the input device; I understood from various places that the following should do it:
let result1: OSStatus = AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Output, 0, &inputDeviceID, UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work - in most cases it throws an error (10853), although if I select a sound card that has both inputs and outputs it succeeds - it appears that when I attempt to set the device for either the output node or the input node, it is actually set for both.
I would think this means that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. In some solutions I have seen online, people simply change the system default input, but that isn't a particularly nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key because, as I understand it, both the inputNode and the outputNode are classed as "output units" (they both emit sound somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device has output capabilities as well (and you activate the inputNode), then that device has to be both the input and the output device, as otherwise the output appears not to work.
So the answer is that I think there is no answer...

Xcode 8 random command failed due to signal segmentation fault 11

I have a strange problem with the new Xcode 8 (not a beta version) and Swift 3.
Once every 3-4 times that I compile my code I get a 'command failed due to signal segmentation fault 11' error. I just need to enter a new empty line, or sometimes change some spaces, or add a comment (anywhere in the code) and the error disappears and I can compile again.
This is really strange because I'm not changing anything in the code! And sometimes I can compile and it works, then I don't change anything, I compile again and I get the error.
This is really annoying!
I have noticed this has been happening since I installed several Firebase pods (Firebase, Firebase/Auth etc...). But I need them.
Anyone has any suggestion?
PS: I have set Enable Bitcode of my project to No as many solutions suggested, but nothing changed. The error message does not point to any Swift file where the error might be; an example is:
While loading members for 'Class_name' at
While deserializing 'func_name' (FuncDecl #42)
'func_name' is this one:
public class func loginUser(fir_user: FIRUser) {
    let user = SFUser()
    user.email = fir_user.email
    user.isLogged = true
    try! sfRealm.write() {
        sfRealm.add(user, update: true)
    }
    var userToAdd = [String: AnyObject]()
    userToAdd["email"] = fir_user.email! as NSString
    let ref = FIRDatabase.database().reference()
    let usersRef = ref.child(childName)
    usersRef.setValue([key: value])
}
But then, as I said, I can just enter an empty row in another file and it compiles!
Thanks
I had the same issue. I figured out that I was using Xcode 8.1 while the project's working copy was from Xcode 8.2.1, so I reinstalled Xcode 8.2.1 and the problem was solved. Hope others can get help through this.
Ok, it seems that I have found the solution: it is a problem with Firebase and CocoaPods, so there are 2 solutions:
Download Firebase and import it into your project manually
I, instead, updated CocoaPods to the latest version and it worked. See: Upgraded Firebase - Now Getting Swift Compile Error
In my case there was a type-checking issue deep down in the compiler, so the editor didn't show an error in the gutter, but on building the project I was getting the signal segmentation fault 11 error:
1. While type-checking 'GetStoreAPIRequestModel' at /Users/.../StoreAPIModel.swift:9:1
2. While type-checking expression at [/Users/.../StoreAPIModel.swift:15:18 - line:15:31] RangeText="[Dictionary]()"
3. While resolving type [Dictionary] at [/Users/.../StoreAPIModel.swift:15:18 - line:15:29] RangeText="[Dictionary]"
So I changed my code from:
var stores = [Dictionary]() {
    willSet {
        allStores.removeAll()
        for model in newValue {
            allStores.append(StoreAPIModel(dictionary: model as! Dictionary).getModel())
        }
    }
}
To (a more descriptive dictionary type):
var stores = [[String : Any]]() {
    willSet {
        allStores.removeAll()
        for model in newValue {
            allStores.append(StoreAPIModel(dictionary: model as [String : AnyObject]).getModel())
        }
    }
}
This is a tricky problem. The issue can be with a line of code or with syntax. I was getting a similar error and it was due to incorrect usage of a dictionary: I was trying to increment the value of a dictionary element.
The solution is to triage the code. The detailed error output indicates which module has the issue, so try commenting out parts of the code until you find the line that is causing it.
Hi, I had the same issue with Firebase. My problem was that I was extending FIRStorageReference and FIRDatabaseReference; sometimes it compiled successfully and sometimes I got
command failed due to signal segmentation fault 11
so I removed those files and implemented the methods another way, and now everything works fine.
Found my problem when this occurred. (No CocoaPods.) I thought I had left the program in a working state, but I was wrong. I am writing a straightforward command-line program. What it does is somewhat general, so I defined all the strings that make it specific in let statements at the top of the program so that I could someday use the program in a different context.
Since that was working so well, I thought I'd be clever and do the same with a filter of an array of dictionaries. I turned:
list.filter { $0["SearchStrings"] == nil }
into:
let test = { $0["SearchStrings"] == nil }
// ...
list.filter(test)
meaning to continue working on the let, but I never went back and did that. Building gave me the segmentation fault error. Defining test as a function fixed the problem.
(Incidentally, I understand how to strip a filtering function down to the terse braces notation in the context of the call to Array.filter, and why that works, but I don't understand why I can't assign the brace expression to a constant and use it as such.)
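For what it's worth, the reason the bare closure can't stand on its own is that, outside of the filter call, the compiler has no surrounding context from which to infer the type of $0; giving the constant an explicit type restores that context. A sketch, where the element type [String: String] is only an assumption:

// Inside list.filter { ... } the element type comes from `list`, so $0 is inferred.
// As a standalone constant the closure needs an explicit type annotation:
let test: ([String: String]) -> Bool = { $0["SearchStrings"] == nil }

let list: [[String: String]] = [
    ["SearchStrings": "foo"],
    ["Name": "bar"]
]
let remaining = list.filter(test) // [["Name": "bar"]]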