Download Custom CoreML Model and Load for Usage [Swift]

I am creating an app based on a neural network, and the CoreML model is around 150 MB, so it's obvious that I can't ship it within the app.
To overcome this issue, I came across this article, which mentions that you can download and compile a CoreML model on device.
I did that and downloaded the model to my device, but the problem is that I cannot run predictions the way I could with the original model. For example, the original model takes a UIImage as input, but the compiled MLModel wants an MLFeatureProvider. Can anyone explain how I can wrap or cast my input so I can use the model as before?
do {
    let compiledUrl = try MLModel.compileModel(at: modelUrl)
    let model = try MLModel(contentsOf: compiledUrl)
    debugPrint("Model compiled \(model.modelDescription)")
    // model.prediction(from: MLFeatureProvider) // Problem
    // It should be like this:
    // guard let prediction = try? model.prediction(image: pixelBuffer!) else {
    //     return
    // }
} catch {
    debugPrint("Error while compiling \(error.localizedDescription)")
}

When you add an mlmodel file to your project, Xcode automatically generates a source file for you. That's why you were able to write model.prediction(image: ...) before.
If you compile your mlmodel at runtime then you don't have that special source file and you need to call the MLModel API yourself.
The easiest solution here is to add the mlmodel file to your project, copy-paste the automatically generated source file into a new source file, and use that with the mlmodel you compile at runtime. (After you've copied the generated source, you can remove the mlmodel again from your Xcode project.)
Also, if your model is 150 MB, you may want to consider making a smaller version of it by choosing an architecture that is more suitable for mobile (not VGG16, which it seems you're currently using).
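If you don't want to copy the generated class, you can also call the raw MLModel API yourself. Here is a minimal sketch, assuming the model has an image input named "image" and a classifier output named "classLabel" (check model.modelDescription.inputDescriptionsByName and outputDescriptionsByName for the real names):

import CoreML
import CoreVideo

// Hypothetical helper: wraps a CVPixelBuffer in a feature provider and runs
// the compiled model directly, without the Xcode-generated wrapper class.
func predictLabel(with model: MLModel, pixelBuffer: CVPixelBuffer) throws -> String? {
    // "image" and "classLabel" are assumed feature names.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)]
    )
    let output = try model.prediction(from: input)
    return output.featureValue(for: "classLabel")?.stringValue
}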

guard let raterOutput = try? regressionModel.prediction(from: RegressorFeatureProviderInput(
    feature1: 3.4,
    feature2: 4.5))
else { return 0 }
return Double(truncating: NSNumber(value: RegressorFeatureProviderOutput(features: raterOutput).isSaved))
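RegressorFeatureProviderInput and RegressorFeatureProviderOutput here appear to be the copied auto-generated wrapper types. If you'd rather write the input side by hand, conforming to MLFeatureProvider only requires two members; a minimal sketch with assumed feature names feature1 and feature2:

import CoreML

// Hand-rolled equivalent of a generated input class: exposes two scalar
// features to Core ML. The feature names must match the model description.
final class RegressorInput: MLFeatureProvider {
    let feature1: Double
    let feature2: Double

    init(feature1: Double, feature2: Double) {
        self.feature1 = feature1
        self.feature2 = feature2
    }

    var featureNames: Set<String> {
        ["feature1", "feature2"]
    }

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "feature1": return MLFeatureValue(double: feature1)
        case "feature2": return MLFeatureValue(double: feature2)
        default: return nil
        }
    }
}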
Adding to what Matthijs Hollemans said:

let url = try! MLModel.compileModel(at: URL(fileURLWithPath: model))
visionModel = try! VNCoreMLModel(for: MLModel(contentsOf: url))
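Once you have the VNCoreMLModel, the usual next step is a VNCoreMLRequest driven by a VNImageRequestHandler. A minimal sketch, assuming a classification model and a CGImage input:

import Vision

// Runs a classification request against the wrapped CoreML model and prints
// the top result. `visionModel` is the VNCoreMLModel created above.
func classify(_ cgImage: CGImage, with visionModel: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        guard let best = (request.results as? [VNClassificationObservation])?.first else {
            print("No classification results: \(String(describing: error))")
            return
        }
        print("Top label: \(best.identifier) (\(best.confidence))")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}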

Related

Loading .rcproject from iOS local directory

I am trying to load an .rcproject from a local directory. My goal is to load it from a URL and then show it.
If I load it like this:
let modelScene = try? Entity.loadAnchor(named: "Experience")
everything works fine.
But if I do this:
let url = URL(fileURLWithPath: "./Experience")
or
let url = URL(fileURLWithPath: "./Experience.rcproject")
and
let modelScene = try? Entity.loadAnchor(contentsOf: url, withName: "Experience")
or
let modelScene = try? Entity.loadAnchor(contentsOf: url)
I get the following error:
// [Pipeline] Failed to open scene 'Experience -- file:///'.
I have no idea what the issue is here. Does anyone have an idea what I can try?
My deployment target is 14.4.
In the Apple docs they write that it should work like this, right?
The loadAnchor(contentsOf:withName:) type method was designed for the .usd, .usda, .usdc, .usdz and .reality file formats. However, the official documentation now says that it works only with Reality files. You can read about it here.
public static func loadAnchor(contentsOf url: URL,
                              withName resourceName: String?) throws -> AnchorEntity
And here's the description from the definition in your code:
Supported file formats are USD or Reality. In order to identify a resource across a network session, the resource needs to have a unique name. This name is set using resourceName. All participants in the network session need to load the resource and assign the same resourceName.
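In other words, an .rcproject can't be handed to loadAnchor(contentsOf:) directly; export the scene as a .reality (or .usdz) file and load that instead. A minimal sketch, assuming a file named Experience.reality has been downloaded into the app's documents directory:

import RealityKit

// Loads an anchor from a .reality file stored on disk rather than from the
// app bundle. "Experience.reality" is an assumed file name.
func loadDownloadedScene() throws -> AnchorEntity {
    let documents = try FileManager.default.url(for: .documentDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: false)
    let realityFileURL = documents.appendingPathComponent("Experience.reality")
    return try Entity.loadAnchor(contentsOf: realityFileURL, withName: nil)
}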

Importing .obj (or .dae or LiDAR scans) into Swift scene assets without using the Xcode GUI, from files at app runtime

System: macOS Catalina 10.15.7, Xcode 12.3, Swift 5.
Has anyone accessed a .dae or .obj in the project files (in Files) on an iPad/iPhone and been able to create a scene asset from that file? I do not want to use the graphical interface in Xcode and drop files in before the app builds. I want to be able to load and create a scene (.scn) with the image overlay and use whatever object origin was in the file as the new scene origin. Is this even possible?
I have been playing with some example code using Model I/O, but it hasn't really worked yet.
For example, call the function somewhere and pass it the filename "scannedChair".
The code below is adapted from https://developer.apple.com/forums/thread/103245:
import ModelIO
import SceneKit
import SceneKit.ModelIO

func loadSavedOBJ(filename: String) -> SCNScene? {
    let documentDirectoryURL = try! FileManager.default.url(for: .documentDirectory,
                                                            in: .userDomainMask,
                                                            appropriateFor: nil,
                                                            create: true)
    // The extension must match the actual file on disk, e.g. "obj"
    let objURL = documentDirectoryURL.appendingPathComponent(filename).appendingPathExtension("obj")

    let asset = MDLAsset(url: objURL)
    guard let object = asset.object(at: 0) as? MDLMesh else {
        fatalError("Failed to get mesh from obj asset.")
    }

    // Wrap the Model I/O mesh in a SceneKit node and return a scene containing it
    let scene = SCNScene()
    let node = SCNNode(mdlObject: object)
    scene.rootNode.addChildNode(node)
    return scene
}
Everywhere I search for this, people are focused on exporting 3D files from their app or using them in augmented reality, but not so much on using scanned items in a traditional app scene. For example, I scan a chair and want that chair to be visible in a POV game or app.
*note: I am very green in Xcode/Swift. I'm just looking for pointers, not a solution, unless one presents itself. Apologies if my terminology is completely off base.
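A minimal usage sketch of the loadSavedOBJ(filename:) function above, shown for a plain (non-AR) SceneKit view; sceneView is an assumed SCNView already set up in a view controller:

import SceneKit

// Display the scanned object in an ordinary SceneKit view.
func showScannedChair(in sceneView: SCNView) {
    guard let scene = loadSavedOBJ(filename: "scannedChair") else { return }
    sceneView.scene = scene
    sceneView.autoenablesDefaultLighting = true
    sceneView.allowsCameraControl = true   // lets you orbit the scanned object
}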

Using CoreML to classify NSImages

I'm trying to use CoreML in Xcode to classify images that are simply single digits or letters. To start out with I'm just using .png images of digits. Using the Create ML tool, I built an image classifier (NOT including any Vision support) and provided a set of about 300 training images and a separate set of 50 testing images. When I run this model, it trains and tests successfully and generates a model. Still within the tool, I access the model and feed it another set of 100 images to classify. It works properly, identifying 98 of them correctly.
Then I created a Swift sample program to access the model (from the macOS single view template); it's set up to accept a dropped image file, call the model's prediction method, and print the result. The problem is that the model expects an object of type CVPixelBuffer and I'm not sure how to properly create one from an NSImage. I found some reference code and incorporated it, but when I actually drag my classification images onto the app it's only about 50% accurate. So I'm wondering if anyone has experience with this type of model. It would be nice if there were a way to look at the Create ML source code to see how it processes a dropped image when predicting from the model.
The code for processing the image and invoking model prediction method is:
// initialize the model
mlModel2 = MLSample() // MLSample is the model generated by the Create ML tool and imported into the project

// prediction logic for the image (included in a func)
guard let fimage = NSImage(contentsOfFile: fname),   // fname is obtained from the dropped file
      let fcgImage = fimage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
    return
}
do {
    let imageConstraint = mlModel2?.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
    let featureValue = try MLFeatureValue(cgImage: fcgImage, constraint: imageConstraint!, options: nil)
    let pxbuf = featureValue.imageBufferValue
    let mro = try mlModel2?.prediction(image: pxbuf!)
    if let mro = mro {
        let mroLbl = mro.classLabel
        let mroProb = mro.classLabelProbs[mroLbl] ?? 0.0
        print(String(format: "M2 MLFeature: %@ %5.2f", mroLbl, mroProb))
        return
    }
} catch {
    print(error.localizedDescription)
}
return
There are several ways to do this.
The easiest is what you're already doing: create an MLFeatureValue from the CGImage object.
My repo CoreMLHelpers has a different way to convert CGImage to CVPixelBuffer.
A third way is to get Xcode 12 (currently in beta). The automatically-generated class now accepts images instead of just CVPixelBuffer.
In cases like this it's useful to look at the image that Core ML actually sees. You can use the CheckInputImage project from https://github.com/hollance/coreml-survival-guide to verify this (it's an iOS project but easy enough to port to the Mac).
If the input image is correct, and you still get the wrong predictions, then probably the image preprocessing options on the model are wrong. For more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/
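For the manual CGImage-to-CVPixelBuffer route, the conversion usually looks something like the following. This is a minimal sketch (not the CoreMLHelpers implementation): it creates an ARGB buffer of the requested size and simply draws the image into it, with no aspect-ratio handling or other preprocessing:

import CoreGraphics
import CoreVideo

// Renders a CGImage into a newly created CVPixelBuffer of the given size.
func pixelBuffer(from cgImage: CGImage, width: Int, height: Int) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}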

Xcode. Image resources added to a test target are not copied into the tests bundle

I wrote an extension to the UIImage class and want to cover it with unit tests. I added an image to the test target and checked that it is present in the Copy Bundle Resources list.
In the test code I use a model object which provides the test data.
class TestConstants {
    private static var bundle: Bundle {
        return Bundle(for: UIImageExtensionsTests.self)
    }

    private static var birdImageURL: URL {
        let path = self.bundle.url(forResource: "birds", withExtension: "png")
        return path!
    }

    static var birdImageData: Data {
        return try! Data(contentsOf: self.birdImageURL)
    }
}
Unfortunately the birds.png image is not located in the test bundle, but another resource, drm.txt, is present.
I'm a bit confused: is this a bug in Xcode?
By the way, I downloaded the Xcode 9.4 beta and the behavior is the same: images are not copied into the test bundle.
#UPDATE:
UIImageExtensionsTests is a test class that lives in the same target as the drm.txt and birds.png files.
To get the appropriate Bundle, use:
let bundle = Bundle.init(for: TestConstants.classForCoder())
My bad that I didn't read the log carefully.
The issue happened with a "wrong" image found on the internet. Xcode couldn't process the image, so it didn't copy it to the test bundle.
While reading
/Users/igork/developer/gitlab/ios.gym-master/GymMasterTests/Resources/birds.png
pngcrush caught libpng error: Not a PNG file..
And this error doesn't fail the build or prevent the target from running, so in my case the tests still started :(
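To make a missing or broken resource fail loudly instead of crashing on a force unwrap, the lookup can also be done inside a test with XCTUnwrap. A minimal sketch (the class name ResourceSanityTests is made up; the resource name matches the birds.png above):

import UIKit
import XCTest

// Fails with a readable message when the resource is missing from the test
// bundle or cannot be decoded, instead of crashing on a force unwrap.
final class ResourceSanityTests: XCTestCase {
    func testBirdImageIsBundledAndDecodable() throws {
        let bundle = Bundle(for: ResourceSanityTests.self)
        let url = try XCTUnwrap(bundle.url(forResource: "birds", withExtension: "png"),
                                "birds.png was not copied into the test bundle")
        let data = try Data(contentsOf: url)
        XCTAssertNotNil(UIImage(data: data), "birds.png exists but is not a decodable image")
    }
}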

Swift NSUnarchiver Error

I get the following error when I try to unarchive a custom object:
'cannot decode object of class (PhotoList) for key (root); the class may be defined in source code or a library that is not linked'
I currently have a version on the App Store (v1.0) and have issued an update via TestFlight (v2.0), and this is where the error happens. The error doesn't happen on same-version builds via Xcode. Nothing has changed (that I can see!) that would have caused this.
Here is the code I have for archiving the object
let data = NSKeyedArchiver.archivedDataWithRootObject(VehicleList.instance)
NSUserDefaults.standardUserDefaults().setObject(data, forKey: "archiveName")
And here is the code I have for unarchiving the object
if let data = NSUserDefaults.standardUserDefaults().objectForKey("archiveName") as? NSData {
    let photoList: PhotoList = NSKeyedUnarchiver.unarchiveObjectWithData(data) as! PhotoList
}
It turns out the issue was related to multiple targets. It seems that when I duplicated targets, I swapped the original target with another one, so it was in fact a different target that couldn't unarchive the data.
So be careful when you create targets and make sure you don't mix them up! Hope this saves someone the amount of time it took me to figure this out!
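For reference, when this error is caused by the archived class name carrying a different module or target prefix (rather than by a mixed-up target), explicitly mapping the class name on both sides is a common fix. A hedged sketch; "MyApp" is an assumed module name:

import Foundation

// Pin the archived class name so it does not depend on the module/target
// prefix, and map any previously archived prefixed name back to the class.
func registerArchivingNames() {
    NSKeyedArchiver.setClassName("PhotoList", for: PhotoList.self)
    NSKeyedUnarchiver.setClass(PhotoList.self, forClassName: "PhotoList")

    // If older archives were written with a prefixed name, map that too.
    NSKeyedUnarchiver.setClass(PhotoList.self, forClassName: "MyApp.PhotoList")
}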