Importing an .obj (or .dae, or LiDAR scan) into a SceneKit scene asset from Files at app runtime, without using the Xcode GUI - Swift

System: macOS Catalina 10.15.7, Xcode 12.3, Swift 5
Has anyone accessed a .dae or .obj file in the project (in Files) on an iPad/iPhone and been able to create a scene asset from it? I do not want to use the graphical interface in Xcode and drop files in before the app builds. I want to be able to load and create a scene (.scn) with the image overlay, and use whatever object origin was in the file as the new scene origin. Is this even possible?
I have been playing with some example code using Model I/O, but it hasn't really worked yet.
For example, call the function somewhere and pass it the filename "scannedChair".
The code below is adapted from https://developer.apple.com/forums/thread/103245:
import SceneKit
import SceneKit.ModelIO
import ModelIO

func loadSavedOBJ(filename: String) -> SCNScene? {
    // Locate the app's Documents directory
    let documentDirectoryURL = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    let objURL = documentDirectoryURL.appendingPathComponent(filename).appendingPathExtension("obj")
    // Load the .obj into a Model I/O asset and pull out the first mesh
    let asset = MDLAsset(url: objURL)
    guard let object = asset.object(at: 0) as? MDLMesh else {
        fatalError("Failed to get mesh from obj asset.")
    }
    // Wrap the Model I/O object in a SceneKit node and add it to a new scene
    let scene = SCNScene()
    let node = SCNNode(mdlObject: object)
    scene.rootNode.addChildNode(node)
    return scene
}
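A hypothetical call site might look like this (sceneView here is an assumed SCNView outlet, and "scannedChair.obj" is assumed to already exist in the app's Documents directory):
// Load the scanned chair from Documents and display it
if let chairScene = loadSavedOBJ(filename: "scannedChair") {
    sceneView.scene = chairScene
}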
Everywhere I search, people are focused on exporting 3D files from their app or using them in augmented reality, but not so much on using scanned items in a traditional app scene. For example, I scan a chair and want that chair to appear in a POV game or app.
*Note: I am very green in Xcode/Swift. I'm just looking for pointers, not a solution unless it presents itself. Apologies if my terminology is completely off-base.

Related

SpriteKit in screensaver can't find images

I am making a screensaver in Swift using SpriteKit.
While testing the screensaver as an app, all the textures load properly. As soon as I build the .saver and load it in System Preferences, SpriteKit reports that the images cannot be found.
I used SKTexture(imageNamed: ""), so I tried using
var imageURL = Bundle.main.url(forResource: "gradient", withExtension: "png")
let imageGradient = NSImage(contentsOf: imageURL!)!
, but I got the same result.
SpriteKit can't access the images when built into the .saver file, but works perfectly when run as the app.
I have included the images in the bundle, in the asset catalog, and in the target's Copy Bundle Resources, Development Assets, and Resource Tags.
You can clone the project from here: https://github.com/Nazar-Bibik/SavePortal
I have looked into this project: https://github.com/leebradley/nyan-screensaver
And I found how to do it properly:
You need to put the images into the bundle itself, not into xcassets (put them alongside your .swift files).
Instead of using just the image name, use the following:
let sourceBundle = Bundle(for: MainSwiftFile.self)
let imagePath = sourceBundle.path(forResource: "picture-name", ofType: "png")!
neededTextures = SKTexture(imageNamed: imagePath)
In Bundle(for:), pass the class defined in the .swift file that holds your principal class (the Principal class entry in the Info.plist file).
The rest is simple: forResource is the file name, ofType is the file extension, and instead of passing the usual name to imageNamed: you pass the full path to the file as a String.
"imagePath" here is of type String.

Download Custom CoreML Model and Load for Usage [Swift]

I am creating an app based on a neural network, and the Core ML model is around 150 MB, so obviously I can't ship it inside the app.
To overcome this issue, I came across this article, which mentions that you can download and compile a Core ML model on the device.
I did, and I downloaded it onto my device, but the problem is that I cannot run predictions the way I could with the original model. The original model takes a UIImage as input, but the compiled MLModel expects an MLFeatureProvider. Can anyone explain how to do the conversion so I can use it like the original?
do {
    let compiledUrl = try MLModel.compileModel(at: modelUrl)
    let model = try MLModel(contentsOf: compiledUrl)
    debugPrint("Model compiled \(model.modelDescription)")
    //model.prediction(from: MLFeatureProvider) //Problem
    //It should be like this
    //guard let prediction = try? model.prediction(image: pixelBuffer!) else {
    //    return
    //}
} catch {
    debugPrint("Error while compiling \(error.localizedDescription)")
}
When you add an mlmodel file to your project, Xcode automatically generates a source file for you. That's why you were able to write model.prediction(image: ...) before.
If you compile your mlmodel at runtime then you don't have that special source file and you need to call the MLModel API yourself.
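Roughly, that means building an MLFeatureProvider by hand. A minimal sketch (the input name "image" and the use of a CGImage are assumptions; check model.modelDescription.inputDescriptionsByName for your model's actual input name and type, and note that MLFeatureValue(cgImage:constraint:options:) requires iOS 13):
import CoreML

func predict(with model: MLModel, cgImage: CGImage) throws -> MLFeatureProvider {
    // Look up the image constraint declared by the compiled model (assumed input name: "image")
    guard let constraint = model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint else {
        fatalError("Model has no image input named 'image'")
    }
    // Wrap the CGImage in a feature value and hand it to the model as a dictionary provider
    let imageFeature = try MLFeatureValue(cgImage: cgImage, constraint: constraint, options: nil)
    let provider = try MLDictionaryFeatureProvider(dictionary: ["image": imageFeature])
    return try model.prediction(from: provider)
}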
The easiest solution here is to add the mlmodel file to your project, copy-paste the automatically generated source file into a new source file, and use that with the mlmodel you compile at runtime. (After you've copied the generated source, you can remove the mlmodel again from your Xcode project.)
Also, if your model is 150MB, you may want to consider making a small version of it by choosing an architecture that is more suitable for mobile. (Not VGG16, which it seems you're currently using.)
// Example of calling prediction(from:) with a hand-written feature provider and reading the output
guard let raterOutput = try? regressionModel.prediction(from: RegressorFeatureProviderInput(
    feature1: 3.4,
    feature2: 4.5))
else { return 0 }
return Double(truncating: NSNumber(value: RegressorFeatureProviderOutput(features: raterOutput).isSaved))
Adding to what Matthijs Hollemans said:
let url = try! MLModel.compileModel(at: URL(fileURLWithPath: model))
visionModel = try! VNCoreMLModel(for: MLModel(contentsOf: url))
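From there, the wrapped model can be run through Vision. A rough sketch (the classification output type and the cgImage input are assumptions about the model being used):
import Vision

let request = VNCoreMLRequest(model: visionModel) { request, _ in
    // Assuming a classifier; other model types return different observation classes
    guard let results = request.results as? [VNClassificationObservation],
          let top = results.first else { return }
    print("\(top.identifier): \(top.confidence)")
}
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try? handler.perform([request])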

Load .obj to .scn with multiple sub-objects, textures, materials in SceneKit & Model I/O?

I'm currently working with large .obj files in Apple's SceneKit/Model I/O that contain multiple objects, each with separate textures and materials. This means I cannot apply one single texture to the whole file like many other forum posts suggest. Is there a good way to import the materials and textures?
I have my .obj, .mtl, and .jpg files all in one directory, where I'm also writing the .scn scene.
The code currently follows this design: I load the file from its location into an MDLAsset, place it into an SCNScene, and save that back to a file to be loaded later in the code.
//...
// Get the mesh from the obj object
let asset = MDLAsset(url: url)
//asset.loadTextures()
guard let object = asset.object(at: 0) as? MDLMesh else {
    fatalError("Failed to get mesh from obj asset.")
}
// Wrap the ModelIO object in a SceneKit object
let scene = SCNScene()
let node = SCNNode(mdlObject: object)
scene.rootNode.addChildNode(node)
// Get the document directory, then build a URL for the object, replacing its extension with .scn
let dirPaths = FileManager().urls(for: .documentDirectory, in: .userDomainMask)
let fileUrl = dirPaths[0].appendingPathComponent(url.lastPathComponent.replacingOccurrences(of: ".obj", with: ".scn"))
// Write the scene to the new url
if !scene.write(to: fileUrl, delegate: nil) {
    print("Failed to write scn scene to file!")
    return nil
}
// ...
The MDLAsset.loadTextures function has no documentation and only causes a memory leak, so at the time of this post it's not an option. Opening the model by hand and hitting "Convert to SCNScene" doesn't work either, as I still lose the materials. Additionally, I want this to be automated in code so models can be downloaded and converted at runtime.
It seems like there is no built-in way to do this except handling each texture and material by hand in code, which is easy when there's only one complete texture, but this model might have 100 different materials. It looks like it requires me to parse the .obj/.mtl manually and then create and assign the materials by hand. That seems completely unreasonable, and I figure there must be a better way that I don't know about.
When you import an OBJ file via Model I/O as an MDLAsset, it will arrive as a collection of one or more MDLMeshes. The meshes will have MDLMaterials associated with them, and the MDLMaterial will have attributes. Those attributes will be numeric, file paths, or images. You need to iterate the properties, and check if there is a path.
https://developer.apple.com/documentation/modelio/mdlmaterialproperty
If there is, it will likely be a fileURL with the same content as was in the OBJ file's associated MTL file.
The properties described in the MDLScatteringFunction correspond to the various properties in a typical MTL file.
https://developer.apple.com/documentation/modelio/mdlscatteringfunction
MDLAsset.loadTextures will add an MDLTextureSampler value to the property, if Model IO can actually find the texture referenced in the MTL file.
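A rough sketch of that iteration, reusing the url from the question's snippet (the choice of .baseColor and the printouts are only illustrative; a real converter would build an SCNMaterial per submesh from whatever properties it finds):
import ModelIO

let asset = MDLAsset(url: url)
asset.loadTextures()
// Walk every mesh in the asset, then every submesh's material
for case let mesh as MDLMesh in asset.childObjects(of: MDLMesh.self) {
    guard let submeshes = mesh.submeshes else { continue }
    for case let submesh as MDLSubmesh in submeshes {
        guard let material = submesh.material else { continue }
        // Inspect one semantic (base color) to see what kind of value the MTL supplied
        if let property = material.property(with: .baseColor) {
            if let textureURL = property.urlValue {
                print("\(material.name): texture file at \(textureURL)")
            } else if let path = property.stringValue {
                print("\(material.name): texture path \(path)")
            } else if let sampler = property.textureSamplerValue {
                print("\(material.name): resolved texture sampler \(sampler)")
            } else {
                print("\(material.name): numeric value \(property.float3Value)")
            }
        }
    }
}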

Accessing a file saved in a Today extension from the main project

I am trying to access a file I saved from my Today extension. In my Today extension, I did this to save the file:
func startRecording() {
    let audioFilename = getDocumentsDirectory().appendingPathComponent("recording.m4a")
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        audioRecorder.delegate = self
        audioRecorder.record()
        recordButton.setTitle("Stop", for: .normal)
    } catch {
        finishRecording(success: false)
    }
}

func getDocumentsDirectory() -> URL {
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = paths[0]
    return documentsDirectory
}
I then tried to get the data for my AVAudioPlayer in the main part of the project using this code:
let path = Bundle.main.path(forResource: "recording.m4a", ofType:nil)!
let url = URL(fileURLWithPath: path)
However, it gave the error: unexpectedly found nil while unwrapping an Optional value.
Thanks for the help.
Your extension saves the file to its document directory and your app code is looking for the file in the app bundle. The app bundle only contains the resources that are distributed with the app. You'll need to delve into the file system.
However, there's another problem. The extension and containing app don't share a documents directory. They each have their own container for writing data local to themselves. If you want to share data between them, it's a little more work. In summary:
Create an app group identifier for the app and the extension to share.
Use FileManager.default.containerURL(forSecurityApplicationGroupIdentifier:) to get the URL of the shared container directory (see the sketch below).
From the container URL, append the file name.
In the extension, you'll set up the AVAudioRecorder as usual and start recording.
In the main app, you'll want to use the NSFileCoordinator API to ensure that only one process is writing to the file at a time. Hopefully, the AVAudioRecorder uses the NSFileCoordinator API internally, although I didn't immediately find confirmation of this.
For more details about shared containers, see this blog post.
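A minimal sketch of steps 1 to 3 (the group identifier "group.com.example.recorder" is a placeholder; both the app and the extension targets must have that App Group capability enabled):
// Shared location that both the Today extension and the containing app can read/write
func sharedRecordingURL() -> URL? {
    let groupID = "group.com.example.recorder" // hypothetical app group identifier
    guard let container = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: groupID) else {
        return nil
    }
    return container.appendingPathComponent("recording.m4a")
}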
I just tried the same - record audio from a Today Extension. The code looks sooo familiar, so I'm taking a wild guess: you want to capture voice and send the file to the Google Speech API, correct?
Nonetheless, I think we're hitting the restrictions of extensions: judging by https://developer.apple.com/library/content/qa/qa1872/_index.html, extensions cannot record audio. The article was written for iOS 8, but I don't believe Apple ever lifted the restriction. Please correct me if I'm wrong, since I keep running into problems doing what the OP does myself.
By the way, check the result of audioRecorder.record(). It might be false and indicate that the capture never started (that's my current error).

Adding an Emitter Node to an .sks file and using it - Xcode 6.0 + Swift

I am trying to learn how to use SKEmitterNode, for which I created a SpriteKit project. By default it provides a GameScene.sks file; selecting it displays a list of objects in the library, which includes an Emitter Node. I tried to drag the Emitter Node onto the GameScene and configure it in the Attributes Inspector, which looks like this:
I tried to run the project on the simulator after this simple setup, but it is not showing any particles on the screen; it is showing the default 'Hello World' UI :(
Is there anything I am missing?
For example, you can create a separate file for your emitter node (TestNode.sks). In this file you can configure your emitter node as you would in the Attributes Inspector. Then you can add the node to your scene in the GameScene.swift file like this:
let emitterPath: String = NSBundle.mainBundle().pathForResource("TestNode", ofType: "sks")!
let emitterNode = NSKeyedUnarchiver.unarchiveObjectWithFile(emitterPath) as SKEmitterNode
emitterNode.position = CGPointMake(100, 100)
emitterNode.name = "emitterNode"
emitterNode.zPosition = 10
emitterNode.targetNode = self
self.addChild(emitterNode)
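For reference, a roughly equivalent version in current Swift (a sketch, meant to live inside an SKScene subclass; newer SpriteKit's SKEmitterNode(fileNamed:) convenience initializer does the same unarchiving):
// Modern-Swift sketch of the same idea
if let emitterNode = SKEmitterNode(fileNamed: "TestNode") {
    emitterNode.position = CGPoint(x: 100, y: 100)
    emitterNode.name = "emitterNode"
    emitterNode.zPosition = 10
    emitterNode.targetNode = self
    addChild(emitterNode)
}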