How to deploy a PyTorch model? - deployment

I trained a style transfer model on thousands of images. I saved a checkpoint every 1000 images and also saved the final weights of the transformer network. Now I want to export it as a model I can use in an app, but none of my searches give a clear answer on how to do that.
VG1 = vgg.VGG16("/kaggle/working/transformer_weight.pth")
example = torch.rand(1, 3, 800, 800)
traced_script_module = torch.jit.script(VG1, example)
traced_script_module.save('kaggle/working')
but it gives
RuntimeError:
Module 'Sequential' has no attribute '_modules' :
File "/kaggle/working/vgg.py", line 43
layers = {'3': 'relu1_2', '8': 'relu2_2', '15': 'relu3_3', '22': 'relu4_3'}
features = {}
for name, layer in self.features._modules.items():
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
x = layer(x)
if name in layers:
I am a beginner and I have been trying for days. Please tell me if you need more information.
I want to save the model so that I can use it in Android Studio to make an app.
The notebook is at https://www.kaggle.com/starktony45/fast-neural-style
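A note that may help, offered as a sketch rather than the notebook's exact fix: torch.jit.script does not take an example input, and scripting fails here because TorchScript cannot iterate over self.features._modules in the VGG wrapper. Tracing the network instead records the operations for a fixed-size example and produces a .pt file that PyTorch Mobile can load on Android. The nn.Sequential below is only a stand-in for the real transformer network, whose weights you would load from transformer_weight.pth before tracing.
import torch
import torch.nn as nn

# Stand-in for the notebook's transformer network; replace with the real module
# and load its weights (e.g. from transformer_weight.pth) before tracing.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=9, padding=4),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=9, padding=4),
)
model.eval()

example = torch.rand(1, 3, 800, 800)            # fixed-size dummy input for tracing
traced = torch.jit.trace(model, example)        # trace instead of script
traced.save("/kaggle/working/transformer_traced.pt")  # save to a file name, not a directory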

Related

Is it possible to access/download glb or gltf models from Decentraland?

I have created a dummy scene using dcl init & dcl start and imported a few models from one of their GitHub repositories.
I created a small script which builds the scene, imports those models, and logs the list of imported models to the console:
...
const trashCan = addToScene("trashCan", "models/Trash_Can.glb", new Transform({
    position: new Vector3(0.2, 0.2, 0.2),
    rotation: new Quaternion(0, 0, 0, 1),
    scale: new Vector3(1, 1, 1)
}));

trashCan.addComponent(
    new OnPointerDown((): void => {
        console.log("Downloadable entities:");
        // console.log(engine.entities);
        for (let k in engine.entities) {
            // console.log(engine.entities[k])
            // console.log(engine.entities[k].components)
            const shape = engine.entities[k].components["engine.shape"]
            const transform = engine.entities[k].components["engine.transform"]
            if (shape) {
                // console.log(engine.entities[k].components["engine.shape"].data)
                console.log(" name: " + shape.src)
            }
            if (transform) {
                console.log(" position: " + transform.position)
            }
        }
    })
)
...
The script gives me access to some model metadata, i.e. the models' paths in the project and their transforms in the scene.
I was wondering whether it is possible to access/download the 3D models themselves, maybe just GET them? Does anybody know if Decentraland prohibits such practices? AFAIK they're using the Unity engine.
Just doing a GET on one of the models in the scene doesn't seem to be successful.
Could this be possible to achieve?
EDIT:
After the answer from @cachius, following his suggestion, I was able to find the following:
The unity.data file (#1) is a UnityWebData1.0 file which can be unpacked using UnityPack, as described here:
from unitypack.utils import BinaryReader

SIGNATURE = 'UnityWebData1.0'

class DataFile:
    def load(self, file):
        buf = BinaryReader(file, endian="<")
        self.path = file.name
        self.signature = buf.read_string()
        header_length = buf.read_int()
        if self.signature != SIGNATURE:
            raise NotImplementedError('Invalid signature {}'.format(repr(self.signature)))
        self.blobs = []
        while buf.tell() < header_length:
            offset = buf.read_int()
            size = buf.read_int()
            namez = buf.read_int()
            name = buf.read_string(namez)
            self.blobs.append({'name': name, 'offset': offset, 'size': size})
        if buf.tell() > header_length:
            raise NotImplementedError('Read past header length, invalid header')
        for blob in self.blobs:
            buf.seek(blob['offset'])
            blob['data'] = buf.read(blob['size'])
            if len(blob['data']) < blob['size']:
                raise NotImplementedError('Invalid size or offset, reading past file')

import os

f = open('unity.data', 'rb')
df = DataFile()
df.load(f)

EXTRACTION_DIR = 'extracted'
for blob in df.blobs:
    print('extracting # {}:\t{} ({})'.format(blob['offset'], blob['name'], blob['size']))
    dest = os.path.join(EXTRACTION_DIR, blob['name'])
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, 'wb') as f:
        f.write(blob['data'])
The extracted data folder contains one or more .unity3d files, which can be further unpacked using AssetStudio; however, the tool looked a bit buggy/unstable to me, so I'm not sure how reliable it is.
From what I've discovered, this contains much of the scene's helper entities, but not the models. The models are downloaded separately in glTF format (#2). One can just download such a file and import it with Blender.
So it seems the glTF models are served from http://127.0.0.1:8001/content/contents/ with the files renamed. I have not yet been able to retrieve the metadata describing the exact contents of http://127.0.0.1:8001/content/contents, so I'll keep digging.
Access
Have a look at whether the models appear as requests in the Network tab. If so, you can right-click a request and 'Copy as cURL' to download the model on the command line by appending > model.glb. This way you send the same headers and parameters as the client.
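If you prefer a script over curl, a rough Python equivalent is sketched below; the URL and headers are placeholders, so copy the real values from the Network tab ('Copy as cURL' shows both).
import requests

# Placeholder URL and headers: copy the real request URL and header values
# from the browser's Network tab so the server sees the same client.
url = "http://127.0.0.1:8001/content/contents/<renamed-file>"
headers = {"User-Agent": "Mozilla/5.0"}

resp = requests.get(url, headers=headers, timeout=30)
resp.raise_for_status()

with open("model.glb", "wb") as f:
    f.write(resp.content)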
Legality
There seems to be an important distinction between content provided by them vs content provided by users. Relevant sections from their Terms, legalese edited for clarity:
1. Acceptance of Terms
The Decentraland Foundation holds the rights over the DCL Client, the Desktop Client, the SDK 5.0, the Marketplace, the Builder, the Command Line Interface, DAO, and the Developers’ Hub which are referred to herein as the "Tools".
12. Proprietary Rights
12.1 All rights of the Tools are owned by the Foundation. Except as authorized in Section 13, you agree not to copy, modify, distribute, perform, display or create derivations based on the Tools. The visual interfaces, graphics (including all art and drawings) associated with, and the code and data of, the Tools, excluding the Content submitted by Users, are owned by the Foundation. ... You agree that any "purchase" of LAND does not give you rights to the art and drawings associated with the Tools and content therein other than those expressly contained in these Terms. And that you do not have the right to reproduce the Foundation Materials without the Foundation’s written consent.
13. Open Source License.
13.1 Grant of Copyright License.
Each Contributor grants to you a copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
13.3 Redistribution.
You may reproduce and distribute copies or derivations in any medium, with or without modifications, and in Source or Object form, provided that you meet the following conditions:
modifications shall not infringe the Privacy and Content Policies, nor allow their infringement or of Section 12.3 and require any further Contributor to abide by these limitations;
any modifications can only take place until six (6) months have elapsed since the release to the general public;
you must give any other recipients a copy of this License;
you must cause any modified files to carry prominent notices stating that you changed the files;
you must retain in any derivations all copyright, patent, trademark, and attribution notices from the Source
if the Work includes a "NOTICE" text file, then any derivation must include a copy of the attribution notices in it. ...
You should reach out to them or their community directly on Discord, Twitter, Reddit, Telegram, or GitHub, and add the results here.

Implementing a TensorFlow.js model in the Ionic framework with Capacitor

I am working on a project where I am using Ionic and TensorFlow for machine learning. I converted my TensorFlow model to a TensorFlow.js model and put its model.json and shard files in the assets folder of the Ionic app. I want to use Capacitor to access the camera and allow users to take photos. The photos will then be passed to the TensorFlow.js model in assets to get and display a prediction for that user.
Here is my typescript code:
import { Component, OnInit, ViewChild, ElementRef, Renderer2 } from '@angular/core';
import { Plugins, CameraResultType, CameraSource} from '@capacitor/core';
import { DomSanitizer, SafeResourceUrl} from '@angular/platform-browser';
import { Platform } from '@ionic/angular';
import * as tf from '@tensorflow/tfjs';
import { rendererTypeName } from '@angular/compiler';
import { Base64 } from '@ionic-native/base64/ngx';
import { defineCustomElements } from '@ionic/pwa-elements/loader';
const { Camera } = Plugins;
@Component({
  selector: 'app-predict',
  templateUrl: './predict.page.html',
  styleUrls: ['./predict.page.scss'],
})
export class PredictPage{
  linearModel : tf.Sequential;
  prediction : any;
  InputTaken : any;
  ctx: CanvasRenderingContext2D;
  pos = { x: 0, y: 0 };
  canvasElement : any;
  photo: SafeResourceUrl;
  model: tf.LayersModel;
  constructor(public el : ElementRef , public renderer : Renderer2 , public platform : Platform, private base64: Base64,
    private sanitizer: DomSanitizer) 
  {}
  
  async takePicture() {
    const image = await Camera.getPhoto({
        quality: 90,
        allowEditing: true,
        resultType: CameraResultType.DataUrl,
        source: CameraSource.Camera});
      const model = await tf.loadLayersModel('src/app/assets/model.json');
      this.photo = this.sanitizer.bypassSecurityTrustResourceUrl(image.base64String);
      defineCustomElements(window);
    const pred = await tf.tidy(() => {
          // Make and format the predications
        const output = this.model.predict((this.photo)) as any;
                                
          // Save predictions on the component
        this.prediction = Array.from(output.dataSync()); 
        });
  }
}
In this code, I import the necessary tools. Then I have my constructor and a takePicture() function. In takePicture(), I include the functionality for the user to take a picture. However, I am having trouble passing the picture to the TensorFlow.js model to get a prediction. I pass the picture to the model in this line of code:
const output = this.model.predict((this.photo)) as any;
However, I am getting an error stating that:
Argument of type 'SafeResourceUrl' is not assignable to parameter of type 'Tensor | Tensor[]'.\n Type 'SafeResourceUrl' is missing the following properties from type 'Tensor[]': length, pop, push, concat, and 26 more.
It would be appreciated if I could receive some guidance regarding this topic.
The model expects you to pass in a Tensor input, but you're passing it an image format that isn't part of the tfjs ecosystem. You should first convert this.photo to a Tensor, or, perhaps easier, convert image.base64String to a tensor.
Since you seem to be using Node, try this code:
// Move the base64 image into a buffer
const b = Buffer.from(image.base64String, 'base64')
// Decode the buffer into a tensor (renamed so it doesn't shadow `image` above)
const imageTensor = tf.node.decodeImage(b)
// Compute the output
const output = this.model.predict(imageTensor) as any;
Other conversion solutions here: convert base64 image to tensor

Use a TensorFlow model with Swift for iOS

We are trying to use the TensorFlow Face Mesh model within our iOS app. Model details: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view.
I followed the official tutorial for setting up the model, https://firebase.google.com/docs/ml-kit/ios/use-custom-models, and also printed the model's input and output using the Python script in the tutorial, and got this:
INPUT
[ 1 192 192 3]
<class 'numpy.float32'>
OUTPUT
[ 1 1 1 1404]
<class 'numpy.float32'>
At this point, I'm pretty lost trying to understand what those numbers mean and how to pass the input image and get the output face mesh points using the model Interpreter. Here's my Swift code so far:
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter

// Core ML delegate will only be created for devices with Neural Engine
if coreMLDelegate != nil {
    interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [coreMLDelegate!])
} else {
    interpreter = try Interpreter(modelPath: modelPath)
}
Any help will be highly appreciated!
What those numbers mean completely depends on the model you're using. It's unrelated to both TensorFlow and Core ML.
The output is a 1x1x1x1404 tensor, which basically means you get a list of 1404 numbers. How to interpret those numbers depends on what the model was designed to do.
If you didn't design the model yourself, you'll have to find documentation for it.
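As a hedged illustration only (an assumption based on the linked Face Mesh model, not something stated in this answer): face mesh models commonly output 3 coordinates per landmark, so 1404 values would correspond to 468 landmarks. If the model's documentation confirms that layout, unpacking the flat output is just a reshape, sketched here in NumPy:
import numpy as np

# Assumption: the 1x1x1x1404 output holds 468 landmarks x 3 coordinates (x, y, z).
# Verify the layout against the model's documentation before relying on it.
output = np.random.rand(1, 1, 1, 1404).astype(np.float32)  # stand-in for the interpreter output
landmarks = output.reshape(-1, 3)                           # shape (468, 3)
print(landmarks.shape)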

LSTM dimension issues with Swift/Core ML implementation

I have generated an LSTM model for audio classification using Keras with TensorFlow as the backend. Upon conversion to a .mlmodel using coremltools, I am running into issues, as you can see here. The dimensions are very different from what is expected.
I used this as my base in Xcode, in Swift.
In particular, this snippet is what I believe is giving me trouble:
do {
    let request = try SNClassifySoundRequest(mlModel: soundClassifier.model)
    try analyzer.add(request, withObserver: resultsObserver)
} catch {
    print("Unable to prepare request: \(error.localizedDescription)")
    return
}
Running this model gives me the following error:
Invalid model, inputDescriptions.count = 5
Unable to prepare request: Invalid model, inputDescriptions.count = 5
Even though when I build the model I see what is expected in the spec:
description {
  input {
    name: "audioSamples"
    shortDescription: "Audio from microphone"
    type {
      multiArrayType {
        shape: 13
        dataType: DOUBLE
      }
    }
  }
I am trying to incorporate this post into my code, but I am not sure how to adapt it to my needs. Any advice is greatly appreciated. I can see that MLMultiArray is the key to my question, but I am unsure how to put the proper data into it and how to feed it into an SNClassifySoundRequest.
keras == 2.3.1
coremltools == 3.3
When you use SNClassifySoundRequest, your model needs to have a certain structure. I don't know the exact details off the top of my head, but I think it needs to be a pipeline where the first model is a built-in model that converts the audio to spectrograms.
If you trained your model with Keras, it's most likely not compatible with the requirements of SNClassifySoundRequest.
The good news is that you don't need SNClassifySoundRequest to run your model. Simply call soundClassifier.prediction(...) on the model.
Note that you need to pass in the input but also the hidden states of the LSTM layers. Core ML will not automatically manage the LSTM state for you (unlike Keras).

How to convert a TensorFlow .pb file to .bytes?

I am trying to convert the Android TensorFlow example provided in the TensorFlow GitHub into a Unity project. I have a .pb file for ssd_mobilenet_v1_android_export. But to use TensorFlow models in Unity, the model has to be in .bytes format. I can't figure out how to convert my .pb file to .bytes. I was going to use this code, but I don't have any checkpoints for this graph, only the .pb file.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph=model_path + '/raw_graph_def.pb',
                          input_binary=True,
                          input_checkpoint=last_checkpoint,
                          output_node_names="action",
                          output_graph=model_path + '/your_name_graph.bytes',
                          clear_devices=True, initializer_nodes="", input_saver="",
                          restore_op_name="save/restore_all",
                          filename_tensor_name="save/Const:0")
Is there a simple way to do this conversion? Or a simple way to get a checkpoint for this model? It seems like this should be obvious but I can't figure it out. Thanks.
You can just switch the extension from .pb to .bytes, and in most cases this will work just fine. Check my TF Classify example for Unity.
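In other words, the frozen graph is already the serialized bytes Unity expects; only the file extension differs. A minimal sketch of the rename, assuming the exported .pb sits next to the script:
import shutil

# The exported .pb is already a serialized (frozen) GraphDef; the Unity plugin
# just expects the same bytes under a .bytes extension, so a copy/rename is enough.
shutil.copyfile("ssd_mobilenet_v1_android_export.pb",
                "ssd_mobilenet_v1_android_export.bytes")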