I am trying to convert the Android TensorFlow example from the TensorFlow GitHub repository into a Unity project. I have a .pb file for ssd_mobilenet_v1_android_export, but to use TensorFlow models in Unity the model has to be in a .bytes format. I can't figure out how to convert my .pb file to .bytes. I was going to use the code below, but I don't have any checkpoints for this graph, only the .pb file.
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph=model_path + '/raw_graph_def.pb',
                          input_binary=True,
                          input_checkpoint=last_checkpoint,
                          output_node_names="action",
                          output_graph=model_path + '/your_name_graph.bytes',
                          clear_devices=True,
                          initializer_nodes="",
                          input_saver="",
                          restore_op_name="save/restore_all",
                          filename_tensor_name="save/Const:0")
Is there a simple way to do this conversion? Or a simple way to get a checkpoint for this model? It seems like this should be obvious but I can't figure it out. Thanks.
You can just switch the extension from .pb to .bytes, and in most cases this will work just fine. Check my TF Classify example for Unity.
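A minimal sketch of that rename, assuming the .pb is already a frozen graph (the file names here are placeholders):

import shutil

# A frozen .pb already contains the weights, so copying it under the
# new extension is all Unity needs.
shutil.copyfile("ssd_mobilenet_v1_android_export.pb",
                "ssd_mobilenet_v1_android_export.bytes")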
Related
I have PySpark code that trains an H2O DRF model. I need to save this model to disk and then load it.
from pysparkling.ml import H2ODRF

drf = H2ODRF(featuresCols=predictors,
             labelCol=response,
             columnsToCategorical=[response])
I cannot find any documentation on this, so I am asking the question here.
I think the section of the docs on deploying pipeline models might be relevant: https://docs.h2o.ai/sparkling-water/2.3/latest-stable/doc/deployment/pysparkling_pipeline.html
Pipelines may not be what you're looking for depending on the use case.
Something like the following might work for your use case.
from pyspark.ml import Pipeline

drf = H2ODRF(featuresCols=predictors,
             labelCol=response,
             columnsToCategorical=[response])

pipeline = Pipeline(stages=[drf])
model = pipeline.fit(data)
model.save("drf_model")
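To load the saved pipeline back, PipelineModel.load should work; a short sketch following the Sparkling Water docs linked above, reusing the "drf_model" path from the snippet:

from pyspark.ml import PipelineModel

# Load the pipeline model saved above and score new data with it.
model = PipelineModel.load("drf_model")
predictions = model.transform(data)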
lrmodel = logreg_pipeline.fit(X_train_resh, y_train_resh)
lrmodel.write().overwrite().save("E:/strokestuff/strokelrpred")
lrmodel.save("E:/strokestuff/strokelrpred")
lrmodel is a pipeline that I want to save. My aim is to save this model, then load it to deploy it in Flutter. I have tried every solution I found; can someone help me with this?
You can use joblib to save your model in a .joblib file:
import joblib
pipe_clf_params = {}
filename = 'E:/strokestuff/strokelrpred/strokelrpred.joblib'
pipe_clf_params['pipeline'] = lrmodel
joblib.dump(pipe_clf_params, filename)
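To use it later, load the dictionary back and pull the pipeline out. A sketch, assuming lrmodel is a scikit-learn object that joblib can serialize (a Spark ML pipeline would need Spark's own save/load API instead):

import joblib

# Restore the dictionary saved above and recover the pipeline.
loaded = joblib.load('E:/strokestuff/strokelrpred/strokelrpred.joblib')
lrmodel = loaded['pipeline']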
I am working on a project where I am using Ionic and TensorFlow for machine learning. I have converted my TensorFlow model to a TensorFlow.js model and put the model.json file and shard files in the assets folder of my Ionic project. I want to use Capacitor to access the camera and allow users to take photos. The photos will then be passed to the TensorFlow.js model in assets to get and display a prediction for that user.
Here is my typescript code:
import { Component, OnInit, ViewChild, ElementRef, Renderer2 } from '@angular/core';
import { Plugins, CameraResultType, CameraSource } from '@capacitor/core';
import { DomSanitizer, SafeResourceUrl } from '@angular/platform-browser';
import { Platform } from '@ionic/angular';
import * as tf from '@tensorflow/tfjs';
import { rendererTypeName } from '@angular/compiler';
import { Base64 } from '@ionic-native/base64/ngx';
import { defineCustomElements } from '@ionic/pwa-elements/loader';

const { Camera } = Plugins;

@Component({
  selector: 'app-predict',
  templateUrl: './predict.page.html',
  styleUrls: ['./predict.page.scss'],
})
export class PredictPage {
  linearModel: tf.Sequential;
  prediction: any;
  InputTaken: any;
  ctx: CanvasRenderingContext2D;
  pos = { x: 0, y: 0 };
  canvasElement: any;
  photo: SafeResourceUrl;
  model: tf.LayersModel;

  constructor(public el: ElementRef, public renderer: Renderer2, public platform: Platform,
              private base64: Base64, private sanitizer: DomSanitizer) {}

  async takePicture() {
    const image = await Camera.getPhoto({
      quality: 90,
      allowEditing: true,
      resultType: CameraResultType.DataUrl,
      source: CameraSource.Camera});
    const model = await tf.loadLayersModel('src/app/assets/model.json');
    this.photo = this.sanitizer.bypassSecurityTrustResourceUrl(image.base64String);
    defineCustomElements(window);
    const pred = await tf.tidy(() => {
      // Make and format the predictions
      const output = this.model.predict((this.photo)) as any;
      // Save predictions on the component
      this.prediction = Array.from(output.dataSync());
    });
  }
}
In this code, I have imported the necessary tools. Then I have my constructor and a takePicture() function. In takePicture(), I have included functionality for the user to take pictures. However, I am having trouble passing the picture taken to the TensorFlow.js model to get a prediction. I pass the picture to the model in this line of code:
const output = this.model.predict((this.photo)) as any;
However, I am getting an error stating that:
Argument of type 'SafeResourceUrl' is not assignable to parameter of type 'Tensor | Tensor[]'.
  Type 'SafeResourceUrl' is missing the following properties from type 'Tensor[]': length, pop, push, concat, and 26 more.
It would be appreciated if I could receive some guidance regarding this topic.
The model is expecting you to pass in a Tensor input, but you're passing it some other image format that isn't in the tfjs ecosystem. You should first convert this.photo to a Tensor, or, perhaps easier, convert image.base64String to a tensor.
Since you seem to be using Node, try this code:
// Move the base64 image into a buffer
const b = Buffer.from(image.base64String, 'base64');
// Decode the buffer into an image tensor
const imageTensor = tf.node.decodeImage(b);
// Compute the output
const output = this.model.predict(imageTensor) as any;
Other conversion solutions here: convert base64 image to tensor
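Since Ionic code runs in a WebView rather than Node, here is a hedged browser-side sketch of the same conversion using tf.browser.fromPixels. It assumes the loaded model has been assigned to this.model, and the 224x224 input size is a placeholder for your model's actual input shape:

// Load the data URL into an HTMLImageElement (no Node APIs needed).
const img = new Image();
img.src = image.dataUrl; // DataUrl result type already includes the data: prefix
await new Promise(resolve => (img.onload = resolve));

const input = tf.tidy(() => {
  const pixels = tf.browser.fromPixels(img);                    // [h, w, 3] int32
  const resized = tf.image.resizeBilinear(pixels, [224, 224]);  // assumed input size
  return resized.expandDims(0).toFloat();                       // [1, 224, 224, 3]
});
const output = this.model.predict(input) as tf.Tensor;
this.prediction = Array.from(output.dataSync());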
We are trying to use TensorFlow Face Mesh model within our iOS app. Model details: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view.
I followed the official tutorial for setting up the model: https://firebase.google.com/docs/ml-kit/ios/use-custom-models. I also printed the model's input and output using the Python script in the tutorial and got this:
INPUT
[ 1 192 192 3]
<class 'numpy.float32'>
OUTPUT
[ 1 1 1 1404]
<class 'numpy.float32'>
At this point, I'm pretty lost trying to understand what those numbers mean and how to pass the input image and get the output face mesh points using the model Interpreter. Here's my Swift code so far:
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter

// Core ML delegate will only be created for devices with Neural Engine
if coreMLDelegate != nil {
    interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [coreMLDelegate!])
} else {
    interpreter = try Interpreter(modelPath: modelPath)
}
Any help will be highly appreciated!
What those numbers mean completely depends on the model you're using. It's unrelated to both TensorFlow and Core ML.
The input is a 1x192x192x3 float32 tensor: a single 192x192 RGB image. The output is a 1x1x1x1404 tensor, which basically means you get a list of 1404 numbers. How to interpret those numbers depends on what the model was designed to do.
If you didn't design the model yourself, you'll have to find documentation for it.
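For reference, here is a minimal Python sketch of driving a TFLite interpreter with those shapes. The model file name is a placeholder, and reading the 1404 values as 468 (x, y, z) landmarks (as in MediaPipe Face Mesh) is an assumption that the model's documentation would need to confirm:

import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="face_mesh.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The model expects a 1x192x192x3 float32 image; a random frame stands in here.
frame = np.random.rand(1, 192, 192, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()

# Output is 1x1x1x1404; if those are 468 (x, y, z) landmarks, reshape accordingly.
mesh = interpreter.get_tensor(output_details[0]['index']).reshape(468, 3)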
I trained a model for Style Transfer on thousands of images. I saved the model after every 1000 images and also saved the transform network's final weights. Now I want to save it as a model I can use in an app, but none of my searches are giving me a clear answer on how to do that.
VG1 = vgg.VGG16("/kaggle/working/transformer_weight.pth")
example = torch.rand(1, 3, 800, 800)
traced_script_module = torch.jit.script(VG1, example)
traced_script_module.save('kaggle/working')
but it gives
RuntimeError:
Module 'Sequential' has no attribute '_modules' :
File "/kaggle/working/vgg.py", line 43
layers = {'3': 'relu1_2', '8': 'relu2_2', '15': 'relu3_3', '22': 'relu4_3'}
features = {}
for name, layer in self.features._modules.items():
~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
x = layer(x)
if name in layers:
I am a beginner and have been trying for days. Please tell me if you need more information.
I want to save the model so that I can use it in Android Studio to make an app.
The notebook is at https://www.kaggle.com/starktony45/fast-neural-style.
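Not from the thread itself, but one workaround worth sketching: torch.jit.trace records the operations run on a fixed example input instead of compiling the module's Python source, so it can sidestep scripting errors like the Sequential '_modules' one above. vgg.VGG16 and the paths come from the question; everything else is an assumption:

import torch
import vgg

# Assumes vgg.VGG16 loads the transformer weights as in the question.
VG1 = vgg.VGG16("/kaggle/working/transformer_weight.pth")
VG1.eval()

example = torch.rand(1, 3, 800, 800)

# trace() runs the model once on the example input and records the ops,
# so the Python attribute access that broke script() is never compiled.
traced = torch.jit.trace(VG1, example)

# save() needs a file name, not just a directory.
traced.save("/kaggle/working/transformer.pt")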