I have completed a TensorFlow model and converted it correctly to tflite in order to use it in a mobile app built with Flutter. I don't know how to integrate a custom tflite model with Flutter: all the available examples use pretrained models. When I try my model, the camera launches for a moment and then stops immediately!
I followed the source code from this link, which uses the "tflite" package: https://medium.com/@shaqian629/real-time-object-detection-in-flutter-b31c7ff9ef96
I added my own tflite model alongside the other models in all the ".dart" files and in "pubspec.yaml".
You can try the 'tflite_flutter' package. The 'tflite' package is not suitable for custom-trained models, as you will get a shape-incompatibility error message.
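For example, a minimal sketch of what that could look like with tflite_flutter (the asset name and the 1x224x224x3 / 1x1001 buffer shapes below are placeholders; replace them with your own model's shapes, and depending on the package version you may need the full 'assets/...' path):

import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runCustomModel() async {
  // The model must also be declared under the assets section of pubspec.yaml.
  final interpreter = await Interpreter.fromAsset('my_model.tflite');

  // Placeholder buffers shaped like a typical 224x224 image classifier;
  // check interpreter.getInputTensor(0)/getOutputTensor(0) for the real shapes.
  final input = List.generate(
      1,
      (_) => List.generate(
          224, (_) => List.generate(224, (_) => List.filled(3, 0.0))));
  final output = List.generate(1, (_) => List.filled(1001, 0.0));

  interpreter.run(input, output);
  print(output[0]);

  interpreter.close();
}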
I have a basic AR app in Flutter, using the arcore_flutter_plugin package's "ArCoreReferenceNode".
I am trying to import a basic model, but unfortunately it is not displayed at all; I don't even get an error message. The code shows no errors. Only the built-in cube is displayed.
Has anyone seen this issue before?
GLB files were used as models.
I am trying to use MoveNet in Flutter using tflite. If anyone has experience with it or an example implementation, it would be appreciated.
I have successfully implemented the MoveNet singlepose lightning model with tflite_flutter in my Flutter application. You can refer to this repo I made: https://github.com/AGRapista/FitnessInstructor The code you should be interested in is in lib/test.dart
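As a rough sketch (not copied from that repo), the inference call with tflite_flutter could look something like this; the [1, 1, 17, 3] output shape and the [y, x, score] ordering are assumptions about the singlepose lightning model that you should verify against your own .tflite file:

import 'package:tflite_flutter/tflite_flutter.dart';

// Runs MoveNet on an already pre-processed 1x192x192x3 input and
// returns 17 keypoints as [y, x, score] triples.
List<List<double>> runMoveNet(Interpreter interpreter, Object input) {
  // Assumed output shape: [1, 1, 17, 3]; check interpreter.getOutputTensor(0).
  final output = List.generate(
      1,
      (_) => List.generate(
          1, (_) => List.generate(17, (_) => List.filled(3, 0.0))));

  interpreter.run(input, output);

  // output[0][0] holds the 17 keypoints for the single detected person.
  return List<List<double>>.from(output[0][0]);
}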
I am currently working on a Capacitor plugin that should allow me to run a Core ML model in the iOS version of my Ionic app.
Even though I used the usual approach to access the model file, the model is somehow not found in my iOS plugin script. Is there a different way I can access the model besides VNCoreMLModel, or is there a more general problem with using Core ML models in Capacitor plugins?
I also tried to load the model using the same lines of code in a full native Swift app, which worked fine.
The model is already located in the plugin's directory (together with the files Plugin.swift, Plugin.m and so on...) and is accessed by calling VNCoreMLModel(for: "modelname".model).
The error message in particular is: "Cannot find 'Resnet50' in scope"
code snippet:
guard let model = try? VNCoreMLModel(for: Resnet50().model) else {return}
(I personally think that, when integrating the plugin into my app, the model file is perhaps not copied into the 'Development Pods' for some reason.)
I don't know what Capacitor is, but Resnet50 is a class that is automatically generated by Xcode. You either need to copy the source code for that class into your own project, or not use that class and instantiate an MLModel object for your model instead.
I'm on a project that uses tflite PoseNet to run on mobile with the Flutter framework. We wanted a higher precision score in our tests, but we realized that the repository given in the original example in the Dart API docs https://pub.dev/documentation/tflite/latest/ uses multi-person pose estimation by default. (Repository mentioned: https://github.com/shaqian/flutter_realtime_detection).
We know that the original tfjs repository talks about it and gives examples https://github.com/tensorflow/tfjs-models/tree/master/posenet but we can't find it in the Dart API for Flutter.
How can I set single-person pose estimation?
There is another plugin, tflite_flutter, which is actively being developed: https://pub.dev/packages/tflite_flutter. Please take a look.
This plugin allows you to run any custom .tflite model, so you should be able to download the single-person PoseNet model provided on the official TFLite site here: https://www.tensorflow.org/lite/models/pose_estimation/overview.
There aren't any official Flutter examples, but you should be able to refer to the Android/iOS examples to see how to pre-process / post-process the data.
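One thing that helps with the pre-/post-processing is loading the downloaded model with tflite_flutter and printing its tensor shapes and types first; a small sketch, where the asset name is just a placeholder:

import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> inspectPosenetModel() async {
  final interpreter = await Interpreter.fromAsset('posenet.tflite');

  // The input tensor tells you what size and type to resize the camera image to.
  final input = interpreter.getInputTensor(0);
  print('input: ${input.shape} ${input.type}');

  // The output tensors tell you what you have to decode into keypoints.
  final outputs = interpreter.getOutputTensors();
  for (var i = 0; i < outputs.length; i++) {
    print('output $i: ${outputs[i].shape} ${outputs[i].type}');
  }

  interpreter.close();
}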
Hi guys, updating the issue:
We managed it after some tests with the Tflite.runPoseNetOnImage function. This function has a parameter called numResults, which we can set to 1. Tflite.runPoseNetOnFrame has the same parameter.
You can see it at this link (https://pub.dev/documentation/tflite/latest/).
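For anyone looking for the concrete call, it could look roughly like this (the model file name is just a placeholder and the other parameters are left at the package defaults; double-check them against the tflite docs linked above):

import 'package:tflite/tflite.dart';

Future<void> detectSinglePose(String imagePath) async {
  // Placeholder model name; use the PoseNet .tflite file bundled in your assets.
  await Tflite.loadModel(model: 'assets/posenet.tflite');

  // numResults: 1 restricts the estimation to a single person.
  final recognitions = await Tflite.runPoseNetOnImage(
    path: imagePath,
    numResults: 1,
  );
  print(recognitions);

  await Tflite.close();
}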
I created a new classifier from the Visual Recognition beta tool and I am trying to access the new classifier from Unity. In the demo sample, I see the classify function should ideally loop through all classifiers found in your Bluemix Visual Recognition service instance.
However, when I look at the output log in the console, the only classifier that is found is the default classifier.
I know my credentials and the service instance are correct. Does this mean I should create my new classifier from code instead of doing it in the Visual Recognition beta tool? I don't see why this would make a difference, as the classifier is up and running and works from the web GUI.
It's only when I connect to my service instance from Unity and test with the sample Visual Recognition Unity SDK that this custom classifier isn't found.
I am not sure why the Unity SDK sample demo does not see my classifier.
Regards
Leon
The Visual Recognition service abstraction in the Unity SDK does not iterate through all trained classifiers. Please specify the classifiers you would like to use as a string array (classifierIDs).
VisualRecognition visualRecognition = new VisualRecognition();
string[] owners = {"IBM", "me"};
string[] classifierIDs = {"default", "<classifier-id>"};
visualRecognition.Classify(OnClassify, <imagePath>, owners, classifierIDs, 0.5f);