I have been building a hybrid app using Ionic/Angular with the Capacitor plugin.
I code in JavaScript.
I wanted to add my neural network classifier to it.
It requires a 500x500 pixel image.
On a computer with a gen 2 NVIDIA GPU I get around 10 fps.
I was hoping to get 2 or 3 fps on a recent mobile device, but I wasn't able to use the phone's computing power with TensorFlow.js.
How can I use the GPU for predictions in TensorFlow.js?
The documentation says that TensorFlow.js should automatically select the GPU, so I shouldn't have to add anything else to my code.
Also:
"TensorFlow.js executes operations on the GPU by running WebGL shader programs."
Is it not possible for Capacitor + TensorFlow.js to use the full power of the GPU?
Thank you very much for your help.
Related
I am searching for a simulator for my robot-learning research.
In the learning process, I need to change parameters of both the environment (friction coefficients, terrain height in the world) and the robot itself (mass, inertia).
How can simulators like Gazebo and Webots do this?
(Another problem: besides a physics engine, I also need realistic visuals for computer-vision-aided algorithms.
Is there any simulator that provides both?)
Webots allows you, from a supervisor program, to easily change any parameter of a simulation (including friction coefficients) while it is running. Moreover, it has a VR interface. I don't know about Gazebo.
I'd like to know whether it is currently possible to conduct network scans in Ionic on Android and iOS. Effectively, what I'd like to know is whether it's possible to scan the 5.2 GHz and 2.4 GHz networks and return each BSSID, its signal strength, and the channel it is using, coded under the Ionic framework. I'm currently building a network planning tool, and this is something I would like to include. I had heard that at one point iOS was not allowing these tools onto the App Store, but I've also heard that they do allow them now.
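For reference, the kind of result I'm after, sketched on the assumption that a Cordova/Capacitor Wi-Fi plugin such as WifiWizard2 exposes scan results with BSSID, level and frequency on Android (as far as I know, iOS does not expose Wi-Fi scan results to ordinary apps):

```javascript
// Rough sketch, Android only: assumes the WifiWizard2 Cordova plugin is
// installed and that WifiWizard2.scan() resolves to objects carrying
// BSSID, level (dBm) and frequency (MHz).
async function scanNetworks() {
  const results = await WifiWizard2.scan();
  return results.map((net) => ({
    bssid: net.BSSID,
    signalDbm: net.level,
    band: net.frequency > 4900 ? '5 GHz' : '2.4 GHz',
    // Standard mapping from centre frequency to channel number
    // (e.g. 2412 MHz -> channel 1, 5180 MHz -> channel 36).
    channel: net.frequency > 4900
      ? (net.frequency - 5000) / 5
      : (net.frequency - 2407) / 5,
  }));
}
```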
We are trying to develop a low-cost ultrasound device that can be used by inexperienced operators for health care in developing countries. We have created a low-profile optical tracking system that connects to the ultrasound probe. It outputs positional data from both the binocular camera and an on-board 9-axis IMU. The ultrasound pictures are collected on an iPhone at 60 frames per second and are time-stamped to the millisecond based on the iPhone system time. The optical tracker collects positional data onto a Windows 10 laptop. We need to synchronize the system times of the two devices (iPhone, laptop) exactly, to at least 1/10 s and preferably to the millisecond.
Is there a way to access the precise system time on the iPhone and synchronize this with the laptop?
Full disclosure: I am an obstetrician and not an engineer. But I’m not satisfied with the story I’m getting from the developers about this. It must be possible.
We've tried pointing the laptop to the same internet clock as the iPhone, but the sync is not good enough. Maybe because of Wi-Fi latency?
I would like to run deep learning functionality with MATLAB, and my graphics card needs to have compute capability 3.0 or higher. How do I find out whether it is supported? I checked my PC, and it says Intel HD graphics.
Does my PC support this functionality?
If you're running MATLAB, you can type the command gpuDevice to return information about your graphics card, and it will tell you whether it's supported by MATLAB (i.e. compute capability 3.0 or higher). To run this you'll need to make sure you have a CUDA driver for your card installed (but if you run the command and you don't have the driver installed, MATLAB will give you an error message pointing you to the website where you can install the driver).
If it's a supported card, it should be capable of running MATLAB's deep learning functionality. Bear in mind that this functionality requires not only MATLAB but also the Parallel Computing Toolbox and the Neural Network Toolbox.
I am currently working on a deep learning problem and am trying to use a convolutional neural network in MATLAB. However, the documentation says we need an NVIDIA graphics card for GPU computing.
My laptop has an Intel HD Graphics 2600 card for graphics processing. So can someone advise what other options we have in this case to run the deep learning and convnet algorithms?
Can I run those algorithms without GPU computing, and what will be the effect (in terms of time difference)?
You are not going to be able to achieve much with an integrated Intel graphics card. First, most deep learning frameworks use CUDA to implement GPU computations, and CUDA is supported only by NVIDIA GPUs. There have been several attempts to extend the standard deep learning frameworks to OpenCL: notably, Theano has an incomplete OpenCL backend, and Caffe has been ported to OpenCL by the AMD research lab. However, these are either incomplete at this point or not as actively maintained.
The other issue is performance. Depending on your application, you might require a much better GPU than what your laptop can provide. It is not uncommon to use multi-GPU machines equipped with NVIDIA Titans to train networks for days or even weeks.
My recommendation is to either buy a dedicated machine for deep learning computations (a single-GPU machine with the just-released NVIDIA GTX 1080 can be purchased for the price of a standard laptop) or rent GPU instances on Amazon EC2.
You can use Google's Tensor Processing Units with Google Colab, where you can get a GPU for "free"; they have a pretty cool cloud GPU technology.
You can start working with your Google account in a Jupyter notebook: https://colab.research.google.com/notebooks/intro.ipynb
Kaggle (a Google-owned data science competition site) also has the option to create Jupyter notebooks with a GPU, but only in limited cases:
notebooks: https://www.kaggle.com/kernels
Documentation for it: https://www.kaggle.com/docs/notebooks