I'm starting to use Theano to build my own machine learning algorithms.
Will Theano's parallelism features be lost when Spark splits the job?
I am working on a project that uses an R-CNN (region-based convolutional neural network) detector for object detection. I created and trained the detector in MATLAB and tested it, and it works fine. However, when I came to the stage of deploying it to a Raspberry Pi, I found that MATLAB does not support deploying this detector to hardware. Many resources say it can be converted to ONNX format, and I have already converted the MATLAB file to an ONNX file. I am now looking for the steps to use this ONNX file and deploy it to the Raspberry Pi.
I am looking for your support in deploying the pre-trained detector.
Your ONNX file can be used with the ONNX Runtime inference engine; follow the ARM 32 Dockerfile build instructions to build it for the Raspberry Pi.
Related Stack Overflow answer:
https://stackoverflow.com/a/59725880/10897861
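Once ONNX Runtime is built and installed on the Pi, running inference from Python looks roughly like the sketch below. The file name detector.onnx, the dummy input shape, and the missing preprocessing are all placeholders you will need to adapt to your exported model.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model (path is a placeholder).
session = ort.InferenceSession("detector.onnx")
input_name = session.get_inputs()[0].name

# Dummy NCHW image tensor; replace with a real preprocessed frame
# whose shape matches what your exporter produced.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```

Inspecting the model with session.get_inputs() and session.get_outputs() (or a viewer such as Netron) is the easiest way to confirm the real input names and shapes.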
I have developed a system using signal processing techniques in MATLAB. I want to run this system on a Raspberry Pi.
In this link, they say the Octave, Scilab, and FreeMat tools can be used to replace the PC with a Raspberry Pi.
Can I use these tools to run signal processing algorithms?
MATLAB/Simulink cannot run M-code directly on a Raspberry Pi, but you can run Simulink models using the Raspberry Pi support package for Simulink. This includes MATLAB S-functions, which contain M-code.
The typical workflow is:
Create a Simulink model that implements the functionality. Try to generate code and, if possible, test it on your PC.
Put in the blocks from the support package to get access to the I/O of the Raspberry Pi. Change the target to the Raspberry Pi and build it again.
Download the binary to the Raspberry Pi and start it. The application now runs on the Raspberry Pi; the PC is no longer needed.
No, it is not possible. MATLAB only runs on Intel x86 architectures, and the Raspberry Pi uses an ARM processor. See here for which platforms MATLAB supports: http://www.mathworks.com/support/sysreq/current_release/.
However, you can use MATLAB to interface with the Raspberry Pi in order to get sensor and image data: http://www.mathworks.com/hardware-support/raspberry-pi-matlab.html
If you want to run signal processing algorithms, stick with Octave's signal package if you can - http://octave.sourceforge.net/signal/ - and yes, it is possible to run Octave on a Raspberry Pi: http://wiki.octave.org/Rasperry_Pi.
Alternatively, try installing NumPy and SciPy together with Python - http://wyolum.com/numpyscipymatplotlib-on-raspberry-pi/ - and use the signal package from that platform: http://docs.scipy.org/doc/scipy/reference/signal.html. NumPy has very similar syntax to MATLAB and it'll take you no time at all to learn it: http://cs231n.github.io/python-numpy-tutorial/
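As a rough illustration of the SciPy route, a simple low-pass filtering step might look like the sketch below; the sample rate, cutoff, and test signal are made up purely to show the API.

```python
import numpy as np
from scipy import signal

fs = 1000.0                               # assumed sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)  # noisy 5 Hz tone

# 4th-order Butterworth low-pass filter with a 20 Hz cutoff
b, a = signal.butter(4, 20.0 / (fs / 2.0), btype="low")
y = signal.filtfilt(b, a, x)              # zero-phase filtering
print(y[:5])
```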
You have lots of alternatives... but unfortunately you can't use MATLAB. Besides which, MATLAB uses Java as its backbone, and running Java on a Raspberry Pi is very slow. Not only that, but MATLAB is several GB in size, and having this program occupy a good chunk of your SD card is very counterproductive.
Another option is to use MATLAB Coder or MATLAB Embedded Coder to generate C code from the MATLAB code. Note that only a subset of the MATLAB language supports code generation. That code can then be compiled and run natively on the Raspberry Pi.
With the R2018b release of MATLAB, you can deploy your MATLAB code on a Raspberry Pi as a standalone executable.
Refer to the Deploying MATLAB functions on Raspberry Pi documentation for more information.
What are the optimal performance tuning settings to put in my sts.ini file to ensure STS runs well on my Mac?
I am looking to optimize two machines. One is a MacBook Pro with 16 GB of RAM and a 6-core 2.6 GHz i7 processor, and the other has 8 GB of RAM and a dual-core 2.2 GHz processor.
I am looking to get faster overall speed from STS. The thing that really slows me down is the change event handler process. When it starts running, everything slows down.
There are quite a few one-off guides around for optimal performance tuning of the Spring Tool Suite. Some are written for a Windows platform and some for an OS X platform. Since STS runs on the JVM, I thought the optimal settings would work in either environment.
I haven't seen a well-done list of performance tuning options. It would be nice to see whether the configuration should change based on system properties such as RAM, processor speed, and number of cores.
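For context, the settings people usually tune are the JVM arguments that follow the -vmargs line in sts.ini (or STS.ini, depending on the release). The values below are only a sketch of that pattern, not verified recommendations; in particular, -Xmx should be scaled to the RAM you can spare, so it would be lower on the 8 GB machine than on the 16 GB one.

```
-vmargs
-Xms1g
-Xmx4g
-XX:+UseG1GC
-XX:ReservedCodeCacheSize=256m
```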
Has anyone had any success running MongoDB on the BeagleBone Black? Do I have to install a different flavor of Linux to get this to work, or can I use Angstrom?
MongoDB (as of 2.4) does not officially support ARM processors. You can watch/upvote SERVER-1811 in the issue tracker; however, I wouldn't expect this to get much traction until 64-bit server-class ARM processors are commonly available.
In general, a 32-bit low-power ARM processor with limited memory (512 MB of RAM on the BeagleBone Black) is not a great fit for a memory-mapped database server like MongoDB. Due to the use of memory-mapped files, 32-bit versions of MongoDB are also limited to about 2 GB of data and indexes.
There are some extremely old versions of MongoDB that have been hacked to work on ARM to some extent (e.g. MongoDB 2.1.1-pre, a very early development release of MongoDB 2.2). I wouldn't recommend this unless you're extremely desperate; you will likely spend far more time trying to get things working than writing productive code.
Better approaches would be to either:
use a database which is designed for lightweight environments (e.g. SQLite)
use your BeagleBone to run a MongoDB client application rather than a server (see the sketch after this list)
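For the second option, a minimal Python sketch of a client running on the BeagleBone and talking to a MongoDB server hosted on a more capable machine might look like this; the server address and the database/collection names are placeholders.

```python
from pymongo import MongoClient

# Connect to a MongoDB server running on another machine (address is assumed).
client = MongoClient("mongodb://my-server.local:27017/")
readings = client["sensors"]["temperature"]

# Insert a sample document and read it back.
readings.insert_one({"device": "beaglebone-1", "celsius": 21.5})
for doc in readings.find({"device": "beaglebone-1"}):
    print(doc)
```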
I have the MATLAB Parallel Computing Toolbox installed on two computers running MATLAB (a MacBook Pro i5 and a MacBook Pro i7). For a thesis project, we have to run shooting simulations, for which I need a lot of computing power. I know about the matlabpool option with parfor to use both cores on my local computer. Is there a way to connect the two MacBooks directly via an Ethernet cable or hub and configure a small local network so I can use four cores at the same time? How do I set this up?
You can only do this if you also acquire the MATLAB Distributed Computing Server software. On its own, the Parallel Computing Toolbox only provides parallel computation on a single multi-CPU/multi-core computer.
On MathWorks' File Exchange there is a package by Markus Buehren called Multicore - Parallel processing on multiple cores, which
... provides parallel processing on multiple cores on a single machine or on multiple machines that have access to a common directory.
To run this package, no MATLAB toolboxes are needed.
The package is discussed at http://tech.groups.yahoo.com/group/multicore_for_matlab/ (Markus provides user support there too, I guess).