How can I run this example on my PC? I don't have an NVIDIA graphics card, so I cannot use CUDA in MATLAB.
I need to do it in MATLAB because half of my code is written in MATLAB and all variables are in MATLAB format.
My PC has an ATI Radeon HD 4530 graphics card.
I read this page, but it is still unclear to me which option is suitable.
Update 1: I want to train a deep neural network for image classification, a task similar to this example.
Update 2: When I run the code mentioned in Update 1, it gives me the following error:
There is a problem with the CUDA driver or with this GPU device. Be sure that you have a supported GPU and that the
latest driver is installed.
Error in nnet.internal.cnn.SeriesNetwork/activations (line 48)
output = gpuArray(data);
Error in SeriesNetwork/activations (line 269)
YChannelFormat = predictNetwork.activations(X, layerID);
Error in DeepLearningImageClassificationExample (line 262)
trainingFeatures = activations(convnet, trainingSet, featureLayer, ...
Caused by:
The CUDA driver could not be loaded. The library name used was 'nvcuda.dll'. The error was:
The specified module could not be found.
Yes, you can. You will have to create DLLs and use OpenCL. Look into S-Functions and MEX, and check the documentation.
There are third-party tools that you may be able to use; I personally have never tried them.
One possible tool is MatConvNet, which works on both CPU and GPU.
MatConvNet is a MATLAB toolbox implementing Convolutional Neural Networks (CNNs) for computer vision applications. It is simple, efficient, and can run and learn state-of-the-art CNNs. Many pre-trained CNNs for image classification, segmentation, face recognition, and text detection are available.
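For orientation, here is a minimal CPU-only sketch along the lines of MatConvNet's quick-start guide; the model file 'imagenet-vgg-f.mat' and the preprocessing fields are assumptions based on the pre-trained models MatConvNet distributes, so adjust them to whatever model you actually download.
run matconvnet/matlab/vl_setupnn          % add MatConvNet to the MATLAB path
net = load('imagenet-vgg-f.mat') ;        % assumed pre-trained model file
im  = single(imread('peppers.png')) ;     % any test image shipped with MATLAB
im  = imresize(im, net.meta.normalization.imageSize(1:2)) ;
im  = im - net.meta.normalization.averageImage ;   % subtract the stored mean
res = vl_simplenn(net, im) ;              % forward pass, runs on the CPU here
scores = squeeze(gather(res(end).x)) ;    % class scores from the last layer
[bestScore, best] = max(scores) ;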
Another option: Caffe in general, and the OpenMP variant of Caffe in particular, supports MATLAB and works on both CPU and GPU.
I have a .h5 file that I want to import into MATLAB using the Keras/TensorFlow importer, like this:
layers = importKerasLayers('myModel.h5');
But I get the following error:
Option to import Keras networks containing LSTM layers is not yet
supported.
I've tried this in R2018a, and apparently all layers related to LSTM are available in this version once the importer add-on is downloaded, but I keep getting the error. In this link, you can see the toolbox has support for LSTM layers, so I'm not sure what is causing the error.
Is there any workaround to solve this? What could be causing the error?
Your link is for the R2018b documentation. This is the R2018a documentation, and it shows no support for LSTM layers, so you probably need to switch versions and try again.
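As far as I know there is no real workaround in R2018a; at best you can fail fast with an explicit release check before the import. A hedged sketch (R2018b corresponds to MATLAB version 9.5):
% Abort early if the release is older than R2018b (MATLAB 9.5),
% since LSTM import support only appears in the R2018b documentation.
if verLessThan('matlab', '9.5')
    error('importKerasLayers LSTM support appears to require R2018b or newer.');
end
layers = importKerasLayers('myModel.h5');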
I am trying to get into deep learning using the Neural Network Toolbox in MATLAB. A good first step seems to be training an autoencoder. In that respect, it would be good to see whether I am getting the most out of my GPU.
With that in mind, when I run
tic
autoenc1 = trainAutoencoder(allSets,5,...
'L2WeightRegularization',0.001,...
'SparsityRegularization',1,...
'SparsityProportion',0.2,...
'DecoderTransferFunction','logsig',...
'useGPU',true)
toc
I get "Elapsed time is 19.680823 seconds.".
However, not using the gpu (setting 'useGPU' to false) it only takes 8.272708 seconds.
I am puzzled by this, since I assumed that using the GPU for neural networks would speed things up. Does anyone know of a way to check whether MATLAB and CUDA are interfacing properly, or to see how MATLAB is actually using the resources?
I have CUDA 8.1 installed and am using a GeForce GTX 960M (compute capability 5.0). The MATLAB version is R2016b.
EDIT: as has been pointed out, there is no CUDA 8.1 yet. What I actually have is CUDA 8.0 and cuDNN 5.1.
As pointed out in the comments, performing computations on the GPU is not necessarily faster. Instead, the impact on performance depends on the additional overhead of data conversion and transfer.
Usually, the overhead can be influenced via the batch size, but the trainAutoencoder function does not provide that option.
For general measurement and improvement of GPU performance in MATLAB, see this link.
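As a hedged sanity check (the matrix size is arbitrary), you can time a transfer-free operation on both devices with timeit/gputimeit to confirm that the GPU itself is set up correctly and beats the CPU when no per-call data transfer is involved:
% Time a large single-precision matrix multiply on CPU vs. GPU.
% gputimeit waits for the GPU to finish, so the comparison is fair.
A = rand(4000, 'single');
G = gpuArray(A);                   % one-off transfer to the device
tCPU = timeit(@() A * A);
tGPU = gputimeit(@() G * G);
fprintf('CPU: %.4f s   GPU: %.4f s\n', tCPU, tGPU);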
I would like to use cuSOLVER for the eigenvalue decomposition of complex matrices in MATLAB.
I am using the MATLAB CUDA kernel interface, and it seems that it is not possible to interface cuSOLVER with MATLAB that way, because the cuSOLVER example contains host code as well as device code (as mentioned here: http://docs.nvidia.com/cuda/cusolver/#syevd-example1), while the MATLAB CUDA kernel interface works only with kernel functions.
Please comment.
Is there any other way to compute the eigenvalue decomposition of a large number of complex matrices in parallel on the GPU from within MATLAB?
You almost certainly need to use the MEX interface. This allows you to take in gpuArray data, and call kernels and other CUDA library functions.
See the doc: http://uk.mathworks.com/help/distcomp/run-mex-functions-containing-cuda-code.html for more.
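A hedged sketch of the MATLAB side, assuming you write a CUDA MEX gateway around the relevant cuSOLVER routine; the file name batched_zheevd_mex.cu and its output signature are hypothetical, chosen only for illustration:
% Build the CUDA MEX file and link it against cuSOLVER.
mexcuda batched_zheevd_mex.cu -lcusolver

% A stack of complex matrices kept on the device as a gpuArray.
A = gpuArray(complex(rand(64,64,1000), rand(64,64,1000)));

% The MEX function receives the gpuArray directly and returns the
% eigenvectors/eigenvalues without copying the data back to the host.
[V, D] = batched_zheevd_mex(A);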
I'm receiving an out-of-memory error when training on the GPU with MATLAB's Neural Network Toolbox, and it appears that subdivision does not help.
I have tried:
net2 = train(net1,x,t,'reduction',N);
using various reduction values; however, I am unsure whether that option has any effect on the GPU side. It may not. Would using the GPU matrix setup directly, as the MATLAB documentation describes, be the way to go, or are there other options?
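For reference, a minimal sketch of that 'GPU matrix setup' route, following the pattern I recall from the parallel/GPU computing documentation and reusing net1, x and t from the snippet above:
% Configure the network for the data, then move the data to the GPU
% in the padded layout that train expects for gpuArray inputs.
net2 = configure(net1, x, t);
xg = nndata2gpu(x);
tg = nndata2gpu(t);

% Train and evaluate directly on the device-side data.
net2 = train(net2, xg, tg, 'showResources', 'yes');
yg = net2(xg);
y  = gpu2nndata(yg);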
I tried GPUmat, but the Neural Network Toolbox from MathWorks doesn't support it. Otherwise I would have to modify the NN toolbox myself, which is too hard for me. Any suggestions?
I don't know whether this will accelerate the Neural Network Toolbox in particular, but MathWorks now offers CUDA GPU support via the Parallel Computing Toolbox:
http://www.mathworks.com/discovery/matlab-gpu.html?s_cid=HP_MI_tech_gpu
MATLAB provides its own toolbox support for training neural networks on the GPU; see here (a minimal sketch follows at the end of this answer).
As its author, I also advise using my toolbox ConvNet, which uses kernels from Alex Krizhevsky's library cuda-convnet2. It also has pure CPU and MATLAB versions that work identically. There is also another toolbox for MATLAB, called MatConvNet, but I have not tried it.
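A minimal sketch of the built-in route mentioned at the start of this answer; the network size and the x/t training data are placeholders:
% Train a small feedforward network on the GPU with the built-in option.
% 'showResources' prints which device the calculation actually ran on.
net = feedforwardnet(20);
net = train(net, x, t, 'useGPU', 'yes', 'showResources', 'yes');
y = net(x);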