CMake Error at HW1_generated_student_func.cu [duplicate]

I'm currently trying to compile Darknet with the latest CUDA toolkit, version 11.1. My GPU is a GeForce 940M, which has compute capability 5.0. However, while rebuilding Darknet with the new toolkit, the build failed with:
nvcc fatal : Unsupported GPU architecture 'compute_30'
compute_30 targets compute capability 3.0, so how can the build fail when my GPU supports compute capability 5.0?
Is it possible that the build detected my Intel graphics card instead of my NVIDIA GPU? If so, can I change which device it detects?

Support for compute_30 was removed in CUDA toolkit versions after 10.2. So if you are using nvcc, make sure to pass this flag so that Darknet's build system targets the correct architecture:
-gencode=arch=compute_50,code=sm_50
You may also need this flag to avoid a warning about deprecated architectures:
-Wno-deprecated-gpu-targets
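Putting the two flags together, a minimal sketch of a full nvcc invocation (the `.cu` file name is a placeholder for illustration, not an actual Darknet source file):

```python
# Assemble an nvcc command line targeting sm_50 (Maxwell, e.g. a
# GeForce 940M). The source/object file names are hypothetical.
nvcc_cmd = [
    "nvcc",
    "-gencode=arch=compute_50,code=sm_50",  # target compute capability 5.0
    "-Wno-deprecated-gpu-targets",          # silence deprecation warnings
    "-c", "kernels.cu",
    "-o", "kernels.o",
]
print(" ".join(nvcc_cmd))
```

In Darknet's case the same flags go into the `ARCH=` variable of the Makefile rather than onto a hand-typed command line.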

I added the following:
# Read the Makefile, replace the ARCH lines, and write the file back.
with open('Makefile', 'r') as makefiletemp:
    list_of_lines = makefiletemp.readlines()
list_of_lines[15] = list_of_lines[14]
list_of_lines[16] = "ARCH= -gencode arch=compute_35,code=sm_35 \\\n"
with open('Makefile', 'w') as makefiletemp:
    makefiletemp.writelines(list_of_lines)
right before the
#Compile Darknet
!make
command. That seemed to work!

Related

Failed to find 'Power Electronics/Full-Bridge Converter' in library 'powerlib'?

I'm trying to run a simulation (built in MATLAB 2020a, I guess), but running it gives the following error:
Failed to find 'Power Electronics/Full-Bridge Converter' in library 'powerlib' referenced by 'Full-Bridge Converter'
But my installation already has these toolboxes installed:
SimElectronics Version 2.5 (R2014a)
SimMechanics Version 4.4 (R2014a)
SimPowerSystems Version 6.1 (R2014a)
Simscape Version 3.11 (R2014a)
I have been told these are necessary, but then what? The needed toolboxes are supposedly installed. What more is needed?
P.S.
A friend told me that SimPowerSystems and SimElectronics were merged into Simscape Power Systems, so I'm guessing the model is written with that. What is the lowest MATLAB version that comes with this toolbox? 2017? 2016?

Selecting OpenCL CPU platform in Compute.scala

I installed 3 different OpenCL runtimes on my laptop:
NVIDIA CUDA OpenCL on GPU
Intel OpenCL SDK on CPU
POCL (also on CPU)
As a result, here is a part of the result of clinfo:
$ clinfo
Number of platforms 3
Platform Name Portable Computing Language
Platform Vendor The pocl project
Platform Version OpenCL 1.2 pocl 1.1 None+Asserts, LLVM 6.0.0, SPIR, SLEEF, DISTRO, POCL_DEBUG
...
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 1.2 LINUX
...
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 9.0.282
Now I want to use the Compute.scala Scala library (based on the LWJGL library) to perform NDArray computations on GPU and CPU.
The device type is selected using the following import line at the beginning of the program:
import com.thoughtworks.compute.gpu._ // for GPU
// OR
import com.thoughtworks.compute.cpu._ // for CPU
After a quick test, my code runs fine with both device types. However, how am I supposed to know WHICH platform is running when choosing CPU? Is it the Intel OpenCL platform, or POCL?
By looking at the code of the library, I suspect it just picks the first CPU platform in the platform list:
the line with OpenCL.UseAllCpuDevices (https://github.com/ThoughtWorksInc/Compute.scala/blob/742d595e5eb56f4051edfc310f64e0f9dbab5ac1/cpu/src/main/scala/com/thoughtworks/compute/cpu.scala#L109)
the line with platformIds.collectFirst { ... (https://github.com/ThoughtWorksInc/Compute.scala/blob/742d595e5eb56f4051edfc310f64e0f9dbab5ac1/OpenCL/src/main/scala/com/thoughtworks/compute/OpenCL.scala#L363)
So my questions are:
How do I know which CPU platform is being used?
How can I select the platform I want to use in Compute.scala?
Maybe it is necessary to "disable" one of the platforms. If it's the case, how can I do that?
Thank you.
I found a quick-and-dirty way to switch between platforms: I simply rename the ICD file in /etc/OpenCL/vendors/ to "disable" it, so that only the platform I want is detected (can be checked with clinfo).
For example $ sudo mv /etc/OpenCL/vendors/pocl.icd /etc/OpenCL/vendors/pocl.icd_ to use intel64 (the other available CPU platform) instead of pocl, and vice-versa for using pocl instead of intel64.
If someone has a cleaner, programmatic way to solve this, they are welcome to share it!
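The rename trick can also be scripted. A minimal sketch, assuming the standard Linux ICD registry directory; the `toggle_icd` helper and the trailing-underscore convention are mine, not part of any OpenCL API:

```python
import os

VENDORS_DIR = "/etc/OpenCL/vendors"  # standard Linux ICD registry

def toggle_icd(name, enable, vendors_dir=VENDORS_DIR):
    """Enable or disable an OpenCL platform by renaming its .icd file.

    Disabled registrations get a trailing underscore so the ICD loader
    no longer picks them up; clinfo can be used to verify the result.
    """
    active = os.path.join(vendors_dir, name + ".icd")
    hidden = active + "_"
    if enable and os.path.exists(hidden):
        os.rename(hidden, active)
    elif not enable and os.path.exists(active):
        os.rename(active, hidden)
```

For example, `toggle_icd("pocl", enable=False)` (run with sufficient privileges) leaves only the Intel platform visible among the CPU runtimes, and `toggle_icd("pocl", enable=True)` restores it.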

OpenCL and MEX-FILES ERROR

I have been running into a problem constantly for a week and don't know what the issue could be. I hope you can help me. Thank you in advance.
Here is a brief description of the issue.
I am trying to use OpenCL together with MEX code. The MEX part of the code reads a MATLAB .dat file, and the OpenCL part runs the computation on an Intel CPU for now, though the future plan is to run it on a GPU. The Intel platform is detected by a plain OpenCL program without MEX (platforms: 1, devices: 1). But when using OpenCL from within MEX, OpenCL fails to recognize the Intel platform (platforms found: 0, error 1001).
Merely including the MEX part of the code makes the number of platforms drop to zero. I am using the MATLAB Compiler Runtime (MCR) to compile the MEX files.
make :
mpicxx -fPIC -L/opt/intel/opencl-1.2-4.4.0.117/lib64 -L/usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/bin/glnxa64 test.cpp -o test -lOpenCL -I/usr/include/CL -lmat -lmx -lmex -I/usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/extern/include -Wl,-rpath,/opt/intel/opencl-1.2-4.4.0.117/lib64,-rpath,/usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/bin/glnxa64
Is there some problem with the way I am linking the files?
Most probably what happens is the following. MATLAB ships with some version of tbb (Intel Threading Building Blocks) library and this version is older than the one required by Intel OpenCL CPU runtime. When you run standalone OpenCL application the correct version of TBB is loaded (the one that comes with OpenCL runtime), but when MATLAB starts it loads its own version of TBB.
So, one solution would be to use LD_PRELOAD before starting MATLAB, like so:
$ LD_PRELOAD=/opt/intel/opencl-1.2-4.4.0.117/lib64/libtbb.so matlab
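A quick way to verify this diagnosis is to check which libtbb.so a running process has actually mapped. A minimal Linux-only sketch that parses /proc (plain procfs reading; nothing here is a MATLAB or Intel API):

```python
def loaded_libraries(pid="self"):
    """Return the shared-library paths mapped into a process
    (Linux only), read from /proc/<pid>/maps."""
    libs = set()
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            parts = line.split()
            path = parts[-1] if parts else ""
            if ".so" in path:   # keep only shared-object mappings
                libs.add(path)
    return libs
```

After starting MATLAB, pass its PID and filter for "tbb", e.g. `[p for p in loaded_libraries(matlab_pid) if "tbb" in p]`; with the LD_PRELOAD in place you should see the Intel runtime's libtbb.so rather than MATLAB's bundled copy.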
Same Problem here:
Any C/C++ Program not running as a MEX module will see the Intel OpenCL Platform AND the Nvidia OpenCL Platform. But while inside a MEX module only the Nvidia OpenCL Platform will show up.
Using the same Intel OpenCL release ( 4.4.0.117 ) in combination with Matlab R2012b on openSUSE 2012.2. I assume you are using Matlab 2014a.
The same problem exists when using the unofficial MATLAB OpenCL Toolbox:
wiki
download
To replicate:
unzip, fix path for OpenCL headers and libOpenCL in make.m
run matlab and go to directory with make.m, then type:
make
opencl_info = opencl()
opencl_info.platforms
Edit:
Yuri is right: as soon as you preload the correct libtbb.so, it works.
You could also replace $MATLABROOT/bin/glnxa64/libtbb.so.2 with a symlink to the current libtbb.so from the Intel OpenCL runtime.

CUDA driver too old for Matlab GPU?

OK, this is something I am having problems with. I recently installed MATLAB R2013a on an x86_64 Linux system running RHEL 5, attached to a Tesla S2050. I have never used the GPU functionality in MATLAB itself (but have tried some of it using Jacket, which lets one program GPUs from MATLAB).
The Tesla is working fine with all the drivers and CUDA installed (Jacket v1.8.2 runs without complaints).
Driver Version: 270.41.34 (the last version from 2011 supporting the S2050)
CUDA: v5.0.35
nvcc -V : Cuda compilation tools, release 5.0, V0.2.1221
But the Matlab r2013a complains:
gpuDevice errors:
Caused by:
The CUDA driver was found, but it is too old. The CUDA driver on your system supports CUDA version 4. The required CUDA version is 5 or greater.
Now, I understand that MATLAB has a problem with the driver version. But I have installed the latest CUDA toolkit and the latest driver that NVIDIA offers for the Tesla S2050.
Is there a later driver version available for this Tesla? (I downloaded the latest driver, and when trying to install it, it simply complains that I don't have compatible NVIDIA hardware.)
How do I tell MATLAB which CUDA to use? (Where do I set PATH, CUDA_PATH, etc., if anywhere?)
Are there any other checks I need to perform to evaluate whether the attached Tesla is working?
Thanks in advance for the help.
You cannot use CUDA 5.0 with driver 270.41.34. CUDA 5 requires 304.54 or newer. This is not a MATLAB issue.
Newer drivers that support CUDA 5 will also support Tesla S2050.
For example, the recent 319.17 driver lists the Tesla S2050 on its supported-products tab. Or use the 304.54 driver that ships with CUDA 5.0.

Why I cannot make GPUvariable ? (Unable to allocate memory using cudaMalloc)

I am trying to use GPUmat (MATLAB) under Ubuntu.
On my system, GPUstart runs without any error message, as follows:
Starting GPU
- GPUmat version: 0.280
- Required CUDA version: 4.2
There is 1 device supporting CUDA
CUDA Driver Version: 4.20
CUDA Runtime Version: 3.0
Device 0: "GeForce GT 520"
CUDA Capability Major revision number: 2
CUDA Capability Minor revision number: 1
Total amount of global memory: 1073283072 bytes
- CUDA compute capability 2.1
...done
- Loading module EXAMPLES_CODEOPT
- Loading module EXAMPLES_NUMERICS
-> numerics21.cubin
- Loading module NUMERICS
-> numerics21.cubin
- Loading module RAND
But when I try to create a variable like a = GPUdouble(rand(2));
the following error message appears:
Error using mxNumericArrayToGPUtypePtr
Unable to allocate memory using cudaMalloc
Error in GPUdouble (line 52)
p.slot = mxNumericArrayToGPUtypePtr(p,
double(A));
I can't guess any reason why this is happening. Can you give me some advice on how to solve it? I really appreciate your help.
P.S. At first, GPUstart did not work due to library problems, so I moved all the CUDA 4.2 libraries into MATLAB's library folders, following the GPUmat developer's advice.
Thank you !
You have an incompatible version of the CUDA runtime installed. GPUstart tells you "Required CUDA version: 4.2", but your log shows the CUDA 3.0 runtime installed.
You will need to update your CUDA toolkit to a supported version.