Is it possible to run YOLOv5 on a Raspberry Pi 2 (32-bit) in real time? - raspberry-pi

The main issue I run into is that PyTorch is not compatible with the 32-bit Raspberry Pi 2.
I've seen some PyTorch builds for ARMv7, but I failed to compile/run them.

Related

Installing Python packages on a Raspberry Pi Pico

I'm trying to install pyalsaaudio on my Raspberry Pi Pico board.
I built the firmware for my board myself from the MicroPython repo, following the Raspberry Pi documentation on how to do it.
The MicroPython documentation has a section about installing packages with mip, but the mip module is missing:
>>> import mip
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: no module named 'mip'
Another way I tried to install this package was via the Thonny package manager, but again with no success.
error:
Failed to build pyalsaaudio
Error Command
'['C:\\Users\\<username>\\AppData\\Local\\pipkin\\cache\\workspaces\\fa71e45a1e41e660688341b77ee2813a\\Scripts\\python.exe', '-I', '-m', 'pip', '--no-color', '--disable-pip-version-check', '--trusted-host', '127.0.0.1', 'install', '--no-compile', '--use-pep517', '--upgrade-strategy', 'only-if-needed', 'pyalsaaudio', '--index-url', 'http://127.0.0.1:36628']' returned non-zero exit status 1.
Is there another way to install this package?
Or is there other package that will allow me to record audio from microphone?
Do you have a Pico or a Pico W? The Pico doesn't have a network interface of its own, so the standard firmware won't include mip. It looks as if mip was added to the Pico W port fairly recently, so you might need to check you have a recent build.
However, pyalsaaudio is a Python wrapper for the ALSA library which runs under the Linux operating system. The Pico boards (like most microcontroller boards used with MicroPython) don't have an operating system and can't have ALSA, so pyalsaaudio is no help here.
How is the audio source connected to your Pico? If it's on an analogue input (with suitable signal conditioning) you should be able to capture audio into memory with ADC.read_timed. If you need more help than that I would try asking in MicroPython's GitHub Discussions which has replaced the older forum.
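Before reaching for ADC.read_timed, it is worth estimating how much audio fits in memory at all. This is a back-of-envelope sketch with assumed figures (the ~264 KB SRAM size of the RP2040 is a known spec; the usable-RAM, sample-rate, and sample-size numbers are my assumptions, not from the answer above):

```python
def max_capture_seconds(ram_bytes, sample_rate_hz, bytes_per_sample):
    """How many seconds of audio fit in a given amount of RAM."""
    return ram_bytes / (sample_rate_hz * bytes_per_sample)

# Assumed figures: ~200 KB of the RP2040's 264 KB SRAM free for the buffer,
# an 8 kHz sample rate, and 16-bit (2-byte) samples.
seconds = max_capture_seconds(200 * 1024, 8000, 2)
print(round(seconds, 1))  # 12.8
```

So even a modest telephone-quality capture is limited to roughly a dozen seconds in RAM, which is why streaming the samples out (or lowering the rate) matters on a board with no operating system or filesystem-backed swap.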

Which version of the Arm GNU toolchain should I use on an RPi 4B?

I want to use a Raspberry Pi 4B to compile code for an STM32, so I downloaded the "AArch64 Linux hosted cross toolchains" build of arm-none-eabi from this website, then scp'd it to the RPi and unzipped it.
But I cannot execute it. The shell tells me "cannot execute binary file: Exec format error". Did I download the wrong version?
The architecture of the RPi is "armv7l" and the OS I'm using is Pi OS (Buster). I thought that counts as "AArch64 Linux hosted".
Thank you ~
armv7l is the ARMv7 architecture.
IIRC, the Raspberry Pi 4 hardware is ARMv8-capable, but 32-bit Raspberry Pi OS runs in AArch32 mode, which is the compatibility mode for ARMv7.
You can also check /proc/cpuinfo to be sure about it.
On an ARMv7 (AArch32) kernel/OS, ARMv8 applications cannot run, but on an ARMv8 (AArch64) kernel/OS, ARMv7 applications can run in AArch32 mode.
Refer to Arm's "Learn the architecture: AArch64 Exception Model" for more details.
pi@raspberrypi:~ $ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 108.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3
processor : 1
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 108.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3
The STM32 is either ARMv6-M, ARMv7-M, or one of the ARMv8-M profiles. The Pi itself is ARMv8. You either need to run the Pi in 32-bit AArch32 mode and use a native compiler, or use a cross compiler. AArch64 is the 64-bit state of ARMv8-A; AArch32 is its ARMv7-compatible mode.
So you will want an arm-none-eabi toolchain, which is what you claim to have, but you also claimed it is AArch64: if it were an AArch64 toolchain it would be named aarch64-none-eabi or some such.
Like any other computer, you can just get, or build from source (building the GNU toolchain from source is not hard), an Arm 32-bit toolchain that can build for the STM32.
Technically, any GCC from version 3.x to the present can build code that will run on an STM32.
If you plan to use a C library, then even though you can use the native toolchain on an ARMv7-based Linux on the Pi, that toolchain is arm-whatever-linux-gnueabi, not arm-whatever-eabi, and may not work for you.
Just get an arm-none-eabi based toolchain, use the proper command-line options, and be done with it.
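To avoid the Exec format error in the first place, the key is matching the toolchain's host (what the compiler binaries themselves run on, as reported by `uname -m`), not its target. This is a hypothetical helper of my own sketching that mapping; the variant names paraphrase Arm's download page, and the 32-bit advice follows the answer above:

```python
def pick_toolchain_host(uname_m):
    """Map `uname -m` output to the right Arm GNU toolchain host variant.

    The *target* triplet (arm-none-eabi for bare-metal STM32) is the same
    in every case; only the host the compiler binaries run on differs.
    """
    if uname_m == "x86_64":
        return "x86_64 Linux hosted"
    if uname_m == "aarch64":
        return "AArch64 Linux hosted"
    if uname_m in ("armv6l", "armv7l"):
        # 32-bit Pi OS: AArch64-hosted binaries will not execute here;
        # install the distro package (e.g. gcc-arm-none-eabi) instead.
        return "32-bit host: use the distro gcc-arm-none-eabi package"
    raise ValueError(f"unknown host architecture: {uname_m}")

print(pick_toolchain_host("armv7l"))  # the questioner's case
```

On the questioner's armv7l system this lands in the third branch, which matches the answer's advice to use a native 32-bit arm-none-eabi toolchain rather than Arm's AArch64-hosted download.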

Selecting OpenCL CPU platform in Compute.scala

I installed 3 different OpenCL runtimes on my laptop:
NVIDIA CUDA OpenCL on GPU
Intel OpenCL SDK on CPU
POCL (also on CPU)
As a result, here is a part of the result of clinfo:
$ clinfo
Number of platforms 3
Platform Name Portable Computing Language
Platform Vendor The pocl project
Platform Version OpenCL 1.2 pocl 1.1 None+Asserts, LLVM 6.0.0, SPIR, SLEEF, DISTRO, POCL_DEBUG
...
Platform Name Intel(R) OpenCL
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 1.2 LINUX
...
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 9.0.282
Now I want to use the Compute.scala Scala library (based on the LWJGL library) to perform NDArray computations on GPU and CPU.
The device type is selected using the following import line at the beginning of the program:
import com.thoughtworks.compute.gpu._ // for GPU
// OR
import com.thoughtworks.compute.cpu._ // for CPU
After a quick test, my code runs fine with both device types. However, how am I supposed to know WHICH platform is running when choosing CPU? Is it the Intel OpenCL platform, or POCL?
By looking at the code of the library, I suspect it just picks the first CPU platform in the platform list.
line with OpenCL.UseAllCpuDevices (https://github.com/ThoughtWorksInc/Compute.scala/blob/742d595e5eb56f4051edfc310f64e0f9dbab5ac1/cpu/src/main/scala/com/thoughtworks/compute/cpu.scala#L109)
line platformIds.collectFirst { ... (https://github.com/ThoughtWorksInc/Compute.scala/blob/742d595e5eb56f4051edfc310f64e0f9dbab5ac1/OpenCL/src/main/scala/com/thoughtworks/compute/OpenCL.scala#L363)
So my questions are:
How do I know which CPU platform is being used?
How can I select the platform I want to use in Compute.scala?
Maybe it is necessary to "disable" one of the platforms. If so, how can I do that?
Thank you.
I found a quick-and-dirty way to switch between platforms: I simply rename the ICD file in /etc/OpenCL/vendors/ to "disable" it, so that only the platform I want is detected (can be checked with clinfo).
For example $ sudo mv /etc/OpenCL/vendors/pocl.icd /etc/OpenCL/vendors/pocl.icd_ to use intel64 (the other available CPU platform) instead of pocl, and vice-versa for using pocl instead of intel64.
If someone has a cleaner, programmatic way to solve this, they are welcome to share it!
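The rename trick above can be wrapped in a small sketch. The function name is mine; the /etc/OpenCL/vendors path and the trailing-underscore suffix follow the answer. The ICD loader only scans files ending in .icd, so adding a suffix hides the platform:

```python
from pathlib import Path

def set_platform_enabled(icd_name, enabled, vendors=Path("/etc/OpenCL/vendors")):
    """Show or hide one OpenCL platform by renaming its ICD file.

    E.g. set_platform_enabled("pocl.icd", False) renames pocl.icd to
    pocl.icd_, so the ICD loader (and clinfo) no longer sees POCL.
    Needs write access to the vendors directory (root for the default path).
    """
    active = vendors / icd_name          # e.g. pocl.icd
    hidden = vendors / (icd_name + "_")  # e.g. pocl.icd_
    if enabled and hidden.exists():
        hidden.rename(active)
    elif not enabled and active.exists():
        active.rename(hidden)
```

If I recall correctly, the open-source ocl-icd loader also honors an OCL_ICD_VENDORS environment variable pointing at an alternative vendors directory, which would let you expose only the ICD files you want without root access; check your loader's documentation before relying on that.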

Build a shared library on a Raspberry using Lazarus

I'm trying to build a shared library to use on a Raspberry Pi (model 3B) using Lazarus. After some puzzling I found out that my original library didn't work properly, so I switched to a very simple library based on this example.
But even this simple library doesn't compile properly to be used. When I try to open the library in another lazarus project it gives me the error:
< libName >: cannot open shared object file: No such file or directory
After some research I found the following diagnostics I could run: the file and ldd commands on the compiled library.
$ file ./libname.so gives:
./libname.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, not stripped
$ldd ./libname.so
not a dynamic executable
I have searched for similar cases and found that others had this problem when they tried to use libraries compiled for a different platform/architecture. I have already set Project > Project Options > Compiler Options > Target platform to OS: linux and Target CPU family: arm.
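The architecture check that the file command performs boils down to reading two fields of the ELF header. A minimal sketch, assuming the standard ELF layout for little-endian files (EM_ARM is machine code 40; the function name and usage are mine):

```python
import struct

def elf_arch(header):
    """Return (bitness, e_machine) from the first 20 bytes of an ELF file.

    Byte 4 (EI_CLASS) is 1 for 32-bit, 2 for 64-bit. For little-endian
    files, the 16-bit value at offset 18 (e_machine) identifies the CPU:
    40 (EM_ARM) means 32-bit ARM, as expected for a Raspberry Pi library.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    bits = {1: 32, 2: 64}[header[4]]
    (e_machine,) = struct.unpack_from("<H", header, 18)
    return bits, e_machine

# A 32-bit ARM shared object, as in the question, should give (32, 40):
# with open("./libname.so", "rb") as f:
#     print(elf_arch(f.read(20)))
```

If this reports anything other than a 32-bit ARM binary, the library really was built for the wrong target; here it matches, which points the search away from architecture mismatch and toward the runtime issue solved below.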
The system I am using (using $uname -a):
Linux raspberrypi 4.4.13-V7+ #894 SMP Mon Jun 13 13:13:27 BST 2016 armv7l GNU/Linux
with distro: Raspbian GNU/Linux 8.0 (jessie)
Lazarus version: 1.2.4+dfsg2-1
FPC version: 2.6.4 arm-linux-gtk2
At this point I am not sure what is wrong and why this library isn't working. I can find very little information on this problem (on the Raspberry Pi platform using Lazarus). Any suggestions what I could try to make it work, compile it differently, or run some more diagnostics?
Found the solution. As Marco proposed above: add initc to the uses clause. It seems that if you build a library, it has to include initc, cmem or LCL in the uses section, because the library has to do some memory management for which it needs one of these units.
However this resulted in another error when trying to load the library:
"Undefined symbol: TC_SYSTEM_ISLIBRARY"
This seems to be a Raspberry Pi specific problem; compilers on other systems do not give this error. More information can be found HERE.
Eventually I solved this by installing the FPC 3.0.0 compiler on the Raspberry Pi and recompiling; now the library loads properly (but it still needs one of the above units).

CUDA driver too old for Matlab GPU?

OK, this is something I am having problems with. I recently installed MATLAB R2013a on an x86_64 Linux system running RHEL 5, attached to a Tesla S2050. I have never used the GPU functionality in MATLAB itself (but have tried some of it using Jacket, which lets one program GPUs from MATLAB).
The Tesla is working fine with all the drivers and CUDA installed (Jacket v1.8.2 is running fine without complaints).
** Driver Version: 270.41.34 (the last version from 2011, supporting S2050) **
CUDA: v5.0.35
nvcc -V : Cuda compilation tools, release 5.0, V0.2.1221
But MATLAB R2013a complains:
gpuDevice errors:
Caused by:
The CUDA driver was found, but it is too old. The CUDA driver on your system supports CUDA version 4. The required CUDA version is 5 or greater.
Now, I understand the error: MATLAB has a problem with the driver version. But I have installed the latest CUDA toolkit and the latest driver that NVIDIA offers for the Tesla S2050 that I have.
Is there a later driver version available for this Tesla? (I downloaded the latest driver and, when trying to install it, it simply complains that I don't have compatible NVIDIA hardware.)
How do I tell MATLAB to use the relevant CUDA? (Where do I set PATH, CUDA_PATH, etc., if anywhere?)
Are there any other checks I need to perform to evaluate the working of the attached Tesla?
Thanks in advance for help.
You cannot use CUDA 5.0 with driver 270.41.34. CUDA 5 requires 304.54 or newer. This is not a MATLAB issue.
Newer drivers that support CUDA 5 will also support Tesla S2050.
For example, this recent 319.17 driver lists the Tesla S2050 on its supported-products tab. Or use the 304.54 driver that ships with CUDA 5.0.
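The version comparison behind this answer can be sketched numerically; the 304.54 minimum is the figure stated above, and the helper names are mine:

```python
def parse_driver_version(s):
    """'270.41.34' -> (270, 41, 34), so tuples compare component-wise."""
    return tuple(int(part) for part in s.split("."))

CUDA5_MIN_DRIVER = parse_driver_version("304.54")  # minimum from the answer

def supports_cuda5(driver_version):
    """True if the installed NVIDIA driver is new enough for CUDA 5.0."""
    return parse_driver_version(driver_version) >= CUDA5_MIN_DRIVER

print(supports_cuda5("270.41.34"))  # False: hence MATLAB's "too old" error
print(supports_cuda5("319.17"))     # True
```

Comparing version strings as tuples of integers avoids the classic lexicographic trap where "304.54" would sort before "41.0" as plain text.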