I am trying to incorporate OpenCV libraries in Arduino sketches using Eclipse. As a start, I keep the two projects separate: the OpenCV code and a simple Arduino sketch. They both compile and run beautifully as separate projects after linking the OpenCV libraries and the Arduino libraries, respectively. So then I try adding the OpenCV code into the Arduino sketch project to get them to run together. Even after doing the same linking as in the OpenCV project, there are compile errors such as type '___' could not be resolved. The header inclusions like
#include <iostream>
#include <cv.h>
#include <highgui.h>
do not seem to produce errors themselves. In the console I get compilation errors when building, such as /usr/include/c++/4.8/iostream:38:28: fatal error: bits/c++config.h: No such file or directory. I didn't have this error with the standalone OpenCV project. So maybe the AVR C++ compiler is missing something the native C++ compiler has.
I don't understand what I am doing wrong or what I am forgetting.
Any help would be appreciated. Thanks!
It looks like you are trying to run OpenCV on the Arduino itself. Assuming you want to run this code on an Arduino Uno, you will never be able to run OpenCV even if you successfully compile the project. The Arduino Uno has only around 2 KB of RAM (think about how big the image you want to process is) and does not have enough flash memory for the OpenCV code.
You will have to look for another approach, such as running the OpenCV code as a separate project on your computer and communicating with the Arduino Uno through the serial port. One possible setup along those lines is sketched below.
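As a rough sketch only (assuming a Linux machine, the default camera, an Arduino Uno on /dev/ttyACM0, and a one-byte "mean brightness" value at 9600 baud as the protocol, none of which come from the question), the PC side could look like this:

```cpp
// pc_side.cpp -- host-side sketch (not an Arduino sketch): grab frames with
// OpenCV on the computer and send a one-byte result to the Arduino over the
// serial port. The port name "/dev/ttyACM0", 9600 baud, and the "mean
// brightness" payload are all assumptions for the example, not requirements.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <opencv2/opencv.hpp>

int main() {
    // Open and configure the serial port the Arduino enumerates as.
    int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);
    if (fd < 0) return 1;
    termios tty{};
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B9600);
    cfsetospeed(&tty, B9600);
    tty.c_cflag |= (CLOCAL | CREAD);   // ignore modem lines, enable receiver
    tty.c_lflag = 0;                   // no canonical mode or echo
    tty.c_oflag = 0;                   // no output post-processing
    tcsetattr(fd, TCSANOW, &tty);

    cv::VideoCapture cap(0);           // default camera
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        // Boil the image down to a single byte so it easily fits in the Uno's 2 KB of RAM.
        unsigned char brightness = static_cast<unsigned char>(cv::mean(gray)[0]);
        write(fd, &brightness, 1);
    }
    close(fd);
    return 0;
}
```

On the Arduino side, the matching sketch would then just call Serial.begin(9600) and read the incoming bytes with Serial.read() in loop().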
I'm new to both micro:bit and MicroPython (and Python in general), but I want to have it all running in VS Code. I grabbed this extension, which was really smooth to work with.
My problem now is that I want to leverage external modules, for things like the NeoPixels and also the bit:bot stuff, but I don't know how to actually get that working. The NeoPixel tutorial is straightforward, but there is no mention of how to add the module.
I tried adding them with pip, but that won't make them end up on the device. I've also tried this extension, hoping it would do some more magic in getting it onto the device.
Is this doable? Or would I have to revert to the online editors?
The micro:bit is a very constrained environment and will not run standard Python, only MicroPython. MicroPython was designed to work under the constrained conditions of a microcontroller; as a result it does not come with the full Python standard library, only a small subset of it.
For MicroPython to run on the micro:bit, the board needs the MicroPython hex file plus any Python code that you have written, with main.py being the entry point.
The VS Code extensions you linked to use uFlash to copy the hex file and any Python files you have written from your machine to the micro:bit.
To use the neopixel module it should be as straightforward as import neopixel, as it is part of the standard BBC micro:bit MicroPython.
For BitBot, it only uses the standard micro:bit MicroPython library, so I'm not sure what you are looking to import.
You can create a module by putting the code in a .py file and referencing it in your main.py file. You do this with an import statement that imports the file, or specific parts of it.
MicroPython does have the concept of upip, but I am not aware of it being available on the micro:bit.
I'm using macOS 10.15.7 with Xcode 12.4, and I want to use the Armadillo library in my iOS Swift project. So first I installed Armadillo through Homebrew.
Armadillo was installed in path /usr/local/Cellar/armadillo/
I found header files in path /usr/local/Cellar/armadillo/10.5.1/include and library files in path /usr/local/Cellar/armadillo/10.5.1/lib
In the Xcode project's Build Settings, I added the above paths to the Header Search Paths and Library Search Paths.
I'm getting the three errors below.
/usr/local/Cellar/armadillo/10.5.1/include/armadillo_bits/compiler_check.hpp:50:4: error: "*** C++11 compiler required; enable C++11 mode in your compiler, or use an earlier version of Armadillo"
/usr/local/Cellar/armadillo/10.5.1/include/armadillo:23:10: error: 'cstdlib' file not found #include <cstdlib>
<unknown>:0: error: failed to emit precompiled header
Not sure how to proceed. Any suggestions?
You will not be able to do this easily. Swift does not yet support C++ interop, but this is a long-term goal; you can read about it in the C++ Interop Manifesto in the Swift git repo. The best you will be able to do right now is write an extern "C" wrapper in C++ around any C++ functions you want to call, and then import your wrapper into Swift. Since you're using an Xcode project, I would recommend trying something like this; a minimal sketch of what such a wrapper can look like is shown below.
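For illustration only, here is one possible shape for such a wrapper, assuming you want to expose Armadillo's dense solver; the file name, the function name arma_solve, and its signature are all my own inventions, not anything prescribed by Armadillo or Swift:

```cpp
// armadillo_wrapper.cpp -- compiled as C++ inside the Xcode target.
// The extern "C" block exposes a plain C ABI that Swift can import through a
// bridging header. Everything here (names, the solve() example) is a
// hypothetical sketch of the pattern, not a fixed API.
#include <armadillo>
#include <cstring>

extern "C" {

// Solve the n-by-n linear system A * x = b.
// A is passed in column-major order; the result is written into x.
// Returns 0 on success, non-zero on failure.
int arma_solve(const double *A, const double *b, double *x, int n) {
    arma::mat Amat(const_cast<double *>(A), n, n, /*copy_aux_mem=*/true);
    arma::vec bvec(const_cast<double *>(b), n, /*copy_aux_mem=*/true);
    arma::vec xvec;
    if (!arma::solve(xvec, Amat, bvec)) {
        return 1;
    }
    std::memcpy(x, xvec.memptr(), n * sizeof(double));
    return 0;
}

} // extern "C"
```

The bridging header would then declare int arma_solve(const double *A, const double *b, double *x, int n); so that Swift sees it as an ordinary C function.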
Alternatively, depending on what you need Armadillo for, you might be able to get by without it. If you just need to do linear algebra, Apple includes LAPACK (see here) and BLAS (see here) inside Accelerate, which is available on all Apple platforms without installing anything. This might even be familiar to you, because Armadillo does its matrix decompositions through LAPACK. There's also Quadrature (see here) if you are looking for numerical integration. I've had good experiences with each of these.
I am trying to get the micro-coap library (https://github.com/1248/microcoap) to work on my Arduino. When I try to compile it in the Arduino IDE, it reports that the <sys/socket.h> dependency in main-posix.c cannot be found.
Searches for the problem were not helpful, except for some general C++ answers hinting that there is no sys/socket.h on Windows. But this should not have anything to do with Arduino, right?
I looked at the Ethernet library for Arduino and there is a socket.h, but it is not in a sys directory.
Hope you can help
It seems that the main-posix.c source file is meant to be compiled for a UNIX/Linux based operating system. If you want to use Windows to compile main-posix.c, you can use projects like Cygwin. The Arduino toolchain does not provide the <sys/socket.h> header needed to compile main-posix.c. Instead, open microcoap.ino in the Arduino IDE and compile and flash it to the hardware.
Someone suggested simply taking the file out of the Arduino path / deleting it, and that worked. As Stefan posted in his answer, it is used to build the library on UNIX/Linux and has no relevance for Arduino.
I am having a problem with the compilation of my Java file.
I want to use the libusb-1.0.9 library, but the problem is that the compiler cannot find the symbols, like LibUsbException, LibUsb.bulkTransfer, etc. The code is written in the nano text editor, and I have no idea how to make this library available to my code.
Hey there,
I need to get started with CUDA in MATLAB. As I need additional functions beyond those provided by MATLAB, I need to write my own C++ code; e.g. I want to run my program on 1..N GPU processors and compare the results to calculate the speedup, which is not supported by MATLAB itself (as MATLAB always optimizes itself to use all processors).
Now I wonder how best to get started. I have already read a lot of papers, but I still wonder, for example, what these files are all about:
.cu
.cubin
.ptx
.mex
So which way do I need to go? Writing my code in a .cu file and then compiling it (and with which tool)?
My computer is:
Q9550 with GTX460,
Win7 x64,
Matlab R2010b x64,
Visual C++ 2008 Express (free, i.e. the 32-bit version),
Cuda Toolkit 3.2 (64bit),
Latest Nvidia Driver and GPU Programming SDK 3.2.16_win_64
How do I get on? When I try to open one of the examples out of the GPU Programming SDK, e.g. the file vectorAdd_vc90.vcproj out of C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 3.2\C\src\vectorAdd
I get
"The following XML parsing-error occured:
File: C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 3.2\C\src\vectorAdd\vectorAdd_vc90.vcproj
Row: 22
Column: 4
Error message:
The user build file "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\VCProjectDefaults\NvCudaRuntimeApi.rules" wasn't found or couldn't be loaded
The file "C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 3.2\C\src\vectorAdd\vectorAdd_vc90.vcproj" couldn't be loaded"
When I just open vectorAdd.cu, I don't find any way to compile it so that I can run it from MATLAB. Perhaps it would also be possible to work completely without Visual Studio, so that I write my code in Notepad++, for example, and compile it myself?
Thanks a lot in advance guys!
If you have access to the Parallel Computing Toolbox, you can use the GPU directly through gpuArray data. You can also integrate your own hand-written CUDA code fairly easily using the parallel.gpu.CUDAKernel object, which loads a kernel that you have compiled from a .cu file to a .ptx file with nvcc; a small sketch of that route is shown below.
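As an illustration only (the kernel name, signature, and file name are my own, not from MATLAB's documentation), the .cu side can be as small as this:

```cpp
// add.cu -- a hypothetical element-wise add kernel, shown only to give an
// idea of what goes into a .cu file for use with parallel.gpu.CUDAKernel.
// Compile it to PTX with: nvcc -ptx add.cu
__global__ void addVectors(double *c, const double *a, const double *b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}
```

MATLAB would then load it with something along the lines of k = parallel.gpu.CUDAKernel('add.ptx', 'add.cu'); set k.ThreadBlockSize and k.GridSize, and run it with feval on gpuArray inputs.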
If the Parallel Computing Toolbox isn't available, you can still use the mexFunction capability to use the GPU:
http://www.mathworks.com.au/help/distcomp/create-and-run-mex-files-containing-cuda-code.html
I don't think this is available for versions earlier than R2013a. In that case, you can write the mexFunction entry point yourself and include the CUDA calls to pass the memory to and from the device; a sketch of that pattern is shown after the link below.
http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/docs/online/
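For illustration, and with every name made up for the example (the file scale_mex.cu, the kernel scaleKernel, and the scaling operation itself), such an entry point could look roughly like this:

```cpp
// scale_mex.cu -- hypothetical MEX entry point that copies an input array to
// the GPU, runs a kernel on it, and copies the result back to MATLAB memory.
// This is a sketch under the assumption that your MATLAB/nvcc toolchain is
// already set up to build .cu MEX files; it is not a drop-in solution.
#include "mex.h"
#include <cuda_runtime.h>

// Multiply every element of 'in' by 'factor'.
__global__ void scaleKernel(double *out, const double *in, double factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = factor * in[i];
}

// Usage from MATLAB (hypothetical): y = scale_mex(x, 2.0);
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    // Expect one double input array and one scalar factor.
    int n = static_cast<int>(mxGetNumberOfElements(prhs[0]));
    const double *hostIn = mxGetPr(prhs[0]);
    double factor = mxGetScalar(prhs[1]);

    // Let MATLAB own the output array on the host side.
    plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);
    double *hostOut = mxGetPr(plhs[0]);

    // Pass the memory to and from the device around the kernel launch.
    double *devIn = nullptr, *devOut = nullptr;
    cudaMalloc(&devIn, n * sizeof(double));
    cudaMalloc(&devOut, n * sizeof(double));
    cudaMemcpy(devIn, hostIn, n * sizeof(double), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scaleKernel<<<blocks, threads>>>(devOut, devIn, factor, n);

    cudaMemcpy(hostOut, devOut, n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(devIn);
    cudaFree(devOut);
}
```

You would build it with the nvcc-based MEX setup appropriate to your MATLAB release and then call it from MATLAB like any other MEX file.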