I was having a problem getting one of my programs that uses SDL to compile, so to fix it I reinstalled SDL2 and SDL2_image following this link:
https://solarianprogrammer.com/2015/01/22/raspberry-pi-raspbian-getting-started-sdl-2/
I have used this link before and successfully created windows and renderers.
Now the program compiles and runs, but when initializing SDL I get the error:
SDL Initialization failed: No available video device
I am not sure which video system is being used, because the configure command disables Mir, Wayland, X11, and OpenGL. The tutorial says something about forcing OpenGL ES.
FOR SDL2:
I downloaded and unpacked the tar file into my home directory, then configured using this command:
../configure --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-x11 --disable-video-opengl
The output was:
SDL2 Configure Summary:
Building Shared Libraries
Building Static Libraries
Enabled modules : atomic audio video render events joystick haptic power filesystem threads timers file loadso cpuinfo assembly
Assembly Math :
Audio drivers : disk dummy oss alsa(dynamic) sndio
Video drivers : dummy opengl_es1 opengl_es2
Input drivers : linuxev linuxkd
Using libudev : YES
Using dbus : YES
I then used: make -j 4
then: sudo make install
FOR SDL2_image:
I configured with ../configure; there was no configure summary.
then: make -j 4
then: sudo make install
I just tried the test program that the tutorial link gives, and it executes and displays the image properly. Here is the code for initializing things:
int main(int argc, char** argv) {
    // Initialize SDL
    check_error_sdl(SDL_Init(SDL_INIT_VIDEO) != 0, "Unable to initialize SDL");

    // Create and initialize an 800x600 window
    SDL_Window* window = SDL_CreateWindow("Test SDL 2", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                                          800, 600, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);
    check_error_sdl(window == nullptr, "Unable to create window");

    // Create and initialize a hardware accelerated renderer that will be refreshed in sync with your monitor (at approx. 60 Hz)
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
    check_error_sdl(renderer == nullptr, "Unable to create a renderer");

    // Set the default renderer color to cornflower blue
    SDL_SetRenderDrawColor(renderer, 100, 149, 237, 255);

    // Initialize SDL_image
    int flags = IMG_INIT_JPG | IMG_INIT_PNG;
    int initted = IMG_Init(flags);
    check_error_sdl_img((initted & flags) != flags, "Unable to initialize SDL_image");
I copied that code over exactly into my program, and I no longer get the SDL initialization failure, but I get these errors:
Unable to initialize SDL_image Invalid renderer
Unable to create texture Invalid renderer
Unable to create texture Invalid renderer
Unable to create texture Invalid renderer
It is a 1-to-1 copy of the test file, so I'm not sure what could be going on. Any suggestions?
UPDATE:
After recompiling the test program, it no longer works either and gives the SDL initialization failed error. I compiled with this line:
g++ -std=c++0x -Wall -pedantic sdl2_test.cpp -o sdl2_test `sdl2-config --cflags --libs` -lSDL2_image
I reconfigured and reinstalled; instead of configuring with:
../configure --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-x11 --disable-video-opengl
I simply did:
../configure
Not sure if this will introduce other problems down the road, but for now the images display and keyboard input can be captured.
I am writing a file-serving HTTP server for the ESP32 using ESP-IDF and PlatformIO, but I just can't make the data upload to SPIFFS work. I am trying to send the HTML and favicon to flash so they can be served over HTTP.
The code of the server is taken from an example: https://github.com/espressif/esp-idf/tree/master/examples/protocols/http_server/file_serving. The noticeable differences are that the example project uses only ESP-IDF tools (without PlatformIO), and that in the example the data files are in the same directory as the source files, whereas in my project I have separate /src and /data directories.
SPIFFS is configured using a custom partition table.
I was following instructions from both the PlatformIO documentation (https://docs.platformio.org/en/latest/platforms/espressif32.html?utm_source=platformio&utm_medium=piohome) and from Espressif (https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/build-system.html#embedding-binary-data).
I have a custom partitions.csv file (identical to the example) and changed menuconfig to use it.
In platformio.ini I added:
board_build.partitions = partitions.csv
board_build.embed_txtfiles =
data/favicon.ico
data/upload_script.html
I also changed the project CMakeLists file to embed the data like this:
cmake_minimum_required(VERSION 3.16.0)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(HTTP-server)
target_add_binary_data(HTTP-server.elf "data/favicon.ico" TEXT)
target_add_binary_data(HTTP-server.elf "data/upload_script.html" TEXT)
/src/CMakeLists stayed unchanged:
FILE(GLOB_RECURSE app_sources ${CMAKE_SOURCE_DIR}/src/*.*)
idf_component_register(SRCS ${app_sources})
But even with all this configuration, when I try to use this data in file_server.c like this:
extern const unsigned char favicon_ico_start[] asm("_binary_favicon_ico_start");
extern const unsigned char favicon_ico_end[] asm("_binary_favicon_ico_end");
extern const unsigned char upload_script_start[] asm("_binary_upload_script_html_start");
extern const unsigned char upload_script_end[] asm("_binary_upload_script_html_end");
I get linker errors:
Linking .pio/build/nodemcu-32s/firmware.elf
/home/artur/.platformio/packages/toolchain-xtensa32/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: .pio/build/nodemcu-32s/src/file_server.o:(.literal.http_resp_dir_html+0x14): undefined reference to `_binary_upload_script_html_end'
/home/artur/.platformio/packages/toolchain-xtensa32/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: .pio/build/nodemcu-32s/src/file_server.o:(.literal.http_resp_dir_html+0x18): undefined reference to `_binary_upload_script_html_start'
/home/artur/.platformio/packages/toolchain-xtensa32/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: .pio/build/nodemcu-32s/src/file_server.o:(.literal.favicon_get_handler+0x0): undefined reference to `_binary_favicon_ico_end'
/home/artur/.platformio/packages/toolchain-xtensa32/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: .pio/build/nodemcu-32s/src/file_server.o:(.literal.favicon_get_handler+0x4): undefined reference to `_binary_favicon_ico_start'
collect2: error: ld returned 1 exit status
*** [.pio/build/nodemcu-32s/firmware.elf] Error 1
================================================================================= [FAILED] Took 65.20 seconds =================================================================================
I tried changing the extern declarations to:
extern const unsigned char favicon_ico_start[] asm("_binary_data_favicon_ico_start");
But it didn't change anything.
Additionally, when running the "Build Filesystem Image" task I get this error:
*** [.pio/build/nodemcu-32s/spiffs.bin] Implicit dependency `data/favicon' not found, needed by target `.pio/build/nodemcu-32s/spiffs.bin'.
================================================================================= [FAILED] Took 5.70 seconds =================================================================================
The terminal process "platformio 'run', '--target', 'buildfs', '--environment', 'nodemcu-32s'" terminated with exit code: 1.
Any help would be very much appreciated, as I feel that I did everything that the documentation stated.
I suspect you need to declare them as BINARY instead of TEXT. TEXT creates a null-terminated string which probably won't generate the _binary_..._end aliases.
target_add_binary_data(HTTP-server.elf "data/favicon.ico" BINARY)
target_add_binary_data(HTTP-server.elf "data/upload_script.html" BINARY)
Also, something's off with your SPIFFS image generation. You know that the CMake macros target_add_binary_data() and idf_component_register(... EMBED_TXTFILES ...) embed the data only into the application binary, right? You cannot use them to add files to a pre-generated SPIFFS partition. For that you need to use the spiffsgen.py script.
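A minimal sketch of that workflow (the 0x100000 image size and the 0x110000 flash offset are assumptions; both must match the SPIFFS entry in your partitions.csv):
# build a SPIFFS image from the contents of the data/ directory
python $IDF_PATH/components/spiffs/spiffsgen.py 0x100000 data spiffs.bin
# flash it to the SPIFFS partition's offset, taken from partitions.csv
esptool.py write_flash 0x110000 spiffs.bin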
The problem was that I was a little bit confused about the difference between embedding files into the app and sending files to a SPIFFS partition.
The solution was to move both the .html and .ico files from /data to /src. I think the reason is that this code: asm("_binary_favicon_ico_start") can't reference files in other directories, but I am not sure.
I also reverted the project CMakeLists file to the default and added this line to /src/CMakeLists:
idf_component_register(SRCS ${app_sources} EMBED_FILES "favicon.ico" "upload_script.html")
I also didn't need to create any filesystem image, as the SPIFFS partition was only needed by the web server itself.
I am on Linux (Ubuntu) and the target is a PIC18F47J53.
I basically want to program the chip and then let it run, from the command line, using a PICkit 4.
Using ipecmd (from MPLAB X IDE v5.45), this is my command:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W
This is my output
DFP Version Used : PIC18F-J_DFP,1.4.41,Microchip
*****************************************************
Connecting to MPLAB PICkit 4...
Currently loaded versions:
Application version............00.06.66
Boot version...................01.00.00
Script version.................00.04.17
Script build number............db473af2f4
Tool pack version .............1.6.961
PICkit 4 is supplying power to the target (3.25 volts).
Target device PIC18F47J53 found.
Device Revision Id = 0x1
*****************************************************
Calculating memory ranges for operation...
Erasing...
The following memory area(s) will be programmed:
program memory: start address = 0x0, end address = 0x3ff
program memory: start address = 0x1fc00, end address = 0x1fff7
configuration memory
Programming/Verify complete
Program Report
30-Jan-2021, 12:54:41
Device Type:PIC18F47J53
Program Succeeded.
Operation Succeeded
All good, and it takes about 12 seconds; however, after that the PICkit 4 turns off the target power, and the PICkit LED is BLUE (I guess the "ready" state).
The main question is: how can I keep the PICkit 4 powering the board? Is there a specific parameter? (I cannot find one in the readme.html.)
If I use the MPLAB X IPE GUI to program, the programming is much quicker (3 or 4 seconds), the PICkit LED is YELLOW, and the target is left powered on. (I selected "release from reset".)
I have tried to get the log with as many details as possible, but I cannot see the commands sent to the PICkit 4.
Any ideas? Thanks.
I realize that it's been a while since you asked, but I'll put the answer here for anyone who needs it: add -OL to your command line options.
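For reference, this is the command from the question with -OL appended; -OL corresponds to the "release from reset" behaviour mentioned in the question, so the target is left running after programming:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W -OL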
I am trying to run some code with pybullet. I am on Windows 10, have the latest VS Code, and I am using VS Code's WSL Remote with Ubuntu 18.04 LTS. I have an RTX 2070 SUPER graphics card. I just want to see this work; I've been trying to fix it for the last 3 hours.
First, here is the code I am trying to run in WSL:
import numpy as np
import pybullet as pb
import pybullet_data

physicsClient = pb.connect(pb.GUI)

# load plane
pb.setAdditionalSearchPath(pybullet_data.getDataPath())
planeId = pb.loadURDF('plane.urdf')

# load visual shape
visualShapeId = pb.createVisualShape(
    shapeType=pb.GEOM_MESH,
    fileName='random_urdfs/000/000.obj',
    rgbaColor=None,
    meshScale=[0.1, 0.1, 0.1])

collisionShapeId = pb.createCollisionShape(
    shapeType=pb.GEOM_MESH,
    fileName='random_urdfs/000/000_coll.obj',
    meshScale=[0.1, 0.1, 0.1])

multiBodyId = pb.createMultiBody(
    baseMass=1.0,
    baseCollisionShapeIndex=collisionShapeId,
    baseVisualShapeIndex=visualShapeId,
    basePosition=[0, 0, 1],
    baseOrientation=pb.getQuaternionFromEuler([0, 0, 0]))
I get no errors, but the X server window pops up (black) and closes immediately. I read that you need to disable your GPU with WSL, but I am scared of messing up my PC. I would only want to disable it when I need to see graphics / use the X server, not for all WSL applications.
Here is what shows up in my terminal:
user#DESKTOP-######:~/program$ python3 openAI.py
pybullet build time: Sep 22 2020 00:54:31
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Failed to create GL 3.3 context ... using old-style GLX context
Indirect GLX rendering context obtained
Making context current
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=GeForce RTX 2070 SUPER/PCIe/SSE2
GL_VERSION=1.4 (4.6.0 NVIDIA 451.67)
GL_SHADING_LANGUAGE_VERSION=(null)
pthread_getconcurrency()=0
Version = 1.4 (4.6.0 NVIDIA 451.67)
Vendor = NVIDIA Corporation
Renderer = GeForce RTX 2070 SUPER/PCIe/SSE2
Segmentation fault (core dumped)
user#DESKTOP-######:~/program$
@Emilio, I have got this working without any changes to the GPU, using the following process:
I used the VcXsrv application, set up in the same way as in this tutorial: https://jack-kawell.com/2020/06/12/ros-wsl2/ where, crucially, "Native opengl" is unchecked.
Export your IP address as in the tutorial; however, instead of 'export DISPLAY={your_ip_address}:0.0', go to the VcXsrv window (which should be blank at this point) and replace :0.0 with whatever display number is given. So for "Display DESKTOP-1234AB:1.0" you would enter 'export DISPLAY={your_ip_address}:1.0'.
In the Linux terminal enter: export LIBGL_ALWAYS_INDIRECT=0
You can check that this has taken effect by running glxinfo, which should print:
direct rendering: yes
When you run your python program it should open up in the VcXsrv window. For me there was no cursor visible but I could still interact with the object as if I did have a cursor.
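Putting the steps above together ({your_ip_address} and the display number are placeholders; take them from your own VcXsrv window):
# inside WSL: point X11 programs at the VcXsrv server running on Windows
export DISPLAY={your_ip_address}:1.0
# request direct rendering instead of indirect GLX
export LIBGL_ALWAYS_INDIRECT=0
# verify: this should report "direct rendering: yes"
glxinfo | grep "direct rendering"
# then run the program
python3 openAI.py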
In a device driver source file in the Linux tree, I saw dev_dbg(...) and dev_err(...). Where do I find the logged messages?
One reference suggests adding #define DEBUG; the other reference involves dynamic debug and debugfs, and I got lost.
dev_dbg() expands to dynamic_dev_dbg(), dev_printk(), or a no-op, depending on the compilation flags.
#if defined(CONFIG_DYNAMIC_DEBUG)
#define dev_dbg(dev, format, ...)                    \
do {                                                 \
        dynamic_dev_dbg(dev, format, ##__VA_ARGS__); \
} while (0)
#elif defined(DEBUG)
#define dev_dbg(dev, format, arg...) \
        dev_printk(KERN_DEBUG, dev, format, ##arg)
#else
#define dev_dbg(dev, format, arg...)                        \
({                                                          \
        if (0)                                              \
                dev_printk(KERN_DEBUG, dev, format, ##arg); \
})
#endif
dynamic_dev_dbg() and dev_printk() call dev_printk_emit(), which calls vprintk_emit().
This very same function is called in the normal case when you just do a printk(). Note that the remaining functions, such as dev_err(), end up in the same function.
Thus the buffer is the same in all cases, i.e. the kernel's internal buffer.
The logged message ends up being printed to:
1. the current console, if the kernel loglevel (which can be changed via the kernel command line or via procfs) is high enough for the given message, here KERN_DEBUG;
2. the internal buffer, which can be read by running the dmesg command.
Note that the data in 2. is kept only as long as there is still room in the buffer. Since the buffer is limited and circular, newer data overwrites the oldest. A quick sketch of checking both follows.
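For example, the console loglevel can be inspected and raised via procfs so that KERN_DEBUG messages reach the console (run as root):
# show current, default, minimum, and boot-time-default console loglevels
cat /proc/sys/kernel/printk
# raise the console loglevel so that KERN_DEBUG (7) messages are shown
echo 8 > /proc/sys/kernel/printk
# the internal buffer can be read at any time with
dmesg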
Additional information on how to enable Dynamic Debug:
First of all, be sure you have CONFIG_DYNAMIC_DEBUG=y in the kernel configuration.
Assume we would like to enable all debug prints in the built-in module named 8250. To achieve that, we simply add 8250.dyndbg=+p to the kernel command line.
If the same driver is compiled as a loadable module, we may either add options 8250 dyndbg to the modprobe configuration, or pass it on the command line when loading the module manually: modprobe 8250 dyndbg.
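Spelled out, the three variants look like this (the /etc/modprobe.d/8250.conf path is just an example location):
# built-in driver: append to the kernel command line
8250.dyndbg=+p
# loadable module, one-off at load time
modprobe 8250 dyndbg
# loadable module, persistent, e.g. in /etc/modprobe.d/8250.conf
options 8250 dyndbg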
More details are described in the Dynamic Debug documentation.
The "How certain debug prints are automatically enabled in linux kernel?" raises the question why some debug prints are automatically enabled and how DEBUG affects that when CONFIG_DYNAMIC_DEBUG=y. The answer is lying in the dynamic_debug.h and since it's used during compilation the _DPRINTK_FLAGS_DEFAULT defines the certain message appearence.
#if defined DEBUG
#define _DPRINTK_FLAGS_DEFAULT _DPRINTK_FLAGS_PRINT
#else
#define _DPRINTK_FLAGS_DEFAULT 0
#endif
You can find dev_err(...) output in the kernel messages. As the name implies, dev_err(...) messages are error messages, so they will definitely be printed if execution reaches that point. dev_dbg(...) calls are debug messages, which are used more generously in kernel driver code, and they are not printed by default. So everything you have read about dynamic debugging comes into play with dev_dbg(...).
There are several preconditions for dynamic debugging to work: 1. and 2. below are general preconditions for dynamic debugging; 3. and later apply to your particular driver/module/subsystem.
1. Dynamic debugging support has to be enabled in your kernel config: CONFIG_DYNAMIC_DEBUG=y. You can check whether this is the case with zgrep DYNAMIC_DEBUG /proc/config.gz.
2. debugfs has to be mounted. You can check with sudo mount | grep debugfs, and if it is not mounted you can mount it with sudo mount -t debugfs none /sys/kernel/debug.
3. Refer to the dynamic debugging documentation and enable the particular file/function/line you are interested in, for example as sketched below.
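A minimal sketch of step 3. using the debugfs control file (run as root; the file and module names are hypothetical):
# enable all dev_dbg() prints in a single source file
echo 'file mydriver.c +p' > /sys/kernel/debug/dynamic_debug/control
# or enable everything in a given module
echo 'module mymodule +p' > /sys/kernel/debug/dynamic_debug/control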
Hi, I want to make Snort 2.9.4 run on MIPS Linux based devices, so I cross compiled Snort and all the supporting packages.
I used the option --disable-static-daq when configuring Snort because I don't want to use all the DAQ modes; what I need is just the afpacket mode.
When the cross compiling was done, I moved daq_afpacket.so, libsfbpf.so.0.0.1, libdaq.so.2.0.0, libdnet.1.0.1, libpcre.so.0.0.1, and libpcap.so.1 to the target device's /usr/lib directory, and the snort binary into the target device's /bin directory.
Then I ran Snort like this:
/bin/snort -vde --daq afpacket --daq-dir /usr/lib
The output shows:
Running in packet dump mode
--== Initializing Snort ==--
Initializing Output Plugins!
/usr/lib/daq_afpacket.so: dlopen: File not found
segmentation fault
If I run snort like this:
# /bin/snort -vde --daq afpacket
Running in packet dump mode
--== Initializing Snort ==--
Initializing Output Plugins!
ERROR: Can't find afpacket DAQ!
Fatal Error, Quitting..
Do you know what I am missing here?
Let me answer it myself:
daq_afpacket.so depends on libsfbpf.so.0, which is a symbolic link to libsfbpf.so.0.0.1. So I had to copy libsfbpf.so.0.0.1 into /usr/lib and create the symbolic link to it.
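A sketch of those two steps, run on the target device:
# copy the real library and recreate the symbolic link daq_afpacket.so expects
cp libsfbpf.so.0.0.1 /usr/lib/
ln -s /usr/lib/libsfbpf.so.0.0.1 /usr/lib/libsfbpf.so.0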
After that, Snort can be started like this:
/bin/snort -vde --daq afpacket --daq-dir /usr/lib --daq-var buffer_size_mb=2 -i eth0 &