I'm using a GUI application framework (EGT) on an Atmel/Microchip SAMA5D4. The framework features DRM/KMS and X11 backends.
I've looked at using tslib to calibrate a resistive touchscreen for the device, but due to EGT limitations it looks like I'll have to use libinput for the moment.
Is there a calibration mechanism (an equivalent of tslib) available for libinput? I've looked at xlibinput_calibrator, and it seems like it could be a solution, but I'll have to sort out the dependencies in the Yocto build.
Thanks,
For anyone looking at this: xlibinput_calibrator requires X11, so it was not an option.
I eventually ended up using a startup script to prevent EGT from capturing raw events from the touchscreen (it deleted /dev/event0 or similar) and instead used the calibrated source from ts_input.
Since the Raspberry Pi is transitioning from the old raspistill and raspivid tools to the newer libcamera stack, how should I take an image now if I don't want to use the CLI or C as the programming language? I can't find any wrapper for libcamera in any language other than C, and the new official Picamera2 library is still in an alpha phase and not recommended for production use.
I am also using a 64-bit version of Raspberry Pi OS, so I can't use the legacy camera interface.
I could downgrade to 32-bit, but what is the point of deprecating the old system if the new one is clearly not ready for production use?
How do you handle using the Raspberry Pi camera at the moment if you want a wrapper like Picamera? Am I missing something?
At the moment, the best way, if you want to use Bullseye, is probably to run libcamera-vid and pipe its output into a Python script. You can either launch it with Python's subprocess module, or just start a pipeline:
libcamera-vid <params> | python script.py
Be sure to read from sys.stdin.buffer (the underlying binary stream) to avoid CR/LF mangling.
Probably choose a YUV-based format to ensure frames are a deterministic length, as opposed to MJPEG, where the frame length varies with image content and you would have to search for the JPEG SOI/EOI markers.
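Here's a minimal sketch of the subprocess variant, assuming a 640x480 YUV420 stream (so every frame is exactly width x height x 3/2 bytes); the resolution and the frame-processing step are placeholders you'd adapt:

import subprocess

WIDTH, HEIGHT = 640, 480                 # assumed capture size; match your --width/--height
FRAME_SIZE = WIDTH * HEIGHT * 3 // 2     # YUV420: full-res Y plane + quarter-res U and V planes

# Run libcamera-vid indefinitely (-t 0), writing raw YUV420 frames to stdout (-o -).
proc = subprocess.Popen(
    ["libcamera-vid", "--codec", "yuv420",
     "--width", str(WIDTH), "--height", str(HEIGHT),
     "-t", "0", "-o", "-"],
    stdout=subprocess.PIPE)

while True:
    frame = proc.stdout.read(FRAME_SIZE)
    if len(frame) < FRAME_SIZE:          # end of stream
        break
    # ... process the raw frame bytes here ...

If you use the shell pipeline instead, replace proc.stdout with sys.stdin.buffer and the reading loop stays the same.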
Did you try to see if the cam utility is installed?
I'm working on a project where I have to control a robot (I already have it as a Simulink model) with an Xbox controller.
So far I haven't been able to find a good example or a good idea for making these two interact.
I want to change some variables (inputs) with the buttons of the Xbox controller and then get feedback, for example vibration (output).
Is it possible to do that with ROS and Simulink, so that I can work with the ROS joy package and then implement it in my Simulink model?
Any advice here would be very helpful.
Thanks a lot.
I can't test this, but I know from Ubuntu 16 and earlier that jstest and jstest-gtk (the nicer GUI interface) can be installed via apt to check and configure any joysticks/gamepads, and they work well. Your gamepad will then show up as a device under /dev/ (e.g. /dev/js0 or /dev/input/js0). That file handle is easy to use, with many supporting third-party/one-off libraries (you don't have to use all of ROS if you don't want or need to).
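As a rough illustration of how simple that device file is to consume, here is a sketch that reads the legacy Linux joystick interface directly (each event is an 8-byte struct: u32 timestamp, s16 value, u8 type, u8 number); the device path is an assumption for your system:

import struct

EVENT_FORMAT = "IhBB"                        # u32 time (ms), s16 value, u8 type, u8 number
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)   # 8 bytes per event

JS_EVENT_BUTTON = 0x01
JS_EVENT_AXIS = 0x02

# Assumed device node; yours may be /dev/js0 instead.
with open("/dev/input/js0", "rb") as js:
    while True:
        event = js.read(EVENT_SIZE)
        if not event:
            break
        time_ms, value, ev_type, number = struct.unpack(EVENT_FORMAT, event)
        if ev_type & JS_EVENT_BUTTON:
            print("button", number, "pressed" if value else "released")
        elif ev_type & JS_EVENT_AXIS:
            print("axis", number, "value", value)   # range -32767..32767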
And I know from MATLAB that there are ways to hack up a solution, but their own first-party solution is vrjoystick, which takes a 1-based numeric ID. If your gamepad shows up as js0, the ID should then be 1: id = 1; joy = vrjoystick(id,'forcefeedback');
I want to integrate two scripts into one script.
The scripts are for the SHT10 and MAX31855 sensors. Both make use of software SPI.
The SHT10 library uses GPIO.BOARD and the MAX31855 library uses GPIO.BCM.
The problem is that I get the error "ValueError: A different mode has already been set". I don't know how to resolve this because the sensors use different libraries. I think the problem is in those libraries.
Is there an easy solution for this problem?
Running the scripts separately, there is no problem.
You can try calling GPIO.setmode(GPIO.BOARD) before the SHT10 code and GPIO.setmode(GPIO.BCM) before the MAX31855 code, but note that RPi.GPIO raises that same ValueError if a different mode is still active, so you need to call GPIO.cleanup() between the mode changes to reset the pin numbering mode.
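A minimal sketch of that idea, where read_sht10 and read_max31855 are hypothetical stand-ins for calls into the two sensor libraries:

import RPi.GPIO as GPIO

def read_sht10():
    GPIO.setmode(GPIO.BOARD)     # the SHT10 library expects BOARD numbering
    # ... run the SHT10 software-SPI transaction here ...
    GPIO.cleanup()               # resets the numbering mode so the next setmode() is legal

def read_max31855():
    GPIO.setmode(GPIO.BCM)       # the MAX31855 library expects BCM numbering
    # ... run the MAX31855 software-SPI transaction here ...
    GPIO.cleanup()

while True:
    read_sht10()
    read_max31855()

Note that cleanup() also releases the pins, so each library must reconfigure its pins on every read; that costs a little time per sample but avoids the mode conflict.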
I'm trying out OGRE and I'd like to ask a question about the OGRE config dialog.
The dialog, which can be opened with Ogre::Root::showConfigDialog(), lists only "800 x 600 @ 32-bit colour" for Video Mode, both for the "Direct3D9 Rendering Subsystem" and the "Direct3D11 Rendering Subsystem".
My question is: why is there only 800x600x32? Is there a way to make it list more video modes, like 1024x768x32, 1920x1080x32, etc.?
I've tried Google-searching, but the closest thing I found was how to change the video mode without using the config dialog.
Any help would be appreciated, thank you!
EDIT:
Here's a link to my screenshot of OGRE Engine Rendering Setup dialog, since I don't have enough reputation to upload images.
http://imgur.com/kNDy48E
In general: This list will automatically contain all available video modes reported by the selected rendering API drivers. If you are certain that your current API and drivers should allow more, you could debug the respective _initialise() function, e.g. for D3D11 in OgreD3D11RenderSystem.cpp:
RenderWindow* D3D11RenderSystem::_initialise( bool autoCreateWindow, const String& windowTitle )
Internally, the function D3D11VideoModeList::enumerate() will be used to enumerate all possible values from the driver.
I saw that your machine uses "NVIDIA Optimus". Did you try telling the NVIDIA driver to use the correct GPU for your Ogre application? I saw that the GTX is selected in the config dialog, but just to make sure: How to select Optimus GPU.
Also this Optimus policy trick might help:
NVIDIA released Optimus rendering policies guidelines not long ago.
If the user has driver 302 or higher, we can hint the driver to use the dedicated GPU. All we need to do is to export a variable:
extern "C" {
_declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
Is there any API/SDK to support TTS (text-to-speech) in Marmalade?
I've had some success porting Flite (http://www.speech.cs.cmu.edu/flite/) to the Marmalade environment. It produces wave files and raw in-memory buffers (which can then be played directly using s3eSound) without problems.
The s3eSound adapter (which plays the text directly from within Flite) is a work in progress, so while it does produce something close to recognisable speech, it is also obviously buggy. For my purposes the raw buffers are more useful anyway, but feel free to try to fix it up.
You can see what I've done here: https://github.com/madmaw/marmalade-flite
There is no specific API provided by Marmalade; however, you may be able to use the EDK if the native APIs provide this functionality on iOS or Android.
https://www.madewithmarmalade.com/devnet/docs#/main/extensions.html