How to take images with Raspberry Pi since "raspistill" and "raspivid" are deprecated

Since the Raspberry Pi is transitioning from the old raspistill and raspivid tools to the newer libcamera stack, how should I take an image now if I don't want to use the CLI or C as the programming language? I can't find a wrapper for libcamera in any language other than C, and the new official Picamera2 library is still in an alpha phase and not recommended for production use.
I am also using a 64-bit version of the Raspberry Pi OS so I can't use the legacy camera interface.
I could downgrade to 32-bit, but what is the point of deprecating the old system if the new one is clearly not ready for production use?
How do you handle the Raspberry Pi's camera at the moment if you want to use a wrapper like Picamera? Am I missing something?

At the moment, the best way, if you want to use Bullseye, is probably to run libcamera-vid and pipe its output into a Python script. You can either launch it with Python's subprocess module, or just start a pipeline in the shell:
libcamera-vid <params> | python script.py
Be sure to read from sys.stdin.buffer rather than sys.stdin to avoid CR/LF mangling.
Preferably choose a YUV-based format so that frames have a deterministic length, as opposed to MJPEG, where the frame length varies with image content and you'd have to search for the JPEG SOI/EOI markers.
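For illustration, a minimal sketch of the Python side of such a pipeline. It assumes libcamera-vid is run with --codec yuv420 and a known --width and --height (all standard libcamera-vid options), e.g. libcamera-vid -t 0 --codec yuv420 --width 640 --height 480 -o - | python3 script.py:
import sys
import numpy as np

# These must match the --width/--height given to libcamera-vid.
WIDTH, HEIGHT = 640, 480
# YUV420 packs 1.5 bytes per pixel, so every frame has the same size.
FRAME_SIZE = WIDTH * HEIGHT * 3 // 2

while True:
    # Read the binary buffer directly to avoid any newline translation.
    raw = sys.stdin.buffer.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:
        break  # end of stream
    frame = np.frombuffer(raw, dtype=np.uint8)
    # The first WIDTH*HEIGHT bytes are the Y (luma) plane.
    y_plane = frame[:WIDTH * HEIGHT].reshape(HEIGHT, WIDTH)
    # ... process y_plane (or the full YUV data) here ...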

Did you try to see if the cam utility is installed?
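If you want to check for it programmatically, a trivial sketch from Python:
import shutil

# Look for libcamera's "cam" test utility on the PATH.
path = shutil.which("cam")
print(f"cam found at {path}" if path else "cam is not installed")
If it is present, cam -l lists the cameras that libcamera has detected.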

Related

Calibrating a resistive touchscreen for libinput

I'm using a GUI application framework (EGT) on an ATMEL/Microchip SAMA5D4. The framework features DRM/KMS and X11 backends.
I've looked at using tslib to calibrate a resistive touchscreen for the device, but due to EGT limitations it looks like I'm going to have to use libinput for the moment.
Is there a calibration mechanism (equivalent of tslib) available for libinput? I've looked at xlibinput_calibrator & it seems like it could be a solution but I'll have to sort out the dependencies in the Yocto build.
Thanks,
For anyone looking at this: xlibinput_calibrator looks like it requires X11, so it was not an option.
I eventually ended up using a startup script to prevent EGT from capturing raw events from the touchscreen (deleting /dev/event0 or similar) and instead used the calibrated source from ts_input.

Is there a Swift bridge for Python?

I developed a script to set attributes for my iTunes music library on my Mac using an AppleScript bridge called AppScript. AppScript allowed me to write my code in native Python without having to learn AppleScript; it would translate my native Python to AppleScript. Since AppleScript has been replaced with Swift, I am wondering if there is a similar bridge for Swift. I have done my research, but no luck. Additionally, if there is, can you provide an example of how to control iTunes (now Music) with said library? Thanks in advance
Swift is designed to use macOS's programming APIs: AppKit, Quartz, CoreFoundation, NSObject, etc., rather than the higher-level OSAX event-driven elements (open, print, close, document, window, etc.) used in AppleScript.
The system-bundled Python (2.7) comes with PyObjC, which allows Python to use the same programming APIs that Swift does, e.g. for "writing apps". PyObjC also contains a Scripting Bridge to the AppleScript events and objects. The canonical example code does use iTunes:
from Foundation import *
from ScriptingBridge import *
# Get a scripting proxy for iTunes via its bundle identifier
iTunes = SBApplication.applicationWithBundleIdentifier_("com.apple.iTunes")
print iTunes.currentTrack().name()
(Obviously, this is Python 2; under Python 3 you need to put brackets around the print call. Also, personally, I wouldn't import * everything, as it's very slow.)
Here are some other methods/attributes based on the Script Dictionary:
iTunes.nextTrack()
iTunes.previousTrack()
iTunes.playpause()
iTunes.fastForward()
iTunes.setShuffleEnabled_(False)
iTunes.currentPlaylist().playOnce_(False)
The system-bundled version of PyObjC is very old, but the library itself is still being developed. If you're using Python 3, then you should install the latest version of PyObjC.
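As a rough illustration, the same example under Python 3 with a current PyObjC (pip install pyobjc) and explicit imports might look like this; note that on recent macOS the bundle identifier is com.apple.Music rather than com.apple.iTunes:
from ScriptingBridge import SBApplication

# Use "com.apple.iTunes" on older systems that still ship iTunes.
music = SBApplication.applicationWithBundleIdentifier_("com.apple.Music")
track = music.currentTrack()
print(track.name())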
FWIW, you can actually run uncompiled Swift as a 'script' in the shell.

How to change GPIO.BOARD to GPIO.BCM for sensor SHT10

I want to integrate two scripts into one script.
The scripts are for the SHT10 and MAX31855 sensors; both make use of software SPI.
The SHT10 uses GPIO.BOARD and the MAX31855 uses GPIO.BCM.
The problem is that I get the error "ValueError: A different mode is already been set." I don't know how to resolve this because the two sensors use different libraries; I think the problem is in those libraries.
Is there an easy solution to this problem?
Running the scripts separately, there is no problem.
You can try using GPIO.setmode(GPIO.BCM) for one sensor and then GPIO.setmode(GPIO.BOARD) for the other.
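Note that RPi.GPIO only allows one numbering mode at a time; calling GPIO.cleanup() releases the pins and clears the mode, after which setmode() can be called again. A rough sketch of alternating between the two libraries on that basis (the sensor-reading bodies are placeholders):
import RPi.GPIO as GPIO

def read_sht10():
    GPIO.setmode(GPIO.BOARD)   # SHT10 library expects physical pin numbers
    # ... call the SHT10 library here ...
    GPIO.cleanup()             # release pins and clear the mode

def read_max31855():
    GPIO.setmode(GPIO.BCM)     # MAX31855 library expects Broadcom numbering
    # ... call the MAX31855 library here ...
    GPIO.cleanup()

read_sht10()
read_max31855()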

How does Marmalade support text-to-speech?

Is there any API/SDK to support TTS (text-to-speech) in Marmalade?
I've had some success porting Flite (http://www.speech.cs.cmu.edu/flite/) to the Marmalade environment. It produces wave files and raw in-memory buffers (which can then be played directly using s3eSound) without problems.
The s3eSound adapter (which plays the text directly from within flite) is a work in progress, so, while it does produce something close to recognisable speech, it is also obviously bugged. For my purposes, the raw buffers are more useful anyway, but feel free to try to fix it up.
You can see what I've done here: https://github.com/madmaw/marmalade-flite
There is no specific API provided by Marmalade; however, you may be able to use the EDK if native APIs provide this functionality on iOS or Android.
https://www.madewithmarmalade.com/devnet/docs#/main/extensions.html

printk in driver

I am really new to Linux module programming.
I need to somehow make some tweaks to the ath9k driver in Linux.
I finally got the compat-wireless source code of ath9k to compile on Ubuntu 11.04 and was trying to play around with the code.
I tried using printk to see what happens.
First I put printk in the init.c file, and the message I printed shows up when I use dmesg in the terminal. However, when I tried to use the same printk in another file like rc.c, it does not show up at all.
I am wondering why that is?
Also, is there some other way that I could log some information from the code, similar to fprintf? What I need is to somehow extract the packet header from the driver.
Thank you
Best Regards.
Read about proc fs; it's a great framework for extracting data from device drivers.
Once you have registered an entry in proc fs for your device, you can read from it.
When the read function is called, a callback function you defined creates the output. This is an excellent way to retrieve data from a device.
There are also two other methods. One is sysfs; you can google for it. The second: if the device is a char device, you can implement an ioctl function which returns the info you need.
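On the userspace side, reading such a proc entry is then trivial; a minimal Python sketch, where /proc/ath9k_debug is a purely hypothetical name standing in for whatever entry your driver registers:
# The path below is an assumption for illustration only; use whatever
# name your driver's proc registration actually created.
with open("/proc/ath9k_debug", "rb") as f:
    data = f.read()
print(data.decode(errors="replace"))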