GStreamer audio how-to? - raspberry-pi

I've been trying to figure out GStreamer for "audio only" for a couple of days now, but between the differing instructions for 1.0 and 0.10, and the fact that most instructions deal with video, I'm having difficulty understanding how it all fits together and talks over a network (same subnet). Most examples also seem to push audio to a destination rather than waiting for something to connect to them, and I don't think that's what I need.
Basically, I am using the BlueIris camera recording system, which talks to IP cameras. Unfortunately, my cameras do not have microphones, so I would like to use a spare Raspberry Pi with a USB microphone to serve the audio, and BlueIris will connect to it to get the audio. Apparently I can specify an alternate audio source via RTSP or another streaming protocol.
The cameras are working great, so GStreamer will just be my audio source.
So my progress so far:
I have figured out how to play audio from the USB microphone to the speakers using:
gst-launch-1.0 alsasrc device=hw:1 ! audioconvert ! autoaudiosink
This is working great.
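For reference, hw:1 refers to ALSA card 1; if you need to confirm which card the USB microphone is, arecord -l lists the available capture devices:
arecord -l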
Then I tried setting up a TCP server session to wait for something to connect to it:
gst-launch-1.0 alsasrc device=hw:1 ! audioconvert ! audioresample ! speexenc ! rtpspeexpay ! tcpserversink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
Redistribute latency...
(The server seems to start up without problems.)
And then have a client connect:
gst-launch-1.0 tcpclientsrc ! speexdec ! autoaudiosink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0:
streaming task paused, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
...and that's a big NOPE!
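My guess is that raw TCP carries no caps, so the client has no idea the incoming bytes are RTP-wrapped Speex. Something like gdppay/gdpdepay to serialize the caps along with the stream might work (untested sketch; it assumes gst-plugins-bad is installed and drops the RTP payloader, since GDP already frames the data):
gst-launch-1.0 alsasrc device=hw:1 ! audioconvert ! audioresample ! speexenc ! gdppay ! tcpserversink host=0.0.0.0 port=3000
gst-launch-1.0 tcpclientsrc host=192.168.0.123 port=3000 ! gdpdepay ! speexdec ! audioconvert ! autoaudiosink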
So I'm hoping, for testing, that I can go to my Windows machine, start up VLC, and connect to the Raspberry Pi with something like rtsp://192.168.0.123, but this is where it all gets fuzzy, especially when I can't even get GStreamer to connect to itself on the same box.
Can someone please help?

This did it for me:
gst-launch-1.0 alsasrc device=hw:1,0 ! mulawenc ! rtppcmupay ! udpsink host=224.1.1.1 auto-multicast=true port=5000
Now VLC works by opening rtp://224.1.1.1:5000 and gets the correct codec that I wanted.
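For testing without VLC, a matching GStreamer receive pipeline might look like this (a sketch; the caps string is my assumption based on the mu-law RTP payload, i.e. PCMU, payload type 0, 8 kHz):
gst-launch-1.0 udpsrc address=224.1.1.1 port=5000 auto-multicast=true caps="application/x-rtp,media=audio,encoding-name=PCMU,clock-rate=8000,payload=0" ! rtppcmudepay ! mulawdec ! audioconvert ! autoaudiosink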
On to the next problem...

Related

Is there any way to debug a Movesense device without a Movesense programmer?

I'm developing a product using a Movesense sensor, but I don't have a Movesense programmer. Currently, I'm using the DFU method to update to the latest firmware. The issue I'm facing is that I can't see the logs of the Movesense device, so I don't know which part of my program isn't working correctly or where I made a mistake. Is there any way that I can see the logs?
Thank you in advance.
I had the same question. What I am doing now is:
1. Use the Movesense ADB bridge app found here: https://www.movesense.com/docs/esw/tools/
2. Add some debug info logs to your code, for example: DebugLogger::info("readTemperatureFromSensor() idx: %u", idx);
3. Open the Movesense ADB bridge app and connect to the sensor.
4. As instructed at the link above, use the command that connects to the logs of the Android device and shows the info: adb logcat | grep OUTPUT
5. Make the sensor do whatever you have programmed it to do, whether that's a whiteboard subscribe or some BLE action.
6. Then also subscribe to the debug logs, for example: adb shell am broadcast -a "android.intent.action.MOVESENSE" --es type subscribe --es path System/Debug/Verbose --es value '''{}''' --es id 2
Steps 2 and 3 are interchangeable, of course.
PS: Make sure your app is built with the debug module as well, i.e.:
OPTIONAL_CORE_MODULE(DebugService, true)

Reading heart-rate data from a Polar belt with a Raspberry Pi

I am trying to retrieve heart-rate data from a Polar belt to use as part of an emotion recognition algorithm. I am using a Raspberry Pi 3B with Raspbian. I am able to connect to the device with bluetoothctl, and when I run info I get a list of the UUIDs.
Here is where it stops. I have tried to use gatttool as in the example below, but that does not work. When I try to connect I get the message: Connection Refused(111)
$ sudo gatttool -i hci1 -b 00:22:D0:33:1E:0F -I
[ ][00:22:D0:33:1E:0F][LE]> connect
[CON][00:22:D0:33:1E:0F][LE]>
So I tried to use bleak and pygatt, and I'm not able to make those work either. I am quite a newbie, so I am probably doing something wrong, but now I have run out of ideas. Any suggestions will be appreciated.
hciattach, hciconfig, hcitool, hcidump, rfcomm, sdptool, ciptool, and gatttool were deprecated by the BlueZ project in 2017. If you are following a tutorial that uses them, there is a chance that it might be out of date.
For testing it is best to use the bluetoothctl tool.
You say that you have successfully connected and get a list of UUIDs. Was 00002A37-0000-1000-8000-00805F9B34FB one of them?
If it is, then select that inside bluetoothctl, e.g.
gatt.select-attribute 00002a37-0000-1000-8000-00805f9b34fb
If that is successful then you should be able to read the value with:
gatt.read
Or to test notifications:
gatt.notify on
This should start data being displayed from the belt.
gatt.notify off will stop the data being sent.
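Putting those together, a test session might look like this (using the belt address from your gatttool example):
bluetoothctl
connect 00:22:D0:33:1E:0F
gatt.select-attribute 00002a37-0000-1000-8000-00805f9b34fb
gatt.notify on
gatt.notify off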
If you have this working with bluetoothctl, then you can reproduce it in Python, confident that the RPi and belt are able to connect successfully.
There is an example of building a BLE client with Python at:
https://stackoverflow.com/a/63751113/7721752
I noticed in your gatttool example you are using hci1 rather than the more typical value of hci0. This is normally the case if you have added a USB BLE dongle. In the above example you would have to change ADAPTER_PATH = '/org/bluez/hci0' to end with hci1.
There is also the example with Bleak at:
https://stackoverflow.com/a/72541361/7721752
For Bleak, to select an alternative adapter you would pass the adapter name to BleakClient, e.g.:
async with BleakClient(address, adapter="hci1") as client:
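If it helps, here is a minimal Bleak sketch that subscribes to Heart Rate Measurement notifications (assumptions: bleak is installed via pip, the belt address is the one from the question, and the payload parsing follows the standard Heart Rate Measurement format, where flags bit 0 selects a uint8 or uint16 value):
import asyncio
from bleak import BleakClient

ADDRESS = "00:22:D0:33:1E:0F"  # belt address from the question
HR_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"

def handle_hr(_, data: bytearray):
    # Flags bit 0: heart-rate value is uint16 if set, uint8 otherwise
    hr = int.from_bytes(data[1:3], "little") if data[0] & 0x01 else data[1]
    print("Heart rate:", hr)

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT, handle_hr)
        await asyncio.sleep(30.0)  # listen for 30 seconds
        await client.stop_notify(HR_MEASUREMENT)

asyncio.run(main())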

Dual Audio output to both Pulseaudio Bluetooth and HDMI on Raspberry Pi 3B+ Buster

Audio output to both Pulseaudio and HDMI?
On my boat, the Raspberry Pi 3B+ (running Buster) is used in two ways:
it runs Kodi to play music, which outputs via Bluetooth to a car radio that drives the speakers (it took two days of work to get that to happen; I finally found https://peppe8o.com/fixed-connect-bluetooth-headphones-with-your-raspberry-pi/)
it also plays movies that output to an HDMI projector with speakers.
The Pi boots up into LXDE, which runs a bash script to connect (and keep trying to connect, as per BluManCZ's answer in https://unix.stackexchange.com/questions/334386/how-to-set-up-automatic-connection-of-bluetooth-headset) to the radio by Bluetooth and then autostarts Kodi.
The music is controlled by an Android Yatse app.
When I want to play a movie, I stop the music with the Yatse app, turn on the projector, and use a wireless keyboard to play the movie. But I have to manually go into Kodi's system settings and select HDMI as the sound output.
When I finish the movie, I power off the Pi correctly. But if I forget to first manually set the sound output back to PulseAudio within Kodi, then the next time I boot up expecting music I hear nothing, as the sound is still going to HDMI. Then I have to turn on the projector so I can use the keyboard to switch it back to PulseAudio.
So, is there some way I can get it to output to BOTH PulseAudio (Bluetooth) AND HDMI, so that whichever device is switched on (radio or projector), I get sound out?
Or can I have it automatically detect which one is active and output to that?
How do I get it so that I can seamlessly switch from playing music through the bluetooth to playing videos through the projector?
Bear in mind that when I power up the Pi, either one, the other, or both of the radio and projector might be powered on.
OK, I solved it, I think, using the following steps. This assumes that Bluetooth output to the A2DP speakers already works (see the steps I took for that in the original question):
Install paprefs:
$ sudo apt install paprefs
Then run it on the desktop using Terminal:
$ paprefs
This brings up a GUI window with several tabs. Select the 'Simultaneous Output' tab, which offers a single checkbox to enable or disable the feature. Turn it on.
Then restart PulseAudio:
$ killall pulseaudio
Now you can go to VLC and, in the audio menu, send the output to the simultaneous output device. The sound will go to both. Unfortunately, when you shut down VLC it goes back to HDMI and you have to change it manually again. Also, in Kodi it does not appear as an option.
So, a few more steps:
Edit the default configuration for pulseaudio
$ sudo nano /etc/pulse/default.pa
and add the following line at the beginning, before any other modules are loaded:
load-module module-combine-sink sink_name=combined
This sets up a new virtual device that outputs to all the others.
While you are there, make sure that this line is also in there somewhere (probably farther down):
load-module module-default-device-restore
This will revert to the default device if something changes in the system (e.g. HDMI is turned off or on).
Exit nano and save the file with Ctrl-X, saying yes to the prompts.
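For reference, the relevant parts of /etc/pulse/default.pa then look something like this (comments are mine):
# Virtual sink that mirrors audio to all real outputs (Bluetooth, HDMI, jack):
load-module module-combine-sink sink_name=combined
# ... rest of the stock configuration ...
# Restores the previously chosen default sink/source on startup:
load-module module-default-device-restore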
List the available devices known to pulseaudio:
$ pacmd list-sinks | grep -e 'name:' -e 'index:'
This should now list the Bluetooth, jack, HDMI, and combined devices. The one with the asterisk is the current fallback device. You want to make this the 'combined' sink. To do that:
$ sudo pacmd set-default-sink combined
$ sudo reboot
Check again, and this time 'combined' should have the asterisk next to it:
$ pacmd list-sinks | grep -e 'name:' -e 'index:'
Now when you play VLC or Kodi the sound should go to both HDMI and Bluetooth.
This seems to survive a full shutdown and power-up, so I think it achieves the goal. I have not yet tried all the different combinations of starting with the different output devices on or off, but I am hopeful that it works.

VLC MPTS streaming

I'm trying to stream MPEG-TS using VLC as UDP multicast. I have a recorded file with several programs, and I need to receive each program on my output as a single-program TS.
I'm doing this with the console interface on Ubuntu 14.04, and I have a problem: I cannot get any program except the first one on my output.
cvlc MyMPTS.ts --sout '#duplicate{dst=udp{mux=ts,dst=239.233.1.1:5510},select="program=1"}'
This command works well, but if I try adding another program to the chain, or if I change the program number to another one, I get the following output:
[0x7ff748c93c38] main decoder error: cannot create packetizer output (mpga)
[0x7ff748c8c168] main decoder error: cannot create packetizer output (mpgv)
and there is nothing on the output.
If I stream using the GUI it works well. I can select any program in my MPTS and get it on the output, and I can launch several instances of VLC and set up streaming with different programs as well. But the GUI doesn't work in my case.
Why can't VLC work with any program except the first one defined in the source file?
Using your command I get:
[00007fa880008b38] stream_out_standard stream out error: UDP output is only valid with TS mux
[00007fa880008b38] stream_out_standard stream out error: no suitable sout mux module for `udp/ts://...'
This seems to be fixed by removing the mux=ts from dst=udp:
cvlc input.ts --sout '#duplicate{dst=udp{dst=...},select="program=94",dst=udp{dst=...},select="program=102"}'
It will still complain about mpga and mpgv, but it will start sending MPEG-TS over UDP. No idea what it doesn't like, though; maybe something to do with muxer selection.
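For example, with the multicast address from the question, a two-program version might look like this (the program numbers and the second address are made up for illustration):
cvlc MyMPTS.ts --sout '#duplicate{dst=udp{dst=239.233.1.1:5510},select="program=1",dst=udp{dst=239.233.1.2:5510},select="program=2"}'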

Canon DSLR Video loop back using v4l2loopback and EDSDK Liveview?

I want to use my DSLR camera as a video input for, say, Skype or Google Talk under Linux and Android.
Is it possible to create a video loopback using v4l2loopback and the Canon EDSDK? How can I pipe the live view buffer from the camera to the video loopback?
Thanks
As of October 2017, GStreamer has been updated to 1.0 and v4l2loopback has also received some updates.
As such, the old command posted by Reinaert Albrecht doesn't work anymore, and the new command that works now is:
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
decodebin2 has been replaced by decodebin
ffmpegcolorspace has been replaced by videoconvert
the tee filter has been added to account for a bug in the v4l2loopback driver (see: https://github.com/umlaeute/v4l2loopback/issues/83)
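To check that frames are actually reaching the loopback device, you can play them back in a second terminal (assuming the same /dev/video0 as above):
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink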
To my understanding, Canon's EDSDK is still only available upon request for the Windows and OS X platforms (C and Objective-C). On Linux, you might want to try installing the SDK under Wine, or resort to the more general-purpose gPhoto. Note that the "LiveView" or "EvF" images are individual JPEGs. Alternatively, you might want to capture this through the HDMI output port (which will be full-res on the EOS 5D Mark III in Spring 2013).
You will need a "producer" application that writes frames to the loopback device (and which has previously acquired those frames via the Canon EDSDK).
v4l2loopback already comes with a few simple producer examples, and you could have a look at other applications that already have native v4l2loopback output support, e.g. Gem, LiVES, gmerlin, and GStreamer.
You can easily do this with the following commands:
modprobe v4l2loopback
And then issue this:
gphoto2 --stdout --capture-movie | gst-launch-0.10 fdsrc ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
Change the video device appropriately.