I want to use my DSLR camera as a video input for, let's say, Skype / Google Talk under Linux and Android.
Is it possible to create a video loopback using v4l2loopback and the Canon EDSDK? How can I pipe the live view buffer from the camera to the video loopback?
Thanks
As of October 2017, GStreamer has been updated to 1.0 and v4l2loopback has also received some updates.
As such, the old command posted by @Reinaert Albrecht no longer works; the command that works now is:
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
decodebin2 has been replaced by decodebin
ffmpegcolorspace has been replaced by videoconvert
the tee filter has been added to account for a bug in the v4l2loopback driver (see: https://github.com/umlaeute/v4l2loopback/issues/83)
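Depending on the client application, you may also need extra options when loading the v4l2loopback module; Chrome-based apps, for example, are commonly reported to need exclusive_caps=1 (the card_label below is optional and just names the device):
sudo modprobe v4l2loopback exclusive_caps=1 card_label="DSLR Webcam"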
To my understanding, Canon's EDSDK is still only available upon request, for the Windows and OSX platforms (C and Objective-C). On Linux, you might want to try installing the SDK under Wine, or resort to the more general-purpose gPhoto. Note that the "LiveView" or "EvF" images are individual JPEGs. Alternatively, you might want to capture this through the HDMI output port (which will be full resolution on the EOS 5D Mark III in Spring 2013).
You will need a "producer" application that writes frames to the loopback device (having previously acquired those frames via the Canon EDSDK).
v4l2loopback already comes with a few simple producer examples, and you could have a look at other applications that already have native v4l2loopback output support, e.g. Gem, LiVES, gmerlin, and GStreamer.
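As a quick sanity check that the loopback device itself works, before writing your own producer, you can feed it a test pattern with GStreamer (assuming the loopback device came up as /dev/video0):
gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video0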
You can easily do this with the following commands:
modprobe v4l2loopback
And then issue this:
gphoto2 --stdout --capture-movie | gst-launch-0.10 fdsrc ! decodebin2 name=dec ! queue ! ffmpegcolorspace ! v4l2sink device=/dev/video0
Change the video device appropriately.
I am trying to retrieve heart-rate data from a Polar belt to use as part of an emotion-recognition algorithm. I am using a Raspberry Pi 3B with Raspbian. I am able to connect to the device with bluetoothctl, and when I open info I get a list of the UUIDs.
Here is where it stops. I have tried to use gatttool according to the example below, but that does not work. When I try to connect I get the message: Connection refused (111)
$ sudo gatttool -i hci1 -b 00:22:D0:33:1E:0F -I
[ ][00:22:D0:33:1E:0F][LE]> connect
[CON][00:22:D0:33:1E:0F][LE]>
So I tried to use bleak and pygatt, and I'm not able to make these work. I am quite a newbie, so I am probably doing something wrong. But now I have run out of ideas. Any suggestions will be appreciated.
hciattach, hciconfig, hcitool, hcidump, rfcomm, sdptool, ciptool, and gatttool were deprecated by the BlueZ project in 2017. If you are following a tutorial that uses them, there is a chance that it might be out of date.
For testing it is best to use the bluetoothctl tool.
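For example, connecting to the belt (using the address from your gatttool attempt) should look something like this:
$ bluetoothctl
[bluetooth]# connect 00:22:D0:33:1E:0F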
You say that you have successfully connected and get a list of UUIDs. Was 00002A37-0000-1000-8000-00805F9B34FB one of them?
If it is then select that inside bluetoothctl e.g.
gatt.select-attribute 00002a37-0000-1000-8000-00805f9b34fb
If that is successful then you should be able to read the value with:
gatt.read
Or to test notifications:
gatt.notify on
This should start data being displayed from the belt.
gatt.notify off will stop the data being sent.
If you have this working with bluetoothctl, then you can reproduce it with Python, confident that the RPi and belt are able to connect successfully.
There is an example of building a BLE client with Python at:
https://stackoverflow.com/a/63751113/7721752
I noticed in your gatttool example you are using hci1 rather than the more typical value of hci0. This is normally the case if you have added a USB BLE dongle. In the above example, you would have to change ADAPTER_PATH = '/org/bluez/hci0' to end with hci1.
There is also the example with Bleak at:
https://stackoverflow.com/a/72541361/7721752
For Bleak to select an alternative adapter, you would add the adapter address to the BleakClient, e.g.:
async with BleakClient(address, adapter="yy:yy:yy:yy:yy:yy") as client:
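If you are not sure what address each adapter has, bluetoothctl can list the available controllers (the addresses and names below are placeholders):
$ bluetoothctl list
Controller xx:xx:xx:xx:xx:xx raspberrypi [default]
Controller yy:yy:yy:yy:yy:yy usb-dongle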
Audio output to both Pulseaudio and HDMI?
On my boat, a Raspberry Pi 3 B+ running Buster is used in two ways:
It runs Kodi to play music, which outputs via Bluetooth to a car radio that outputs to speakers. (It took two days of work to get that to happen; I finally found https://peppe8o.com/fixed-connect-bluetooth-headphones-with-your-raspberry-pi/)
It also plays movies, which output to an HDMI projector with speakers.
The Pi boots up into LXDE which runs a bash script to connect (and to keep trying to connect as per BluManCZ's answer in https://unix.stackexchange.com/questions/334386/how-to-set-up-automatic-connection-of-bluetooth-headset) to the radio by bluetooth and then autostarts Kodi.
The music is controlled by the Yatse Android app.
When I want to play a movie, I stop the music with the Yatse app, then turn on the projector and use a wireless keyboard to play a movie. But I have to manually go to Kodi's system settings and select HDMI as the sound output.
When I finish the movie, I power off the Pi correctly. But if I forget to first go and manually set the sound output back to PulseAudio within Kodi, then when I next boot it up and expect to get music, I hear nothing, as it is still going to HDMI. So then I have to turn on the projector in order to use the keyboard to switch it back over to PulseAudio.
So, is there some way I can get it to either output to BOTH pulseaudio (bluetooth) AND HDMI so that whichever device is switched on (radio or projector) I get sound out?
Or can I have it automatically detect which one is active and output to that?
How do I get it so that I can seamlessly switch from playing music through the bluetooth to playing videos through the projector?
Bear in mind that when I power up the Pi, either one, the other, or both of the radio and projector might be powered on at that time.
OK, I solved it, I think, using the following steps. This assumes that Bluetooth output to the A2DP speakers works (see the steps I took for that in the original question):
Install paprefs:
$ sudo apt install paprefs
Then run it on the desktop using Terminal:
$ paprefs
This brings up a GUI window with several tabs. Select the 'Simultaneous Output' tab, which offers one checkbox to enable or disable the feature. Turn it on.
Then restart PulseAudio:
$ killall pulseaudio
Now you can go to VLC and select the audio tab to send the output to Simultaneous output. The sound will go to both. Unfortunately when you shut down VLC it goes back to HDMI and you have to manually change it again. Also in Kodi it does not appear as an option.
So, a few more steps:
Edit the default configuration for pulseaudio
$ sudo nano /etc/pulse/default.pa
and add the following line at the beginning, before any other modules are loaded:
load-module module-combine-sink sink_name=combined
This sets up a new virtual device that outputs to all the others.
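Incidentally, if you want to try the combined sink before committing to the config file change, you should be able to load the module at runtime as well (this does not persist across a PulseAudio restart):
$ pacmd load-module module-combine-sink sink_name=combined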
While you are in the file, also make sure that this line is in there somewhere (probably farther down):
load-module module-default-device-restore
This will revert back to the default device if something changes in the system (e.g. HDMI is turned off or on).
Exit nano and save the file by pressing Ctrl-X and saying yes to the prompts.
List the available devices known to pulseaudio:
$ pacmd list-sinks | grep -e 'name:' -e 'index:'
This should now list the Bluetooth, jack, HDMI, and also the combined devices. The one with the asterisk is the current fallback device. You want to make this the 'combined' sink. To do that:
$ sudo pacmd set-default-sink combined
$ sudo reboot 0
Check again, and this time the combined sink should have the asterisk next to it:
$ pacmd list-sinks | grep -e 'name:' -e 'index:'
Now when you play VLC or Kodi the sound should go to both HDMI and Bluetooth.
This seems to survive a full shutdown and power-up, so I think it achieves the goal. I have not yet tried all the different combinations of starting with the different output devices on or off, but I am hopeful that it works.
I've been trying to figure out GStreamer for "audio only" for a couple of days now, but with instructions differing between 1.0 and 0.10, and most of them concerned with video, I'm having difficulty understanding how it all fits together and talks over a network (same subnet range). Most examples also seem to send audio to a destination instead of waiting for something to connect to it, and I don't think that is what I need.
Basically, I am using the BlueIris camera recording system, which talks to IP cameras. Unfortunately, my cameras do not have microphones, so I would like to use a spare Raspberry Pi with a USB microphone to serve the audio, and BlueIris will connect to it to get the audio. Apparently I can specify alternate audio sources with an RTSP or other streaming source.
The cameras are working great, so gstreamer will just be my audio source.
So my progress so far:
I have figured out how to play audio from the USB microphone to the speakers using:
gst-launch-1.0 alsasrc device=hw:1 ! audioconvert ! autoaudiosink
This is working great.
Then I tried setting up a TCP Sever session to wait for something to connect to it:
gst-launch-1.0 alsasrc device=hw:1 ! audioconvert ! audioresample ! speexenc ! rtpspeexpay ! tcpserversink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
Redistribute latency...
(The server seems to start up without problems.)
And then have a client connect:
gst-launch-1.0 tcpclientsrc ! speexdec ! autoaudiosink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0:
streaming task paused, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
...and that's a big NOPE!
So I'm hoping for testing that I can go to my Windows machine and start up VLC and try to connect to the Raspberry PI with something like rtsp://192.168.0.123 but this is where it all gets fuzzy, especially when I can't even get gstreamer to connect to itself on the same box.
Can someone please help?
This did it for me:
gst-launch-1.0 alsasrc device=hw:1,0 ! mulawenc ! rtppcmupay ! udpsink host=224.1.1.1 auto-multicast=true port=5000
Now VLC works going to rtp://224.1.1.1:5000 and has the correct codec that I wanted.
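For reference, a matching GStreamer receiver on another machine should look something like the following; the caps string is my assumption based on the PCMU payload used above (payload type 0, 8 kHz):
gst-launch-1.0 udpsrc address=224.1.1.1 port=5000 auto-multicast=true caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,payload=(int)0" ! rtppcmudepay ! mulawdec ! audioconvert ! autoaudiosink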
On to the next problem...
I'm about to build an automatic intrusion detection system (IDS) behind my FritzBox router on my home LAN.
I'm using a Raspberry Pi with Raspbian Jessie, but any distro would be OK.
After some searching and trying things out, I found ntop (ntopng, to be honest, but I guess my question applies to any version).
ntop can capture network traffic on its own, but that's not what I want, because I want to get all the traffic without putting the Pi between the devices or letting it act as a gateway (for performance reasons). Fortunately, my FritzBox OS has a function to simulate a mirror port: you can download a .pcap which is continuously written in real time. I do it with a script from this link.
The problem is that I can't pipe the wget download to ntop as I can with, e.g., tshark.
I'm looking for something like:
wget -O - http://fritz.box/never_ending.pcap | ntopng -f -
While this works fine:
wget -O - http://fritz.box/never_ending.pcap | tshark -i -
Suggestions for other analysis software are OK (if pretty enough ;) ), but I want to use the FritzBox pcap approach...
Thanks for saving another day of mine :)
Edit:
So I'm coming to these approaches:
Make chunks of pcaps and run a script to analyse each pcap one after another. Problem: ntop does not merge the results, and I could get a storage problem if traffic runs hot (see the ring-buffer sketch below).
Pipe wget to tshark and overwrite one pcap each time, then analyse it with ntop. Problem: again, the storage.
Pipe wget to tshark, cut some information out, and store it in a database. Problem: which info should I store, and what program prefers databases to pcaps?
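A sketch of the ring-buffer idea from the first two approaches, assuming tshark's -b option works with a pipe as input; this keeps only the two most recent 10 MB files, which bounds the storage problem:
wget -O - http://fritz.box/never_ending.pcap | tshark -i - -w /tmp/fritz.pcap -b filesize:10240 -b files:2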
The -i option in tshark is to specify an interface, whereas the -f option in ntop is to specify a name for the dump-file.
In ntopng I didn't even know there was a -f option!?
Does this solve your problem?
I'm using Raspbian.
I found the Pi4J library to control the GPIOs. Now I want to control the sound (make it squeak) on the 3.5 mm jack. Is there a library or commands for this?
What are you running? (Raspbian, XBMC, Arch...)
I don't know of any libraries that could do this for you, but you can always resort to the following, assuming you're using Raspbian.
Use Java to make the following system calls with 'exec':
amixer -c 0 set PCM 2dB+   # Volume up.
amixer -c 0 set PCM 2dB-   # Volume down.
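If what you want is just a short test tone (the "squeak"), another option is ALSA's speaker-test, which you could call the same way via 'exec'; the frequency and loop count here are just examples:
speaker-test -t sine -f 1000 -l 1   # Play one loop of a 1 kHz sine tone.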
Hope this helps!
EDIT: As for your new question, you can try the following:
Toolkit.getDefaultToolkit().beep();