Raspberry Pi video streaming through SPI

I'm somewhat of a beginner with GStreamer.
I am trying to stream live video from a Raspberry Pi to an MSP430 MCU over an SPI connection.
My current idea is to get the buffers directly on the Raspberry Pi using appsink, and then send them over the SPI connection.
But I am not sure where the buffers are stored, or how they are stored.
I've been looking for appsink examples, but I'm not really sure whether I can continuously pull a stream of buffers.
Is there any way to do that?
Any other, better way to stream video over an SPI connection would also be helpful.
Thank you.
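On the continuous-pull question: appsink's "new-sample" signal fires for every buffer, and pulling the sample gives you the raw frame bytes, so you do get an ongoing stream rather than a one-shot read. Below is a minimal Python sketch of the other half, splitting each frame into SPI-sized transfers. The GStreamer and spidev wiring are deliberately stubbed out (`spi_write` stands in for something like spidev's `xfer2`), so only the assumed chunking logic is shown:

```python
# Sketch: split each frame pulled from appsink into SPI-sized chunks.
# In a real pipeline, a "new-sample" callback on the appsink element
# would hand you the frame bytes; spi_write is a stand-in for the
# actual SPI transfer call (hypothetical wiring, not shown here).

def send_frame_over_spi(frame: bytes, spi_write, chunk_size: int = 4096):
    """Send one video frame over SPI in fixed-size chunks."""
    for offset in range(0, len(frame), chunk_size):
        spi_write(frame[offset:offset + chunk_size])

# Usage with a capturing stub instead of real SPI hardware:
sent = []
send_frame_over_spi(b"\x00" * 10000, sent.append, chunk_size=4096)
# sent now holds three chunks: 4096 + 4096 + 1808 bytes
```

Note that raw video frames are large relative to SPI bandwidth, so in practice you would want the pipeline to hand appsink encoded (e.g. H.264 or JPEG) buffers rather than raw frames.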

Related

How to stream live video in H.264 using a Raspberry Pi

So I have been trying for an (embarrassingly) long time to efficiently stream video from my Pi to my computer over the internet. Video encoding and streaming still look fuzzy to me, since most examples hide what happens behind the scenes.
Things I've tried:
OpenCV capture, encode as JPEG, send through a socket, and decode: I get a working video feed, but the data usage is crazy.
Using the picamera module, capture and send over a socket, receive and pipe into VLC: this also worked, and data usage is low, but latency is even higher than with the first approach. I also want to play the video in my own PyQt UI.
Using the picamera module, capture and send over a socket, pipe into FFmpeg, and try to use OpenCV to capture from stdout: I got "bad argument" errors from the cv2.VideoCapture(stdout) call.
All I want is an efficient way to transmit video with low bandwidth and latency using H.264, with no pre-written protocols like RTMP and no servers to install, and then blast it onto my PyQt UI. Something like:
bytes = video.capture().encode('h264')
udp_socket.sendto(bytes, (ip, port))
And on the receiving side:
data = udp_socket.recvfrom(65534)
frame = data.decode()
update_frame()
The closest thing I got to this was with the first approach.
Any help would be greatly appreciated.
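The pseudocode above maps fairly directly onto Python's socket module. The sketch below only shows the UDP plumbing with a stand-in payload; the H.264 encoding itself is assumed to come from elsewhere (picamera can record H.264 directly, for instance), and fragmenting encoded units to fit inside single datagrams is left out:

```python
# Minimal sketch of the send/receive pattern described above,
# demonstrated over loopback with a fake payload in place of real
# H.264 data.
import socket

def send_chunk(tx: socket.socket, payload: bytes, addr) -> None:
    # One datagram per chunk; the payload must fit in a single UDP packet.
    tx.sendto(payload, addr)

def recv_chunk(rx: socket.socket, bufsize: int = 65534) -> bytes:
    data, _sender = rx.recvfrom(bufsize)
    return data

# Loopback demo standing in for Pi -> desktop:
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_chunk(tx, b"fake-h264-data", ("127.0.0.1", port))
frame = recv_chunk(rx)              # here you would feed a decoder
tx.close()
rx.close()
```

Keep in mind UDP gives no delivery or ordering guarantees, which is part of what RTP adds on top; for a LAN link the raw-socket approach can still work acceptably.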

How to get real time audio data from raspberry pi to simulink and apply real time signal processing

I am facing a problem getting live (real-time) audio data from a Raspberry Pi into Simulink.
I want to continuously capture data from the microphone connected to the Raspberry Pi and process it continuously in Simulink as well.
I am using the ALSA Audio Capture block, but it's not working for me.
Has anyone tried this before and can help, please?
Thank you.

How to stream video from a Raspberry Pi to Elixir and to the web

I'm trying to stream high-quality live video from a Raspberry Pi 3 + camera module. I want to publish the stream on a public webpage (using video.js, I think). I also want an Elixir server (on the same LAN as the Pi) to consume the stream to detect faces and motion in it. I am a video-streaming noob.
What's the best way to do this? Specifically:
What transport mechanism should I use on the Pi? RTP? WebRTC? Is that even a transport mechanism?
How do I pull images out of the stream (whatever transport mechanism I used above) in Elixir? Is there a library that does this?
If I want to support more simultaneous users than the Pi can handle, what's the right way to proxy the stream through Phoenix?
Thanks!

Alexa Raspberry Pi

I have a Raspberry Pi Model B+ and I was thinking of integrating it with Alexa Voice Service. I managed to get my Raspberry Pi and Alexa Voice Service working up to the point where Alexa says hello. To achieve this I also used a PC108 media USB external sound card, so I'm getting both input from my plug-in microphone and output through my mini-jack connection to a speaker. The thing is, something is still missing. What do I have to do to make Alexa listen?
Thank you in advance.
At re:Invent 2016 they ran a workshop on doing this. Take a look at the slides from the session and the workshop instructions. We used a simple USB microphone, and sound output is built into the Pi. The sample app is still being updated, so it should be good to go.
This was with a Pi 3, but the basics should still be the same.
You can also use Picroft, an image of Mycroft, an open-source assistant: just burn it onto an SD card and use it.
https://mycroft.ai/mycroft-now-available-raspberry-pi-image/
If you want to create skills: https://docs.mycroft.ai/skill.creation

Raspberry Pi Motion-Mmal: I want to disable live stream capabilities

I have a few questions about Motion-MMAL for the Raspberry Pi B+ model, running on Raspbian. Sorry if these are noob questions.
1) I want to disable the live-stream capability completely; however, I can only find information on how to keep the live stream local.
2) If I do not visit the local address for the live stream, is it still technically uploading data and live streaming?
I think if you comment out the webcam_port value in motion.conf then you will not be streaming; the default is 0, which means disabled.
Motion will still record video when there is actual motion, but nobody will be able to view a live stream.
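A minimal motion.conf fragment reflecting the answer above (assuming a Motion version where the option is still named webcam_port; newer Motion releases reportedly renamed it stream_port):

```
# Disable the built-in live stream entirely; 0 means off.
webcam_port 0
```

Motion detection and recording are configured separately, so setting this to 0 does not stop motion-triggered recording.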