Delay/slow live streaming video in Simulink using Raspberry Pi camera - matlab

I have a Pi camera module attached to a Pi B board. I am streaming live video from the Pi camera into Simulink on my PC via an Edimax WiFi adapter, through a router.
At 10 fps and a resolution of 320*240 I am seeing a delay in the Simulink Video Viewer. Why does this delay happen? Is it the speed or range of the WiFi adapter, or my laptop processor (an Intel i3 at 2.4 GHz with 4 GB RAM)?
Is there a way to reduce the delay?

Since you are using a low image resolution and frame rate, the most likely reason for the delay you are experiencing is the limited bandwidth of the WiFi link. Even at 320*240 and 10 fps, uncompressed RGB video needs about 320*240*3*10 ≈ 2.3 MB/s (roughly 18 Mbit/s) before protocol overhead, which can saturate a weak WiFi connection.
If you can, connect the Raspberry Pi with an Ethernet cable instead; you will considerably increase the available bandwidth and reduce the delays in the video transmission.
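If you want to verify that the link itself is the bottleneck, a quick throughput check between the Pi and the PC will tell you. Below is a minimal sketch in Python (the port number and the 10 MB payload are arbitrary choices, not anything Simulink requires): run it with the argument server on the PC, then client <pc_ip> on the Pi, first over WiFi and then over Ethernet, and compare the numbers.

import socket
import sys
import time

PORT = 5001                     # arbitrary test port
PAYLOAD = 10 * 1024 * 1024      # 10 MB of dummy data

def server():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    got, t0 = 0, time.time()
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        got += len(chunk)
    secs = time.time() - t0
    print(f"received {got} bytes: {got * 8 / secs / 1e6:.1f} Mbit/s")

def client(host):
    cli = socket.create_connection((host, PORT))
    cli.sendall(b"\x00" * PAYLOAD)
    cli.close()

if sys.argv[1] == "server":
    server()
else:
    client(sys.argv[2])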

Related

How to get real-time audio data from a Raspberry Pi to Simulink and apply real-time signal processing

I am facing a problem getting live (real-time) audio data from a Raspberry Pi into Simulink.
I want to capture data continuously from the microphone connected to the Raspberry Pi and process it continuously in Simulink as well.
I am using the ALSA Audio Capture block, but it's not working for me.
Has anyone tried this before and can help, please?
Thank you.
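One way to narrow the problem down is to take the ALSA Audio Capture block out of the loop: capture the audio on the Pi with ALSA's arecord and push the raw PCM to the PC over UDP, where a Simulink UDP Receive block can pick it up. A minimal sketch of the Pi side (the host IP, port, ALSA device, and audio format are placeholders you would adapt to your setup):

import socket
import subprocess

PC_IP, PC_PORT = "192.168.1.10", 25000   # placeholder address of the Simulink host
CHUNK = 1024                             # bytes per UDP datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# capture raw 16-bit mono PCM at 16 kHz from ALSA card 1
proc = subprocess.Popen(
    ["arecord", "-D", "plughw:1,0", "-f", "S16_LE",
     "-r", "16000", "-c", "1", "-t", "raw"],
    stdout=subprocess.PIPE)

while True:
    data = proc.stdout.read(CHUNK)
    if not data:
        break
    sock.sendto(data, (PC_IP, PC_PORT))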

Transmit a live feed from a webcam over RF and receive the same feed on a different Pluto

Is there any way to transmit a live feed from a Raspberry Pi camera / webcam over an RF channel with a software-defined radio, and receive the feed somewhere else?
I tried to connect everything in MATLAB Simulink, but the delay is totally unacceptable. Is there any way to transmit it over RF?
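For what it's worth, moving raw samples in and out of an ADALM-Pluto from Python is simple with the pyadi-iio bindings; what costs the latency is the modem around it (pulse shaping, synchronization, error correction), which is the part Simulink implements for you. A toy sketch under those assumptions (the IP address, carrier frequency, and the naive BPSK mapping are placeholders, and this is not a working video link by itself):

import adi          # pyadi-iio
import numpy as np

sdr = adi.Pluto("ip:192.168.2.1")    # placeholder Pluto address
sdr.sample_rate = int(1e6)
sdr.tx_lo = int(915e6)               # placeholder carrier frequency
sdr.rx_lo = int(915e6)
sdr.rx_buffer_size = 4096

frame = open("frame.jpg", "rb").read()               # placeholder: one camera frame
bits = np.unpackbits(np.frombuffer(frame, dtype=np.uint8))
symbols = 2.0 * bits - 1.0                           # naive BPSK: 0 -> -1, 1 -> +1
sdr.tx((symbols + 0j) * 2**14)                       # Pluto expects ~2**14 full scale

raw = sdr.rx()   # on the receiving Pluto: a block of complex samples
# turning `raw` back into bits needs timing/phase recovery and error
# correction, i.e. a real modem - this sketch deliberately omits all of that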

Image segmentation with Raspberry Pi

I have been trying to perform image segmentation on a Raspberry Pi. I searched for pre-trained models and came across TensorFlow Lite, which includes a DeepLab model; it is very small (2.7 MB) and can be used on IoT devices. But in my case I have a custom dataset and need to train the model on it (i.e. training DeepLab with a custom dataset). My issue is that the Raspberry Pi has comparatively little RAM and storage. So, if I train DeepLab on the custom dataset, can I run it on the Raspberry Pi? If so, is there any tutorial or research paper about it?
You can use this training script. Clone the repository and run model.py from model/research/deeplab.
I wouldn't train the model on the Raspberry Pi, because it's painfully slow. A better approach is to train it on a PC (ideally with GPU support) and then export the trained model to the Raspberry Pi.
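The usual export path for the Pi is TensorFlow Lite, the same format as the pre-trained DeepLab model mentioned in the question. A minimal sketch, assuming the trained model was written out with tf.saved_model.save() to ./deeplab_export (the paths are placeholders):

import tensorflow as tf

# on the PC: convert the trained SavedModel to a .tflite file
converter = tf.lite.TFLiteConverter.from_saved_model("./deeplab_export")
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
with open("deeplab.tflite", "wb") as f:
    f.write(converter.convert())

# on the Raspberry Pi: load it with the lightweight tflite_runtime package
# from tflite_runtime.interpreter import Interpreter
# interpreter = Interpreter(model_path="deeplab.tflite")
# interpreter.allocate_tensors()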

Gstreamer crackling sound on Raspberry Pi 3 while playing video

I am playing a hardware accelerated video on the Raspberry Pi 3 using this simple pipeline:
gst-launch-1.0 playbin uri=file:///test/test.mp4
As soon as the video begins to play, any sound played in parallel through ALSA begins to crackle (tested with gstreamer and mplayer). It's a simple WAV file, and I am using a USB audio interface.
Listening to the headphone jack already crackles without playing an audio file (but this jack is very low quality and I don't know if that's a different effect).
Playing the audio in the same pipeline as the video does not help. CPU load is only around 30% and there is plenty of free memory. I have already overclocked the SD card. Playing two videos in parallel with omxplayer has no impact and the sound still plays well. But as soon as I start the pipeline above, the sound begins to crackle.
I tried "stress" to simulate high CPU load. This had no impact either, so the CPU does not seem to be the problem (but maybe the GPU is?).
This is the gstreamer pipeline to test the audio:
gst-launch-1.0 filesrc location=/test/test.wav ! wavparse ! audioconvert ! alsasink device=hw:1,0
GST_DEBUG=4 shows no problems.
I tried putting queues in different places, but nothing helped. Playing a video without audio tracks works a little better. But I have no idea where the resource shortage may lie, if there even is one.
It somehow seems like gstreamer is disturbing other audio streams.
Any ideas where the problem may be are highly appreciated.
It seems like the USB driver of my interface expects a very responsive system. I bought a cheap new USB audio interface with a bInterval value of 10 instead of 1 (the polling interval reported in the USB endpoint descriptor), and everything works fine now. More details can be found here: https://github.com/raspberrypi/linux/issues/2215
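If you want to check the bInterval of your own interface before buying a new one, the endpoint descriptors are easy to read out, for example with lsusb -v or, as a sketch, with the pyusb package (an assumption; any descriptor dump works):

import usb.core   # pyusb

# print the bInterval of every endpoint of every attached USB device
for dev in usb.core.find(find_all=True):
    for cfg in dev:
        for intf in cfg:
            for ep in intf:
                print(f"{dev.idVendor:04x}:{dev.idProduct:04x} "
                      f"endpoint 0x{ep.bEndpointAddress:02x} "
                      f"bInterval={ep.bInterval}")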

Raspberry Pi video streaming through SPI

I'm somewhat of a beginner with gstreamer.
I am trying to stream live video from a Raspberry Pi to an MSP430 MCU over an SPI connection.
My current idea is to grab the buffers directly on the Raspberry Pi using appsink and then send them over the SPI connection.
But I am not sure where the buffers are kept or how to access them.
I'm looking for examples using appsink, but I'm not really sure whether I can continuously pull a stream of buffers.
Is there any way to do that?
Any other, better way to stream video over an SPI connection would also help.
Thank you.
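Pulling a continuous stream of buffers is exactly what appsink is for: each frame arrives as a GstBuffer that you can copy out and forward however you like. A minimal sketch using GStreamer's Python bindings (PyGObject) and the spidev module (the test-pattern source, frame size, and SPI bus/device numbers are placeholders; the MSP430 side has to reassemble the chunks):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import spidev

Gst.init(None)

spi = spidev.SpiDev()
spi.open(0, 0)                    # bus 0, chip-select 0
spi.max_speed_hz = 10_000_000     # SPI, not GStreamer, is the bottleneck here

# appsink hands each frame to application code; drop=true keeps latency bounded
pipeline = Gst.parse_launch(
    "videotestsrc ! video/x-raw,width=160,height=120 "
    "! appsink name=sink max-buffers=2 drop=true")
sink = pipeline.get_by_name("sink")
pipeline.set_state(Gst.State.PLAYING)

while True:
    sample = sink.emit("pull-sample")          # blocks until the next frame, None at EOS
    if sample is None:
        break
    buf = sample.get_buffer()
    data = buf.extract_dup(0, buf.get_size())  # copy the frame out of the GstBuffer
    # spidev transfers are limited to 4096 bytes by default, so chunk the frame
    for i in range(0, len(data), 4096):
        spi.xfer2(list(data[i:i + 4096]))

pipeline.set_state(Gst.State.NULL)

Note that even at 10 MHz the SPI bus moves at most about 1.25 MB/s, while a raw 160*120 RGB frame is already about 58 KB, so the achievable frame rate over SPI is modest regardless of how the buffers are pulled.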