How does OpenCV handle TCP Connections? - sockets

I setup a NetCat Video Stream from my RPi and I am accessing it with OpenCV in the following way:
videoStream = cv2.VideoCapture("tcp://#<my_ip>:<my_port>/")
...
videoStream.release()
Unfortunately I cannot connect to the stream multiple times without reinitializing it. How does OpenCV treat my TCP connection? Does .release() properly close the socket, or what is the right way to close it?

I would comment but I do not have enough points. I had a similar issue. Ultimately, what worked for me was to run netcat with the -k option, which allows reconnecting:
on RPI:
/opt/vc/bin/raspivid -n -t 0 -w 640 -h 360 -fps 30 -ih -fl -l -o - | /bin/nc -klvp 5000
For nc, the -k option keeps the port listening after the first client disconnects, thereby allowing you to reconnect. You won't need the -v option; it just adds some verbosity.
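What -k does is keep the listening socket open and go back to accepting after each client disconnects. A minimal sketch of that accept loop in Python (localhost only, with a hypothetical fixed payload standing in for the video bytes):

```python
import socket
import threading

def serve_forever(server_sock, payload=b"frame-bytes"):
    """Keep accepting clients after each disconnect, like `nc -k`."""
    while True:
        try:
            conn, _ = server_sock.accept()
        except OSError:          # listening socket closed -> stop serving
            return
        with conn:
            conn.sendall(payload)

# listen on an ephemeral localhost port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_forever, args=(server,), daemon=True).start()

# connect, read, disconnect -- twice; without the accept loop
# (i.e. nc without -k) the second connection would be refused
results = []
for _ in range(2):
    with socket.create_connection(("127.0.0.1", port)) as c:
        results.append(c.recv(1024))
server.close()
print(results)
```

Without the `while True` around `accept()`, the second `create_connection` would fail exactly the way a second `cv2.VideoCapture` fails against plain `nc`.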
Alternatively, you can view the stream directly on the receiver (Ubuntu, Win10):
nc x.x.x.x 5000 | mplayer -fps 200 -demuxer h264es -
or
gst-launch-1.0 -v tcpclientsrc host=10.60.66.237 port=5000 ! decodebin ! autovideosink
Python code with OpenCV:
import cv2

cap = cv2.VideoCapture("tcp://10.60.66.237:5000")
while True:
    ret, frame = cap.read()
    if not ret:          # stream ended or connection dropped
        break
    cv2.imshow('frame', frame)
    # 'q' quits; use any key of your choice
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Disconnect and reconnect all you want :)
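If you want the client itself to survive disconnects, you can wrap the open/read/release cycle in a small retry helper. This is a sketch, not OpenCV API: open_capture is a hypothetical zero-arg factory (with OpenCV it would be lambda: cv2.VideoCapture(url)), and anything with .read() and .release() works:

```python
import time

def read_with_reconnect(open_capture, handle_frame, max_retries=3, delay=0.0):
    """Re-open the capture whenever reading fails, up to max_retries times.

    open_capture: zero-arg factory returning an object with
                  .read() -> (ok, frame) and .release();
                  with OpenCV: lambda: cv2.VideoCapture(url)
    handle_frame: called per frame; return False to stop cleanly.
    """
    retries = 0
    while retries <= max_retries:
        cap = open_capture()
        try:
            while True:
                ok, frame = cap.read()
                if not ok:                  # stream dropped -> reconnect
                    break
                if handle_frame(frame) is False:
                    return True
        finally:
            cap.release()                   # always close the old connection
        retries += 1
        time.sleep(delay)
    return False
```

The key point is that .release() runs before every reconnect attempt, so the previous socket is closed before a new VideoCapture is created.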


Use GStreamer to pack existing h264 stream and send it over network to VLC

I'm trying to stream from a Raspberry Pi camera over the network using raspivid and the gstreamer CLI. I want to be able to view the stream using VLC's "open network stream" on the client.
This is related to the question GStreamer rtp stream to vlc, but mine is not quite the same. Instead of encoding the raw output from my Pi camera, my idea is to leverage the existing h264 output of raspivid, mux it into an appropriate container and send it over TCP or UDP.
I was able to successfully capture the h264 output from raspivid into an mp4 file (with correct fps and length information) using this pipeline:
raspivid -n -w 1280 -h 720 -fps 24 -b 4500000 -a 12 -t 30000 -o - | \
gst-launch-1.0 -v fdsrc ! video/x-h264, width=1280, height=720, framerate=24/1 ! \
h264parse ! mp4mux ! filesink location="videofile.mp4"
However, when I try to stream this over a network:
raspivid -n -w 1280 -h 720 -fps 24 -b 4500000 -a 12 -t 0 -o - | \
gst-launch-1.0 -v fdsrc ! video/x-h264, width=1280, height=720, framerate=24/1 ! \
h264parse ! mpegtsmux ! rtpmp2tpay ! udpsink host=192.168.1.20 port=5000
...and try to open the stream using rtp://192.168.1.20:5000 on VLC, it reports an error.
Edit: Ok, I was mistaken to assume that udpsink listens for incoming connections. However, after changing the last part of the pipeline to use my client's IP address (! udpsink host=192.168.1.77 port=5000) and trying to open that with udp://@:5000 in VLC, the player does not display anything (both the Pi and the receiving computer are on the same LAN, and I can see the incoming network traffic on the client).
Does anyone know how to properly construct a gstreamer pipeline to transmit existing h264 stream over a network which can be played by vanilla VLC on the client?
This is likely due to missing SPS/PPS data; it would probably work if you started VLC first and then the video pipeline on the Raspberry Pi. By default, the SPS/PPS headers are most likely sent only once, at the beginning of the stream.
If the receiver misses the SPS/PPS headers, it will not be able to decode the H.264 stream. I guess this can be fixed by using the config-interval=-1 property of h264parse.
With that option, SPS/PPS data should be sent before each IDR frame, which should occur every couple of seconds, depending on the encoder.
Another thing: you don't need the rtpmp2tpay element. Just sending MPEG-TS over UDP directly should be enough.
Having said that, the pipeline should look like this:
raspivid -n -w 1280 -h 720 -fps 24 -b 4500000 -a 12 -t 0 -o - | \
gst-launch-1.0 -v fdsrc ! \
video/x-h264, width=1280, height=720, framerate=24/1 ! \
h264parse config-interval=-1 ! mpegtsmux ! udpsink host=192.168.1.77 port=5000
The 192.168.1.77 is the IP address of the client running VLC at udp://@:5000. Also, make sure no firewalls are blocking the incoming UDP traffic towards the client (the Windows firewall, in particular).
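To sanity-check that SPS/PPS NAL units really are being repeated, you can scan the raw H.264 byte stream for Annex-B start codes and decode each NAL unit type (7 = SPS, 8 = PPS, 5 = IDR slice). A minimal sketch, run here on a synthetic stream:

```python
def nal_unit_types(data: bytes):
    """Return the nal_unit_type of each Annex-B NAL unit in an H.264 stream.

    NAL units are delimited by 00 00 01 (or 00 00 00 01) start codes;
    the type is the low 5 bits of the first byte after the start code.
    """
    types, i = [], 0
    while True:
        i = data.find(b"\x00\x00\x01", i)
        if i == -1 or i + 3 >= len(data):
            return types
        types.append(data[i + 3] & 0x1F)
        i += 3

# synthetic stream: SPS (7), PPS (8), IDR slice (5)
stream = (b"\x00\x00\x00\x01\x67\x64\x00\x1f"   # 0x67 & 0x1F == 7 -> SPS
          b"\x00\x00\x00\x01\x68\xee"           # 0x68 & 0x1F == 8 -> PPS
          b"\x00\x00\x01\x65\x88")              # 0x65 & 0x1F == 5 -> IDR
print(nal_unit_types(stream))  # [7, 8, 5]
```

With config-interval=-1 working, a dump of the raspivid output should show types 7 and 8 recurring before each type-5 unit rather than only once at the start.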

Rotating per packets receiving by TCPDUMP

How can I use the tcpdump command to capture each received packet and save it to a separate file (rotating per packet, without losing any packets)?
How about saving the dump to a file and then splitting it into separate files?
$ sudo tcpdump -c 10 -w mycap.pcap
tcpdump: data link type PKTAP
tcpdump: listening on pktap, link-type PKTAP (Packet Tap), capture size 65535 bytes
10 packets captured
You'll need to have Wireshark installed for this to work (e.g. with brew install wireshark on macOS, or via apt-get on Ubuntu):
$ editcap -c 1 mycap.pcap output.pcap
10 packets captured -> 10 files created
$ ls -la output* | wc -l
10
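editcap is the simplest tool, but the per-packet split can also be done in pure Python, since the classic pcap format is just a 24-byte global header followed by a 16-byte record header per packet. A sketch covering only the common little-endian, microsecond-resolution format (demonstrated on a synthetic capture rather than a real file):

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4          # classic little-endian, microsecond pcap

def split_pcap(data: bytes):
    """Split raw pcap bytes into one single-packet pcap per packet."""
    global_header = data[:24]
    magic, = struct.unpack("<I", data[:4])
    assert magic == PCAP_MAGIC, "only little-endian microsecond pcap handled here"
    out, offset = [], 24
    while offset + 16 <= len(data):
        record_header = data[offset:offset + 16]
        # incl_len = number of packet bytes actually stored in the file
        incl_len, = struct.unpack("<I", record_header[8:12])
        packet = data[offset + 16:offset + 16 + incl_len]
        out.append(global_header + record_header + packet)
        offset += 16 + incl_len
    return out

# build a synthetic 3-packet capture and split it
header = struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1)
records = b"".join(struct.pack("<IIII", 0, 0, len(p), len(p)) + p
                   for p in (b"aaa", b"bb", b"c"))
parts = split_pcap(header + records)
print(len(parts))  # 3
```

Each returned chunk is itself a valid one-packet pcap (original global header + one record), so writing them to numbered files reproduces what `editcap -c 1` does.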

Raspberry Pi / Octopi USB Camera Setup: What does this mean and how do I use it?

I'm very new to Raspberry Pi, and have no prior notable experience with Linux so this is all new to me...
Octoprint is a 3D printer spooler that you can run on your raspberry pi. One of the features on Octoprint is the ability to setup a USB camera to view either still images or a stream of your print.
I am using the Octopi prepackaged Octoprint image.
Octoprint's github contains the following info referring to my USB camera. But I have no idea how to implement this.
Hama PC-Webcam "AC-150" on Raspberry Pi
./mjpg_streamer -o output_http.so -w ./www -i input_uvc.so -y -r 640x480 -f 10
https://github.com/foosel/OctoPrint/wiki/Webcams-known-to-work
I'm guessing this is an easy command that I enter via console, but I've winged a few commands with no luck. Can someone shed some light on how I use this? Like I said, I'm an absolute beginner with the Pi...
Any help is greatly appreciated!
Try this:
camera_usb_options="-r VGA -f 10 -y"
sudo service octoprint stop
fuser /dev/video0
/dev/video0: 1871m
$ ps axl | grep 1871    # replace 1871 with the PID that fuser printed for you
$ kill -9 1871
./mjpg_streamer -i "input_uvc.so $camera_usb_options" -o "output_http.so -w ./www"
sudo service octoprint start
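The fuser step above (finding which process holds /dev/video0 open) can also be done from Python on Linux by scanning /proc/&lt;pid&gt;/fd. This is a sketch, not a fuser replacement: without root it only sees your own processes.

```python
import os

def pids_using(path):
    """Return PIDs of processes (visible to us) that have `path` open."""
    target = os.path.realpath(path)
    pids = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        fd_dir = "/proc/%s/fd" % pid
        try:
            fds = os.listdir(fd_dir)
        except OSError:              # no permission, or process exited
            continue
        for fd in fds:
            try:
                # each entry is a symlink to the open file
                if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                    pids.append(int(pid))
                    break
            except OSError:
                continue
    return pids
```

Usage would be pids_using("/dev/video0"); the returned PIDs are what you would pass to kill before starting mjpg_streamer.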

tshark doesn't always print source ip

How can I get the TCP payload of packets with tshark, and also get the source IP that sent these packets?
This command works for most packets, but some packets are still printed WITHOUT a source IP (why?):
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src
To test my command, run it and then browse to http://stackoverflow.com. Notice that usually the data chunks ("47:45:54:20:2f:61:64:73:...") have an IP after them, but not always.
I found the problem:
The packets with a missing source IP were IPv6, but my original command only prints IPv4.
This works:
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data -e ip.src -e ipv6.src
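If you post-process the two-column output, Python's ipaddress module makes it easy to pick whichever source field was populated and tell v4 from v6. A small sketch (the field values are illustrative):

```python
import ipaddress

def source_address(ip4_field, ip6_field):
    """Pick whichever tshark source field is non-empty and parse it."""
    addr = ipaddress.ip_address(ip4_field or ip6_field)
    return addr, addr.version

addr, version = source_address("", "2001:db8::1")
print(addr, version)  # 2001:db8::1 6
```

This mirrors the fix above: a row with an empty ip.src column is not missing its source, it simply carried the address in ipv6.src instead.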

Testing iPhone app with limited network access

Is there any way of simulating limited or no 3G / Wifi / EDGE connectivity when using the iPhone simulator?
Is it the variations in speed you wish to test? Or access to each technology?
If it's speed, then you could use the following ipfw trick, courtesy of Craig Hockenberry of the Icon Factory, to limit connectivity to a given domain. In this example it's twitter, and it limits the speed of all connections to and from the host.
It's a bash script; if you're doing iPhone dev you'll be on a Mac, so just create it and run it in the terminal.
#!/bin/bash

# configuration
host="twitter.com"

# usage
if [ "$*" == "" ]; then
    echo "usage: $0 [off|fast|medium|slow]"
    exit
fi

# remove any previous firewall rules
sudo ipfw list 10 > /dev/null 2>&1
if [ $? -eq 0 ]; then
    sudo ipfw delete 10 > /dev/null 2>&1
fi
sudo ipfw list 11 > /dev/null 2>&1
if [ $? -eq 0 ]; then
    sudo ipfw delete 11 > /dev/null 2>&1
fi

# process the command line option
if [ "$1" == "off" ]; then
    # add rules to deny any connections to configured host
    sudo ipfw add 10 deny tcp from $host to me
    sudo ipfw add 11 deny tcp from me to $host
else
    # create a pipe with limited bandwidth
    bandwidth="100Kbit"
    if [ "$1" == "fast" ]; then
        bandwidth="300Kbit"
    elif [ "$1" == "slow" ]; then
        bandwidth="10Kbit"
    fi
    sudo ipfw pipe 1 config bw $bandwidth

    # add rules to use bandwidth limited pipe
    sudo ipfw add 10 pipe 1 tcp from $host to me
    sudo ipfw add 11 pipe 1 tcp from me to $host
fi
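What the ipfw pipe does (capping cumulative throughput at a fixed rate) can also be mimicked inside an application-level test harness by computing how long each send should have taken. A sketch, not tied to any real firewall:

```python
def required_delay(bytes_sent, elapsed_seconds, rate_bytes_per_sec):
    """Seconds to sleep so cumulative throughput never exceeds the cap."""
    ideal = bytes_sent / rate_bytes_per_sec   # time the bytes *should* take
    return max(0.0, ideal - elapsed_seconds)

# at 100 Kbit/s (12500 bytes/s), sending 12500 bytes instantly
# means sleeping roughly one second before the next send
print(required_delay(12500, 0.0, 12500))  # 1.0
```

Calling time.sleep(required_delay(...)) after each write gives a crude in-process equivalent of the 100Kbit/300Kbit/10Kbit pipes in the script above.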
You might want to take a look at SpeedLimit, a Preference Pane for OS X that allows you to throttle bandwidth and control latency.
If you have iPhone tethering, you can turn off your cable modem/ADSL connection and route your internet through your iPhone. This method works really well if your carrier is AT&T. If you don't have AT&T as your carrier, you'll have to try one of the other methods to simulate a crappy connection.
Another lo-fi solution is to wrap your home wireless router in tin foil, or put it in a metal box. What you want to simulate generally is a crappy connection, not just a slow connection. The firewall rules will slow the connection, but won't lose random packets.
Since you're on a Mac, you can use Dummynet. It plugs into ipfw, but can also simulate packet loss. Here's a typical ipfw rule using the Dummynet module:
ipfw add 400 prob 0.05 deny src-ip 10.0.0.0/8
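The prob 0.05 keyword makes the rule fire on roughly 5% of matching packets, chosen at random. The same decision in application-level test code is a one-liner (sketch):

```python
import random

def should_drop(loss_probability, rng=random):
    """Return True with the given probability, like ipfw's `prob` keyword."""
    return rng.random() < loss_probability

rng = random.Random(42)            # seeded for a reproducible demo
drops = sum(should_drop(0.05, rng) for _ in range(10000))
print(drops / 10000)               # close to 0.05
```

Dropping (or delaying) writes based on should_drop in a local proxy gives you the random-loss behaviour that bandwidth throttling alone does not.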
You can test no network by turning your airport off :-)
For finer control, Neil's ipfw suggestion is the best way.