I'm trying to configure a Raspberry Pi 2 to record video data from the camera module to a rosbag. To get the camera working with ROS, I used code I found here: https://github.com/fpasteau/raspicam_node.
This works fine, but I have a problem capturing the data to a rosbag. When capturing in raw mode at a high frame rate, it captures smoothly for a few seconds, then freezes for a few seconds, then captures smoothly for a few seconds, then freezes, ...
For instance, I tried capturing a file at 640x480 @ 30 FPS, and this is what rosbag info yields:
duration:    2:51s (171s)
size:        2.9 GB
messages:    5049
compression: none [2504/2504 chunks]
types:       rosgraph_msgs/Log      [acffd30cd6b6de30f120938c17c593fb]
             sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image      [060021388200f6f0f447d0fcd9c64743]
topics:      /camera/camera_info   2505 msgs : sensor_msgs/CameraInfo
             /camera/image         2504 msgs : sensor_msgs/Image
             /rosout                 22 msgs : rosgraph_msgs/Log      (2 connections)
             /rosout_agg             18 msgs : rosgraph_msgs/Log
So if we have 171 seconds of video at 90 FPS, that should give 15,390 messages, but we only got 2,504, which is about 14 FPS. The file itself is 2.9 GB, which means an average write speed of ~17.5 MB/s. I eventually found a command to test the write speed of the SD card (dd if=/dev/zero of=~/test.tmp bs=500K count=1024), which reports my average write speed as about 19 MB/s.
So my questions are:
If the SD card's write speed is causing the problem, how come the Raspberry Pi can't utilise the full 90 MB/s?
Can I tune the Raspberry Pi to write to the SD card faster?
I thought about getting a Banana Pi, which comes with SATA, so I could connect a SATA drive and shouldn't run into any write speed issues. Before making that investment, does anyone have experience with Banana Pis? I saw a test here: http://314256.blogspot.co.uk/2014/11/banana-pi-sata-disk-throughput-test.html, which suggests the Banana Pi should be able to handle it.
Any other ideas for how to make it work on the Raspberry Pi?
It looks like raspicam_node publishes images with bgra8 encoding (raspicam_raw_node.cpp#L266), so we need to store 4*640*480*30 bytes/second ≈ 36.86 MB/s.
However, ~18 MB/s seems to be pretty much the limit on a Raspberry Pi 2 (microSD card performance comparison).
Instead of trying to save all the raw data, have rosbag store the sensor_msgs/CompressedImage messages from the /camera/image/compressed topic. You can tune the <base_topic>/compressed/jpeg_quality parameter (see compressed_image_transport's dynamic reconfigure parameters), but with the default of 80 you should get around a 30:1 compression ratio, i.e. about 1.23 MB/s.
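For example, the recording step could then look something like this (the -O option just names the output bag; topic names as given above):
rosbag record -O camera_compressed.bag /camera/camera_info /camera/image/compressed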
The Raspberry Pi should be able to handle this easily. Given the image quality of the tiny Raspberry Pi camera, you will probably not even perceive any difference in quality.
I'm creating an app (Kivy) for a Raspberry Pi (3B) with a 7-inch touch display. In addition, I implemented a light sensor (TSL2591), which regulates the brightness of the backlight using the following command:
os.system('sudo sh -c "echo '+str(brightness)+' > /sys/class/backlight/rpi_backlight/brightness"')
where brightness takes values from 0 to 255.
This works fine so far, but I update the brightness once a second. If I'm not mistaken, the command overwrites a config file, and I'm wary of writing to the SD card that often. I think the SD card will become corrupted after a short period of time.
Of course I can try to reduce the number of write operations, but that also makes the dimming less smooth:
update less often than once per second
only write when the brightness value actually changes
don't use all 255 steps
So the main question is: is there any other way to control the brightness, or any workaround? I could not find a proper datasheet or any other advice on the internet, so maybe there is another way.
That's not a conventional disk file; it's a "device special file" which the kernel artificially creates to look like a disk file. It allows you to "talk" to device drivers using standard read() and write() calls.
You need not worry about SD card wear.
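Since it is just a device attribute, you can also skip the os.system/sudo round trip and write to it directly from Python. A minimal sketch (it assumes the process has write permission on the sysfs file, e.g. via a udev rule or by running as root):
def set_backlight(brightness):
    # Clamp to the valid 0-255 range and write straight to the sysfs attribute.
    value = max(0, min(255, int(brightness)))
    with open('/sys/class/backlight/rpi_backlight/brightness', 'w') as f:
        f.write(str(value))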
Flashing a Coral Dev Board per the getting started guide results in the error Wrong image format for "source" command. This error is displayed on the serial console when the SD card is inserted in the board and the board is powered up (full output below). I didn't find any documentation for this problem, so I am posting it here in case anyone else has this issue.
U-Boot SPL 2019.04.1 (Apr 29 2020 - 18:40:05 +0000)
power_bd71837_init
Board id: 2
DDRINFO: start DRAM init
DDRINFO:ddrphy calibration done
DDRINFO: ddrmix config done
Normal Boot
Trying to boot from MMC2
hdr read sector 300, count=1
U-Boot 2019.04.1 (Apr 29 2020 - 18:40:05 +0000), Build: jenkins-enterprise.uboot-imx-1
CPU: Freescale i.MX8MQ rev2.0 1500 MHz (running at 1000 MHz)
CPU: Commercial temperature grade (0C to 95C) at 33C
Reset cause: POR
Model: Freescale i.MX8MQ Phanbell
DRAM: 1 GiB
MMC: FSL_SDHC: 0, FSL_SDHC: 1
Loading Environment from MMC... *** Warning - bad CRC, using default environment
In: serial
Out: serial
Err: serial
BuildInfo:
- ATF
- U-Boot 2019.04.1
flash target is MMC:0
Net:
Error: ethernet#30be0000 address not set.
Error: ethernet#30be0000 address not set.
eth-1: ethernet#30be0000
Fastboot: Normal
Normal Boot
Hit any key to stop autoboot: 0
** No partition table - mmc 1 **
## Executing script at 40480000
Wrong image format for "source" command
## Starting auxiliary core at 0x00000000 ...
u-boot=>
This error results from a bad SD card, or perhaps one that has already been used (formatted) for other purposes. I was able to bypass this error and successfully install the OS by burning the image per the getting started guide onto a brand new SD card (I used a Samsung 128GB Pro Endurance card). I used balenaEtcher on a Mac, which burns the image in just a few minutes.
For work I've had to provision dozens of these, and they are super finicky about SD cards. I've bought 4 brand new, completely identical (same brand, size, etc.) SD cards and burned them all identically. One will work on one board but not another, another will work in the second but not the first, and so on. So the only advice I have is "keep trying".
Thanks for the answers from Oliver and j2abro. I just dug the Coral Dev Board out of my project drawer and started trying to get it running.
First of all, make sure you read the directions and set the DIP switches properly (this dip, me, didn't do that at first). I finally got it to work, but Oliver's answer seems to match my experience (the Dev Board is finicky about SD cards). Here are the cards I used and the results. All were flashed with balenaEtcher on a MacBook Pro (M1 2020, Monterey 12.6):
SanDisk UltraPlus 256GB (multiple FAILS: didn't set DIPs correctly. derp!)
SanDisk UltraPlus 32GB #1 (FAILED with DIPs correctly set: same result as OP)
SanDisk UltraPlus 32GB #2 (FAILED with DIPs correctly set: same result as OP)
SanDisk UltraPlus 256GB (SUCCESS: set DIPs correctly) same card as #1
All these cards were brand new, right out of the retail packaging; nothing else was written to them before the Coral Dev Board image.
Theory: I tried the 32GB cards thinking that this board might have a problem with exFAT (above 32GB) storage formatting, similar to the Raspberry Pi boot limitation. However, given my and j2abro's success with larger SD Cards, I'd recommend trying a larger SD Card (more than 32GB) rather than the typical 32GB or less you'd use for a Pi. Seems like the Dev Board likes exFAT formatting better?
Tip: I strongly recommend connecting to the board over the serial console while setting it up (link below); otherwise you'll be doing a lot of guessing as to what's going on with the setup and wasting your time:
https://coral.ai/docs/dev-board/serial-console/
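On Linux this is typically something like the following (the exact device name depends on your USB-serial adapter and driver; the linked page covers the details):
screen /dev/ttyUSB0 115200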
Here's what I saw (finally) in the Serial Console after the second reboot after a successful flash with the 256GB card:
...
[ 8.788556] IPv6: ADDRCONF(NETDEV_UP): usb1: link is not ready
Mendel GNU/Linux (eagle) lime-jet ttymxc0
lime-jet login:
Hope this helps you get to a successful setup.
I'm attempting to stream a H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (MFCreateFMPEG4MediaSink with MFTranscodeContainerType_FMPEG4, MF_LOW_LATENCY and MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS enabled). The stream is then connected to a web server through IMFByteStream.
Streaming of the H.264 video works fine when it's being consumed by a <video src=".."/> tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing
Below is an mp4dump of a moof/mdat fragment in the MPEG4 stream. It clearly shows that the tfhd contains an "illegal" base data offset parameter:
[moof] size=8+200
  [mfhd] size=12+4
    sequence number = 3
  [traf] size=8+176
    [tfhd] size=12+16, flags=1
      track ID = 1
      base data offset = 36690
    [trun] size=12+136, version=1, flags=f01
      sample count = 8
      data offset = 0
[mdat] size=8+1624
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating a MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
Based on roman-r's advice, I managed to fix the problem myself by intercepting the generated MPEG4 stream and performing the following modifications:
Modify the Track Fragment Header Box (tfhd):
remove the base_data_offset parameter (reduces stream size by 8 bytes)
set the default-base-is-moof flag
Add the missing Track Fragment Decode Time (tfdt) box (increases stream size by 20 bytes):
set the baseMediaDecodeTime parameter
Modify the Track Fragment Run Box (trun):
adjust the data_offset parameter
The field descriptions are documented in https://www.iso.org/standard/68960.html (free download).
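For illustration only (this is not the actual AppWebStream code, and the function names below are made up): a stream interceptor can walk the outgoing bytes box by box, locate the tfhd inside each moof/traf, and patch its flags roughly like this:

#include <cstddef>
#include <cstdint>
#include <cstring>

// Read a 32-bit big-endian value from an MP4 byte stream.
static uint32_t ReadU32(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Find a direct child box of the given type inside a parent box and return its
// offset relative to the parent, or -1 if not found. The parent is assumed to
// have a plain 8-byte header, which holds for moof and traf.
static ptrdiff_t FindChildBox(const uint8_t* parent, size_t parentSize, const char* type) {
    size_t pos = 8;
    while (pos + 8 <= parentSize) {
        uint32_t size = ReadU32(parent + pos);
        if (size < 8 || pos + size > parentSize)
            break;                                  // malformed or truncated box
        if (std::memcmp(parent + pos + 4, type, 4) == 0)
            return (ptrdiff_t)pos;
        pos += size;
    }
    return -1;
}

// Patch the tfhd flags of one moof box so that MSE accepts the fragment:
// clear base-data-offset-present (0x000001) and set default-base-is-moof (0x020000).
// Physically removing the 8-byte base_data_offset field, inserting a tfdt box and
// adjusting the trun data_offset are not shown; they also require updating the
// sizes of the enclosing moof/traf/tfhd boxes.
static bool PatchTfhdFlags(uint8_t* moof, size_t moofSize) {
    ptrdiff_t traf = FindChildBox(moof, moofSize, "traf");
    if (traf < 0) return false;
    size_t trafSize = ReadU32(moof + traf);
    ptrdiff_t tfhd = FindChildBox(moof + traf, trafSize, "tfhd");
    if (tfhd < 0) return false;
    uint8_t* hdr = moof + traf + tfhd + 8;          // FullBox header: 1 byte version, 3 bytes flags
    uint32_t flags = (uint32_t(hdr[1]) << 16) | (uint32_t(hdr[2]) << 8) | hdr[3];
    flags &= ~0x000001u;                            // clear base-data-offset-present
    flags |=  0x020000u;                            // set default-base-is-moof
    hdr[1] = uint8_t(flags >> 16);
    hdr[2] = uint8_t(flags >> 8);
    hdr[3] = uint8_t(flags);
    return true;
}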
Switching to MSE-based video streaming reduced the latency from ~2.0 s to 0.7 s. The latency was further reduced to 0-1 frames by calling IMFSinkWriter::NotifyEndOfSegment after each IMFSinkWriter::WriteSample call.
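In code that looks roughly like the following (sinkWriter, videoStreamIndex and sample are placeholders for whatever the writer loop already uses):
// Flush each frame through the sink immediately instead of letting it buffer.
HRESULT hr = sinkWriter->WriteSample(videoStreamIndex, sample);
if (SUCCEEDED(hr))
    hr = sinkWriter->NotifyEndOfSegment(videoStreamIndex);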
There's a sample implementation available on https://github.com/forderud/AppWebStream
I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fMP4 via MSE. The fMP4 had been created from an MP4 using the following ffmpeg command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
Based on this question, I was able to find out that to get the fMP4 working in Chrome I had to add the "default_base_moof" flag. So, after creating the fMP4 with the following command:
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
I was able to successfully play the video using Media Source Extensions.
This Mozilla article helped me find the missing flag:
https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE
The 0.7 s latency mentioned in your Status Update is caused by Media Foundation's MFTranscodeContainerType_FMPEG4 containerizer, which (for an unknown reason) gathers roughly 1/3 of a second of frames and outputs them as one MP4 moof/mdat box pair. This means that at 60 FPS you need to wait 19 frames before getting any output from MFTranscodeContainerType_FMPEG4.
To output a single MP4 moof/mdat pair per frame, simply lie that MF_MT_FRAME_RATE is 1 FPS (or anything slower than one frame per 1/3 s). To play the video at the correct speed, use Media Source Extensions' <video>.playbackRate, or better, update the timescale (i.e. multiply it by the real FPS) of the mvhd and mdhd boxes in your MP4 stream interceptor to get a correctly timed MP4 stream.
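As a rough sketch (pMediaType stands for the IMFMediaType handed to the sink writer; the real 60 FPS capture rate is an assumption here):
// Advertise 1 FPS to MFTranscodeContainerType_FMPEG4 so it emits one
// moof/mdat pair per sample; the real rate is restored on the client by
// fixing the mvhd/mdhd timescale (or via <video>.playbackRate).
HRESULT hr = MFSetAttributeRatio(pMediaType, MF_MT_FRAME_RATE, 1, 1);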
Doing that, the latency can be squeezed to under 20 ms. This is barely recognizable when you see the output side by side on localhost in chains such as Unity (research) -> NvEnc -> MFTranscodeContainerType_FMPEG4 -> WebSocket -> Chrome Media Source Extensions display.
Note that MFTranscodeContainerType_FMPEG4 still introduces a 1-frame delay (1st frame in, no output, 2nd frame in, 1st frame out, ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing your own FMPEG4 containerizer, but that is an order of magnitude more complex than intercepting Media Foundation's MP4 streams.
The problem was solved by following roman-r's advice and modifying the generated MPEG4 stream. See the answer above.
Another way to do this is again using the same code @Fredrik mentioned, but I write my own IMFByteStream and check the chunks written to it.
FFmpeg writes the atoms almost one at a time, so you can check the atom name and make the modifications. It is the same thing. I wish there were an MSE-compliant Windows sinker.
Is there one that can generate .ts files for HLS?
I am running a ROS publisher/subscriber node which receives a single image from an /image_pub topic, does some processing, and publishes the results on a /results topic. The /image_pub topic is publishing at 20 Hz, but my publisher/subscriber node runs at 12 Hz (I found this using rostopic hz /results). Is there any way to improve the speed or tell my program to run at 20 Hz? At the start it was running at 20 Hz. Then I turned off my Linux machine for lunch, came back, and restarted my program. Now it runs at 12 Hz. I have restarted it again and again, but it still runs at 12 Hz. Any solution?
If your image processing takes longer than 1/20 of a second, then there is no way you can achieve 20 Hz. If that is not the case, the following main loop will do the job:
ros::Rate publish_rate(20);
while (ros::ok())
{
    ros::spinOnce();            // process pending image callbacks
    // do some processing
    publisher.publish(image);
    publish_rate.sleep();       // sleep for the remainder of the 50 ms period
}
The ros::Rate will make sure to sleep for the right amount of time to achieve the 20 Hz.
Also make sure to compile in Release mode (catkin_make -DCMAKE_BUILD_TYPE=Release), as this will speed up your code by a good margin.
I have an Arduino with a WiFly shield, everything works perfectly!
The thing is, when I want to turn on an LED, I open this URL in my web browser:
192.168.1.120/ledon/
(I made a program which handles this URL).
But when I make a request, I must wait 1-2 seconds before I can make another one.
That is very slow, and if I want to control motors, it is just too slow.
So, instead of using HTTP requests, I want to use something else that can be faster.
Something "super fast".
I just need to tell the Arduino:
- go direction 1
- go direction 2...
- turn on LED
- turn off LED
- tell me the light level (which returns an int)
So it is just about a small amount of data.
Can you show me a way? (Telnet, UDP, OSC?)
For your Arduino, have a look at just using sockets, or even encoding the data in the requested URL.
You shouldn't see more than about 0.8 seconds of lag at most.
How big is your program for handling the URL /ledon/?
Using raw packets (usually TCP) from your computer to the Arduino is sometimes faster,
but you may need to write an application on the PC to handle the packets.
There is also the option of using JavaScript to pass data back and forth, e.g. reading the light level and such.
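To illustrate the socket idea mentioned above: the sketch below listens on a raw TCP port and reads single-character commands, so there is no per-request HTTP exchange. It is written against the standard Arduino Ethernet client/server API for concreteness; the WiFly library's client/server classes should slot in the same way, and the pin numbers, port and addresses here are made up.

#include <SPI.h>
#include <Ethernet.h>

EthernetServer server(5555);           // arbitrary port for the command socket
const int LED_PIN = 8;                 // hypothetical LED pin
const int LIGHT_SENSOR_PIN = A0;       // hypothetical light sensor pin

void setup() {
  pinMode(LED_PIN, OUTPUT);
  byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
  IPAddress ip(192, 168, 1, 120);
  Ethernet.begin(mac, ip);
  server.begin();
}

void loop() {
  EthernetClient client = server.available();
  if (client && client.available()) {
    char cmd = client.read();          // one byte per command
    switch (cmd) {
      case '1': digitalWrite(LED_PIN, HIGH); break;                  // LED on
      case '0': digitalWrite(LED_PIN, LOW);  break;                  // LED off
      case 'l': client.println(analogRead(LIGHT_SENSOR_PIN)); break; // light level
      // 'a', 'b', ... could map to motor directions
    }
  }
}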