VLC: How to stream a wave via HTTP - raspberry-pi

I want to stream the microphone of my Raspberry Pi via HTTP with VLC.
This command works fine:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=mpga,ab=128,channels=2,samplerate=44100}:standard{access=http,mux=mp3,dst=192.168.178.30:8080}'
But when I change the codec to s16l and the mux to wav, I can't hear anything in VLC.
This is the command I've tried:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=s16l,channels=1,samplerate=16000,scodec=none}:standard{access=http,mux=wav,dst=192.168.178.30:8080}'
But the same codec works when using RTP:
vlc -vvv alsa://hw:1,0 --sout '#transcode{vcodec=none,acodec=s16l,channels=1,samplerate=16000,scodec=none}:rtp{dst=192.168.178.30,port=1234,sdp=rtsp://192.168.178.30:8080/test.sdp}'
Some logs: https://gist.github.com/timaschew/9e7e027cd1b371b01b0f186f23b47068

Not all codecs can be muxed; check the VLC documentation.
Currently, PCM (WAV) can only be muxed over RTP.
The mux is the encapsulation method required for streaming; wav in VLC is a container intended for storing to file.

WAV is a file container type; it can hold different types of codec data (compressed or uncompressed).
[Wiki]
Audio in WAV files can be encoded in a variety of audio coding formats, such as GSM or MP3, to reduce the file size.
This is a reference to compare the monophonic (not stereophonic) audio quality and compression bitrates of audio coding formats available for WAV files including PCM, ADPCM, Microsoft GSM 06.10, CELP, SBC, Truespeech and MPEG Layer-3.
For HTTP streaming using VLC:
Select a codec suitable for streaming, such as the MP3 codec.
Note: muxing to WAV is not applicable here.
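As a workaround, the same mono/16 kHz parameters from the failing command can be kept while switching to a codec/mux pair that VLC can encapsulate over HTTP, such as MP3. This is a hedged sketch based on the working command above; the bitrate of 64 kbit/s is an assumption, and the IP/port are the ones from the question:

```shell
# Sketch: stream mono 16 kHz audio over HTTP using the mpga codec and
# mp3 mux, which VLC can encapsulate (unlike raw s16l PCM in a wav mux).
vlc -vvv alsa://hw:1,0 --sout \
  '#transcode{vcodec=none,acodec=mpga,ab=64,channels=1,samplerate=16000}:standard{access=http,mux=mp3,dst=192.168.178.30:8080}'
```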

Related

Write frames to RTSP/HTTP using FFMPEG

I have an application that reads footage from an RTSP stream and processes the frames. I need to restream these processed frames to another RTSP stream. I have used the following command to stream a video file using FFMPEG:
ffmpeg -re -stream_loop -1 -i D:\Proj\sample.mp4 -c copy -f rtsp rtsp://10.0.0.0:8554/mystream
Is it possible to stream individual frames as soon as they are processed and not only a full video file?
The algorithm has been built in MATLAB.
Thank you.
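One common approach is to pipe raw frames into ffmpeg's stdin, so each processed frame is encoded and sent as soon as it is written rather than waiting for a finished file. This is a hedged sketch, not an answer from the thread; `my_processor` is a hypothetical program emitting raw frames, and the frame size, pixel format, and frame rate are assumptions to adjust:

```shell
# Sketch: feed raw BGR frames from a processing program into ffmpeg over
# a pipe and restream them via RTSP as they arrive.
# my_processor is hypothetical; 640x480, bgr24, and 25 fps are assumed values.
my_processor | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 640x480 \
  -framerate 25 -i - -c:v libx264 -preset ultrafast -tune zerolatency \
  -f rtsp rtsp://10.0.0.0:8554/mystream
```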

ffmpeg audio conversion in flutter

I would like to get data from an audio file based on microphone input (both Android and iOS). Currently I'm using audioplayers and recordMp3 to record the microphone input, which results in an MP3 file with a local file path. In order to use the audio data, I want an uncompressed format like WAV. Would ffmpeg help with this conversion? I want to eventually use this data for visualization.
MP3 to WAV
ffmpeg -i input.mp3 output.wav
Note that any encoding artifacts in the MP3 will be included in the WAV.
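If the downstream tool expects specific PCM parameters, they can be requested explicitly in the same conversion. This is a sketch; 16 kHz mono is an assumed target, not something the question specifies:

```shell
# Sketch: convert MP3 to WAV with explicit PCM parameters.
# -ar sets the sample rate, -ac the channel count; 16 kHz mono is assumed.
ffmpeg -i input.mp3 -ar 16000 -ac 1 output.wav
```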
Piping from ffmpeg to your visualizer
I'm assuming you need WAV/PCM because your visualizer only accepts that format and does not accept MP3. You can create a WAV file as shown in the example above, but if your visualizer accepts a pipe as input you can avoid creating a temporary file:
ffmpeg -i input.mp3 -f wav - | yourvisualizer …
Using ffmpeg for visualization
See examples at How do I turn audio into video (that is, show the waveforms in a video)?

How to programmatically output fragmented mp4 file using Bento4

I want to record video conference. I can receive rtp media from video conferencing server. I want to output fragmented mp4 file format for live streaming. So, how to write a fragmented mp4 file programmatically using Bento4?
MP4Box supports DASH. Here is a simple example:
MP4Box -dash 4000 -frag 4000 -rap -segment-name test_ input.mp4
'-dash 4000' segments the input MP4 file into 4000 ms chunks.
'-frag 4000': since the fragment duration equals the segment duration, segments are not fragmented further.
'-rap' forces each segment to start at a random access point, i.e. at a keyframe. In that case the segment duration may differ from 4000 ms, depending on the distribution of keyframes.
'-segment-name' specifies the naming pattern for segments. In this case, the segments will be named test_1.m4s, test_2.m4s, ...
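MP4Box also writes a DASH manifest (MPD) alongside the segments, which is what a DASH player actually requests. As a hedged sketch, assuming your MP4Box build supports the -out option for naming the manifest:

```shell
# Sketch: same segmentation as above, additionally naming the DASH
# manifest explicitly with -out (assumed to be supported by this build).
MP4Box -dash 4000 -frag 4000 -rap -segment-name test_ -out manifest.mpd input.mp4
```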

MP4Box MP4 concatenation not working

I download lectures in mp4 format from Udacity, but they're often broken down into 2-5 minute chunks. I'd like to combine the videos for each lecture into one continuous stream, which I've had success with on Windows using AnyVideo Converter. I'm trying to do the same thing on Ubuntu 15, and most of my web search results suggest MP4Box, whose documentation and all the online examples I can find offer the following syntax:
MP4Box -cat vid1.mp4 -cat vid2.mp4 -cat vid3.mp4 -new combinedfile.mp4
This creates a new file with working audio, but the video doesn't work. When I open it with Ubuntu's native video player, I get the error "No valid frames decoded before end of stream." When I open it with VLC, I get the error "Codec not supported: VLC could not decode the format 'avc3' (No description for this codec)." I've tried using the -keepsys switch as well, but I get the same results.
All the documentation and online discussion makes it sound as though what I'm trying to do is and should be really simple, but I can't seem to find info relevant to the specific errors I'm getting. What am I missing?
Use the -force-cat option.
For example,
MP4Box -force-cat -add in1.mp4 -cat in2.mp4 -cat in3.mp4 ... -new out.mp4
From the MP4Box documentation:
-force-cat
skips media configuration check when concatenating file.
Judging by the presence of 'avc3', these videos are encoded with H.264/AVC. There are several modes for concatenating such streams. Either the video streams have compatible encoder configurations (frame size, etc.), in which case only one configuration description is used in the file (signaled by 'avc1'); or, if the configurations are not fully compatible, MP4Box uses 'inband' storage of those configurations (signaled by 'avc3'). The other way would be to use multiple sample description entries (stream configurations), but that is not well supported by players and not yet possible with MP4Box. There is no other option unless you want to re-encode your videos. On Ubuntu, you should be able to play 'avc3' streams with the player that ships with MP4Box: MP4Client.
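If players keep rejecting 'avc3', one option mentioned above is re-encoding: bring every clip to a single encoder configuration first, so plain 'avc1' concatenation works. A hedged sketch; the resolution, frame rate, and audio codec are assumptions to adjust:

```shell
# Sketch: normalize each clip to one encoder configuration (same codec,
# frame size, and rate) so MP4Box can concatenate without inband ('avc3')
# parameter sets. 1280x720, 30 fps, and AAC audio are assumed values.
for f in vid1.mp4 vid2.mp4 vid3.mp4; do
  ffmpeg -i "$f" -c:v libx264 -vf scale=1280:720 -r 30 -c:a aac "norm_$f"
done
MP4Box -cat norm_vid1.mp4 -cat norm_vid2.mp4 -cat norm_vid3.mp4 -new combinedfile.mp4
```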

FFMPEG RTMP streaming to FMS without stop?

I have some .mov files I want to stream to Flash Media Server. I have already tried streaming a single .mov with an FFmpeg command in the terminal, and it works: FMS displays what I am streaming live.
ffmpeg -re -i file1.mov -vcodec libx264 -f flv rtmp://localhost/livepkgr/livestream
Now I want to stream multiple files.
I tried running the above command for each file in turn,
but Flash Media Server seems to stop the stream when file1 finishes,
then start a new stream for file2.
This stops the stream player when file1 ends, and the page has to be refreshed in order to continue with file2.
I am calling the FFmpeg command from a C program on Linux. Is there any way to prevent FMS from stopping the stream when I switch the file source in FFmpeg? Or can FFmpeg deliver a continuous stream from multiple source files without stopping when one file finishes?
Convert your source files to TS, MPEG, or another "concatenable" format. Then you can either use ffmpeg's concat protocol or simply "cat" the files yourself.
I found something like this that may be useful for you:
I managed to stream a static playlist of videos by using a named pipe for each video (e.g. vid1.mp4 -> pipe1, vid2.mp4 -> pipe2, etc.). Then I write them into a single named pipe called "stream" with cat pipe1 pipe2 pipe3 > stream, and use that pipe as the input to FFmpeg to publish my stream.