ffmpeg2theora oggfwd not working with icecast2 - streaming

I have a camera streaming MJPEG at http://192.168.x.x/image (where x.x is the rest of the IP). I start my Icecast2 server (Ubuntu 10.10) and then I stream using:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o /dev/stdout - | oggfwd localhost 8000 password /test
The mountpoint is created, but the video never shows in Firefox. I do see the video box, but it just shows the loading ("thinking") icon indefinitely and the video never appears.
If I download a proper ogg file and do
cat proper_ogg_file.ogg | oggfwd localhost 8000 password /test
I see the video on the Icecast server's website.
In addition I did:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o test_video.ogg
Once I stop the process (Ctrl+C), go to my Desktop where the video is saved, and open it with VLC or any other media player, it plays the portion of the stream that was recorded up to the point I pressed Ctrl+C.
If I take that file and use the previous method:
cat test_video.ogg | oggfwd localhost 8000 password /test
I get the same issue as when I was piping the camera directly to stdout and then to oggfwd. So I assume this is a "conversion to Ogg" issue? Can anybody help? Any idea why I can't do that?
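One way to test that assumption is to validate the recorded file itself; a minimal sketch, assuming the oggz tools (from liboggz) are installed:
# check whether the file is a well-formed Ogg/Theora stream;
# errors here would point at the encoder, not at Icecast or oggfwd
oggz-validate test_video.ogg
If the file validates cleanly, the problem is more likely in how the live stream is started or buffered than in the Ogg conversion itself.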

I found a solution: use Flumotion. It is a lot easier to use and works for what I needed. I can provide information on how to use it if anybody needs it.
Thank you

IBM Aspera get size of file before download

I am using Aspera Connect on Mac to download files from a server. It works fine in the terminal, but I was wondering if, before I download a file, I could read its size first and then decide whether I want to download it or not. I found the flag
'--precalculate-job-size'
but it only does that right before the download starts, and there's no way to stop the download at that point.
The current command I use is this:
/Applications/Aspera\ Connect.app/Contents/Resources/ascp -QT -l 200M -P33001 -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" emp_ext3@fasp.ebi.ac.uk:/{asp_path} {local_path}
The resources for the flags are here:
https://download.asperasoft.com/download/docs/ascp/2.7/html/index.html
To answer your question, without going into too much detail:
If you want to display the size of elements on an Aspera server to which you have access, you can use the "Amelia" command line (mlia); see:
https://www.rubydoc.info/gems/asperalm
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem br /10002/data/100_movie_gc.mrcs
There are plenty of options, like: --format=csv --fields=size
Note that this displays individual file sizes, but not recursive folder size.
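For the original goal (deciding before the transfer), a small wrapper could query the size first and only then run ascp; a sketch, assuming the mlia CSV output is just the size in bytes, and using a hypothetical 100 MB threshold:
#!/bin/sh
# query the remote size (server, key and path as in the examples above)
SIZE=$(mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 \
    --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem \
    br /10002/data/100_movie_gc.mrcs --format=csv --fields=size)
# download only if the file is smaller than 100 MB (hypothetical limit)
if [ "$SIZE" -lt 104857600 ]; then
    /Applications/Aspera\ Connect.app/Contents/Resources/ascp -QT -l 200M -P33001 \
        -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" \
        emp_ext3@fasp.ebi.ac.uk:/10002/data/100_movie_gc.mrcs .
fi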
A few other things:
You are not exactly using "Connect", but rather the "ascp" command line. Connect refers to the browser extension and lightweight app, while ascp is the implementation of the Aspera FASP transfer protocol, found in basically all Aspera products.
The latest ascp documentation can be found here: https://www.ibm.com/support/knowledgecenter/SSL85S_3.9.6/hsts_admin_linux/dita/hsts_admin_linux_ascp_usage.html
Did you know you can also use the free client?
https://downloads.asperasoft.com/en/downloads/2
It includes ascp too, but also a graphical user interface.

How to skip selected url while mirroring site with wget

I have the following problem: I need to mirror a password-protected site. Sounds like a simple task:
wget -m -k -K -E --cookies=on --keep-session-cookies --load-cookies=myCookies.txt http://mysite.com
In myCookies.txt I keep the proper session cookie. This works until wget comes across the logout page - then the session is invalidated and, effectively, further mirroring is useless.
I tried to add the --reject option, but it works only with file types - I can block html file downloads or swf file downloads, but I can't say
--reject http://mysite.com/*.php?type=Logout*
Any ideas how to skip certain URLs in wget? Maybe there is another tool that can do the job (it must work on MS Windows).
What if you first download (or even just touch) the logout page, and then run
wget --no-clobber --your-original-arguments
This should skip the logout page, as it has already been downloaded.
(Disclaimer: I didn't try this myself)
I also encountered this problem and later solved it like this: "--reject-regex logout". For more, see the wget-dev tips.
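Putting it together, a sketch assuming GNU wget 1.14 or newer (where --reject-regex was added); the pattern is illustrative and should match your real logout URL:
# mirror the site but never request URLs that look like the logout action
wget -m -k -K -E --keep-session-cookies --load-cookies=myCookies.txt \
    --reject-regex "type=Logout" http://mysite.com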

Secure pseudo-streaming flv files

We use RTMP to securely stream media content through Wowza, and it works like a charm. Wowza is a really strong and robust media server for business purposes.
But we have hit a problem, and it's getting bigger every day: a lot of new customers can't use RTMP because of their firewall rules, so we can't deliver business media content to them.
Yet nobody has problems with HTTP pseudo-streaming or plain progressive download, the way YouTube or Vimeo do it.
So we should do the same, but provide secure links for the pseudo-streaming traffic, to prevent direct downloads by stealing the links.
We use a few servers: one for the Rails app, a second for the DB, and a third as the Wowza media server.
My idea is to set up nginx on the Wowza media server and configure it to pseudo-stream the original media files (from the same filesystem that Wowza uses to stream webcam captures).
Would you suggest using nginx with the http_secure_link_module and http_flv_module modules?
Another idea, from my colleague, is to build a tiny application on the Wowza side that takes encrypted links, translates them to the local file system, serves the files through X-Accel-Redirect, and checks authentication via a direct connection to the DB.
Thanks a lot
I have found a solution; let me share it with anyone interested.
First of all, my constraint was to use as few tools as possible, ideally only a built-in module in the web server, with no upstream backend scripts. Here it is:
server {
    listen      8080 ssl;
    server_name your_server.com;

    location /video/ {
        rewrite /video/([a-zA-Z0-9_\-]*)/([0-9]*)/(.*)\.flv$ /flv/$3.flv?st=$1&e=$2;
    }

    location /flv/ {
        internal;
        secure_link     $arg_st,$arg_e;
        secure_link_md5 YOUR_SECRET_PASSWORD_HERE$arg_e$uri;
        if ($secure_link = "")  { return 403; }
        if ($secure_link = "0") { return 403; }
        root /var/www/;
        flv;
        add_header Cache-Control 'private, max-age=0, must-revalidate';
        add_header Strict-Transport-Security 'max-age=16070400; includeSubdomains';
    }
}
The real FLV files are located in the /var/www/flv directory. To generate the encrypted URL on the Ruby side, you can use this script:
require 'base64'
require 'digest/md5'

# 2.hours comes from ActiveSupport (the app is Rails)
expiration_time = (Time.now + 2.hours).to_i # 1326559618
s = "#{YOUR_SECRET_PASSWORD_HERE}#{expiration_time}/flv/video1.flv"
a = Base64.encode64(Digest::MD5.digest(s))
b = a.tr("+/", "-_").sub('==', '').chomp # HLz1px_YzSNcbcaskzA6nQ
# => "http://your_server.com:8080/video/#{b}/#{expiration_time}/video1.flv"
So the secured two-hour URL (you can put it into the Flash player) looks like:
"http://your_server.com:8080/video/HLz1px_YzSNcbcaskzA6nQ/1326559618/video1.flv"
P.S. Nginx should be compiled with the following options: --with-http_secure_link_module --with-http_flv_module
$ cd /usr/src
$ wget http://nginx.org/download/nginx-1.2.2.tar.gz
$ tar xzvf ./nginx-1.2.2.tar.gz && rm -f ./nginx-1.2.2.tar.gz
$ wget http://zlib.net/zlib127.zip
$ unzip zlib127.zip && rm -f zlib127.zip
$ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.30.tar.gz
$ tar xzvf pcre-8.30.tar.gz && rm -f ./pcre-8.30.tar.gz
$ wget http://www.openssl.org/source/openssl-1.0.1c.tar.gz
$ tar xzvf openssl-1.0.1c.tar.gz && rm -f openssl-1.0.1c.tar.gz
$ cd nginx-1.2.2 && ./configure --prefix=/opt/nginx --with-pcre=/usr/src/pcre-8.30 --with-zlib=/usr/src/zlib-1.2.7 --with-openssl-opt=no-krb5 --with-openssl=/usr/src/openssl-1.0.1c --with-http_ssl_module --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --with-http_stub_status_module --with-http_secure_link_module --with-http_flv_module
$ make && make install
JW Player and Flowplayer will automatically fall back to RTMPT (RTMP tunneled over HTTP) when an RTMP connection is unsuccessful, and Wowza makes both available. I've encountered port 1935 blocked at several locations, and the fallback to RTMPT over port 80 generally works. The caveat there, of course, is that you have to have Wowza listening on port 80 (in the VHost.xml where 1935 is defined, change it to 80,1935), and that precludes having any other web server listening on the same port.
We use Wowza on port 80 with our clients.

Sniffing bonjour traffic between iPhone and Mac

My end goal is to see the plist being sent between my iPhone and my Mac (I know it's a plist because I can see bplist00 in the hexdump).
I have an app sending data between my iPhone and my Mac via a Bonjour service.
I use tcpdump to capture the traffic, then try to transform the payload hexdump into binary and convert that into a plist text file.
Here are my steps:
1. Make sure the iPhone and Mac are connected, and have the command ready to send.
2. Run tcpdump on my wireless network: sudo tcpdump -vSs 0 -A -i en1 -w Dump.pcap 'tcp port 57097' (I used Bonjour Browser to find which port the service is registered on), then hit the send command on the phone.
3. Convert the pcap file to a text file: tshark -V -r Dump.pcap > Dump.txt
4. Manually remove the headers and other info from the text file, so that I am left with just the payload hexdump.
5. Do a reverse hex dump to convert the file into binary: xxd -r Dump.txt Dump1.txt
6. Convert the binary plist to a text file: plutil -convert xml1 Dump1.txt
However, step 6 is where things fail: Dump1.txt: Property List error: Conversion of string failed. The string is empty. / JSON error: JSON text did not start with array or object and option to allow fragments not set. (although the mistake could have come from an earlier step). I'm also not sure why it reports JSON errors when I asked for an XML conversion.
This low-level network capturing is not something I normally do (I'm usually higher up the stack with Fiddler or Charles, but since this traffic isn't HTTP I need to go lower down).
Can someone please tell me if what I am doing is correct, or whether there is an easier way to do this?
How can I go about capturing the plist being sent to my Mac?
My guess is your issue is somewhere around step 4, where you manually edit the dump. I was just trying something similar, using Charles rather than tcpdump, and got the exact same error with a payload that I knew contained a plist. I'm not sure why we get the JSON error message either.
I was able to resolve it by saving the binary-encoded plist request body directly from Charles to a file (Charles has a "Save Request" menu option) and then running plutil -convert xml1 FILENAME -o - on it, and it worked just fine.
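If you'd rather stay with the tcpdump capture, tshark can extract the raw payload for you instead of hand-editing the text dump (replacing steps 3-5); a sketch, assuming a tshark version that exposes the tcp.payload field as hex:
# dump the TCP payload bytes as hex, strip separators, rebuild the binary
tshark -r Dump.pcap -T fields -e tcp.payload | tr -d '\n:,' | xxd -r -p > Dump1.bin
plutil -convert xml1 Dump1.bin -o -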

Best way to stream mp3

I need to organize MP3 streaming from my machine to the rest of the world. People advised me to use MPD with Icecast2 as the frontend. Everything is OK except one thing: the music is being streamed as Ogg Vorbis, which is not what I need.
Here's a snippet of MPD's config file:
audio_output {
    type        "shout"
    name        "Radio"
    host        "localhost"
    port        "8000"
    encoding    "mp3"
    mount       "/radio.ogg"
    password    "mypass"
    bitrate     "256"
    format      "44100:16:2"
    protocol    "icecast2"
    description "radio stream"
}
But Icecast's status page says it's streaming Ogg, not MP3.
The MPD version is 0.13.2, running on Debian Lenny. What's wrong?
Any help will be appreciated.
P.S. I have the LAME encoder compiled.
So my question is solved: I just compiled MPD 0.15.2 with the --enable-shout and --enable-ffmpeg flags.
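For completeness, a sketch of the corresponding audio_output section after the rebuild; it keeps the original settings, assumes the new build actually includes LAME, and renames the mount so it no longer advertises .ogg:
audio_output {
    type        "shout"
    name        "Radio"
    host        "localhost"
    port        "8000"
    encoding    "mp3"         # only honored when MPD is built with LAME support
    mount       "/radio.mp3"  # renamed from /radio.ogg to match the real format
    password    "mypass"
    bitrate     "256"
    format      "44100:16:2"
    protocol    "icecast2"
    description "radio stream"
}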