We use RTMP to stream media content securely through Wowza, and it works like a charm. Wowza is a strong, robust media server for business use.
But we have run into a problem that is getting bigger every day: many new customers can't use RTMP because of their firewall rules, so it's hard to deliver our media content to them.
Nobody, however, has problems with HTTP pseudo-streaming or plain progressive download, the way YouTube or Vimeo serve video.
So we want to do the same, but with secured links for the pseudo-streaming traffic, to prevent direct downloads via stolen links.
We use a few servers: one for the Rails app, a second for the DB, and a third as the Wowza media server.
My idea is to set up nginx on the Wowza media server and configure it to pseudo-stream the original media files (on the same filesystem Wowza uses when streaming webcam captures).
Would you suggest using nginx with the http_secure_link_module and http_flv_module modules?
Another idea, from my colleague, is to build a tiny application on the Wowza side that takes the signed links, maps them to the local filesystem, checks authentication via a direct connection to the DB, and then serves the files through X-Accel-Redirect (a rough sketch of that approach is below).
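For illustration only, here is a minimal Rack sketch of that second idea. Everything in it is an assumption: the /protected_flv/ internal location, the st/e query parameters, and the shared-secret check standing in for whatever DB-backed authentication you would actually do.
# config.ru -- hypothetical gatekeeper app (run with `rackup config.ru`).
# nginx would proxy /video/* requests here; on success the app replies with
# X-Accel-Redirect so nginx itself serves the file from an internal location.
require 'rack'
require 'base64'
require 'digest/md5'

SECRET = 'YOUR_SECRET_PASSWORD_HERE' # shared with whatever generates the links

# Placeholder check: expiry + MD5 signature. A real app might instead
# look the token up in the DB, as described above.
def valid_link?(token, expires, file)
  return false if expires.to_i < Time.now.to_i
  expected = Base64.urlsafe_encode64(
    Digest::MD5.digest("#{SECRET}#{expires}/flv/#{file}")
  ).delete('=')
  expected == token.to_s
end

app = lambda do |env|
  req  = Rack::Request.new(env)
  file = File.basename(req.path)                 # e.g. "video1.flv"
  if valid_link?(req.params['st'], req.params['e'], file)
    # /protected_flv/ must be an "internal" nginx location rooted at the
    # directory that holds the real files.
    [200, { 'X-Accel-Redirect' => "/protected_flv/#{file}" }, []]
  else
    [403, { 'Content-Type' => 'text/plain' }, ['Forbidden']]
  end
end

run app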
Thanks a lot
I have found a solution; let me share it with anyone who is interested.
First of all, my constraint was to use as few tools as possible, ideally just built-in web-server modules and no upstream backend scripts. And I have a solution now:
server {
    listen 8080 ssl;
    server_name your_server.com;

    # Public URLs look like /video/<md5-token>/<expiry>/<name>.flv and are
    # rewritten to the internal /flv/ location with the token and expiry as args.
    location /video/ {
        rewrite /video/([a-zA-Z0-9_\-]*)/([0-9]*)/(.*)\.flv$ /flv/$3.flv?st=$1&e=$2;
    }

    location /flv/ {
        internal;
        secure_link $arg_st,$arg_e;
        secure_link_md5 YOUR_SECRET_PASSWORD_HERE$arg_e$uri;

        # Reject requests with a missing, invalid, or expired token.
        if ($secure_link = "") { return 403; }
        if ($secure_link = "0") { return 403; }

        root /var/www/;
        flv;
        add_header Cache-Control 'private, max-age=0, must-revalidate';
        add_header Strict-Transport-Security 'max-age=16070400; includeSubdomains';
    }
}
The real FLV files are located in the "/var/www/flv" directory. To sign the URL on the Ruby side, you can use this script:
require 'base64'
require 'digest/md5'

secret = 'YOUR_SECRET_PASSWORD_HERE'          # must match secure_link_md5 in nginx
expiration_time = (Time.now + 2.hours).to_i   # e.g. 1326559618; 2.hours needs ActiveSupport (use 2 * 3600 in plain Ruby)
s = "#{secret}#{expiration_time}/flv/video1.flv"
a = Base64.encode64(Digest::MD5.digest(s))
b = a.tr("+/", "-_").sub('==', '').chomp      # => "HLz1px_YzSNcbcaskzA6nQ"
# => "http://your_server.com:8080/video/#{b}/#{expiration_time}/video1.flv"
So the secured two-hour URL (which you can hand to the flash player) looks like:
"http://your_server.com:8080/video/HLz1px_YzSNcbcaskzA6nQ/1326559618/video1.flv"
P.S. nginx must be compiled with the following options: --with-http_secure_link_module --with-http_flv_module
$ cd /usr/src
$ wget http://nginx.org/download/nginx-1.2.2.tar.gz
$ tar xzvf ./nginx-1.2.2.tar.gz && rm -f ./nginx-1.2.2.tar.gz
$ wget http://zlib.net/zlib127.zip
$ unzip zlib127.zip && rm -f zlib127.zip
$ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.30.tar.gz
$ tar xzvf pcre-8.30.tar.gz && rm -f ./pcre-8.30.tar.gz
$ wget http://www.openssl.org/source/openssl-1.0.1c.tar.gz
$ tar xzvf openssl-1.0.1c.tar.gz && rm -f openssl-1.0.1c.tar.gz
$ cd nginx-1.2.2 && ./configure --prefix=/opt/nginx --with-pcre=/usr/src/pcre-8.30 --with-zlib=/usr/src/zlib-1.2.7 --with-openssl-opt=no-krb5 --with-openssl=/usr/src/openssl-1.0.1c --with-http_ssl_module --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --with-http_stub_status_module --with-http_secure_link_module --with-http_flv_module
$ make && make install
JW player and Flowplayer will automatically fall back to RTMPT (over HTTP) when an RTMP connection is unsuccessful, and Wowza makes both available. I've encountered port 1935 blocked at several locations, and the fallback to RTMPT over port 80 generally works. The caveat there, of course, is that you have to have Wowza listening on port 80 (in the VHost.xml where 1935 is defined, change it to 80,1935), and that precludes having any kind of web server listening on the same port.
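For reference, the relevant section of Wowza's conf/VHost.xml looks roughly like the excerpt below after that change; the element names are taken from a stock install, so verify them against your own version.
<!-- conf/VHost.xml (excerpt; adapt to your Wowza version) -->
<HostPort>
    <Name>Default Streaming</Name>
    <Type>Streaming</Type>
    <IpAddress>*</IpAddress>
    <!-- was <Port>1935</Port>; listing both ports enables the RTMPT fallback over 80 -->
    <Port>80,1935</Port>
</HostPort>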
We use Wowza on port 80 with our clients.
I want to use curl http://127.0.0.1:8033/api/v1 to access https://http2.pro/api/v1 over HTTP/2.
This API URL reports whether the client is using HTTP/2.
Here is what I have tried (I'm using the latest version, 5.0.1):
./mitmdump -p 8033 --http2 --set http2_priority=true --mode reverse:https://http2.pro:443
However, curl 127.0.0.1:8033/api/v1 still gives:
{"http2":0,"protocol":"HTTP\/1.1","push":0,"user_agent":"curl\/7.69.1-DEV"}
In contrast, curl https://http2.pro/api/v1 --http2 gives: (this is what I expected)
{"http2":1,"protocol":"HTTP\/2.0","push":0,"user_agent":"curl\/7.69.1-DEV"}
mitmproxy currently does not support converting between HTTP/1 and HTTP/2. For HTTP/2 to happen, both endpoints need to speak it. It is on our todo list and will hopefully be possible soon (https://github.com/mitmproxy/mitmproxy/issues/1775).
I am using Varnish 3 in front of nginx, running multiple WordPress sites. I am using a default.vcl that is recommended and used by many large WordPress sites.
default.vcl: http://pastebin.com/KaSdvuRS
I am using W3 Total Cache, which has an option to purge Varnish automatically when clearing the cache. I also tested the Varnish HTTP Purge plugin to purge posts/pages when editing them. Neither seemed to work, so I tested an interactive session over SSH with curl.
I am logged in over SSH on the Varnish/nginx box, and I type the following command to test the Varnish purge:
curl -X PURGE http://www.example.com
The result is:
<head>
<title>405 Not allowed.</title>
</head>
<body>
<h1>Error 405 Not allowed.</h1>
<p>Not allowed.</p>
<h3>Guru Meditation:</h3>
<p>XID: 265824636</p>
<hr>
<p>Varnish cache server</p>
</body>
Any ideas what I am missing? This vcl file is very similar to what is recommended by Varnish-Cache.org for WordPress and is the purge configuration I see recommended everywhere.
Chances are, you're connecting to your Varnish box via the public IP and Varnish will also see a public IP connecting, not a local one. Your ACL for purges now only allows localhost/127.0.0.1. You may want to extend that list with the public IP address of your server as well.
Alternatively, try debugging by removing the ACL check and by simply allowing purges from everyone, just to exclude the ACL as the guilty one.
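As a hedged example, the ACL tweak might look like the snippet below in a Varnish 3 VCL; the acl name, the public address, and the surrounding purge logic are placeholders that need to be matched against your own default.vcl.
# Example only -- align the names and addresses with your default.vcl.
acl purge {
    "localhost";
    "127.0.0.1";
    "203.0.113.10";   # this server's public IP (placeholder)
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}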
I have a camera streaming MJPEG at http://192.168.x.x/image (where the x's are the rest of the IP). I start my Icecast2 server (Ubuntu 10.10) and then I stream using:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o /dev/stdout - | oggfwd localhost 8000 password /test
The mountpoint is created, but the video does not show in Firefox. I see the video box, but it just shows the loading spinner forever and the video never appears.
If I download a proper ogg file and do
cat proper_ogg_file.ogg | oggfwd localhost 8000 password /test
I see the video on the icecast server's website.
In addition I did:
ffmpeg2theora -f mjpeg http://192.168.x.x/image -o test_video.ogg
Once I stop the process (Ctrl+C), go to my desktop where the video was saved, and open it with VLC or any other media player, it plays the portion of the stream that was recorded up to the moment I pressed Ctrl+C.
If I take that file and use the previous method:
cat test_video.ogg | oggfwd localhost 8000 password /test
I get the same issue as when I was piping the camera directly to stdout and then to oggfwd. So I assume this is an Ogg "conversion" issue? Can anybody help? Any idea why this doesn't work?
I found a solution: use Flumotion. It is a lot easier to use and works for what I needed. I can provide information on how to set it up if anybody needs it.
Thank you
I tried to download an image using wget but got an error like the following:
--2011-10-01 16:45:42-- http://www.icerts.com/images/logo.jpg
Resolving www.icerts.com... 97.74.86.3
Connecting to www.icerts.com|97.74.86.3|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-10-01 16:45:43 ERROR 404: Not Found.
My browser has no problem loading the image.
What's the problem?
curl can't download either.
Thanks.
Sam
You need to add the Referer field to the headers of the HTTP request. With wget, you just need the --header argument:
wget http://www.icerts.com/images/logo.jpg --header "Referer: www.icerts.com"
And the result:
--2011-10-02 02:00:18-- http://www.icerts.com/images/logo.jpg
Resolving www.icerts.com (www.icerts.com)... 97.74.86.3
Connecting to www.icerts.com (www.icerts.com)|97.74.86.3|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6102 (6.0K) [image/jpeg]
Saving to: "logo.jpg"
I had the same problem with a Google Docs URL. Enclosing the URL in quotes did the trick for me (otherwise the shell treats the & in the query string as a background operator and truncates the URL):
wget "https://docs.google.com/spreadsheets/export?format=tsv&id=1sSi9f6m-zKteoXA4r4Yq-zfdmL4rjlZRt38mejpdhC23" -O sheet.tsv
You will also get a 404 error if you are connecting over IPv6 and the server only accepts IPv4.
To force IPv4, add -4 to the request:
wget -4 http://www.php.net/get/php-5.4.13.tar.gz/from/this/mirror
I had the same problem.
I solved it by using single quotes, like this:
$ wget 'http://www.icerts.com/images/logo.jpg'
wget version in use:
$ wget --version
GNU Wget 1.11.4 Red Hat modified
A wget 404 error also happens whenever you try to download the pages of a WordPress website by typing
wget -r http://somewebsite.com
If the website is built with WordPress, you'll get an error like this:
ERROR 404: Not Found.
There is no way to mirror a WordPress website this way, because the site content is stored in the database and wget is not able to grab the .php files. That's why you get the wget 404 error.
I know that's not the case in this question, since Sam only wants to download a single picture, but it may be helpful for others.
Actually, I don't know the exact reason, but I have faced this kind of problem too.
If you have the domain's IP address (e.g. 208.113.139.4), use the IP address instead of the domain name (in this case www.icerts.com):
wget 192.243.111.11/images/logo.jpg
You can find the IP for a URL at https://ipinfo.info/html/ip_checker.php.
I want to add something to blotus's answer:
In case adding the Referer header does not solve the issue, maybe you are using the wrong referrer (sometimes the referrer differs from the URL's domain name).
Paste the URL into a web browser and find the referrer in the developer tools (Network -> Request Headers), as in the example below.
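For example (placeholder URLs; the page and image paths here are invented just to show the pattern):
# Suppose the browser's network tab shows "Referer: https://example.com/gallery/"
# on the image request; pass exactly that value to wget:
wget --header "Referer: https://example.com/gallery/" https://example.com/images/photo.jpg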
I ran into exactly the same problem while setting up GitHub Actions with Cygwin. Only after running wget --debug <url> did I realize that the URL had a 0x0d byte appended, which is \r (carriage return).
For this kind of problem there is a solution described in the docs:
you can also use igncr in the SHELLOPTS environment variable
So I added the following lines to my workflow YAML to make wget, as well as the other shell commands in my GHA workflow, work properly:
env:
  SHELLOPTS: igncr
I have an HAProxy installation that was configured by someone who has since left the company. It runs on Ubuntu 10.04 and seems to use three configuration files in the directory /etc/haproxy:
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point of the haproxy.https.cfg file, as I believe (in our configuration) everything could be handled by a single haproxy.http.cfg file, but when I remove that httpS file HAProxy complains bitterly and refuses to run. My question:
Is this a standard configuration for HAProxy? If not, I can't find a reference to the "S" file anywhere. Can anyone suggest how HAProxy concludes that it should use it?
Thanks
The short answer to your question: your HAProxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg), probably from /etc/init.d/haproxy, though mileage varies depending on your distribution.
If you remove one of the files, of course it will complain.
This is not particularly standard, but it isn't bad either; it helps structure the configuration instead of having one very long file.
The job of the .https version is most likely to redirect the HTTPS traffic towards a service that can handle HTTPS (usually stunnel or nginx), since haproxy cannot terminate SSL connections. (stunnel has to be patched; see the HAProxy page.)
If you want, you can merge those files into one or two; just find out how HAProxy is launched (check the init.d script, or let us know which distribution) and fix it accordingly, for example along the lines sketched below.
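A rough way to check, assuming a stock init script layout on Ubuntu (exact paths and variable names may differ on your box):
# Show which config files the running haproxy was actually started with
ps aux | grep '[h]aproxy'

# Find where the -f flags are assembled (often an EXTRAOPTS/CONFIG variable)
grep -n '\-f' /etc/init.d/haproxy /etc/default/haproxy 2>/dev/null

# After merging, validate the combined config before restarting
haproxy -c -f /etc/haproxy/haproxy.cfg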
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt