Icecast/Shoutcast radio stream: extracting now playing info - metadata

How do I extract "current track / now playing" info from a Shoutcast/Icecast radio stream? I tried the following solutions:
https://github.com/ghaiklor/icecast-parser
https://code.google.com/archive/p/streamscraper/
Parsing the .xspf file (see: Can't extract metadata from some icecast streams)
Analyzing the audio stream in PHP/native Android (get info from streaming radio, Pulling Track Info From an Audio Stream Using PHP, http://www.smackfu.com/stuff/programming/shoutcast.html)
All of the methods above work for some radio stations, such as
http://icecast.vrtcdn.be/stubru-high.mp3
However, for a number of icecast/shoutcast streams, all of them fail. Example: http://icecast-qmusic.cdp.triple-it.nl/Qmusic_be_live_64.aac. When analyzing the audio stream itself, the StreamTitle is always empty. The .xspf file always has an empty title tag. However, I notice other apps and websites do succeed in gathering current-track info for this radio station. The homepage of the radio station also shows the current track/playlist info: https://qmusic.be/playlist/qmusic.
I know I could write an HTML scraper that extracts this data, but I only want to use that as a last resort. Besides, I have multiple streams with similar problems, and scraping would not be a generic solution that can be applied to all of them.
So, am I missing something? Is there another general way in which metadata can be extracted from an icecast/shoutcast server? I also tried using the 7.html file and the /stats?sid=1 endpoint, but did not have much luck with these approaches (as in: files not present/invalid URLs). Below is a PHP script from one of the hyperlinks above that works in some cases. Any help or feedback would be greatly appreciated!
PS: Sorry for the mixup of tools/frameworks/languages. Tried lots of stuff here. Extra thanks for answers that are React-Native compatible!
<?php
// Reads the ICY metadata block from an MP3 stream and returns the StreamTitle.
function getMp3StreamTitle($streamingUrl, $interval, $offset = 0, $headers = true)
{
    $needle = 'StreamTitle=';
    $opts = [
        'http' => [
            'header' => 'Icy-MetaData: 1', // ask the server to interleave metadata
            'user_agent' => 'Mozilla'
        ]
    ];
    // On the first call only, read the icy-metaint header to learn the metadata interval.
    if ($headers && ($response = get_headers($streamingUrl))) {
        foreach ($response as $h) {
            if (stripos($h, 'icy-metaint') !== false && ($interval = (int) explode(':', $h)[1])) {
                break;
            }
        }
    }
    $context = stream_context_create($opts);
    if (!($stream = fopen($streamingUrl, 'r', false, $context))) {
        throw new Exception("Unable to open stream [{$streamingUrl}]");
    }
    $buffer = stream_get_contents($stream, $interval, $offset);
    fclose($stream);
    if (strpos($buffer, $needle) !== false) {
        // The metadata block looks like: StreamTitle='Artist - Title';
        $title = explode($needle, $buffer)[1];
        return substr($title, 1, strpos($title, ';') - 2);
    }
    // No metadata in this chunk yet; skip ahead by one interval and retry.
    return getMp3StreamTitle($streamingUrl, $interval, $offset + $interval, false);
}

var_dump(getMp3StreamTitle('http://icecast.vrtcdn.be/stubru-high.mp3', 16000));
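(Note: Icecast 2.4+ also exposes a JSON status document at /status-json.xsl in the server root, which sometimes carries the current title even when the in-stream metadata is empty. Below is a minimal sketch of querying it, untested against the streams above; the exact shape of the icestats/source data varies per server, so treat the field names as assumptions to verify against your stream.)

<?php
// Hedged sketch: read now-playing info from Icecast's status-json.xsl
// (available on Icecast 2.4+). The key names ('icestats', 'source',
// 'title', 'listenurl') follow the common layout but can vary per server.
function getIcecastStatusTitle($serverBase, $mountUrl)
{
    $json = @file_get_contents(rtrim($serverBase, '/') . '/status-json.xsl');
    if ($json === false) {
        return null; // server does not expose the status document
    }
    $stats = json_decode($json, true);
    $sources = $stats['icestats']['source'] ?? [];
    // A single mount comes back as one object, multiple mounts as a list.
    if (isset($sources['listenurl'])) {
        $sources = [$sources];
    }
    foreach ($sources as $source) {
        if (($source['listenurl'] ?? '') === $mountUrl) {
            return $source['title'] ?? null;
        }
    }
    return null;
}

var_dump(getIcecastStatusTitle('http://icecast.vrtcdn.be', 'http://icecast.vrtcdn.be/stubru-high.mp3'));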

Related

Delayed response to slash command with Mojolicious in Perl

I am trying to create a Slack application in Perl with Mojolicious, and I have the following use case:
Slack sends a request to my API from a slash command and needs a response within a 3-second timeframe. However, Slack also gives me the opportunity to send up to 5 more responses in a 30-minute timeframe, but it still needs the initial response within 3 seconds (it just sends a "late_response_url" in the initial callback so that I can POST something to that URL later on). In my case, I would like to send an initial response to Slack to inform the user that the operation is "running", and after a while send the actual outcome of my slow function to Slack.
Currently, I can do this by spawning a second process using fork(), using one process to respond immediately as Slack dictates and the second to do the rest of the work and respond later on.
I am trying to do this with Mojolicious' subprocesses to avoid using fork(). However, I can't find a way to get this to work.
Sample code of what I am already doing with fork() looks like this:
sub withpath
{
    my $c    = shift;
    my $user = $c->param('user_name');
    my $response_url = $c->param('response_url'); # the late_response_url from Slack
    my $response_body = {
        response_type => "ephemeral",
        text          => "Running for $user:",
        attachments   => [
            { text => 'analyze' },
        ],
    };
    my $pid = fork();
    if ($pid != 0) {
        # Parent: answer Slack within the 3-second window.
        $c->render( json => $response_body );
    } else {
        # Child: do the slow work, then POST the result to the late response URL.
        my $output = do_time_consuming_things();
        $response_body = {
            response_type => "in-channel",
            text          => "Result for $user:",
            attachments   => [
                { text => $output },
            ],
        };
        my $ua = Mojo::UserAgent->new;
        my $tx = $ua->post(
            $response_url,
            { Accept => '*/*' },
            json => $response_body,
        );
        if ( my $res = $tx->success ) {
            print "\n success \n";
        }
        else {
            my $err = $tx->error;
            print "$err->{code} response: $err->{message}\n" if $err->{code};
            print "Connection error: $err->{message}\n";
        }
        exit; # don't let the child fall back into the web app
    }
}
So the problem is that no matter what I tried, I couldn't replicate the exact same code with Mojolicious' subprocesses. Any ideas?
Thanks in advance!
Actually I just found a solution to my problem!
So here is my solution:
my $c = shift;                               # receive request
my $user = $c->param('user_name');           # get parameters
my $response_url = $c->param('response_url');
my $text = $c->param('text');
my $response_body = {                        # create the immediate response Slack is waiting for
    response_type => "ephemeral",
    text          => "Running for $user:",
    attachments   => [
        { text => 'analyze' },
    ],
};
my $subprocess = Mojo::IOLoop::Subprocess->new;  # create the subprocess
$subprocess->run(
    # This first callback is the actual subprocess that runs in the background
    # and contains the POST request from my "fork" code (with the output)
    # that sends the late response to Slack.
    sub { do_time_consuming_things($user, $response_url, $text) },
    # This second callback is a dummy doing nothing, as it is required by Mojo.
    sub {
        my ($subprocess, $err, @results) = @_;
        say $err if $err;
        say "\n\nok\n\n";
    }
);
# And here is the actual immediate response, outside of the subprocess, in
# order to avoid making the server wait for the subprocess to finish before
# responding!
$c->render( json => $response_body );
So I actually simply had to put my do_time_consuming_things code in the first callback (in order for it to run as a subprocess), use the second callback (which is actually linked to the parent process) as a dummy one, and keep my "immediate" response in the main body of the whole function instead of putting it inside one of the subprocess callbacks. See the code comments for more information!

Perl Dancer2 infinite-loop on get when calling a method

I made a photobooth with Dancer some years ago and it worked fine.
Now I am trying to move it to Dancer2. However, it's not working anymore, because I get an infinite loop.
Let's say my app looks like this:
package App;
use Dancer2;
# Photobox is my pm file with all the magic
use Photobox;
my $photobox = Photobox->new();

get '/photo' => sub {
    my $photo;
    # Trigger cam and download photo. It returns the filename of the photo.
    $photo = $photobox->takePicture();
    # Template photo is to show the photo
    template 'photo',
    {
        'photo_filename' => $photo,
        'redirect_uri'   => "someURI"
    };
};
takePicture() looks like this:
sub takePicture {
    my $Objekt = shift;
    my $return;
    my $capture;
    $return = `gphoto2 --auto-detect`;
    if ($return =~ m/usb:/) {
        $capture = `gphoto2 --capture-image-and-download --filename=$photoPath$filename`;
        if (!-e $photoPath.$filename) {
            return "no-photo-error.png";
        }
        else {
            return $filename;
        }
    } else {
        die "Camera not found: $return";
    }
}
When I now call /photo, it results in an infinite loop. The browser keeps "refreshing" all the time and my cam is shooting one photo after the other. But it never redirects to /showphoto.
It was working with Dancer(1) when I ran the application with perl app.pl from the bin directory. Now I use Dancer2 and run it using plackup app.psgi.
I tried to put it into a before hook, but it changed nothing.
Update:
I figured out a way to work around this issue.
First I refactored my code a bit. The basic idea was to split the take-photo and show-photo operations into two different routes. This makes it easier to see what happens.
get '/takesinglephoto' => sub {
    my $photo;
    $photo = takePicture();
    $single_photo = $photo;
    redirect '/showsinglephoto';
};
get '/showsinglephoto' => sub {
    set 'layout' => 'fotobox-main';
    template 'fotobox_fotostrip',
    {
        'foto_filename' => $single_photo,
        'redirect_uri'  => "fotostrip",
        'timer'         => $timer,
        'number'        => 'blank'
    };
};
And I moved the takePicture method just into my Dancer main App.pm.
Now I recognized from the log output that the browser does not load the '/takesinglephoto' page once, but refreshes it every few seconds. I think the reason is that takePicture() takes some seconds to run and return its output, and Dancer does not wait until it ends. With every reload, it triggers takePicture() again, and that causes the infinite loop.
I worked around this by implementing a simple check to run takePicture() just once.
# define a variable set to 1 / true
my $do_stuff_once = 1;

get '/takesinglephoto' => sub {
    my $photo;
    # check if variable is true
    if ($do_stuff_once == 1) {
        $photo = takePicture();
        $single_photo = $photo;
        # set variable to false
        $do_stuff_once = 0;
    }
    redirect '/showsinglephoto';
};
get '/showsinglephoto' => sub {
    # set variable back to true
    $do_stuff_once = 1;
    set 'layout' => 'fotobox-main';
    template 'fotobox_fotostrip',
    {
        'foto_filename' => $single_photo,
        'redirect_uri'  => "fotostrip",
        'timer'         => $timer,
        'number'        => 'blank'
    };
};
Now it still refreshes /takesinglephoto, but it does not trigger takePicture() again and again; finally, when the method returns the photo filename, it redirects to /showsinglephoto.
I would call this a workaround. Is there a better way to solve this?
BR
Arne

How to match a result to a request when sending multiple requests?

A. Summary
As the title says, Guzzle allows sending multiple requests at once to save time, as shown in the documentation:
$responses = $client->send(array(
    $requestObj1,
    $requestObj2,
    ...
));
(given that each request object is an instance of
Guzzle\Http\Message\EntityEnclosingRequestInterface)
When the responses come back, we can identify which response is for which request by looping through each request and getting its response (only available after executing the above command):
$response1 = $requestObj1->getResponse();
$response2 = $requestObj2->getResponse();
...
B. Problem
If the request objects contain the same data, it's impossible to identify the original request.
Assume we have the following scenario, where we need to create 2 articles, A and B, on a remote server: something.com/articles/create.json
Each request has the same POST data:
subject: This is a test article
After they are created, the two Guzzle responses come back with two locations:
something.com/articles/223.json
something.com/articles/245.json
Using the above method to link a response to its request, we still don't know which response is for which article, because the request objects are exactly the same.
Hence in my database I cannot write down the result:
article A -> Location: 245.json
article B -> Location: 223.json
because it could be the other way around:
article A -> Location: 223.json
article B -> Location: 245.json
A solution is to put some extra parameter in the POST request, e.g.
subject: This is a test article
record: A
However, the remote server will return an error and not create the article, because it does not understand the key "record". It is a third-party server and I cannot change the way it works.
Another proper solution would be to set some specific id/tag on the request object, so we can identify it afterwards. However, I've looked through the documentation and there is no method to uniquely identify a request, like
$request->setID("id1")
or
$request->setTag("id1")
This has been bugging me for months and I still cannot resolve this issue.
If you have a solution, please let me know. Many, many thanks, and thanks for reading this long post!
I've found a proper way to do it: Guzzle allows adding a callback for when a request completes, so we can achieve this by setting one on each request in the batch.
By default, each request can be created like this:
$request = $client->createRequest('GET', 'http://httpbin.org', [
    'headers' => ['X-Foo' => 'Bar']
]);
So, to achieve what we want:
$allRequests = [];
$allResults = [];
for ($k = 0; $k <= 10; $k++) {
    $allRequests['key_'.$k] = $client->createRequest('GET', 'http://httpbin.org?id='.$k, [
        'headers' => ['X-Foo' => 'Bar'],
        'events' => [
            'complete' => function ($e) use (&$allResults, $k) {
                $response = $e->getResponse();
                $allResults['key_'.$k] = $response->getBody().'';
            }
        ]
    ]);
}
$client->sendAll(array_values($allRequests));
print_r($allResults);
So now $allResults has the result for each corresponding request; e.g. $allResults['key_1'] is the result of $allRequests['key_1'].
I was having the same problem with this.
I solved it by adding a custom query parameter with a unique id generated for each request and appending it to the request URL (you will need to remember these ids to address the requests afterwards).
After $responses = $client->send($requests), you can iterate through the responses, retrieve the effective URL with $response->getEffectiveUrl(), and parse it (see parse_url and parse_str) to get the custom parameter (with the unique id); then search your array of requests for the one that has it.
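A minimal sketch of this idea, against the Guzzle 3-style batch API used in the question; the _match_id parameter name and the article loop are illustrative assumptions:

// Hedged sketch: tag each request URL with a unique id, then recover the id
// from the response's effective URL. '_match_id' is an arbitrary name that
// the remote server should simply ignore.
$requestsById = [];
foreach (['A', 'B'] as $article) {
    $id = uniqid('req_', true);
    $request = $client->post(
        'http://something.com/articles/create.json?_match_id=' . $id,
        array(),
        array('subject' => 'This is a test article')
    );
    $requestsById[$id] = array('article' => $article, 'request' => $request);
}
$responses = $client->send(array_column($requestsById, 'request'));
foreach ($responses as $response) {
    // Recover the id from the URL the response was actually fetched from.
    parse_str(parse_url($response->getEffectiveUrl(), PHP_URL_QUERY), $query);
    $article = $requestsById[$query['_match_id']]['article'];
    echo "article {$article} -> Location: " . $response->getHeader('Location') . "\n";
}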
I found a much better answer.
I was sending batches of 20 requests at a time, 4 concurrently, and used the pooling technique where I got fulfilled and rejected callbacks, as in the documentation.
I found that I could add this code to the end of my requestAsync() calls, when yielding / building the array (I do both in different places):
$request = $request->then(function (\GuzzleHttp\Psr7\Response $response) use ($source_db_object) {
    $response->_source_object = $source_db_object;
    return $response;
});
And then, in the closures on the pool, I can just access _source_object on the response normally, and it works great.
I find it a little hacky, but if you just make sure to use a name that NEVER clashes with anything in Guzzle, this should be fine.
Here is a full example:
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Response as GuzzleResponse;

$client = new Client();
$requests = [];

// Simple set-up here, generate some random async requests
for ($i = 0; $i < 10; $i++) {
    $request = $client->requestAsync('GET', 'https://jsonplaceholder.typicode.com/todos/1');
    // Here we can attach any identifiable data
    $request->_source_object = $i;
    array_push($requests, $request);
}

$generator = function () use ($requests) {
    while ($request = array_pop($requests)) {
        yield function () use ($request) {
            return $request->then(function (GuzzleResponse $response) use ($request) {
                // Attach _source_object from the request to the response
                $response->_source_object = $request->_source_object ?? [];
                return $response;
            });
        };
    }
};

$requestPool = new Pool($client, $generator(), [
    'concurrency' => 5,
    'fulfilled' => function ($response) {
        // Then we can properly access the _source_object data once the response has arrived here!
        echo $response->_source_object . "\n";
    }
]);
$requestPool->promise()->wait();
I do it this way:
// create your requests
$requests[] = $client->createRequest('GET', '/endpoint', ['config' => ['order_id' => 123]]);
...
// in your success callback get:
$id = $event->getRequest()->getConfig()['order_id'];
An update related to the new GuzzleHttp (guzzlehttp/guzzle): concurrent/parallel calls are now run through a few different methods, including Promises (see "Concurrent Requests" in the documentation). The old way of passing an array of RequestInterfaces will not work anymore.
See the example here:
$newClient = new \GuzzleHttp\Client(['base_uri' => $base]);
foreach ($documents->documents as $doc) {
    $params = [
        'language' => 'eng',
        'text'     => $doc->summary,
        'apikey'   => $key
    ];
    $requestArr[$doc->reference] = $newClient->getAsync('/1/api/sync/analyze/v1?' . http_build_query($params));
}
$time_start = microtime(true);
$responses = \GuzzleHttp\Promise\unwrap($requestArr); // $newClient->send($requestArr);
$time_end = microtime(true);
$this->get('logger')->error(' NewsPerf Dev: took ' . ($time_end - $time_start));
In this example you will be able to refer to each response using $responses[$doc->reference], since unwrap() preserves the keys of the promise array. In short: give an index to your array and run the Promise::unwrap call.
I had also come across this issue; this was the first thread coming up. I know this is a resolved thread, but I eventually came up with a better solution, for all those who might encounter the issue.
One of the options is to use Guzzle's Pool::batch.
What batch() does is push the results of the pooled requests into an array and return that array. This ensures that the responses are in the same order as the requests.
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;
use GuzzleHttp\Exception\RequestException;

$client = new Client();

// Create the requests
$requests = function ($total) use ($client) {
    for ($i = 1; $i <= $total; $i++) {
        yield new Request('GET', 'http://www.example.com/foo' . $i);
    }
};

// Use Pool::batch(); results come back indexed in the same order as the requests
$pool_batch = Pool::batch($client, $requests(5));
foreach ($pool_batch as $pool => $res) {
    if ($res instanceof RequestException) {
        // Handle the failed request
        continue;
    }
    // Handle the successful response
}

How to upload a file using Mojolicious?

I have been trying out the Mojolicious web framework, which is based on Perl, and I have tried to develop a full application instead of a Lite one. The problem I am facing is that I am trying to upload files to the server, but the code below is not working.
Please guide me on what is wrong with it. Also, if the file gets uploaded, does it end up in the public folder of the application or some place else?
Thanks in advance.
sub posted {
    my $self   = shift;
    my $logger = $self->app->log;
    my $filetype = $self->req->param('filetype');
    my $fileuploaded = $self->req->upload('upload');
    $logger->debug("filetype: $filetype");
    $logger->debug("upload: $fileuploaded");

    return $self->render(message => 'File is not available.')
        unless ($fileuploaded);

    return $self->render(message => 'File is too big.', status => 200)
        if $self->req->is_limit_exceeded;

    # Render template "example/posted.html.ep" with message
    $self->render(message => 'Stuff Uploaded in this website.');
}
(First, you need an HTML form with method="post" and enctype="multipart/form-data", and an input type="file" with name="upload". Just to be sure.)
If there were no errors, $fileuploaded will be a Mojo::Upload. You can then check its size, inspect its headers, slurp its contents, or move it with $fileuploaded->move_to('path/file.ext').
Taken from a strange example.
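For reference, a minimal sketch of such a form; the action path /posted is an assumption to match the route for the handler above:

<!-- minimal upload form sketch; adjust action to your actual route -->
<form action="/posted" method="post" enctype="multipart/form-data">
    <input type="text" name="filetype">
    <input type="file" name="upload">
    <input type="submit" value="Upload">
</form>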
To process uploaded files you should use $c->req->uploads:
post '/' => sub {
    my $c = shift;
    my @files;
    for my $file (@{$c->req->uploads('files')}) {
        my $size = $file->size;
        my $name = $file->filename;
        push @files, "$name ($size)";
        $file->move_to("C:\\Program Files\\Apache Software Foundation\\Apache24\\htdocs\\ProcessingFolder\\".$name);
    }
    $c->render(text => "@files");
} => 'save';
See full code here: https://stackoverflow.com/a/28605563/4632019
You can use Mojolicious::Plugin::RenderFile.

Twitter RSS feed, [domdocument.load]: failed to open stream:

I'm using the following:
<?php
$doc = new DOMDocument();
$doc->load('http://twitter.com/statuses/user_timeline/XXXXXX.rss');
$arrFeeds = array();
foreach ($doc->getElementsByTagName('item') as $node) {
    $itemRSS = array(
        'title' => $node->getElementsByTagName('title')->item(0)->nodeValue,
        'desc'  => $node->getElementsByTagName('description')->item(0)->nodeValue,
        'link'  => $node->getElementsByTagName('link')->item(0)->nodeValue,
        'date'  => $node->getElementsByTagName('pubDate')->item(0)->nodeValue
    );
    array_push($arrFeeds, $itemRSS);
}
$tweetsBox = '';
for ($i = 0; $i <= 3; $i++) {
    $tweet = substr($arrFeeds[$i]['title'], 17);
    $tweetDate = strtotime($arrFeeds[$i]['date']);
    $newDate = date('G:ia l F Y ', $tweetDate);
    // suppress the border on the first tweet only
    $b = ($i == 0) ? 'style="border:none;"' : '';
    $tweetsBox .= '<div class="tweetbox" ' . $b . '>
        <div class="tweet"><p>' . $tweet . '</p>
        <div class="tweetdate">#' . $newDate . '</div>
        </div>
        </div>';
}
return $tweetsBox;
?>
to return the 4 most recent tweets from a given timeline (XXXXXX is the relevant feed).
It seems to work fine, but I've recently been getting the following error sporadically:
PHP error debug
Error: DOMDocument::load(http://twitter.com/statuses/user_timeline/XXXXXX.rss) [domdocument.load]: failed to open stream: HTTP request failed! HTTP/1.1 502 Bad Gateway
I've read that the above code is dependent on Twitter being available, and I know it gets rather busy sometimes. Is there either a better way of receiving tweets, or is there any kind of error trapping I could do to just display a "tweets are currently unavailable..." kind of message rather than causing an error? I'm using ModX CMS, so any parse error kills the site rather than just outputting a warning.
thanks.
I know this is old, but I was just searching for the same solution for a nearly identical script for grabbing a Twitter timeline. I ended up doing this, though I haven't been able to test it thoroughly.
I defined the Twitter URL as a variable ($feedURL), which I also used in $doc->load(). Then, I wrapped everything except the $feedURL in this conditional statement:
$feedURL = "http://twitter.com/statuses/user_timeline/XXXXXXXX.rss";
$headers = @get_headers($feedURL);
if (preg_match("/200/", $headers[0])) {
    // the rest of your original code in here
}
else {
    echo "Can't connect user-friendly message (or a fake tweet)";
}
So, it's just checking the headers of the feed's page, and if its status is 200 (OK), the rest of the script will execute. Otherwise, it'll echo a message of your choice.
(reference: http://www.phptalk.com/forum/topic/3940-how-to-check-if-an-external-url-is-valid-andor-get-file-size/ )
ETA: Or even better, save a cached version of the feed (which will also ensure you don't go over your API limit of loads):
<?php
$cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';

// Start with the cache
if (file_exists($cache_file)) {
    $mtime = (strtotime("now") - filemtime($cache_file));
    if ($mtime > 600) {
        $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
        $cache_static = fopen($cache_file, 'wb');
        fwrite($cache_static, $cache_rss);
        fclose($cache_static);
    }
    echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->";
}
else {
    $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss');
    $cache_static = fopen($cache_file, 'wb');
    fwrite($cache_static, $cache_rss);
    fclose($cache_static);
}
// End of caching
?>
Then use $cache_file in your $doc->load($cache_file) statement instead of the actual feed URL.
(Adapted from here: http://snipplr.com/view/8156/twitter-cache/).
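For completeness, a hedged sketch of wiring the cached file into the original script; the cache path matches the snippet above, and the fallback message is just an example:

<?php
$cache_file = dirname(__FILE__).'/cache/twitter_cache.rss';
$doc = new DOMDocument();
// Load from the local cache instead of the live feed URL; fall back to a
// friendly message if the cache is missing or unreadable.
if (file_exists($cache_file) && @$doc->load($cache_file)) {
    // ...parse $doc->getElementsByTagName('item') as in the original script
} else {
    echo 'tweets are currently unavailable...';
}
?>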