Long MP3 uploaded to Azure Speech-to-Text API returns no results after a significant period - azure-speech

I uploaded a long MP3 file (around 8 hours) to Azure's Speech-to-Text API, using this. However, 16 hours later there are still no transcript files available when I check using this.
I have previously done the same process with a 7-hour-long video and received the results without any issues.
Is there a way to check the status of the transcription process?

For us to check what is going on here, we would need to know the region and ideally the transcription ID. Then we can look into our system and search the log files for any information about the processing status.
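In the meantime, if you submitted the job through the batch transcription REST API, you can poll the job yourself. The sketch below assumes the v3.0 endpoint and uses placeholder values for the region, subscription key, and transcription ID:

```python
import requests

# Placeholders: substitute your own region, Speech subscription key, and the
# transcription ID (GUID) returned when the batch transcription job was created.
REGION = "westus"
SUBSCRIPTION_KEY = "<your-speech-subscription-key>"
TRANSCRIPTION_ID = "<your-transcription-id>"

url = (f"https://{REGION}.api.cognitive.microsoft.com"
       f"/speechtotext/v3.0/transcriptions/{TRANSCRIPTION_ID}")
headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

resp = requests.get(url, headers=headers)
resp.raise_for_status()
job = resp.json()

# The status field is typically NotStarted, Running, Succeeded, or Failed.
print("status:", job.get("status"))

# Once the job reports Succeeded, the transcript files can be listed from the
# /files sub-resource of the same transcription.
if job.get("status") == "Succeeded":
    files = requests.get(url + "/files", headers=headers)
    print(files.json())
```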

Related

How to archive live streaming in Azure Media Services

I am trying to use Azure Media Services for 24x7 live streaming and also want to persist the streamed video. The documentation says a Live Output can set archiveWindowLength up to 25 hours for VOD, but I am not able to persist the entire streamed video.
Any idea how to achieve this? I am quite new to this area. Any help is appreciated.
The DVR window length for a single LiveOutput is 25 hours. The reason for the 25 hours is to provide a one-hour overlap so you can switch to a second LiveOutput with a new Asset underneath.
Typically, the way I set this up is to have an Azure Function and Logic App running on a timer to ping-pong between two LiveOutputs. You have to create a new LiveOutput with a new Asset each time.
Think of LiveOutputs as "tape recorders" and the Asset as the "tape". You have to swap between tape recorders and switch tapes every xx hours.
You do not necessarily have to wait a full 25 hours, though. I actually recommend not doing that, because the manifest gets really huge. Loading such a large HLS or DASH manifest on a client can really strain memory and cause some bad things to happen. So you could consider doing the ping-pong between your "tape recorders" every hour.
If you wish to "publish" the live event to your audience with a smaller DVR window (say 10 or 30 minutes), you could additionally create a third LiveOutput and Asset, set its DVR window to 30 minutes, and leave it running forever.
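For illustration, here is a minimal Python sketch of that ping-pong schedule. The Media Services calls themselves are left as placeholder functions (the exact SDK or REST calls depend on the API version you use); only the alternation between the two "tape recorders" is shown, and in a real deployment an Azure Function timer trigger would replace the sleep loop.

```python
import time
from datetime import datetime

ROTATION_HOURS = 1  # swap "tape recorders" every hour, as suggested above

def create_live_output(name, asset_name):
    # Placeholder for the Media Services call that creates a LiveOutput
    # recording into a freshly created Asset.
    print(f"{datetime.utcnow()}: create LiveOutput {name} -> Asset {asset_name}")

def delete_live_output(name):
    # Placeholder for the Media Services call that deletes a LiveOutput once
    # its recording window is finished.
    print(f"{datetime.utcnow()}: delete LiveOutput {name}")

def run_rotation():
    turn = 0
    previous = None
    while True:
        current = f"liveoutput-{turn % 2}"            # ping-pong between two names
        asset = f"archive-{datetime.utcnow():%Y%m%d%H%M}"
        create_live_output(current, asset)            # start the new recorder first
        if previous:
            delete_live_output(previous)              # then retire the old one
        previous = current
        turn += 1
        time.sleep(ROTATION_HOURS * 3600)             # a timer trigger in practice

if __name__ == "__main__":
    run_rotation()
```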

Google Analytics API Data Age?

I'm using the Core Reporting API (Reporting API V4).
Is there any way for me to determine the last time the data my query is returning was updated?
I'd like to be able to indicate whether the data being displayed was last updated several hours ago versus several minutes ago.
The API does respond with isDataGolden, which tells you whether the data will change again; if your website is small, the data processing latency could be almost nothing.
From your question it sounds like you are interested not just in whether the data is stale, but in how stale. You could request the ga:hour and ga:minute dimensions to find out when the last processed hit was recorded.
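As a rough sketch of that request, assuming the google-api-python-client library, a service account with read access, and a placeholder view ID, you could ask for ga:hour and ga:minute alongside a metric and read isDataGolden off the report:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file and view ID; adjust to your own property.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "12345678",
        "dateRanges": [{"startDate": "today", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        # ga:hour and ga:minute show when the last processed hit was recorded.
        "dimensions": [{"name": "ga:hour"}, {"name": "ga:minute"}],
    }]
}).execute()

report = response["reports"][0]
# isDataGolden is True once the data will no longer change.
print("isDataGolden:", report["data"].get("isDataGolden"))

rows = report["data"].get("rows", [])
if rows:
    latest = max(rows, key=lambda r: tuple(r["dimensions"]))
    hour, minute = latest["dimensions"]
    print(f"last processed hit around {hour}:{minute}")
```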
Note there is also the Realtime API, which gives you a read of what is happening instantaneously.

Getting notified when the 155 error (requests/sec limit reached) occurs (Unity)

I am developing an app with Unity using Parse. I know I can increase the req/s limit, but I would rather not do that in advance. The question is: how can I get notified (by email, maybe) if the limit is reached at some point?
I first tried to catch the 155 error code and call a URL from Unity that then notifies me via a script. Later I found out that in Unity I can only get error code -1 (other cause), so I am looking for another solution, e.g. Cloud Code so that Parse sends me an email.
I know I can access the data in the dashboard and see whether the limit was reached, but I cannot be watching that 24 hours a day (maybe there is a way to query that data?)
I'm not sure, but you could use a robot-like script written so that it keeps hitting your dashboard and checks a specific parameter there. Once the parameter exceeds some value, it triggers an automated email to you.
This is how we check the health of our application in terms of app up/app down/app usage.
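A rough sketch of such a watcher, in Python, might look like the following. The metrics URL is hypothetical (Parse does not expose an official endpoint for the req/s figure, so you would have to point this at whatever dashboard or analytics source you can actually query), and the threshold is just an example:

```python
import smtplib
import time
from email.message import EmailMessage

import requests

# Hypothetical values: Parse does not expose an official endpoint for the
# req/s figure, so METRICS_URL stands in for whatever dashboard or analytics
# source (plus authentication) you can actually query.
METRICS_URL = "https://example.com/parse-dashboard/requests-per-second"
THRESHOLD = 25              # the free tier's burst limit was around 30 req/s
CHECK_EVERY_SECONDS = 60

def send_alert(current_value):
    msg = EmailMessage()
    msg["Subject"] = f"Parse request rate at {current_value} req/s"
    msg["From"] = "monitor@example.com"
    msg["To"] = "you@example.com"
    msg.set_content("The request rate crossed the configured threshold.")
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

while True:
    value = float(requests.get(METRICS_URL).text)   # adapt parsing to your source
    if value >= THRESHOLD:
        send_alert(value)
    time.sleep(CHECK_EVERY_SECONDS)
```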

What is the upload limit on SoundCloud?

I sometimes get the error { error_message: 'Sorry, you\'ve exceeded your upload limit.' } when I post sound files to SoundCloud using their HTTP API.
I couldn't find any explanation for this 'upload limit' in their documentation.
Does anyone know if it's a daily limit, a size limit, or a combination of both?
Sparko is mostly right. The only difference is that you can tell how much remaining time you have by requesting the current user details (GET /me); there will be a key called upload_seconds_remaining.
Free users get 2 hours. Pro gets 4 hours. Pro Unlimited is unlimited. Regardless of the plan, individual tracks also cannot be longer than ~6.5 hours (I forget the exact number).
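A minimal sketch of that check, assuming a valid OAuth token and the field name mentioned above (which may differ in newer API versions):

```python
import requests

TOKEN = "<your-oauth-token>"  # placeholder

me = requests.get("https://api.soundcloud.com/me",
                  headers={"Authorization": f"OAuth {TOKEN}"}).json()
# Field name taken from the answer above; it may differ in newer API versions.
print("upload seconds remaining:", me.get("upload_seconds_remaining"))
```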
Individual files cannot exceed 500 MB (see Uploading Audio Files).
However, I'd imagine this relates to your overall limit for uploading audio to SoundCloud based on the plan attached to the account you're posting to, i.e. exceeding the 2 hours provided by the free plan.
The API doesn't appear to provide a property for the remaining time available to the user, although you could infer this from the [user] plan and by looping through all of their tracks and summing each [track] duration (although that's probably not advised).
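If you did want to estimate it that way, the loop would look roughly like this; the plan names, quota values, and the single-page /me/tracks call are assumptions (a real implementation would need to paginate):

```python
import requests

TOKEN = "<your-oauth-token>"  # placeholder
HEADERS = {"Authorization": f"OAuth {TOKEN}"}

# Assumed quota table: 2 hours free, 4 hours Pro (Pro Unlimited has no cap).
PLAN_QUOTA_MS = {"Free": 2 * 3600 * 1000, "Pro": 4 * 3600 * 1000}

me = requests.get("https://api.soundcloud.com/me", headers=HEADERS).json()
tracks = requests.get("https://api.soundcloud.com/me/tracks",
                      headers=HEADERS, params={"limit": 200}).json()

used_ms = sum(track["duration"] for track in tracks)  # durations are in milliseconds
quota_ms = PLAN_QUOTA_MS.get(me.get("plan"))
if quota_ms is not None:
    print("estimated milliseconds remaining:", quota_ms - used_ms)
```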

Need inputs on optimising web service calls from Perl

Current implementation:
Divide the original file into files equal to the number of servers.
Ensure each server picks one file for processing.
Each server splits the file into 90 buckets.
Use ForkManager to fork 90 processes, each operating on a bucket.
The child processes will make the API calls.
Merge the output of child processes.
Merge the output of each server.
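For reference, the fan-out/merge shape described above looks roughly like this in Python (the original uses Perl's Parallel::ForkManager; the API URL here is a placeholder):

```python
from concurrent.futures import ProcessPoolExecutor

import requests

BUCKETS = 90  # matches the 90 forked workers described above

def process_bucket(user_ids):
    """Fetch the ~40 KB payload for each user in one bucket (placeholder URL)."""
    results = []
    for uid in user_ids:
        resp = requests.get(f"https://api.example.com/users/{uid}")  # hypothetical
        results.append((uid, resp.status_code, len(resp.content)))
    return results

def run(all_user_ids):
    # Split this server's share of users into 90 roughly equal buckets, fan out
    # one worker process per bucket, then merge the outputs at the end.
    buckets = [all_user_ids[i::BUCKETS] for i in range(BUCKETS)]
    merged = []
    with ProcessPoolExecutor(max_workers=BUCKETS) as pool:
        for bucket_result in pool.map(process_bucket, buckets):
            merged.extend(bucket_result)
    return merged
```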
Stats:
The size of the content downloaded using the API call is 40 KB.
On 2 servers, the above process runs in 15 minutes for a file of 225k users. My aim is to finish a 10-million-user file in 30 minutes. (I hope this doesn't sound absurd!)
I contemplated using BerkeleyDB, but couldn't find out how to convert a BerkeleyDB file into a normal ASCII file.
This sounds like a one-time operation to me. Although I don't understand the 30-minute limit, I have a few suggestions I know from experience.
First of all, as I said in my comment, your bottleneck will not be reading the data from your files. It will also not be writing the results back to a hard drive. The bottleneck will be the transfer between your machines and the remote machines. Your setup sounds sophisticated, but that might not help you in this situation.
If you are hitting a web service, someone is running that service. There are servers that can only handle a certain amount of load. I have brought down the dev environment servers of a big logistics company with a very small load test I ran at night. Often these things are equipped for long-term load, but not for short, heavy load.
Since IT is all about talking to each other through various protocols, like web services or other APIs, you should also consider just talking to the people who run this service. If you have a business relationship, that is easy. If not, try to find a way to reach them and ask whether their service can handle that many requests at all. You could end up being excluded permanently, because to their admins it might look like you tried to DDoS them.
I'd ask them if you could send them the files (or an excerpt of the data, cut down to what is relevant for processing) so they can do the operations in batch on their side. That way, you remove the load of processing everything as web requests, and the time it takes to make those requests.