How can I monitor an mp3 live stream to detect corruption?

Once a month the MP3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and raise some kind of flag when it corrupts?
What happens is that normally it plays a song or some other music, but once a month, at a random time, the stream corrupts and turns into random chipmunk-like trash audio. Any ideas on this? I am just getting started with this and have no idea at all.

Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders that go straight from files will compress to MP3 just fine, but assume a fixed sample rate of whatever they are configured for, typically 44.1kHz. If you drop in a 48kHz track, or a 22.05kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify. Simply create a file of a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it, since your stream isn't actually corrupt... it just sounds incorrect. You will have to scan all of your files for sample rate. FFMPEG in a script should be able to help you with that.
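If you go that route, a minimal sketch of such a scanner using libavformat might look like the following. The expected rate of 44100 Hz is an assumption based on the typical encoder configuration mentioned above; swap in whatever yours uses.

/* Hedged sketch: report each file's audio sample rate with libavformat and
 * flag anything that does not match the rate the encoder is configured for. */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.mp3 [more files...]\n", argv[0]);
        return 1;
    }

    const int expected_rate = 44100;  /* assumption: encoder is configured for 44.1 kHz */

    for (int i = 1; i < argc; i++) {
        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[i], NULL, NULL) < 0) {
            fprintf(stderr, "%s: cannot open\n", argv[i]);
            continue;
        }
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            fprintf(stderr, "%s: no stream info\n", argv[i]);
            avformat_close_input(&fmt);
            continue;
        }
        for (unsigned s = 0; s < fmt->nb_streams; s++) {
            AVCodecParameters *par = fmt->streams[s]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_AUDIO) {
                printf("%s: %d Hz%s\n", argv[i], par->sample_rate,
                       par->sample_rate != expected_rate ? "  <-- MISMATCH" : "");
            }
        }
        avformat_close_input(&fmt);
    }
    return 0;
}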
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a genuinely corrupt MP3 stream, your encoder must have CRC protection enabled. If it does, you should be able to read through the headers of each frame to find the CRC, and then run the check on the audio data. In the event you get an error (or several frames with errors), you can then trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
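For illustration, here is a rough C sketch of walking those frame headers in a captured chunk of the stream. It assumes MPEG-1 Layer III, treats the input file name as a placeholder, and only locates the sync word and CRC/protection fields rather than verifying the CRC-16 (polynomial 0x8005 over the last two header bytes plus the side information).

/* Minimal sketch: walk MP3 frames in bytes captured from the stream,
 * check the frame sync, and report whether each frame carries a CRC. */
#include <stdio.h>
#include <stdint.h>

static const int bitrates_v1l3[16] = { 0, 32, 40, 48, 56, 64, 80, 96,
                                       112, 128, 160, 192, 224, 256, 320, 0 };
static const int samplerates_v1[4] = { 44100, 48000, 32000, 0 };

int main(int argc, char **argv)
{
    /* argv[1] is assumed to be a file with raw bytes saved from the stream */
    FILE *f = fopen(argc > 1 ? argv[1] : "capture.mp3", "rb");
    if (!f) { perror("open"); return 1; }

    uint8_t h[4];
    long pos = 0;
    while (fread(h, 1, 4, f) == 4) {
        if (h[0] == 0xFF && (h[1] & 0xE0) == 0xE0) {        /* frame sync */
            int crc_protected = !(h[1] & 0x01);             /* protection bit is 0 when a CRC follows */
            int bitrate = bitrates_v1l3[h[2] >> 4];          /* assumes MPEG-1 Layer III */
            int srate   = samplerates_v1[(h[2] >> 2) & 0x3];
            int padding = (h[2] >> 1) & 0x1;
            if (bitrate && srate) {
                int framelen = 144 * bitrate * 1000 / srate + padding;
                printf("frame @%ld: %d kbps, %d Hz, CRC %s\n",
                       pos, bitrate, srate, crc_protected ? "present" : "absent");
                fseek(f, framelen - 4, SEEK_CUR);            /* skip to next header (past the 2 CRC bytes if present) */
                pos += framelen;
                continue;
            }
        }
        fseek(f, -3, SEEK_CUR);                              /* no valid header here, resync byte by byte */
        pos += 1;
    }
    fclose(f);
    return 0;
}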

Related

Understanding the role of time in a AVCaptureSession regarding CMSampleBuffers

I recently started programming in Swift as I am trying to work out an iOS camera app idea I've had. The main goal of the project is to save the prior 10 seconds of video before the record button is tapped. So the app is actually always capturing and storing frames, but also discarding the frames that are more than 10 seconds old if the app is not 'recording'.
My approach is to output video and audio data from the AVCaptureSession using AVCaptureVideoDataOutput() and AVCaptureAudioDataOutput() respectively. In captureOutput() I receive a CMSampleBuffer for both video and audio, which I store in separate arrays. I would like those arrays to later serve as input for the AVAssetWriter.
This is the point where I'm not sure about the role of time and timing regarding the sample buffers and the capture session in general, because in order to present the sample buffers to the AVAssetWriter as an input (I believe) I need to make sure my video and audio data are the same length (duration wise) and synchronized.
I currently need to figure out at what rate the capture session is running, or how I can set that rate. Ideally I would have one audioSampleBuffer for each videoSampleBuffer, representing both the exact same duration. I don't know what realistic values are, but in the end my goal is to output 60fps, so it would be perfect if the videoSampleBuffer would contain 1 frame and the audioSampleBuffer would represent 1/60th of a second. I then could easily append the newest sample buffers to the arrays and drop the oldest.
I've of course done some research regarding my problem, but wasn't able to find what I was looking for.
My initial thought was that I had to let the capture session run at some sort of set timescale, but I didn't see such an option in the AVFoundation documentation. I then looked into Core Media to see if there was some way to set the clock the capture session was using, but I couldn't find a way to tell the session to use a different CMClock (with properties I know), so I gave up on this route. I still wasn't sure about the internal mechanics and timing of the capture session, so I tried to find more information about it, but without much luck. I've also stumbled on the synchronizationClock property of AVCaptureSession, but I couldn't find out how to implement it or find an example.
To this point my best guess is that with every step in time (represented by a timestamp) a new sample buffer for both video and audio is created, which would be a good thing. But I have a feeling this is just wishful thinking, and even then I would still not know what duration the buffers represent.
Could anyone help me in the right direction, helping me to understand how time works in a capture session and how to get or set the duration of sample buffers?
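One way to make the duration question concrete: each CMSampleBuffer carries its own presentation timestamp and duration, which can be read with the CoreMedia C API (callable from the Swift delegate). A minimal sketch, assuming it is called from captureOutput():

/* Sketch: inspect the timing of a single CMSampleBufferRef to see what span
 * of time a given video or audio buffer actually represents. */
#include <CoreMedia/CoreMedia.h>
#include <stdio.h>

static void inspect_sample_buffer(CMSampleBufferRef buf)
{
    CMTime pts      = CMSampleBufferGetPresentationTimeStamp(buf);
    CMTime duration = CMSampleBufferGetDuration(buf);
    CMItemCount n   = CMSampleBufferGetNumSamples(buf);   /* audio buffers usually hold many samples, video buffers one frame */

    printf("pts=%.6f s  duration=%.6f s  samples=%ld\n",
           CMTimeGetSeconds(pts), CMTimeGetSeconds(duration), (long)n);
}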

Using libvlc to stream a video from memory?

I am generating a video stream in real time, and I've got it as a series of bitmaps in memory.
I'd like to stream these bitmaps over the network using libvlc, but I wasn't able to find the right functions in the API (all streaming functions expect a file or other source).
I even thought of emulating a capture device, but that seemed too convoluted to be true, so I'd rather ask.
My question is, what do I now have to do with these bitmaps to be able to use libvlc to stream them?
I found a question which appears to be solving the same issue.
Another suggestion, with significantly less overhead, is "emulating" a file with named pipes, i.e. FIFOs.
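A rough sketch of the writer side of that FIFO approach follows. The pipe path, frame geometry, pixel format, and frame rate are assumptions; VLC or libvlc would then be pointed at the pipe path and told, via its raw-video demuxer options, what format the pipe carries.

/* Sketch: create a named pipe and keep feeding in-memory bitmaps into it,
 * so the pipe can be opened by VLC as if it were a file. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define WIDTH   640                         /* assumed frame geometry */
#define HEIGHT  480
#define FRAME_BYTES (WIDTH * HEIGHT * 3)    /* assumed 24-bit RGB frames */

int main(void)
{
    const char *fifo_path = "/tmp/videostream.fifo";   /* hypothetical path */
    mkfifo(fifo_path, 0600);                           /* ignore EEXIST on reruns */

    /* open() blocks until a reader (VLC) opens the other end of the pipe */
    int fd = open(fifo_path, O_WRONLY);
    if (fd < 0) { perror("open fifo"); return 1; }

    static uint8_t frame[FRAME_BYTES];
    for (;;) {
        /* ...fill frame[] from the real-time bitmap generator here... */
        memset(frame, 0, sizeof frame);                /* placeholder content */
        if (write(fd, frame, sizeof frame) != (ssize_t)sizeof frame)
            break;                                     /* reader went away */
        usleep(33333);                                 /* ~30 fps pacing, adjust to the source */
    }
    close(fd);
    return 0;
}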

Waveform representation of any audio in iPhone

I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this Solution
That solution uses AVAssetReader, which is taking too much time to display the waveform.
Can anyone please help? Is there any other way to display the waveform quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all work to a secondary queue off the main queue and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least the last time I tried (it may have changed with iOS 6). A new reader cancels the other one and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, I read the CMSampleBufferRefs into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReaderOutput). You can configure the queue to give you enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it. I just have to be careful not to make the queue so big that it uses too much memory or causes issues with playback. I just do not know how long it will take to process the waveform, and I will test it on my older iPods and iPhone 4 before I try it on my iPhone 5 to see if they all perform well.
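For reference, a bare-bones sketch of that queue at the CoreMedia C level; the sample buffers themselves would still come from an AVAssetReaderOutput's copyNextSampleBuffer in Objective-C or Swift code.

/* Sketch: create a CMBufferQueue, enqueue sample buffers produced by the
 * reader, and drain them later for waveform processing or playback. */
#include <CoreMedia/CoreMedia.h>

static CMBufferQueueRef make_sample_queue(void)
{
    CMBufferQueueRef queue = NULL;
    /* Built-in callbacks that know how to retain/release/time CMSampleBuffers */
    OSStatus err = CMBufferQueueCreate(kCFAllocatorDefault,
                                       0, /* 0 = no capacity limit */
                                       CMBufferQueueGetCallbacksForUnsortedSampleBuffers(),
                                       &queue);
    return (err == noErr) ? queue : NULL;
}

/* Called for each buffer produced by the reader */
static void hold_buffer(CMBufferQueueRef queue, CMSampleBufferRef buf)
{
    CMBufferQueueEnqueue(queue, buf);   /* the queue retains the buffer */
}

/* Called by the waveform / playback side to pull the next buffer */
static CMSampleBufferRef next_buffer(CMBufferQueueRef queue)
{
    return (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(queue);  /* NULL when empty */
}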
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing will incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable. You will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio

Is HTTP Streaming with the iPhone buggy?

I am attempting to stream video using Apple's http streaming technology. I am beginning to suspect that either the player on the iPhone or the Apple tools used to segment the videos is buggy.
http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/Introduction/Introduction.html
I am getting really terrible behavior. The app never seems to do a good job of choosing which quality stream to use. It always starts at the lowest quality and often will jump to the highest very suddenly and not be able to keep up. I have tried various ways of altering the bandwidth settings to test it.
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=5000
3/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=10000
4/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=459319
5/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1, BANDWIDTH=90268800
I have used very large and small settings to make certain streams the obvious choice, but it doesn't matter. Obviously I have also used the default values set by Apple's variantplaylistcreator tool. It always starts at the lowest quality and jumps to seemingly random other qualities.
Anyone know whats going on with this?
Have you tried the sample reference streams provided at the bottom of the page here? Apple tests against these, so if they work, you know the problem is on your end.

how to read a http video stream with libavcodec (ffmpeg)

I'm trying to read a real-time HTTP video stream (I've set one up using VLC and my webcam) using libavcodec from FFmpeg. So far I've been able to read an MPEG file fine, but when I pass a URL instead of the file name, it hangs (I assume it's waiting for the end of the file).
Anyone have experience/ideas on how to do this? I'm also open to alternatives if people have other suggestions.
The end goal is to do live video streaming on the iPhone. Apple's HTTP Live Streaming has too much lag for what I need, but that's for another post :)
Any help is greatly appreciated!
If you aren't using Apple's way, you can't.
I have to admit I don't quite understand what you want to achieve; however, based on my interpretation of the question, it seems to be related to the segmenting of the video.
The segmenting can be achieved using ffmpeg and libavcodec (look for an example here; the key line is the one with packet.flags, and a rough sketch of that check is included below). Just remember that the segment length (in time) depends on the keyframe interval (for h264 at least). If you want an example of a full segmented streaming solution that works (most of the time), check here.
Otherwise, you have to dig into the codec and create the transport stream manually. The MPEG2-TS, which is what iOS supports, can be a little bit difficult sometimes, but it is not too bad. Good luck!
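Here is that rough sketch of the packet.flags check, using libavformat; the input URL is a placeholder and the actual per-segment muxing is omitted.

/* Sketch: read packets from an input and decide segment boundaries whenever
 * a video keyframe arrives, which is why segment length tracks the keyframe
 * interval. */
#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    const char *url = argc > 1 ? argv[1] : "http://example.com/stream";  /* hypothetical URL */
    AVFormatContext *in = NULL;

    avformat_network_init();
    if (avformat_open_input(&in, url, NULL, NULL) < 0 ||
        avformat_find_stream_info(in, NULL) < 0) {
        fprintf(stderr, "cannot open %s\n", url);
        return 1;
    }

    int video_idx = av_find_best_stream(in, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    int segment = 0;
    AVPacket *pkt = av_packet_alloc();

    while (av_read_frame(in, pkt) >= 0) {
        if (pkt->stream_index == video_idx && (pkt->flags & AV_PKT_FLAG_KEY)) {
            /* keyframe: this is where a new segment would be started */
            printf("segment %d starts at pts %" PRId64 "\n", ++segment, pkt->pts);
        }
        /* ...write pkt into the current segment's muxer here... */
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&in);
    return 0;
}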
You can use libcurl to download the HTTP segment files.
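For example, a minimal libcurl sketch (the segment URL and output file name are placeholders):

/* Sketch: fetch one HTTP segment and write it to a local file, which can
 * then be fed to the demuxer/decoder. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *segment_url = "http://example.com/segment00001.ts";  /* hypothetical */
    FILE *out = fopen("segment00001.ts", "wb");
    if (!out) { perror("fopen"); return 1; }

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) { fclose(out); return 1; }

    curl_easy_setopt(curl, CURLOPT_URL, segment_url);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);      /* default write callback writes to this FILE* */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    fclose(out);
    return res == CURLE_OK ? 0 : 1;
}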