Video stream from iPhone camera, retain only last 2 min of stream at all times - iphone

I am working on an iPhone app that would capture real-time video from the iPhone camera over long periods of time. That part is fairly straightforward; the catch is that I only want the device to retain the last 2 minutes of recorded video at any time, discarding everything prior to that window. I'm having trouble conceptualizing how this functionality might work. The only idea that comes to mind is to retain a stream of still images covering the last 2 minutes, discarding outdated images as new ones arrive. Then, when the user stops the recording, these images would be compiled into a video. It just seems really inefficient to hold on to almost 3000 images at a time.
I would love to hear any ideas for how to achieve this goal in a reasonably efficient manner. Thank you all in advance for your input!
Best,
James

Skip the idea with still images. You'd lose all the efficiency that video codecs have to offer. Plus, I don't think the iPhone can handle that amount of data properly.
But maybe there's a compromise: it may be possible to record, say, ten seconds at a time, then start a new recording seamlessly, dumping the old recordings once they become older than two minutes.
I'm not quite sure if this is possible without losing a few frames between the recordings, though.
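The bookkeeping for that segmented approach can be sketched independently of the capture API. Here is a minimal Python sketch (the 10-second segment length comes from the answer above; the segment names are illustrative, and on iOS the actual recording would go through AVFoundation):

```python
import collections

SEGMENT_SECONDS = 10    # length of each recorded chunk, per the answer above
WINDOW_SECONDS = 120    # keep only the last two minutes

class RollingSegmentBuffer:
    """Tracks the most recent WINDOW_SECONDS worth of fixed-length segments."""

    def __init__(self):
        self.segments = collections.deque()  # (start_time, path), oldest first

    def add(self, start_time, path):
        self.segments.append((start_time, path))
        newest_end = start_time + SEGMENT_SECONDS
        # Drop segments that have fallen entirely out of the window.
        while self.segments and newest_end - self.segments[0][0] > WINDOW_SECONDS:
            self.segments.popleft()  # a real app would also delete the file here

    def retained(self):
        return [path for _, path in self.segments]
```

When the user stops recording, the retained segments would be stitched into the final clip; at steady state the buffer never holds more than about a dozen small files.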

Related

How can I fix multiple audio streams in an iPhone app from creating distortion?

I am using several instances of AVAudioPlayer to play overlapping sounds, and getting harsh distortion as a result. Here is my situation... I have an app with several piano keys. Upon touching a key, it plays a note. If I touch 6-7 keys in rapid succession, my app plays a 2 second .mp3 clip for each key. Since I am using separate audio streams, the sounds overlap (which they should), but the result is lots of distortion, pops, or buzzing!
How can I make the overlapping audio crisp and clean? I recorded the piano sounds myself and they are very nice, clean, noise-free recordings, and I don't understand why the overlapping streams sound so bad. Even at low volume or through headphones, the quality is just very degraded.
Any suggestions are appreciated!
Couple of things:
Clipping
The "buzzing" you describe is almost assuredly clipping—the result of adding two or more waveforms together and the resulting, combined waveform having its peaks cut off—clipped—at unity.
When you're designing virtual synthesizers with polyphony, you have to take into consideration how many voices will likely play at once and provide headroom, typically by attenuating each voice.
In practice, you can achieve this with AVAudioPlayer by setting each instance's volume property to 0.316 for 10 dB of headroom (enough for 8 simultaneous voices in practice).
The obvious problem here is that when the user plays a single voice, it may seem too quiet. You'll want to experiment with various headroom values against typical user behavior and adjust to taste. It's also signal-dependent: your piano samples may clip more or less easily than other waveforms, depending on their recorded amplitude.
Depending on your app's intended user, you might consider making this headroom parameter available to them.
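As a sanity check on the numbers above, the dB-to-linear conversion is a one-liner; a small Python sketch (function names are illustrative, and on iOS the resulting gain would be what you assign to each AVAudioPlayer's volume property). Note that 8 voices peaking at the exact same instant would strictly need about 18 dB of headroom, but uncorrelated signals sum in power rather than amplitude, which is why roughly 10 dB tends to be enough in practice:

```python
import math

def gain_for_headroom(headroom_db):
    """Linear volume that leaves `headroom_db` of headroom below full scale."""
    return 10 ** (-headroom_db / 20.0)

def headroom_worst_case(n_voices):
    """dB needed if n equal-amplitude voices all peak at the same instant."""
    return 20.0 * math.log10(n_voices)

def headroom_typical(n_voices):
    """Rough dB for n uncorrelated voices, whose powers (not peaks) add."""
    return 10.0 * math.log10(n_voices)
```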
Discontinuities/Performance
The pops and clicks you're hearing may not be a result of clipping, but rather a side effect of using mp3 as your audio file format. This is a Bad Idea™. iOS devices have only one hardware stereo mp3 decoder, so as soon as you spin up a second, third, etc. voice, iOS has to decode the mp3 audio data on the CPU. Depending on the device, you can only decode a couple of audio streams this way before suffering underflow discontinuities (cut that in half for stereo files, obviously)... the CPU simply can't decode enough samples for the output audio stream in time, so you hear nasty pops and clicks.
For sample playback, you want to use an LPCM audio encoding (like wav or aiff) or something extremely efficient to decode, like ima4. One strategy that I've used in every app I've shipped that has these types of audio samples is to ship samples in mp3 or aac format, but decode them once to an LPCM file in the app's sandbox the first time the app is launched. This way you get the benefit of a smaller app bundle and low CPU utilization/higher polyphony at runtime when decoding the samples. (With a small hit to the first-time user experience while the user waits for the samples to be decoded.)
My understanding is that AVAudioPlayer isn't meant to be used like that. In general, when combining lots of sounds into a single output, you want to open a single stream and mix the sounds yourself.
What you are encountering is clipping: it occurs because the combined volumes of the sounds you're playing exceed the maximum possible volume. You need to decrease the volume of these sounds when more than one is playing at a time.

Real time audio recording/analysis on iPhone

I'm building a piece of hardware that sends data into the headphone jack, and I need a way to record short snippets and analyze it quickly (hopefully without having to save the file and reopen for analysis). I have played around with fft and the accelerate frameworks, though I don't think it's exactly what I'm looking for.
I'm wondering mostly if something like this is feasible: record a ~30ms snippet of audio, and then grab an array of floats representing the voltage/(dB levels?) throughout the recording. Then I could interpret the data depending on the levels at different milliseconds through the recording. Would something like AVAudioRecorder be able to record at a resolution that would let me examine every millisecond of the recording? Since this will be a repeating process, I'm hoping to keep CPU usage down as well.
This is totally doable. Use AudioSession with AudioUnits.
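On the resolution question: at a 44.1 kHz sample rate (an assumption; the actual rate depends on how the audio session is configured), a ~30 ms snippet is only about 1,300 samples, so collapsing it into one level per millisecond is cheap. A minimal Python sketch of that arithmetic:

```python
SAMPLE_RATE = 44100  # assumed; the real rate depends on your audio session

def samples_for_ms(ms, rate=SAMPLE_RATE):
    """Number of samples covering `ms` milliseconds at the given rate."""
    return int(rate * ms / 1000)

def per_ms_rms(samples, rate=SAMPLE_RATE):
    """Collapse a buffer of floats into one RMS level per millisecond."""
    step = samples_for_ms(1, rate)
    levels = []
    for i in range(0, len(samples) - step + 1, step):
        window = samples[i:i + step]
        levels.append((sum(x * x for x in window) / step) ** 0.5)
    return levels
```

In the real app the float buffer would come from the AudioUnit render callback rather than a file, which avoids the save-and-reopen round trip the question worries about.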

record Audio on iOS, and auto save audio every minute

I want to record audio on iOS and auto-save it every minute.
The main purpose is to avoid losing the audio if the app crashes after an hour of recording.
Showing you how to record the audio would take a lot of code to post here, but there are many examples on GitHub. Here are a couple.
To autosave recordings, I would suggest adding a notification timer. Here is an example.
I hope this helps get you started.
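The timer-driven autosave the answer describes is simple bookkeeping; here is a Python sketch of the decision logic (illustrative only; on iOS the `tick` call would live in an NSTimer or dispatch-timer callback, and a True result would flush the audio file to disk):

```python
AUTOSAVE_INTERVAL = 60.0  # seconds between saves

class AutosaveTimer:
    """Decides when a periodic save is due, given the current time."""

    def __init__(self, now):
        self.last_save = now

    def tick(self, now):
        if now - self.last_save >= AUTOSAVE_INTERVAL:
            self.last_save = now
            return True   # caller should flush the recording to disk now
        return False
```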
Sounds like audio queues are your best friends here. Read through the doc; it is pretty lucid.

Live streaming in iPhone?

I have read a lot of posts about live streaming in iPhone, but none of them really works.
The project I want to work out is as follows:
There is a MUTE movie streaming in a movie theater. I want to get the time code (the position it is currently playing) over Wi-Fi and make the iPhone/iPod Touch play/stream an audio track at the same time code.
May I ask how to achieve it?
UPDATE: Latency is expected and will be taken into consideration. A small time difference is acceptable in this case.
The variable nature of a wireless connection and the latency involved will completely obliterate the video/audio sync you are trying to achieve.

How to get how much data the application has downloaded?

My application streams live video, and I want to alert the user after every 100 MB downloaded.
Any help will be appreciated.
Not easily, because MPMoviePlayer's internals don't reveal metadata about the stream.
If you actually knew for certain the stream's FPS and resolution (or bitrate, for audio), you could make an ad hoc estimate from the elapsed playback time: bitrate multiplied by elapsed time gives the approximate bytes transferred.
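Assuming a known, roughly constant bitrate, that estimate is a couple of lines; a small sketch with illustrative names (the 100 MB threshold comes from the question):

```python
def estimated_bytes(bitrate_bps, elapsed_seconds):
    """Bytes transferred by a constant-bitrate stream: bits/s x s, over 8."""
    return bitrate_bps * elapsed_seconds / 8

def alerts_due(bitrate_bps, elapsed_seconds, threshold_mb=100):
    """How many `threshold_mb` alerts should have fired so far."""
    mb = estimated_bytes(bitrate_bps, elapsed_seconds) / (1024 * 1024)
    return int(mb // threshold_mb)
```

Bear in mind this is only as accurate as the bitrate guess; adaptive streams that switch quality mid-playback will drift from the estimate.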