iPhone voice memo corrupted

I recorded a voice memo for an interview; its duration is 26 minutes and 58 seconds. The first 4 seconds play on the iPhone, and then playback stops. I exported the entire file to my Windows PC and tried to import it into VLC, Audacity, and Adobe Premiere; none of them showed or played more than those first 4 seconds.
I have another example recording which is not corrupted, and it imports properly.
Is it possible to recover this data, or at least play it somehow?
What I have tried:
Using Audacity's raw data import with varying settings, but I am not able to get a proper import; everything comes out sounding like white noise. A problem with the raw import is that I do not know the proper encoding to select. A WAV uses 8/16/24/32-bit PCM, for example, but using VLC's codec information tab I only find:
Codec: ALAC; Channels: mono; Sample rate: 48000 Hz; Bits per sample: 16.
Playing the file in many audio players; all stop after the 4 seconds. In Adobe Premiere, the file says media start 10:01 and media end 14:02.
Playing it from other iPhones after sharing via iCloud or AirDrop (the same 4 seconds play).
Using Wondershare Repairit (it fails with an error).
Checking whether the audio exists in the iPhone Voice Memos app. It does, as the audio peaks display properly for the full duration of the memo.
Converting to another format using various converters, both online and offline, such as Adobe Media Encoder.

Related

How to play video while it is downloading using AVPro Video in Unity3D?

I want to play a video while it is still downloading via UnityWebRequest. Will AVPro Video support this? If so, please provide me some guidance, as I am new to Unity and AVPro Video. I am able to play a fully downloaded video through FullscreenVideo.prefab in the AVPro demo. Any help will be much appreciated.
There are two main options you could use for displaying the video while it is still downloading.
Through livestream
You can stream a video to AVPro Video using the "absolute path or URL" option on the media player component, then link this to a stream in RTSP, MPEG-DASH, HLS, or HTTP progressive streaming format. Depending on which platforms you are targeting, some of these options will work better than others.
A table of which streaming format is supported on which platform can be found in the AVProVideo user manual that is included with AVProVideo, from page 12 onwards.
If you want to use streaming, you also need to set the "internet access" option to "required" in the player settings, as a video cannot stream without internet access.
A video that is being streamed will automatically start/resume playing when enough of it is buffered.
This does, however, require a constant internet connection, which may not be ideal if you're targeting mobile devices, and it may be unnecessary if you're planning to play videos in a loop.
HLS m3u8
HTTP Live Streaming (HLS) works by cutting the overall stream into shorter, manageable chunks of data. These chunks are then downloaded in sequence, regardless of how long the stream is. m3u8 is a playlist file format that stores the locations of multiple media files instead of an entire video; this playlist can then be fed to an HLS player, which plays the small media files in sequence as dictated by the m3u8 file.
Using this method is useful if you're planning to play smaller videos on repeat, as the user only has to download each chunk of the video once, and you can then store the chunks for later use.
You can also make these chunks of video as long or short as you want, and set a buffer for how many chunks you want pre-loaded. If, for example, you set the chunk size to 5 seconds with a buffer of 5 chunks, the only loading time you'll have is while the first 25 seconds of the video load. Once those first 5 chunks are loaded, it will start playing the video and load the rest of the chunks in the background, without interrupting playback (provided your internet speed can handle it).
A downside is that you have to convert all your videos to m3u8 yourself, though a tool such as FFmpeg can help with this; see the example below.
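As a rough illustration (not from the original answer), a conversion along these lines can be done with FFmpeg; the file names are placeholders and the 5-second segment length matches the example above:

```
# Assumes the input is already H.264/AAC; otherwise drop "-codec copy" and re-encode.
# -hls_time 5       target segment length of roughly 5 seconds
# -hls_list_size 0  keep every segment in the playlist rather than a rolling window
ffmpeg -i input.mp4 -codec copy -hls_time 5 -hls_list_size 0 -f hls output.m3u8
```

This writes output.m3u8 plus the .ts segment files it points at, which you can then host and feed to the media player's URL option.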
References
HLS
m3u8
AVPro documentation

Zero-value data in createMediaStreamSource input buffer when recording using Web Audio

I am attempting to record live audio via USB microphone to be converted to WAV and uploaded to a server. I am using Chrome Canary (latest build) on Windows XP. I have based my development on the example at http://webaudiodemos.appspot.com/AudioRecorder/index.html
I see that when I activate the recording, the onaudioprocess event input buffers (e.inputBuffer.getChannelData(0), for example) are all zero-value data. Naturally, no sound is output or recorded when this is the case. I have verified the rest of the code by replacing the input buffer data with data that produces a tone, which shows up in the output WAV file. When I use approaches other than createMediaStreamSource, things work correctly. For example, I can use createObjectURL, set an src to that, and successfully hear my live audio played back in real time. I can also load an audio file and, using createBufferSource, see that during playback (which I can hear) the inputBuffer has non-zero data in it, of course.
Since most of the web-audio recording demos I have seen on the web rely upon createMediaStreamSource, I am guessing this has been inadvertently broken in some subsequent release of Chrome. Can anyone confirm this or suggest how to overcome this problem?
It's probably not the version of Chrome. Live input still has some fairly strict requirements right now:
1) Input and output sample rates need to be the same on Windows
2) Windows 7+ only - I don't believe it will work on Windows XP, which is likely what is breaking you.
3) Input device must be stereo (or >2 channels) - many, if not most, USB microphones show up as a mono device, and Web Audio isn't working with them yet.
I'm presuming, of course, that my AudioRecorder demo isn't working for you either.
These limitations will be removed over time.

How to get samples from AudioQueue Services on iOS

I'm trying to get samples from an Audio Queue to show a spectrum of the music (like in iTunes) on the iPhone.
I've read a lot of posts, but almost all of them ask about getting samples when recording, not playing :(
I'm using Audio Queue Services for streaming audio. Please help me understand the following points:
1/ Where can I get access to the samples (PCM, not MP3; I'm using an MP3 stream)?
2/ Should I collect samples in my own buffer to apply an FFT?
3/ Is it possible to get frequencies without an FFT transformation?
4/ How can I synchronize my FFT's position in the buffer with the currently playing samples?
Thanks,
Update:
AudioQueueProcessingTapNew
For iOS 6+, this works fine for me. But what about iOS 5?
For playing audio, the idea is to get at the samples before you feed them to the Audio Queue callback. You may need to convert any compressed audio file format into raw PCM samples beforehand. This can be done using one of the AVFoundation converter or file reader services.
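As an illustration of that conversion step (a sketch of my own, not code from this answer), compressed audio can be decoded to interleaved Float32 PCM with AVAssetReader; the function name is made up and error handling is trimmed:

```swift
import AVFoundation
import CoreMedia

// Sketch: decode a compressed audio file (e.g. MP3) to interleaved Float32 PCM so the
// same frames can feed both the Audio Queue buffers and an FFT for visualization.
func decodeToPCM(url: URL) throws -> [Float] {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return [] }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 32,
        AVLinearPCMIsFloatKey: true,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ])
    reader.add(output)
    guard reader.startReading() else { return [] }

    var samples = [Float]()
    while let sampleBuffer = output.copyNextSampleBuffer(),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
        let byteCount = CMBlockBufferGetDataLength(blockBuffer)
        var chunk = [Float](repeating: 0, count: byteCount / MemoryLayout<Float>.size)
        chunk.withUnsafeMutableBytes { dest in
            // Copy the decoded PCM bytes out of the sample buffer.
            _ = CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0,
                                           dataLength: byteCount,
                                           destination: dest.baseAddress!)
        }
        samples.append(contentsOf: chunk)
    }
    return samples
}
```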
You can then copy frames of data from the same source used to feed the Audio Queue callback buffers, and apply your FFT or other DSP for visualization to them.
You can use either FFTs or a bank of band-pass filters to get frequency info, but the FFT is very efficient at this.
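A minimal sketch of that FFT step in Swift, using Accelerate's vDSP real FFT on one frame of already-decoded mono Float PCM; the power-of-two frame size (e.g. 1024) and the function name are my own assumptions, not something from this answer:

```swift
import Accelerate

// Sketch: compute a (squared-)magnitude spectrum from one frame of mono Float PCM.
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    let n = samples.count
    precondition(n > 0 && n & (n - 1) == 0, "frame length must be a power of two")
    let log2n = vDSP_Length(n.trailingZeroBitCount)   // log2(n) for power-of-two n

    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // Pack the interleaved real samples into split-complex form.
            samples.withUnsafeBytes { raw in
                let asComplex = raw.bindMemory(to: DSPComplex.self)
                vDSP_ctoz(asComplex.baseAddress!, 2, &split, 1, vDSP_Length(n / 2))
            }
            // In-place forward real FFT, then squared magnitude per frequency bin.
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    return magnitudes
}
```

The resulting per-bin magnitudes are what you would feed to your spectrum view, typically after converting to decibels and smoothing.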
Synchronization needs to be done by trial and error, as Apple does not specify exact audio and graphics display latencies, which may differ between iOS devices and OS versions anyway. But short Audio Queue buffers or using the RemoteIO Audio Unit may give you better control of the audio latency, and OpenGL ES will give you better control of the graphics latency.

Why is there a discrepancy in reported audio length between the iOS SDK and third-party audio editors?

I am successfully using ExtAudioFileOpenURL to open an audio file and play it.
One thing I have noticed is that the audio length calculated from ExtAudioFileGetProperty with kExtAudioFileProperty_FileLengthFrames and the length reported by external editors, e.g. Audacity and Wave Editor, don't match. Interestingly, the external editors don't quite agree with each other either.
Any idea why this would be?
After some investigation of various audio editors, I've found that the discrepancy seems to come from how they each read in MP3 files. If I used an MP3 file, I found a variance in audio length between iOS, Audacity, Wave Editor, and Twisted Wave.
If I converted the MP3 to CAF, however, iOS and all the editors agreed on the audio length.
One other interesting thing I found was that converting from MP3 to CAF increased the reported audio length.
So the moral of the story is: if you are going to be capturing audio events at certain times, convert to MP3 and then back again...
The decoded length of an MP3 file can vary depending on the decoder implementation, because of padding at the beginning of the decoded stream... see http://lame.sourceforge.net/tech-FAQ.txt for some discussion of this.
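For reference, the frame count the question compares can be read like this (a small Swift sketch of my own, not from the thread), which makes it easy to check the same audio as both MP3 and CAF:

```swift
import Foundation
import AudioToolbox

// Sketch: read the length in sample frames that Core Audio reports for a local file,
// via kExtAudioFileProperty_FileLengthFrames (the property used in the question).
func reportedFrameCount(of url: URL) -> Int64? {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &fileRef) == noErr, let file = fileRef else { return nil }
    defer { _ = ExtAudioFileDispose(file) }

    var frames: Int64 = 0
    var size = UInt32(MemoryLayout<Int64>.size)
    let status = ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames, &size, &frames)
    return status == noErr ? frames : nil
}

// Hypothetical usage: compare the same audio as MP3 and as CAF.
// print(reportedFrameCount(of: mp3URL) ?? -1, reportedFrameCount(of: cafURL) ?? -1)
```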

Create Audio file on iPhone/iPad from many other audio files (mixer)

I am trying to create something similar to a piano app on the iPhone. When people tap a key, it plays a piano note. Basically, there will be only 7 notes (C) at the moment. Each note is a .caf file and is 5 seconds long.
I do not know if there is any way to save the song the user played and export it to MP3/CAF format. AVAudioRecorder seems to only record from the microphone input.
Many thanks
For such an app you probably don't want to record into an audio file; instead, record the note presses and timings for a much more compact format, and then play them back as if the user were pressing the notes at the recorded times.
If you do want to be able to export to an audio file format, then you can write a simple mixer which adds together the individual samples from your source notes at the correct offsets and puts the results in your output audio buffer. You should probably also write a very simple compressor so that you keep the sample volume without any distortion caused by 'clipping'; this can be done by dividing down any summed sample above 95% of the maximum sample value. There may also be a way to use OpenAL to do this mixing for you and play back into an audio buffer.
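A minimal Swift sketch of both ideas (the type and function names are hypothetical, and it assumes the notes are already decoded to mono Float PCM at a shared sample rate, which the question does not state). For simplicity, a whole-mix peak normalization stands in for the per-sample compressor described above:

```swift
// A recorded performance: which note was played and when (in sample frames).
struct NoteEvent {
    let samples: [Float]   // decoded PCM for one piano note (e.g. from a 5-second .caf)
    let startFrame: Int    // frame offset at which the user tapped the key
}

// Sum every note into one output buffer at its recorded offset, then pull the whole
// mix down if the summed peak exceeds 95% of full scale, so nothing clips.
func mixDown(_ events: [NoteEvent], outputFrameCount: Int) -> [Float] {
    var mix = [Float](repeating: 0, count: outputFrameCount)

    for event in events {
        for (i, sample) in event.samples.enumerated() {
            let frame = event.startFrame + i
            guard frame < outputFrameCount else { break }
            mix[frame] += sample
        }
    }

    let threshold: Float = 0.95
    if let peak = mix.map(abs).max(), peak > threshold {
        let gain = threshold / peak
        for i in mix.indices { mix[i] *= gain }
    }
    return mix
}
```

Writing the resulting buffer out to a CAF file is a separate step, which can be done with ExtAudioFile or AVAudioFile.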