I'm attempting to write an AAC file from the output stream of an AUGraph. On playback my file only contains a buzzing noise, and ExtAudioFileWriteAsync returns error -50.
I'd like to know what it means so that I can search for and destroy the problem.
Thanks to any Core Audio ninjas that can hook a brother up.
In case anyone else has this problem, the -50 error is a kAudio_ParamError error, defined in CoreAudioTypes.h.
Therefore, one of the parameters being passed to ExtAudioFileWriteAsync must be faulty.
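For anyone chasing the same error, below is a minimal C sketch of the parameters that ExtAudioFileWriteAsync depends on (the helper name, the AAC/M4A container choice, and the assumption that you have the AUGraph's PCM output format at hand are illustrative, not the original poster's code). The ExtAudioFileRef, the client data format set on it, and the frame count and AudioBufferList you pass all have to agree; any of them being wrong can come back as kAudio_ParamError (-50).

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical helper: create an AAC (.m4a) file and tell ExtAudioFile what PCM
// format the AUGraph will hand us, so later ExtAudioFileWriteAsync calls get
// consistent parameters.
static ExtAudioFileRef CreateAACFile(CFURLRef url, const AudioStreamBasicDescription *pcmFormat)
{
    // The on-disk format: AAC in an .m4a container. Let Core Audio fill in the rest.
    AudioStreamBasicDescription aacFormat = {0};
    aacFormat.mSampleRate       = pcmFormat->mSampleRate;
    aacFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aacFormat.mChannelsPerFrame = pcmFormat->mChannelsPerFrame;
    UInt32 size = sizeof(aacFormat);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacFormat);

    ExtAudioFileRef file = NULL;
    OSStatus status = ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &aacFormat,
                                                NULL, kAudioFileFlags_EraseFile, &file);
    if (status != noErr) return NULL;

    // The client format must describe exactly what the render callback passes to
    // ExtAudioFileWriteAsync (here: the AUGraph's LPCM output format).
    status = ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                     sizeof(*pcmFormat), pcmFormat);
    if (status != noErr) { ExtAudioFileDispose(file); return NULL; }

    // Prime the async machinery with a zero-frame, NULL-buffer call before real writes.
    ExtAudioFileWriteAsync(file, 0, NULL);
    return file;
}

// Later, in the render notification callback:
//   OSStatus err = ExtAudioFileWriteAsync(file, inNumberFrames, ioData);
//   // err == -50 means one of file / inNumberFrames / ioData is faulty.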
I followed the top answer in this StackOverflow post to use ffmpeg-python to extract a .wav file from a YouTube URL (with the pcm_s16le codec), and the file plays successfully in my local audio player (Mac's Music).
However, when I try to read it using scipy.io's wavfile,
samplerate, data = wavfile.read(wav_fname)
the following error message is thrown:
"WavFileWarning: Reached EOF prematurely; finished at 1192015 bytes, expected 4294967303 bytes from header."
Can anyone suggest what's going on?
To summarize: I have extracted a .wav file that plays fine in my local music player, but it fails to be read by scipy.io's wavfile, and I am not sure why.
I am trying to make a patch that plays audio when a bang is pressed. I have added a symbol so that I don't need to keep reimporting the file. However, it only works some of the time.
A warning in the Pd console reads: Start requested with no prior open
However, I have imported an audio file.
Is there something that I have done wrong?
One problem is that whenever you send a [1( message to [readsf~], you must have sent an [open ...( message directly beforehand.
Even if you have just successfully opened a file but then stopped it (with [0() or played it through (so it was closed automatically), you have to send the filename again.
The real problem is that your messages are out of order: you should never use fan-out (that is, connecting one message outlet to multiple inlets), as this creates undefined behaviour (the order in which the messages are sent is unspecified).
Use [trigger] to get the order-of-execution correct.
(Mastering [trigger] is probably the single most important step in learning to program Pd)
I've been looking through the Swift documentation for a way to save the audio output of an AVAudioEngine, but I couldn't find anything useful.
Any suggestion?
Solution
I found a workaround thanks to matt's answer.
Here is some sample code for saving audio after passing it through an AVAudioEngine (I think that technically it's before):
// Your new file, to which the changed audio will be saved, ready to receive new buffers.
// newAudioURL is the destination URL; the settings are copied from the original file.
newAudio = try AVAudioFile(forWriting: newAudioURL, settings: audioFile.fileFormat.settings)

let audioPlayerNode = AVAudioPlayerNode() // or your time-pitch unit if the pitch is changed

// Now install a tap on the output bus to "record" the transformed audio into our newAudio file.
let outputFormat = audioPlayerNode.outputFormat(forBus: 0)
audioPlayerNode.installTap(onBus: 0, bufferSize: 4096, format: outputFormat) { (buffer, time) in
    if self.newAudio.length < self.audioFile.length { // lets us know when to stop saving the file, otherwise we would keep writing forever
        try? self.newAudio.write(from: buffer) // write the tapped buffer into our file
    } else {
        audioPlayerNode.removeTap(onBus: 0) // if we don't remove it, it will keep tapping forever
        print("Did you like it? Please, vote up for my question")
    }
}
Hope this helps!
One issue to solve:
Sometimes your output is shorter than the input: if you double the playback rate, your audio will be half as long. This is the issue I'm facing at the moment, since my condition for stopping the save is the if statement above:
if self.newAudio.length < self.audioFile.length // audioFile being the original (long) audio and newAudio being the new, changed (shorter) audio.
Any help here?
Yes, it's quite easy. You simply put a tap on a node and save the buffer into a file.
Unfortunately this means you have to play through the node. I was hoping that AVAudioEngine would let me process one sound file into another directly, but apparently that's impossible - you have to play and process in real time.
Offline rendering worked for me using the GenericOutput AudioUnit. Please check this link: I have mixed two or three audio files offline and combined them into a single file. It's not the same scenario, but it may give you some ideas: core audio offline rendering GenericOutput
I'm trying to use the GDCL MP4 Muxer with my RTSP source filter. They work fine together, except that after stopping the graph, the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts starting from the moov atom are written, but not the time table values). When I try another RTSP source filter (whose source code I don't have), the table values are created by the GDCL MP4 Muxer.
And when I try Elecard's MP4 Muxer, it works fine with my RTSP source filter. So there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information, or a missing signal when the graph stops? What could be the problem? Any ideas?
One thing you should be aware of regarding Geraint's MP4 Mux is that it checks that incoming media samples carry both a start and a stop time. You might be setting only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but here it would be a problem.
So the samples have to have stop times, or you need to fix this in the multiplexer code.
A typical symptom for the problem is that generated files are empty or of zero duration.
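For illustration, here is a hedged C++ sketch of stamping a sample with both times before it is delivered downstream (the helper name and the idea that you derive a per-sample duration from your RTSP timestamps are assumptions, not code from either filter). IMediaSample::SetTime with both pointers non-null gives the sample a stop time as well as a start time:

#include <streams.h>   // DirectShow base classes: IMediaSample, REFERENCE_TIME

// Hypothetical helper: give a sample both a start and a stop time (100 ns units)
// so a muxer that requires AM_SAMPLE_STOPVALID will accept it.
static HRESULT StampSampleTimes(IMediaSample *pSample,
                                REFERENCE_TIME rtStart,
                                REFERENCE_TIME rtDuration)
{
    REFERENCE_TIME rtStop = rtStart + rtDuration;   // a real stop time, not just a start time
    return pSample->SetTime(&rtStart, &rtStop);
}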
Hey fellows,
I am trying to build an application for real-time voice changing.
As a first step, I managed to record audio data to a specified file and play it back after recording.
Now I am trying to change the code to play back the audio buffers in a loop right after recording them.
My question is: how is it possible to read the audio data directly from the recording AudioQueue rather than (as shown in the documentation) from a file?
I am thankful for any ideas and can show parts of my code if needed.
Thanks in advance,
Lukas (from Germany)
Have a look at the SpeakHere example. This line sources the audio data:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes,
                                       inCompleteAQBuffer->mPacketDescriptions,
                                       THIS->GetCurrentPacket(), &nPackets,
                                       inCompleteAQBuffer->mAudioData);
So, rather than calling AudioFileReadPackets, you can just use a memcpy to copy over the recorded data buffer. Or, alternatively, supply the playback AudioQueue with a pointer to the audio data buffer. As playback continues, advance a mCurrentPacket pointer through the buffer.
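A hedged sketch of what that playback callback could look like (the RecordedAudio struct and its field names are assumptions for illustration; SpeakHere itself reads from a file at this point):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// An in-memory recording: a malloc'd byte buffer plus a read position that plays
// the role of mCurrentPacket.
typedef struct {
    char   *data;        // malloc'd buffer holding the recorded LPCM
    size_t  length;      // number of valid bytes in data
    size_t  readOffset;  // playback position, advanced as playback continues
} RecordedAudio;

static void PlaybackCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    RecordedAudio *rec = (RecordedAudio *)inUserData;

    size_t bytesLeft   = rec->length - rec->readOffset;
    size_t bytesToCopy = bytesLeft < inBuffer->mAudioDataBytesCapacity
                       ? bytesLeft : inBuffer->mAudioDataBytesCapacity;
    if (bytesToCopy == 0) {
        AudioQueueStop(inAQ, false);                // nothing left to play
        return;
    }

    // The memcpy that replaces AudioFileReadPackets.
    memcpy(inBuffer->mAudioData, rec->data + rec->readOffset, bytesToCopy);
    inBuffer->mAudioDataByteSize = (UInt32)bytesToCopy;
    rec->readOffset += bytesToCopy;                 // advance the playback position

    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}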
To record, you'll do something very similar. Rather than writing out to a file, you'll write out to a buffer in memory. You'll first need to allocate that with malloc. Then, as your incoming AudioQueue captures recorded data, you copy that data into the buffer. As more data is copied, you advance the recording head, or mCurrentPacket, to a new position.
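And the recording side, sketched the same way (reusing the assumed RecordedAudio struct from the playback sketch above, plus <stdlib.h> for realloc): instead of writing packets to a file, the input callback appends the captured bytes to the in-memory buffer and advances the write position:

#include <stdlib.h>

static void RecordingCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime, UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDescs)
{
    RecordedAudio *rec  = (RecordedAudio *)inUserData;
    UInt32        bytes = inBuffer->mAudioDataByteSize;

    // Grow the in-memory buffer; reallocating on every callback is only for brevity,
    // a real implementation would preallocate a large enough block up front.
    char *grown = (char *)realloc(rec->data, rec->length + bytes);
    if (grown != NULL) {
        memcpy(grown + rec->length, inBuffer->mAudioData, bytes);  // copy the captured data
        rec->data    = grown;
        rec->length += bytes;                                      // advance the recording head
    }

    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);              // hand the buffer back to the queue
}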