SWIFT - Is it possible to save audio from AVAudioEngine, or from AVAudioPlayerNode? If yes, how?

I've been looking through the Swift documentation for a way to save the audio output of an AVAudioEngine, but I couldn't find anything useful.
Any suggestions?
Solution
I found a way around it thanks to matt's answer.
Here is some sample code showing how to save audio after passing it through an AVAudioEngine (technically, I think it's captured just before the output):
// Your new file, into which you want to save the changed audio.
// (In current Swift the AVAudioFile initializer throws instead of taking an NSErrorPointer;
//  the settings here are assumed to come from the original file.)
newAudio = try AVAudioFile(forWriting: newAudio.url, settings: audioFile.fileFormat.settings)

let audioPlayerNode = AVAudioPlayerNode() // or your time pitch unit, if the pitch was changed

// Now install a tap on the output bus to "record" the transformed audio into our newAudio file.
// `opffb` is the output processing format for bus 0, e.g. audioPlayerNode.outputFormat(forBus: 0);
// the buffer size is a frame count (4096 is a typical value).
audioPlayerNode.installTap(onBus: 0, bufferSize: 4096, format: opffb) { buffer, time in
    if self.newAudio.length < self.audioFile.length {
        // Keep writing until we have as many frames as the original file;
        // this tells us when to stop saving, otherwise we'd save forever.
        try? self.newAudio.write(from: buffer)
    } else {
        // If we don't remove the tap, it will keep firing indefinitely.
        audioPlayerNode.removeTap(onBus: 0)
        print("Did you like it? Please, vote up for my question")
    }
}
Hope this helps!
One issue left to solve:
Sometimes your output is shorter than the input: if you speed the rate up by 2, the audio will be half as long. This is the issue I'm facing at the moment, because my condition for when to stop saving the file is the check shown above:
if newAudio.length < self.audioFile.length // audioFile being the original (long) audio and newAudio being the new, changed (shorter) audio
Any help here?
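One possible approach (my own rough sketch, not part of the original answer; timePitch is an assumed AVAudioUnitTimePitch in the chain): estimate how many frames the shortened output should have from the source length and the rate, and stop the tap once that many frames have been written.

// A rough idea, not from the original answer: if a time-pitch unit changed the rate,
// estimate the expected output length and compare against that instead.
// `timePitch` is assumed to be the AVAudioUnitTimePitch in your chain.
let expectedFrames = AVAudioFramePosition(Double(audioFile.length) / Double(timePitch.rate))

// ...and in the tap, compare against the estimate instead of the original length:
if self.newAudio.length < expectedFrames {
    try? self.newAudio.write(from: buffer)
} else {
    audioPlayerNode.removeTap(onBus: 0)
}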

Yes, it's quite easy. You simply put a tap on a node and save the buffer into a file.
Unfortunately this means you have to play through the node. I was hoping that AVAudioEngine would let me process one sound file into another directly, but apparently that's impossible - you have to play and process in real time.
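For what it's worth, here is a minimal sketch of that real-time play-and-tap setup (my own addition, with placeholder names inputURL and outputURL): the file is played through the engine while the tap writes each processed buffer to the output file.

import AVFoundation

// Minimal sketch: play a file through an effect while a tap records the processed audio.
func renderThroughTap(from inputURL: URL, to outputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()

    let sourceFile = try AVAudioFile(forReading: inputURL)
    let outputFile = try AVAudioFile(forWriting: outputURL,
                                     settings: sourceFile.processingFormat.settings)

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: sourceFile.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: sourceFile.processingFormat)

    // Tap the last node before the output and write every buffer to the file.
    timePitch.installTap(onBus: 0, bufferSize: 4096,
                         format: timePitch.outputFormat(forBus: 0)) { buffer, _ in
        try? outputFile.write(from: buffer)
    }

    player.scheduleFile(sourceFile, at: nil)   // queue the whole file
    try engine.start()
    player.play()                              // processing happens in real time
}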

Offline rendering worked for me using the GenericOutput AudioUnit. Please check this link: I have mixed two or three audio files offline and combined them into a single file. It's not the same scenario, but it may give you some ideas. core audio offline rendering GenericOutput
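As a side note (my addition, not from the answer above): on newer systems the same offline idea can also be sketched with AVAudioEngine's manual rendering mode (iOS 11 / macOS 10.13 and later), which pulls buffers from the graph without real-time playback. This is a different API from the GenericOutput approach, shown only as a rough sketch; the URLs are placeholders.

import AVFoundation

// Rough sketch: render an engine's output offline into a file (no real-time playback).
func renderOffline(source inputURL: URL, to outputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let file = try AVAudioFile(forReading: inputURL)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // Switch the engine to offline manual rendering before starting it.
    try engine.enableManualRenderingMode(.offline,
                                         format: file.processingFormat,
                                         maximumFrameCount: 4096)
    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()

    let output = try AVAudioFile(forWriting: outputURL,
                                 settings: file.processingFormat.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    // Pull buffers from the engine until the whole file has been rendered.
    while engine.manualRenderingSampleTime < file.length {
        let framesLeft = file.length - engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(framesLeft), buffer.frameCapacity)
        let status = try engine.renderOffline(framesToRender, to: buffer)
        if status == .success {
            try output.write(from: buffer)
        } else if status == .error {
            break   // give up; the other statuses just mean "try again"
        }
    }

    player.stop()
    engine.stop()
}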

Related

Gapless playback in pyglet

I understood this page to mean that queuing in pyglet provides a gapless transition between audio tracks. But when I test it out, there is a noticeable gap. Has anyone here worked with gapless audio in pyglet?
Example:
player = pyglet.media.Player()
source1 = pyglet.media.load([file1]) # adding streaming=False doesn't fix the issue
source2 = pyglet.media.load([file2])
player.queue(source1)
player.queue(source2)
player.play()
player.seek([time]) # to avoid having to wait until the end of the track. removing this doesn't fix the gap issue
pyglet.app.run()
I would suggest you either cache url1 and url2 locally if they're external sources, then use Player().time to identify when you're about to reach the end, and then call player.next_source.
Or, if they're local files and you don't want to solve the problem programmatically, you could trim the audio files in something like Audacity so that they're seamless on start/stop.
You could also experiment with having multiple players and layering them on top of each other. But if you're only interested in audio playback, there are other alternatives.
It turns out that there were 2 problems.
The first one: I should have used
source_group = pyglet.media.SourceGroup()
source_group.add(source1)
source_group.add(source2)
player.queue(source_group)
The second one: mp3 files are apparently slightly padded at the beginning and at the end, so that is where the gap is coming from. However, this does not seem to be an issue with any other file type.

How to set AVAudioEngine input and output devices (swift/macos)

I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
var outputDeviceID: AudioDeviceID = xxx
let result: OSStatus = AudioUnitSetProperty(outputUnit,
                                            kAudioOutputUnitProperty_CurrentDevice,
                                            kAudioUnitScope_Global,
                                            0,
                                            &outputDeviceID,
                                            UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result != 0 {
    print("error setting output device \(result)")
    return
}
However if I initialize the audio input (with let input = engine.inputNode) then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK since, if I avoid changing the output device then I can hear the microphone and the audio file, and if I change the output device but don't initialize the inputNode the file plays to the specified destination.
In addition to this I have been trying to change the input device; I understood from various places that the following should do it:
let result1: OSStatus = AudioUnitSetProperty(inputUnit,
                                             kAudioOutputUnitProperty_CurrentDevice,
                                             kAudioUnitScope_Output,
                                             0,
                                             &inputDeviceID,
                                             UInt32(MemoryLayout<AudioObjectPropertyAddress>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work: in most cases it throws an error (10853), although if I select a sound card that has both inputs and outputs it succeeds. It appears that when I attempt to set the output or the input node, it actually sets the device for both.
I would think this meant that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. In some solutions I have seen online, people simply change the system default input, but this isn't a massively nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key because, as I understand it, both the inputNode and outputNode are classed as "Output Units" (as they both emit sound somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device also has output capabilities (and you activate the inputNode), then that device has to be both the input and the output device, otherwise the output appears not to work.
So the answer is that I think there is no answer.
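To illustrate the conclusion (a sketch of my own, under the assumption that aggregateDeviceID identifies a device exposing both input and output channels): you would point both of the engine's I/O units at that one device, using the same kAudioOutputUnitProperty_CurrentDevice property as in the question.

import AVFoundation
import AudioToolbox

// Sketch: point both of an AVAudioEngine's I/O units at one aggregate device.
func setDevice(_ aggregateDeviceID: AudioDeviceID, on engine: AVAudioEngine) -> OSStatus {
    var deviceID = aggregateDeviceID
    let size = UInt32(MemoryLayout<AudioDeviceID>.size)

    // Touching inputNode makes the engine configure its input unit.
    guard let inputUnit = engine.inputNode.audioUnit,
          let outputUnit = engine.outputNode.audioUnit else { return kAudioUnitErr_Uninitialized }

    var status = AudioUnitSetProperty(inputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global, 0,
                                      &deviceID, size)
    if status != noErr { return status }

    status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global, 0,
                                  &deviceID, size)
    return status
}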

What error does "ExtAudioFileWriteAsync -50" indicate?

I'm attempting to write an AAC file from the output stream of an AUGraph, and on playback my file only plays a buzzing noise, and I get the error ExtAudioFileWriteAsync -50.
I'd like to know what it means so that I can search for and destroy the problem.
Thanks to any Core Audio ninjas that can hook a brother up.
In case anyone else has this problem, the -50 error is a kAudio_ParamError error, defined in CoreAudioTypes.h.
Therefore, one of the parameters being passed to ExtAudioFileWriteAsync must be faulty.
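Not from the original answer, just a hedged illustration: checking the OSStatus at each Core Audio call makes it much easier to see which call, and therefore which parameter, is the faulty one. The names extAudioFile, numFrames and bufferList in the usage comment are placeholders.

import AudioToolbox

// Small helper: log the failing Core Audio call and its OSStatus,
// so errors like -50 (kAudio_ParamError) can be traced to a specific parameter.
func check(_ status: OSStatus, _ operation: String) {
    guard status != noErr else { return }
    print("\(operation) failed: \(status)")   // -50 == kAudio_ParamError: a bad parameter
}

// Usage sketch (extAudioFile, numFrames and bufferList are assumed to exist elsewhere):
// check(ExtAudioFileWriteAsync(extAudioFile, numFrames, &bufferList), "ExtAudioFileWriteAsync")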

Matlab: Writing avi file

I am having a problem when creating an avi file using Matlab. My aim is to use an edge filter on an entire video and save the file as an avi. The filter works fine, my problem is the writing of the avi file.
My code:
vidFile = VideoReader('video.avi');
edgeMov = avifile('edges','fps',30);
for i = 1:vidFile.numberofframes
frameI = read(vidFile,i);
frameIgray = rgb2gray(frameI);
edgeI = edge(frameIgray,'canny',0.6);
edgeIuint8 = im2uint8(edgeI);
edgeIuint8(:,:,2) = edgeIuint8(:,:,1);
edgeIuint8(:,:,3) = edgeIuint8(:,:,1);
edgeMov = addframe(edgeMov,edgeIuint8);
end
edgeMov = close(edgeMov)
When the loop finishes and the avifile is closed, I go to play the video and it says "Windows Media Player encountered a problem while playing this file". I've also tried, without success, Media Player Classic and VLC, which leads me to believe that the problem must be the file itself. Using GSpot I checked the file and it said that the AVI header is corrupt.
Retrying the loop returned exactly the same problem. What's confusing me is that when I run the loop for a smaller number of frames, 30 for example, the video writes fine and I can watch it. The video I am trying to convert is in excess of 1000 frames, so I don't know if the size is the problem.
Any help would be greatly appreciated, thank you.
I've used the following to create an AVI:
edgeMov = avifile('video.avi','compression','Indeo5','fps',15,'quality',95);
Give it a try.

Realtime AudioQueue Record-Playback

Hey fellows,
I am trying to build an application for real-time voice changing.
As a first step I managed to record audio data to a specified file and play it back after recording.
Now I am trying to change the code so that the audio buffers are played back right after they are recorded, in a loop.
My question is: how can I read the audio data directly from the recording AudioQueue and not (as shown in the documentation) from a file?
I am thankful for any ideas and can show parts of my code if needed.
Thanks in advance,
Lukas (from Germany)
Have a look at the SpeakHere example. This line sources the audio data:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes,
                                       inCompleteAQBuffer->mPacketDescriptions,
                                       THIS->GetCurrentPacket(), &nPackets,
                                       inCompleteAQBuffer->mAudioData);
So, rather than calling AudioFileReadPackets, you can just use a memcpy to copy over the recorded data buffer. Alternatively, supply the playback AudioQueue with a pointer to the audio data buffer. As playback continues, advance a mCurrentPacket pointer through the buffer.
To record, you'll do something very similar. Rather than writing out to a file, you'll write out to a buffer in memory. You'll first need to allocate that with a malloc. Then, as your incoming AudioQueue captures recorded data, you copy that data into the buffer. As more data is copied, you advance the recording head, or mCurrentPacket, to a new position.
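A hedged sketch of the playback side of that idea (my own names: MemoryAudioStore, playCursor; it assumes uncompressed LPCM, so no packet descriptions are needed): instead of AudioFileReadPackets, the output callback memcpys the next chunk of recorded bytes from memory into the queue buffer and advances a read position.

import Foundation
import AudioToolbox

// Hypothetical in-memory store for the recorded LPCM bytes.
final class MemoryAudioStore {
    var data = Data()        // bytes captured by the recording AudioQueue
    var playCursor = 0       // current read position for playback, in bytes
}

// Playback callback: memcpy the next chunk of recorded bytes into the queue buffer.
let playbackCallback: AudioQueueOutputCallback = { userData, queue, buffer in
    guard let userData = userData else { return }
    let store = Unmanaged<MemoryAudioStore>.fromOpaque(userData).takeUnretainedValue()

    let capacity = Int(buffer.pointee.mAudioDataBytesCapacity)
    let remaining = store.data.count - store.playCursor
    let count = min(capacity, remaining)
    guard count > 0 else {
        _ = AudioQueueStop(queue, false)   // nothing left to play
        return
    }

    store.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        memcpy(buffer.pointee.mAudioData,
               raw.baseAddress!.advanced(by: store.playCursor),
               count)
    }
    buffer.pointee.mAudioDataByteSize = UInt32(count)
    store.playCursor += count              // advance the "read head" (the mCurrentPacket idea)

    _ = AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}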