Someone at Apple tried to be funny and wrote in the docs:
("Headphone," "Speaker," etc.)
What kind of return values are possible in reality?
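For context, the route string in question is typically read with the old AudioSession C API, along these lines (a rough sketch only; kAudioSessionProperty_AudioRoute and the surrounding setup are my assumptions, not part of the question):

// Sketch: reading the current route with the (now-deprecated) AudioSession C API.
// Assumes AudioSessionInitialize(...) has already been called elsewhere.
CFStringRef route = NULL;
UInt32 routeSize = sizeof(route);
OSStatus err = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &routeSize, &route);
if (err == noErr && route != NULL) {
    NSLog(@"Current audio route: %@", route);   // e.g. "Headphone", "Speaker", or one of the strings below
}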
I ran 'strings' on the CoreMedia framework (iOS 4.2 SDK), and the following strings seem reasonable and are grouped together:
ReceiverAndMicrophone
HeadsetInOut
HeadphonesAndMicrophone
SpeakerAndMicrophone
HeadsetBT
LineInOut
Default
Command was:
strings -a -o CoreMedia | less
# CoreMedia is from /Developer/Platforms/iPhoneOS.platform/Developer \
# /SDKs/iPhoneOS4.2.sdk/System/Library/Frameworks/CoreMedia.framework
He wasn't being funny; those are actual values. The only one I've seen that he didn't list is "LineOut".
According to http://lists.apple.com/archives/coreaudio-api/2009/Jan/msg00084.html
there are also LineOut, HeadsetInOut, ReceiverAndMicrophone, and HeadphonesAndMicrophone,
but the person who asked whether there are more values never received an answer.
I just got MicrophoneWired from it. (I actually have a special piece of hardware plugged in - a temperature probe - but we are using it through the headphone jack.)
Then I got MicrophoneBuiltIn with nothing plugged in. This is on an iPod touch running iOS 4.3, by the way.
The values provided by l8nite above are used when your audio session is configured for both input and output. Here are other values you'll see when you're only doing audio output (I used the same trick as l8nite - thanks!):
LineOut
HeadphonesBT (used for Bluetooth audio output - observed this when connected via Bluetooth to a car audio system)
AirTunes (used for AirPlay output)
How is HeadphonesBT different from HeadsetBT? My app could successfully use the HeadsetBT device to send and receive audio, while HeadphonesBT failed to do anything. This is on iOS 6.
I have a simple script that uses music21 to process the notes in a midi file:
import music21
score = music21.converter.parse('www.vgmusic.com/music/console/nintendo/nes/zanac1a.mid')
for i in score.flat.notes:
    print(i.offset, i.quarterLength, i.pitch.midi)
Is there a way to also obtain a note's voicing / midi program using a flat score? Any pointers would be appreciated!
MIDI channels and programs are stored on Instrument instances, so use getContextByClass(instrument.Instrument) to find the closest Instrument in the stream, and then access its .midiProgram.
Be careful:
.midiChannel and .midiProgram are 0-indexed, so MIDI channel 10 will be 9 in music21, etc. (we're discussing changing this behavior in the next release)
Some information might be missing if you're not running the bleeding edge version (we merged a patch yesterday on this topic), so I advise pulling from git: pip install git+https://github.com/cuthbertLab/music21
.flat is going to kill you, though, if the file is multitrack - if you follow my advice on a flattened score you'll just get the last instrument on every track. 90% of the time, people using .flat actually want .recurse().
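Putting that together, something like this (a sketch only; I'm assuming a local copy of the MIDI file and using .recurse() per the caveat above):

from music21 import converter, instrument

score = converter.parse('zanac1a.mid')  # hypothetical local copy of the MIDI file
for n in score.recurse().notes:
    inst = n.getContextByClass(instrument.Instrument)  # nearest enclosing Instrument
    program = inst.midiProgram if inst is not None else None  # 0-indexed, see above
    print(n.offset, n.quarterLength, n.pitch.midi, program)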
I've hunted high and low and cannot find a solution to this problem. I am looking for a method to change the input/output devices which an AVAudioEngine will use on macOS.
When simply playing back an audio file the following works as expected:
var outputDeviceID: AudioDeviceID = xxx
let result: OSStatus = AudioUnitSetProperty(outputUnit,
                                            kAudioOutputUnitProperty_CurrentDevice,
                                            kAudioUnitScope_Global,
                                            0,
                                            &outputDeviceID,
                                            UInt32(MemoryLayout<AudioDeviceID>.size)) // size of the AudioDeviceID being set
if result != 0 {
    print("error setting output device \(result)")
    return
}
However if I initialize the audio input (with let input = engine.inputNode) then I get an error once I attempt to start the engine:
AVAEInternal.h:88 required condition is false: [AVAudioEngine.mm:1055:CheckCanPerformIO: (canPerformIO)]
I know that my playback code is OK since, if I avoid changing the output device then I can hear the microphone and the audio file, and if I change the output device but don't initialize the inputNode the file plays to the specified destination.
In addition to this, I have been trying to change the input device; I understood from various places that the following should do it:
let result1: OSStatus = AudioUnitSetProperty(inputUnit,
                                             kAudioOutputUnitProperty_CurrentDevice,
                                             kAudioUnitScope_Output,
                                             0,
                                             &inputDeviceID,
                                             UInt32(MemoryLayout<AudioDeviceID>.size))
if result1 != 0 {
    print("failed with error \(result1)")
    return
}
However, this doesn't work: in most cases it returns an error (10853), although if I select a sound card that has both inputs and outputs it succeeds. It appears that when I attempt to set the device for either the output node or the input node, it is actually setting the device for both.
I would think this meant that an AVAudioEngine instance can only deal with one device; however, it is quite happy working with the default devices (mic and speakers/headphones), so I am confident that isn't the issue. In some solutions I have seen online, people simply change the system default input, but that isn't a particularly nice solution.
Does anyone have any ideas as to whether this is possible?
It's worth noting that kAudioOutputUnitProperty_CurrentDevice is the only property available; there is no equivalent kAudioInputUnitProperty_CurrentDevice key because, as I understand it, both the inputNode and outputNode are classed as "Output Units" (as they both emit audio somewhere).
Any ideas would be much appreciated as this is very very frustrating!!
Thanks
So I filed a support request with Apple on this and another issue, and the response confirms that an AVAudioEngine can only be assigned to a single aggregate device (that is, a device with both input and output channels). The system default units effectively create an aggregate device internally, which is why they work. I've also found an additional issue: if the input device has output capabilities as well (and you activate the inputNode), then that device has to be both the input and the output device, as otherwise the output appears not to work.
So the answer is, I think, that there is no answer...
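One workaround consistent with that limitation is to build your own aggregate device from the two physical devices and point the engine's units at it. Just a rough, untested sketch of the idea (the device name, UID, and the helper itself are my own assumptions, not from Apple's response):

import CoreAudio

// Sketch: combine a separate input and output device into one aggregate device,
// which can then be set as kAudioOutputUnitProperty_CurrentDevice as in the question.
func makeAggregateDevice(inputUID: String, outputUID: String) -> AudioObjectID? {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "Engine Aggregate",            // hypothetical name
        kAudioAggregateDeviceUIDKey: "com.example.engine-aggregate", // hypothetical UID
        kAudioAggregateDeviceSubDeviceListKey: [
            [kAudioSubDeviceUIDKey: outputUID],
            [kAudioSubDeviceUIDKey: inputUID]
        ]
    ]
    var aggregateID = AudioObjectID(0)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    return status == 0 ? aggregateID : nil
}

The returned AudioObjectID can then be passed to AudioUnitSetProperty exactly as above, and removed again with AudioHardwareDestroyAggregateDevice when you're done with it.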
I'm trying to filter a signal and then analyse the values of the filtered signal using Tone.js / Web-Audio API.
I'm expecting to get values of the filtered signal, but I only get -Infinity, meaning that my connections between the nodes are wrong. I've made a small fiddle demonstrating this; however, in my use case I do not want to send this node to the destination of the context - I only want to analyse it, not hear it.
osc.connect(filter)
filter.connect(gainNode)
gainNode.connect(meter)
console.log(meter.getLevel())
I guess you tested the code in Chrome, because there is a problem in Chrome that causes it not to process anything until it is connected to the destination. When using Tone.js, that means you need to call .toMaster() at the end of your chain. I updated your fiddle to make it work: https://jsfiddle.net/8f7abzoL/.
In Firefox calling .toMaster() is not necessary therefore the following works in Firefox as well: https://jsfiddle.net/yrjgfdtz/.
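Roughly, the fix looks like this (a sketch assuming the same older Tone.js API as the question, where Meter still has getLevel() and nodes have toMaster(); the oscillator/filter/gain settings are placeholders):

// Sketch: same chain as in the question, but routed to the destination so Chrome processes it
const osc = new Tone.Oscillator(440, 'sine').start()
const filter = new Tone.Filter(1000, 'lowpass')
const gainNode = new Tone.Gain(0.5)
const meter = new Tone.Meter()

osc.connect(filter)
filter.connect(gainNode)
gainNode.connect(meter)
meter.toMaster()   // without this, Chrome reports -Infinity

setInterval(() => console.log(meter.getLevel()), 100)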
After some digging I've found out that I need a ScriptProcessorNode - which is apparently no longer recommended - so I'm looking into AudioWorklet nodes instead.
I'm using a recent daily build of the Corona SDK (version 2001.562) to add gyroscope support to an existing application. Unfortunately, I can't seem to get the event-handling function for the gyroscope to fire. The application is running on an iPod touch, version 4.3.3.
I attach the gyroscope to an event handler like so:
if system.hasEventSource("gyroscope") then
    feedbackFile = io.open(system.pathForFile("log.txt", system.DocumentsDirectory), "a");
    feedbackFile:write((os.clock()-startupTime).."\tgyroscope on\n");
    io.close(feedbackFile);
    Runtime:addEventListener( "gyroscope", onGyroscopeDataReceived )
else
    feedbackFile = io.open(system.pathForFile("log.txt", system.DocumentsDirectory), "a");
    feedbackFile:write((os.clock()-startupTime).."\tgyroscope off\n");
    io.close(feedbackFile);
end
When I launch the application on the device, then close it and download the resource files, I find that log.txt contains the line with a timestamp and "gyroscope on". Good so far!
On to the event-handling function:
local function onGyroscopeDataReceived(event)
    feedbackFile = io.open(system.pathForFile("log.txt", system.DocumentsDirectory), "a");
    feedbackFile:write((os.clock()-startupTime).."\tgyroscope reading delta="..event.deltaRotation..",x="..event.xRotation..",y="..event.yRotation..",z="..event.zRotation.."\n");
    io.close(feedbackFile);
end
This line of information never appears in the log.txt file!
Please advise. Thanks in advance!
The problem is event.deltaRotation doesn't exist. You might mean event.deltaTime.
Then when you concatenate a nil value, Lua throws an error and your write code never gets completed. (The latest daily build will now print out a message when you encounter a Lua error on a device.)
The documentation shows how to compute your own deltaDegrees or deltaRadians:
http://developer.anscamobile.com/reference/index/events/gyroscope/eventxrotation
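For example, a handler along those lines might look like this (a sketch based on my reading of those docs, using event.deltaTime and the per-second rotation rates; event.deltaRotation does not exist):

local function onGyroscopeDataReceived( event )
    -- xRotation/yRotation/zRotation are rotation rates in radians per second;
    -- multiply by the time since the last event to get the rotation delta
    local deltaRadians = event.xRotation * event.deltaTime
    local deltaDegrees = deltaRadians * 180 / math.pi
    feedbackFile = io.open(system.pathForFile("log.txt", system.DocumentsDirectory), "a");
    feedbackFile:write((os.clock()-startupTime).."\tgyroscope x delta="..deltaDegrees.." degrees\n");
    io.close(feedbackFile);
end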
Just a wild guess, but it may be that your listener is never called - I noticed your onGyroscopeDataReceived function is local. If that's the case, you need to make sure the variable is declared prior to the addEventListener call.
When should I set kAudioUnitProperty_StreamFormat (and kAudioUnitProperty_SampleRate too)? For each AU in my AUGraph? Or is it enough to set it just for the AU mixer?
André
You set it on the inputs and outputs of each audio unit.
The iPhone only allows signed-integer input, so don't bother with floats; it just won't work.
You set the sample rate using:
CAStreamBasicDescription myDescription;
myDescription.mSampleRate = 44100.0f; // and do the same for the other fields such as mBitsPerChannel etc.
On the output of audio units such as the mixer, the audio comes out in 8.24 fixed-point format.
Be aware of this when you're writing callbacks and using AudioUnitRender; the formats have to match, and you can't change the output formats (but you may still need to set them).
Use printf("Mixer file format: "); myDescription.Print(); to get the format description. What it shows will depend on where you put it in your initialization process.
In short, yes - for more detail on what you actually need to set on each unit, see the Audio Unit Hosting Guide for iOS.