I am working on a PowerShell script that should pop a message box to users if the current input/output audio device is not a wired USB device/headset. Does anyone have any ideas? For starters, I am looking for a cmdlet that provides info about the current audio input/output device.
Thanks in advance.
I had come across this question, but it gives info about the 'default' audio device, which is not exactly the same as the currently connected input/output audio device. Is it?
How to identify the default audio device in Powershell?
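As a possible starting point, here is a sketch assuming a Windows machine with WMI available. `Win32_SoundDevice` lists the installed sound devices (not which endpoint is currently active, so treat the USB match as a heuristic); getting the truly active endpoint needs the MMDevice COM API, for example via the community AudioDeviceCmdlets module.

```powershell
# Sketch: list installed sound devices and warn when none looks like a USB device.
# Note: Win32_SoundDevice reports installed devices, not the active endpooint's role.
$devices = Get-CimInstance -ClassName Win32_SoundDevice
$usb     = $devices | Where-Object { $_.PNPDeviceID -like 'USB\*' }

if (-not $usb) {
    Add-Type -AssemblyName System.Windows.Forms
    [System.Windows.Forms.MessageBox]::Show(
        'Please connect a wired USB headset.',
        'Audio device check') | Out-Null
}
```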
I would like to use Chrome's speech recognition, WebKitSpeechRecognition(), with an audio file as input, for testing purposes. I could use a virtual microphone, but that is really hacky and hard to implement with automation; still, when I tested it, everything worked fine and the speech recognition converted my audio file to text. Now I wanted to use the following Chrome arguments instead:
--use-file-for-fake-audio-capture="C:/url/to/audio.wav"
--use-fake-device-for-media-stream
--use-fake-ui-for-media-stream
This worked fine on voice recorder sites, for example, and I could hear the audio file play when I replayed the recording. But for some reason, when I try to use this with Chrome's WebKitSpeechRecognition, it doesn't use the fake audio device but instead my actual microphone. Is there any way I can fix this, or otherwise test my audio files against the site? I am using C#, and I couldn't really find any useful info on automatically adding, managing, and configuring virtual audio devices. What approaches could I take?
Thanks in advance.
Well, it turns out this is not possible, because Chrome and Google check whether you are using a fake mic, etc. They do this specifically to prevent this kind of behavior, so that people cannot get free speech-to-text. There is a paid API available from Google (the first 60 minutes per month are free).
I am attempting to record live audio via USB microphone to be converted to WAV and uploaded to a server. I am using Chrome Canary (latest build) on Windows XP. I have based my development on the example at http://webaudiodemos.appspot.com/AudioRecorder/index.html
I see that when I activate the recording, the onaudioprocess event input buffers (e.inputBuffer.getChannelData(0) for example) are all zero-value data. Naturally, there is no sound output or recorded when this is the case. I have verified the rest of the code by replacing the input buffer data with data that produces a tone which shows up in the output WAV file. When I use approaches other than createMediaStreamSource, things are working correctly. For example, I can use createObjectURL and set an src to that and successfully hear my live audio played back in real time. I can also load an audio file and using createBufferSource, see that during playback (which I hear), the inputBuffer has non-zero data in it, of course.
Since most of the web-audio recording demos I have seen on the web rely upon createMediaStreamSource, I am guessing this has been inadvertently broken in some subsequent release of Chrome. Can anyone confirm this or suggest how to overcome this problem?
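For reference, a minimal sketch of the graph described above (the `isSilent` helper is my own name; the browser wiring assumes getUserMedia support). The helper is one way to detect the all-zero input buffers the question describes:

```javascript
// Returns true when an input buffer contains only zero samples --
// the symptom described above for createMediaStreamSource input.
function isSilent(samples) {
  for (let i = 0; i < samples.length; i++) {
    if (samples[i] !== 0) return false;
  }
  return true;
}

// Browser-side wiring (sketch, not runnable outside a browser):
// var context = new AudioContext();
// navigator.webkitGetUserMedia({ audio: true }, function (stream) {
//   var source = context.createMediaStreamSource(stream);
//   var processor = context.createScriptProcessor(4096, 1, 1);
//   processor.onaudioprocess = function (e) {
//     var input = e.inputBuffer.getChannelData(0);
//     if (isSilent(input)) console.warn('live input is producing zeros');
//   };
//   source.connect(processor);
//   processor.connect(context.destination);
// });
```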
It's probably not the version of Chrome. Live input still has some high requirements right now:
1) Input and output sample rates need to be the same on Windows
2) Windows 7+ only - I don't believe it will work on Windows XP, which is likely what is breaking you.
3) Input device must be stereo (or >2 channels) - many, if not most, USB microphones show up as a mono device, and Web Audio isn't working with them yet.
I'm presuming, of course, that my AudioRecorder demo isn't working for you either.
These limitations will be removed over time.
Is there a way to record the audio currently being mixed down (possibly from another tab/process) on the hardware? Is there a way to input/connect to the browser's mixer?
Studio hardware usually has several input/output channels, mono and/or stereo; is there a way to get these connected onto the graph? Is there, or will there be, some device enumeration API?
The closest thing that you might be able to do is get data from the microphone after setting the system's recording device to capture the system output (in Windows: Manage Audio Devices > Recording > Stereo Mix). Then just use getUserMedia to get the audio:
navigator.webkitGetUserMedia({audio: true}, function(stream) {
    var context = new AudioContext();
    var microphone = context.createMediaStreamSource(stream);
    // connect the microphone node into the rest of your graph here
});
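On the device-enumeration question: there was no such API at the time, but browsers have since shipped navigator.mediaDevices.enumerateDevices(). A small sketch (the `audioInputs` helper is my own, not part of any API):

```javascript
// Pick the audio inputs out of an enumerateDevices()-style result,
// where each entry has a "kind" of audioinput, audiooutput, or videoinput.
function audioInputs(devices) {
  return devices.filter((d) => d.kind === 'audioinput');
}

// Browser usage (sketch):
// navigator.mediaDevices.enumerateDevices().then((devices) => {
//   console.log(audioInputs(devices).map((d) => d.label));
// });
```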
I want to write a function in my iPad app which lets me stream the music chosen on the iPad to the connected game interfaces (iPod, iPhone...) via Bluetooth. Does anyone know a simple solution, or maybe want to share some sample code?
Thanks for help!
I am doing something very similar. I have my iPhone connecting to multiple devices to stream audio to them, but I want the device that is streaming the audio to play audio as well.
You can look into GKSession in the GameKit API; that should give you a good start.
Also maybe OpenAL, but I think that might be a little overboard. I heard Core Audio has a built-in feature for playing audio through connected Bluetooth devices, but I don't think that applies to the iPhone, iPad, iPod touch, etc.
I have also created my own peer connection interface that lets me see multiple Bluetooth devices that are running my app. I can then click each one and each gets connected. I then added a way to push a text message to all connected devices for testing. Next I need to find out how to stream audio to the connected Apple devices.
If anyone has any info on this I am sure we would both appreciate it.
Maybe some of you have seen this.
So the question is: how can an app read from the mobile device's audio output?
Thank you!
That's fundamentally impossible, at least at the level of a software application (it could be possible in hardware, or even firmware). That device works by taking advantage of the microphone line included in the headphone jack: the hardware communicates with the application via an audio signal that is read from the microphone input.