Property 'createMediaStreamDestination' does not exist on type 'AudioContext'

I am getting the above error while trying to get a stream from a file via an audio tag.
this.audioElement = document.getElementById('audioSend');
var audioCtx = new AudioContext();
var source = audioCtx.createMediaElementSource(this.audioElement);
var dst = audioCtx.createMediaStreamDestination();
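If the error comes from the TypeScript compiler rather than the browser itself, a minimal workaround (a sketch, assuming the installed lib typings simply predate MediaStreamAudioDestinationNode) is to cast past the typings:

const dst = (audioCtx as any).createMediaStreamDestination() as MediaStreamAudioDestinationNode;
source.connect(dst);
const stream: MediaStream = dst.stream; // the MediaStream to hand off, e.g. to WebRTC

Updating TypeScript (or the lib entries in tsconfig.json) so the DOM typings include the method avoids the cast altogether.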

createMediaStreamDestination() is supported in the following browsers:
Chrome since 10.0
Firefox since 25.0
Safari since 6.0
Opera since 15.0
But not in IE or Edge.
Source:
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamAudioDestinationNode
The lack of support in Edge is a bit odd, since it is documented on their site:
https://msdn.microsoft.com/en-us/library/dn954877(v=vs.85).aspx
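If you need to handle browsers without it, a simple runtime feature check (a sketch) avoids the hard failure:

if (typeof (audioCtx as any).createMediaStreamDestination === "function") {
    // safe to create the destination node and use dst.stream
} else {
    // fall back, e.g. keep playing the element directly
}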
Which browser are you targeting?

Related

iOS 16 Poor SpeechSynthesisUtterance voice quality

Since the iOS 16 update, my vocabulary app (a PWA) has problems speaking the provided text through a SpeechSynthesisUtterance object. It doesn't affect all languages; e.g., Russian sounds the same as before the update to iOS 16. For German or English, though, the quality is very low: muffled, and the voice sounds nasal. On macOS Safari everything works as expected, but not on iOS 16.
const fullPhrase = toFullPhrase(props.phrase);
const utterance = new SpeechSynthesisUtterance();

onMounted(() => { // Vue lifecycle hook
    utterance.text = fullPhrase;
    utterance.lang = voice.value.lang;
    utterance.voice = voice.value;
    utterance.addEventListener(ON_SPEAK_END, toggleSpeakStatus);
});
I tried modifying the pitch and rate properties, but without success. Did they perhaps change the API for SpeechSynthesis / SpeechSynthesisUtterance in Safari on iOS 16?
It looks like iOS 16 introduced a lot of new (sometimes very weird) voices for en-GB and en-US. In my case I was looking up a voice only by language and taking the first match; as a result I was getting a strange voice.
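A sketch of a more deliberate lookup, preferring the platform default voice for the language (the "en-GB" argument is just an illustration; inspect speechSynthesis.getVoices() on your device to pick a specific voice by name instead):

const pickVoice = (lang) => {
    // note: getVoices() can return an empty list until the voiceschanged event fires
    const voices = speechSynthesis.getVoices().filter((v) => v.lang === lang);
    // prefer the platform default over whichever voice happens to be listed first
    return voices.find((v) => v.default) ?? voices[0];
};
utterance.voice = pickVoice("en-GB");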

Swift - AVAudioEngine fails to default to system output

I am trying to use AVAudioEngine to play back audio. Here is my code, stripped down to the minimum:
import AVKit

let audioEngine = AVAudioEngine()
let audioPlayer = AVAudioPlayerNode()
let mainMixer = audioEngine.mainMixerNode

/*
// Workaround: bind the output unit to a fixed device.
let outputNode: AVAudioOutputNode = audioEngine.outputNode
let outputUnit: AudioUnit = outputNode.audioUnit!
var outputDeviceID: AudioDeviceID = 240 // magic numbers: 246 = line out, 240 = built-in speaker; these will differ on other computers
AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &outputDeviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
*/

audioEngine.attach(audioPlayer)
audioEngine.connect(audioPlayer, to: audioEngine.mainMixerNode, format: nil)
try! audioEngine.start()

let kPlaybackFileLocation = "/System/Library/Sounds/Submarine.aiff"
let audioFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, kPlaybackFileLocation as CFString, .cfurlposixPathStyle, false)
let audioFile = try! AVAudioFile(forReading: audioFileURL! as URL)

audioPlayer.scheduleFile(audioFile, at: nil)
audioPlayer.play()
The code runs in an Xcode 11 playground and should play the "Submarine" alert found in the system sounds; you can point it at any other sound file.
On my mid-2012 MacBook Pro (non-Retina) running Mojave, the code runs as expected.
However, on my 2018 Mac mini, also running Mojave, the playground runs without errors but no sound is heard. Only after I set the outputNode explicitly to a specific output (as listed in System Preferences), by adding the commented-out code, does it work.
This is not a viable solution anyway, as the output would then be bound to that specific port. On my MacBook, where the code works without the commented part, the output is rerouted to whatever I select in the system panel without even having to restart Xcode.
I retrieved the outputDeviceID 240 using AudioObjectGetPropertyData; the code needed is too verbose to show in full, but it is noteworthy that besides the physical outputs "BuiltInHeadphoneOutputDevice" and "BuiltInSpeakerDevice" there is also a "CADefaultDeviceAggregate-2576-0".
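For reference, a compact sketch of such a query (the property selectors are standard Core Audio; this fetches only the system default output device rather than the full device list):

import CoreAudio

var defaultDeviceID = AudioDeviceID(0)
var size = UInt32(MemoryLayout<AudioDeviceID>.size)
var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultOutputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMaster)
let status = AudioObjectGetPropertyData(
    AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &size, &defaultDeviceID)
// On success, defaultDeviceID holds the device currently selected in System Preferences.
// Feeding it into the commented-out AudioUnitSetProperty call above would at least follow
// the default at startup, though it would not track later changes.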
Also noteworthy is that this device list differs from the one I get using AudioComponentFindNext, which only gives me entries like "AudioDeviceOutput" and "DefaultOutputUnit".
I would expect AVAudioEngine to use the AUHAL and be routed to the default output unit.
So why does the same code, in the same Xcode version and the same OS, run fine on one machine and not the other?
What do I have to change in the code to guarantee that, on any machine, the audio goes to the output set in System Preferences?
Some further considerations:
The Mac mini has the T2 security chip, which puts more restrictions on what Xcode may have access to.
I have no paid developer account; some entitlements need certificates I therefore cannot grant.
I have some virtual outputs (BlackHole, Soundflower) installed on the mini, but the system output is set to either the built-in speakers or headphones/line out.

Detect if app is running on a macOS beta version

I would like to be able to somehow detect if my app is running on a beta version of macOS 11, as there are some known bugs I want to inform users about. I only want to show such an alert to macOS 11 beta users, meaning not macOS 10.15 users nor users on the final version of macOS 11. I could of course just submit an app update to remove the alert when macOS 11 is close to being done, but it would be nice to have something reusable I could use in multiple apps and for future macOS beta versions.
Constraints:
The app is sandboxed.
The app is in the App Store, so no private APIs.
The app doesn't have a network entitlement, so the detection needs to be offline.
I don't want to bundle a list of known macOS build numbers and compare that.
My thinking is that maybe it's possible to use some kind of sniffing. Maybe there are some APIs that return different results when the macOS version is a beta version.
I believe you're out of luck. About This Mac uses PrivateFrameworks/Seeding.framework; here is the important disassembly:
/* @class SDBuildInfo */
+ (char)currentBuildIsSeed {
    return 0x0;
}
So it seems this is a build-time compiler flag. Unfortunately, the plists in the framework don't contain this flag.
Sample private API usage: kaloprominat/currentBuildIsSeed.py
For the crazy ones: it would be possible to read the binary and compare the assembly of the function. I'd start with the class-dump code, which gets you the different fat binaries and the function offset.
This is far from perfect, but the macOS Big Sur release notes mention:
Known Issues
In Swift, the authorizationStatus() property of CLLocationManager is incorrectly exposed as a method instead of a property. (62853845)
This is new API introduced in macOS 11 / iOS 14, so one can use it to detect this particular beta.
import CoreLocation

func isMacOS11Beta() -> Bool {
    var propertiesCount = UInt32()
    let classToInspect = CLLocationManager.self
    var isMacOS11OrHigher = false
    var isMacOS11Beta = false
    let propertiesInAClass = class_copyPropertyList(CLLocationManager.self, &propertiesCount)
    // On macOS 11 the selector exists; on the affected betas it is exposed
    // as a method only, so it is missing from the property list.
    if classToInspect.responds(to: NSSelectorFromString("authorizationStatus")) {
        isMacOS11OrHigher = true
        isMacOS11Beta = true
        for i in 0 ..< Int(propertiesCount) {
            if let property = propertiesInAClass?[i],
               let propertyName = NSString(utf8String: property_getName(property)) as String? {
                if propertyName == "authorizationStatus" {
                    // Exposed as a property: the release behavior, not the beta bug
                    isMacOS11Beta = false
                }
            }
        }
    }
    free(propertiesInAClass) // free unconditionally to avoid leaking the list
    return isMacOS11OrHigher && isMacOS11Beta
}
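A usage sketch (showKnownIssuesAlert is a hypothetical helper, standing in for however your app presents the warning):

if isMacOS11Beta() {
    showKnownIssuesAlert() // hypothetical: present the known-bugs alert
}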

IMFSourceReader giving error 0x80070491 for some resolutions

I'm trying to capture video from a 5MP UVC camera using an IMFSourceReader from Microsoft Media Foundation (on Windows 7 x64). Everything works just like the documentation, with no errors on any API calls, until the first callback into OnReadSample(), which has "0x80070491: There was no match for the specified key in the index" as its hrStatus parameter.
When I set the resolution down to 1080p it works fine, even though 5MP is the camera's native resolution and 5MP (2592x1944) enumerates as an available format.
I can't find anything in the Microsoft documentation to say that this behaviour is by design, but it seems consistent so far. Has anyone else got IMFSourceReader to work at more than 1080p?
I see the same effects on the Microsoft MFCaptureToFile example when it's forced to select the native resolution:
HRESULT nativeTypeErrorCode = S_OK;
HRESULT hr = S_OK;
DWORD count = 0;
UINT32 streamIndex = 0;
UINT32 requiredWidth = 2592;
UINT32 requiredHeight = 1944;
while ( nativeTypeErrorCode == S_OK )
{
    IMFMediaType * nativeType = NULL;
    nativeTypeErrorCode = m_pReader->GetNativeMediaType( streamIndex, count, &nativeType );
    if ( nativeTypeErrorCode != S_OK ) continue;

    // get the subtype (pixel format) of this native media type
    GUID nativeGuid = { 0 };
    hr = nativeType->GetGUID( MF_MT_SUBTYPE, &nativeGuid );
    if ( FAILED( hr ) ) return hr;

    UINT32 width, height;
    hr = ::MFGetAttributeSize( nativeType, MF_MT_FRAME_SIZE, &width, &height );
    if ( FAILED( hr ) ) return hr;

    if ( nativeGuid == MFVideoFormat_YUY2 && width == requiredWidth && height == requiredHeight )
    {
        // found the native config, set it
        hr = m_pReader->SetCurrentMediaType( streamIndex, NULL, nativeType );
        if ( FAILED( hr ) ) return hr;
        break;
    }

    SafeRelease( &nativeType );
    count++;
}
Is there some undocumented maximum resolution with Media Foundation?
It turns out that the problem was with the camera I was using, NOT Media Foundation or UVC cameras generally.
I have switched back to using DirectShow sample grabbing, which seems to work OK so far.
I ran into this same problem on Windows 7 with a USB camera module I got from Amazon.com (ELP-USBFHD01M-L21). The default resolution of 1920x1080x30fps (MJPEG) works fine, but when I try to select 1280x720x60fps (also MJPEG, NOT H.264) I get the 0x80070491 error in the ReadSample callback. Various other resolutions work OK, such as 640x480x120fps; 1280x720x9fps (YUY2) also works.
The camera works fine at 1280x720x60fps in DirectShow.
Unfortunately, 1280x720x60fps is the resolution I want to use for some fairly low-latency augmented reality work with the Oculus Rift.
Interestingly, 1280x720x60fps works fine with the MFCaptureD3D sample on the Windows 10 technical preview. I tried copying the ksthunk.sys and usbvideo.sys drivers from my Windows 10 installation to my Windows 7 machine, but they failed to load even when I booted in "Disable Driver Signing" mode.
After looking around on the web, it seems that various people with various webcams have run into this problem. I'm going to have to use DirectShow for my video capture, which is annoying since it is a very old API that can't be used in Windows Store applications.
I know this is a fairly obscure problem, but since Microsoft seems to have fixed it in Windows 10, it would be great if they backported the fix to Windows 7. As it is, I can't use their recommended Media Foundation API because it won't work on most of the machines I have to run it on.
In any case, if you are having this problem, and Windows 10 is an option, try that as a fix.
Max Behensky

WinRT writing to TCP stream not working

I have started developing a "WinRT" app (a "Metro"-style app for Windows 8). The app should read and write some data via a TCP stream. Reading works fine, but writing does not. Below you can find the code using the full .NET Framework (which works):
var client = new TcpClient();
client.Connect(IPAddress.Parse("192.168.178.51"), 60128);
var stream = client.GetStream();
var writer = new StreamWriter(stream);
writer.WriteLine("ISCP\0\0\0\x10\0\0\0.....");
writer.Flush();
In comparison, the following code does not work:
var tcpClient = new StreamSocket();
await tcpClient.ConnectAsync(new HostName("192.168.178.51"), "60128");
var writer = new DataWriter(tcpClient.OutputStream);
writer.WriteString("ISCP\0\0\0\x10\0\0\0....");
writer.FlushAsync();
WriteString returns the correct length of the string (25), yet the other end does not receive the correct command. In Wireshark I also see a correct packet for the full .NET version, but not for the WinRT version.
How can I fix this?
(Wireshark captures of the .NET and WinRT versions were attached as screenshots.)
After your call to writer.WriteString() you need to actually commit the data that is now in the buffer by calling writer.StoreAsync().
Any call to writer.WriteXxx() only stores data in memory; once you call writer.StoreAsync(), that in-memory data is sent.
My guess is that StreamWriter.WriteLine does this for you in a single call.
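A sketch of the corrected WinRT version (assuming the surrounding method is async):

var tcpClient = new StreamSocket();
await tcpClient.ConnectAsync(new HostName("192.168.178.51"), "60128");
var writer = new DataWriter(tcpClient.OutputStream);
writer.WriteString("ISCP\0\0\0\x10\0\0\0....");
await writer.StoreAsync(); // commits the buffered bytes to the socket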