How to use source maps in Chrome DevTools to analyze a Performance profiling recording - google-chrome-devtools

I have a performance profiling recording from a customer in which I can see obfuscated method names.
I have both the obfuscated JS file and source maps for it.
I would like to see the unobfuscated method names after opening the performance recording in Chrome DevTools.
Unfortunately, I don't know how to load the source maps; the URLs in the recording point to a remote domain.
Any idea how to tell Chrome to use specific source maps from my local drive? 🙏
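One thing worth trying, sketched below under the assumption that you keep a local copy of the obfuscated bundle next to its map (bundle.min.js and bundle.min.js.map are placeholder names): append the standard sourceMappingURL comment so DevTools can find the map when both files are served locally. Whether the Performance panel itself picks up the original names this way depends on the Chrome version.

// Hypothetical Node helper: point a local copy of the obfuscated bundle at its map.
const fs = require('fs');

const bundlePath = './bundle.min.js';     // assumed local copy of the customer's bundle
const mapFileName = 'bundle.min.js.map';  // assumed source map saved next to it

fs.appendFileSync(bundlePath, `\n//# sourceMappingURL=${mapFileName}\n`);
console.log(`Appended source map pointer for ${mapFileName} to ${bundlePath}`);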

Related

Web Audio API Maximum Number of Sources?

I'm designing a web app with Electron to play back pipe organ sample files. Whenever concurrent note polyphony nears ~1024, the sound drops out completely, including the downstream reverb nodes. Once the sounds would theoretically have stopped playing in the background (because I have released the keys), the audio eventually comes back.
Is this a hard limit of the Web Audio API? I also notice high CPU usage for that tab while it seems to be jammed. Is there a way to enable more concurrent audio sources? Ideally, I need tens of thousands for proper polyphony (although many of them are the same audio files being repeated).
I'm currently looping the samples with Tone.js, if that makes a difference.
Since you mentioned that you're building an Electron app, it should be possible to run the same code in Chrome as well, which means you can profile it like any other code running in Chrome.
Here is the official guide from the Chrome team on profiling: Profiling Web Audio apps in Chrome.
Doing that hopefully allows you to spot any bottlenecks or performance problems in your app.
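Beyond the profiling suggestion above, and not something from the answer itself: since many of those sources are the same files being repeated, one common mitigation is to cap the number of simultaneously live sources with a small voice pool that steals the oldest voice at the limit. A minimal sketch with the plain Web Audio API, where MAX_VOICES is an assumed tuning knob:

// Voice-pool sketch: never keep more than MAX_VOICES sources alive at once.
const ctx = new AudioContext();
const MAX_VOICES = 256;        // assumed cap - tune against your profiling results
const activeVoices = [];

function playSample(buffer) {
  if (activeVoices.length >= MAX_VOICES) {
    activeVoices.shift().stop();          // steal the oldest voice at the cap
  }
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.onended = () => {
    const i = activeVoices.indexOf(src);
    if (i !== -1) activeVoices.splice(i, 1);
  };
  src.start();
  activeVoices.push(src);
  return src;
}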

Best Recommendation for Capturing Video in a Meteor App on iOS devices

I ran into this problem in Safari, where it appears that WebRTC is not fully supported. So when I call
navigator.webkitGetUserMedia()
I get an undefined error.
So my question to the community is: what is the best way to write a Meteor app that captures video on a mobile device and saves it to that same device?
If you have done this, I would appreciate it very much if you could share with me and the community how you went about this.
Specific Answer
The modern API is: navigator.mediaDevices.getUserMedia(constraints). See the docs here.
In the past, I've been unsuccessful with getUserMedia on iOS, but according to this post it can be done on iOS 11.
As for saving it, you can write to the browser's file system, but that API is only supported in Chrome. If you want to write to the camera roll, you'd need native code in the mix.
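To make the "Specific Answer" concrete, here is a minimal sketch of capturing with navigator.mediaDevices.getUserMedia and recording with MediaRecorder; keep in mind that MediaRecorder arrived on iOS Safari much later than getUserMedia, so this is Chrome/Firefox-first, and the fixed duration is just illustrative.

// Request camera + microphone, record for a few seconds, resolve with a Blob.
async function recordClip(durationMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop());        // release camera and mic
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}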
General Advice
I've spent several years of my life dealing with recording, uploading, and processing video using Meteor. If you are doing anything more than trivial web recording, these observations may save you some time:
Chrome (on everything but iOS) has the best API for web recording. If you can require Chrome for recording, that's ideal. Firefox is a close second, lagging only because it doesn't support the File System API.
If you need to record and upload long videos on iOS, build a native app. Don't consider any kind of hybrid approach - that's a serious trap. The number of corner cases and things you need to check is pretty astounding, and the only way to get over those hurdles is with native code.

Chrome speech recognition webkitSpeechRecognition() not accepting input of fake audio device --use-file-for-fake-audio-capture or audio file

I would like to use Chrome's speech recognition, webkitSpeechRecognition(), with an audio file as input for testing purposes. I could use a virtual microphone, but that is really hacky and hard to automate; still, when I tested it, everything worked fine and the speech recognition converted my audio file to text. So now I want to use the following Chrome arguments:
--use-file-for-fake-audio-capture="C:/url/to/audio.wav"
--use-fake-device-for-media-stream
--use-fake-ui-for-media-stream
This worked fine on voice recorder sites, for example, and I could hear the audio file play back when I replayed the recording. But for some reason, when I try to use this with Chrome's webkitSpeechRecognition, it doesn't use the fake audio device but my actual microphone instead. Is there any way to fix this, or another way to test my audio files against the website? I am using C#, and I couldn't find any useful information on automatically adding, managing, and configuring virtual audio devices. What approaches could I take?
Thanks in advance.
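For context, the recognition side of such a test is just a few lines; a minimal sketch assuming the page runs in Chrome (the prefixed webkitSpeechRecognition constructor), which is what the fake-capture flags above would feed:

// Minimal recognition harness - logs whatever Chrome transcribes from the active input.
const recognition = new webkitSpeechRecognition();
recognition.lang = 'en-US';
recognition.continuous = true;
recognition.interimResults = false;

recognition.onresult = (event) => {
  const last = event.results[event.results.length - 1];
  console.log('Transcribed:', last[0].transcript);
};
recognition.onerror = (event) => console.error('Recognition error:', event.error);

recognition.start();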
Well, it turns out this is not possible: Chrome and Google check whether you are using a fake microphone and the like, specifically to prevent this kind of behaviour so that people cannot get free speech-to-text. There is a paid API available from Google (the first 60 minutes per month are free).

Syntax for audio only questions (SSML) in AoG Trivia sample

I'm using the AoG Trivia sample code (there's so much depth to this code!) since it's easier for me to grapple with its existing functions. I'm trying to create audio-only questions (I host .ogg files in a GCP bucket), but when I use the .audio ssml method in ssml.js, it fails to use the URL to speak the .ogg file. Is there a special way to enter questions that are URLs to audio files into the question.json file? I checked that the SSML was valid using the simulator.
Thanks for your help!
OK, so my bad: in the code I was leaving out AUDIO_BASE_URL, which is used to point to where the hosted audio files are in Firebase. However ... a new problem has arisen, but I'll close this question. (I get different behaviour when playing the audio in the simulator and Google Assistant on Android vs. Google Home, coupled with some intermittent network time-outs - I've raised it with Google :)
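For anyone hitting the same thing, this is roughly the shape the SSML needs to take once the base URL is in place; a minimal sketch in the style of the sample's ssml.js, where AUDIO_BASE_URL, the bucket path, and the file names are placeholders:

// Hypothetical helper producing an audio-only question as an SSML string.
const AUDIO_BASE_URL = 'https://storage.googleapis.com/my-trivia-bucket/audio';

function audioQuestion(fileName, fallbackText) {
  return `<speak>
  <audio src="${AUDIO_BASE_URL}/${fileName}">${fallbackText}</audio>
</speak>`;
}

// e.g. audioQuestion('question-01.ogg', 'Name this organ stop.');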

Sencha Touch and Saving Local Media

Does anyone know if it is possible to save media from the internet on the local device using Sencha Touch? From what I've seen so far, I understand it's definitely possible to save XML or JSON data locally on the device, but I have had no luck finding ways to store media locally.
To be more specific, I am looking to program an app that provides the user with a series of audio seminars - like podcasts, really. The user would be able to stream those audio files directly from the internet, but I also need to provide the user with the ability to save an episode/seminar for later. This will be important for when a user is traveling and does not have a reliable internet connection or data plan.
The primary delivery device would be on iOS (iPhone, iPad, iPod Touch) and I would hope to be able to use the same technology on Android devices - but that would be a secondary phase.
If this is possible, how would I go about saving material? And what, if any, would be the limitations on doing so? Any thoughts on this would be greatly appreciated.
I have done something similar using Sencha Touch 1 and PhoneGap to produce a hybrid app.
Basically, I use Sencha Touch to download the JSON, etc., and LocalStorage to hold the data. Downloading media/files/etc. to the actual device is not supported in Sencha Touch, as the framework doesn't have access to a file system.
I then use PhoneGap's APIs to tap into the device's native file system, download files to the app's Documents directory, and pass the file names/paths to Sencha Touch for use in the app.
I'm assuming you are looking to create a hybrid app based on your question, but if this is strictly a web app then there isn't much you can do.
To add to the above point, you could possibly base64-encode the file and store it within LocalStorage, but this isn't a sustainable model, as LocalStorage only gives you 5 MB of space. If you go over 5 MB, the user is prompted (yes/no) to allow LocalStorage to use more space (in 5 MB increments). Since the files you reference could be 5 MB each, you can see how this would quickly become unmanageable for both you and the user.
EDIT:
See http://phonegap.com/ for the native wrapper,
http://blog.clearlyinnovative.com/post/2056122828/phonegap-plugin-for-downloading-url-all-the-code for the PhoneGap download plugin,
and https://github.com/aaronksaunders/FileDownLoadApp for the code.
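To make the PhoneGap side concrete, here is a minimal sketch using the FileTransfer plugin linked above to pull a file into the app's storage and hand the native path back to Sencha Touch. The URL and file name are placeholders, and cordova.file.dataDirectory assumes the newer Cordova File plugin; older PhoneGap builds reach the same place through window.requestFileSystem.

// Download a remote audio file to local storage and return its native URL.
function downloadEpisode(remoteUrl, localName, onDone) {
  var fileTransfer = new FileTransfer();
  var target = cordova.file.dataDirectory + localName;   // e.g. 'seminar-01.mp3'

  fileTransfer.download(
    encodeURI(remoteUrl),
    target,
    function (entry) {
      onDone(null, entry.toURL());   // pass this path to the Sencha Touch layer
    },
    function (error) {
      onDone(error);
    }
  );
}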
Check this website out and scroll down to storing data offline. They point out that Sencha Touch "provides a set of data store and proxy classes that make it very easy to work with data from (and going to) a variety of sources - both server- and client-side"... hope this helped, cheers.