How to modulate params from a Web Audio API ScriptProcessor?

I am working on a browser synth with the Web Audio API.
Instead of using the built-in OscillatorNode, I want to develop a custom oscillator model via the ScriptProcessorNode.
I am able to modulate the AudioParams of the built-in nodes with other nodes.
How can I connect internal params of my ScriptProcessorNode to other AudioNodes?

If you mean "how do I create AudioParam members of a ScriptProcessorNode that I can connect other sources to, in order to modulate my ScriptProcessor" - the short answer is that today you can't, and it probably will not make it into version 1 of Web Audio: https://github.com/WebAudio/web-audio-api/issues/134.
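That said, a common workaround is to connect the modulation source to the ScriptProcessorNode's audio input and read its samples inside onaudioprocess as per-sample parameter values. Here is a minimal sketch of that idea; the 440 Hz base frequency and 100 Hz modulation depth are arbitrary values chosen for illustration:

var context = new AudioContext();

// LFO whose output acts as the "frequency" control signal.
var lfo = context.createOscillator();
lfo.frequency.value = 5;            // 5 Hz modulator
var lfoGain = context.createGain();
lfoGain.gain.value = 100;           // modulation depth in Hz
lfo.connect(lfoGain);

// Custom oscillator: the processor's input carries the modulator signal.
var processor = context.createScriptProcessor(1024, 1, 1);
var phase = 0;
processor.onaudioprocess = function (e) {
  var mod = e.inputBuffer.getChannelData(0);  // per-sample modulator values
  var out = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    var freq = 440 + mod[i];                  // base frequency + modulation
    phase += 2 * Math.PI * freq / context.sampleRate;
    out[i] = Math.sin(phase);                 // custom sine oscillator
  }
};

lfoGain.connect(processor);
processor.connect(context.destination);
lfo.start();

It is not a true AudioParam (no scheduling methods like setValueAtTime), but it does give you sample-accurate modulation from any AudioNode.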

Related

Azure Media Player source manifest

We are a very small junior school; private tutors have set up an online portal where students can log in and watch the daily video lectures. We have many videos uploaded to Azure Media Services, but we realized the encoding cost is high and not affordable. So I encoded a video locally using FFmpeg and generated the m4s segments, the audio file, and the .mpd manifest using MP4Box.
I have copied all the files to Azure Blob Storage, and the blob storage has HTTPS access. Can I use the .mpd as the source URL for Azure Media Player?
e.g. an Azure Media Player source is //amssamples.streaming.mediaservices.windows.net/3b970ae0-39d5-44bd-b3a3-3136143d6435/AzureMediaServicesPromo.ism/manifest
but my generated manifest from MP4Box is
https://bb.sourceoftraining.companywebinternet.storage/ssj-ewrrer-2343s-ssssdf23/process_and_benifits.mpd
Or is there any other player I can use? I tried Shaka Player but was unable to show the resolution and playback speed settings.
Uploading pre-encoded MP4s works just fine. I suggest you download the latest version of the Azure Media Explorer tool for the v3 API. In there you can now upload an MP4 into a new asset and have it generate the client and server manifests needed for streaming. Just upload to a new empty asset, then double-click the asset to get to the tab for its files, and click the generate-manifests button.
That pre-generates the manifest files required for streaming an MP4 that is pre-encoded with closed 2-second GOPs. The tool pre-generates both the client and server manifests and saves them back into the asset to improve playback performance from the streaming server.
You can use Azure Media Player to play back DASH, Smooth Streaming, or HLS - but the technology it chooses for playback differs by platform. For example, depending on the browser version, OS, or mobile client, it will load a different player tech or use the built-in OS player support.
https://learn.microsoft.com/en-us/azure/media-services/azure-media-player/azure-media-player-overview
For DASH content (.mpd) the AMP player uses DASH on Windows, and on Android under specific conditions. It does this by detecting the platform and using the right tech combined with the /manifest(format=mpd-time-cmaf) format on the URL. You can learn more about how "dynamic packaging" works in AMS here - https://learn.microsoft.com/en-us/azure/media-services/latest/dynamic-packaging-overview
There are various "format" options on the streaming locator URL in AMS that return different manifest formats:
Smooth Streaming = /manifest
MPEG-DASH CMAF = /manifest(format=mpd-time-cmaf)
HLS with CMAF = /manifest(format=m3u8-cmaf)
HLS v3 (TS) = /manifest(format=m3u8-aapl-v3)
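In other words, given one streaming locator base URL, the different manifests are just suffixes. A small sketch (the host and path below are placeholders, not a real endpoint):

var base = 'https://myaccount-usea.streaming.media.azure.net/<locator-id>/video.ism';
var manifests = {
  smooth:  base + '/manifest',
  dash:    base + '/manifest(format=mpd-time-cmaf)',
  hlsCmaf: base + '/manifest(format=m3u8-cmaf)',
  hlsV3:   base + '/manifest(format=m3u8-aapl-v3)'
};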
Using one of those formats, you can use any third-party player that supports them: Shaka, hls.js, ExoPlayer on Android, the iOS AVFoundation native player, Video.js, or even the 'adpater-player' noted by Jason above. Any player that supports the current HLS or DASH specifications should work.
If you have School email addresses that you can use for yourself and your students the simplest solution would be to leverage capabilities from Microsoft Stream via the free O365 education plan - https://www.microsoft.com/en-us/microsoft-365/academic/compare-office-365-education-plans. Info on Microsoft Stream at https://www.microsoft.com/en-us/microsoft-365/microsoft-stream.
And to clarify the feedback Jason Pan just provided: while Azure Media Player doesn't support pointing directly at a .mpd file, playback is instead done by first creating the appropriate server manifest and then requesting the .mpd manifest via the format option on the URL that clients use to request content. Media Services then dynamically creates the appropriate manifest in response to the client request. See John's response for links to articles with additional detail.
If you use Shaka Player's UI library, you'll be able to display the resolution and playback speed settings. See the Shaka UI library documentation and the Shaka Player demo.
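A minimal sketch of wiring that up (assuming shaka-player.ui.js and its CSS are loaded; the element IDs and manifest URL are placeholders):

var video = document.getElementById('video');
var container = document.getElementById('video-container');
var player = new shaka.Player(video);
var ui = new shaka.ui.Overlay(player, container, video);

// Make the quality (resolution) and playback-rate buttons explicit.
ui.configure({
  overflowMenuButtons: ['quality', 'playback_rate', 'captions']
});

player.load('https://example.com/locator/manifest(format=mpd-time-cmaf)')
  .catch(function (e) { console.error('Error loading manifest', e); });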

Is there a way to get access to the master mixer or other devices/channels via the Web Audio API?

Is there a way to record the audio currently being mixed down (possibly from another tab/process) on the hardware? Is there a way to connect to the browser's mixer as an input?
Studio hardware usually has several input/output channels, mono and/or stereo; is there a way to get these connected onto the graph? Is there, or will there be, some device enumeration API?
The closest thing you might be able to do is get data from the microphone, and set your system's output as the recording device (in Windows: Manage Audio Devices > Recording > Stereo Mix). Then just use getUserMedia to get the audio:
var context = new AudioContext();
navigator.webkitGetUserMedia({ audio: true }, function (stream) {
  // Wrap the captured stream (microphone or "Stereo Mix") as a source node.
  var microphone = context.createMediaStreamSource(stream);
  microphone.connect(context.destination);
}, function (err) { console.error(err); });
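On the device-enumeration question: browsers newer than this answer expose navigator.mediaDevices.enumerateDevices(), which lists audio inputs and outputs:

navigator.mediaDevices.enumerateDevices().then(function (devices) {
  devices.forEach(function (d) {
    console.log(d.kind, d.label, d.deviceId); // e.g. 'audioinput', 'audiooutput'
  });
});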

Upload video on a Convergence Application

Hi everybody. I am trying to develop a web application that can control a smart TV, following this guide: http://samsungdforum.com/Guide/tut00024/index.html. It works fine, but now I would like to upload a video from the computer so it can be displayed on the smart TV, like the image shown in the tutorial. Does anyone have an idea, example, or suggestion for how to modify the code of the convergence tutorial, which can already send messages, so that it can also send video from the client application to the smart TV application?
Sending files is covered by the tutorial. You can find the API reference for this here.
Sending a video file is not exactly a wise thing to do, because there is a 3 MB limit on files sent using the Convergence API. This API is designed for sending messages between the TV and an external client rather than files. If you want to launch video playback, send the video URL from the web app to the TV and let the TV download the video by itself.

Stream audio off-site for an iOS app?

I am working with a group on developing an app that will essentially be a 'radio' app: one view that just plays whatever audio is streaming at the time, and another view or two of archives for listening to past programs. What I am working on right now is how to assemble the playback view. The site in question is on-this-rock.org, and the source for playing is here.
Any suggestions for how I can best go about building the player to stream in the audio, without needing the rest of the site's graphics?
Thanks
The stream URL is actually:
http://s4.voscast.com:8080/
This is just a SHOUTcast stream, so you can build your radio player to connect directly to it. There's no need for the HTML/Flash on the website itself.
You can find this easily by looking at the network tab of your browser's developer tools, or by using a tool such as Fiddler or Wireshark.
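As a quick sanity check in a browser console (assuming this is a SHOUTcast v1 server, which typically needs a trailing /; to serve the raw stream instead of the status page):

var radio = new Audio('http://s4.voscast.com:8080/;');
radio.play();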

How to create an application for video sharing or live video viewing between two iPhones

I am creating an application where one person can view live video from another iPhone, i.e. one iPhone is recording and another is viewing the same stream, as with FaceTime, but handled by our own server.
I have learned that I could use an XMPP client, and also the Google API, but how do I use them, and what else is required to create this kind of application?
Also, do we need to build our own server side, or can we use servers that are already available, like Google/GTalk or any other?
Please guide me on what other things are required for this.
Thanks.
I believe that for connecting two devices together GStreamer is one of the best choices: it's broadly used and there's a lot of material/documentation on it.
GStreamer has a pipeline architecture, inspired by DirectShow and QuickTime, and it provides a command-line tool named gst-launch that allows you to create a pipeline and quickly test several components of the library together (see the example below).
This message shares some interesting info on how to stream video directly from the iPhone camera using gst-launch, while receiving the data on a PC through VLC. Which means 50% of what you are looking for is already done.
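For a flavor of the tool, a trivial gst-launch pipeline (assuming GStreamer 1.x; it just renders a test pattern locally) looks like:

gst-launch-1.0 videotestsrc ! autovideosink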
Another option, also demonstrated in that message, is to use FFmpeg.
I'd like to advocate FFmpeg, which has been successfully ported to iOS.
What you need to do is:
1. Rewrite ffserver to use the camera input as the video source and encode it with an H.264/MPEG-4 encoder.
2. Rewrite ffplay so that it can display video on iOS devices. The network protocol and video decoder parts are ready.