With the web audio API, I want to pan sound (with a PannerNode), then feed the sound into a ChannelSplitterNode so I can apply an AnalyserNode to each channel.
However, the ChannelSplitterNode does away with the pan created by the earlier PannerNode.
This gist illustrates my problem. It's lengthy, but we can focus on the panAndPlay method:
// Pans and plays a sound.
function panAndPlay(source) {
  // Specify a pan.
  var panner = audioContext.createPanner();
  panner.panningModel = 'equalpower';
  panner.setPosition(1, 0, 0);
  // Create a splitter node that is supposed to split by channel.
  var splitter = audioContext.createChannelSplitter();
  // Create the audio graph: source -> panner -> splitter -> destination
  source.connect(panner);
  panner.connect(splitter);
  // Try to hook up both channels outputted by the splitter to destination.
  splitter.connect(audioContext.destination, 0);
  splitter.connect(audioContext.destination, 1);
  // The splitter seems to ignore the pan: You only hear the pan effect if you
  // bypass the splitter by directly connecting the panner to the
  // destination. Why does the splitter remove the pan effect? I want to use
  // a splitter so that I can hook up an analyzer node to each channel, but
  // I also want the pan effect ...
  // Start playing sound.
  source.start(0);
}
How can I get the splitter's channel outputs to preserve the pan effect applied earlier in the graph?
The splitter outputs two mono signals. When you connect both of those mono signals to the destination, they get mixed into a single mono signal. You would have to use a merger node to turn them back into a stereo signal, and connect that to the destination, to hear it as stereo.
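For example, a minimal sketch of that merger approach, continuing from the question's variables:
var merger = audioContext.createChannelMerger(2);
splitter.connect(merger, 0, 0); // splitter output 0 -> merger input 0 (left)
splitter.connect(merger, 1, 1); // splitter output 1 -> merger input 1 (right)
merger.connect(audioContext.destination); // stereo again, pan intact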
But it would probably be better to connect the panner directly to the destination for playback, and also connect it to the splitter, with each splitter output feeding its own AnalyserNode. The AnalyserNodes' outputs don't need to be connected to anything, so you don't need a merger node at all.
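A minimal sketch of that recommended graph, continuing with the question's panner variable (the analyser names are illustrative):
var splitter = audioContext.createChannelSplitter(2);
var leftAnalyser = audioContext.createAnalyser();
var rightAnalyser = audioContext.createAnalyser();
panner.connect(audioContext.destination); // playback path keeps the stereo pan
panner.connect(splitter);                 // analysis path
splitter.connect(leftAnalyser, 0);        // channel 0 -> left analyser
splitter.connect(rightAnalyser, 1);       // channel 1 -> right analyser
// The AnalyserNodes' outputs are deliberately left unconnected.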
I can draw a line with a Line Renderer, but I want to pull that line in with a realistic effect, like a vacuum cleaner cable.
video (for 5 sec)
I want to connect all the parts to the body part by drawing a line. After that, they should unite with a visual effect (like a vacuum cleaner cable).
I just don't know how I can implement this physics effect.
example scene
Try this:
https://www.youtube.com/watch?v=SY2npMqaOpc
Then the position should be the midpoint between object 1 and object 2, the two objects you want to connect:
Click on one object (pos1) with a raycast.
Click on the second object (pos2) with a raycast.
Now you have their positions and the objects selected.
Connect them (I don't know how you want the connection to look).
Or do it manually, with separate code for each connection.
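A rough Unity C# sketch of the two-click raycast selection (the class name, fields, and the straight-line drawing at the end are illustrative assumptions, not from the question):
using UnityEngine;

public class ConnectByClick : MonoBehaviour
{
    public LineRenderer line;  // assumed to be assigned in the Inspector
    private Transform first;   // first clicked object, if any

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;

        if (first == null)
        {
            first = hit.transform; // remember pos1's object
        }
        else
        {
            // Draw a straight line between the two clicked objects.
            line.positionCount = 2;
            line.SetPosition(0, first.position);
            line.SetPosition(1, hit.transform.position);
            first = null; // ready for the next pair
        }
    }
}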
I am trying to use an AVAudioEnvironmentNode() with AVAudioEngine and ARKit.
I am not using ARSCNView or any of the other ARViews, this is just a plain ARSession.
I have a sound source -> AVAudioEnvironmentNode -> AVAudioEngine.mainOut.
I understand how to set the position of the sound source. I am trying to figure out how to move the audio listener, because I want to walk around the sound source in space.
The Apple documentation says that to update the node's listener position you use:
AVAudioEnvironmentNode.listenerPosition = AVAudio3DPoint(x: newX, y: newY, z: newZ)
When I pass the ARCamera's forward and up vectors to the node, that seems to work fine. However, when I try to change the position, I do not hear anything, and when I print a debug of listenerPosition, the output stays at the zero origin, even though the camera's position is moving.
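For reference, the update I'm doing looks roughly like this (assuming an ARSessionDelegate callback and an environmentNode property; names are illustrative):
// Mirror the camera pose onto the environment node's listener each frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let cam = frame.camera.transform // 4x4 camera-to-world transform
    let pos = cam.columns.3          // translation component
    environmentNode.listenerPosition = AVAudio3DPoint(x: pos.x, y: pos.y, z: pos.z)
}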
Is there something I have to do to make the AVAudioEnvironmentNode accept a new listener position?
Thanks.
I am using a Focusrite Scarlett 2i2 into a Mac. The signal into the Scarlett is a guitar.
With code along these lines I can get audio into the app, but it is only the left channel of the stereo signal.
mic = AKMicrophone()
device = AKDevice(name: "Scarlett 2i4 USB", deviceID: 56)
mic.setDevice(device)
let booster = AKBooster(mic, gain: 1.0)
AudioKit.output = booster
AudioKit.start()
mic.start()
Is there a simple way to combine left and right channels from a mic input into a single mono signal (or left and right with the same signal)?
I tried a variation on this answer about flipping left and right channels: AudioKit - Stereo channel flipping from input to output?
But that didn't work. FWIW, it also didn't work for purely flipping the channels (AKPanner seems to be able to pan something from the center to hard left, but not from hard left to center or right.)
Two other things that might be related:
It seems that AKStereoInput is not available for the Mac platform. Is that correct?
What exactly is "deviceID"? I seem to be able to change that and get the same result.
Thank you.
Yes, there is something called AKStereoFieldLimiter that does just that:
https://audiokit.io/docs/Classes/AKStereoFieldLimiter.html
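A minimal sketch of how it might slot into the question's chain — AKStereoFieldLimiter sits between the mic and the booster, and amount: 1.0 collapses the stereo field fully to mono (untested against this exact setup):
mic = AKMicrophone()
mic.setDevice(device)
let mono = AKStereoFieldLimiter(mic, amount: 1.0) // 1.0 = fully mono
let booster = AKBooster(mono, gain: 1.0)
AudioKit.output = booster
AudioKit.start() // recent AudioKit versions require `try AudioKit.start()`
mic.start()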
In a custom CustusX extension plugin that implements GUIExtenderService, I want to access the streamed image of an ultrasound probe together with its position.
The Documentation says:
VideoSource has two main users: Rendering to screen and recording to disk. VideoGraphics contains the visualization functionality that the Reps uses when rendering in the Views. VideoGraphics needs a Probe to provide position information. Probe also wraps the VideoSource with its own ProbeAdapterVideoSource (using the adapter pattern) in order to add special information not known to the VideoSource, such as pixel spacing.
So to my understanding, VideoSource is responsible for the image and Probe for the position. If I start with the VideoSource, hook up to newFrame, and retrieve the image with getVtkImageData, I only get the image data. So the question is: how can I obtain both an image frame and its corresponding position information for that frame? (Either via VideoSource and Probe, or by other means.)
You need an instance of the Probe and related objects:
VisServicesPtr services = VisServices::create(context);
ToolPtr tool = services->trackingService->getTool("myprobe");
ProbePtr probe = tool->getProbe();
VideoSourcePtr video = probe->getRTSource();
Now you have a tool, containing a probe, containing a video. video provides the image stream, while tool and probe provide position info. According to the documentation, the position of the image can be expressed as a transform rMv, where r is the global reference space and v is the image space in millimeters. To convert to pixels, multiply by the image spacing. rMv can be found using:
Transform3D rMpr, prMt, tMu, uMv, rMv;
rMpr = services->patientModelService->get_rMpr(); // patient registration (identity when only streaming)
prMt = tool->get_prMt();                          // tracked tool pose
tMu = probe->getSector()->get_tMu();              // probe calibration transforms
uMv = probe->getSector()->get_uMv();
rMv = rMpr * prMt * tMu * uMv;                    // image (mm) to global reference space
The transform rMpr is the patient registration and is identity if you are doing streaming only.
Now, a pixel position p in pixels can be converted to global space r using:
Vector3D p_v(p[0]*spacing[0], p[1]*spacing[1], 0); // z = 0 in the 2D image plane
Vector3D p_r = rMv.coord(p_v);
Note: The position obtained this way will be the last sampled tracking position, not necessarily captured simultaneously with the image frame. Interpolating with the next tracking position (received after the image frame) can increase accuracy, but that depends on the specific use case.
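Putting it together, one way to get a position per frame — sketched under the assumption that video, tool, probe, and services are members of your plugin class, with illustrative slot and class names — is to connect to VideoSource's newFrame signal and sample the transform chain in the slot:
connect(video.get(), SIGNAL(newFrame()), this, SLOT(onNewFrame()));

void MyPlugin::onNewFrame()
{
    vtkImageDataPtr image = video->getVtkImageData(); // the frame just received
    // Sample the latest tracking data for this frame:
    Transform3D rMpr = services->patientModelService->get_rMpr();
    Transform3D prMt = tool->get_prMt();
    Transform3D tMu = probe->getSector()->get_tMu();
    Transform3D uMv = probe->getSector()->get_uMv();
    Transform3D rMv = rMpr * prMt * tMu * uMv; // position of this image in reference space
    // ... use image together with rMv ...
}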
I can use the linearRampToValueAtTime() method of the Web Audio API's AudioParam interface to schedule a gradual linear change in the param. For instance, for gain:
var gainNode = audioContext.createGain();
gainNode.gain.linearRampToValueAtTime(1.0, audioContext.currentTime + 2);
I want to linearly ramp the position of a PannerNode. A Panner has a setPosition method, but I don't see an associated AudioParam:
var pannerNode = audioContext.createPanner();
pannerNode.setPosition(xPosition, yPosition, zPosition);
Can I linearly ramp the position of a panner node? I know that I could manually create a timer, and directly call setPosition over time, but can the Web Audio API handle that for me?
You can't. It's one of the many things wrong with the initial PannerNode design, and why it's being refactored into two different nodes; see https://github.com/WebAudio/web-audio-api/issues/372.
For the time being, you'll have to animate this via setInterval or the like.
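A minimal sketch of that workaround, ramping the x position linearly over two seconds (the duration and endpoints are arbitrary):
var startTime = audioContext.currentTime;
var duration = 2; // seconds
var timer = setInterval(function () {
  var t = Math.min((audioContext.currentTime - startTime) / duration, 1);
  pannerNode.setPosition(-1 + 2 * t, 0, 0); // ramp x from -1 to 1
  if (t >= 1) clearInterval(timer);         // stop once the ramp completes
}, 16); // roughly once per frame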