audioContext.listener is deprecated and in its place is a "spatialListener"; I am curious whether it is still a property of the audio context.
In other words, is the syntax audioContext.spatialListener?
Also, it is not clear what the difference is between the pannerNode and the spatialPanner node at this point; any clarification would be appreciated. If spatialPanner is replacing the previous panner node, then what role does the previous pannerNode have, if any?
There are currently three panners:
1) StereoPanner. This is a simple, equal-power panner with a left/right balance AudioParam. Most non-3D panning scenarios should probably use this - it's simple, lightweight, and works well for speakers and headphones.
2) Panner. This is the previous panner with x/y/z controls (and the Listener to set up the listener position and orientation). Unfortunately, the x/y/z controls weren't set up as AudioParams, and it was too late to change them in place: every bit of code out there using Panner would have broken. (Same with Listener: the params needed to be AudioParams, not doubles; that's why there is now SpatialListener.) This node is deprecated and will go away, hopefully before v1 of the Web Audio spec is finalized. It supports both equal-power and HRTF (head-related transfer function) panning, which enables 3D positioning.
3) SpatialPanner. This is basically the same as #2, except a) it uses AudioParams for the parameters, so it's smoothly automatable, b) it's relative to the SpatialListener, which also uses AudioParams, and c) it's not deprecated. :) If you're not using StereoPanner, you should probably use SpatialPanner and SpatialListener.
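For the common left/right case, here is a minimal sketch of the StereoPanner path (assuming a browser that already implements StereoPannerNode; the source node and timing values are just placeholders):

const audioCtx = new AudioContext();
const source = audioCtx.createBufferSource();   // any source node would do
const panner = audioCtx.createStereoPanner();

// pan is an AudioParam (-1 = hard left, +1 = hard right), so it can be automated
panner.pan.setValueAtTime(-1, audioCtx.currentTime);
panner.pan.linearRampToValueAtTime(1, audioCtx.currentTime + 2);

source.connect(panner);
panner.connect(audioCtx.destination);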
Related
I'm working on a live music visualisation project, where I am using a particle stream to visualise each channel of audio (vocals, guitar, percussion, bass) which are each coming from a looper.
I have the visualisation aspects working - I do envelope tracking in a separate pd instance, send the envelope details via udp to my gem instance, which then uses that to vary the size and colour of multiple particle streams.
The problem I have is that I am trying to set the origin point of each stream, and they are either interacting or they are controlling the origin of a different stream. The part_velocity also seems to be having a similar issue.
Each particle system has its own gemhead (which I init as, say, [gemhead 20] so each one is unique), but changing the XYZ for its [part_source 1 point] object seems to affect a stream that's in a different gemhead chain.
I have also moved it off into an abstraction, where I name its head [gemhead $0] and I am having the same issue.
This unanswered thread from years ago shows two other people having the same problem, but no answers.
Here's a portion of my main patch which calls the abstraction:
And this is the abstraction:
Am I missing something simple here, or is there perhaps a bug in that one of the part_xxx objects is not checking which gemhead list it's in? Note that there are other gemheads in the main patch, some have an argument, some don't, but they're doing other stuff.
Oh yeah, and input is welcome on the somewhat dumb-looking way that I'm preserving state here, I've NO idea what the patterns are here, and cannot for the life of me find any good advice on it!
What is the difference between those two methods? Why should I prefer one over the other?
1)
GameObject.FindGameObjectWithTag("").GetComponent<Rocket>().active = true;
2)
GameObject.FindGameObjectWithTag("").GetComponent<Rocket>().SendMessage("setActive");
thanks!
Sending a message searches through all the components of the gameObject and invokes any function that has the same name as the message. I'm not 100% sure, but I believe this uses reflection, which is generally considered slow.
SetActive() (or the active variable) sets the gameObject as active or not. If it's not active, it won't render in the scene, and vice versa.
First of all, it seems there are several inconsistencies in your code above:
1) Components (and MonoBehaviour) don't have an active property (active belongs to GameObject), so the first line of code shouldn't compile. In addition, the most recent versions of Unity don't have active anymore; it has been replaced by activeSelf and activeInHierarchy.
And by the way, both activeSelf and activeInHierarchy are read-only, so you can't assign a value to them directly. To change their value, use the SetActive method.
2)
The second line of code shouldn't work either (unless Unity does some magic behind the scenes), because the SetActive method belongs to GameObject and not to your Rocket component.
Now, I suppose your question was the difference between:
gameObject.SetActive(true);
and
gameObject.SendMessage("SetActive",true);
The result is the same, but in the second way Unity3D will use reflection to find the proper method to be called, if any. Reflection has a heavy impact on performance, so I suggest avoiding it as much as possible.
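As a rough sketch of the difference (the tag, the Launcher class, and the "OnLaunched" method name are hypothetical; it just assumes some tagged object exists in the scene):

using UnityEngine;

public class Launcher : MonoBehaviour
{
    void Start()
    {
        GameObject rocket = GameObject.FindGameObjectWithTag("Rocket");

        // Direct call: compile-time checked, no reflection involved.
        rocket.SetActive(true);

        // SendMessage: Unity searches every component on the object at runtime
        // for a method named "OnLaunched" and invokes it if found (reflection).
        rocket.SendMessage("OnLaunched", SendMessageOptions.DontRequireReceiver);
    }
}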
Simple question:
I've added some scales (sliders) to my window, and I want to call a method when you move the scale.
What is the signal name that I use for gtk_signal_connect?
i.e. I should be able to write something like:
gtk_signal_connect(GTK_OBJECT(my_scale), "scale_moved", (GtkSignalFunc)my_event, data);
or am I missing something here?
And more importantly - how do I find out in the future what the signal names are? For example, I googled 'gtk_signal_connect' but I didn't find a big list of different signals.
Similarly, I didn't find details about related signals in the GtkScale documentation. (Well, on that page there is a single signal listed, but it relates to changing the displayed value format.)
GtkScale inherits from GtkRange, and signals are inherited in GTK+. Therefore, you can connect to the value-changed signal exposed by GtkRange.
You're on the right track to find the signals exposed by a given GTK+ widget: besides the source code itself, the documentation is indeed the canonical resource, but you should also take the base classes into account in your search.
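A minimal sketch of what that looks like, reusing my_scale and data from your snippet and the modern g_signal_connect macro (gtk_signal_connect is the deprecated GTK+ 1.x spelling):

/* "value-changed" is defined on GtkRange, which GtkScale inherits from */
static void on_value_changed(GtkRange *range, gpointer user_data)
{
    g_print("scale moved to %f\n", gtk_range_get_value(range));
}

g_signal_connect(my_scale, "value-changed", G_CALLBACK(on_value_changed), data);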
I want to write GUI code that is orthogonal. Let's say I have a circle class and a square class and they need to interact. Right now, to get the circle and square talking to each other - say the circle object sends a message to the square object - I would use something like square_obj.listen_for_circle(circle_obj), where listen_for_circle is a method that implements an addlistener.
This is a problem for me since now the two objects are linked - and removing one object from my code would break it. What I am looking to do is for the circle_obj to be able to broadcast a global message, say 'CIRCLE_EVENT'. Additionally, square_obj would be listening for global message broadcasts of type 'CIRCLE_EVENT', and upon hearing the event - does some action. (Ahhh, now the objects have no links to each other in the code base!)
Is this possible or even reasonable in MATLAB? (Or maybe I'm just going crazy.)
As always, advice much appreciated.
I'm not really sure why addlistener is problematic for you. It basically just adds an event listener that doesn't do anything if the event-origin object (the circle) is deleted.
Alternatively, you can use event.listener or handle.listener. They are undocumented but work well, and are widely used within the Matlab codebase (m-files). See the explanation here: http://UndocumentedMatlab.com/blog/continuous-slider-callback/#Event_Listener
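To illustrate the decoupling you're after, here is a minimal sketch using handle-class events (class, method, and event names are placeholders; each classdef goes in its own file). The circle just fires a named event and never holds a reference to the square:

classdef Circle < handle
    events
        CIRCLE_EVENT
    end
    methods
        function poke(obj)
            notify(obj, 'CIRCLE_EVENT');   % broadcast; the circle knows nothing about listeners
        end
    end
end

classdef Square < handle
    methods
        function listenTo(obj, circleObj)
            % the only coupling is here, at wiring time
            addlistener(circleObj, 'CIRCLE_EVENT', @(src, evt) obj.onCircle());
        end
        function onCircle(obj)
            disp('Square heard CIRCLE_EVENT');
        end
    end
end

Usage would then be along the lines of: c = Circle; s = Square; s.listenTo(c); c.poke();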
I am designing a music visualiser application for the iPhone.
I was thinking of doing this by picking up data via the iPhone's mic, running a Fourier Transform on it and then creating visualisations.
The best example I have been able to find of this is aurioTouch, which produces a perfect graph based on FFT data. However, I have been struggling to understand / replicate aurioTouch in my own project.
I am unable to understand where exactly aurioTouch picks up the data from the microphone before it does the FFT.
Also, are there any other examples of code that I could use to do this in my project? Or any other tips?
Since I am planning to use the mic input myself, I thought your question was a good opportunity to get familiar with the relevant sample code.
I will trace back the steps of reading through the code:
Starting off in SpectrumAnalysis.cpp (since it is obvious the audio has to get to this class somehow), you can see that the class method SpectrumAnalysisProcess has a 2nd input argument const int32_t* inTimeSig --- this sounds like a promising starting point, since the input time signal is what we are looking for.
Using the right-click menu item Find in project on this method, you can see that, except for the obvious definition & declaration, this method is used only inside the FFTBufferManager::ComputeFFT method, where it gets mAudioBuffer as its 2nd argument (the inTimeSig from step 1). Searching for this class data member gives more than 2 or 3 results, but most of them are again just definitions/memory allocation etc. The interesting search result is where mAudioBuffer is used as an argument to memcpy, inside the method FFTBufferManager::GrabAudioData.
Again using the search option, we see that FFTBufferManager::GrabAudioData is called only once, inside a method called PerformThru. This method has an input argument called ioData (sounds promising) of type AudioBufferList.
Looking for PerformThru, we see it is used in the following line: inputProc.inputProc = PerformThru; - we're almost there: it looks like registering a callback function. Looking at the type of inputProc, we indeed see it is AURenderCallbackStruct - that's it. The callback is called by the audio framework, which is responsible for feeding it with samples.
You will probably have to read the documentation for AURenderCallbackStruct (or better yet, the Audio Unit Hosting Guide) to get a deeper understanding, but I hope this gave you a good starting point.
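To make that last step concrete, this is roughly the shape of the callback registration and the mic pull - a hedged sketch, not aurioTouch's exact code; the RemoteIO unit setup, error handling, and variable names (rioUnit etc.) are placeholders:

#include <AudioUnit/AudioUnit.h>

// AURenderCallback signature: the audio framework calls this when it needs/has samples
static OSStatus PerformThru(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;
    // Pull the microphone samples from the RemoteIO unit's input bus (bus 1) into ioData;
    // from here they can be copied into an FFT buffer (GrabAudioData in the sample).
    return AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
}

static void registerCallback(AudioUnit rioUnit)
{
    AURenderCallbackStruct inputProc;
    inputProc.inputProc = PerformThru;
    inputProc.inputProcRefCon = rioUnit;   // handed back as inRefCon above
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &inputProc, sizeof(inputProc));
}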