Web Audio API — squaring a signal by using a Gain

Should it be possible to square a signal by creating a Gain node and connecting the signal both to the node's input and to its gain parameter? I am seeing odd results, at least in Firefox. I can see that Tone.js uses a WaveShaper instead for its pow operation, so perhaps that is the way to go. But I'm curious whether, given that the API says the gain parameter is audio-rate, there must still be some delay involved.

This works for me:
var c = new AudioContext();
var o = c.createOscillator();
var g = c.createGain();
g.gain.value = 0;
g.connect(c.destination);
o.connect(g);
o.connect(g.gain);
o.start();
o.stop(c.currentTime + 2);
You can't tell from listening but if you paste the code into http://hoch.github.io/canopy/, you can see that the sine wave has been squared.

Yes, it works to square a signal this way. (I use it in my vocoder.) There should be no delay in doing things this way.

Related

How does Double DQN work?

What is the idea behind Double DQN?
In Double DQN, the Bellman update used to calculate the target Q-value for the online network is:
value = reward + discount_factor * target_network.predict(next_state)[argmax(online_network.predict(next_state))]
The Bellman update used to calculate the target Q-value in the original DQN is:
value = reward + discount_factor * max(target_network.predict(next_state))
but the target network used to evaluate the action is updated using the weights of the online_network, and the value fed into the target is basically the old Q-value of the action.
Any ideas how adding another network, based on the weights of the first network, helps?
I really liked the explanation from here:
https://becominghuman.ai/beat-atari-with-deep-reinforcement-learning-part-2-dqn-improvements-d3563f665a2c
"This is actually quite simple: you probably remember from the previous post that we were trying to optimize the Q function defined as follows:
Q(s, a) = r + γ maxₐ’(Q(s’, a’))
Because this definition is recursive (the Q value depends on other Q values), in Q-learning we end up training a network to predict its own output, as we pointed out last time.
The problem of course is that at each minibatch of training, we are changing both Q(s, a) and Q(s’, a’), in other words, we are getting closer to our target but also moving our target! This can make it a lot harder for our network to converge.
It thus seems like we should instead use a fixed target so as to avoid this problem of the network “chasing its own tail”, but of course that isn’t possible since the target Q function should get better and better as we train."
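To make the two updates above concrete, here is a small, self-contained MATLAB-style sketch of both target computations. q_online and q_target are just stand-in random linear maps, not any particular network, and the state size, action count, reward and discount factor are made up for illustration:
% Toy sketch: Double DQN target vs. original DQN target.
% q_online and q_target stand in for the two networks (random linear maps here).
rng(0);
W_online = randn(3, 4);                 % 3 actions, 4-dimensional state
W_target = randn(3, 4);
q_online = @(s) W_online * s;           % online net: used to PICK the action
q_target = @(s) W_target * s;           % target net: used to EVALUATE the action
next_state = randn(4, 1);
reward = 1;  discount_factor = 0.99;
[~, a_star] = max(q_online(next_state));               % argmax from the online net
q_next = q_target(next_state);
double_dqn_target = reward + discount_factor * q_next(a_star);
dqn_target        = reward + discount_factor * max(q_target(next_state));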

Why does this trivially learnable example break AdaBoost?

I'm testing out a boosted tree model that I built using Matlab's fitensemble method.
X = rand(100, 10);
Y = X(:, end)>.5;
boosted_tree = fitensemble(X, Y, 'AdaBoostM1', 100,'Tree');
predicted_Y = predict(boosted_tree, X);
I just wanted to run it on a few simple examples, so I threw in an easy case: one feature is > .5 for positive examples and < .5 for negative examples. I get the warning:
Warning: AdaBoostM1 exits because classification error = 0
Which leads me to think, great, it figured out the relevant feature and all the training examples were correctly classified.
But if I look at the accuracy
sum(predicted_Y==Y)/length(Y)
The result is 0.5 because the classifier simply assigned the positive class to all examples!
Why does Matlab think that classification error = 0 when it is clearly not 0? I believe this example should be easily learnable. Is there a way to prevent this error and get the correct result using this method?
Edit: The code above should reproduce the warning.
This is not a bug, it's just that AdaBoost is not designed to work in cases where the first weak learner gets perfect classification. More details:
1) The warning you get refers to the error of the first weak learner, which is indeed zero. You can see this by following the stack trace that comes with the warning into the function Ensemble.m (in MATLAB R2013b, at line 194). If you place a breakpoint there, run your example, and then run the command H.predict(X), you will see that this learner has perfect prediction.
2) So why doesn't your ensemble have perfect prediction? If you look further at Ensemble.m, you'll see that this perfect learner never gets added to the ensemble. This is also reflected in the fact that boosted_tree.NTrained is zero.
3) So why doesn't this perfect learner get added to the ensemble? If you find a description of the AdaBoost.M1 algorithm, you'll see that in each round, training examples are reweighted based on the error of the previous weak learner. But if that weak learner had no error, then the weights all become zero, and therefore all subsequent learners have nothing to do (see the sketch after the code below).
4) If you come across this situation in the real world, what do you do? Don't bother with AdaBoost! The problem is easy enough that a single one of your weak learners can solve it:
X = rand(100, 10);
Y = X(:, end)>.5;
tree = fit(ClassificationTree.template, X, Y);
predicted_Y = predict(tree, X);
accuracy = sum(predicted_Y == Y) / length(Y)
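As a side note on point 3, here is a small sketch of one AdaBoost.M1 reweighting round. This is the textbook update, not MATLAB's internal code, and the numbers are made up; it just shows how a zero-error first learner leaves nothing for later rounds to work with:
N = 100;
w = ones(N, 1) / N;                     % initial example weights
is_correct = true(N, 1);                % suppose the first weak learner gets everything right
epsilon = sum(w(~is_correct));          % weighted error = 0
beta = epsilon / (1 - epsilon);         % beta = 0
w(is_correct) = w(is_correct) * beta;   % correctly classified examples get weight 0
w = w / sum(w);                         % 0/0: the weight distribution degenerates (all NaN)
alpha = log(1 / beta);                  % the learner's voting weight blows up to Inf
% With every example weight at zero there is nothing left for later rounds to fit,
% which is why fitensemble stops and reports "classification error = 0".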

How can I find a transfer function between an input and output sampled at different rates?

I understand that normally I'd use tfest() after prepping my data with iddata(). However, for iddata() to work correctly, I need both my input and output data to be sampled at the same rate. Is there a rate-independent variant of iddata(), or any other way that will let me do this?
I am working on the same problem (https://dsp.stackexchange.com/questions/19458/how-to-compute-transfer-function-from-experimental-data) and I am not sure I have found the way to do it, but I will share my ideas so that we might find a solution that works for both of us (if you have already found a way, please share it).
Method 1
If you have your signals in the time domain you can synchronize them and then use the tfestimate function.
% Define timeseries
ts_output = timeseries(x,time1,'Name','output');
ts_input = timeseries(y,time2,'Name','input');
% Synchronization
[ts_output,ts_input] = synchronize(ts_output,ts_input,'uniform',...
'interval',delta_t);
% Compute transfer function
Fs = 1/delta_t;
[Txy,W] = tfestimate(ts_input.data,ts_output.data,[],[],[],Fs);
Method 2
Instead of synchronize, you could resample the signal sampled at the lower frequency. Assume Fs1 > Fs2:
[P,Q] = rat(Fs1/Fs2);
y2 = resample(y2,P,Q);
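Putting Method 2 together end to end, a sketch might look like the following. The sample rates, the chirp test signals, the noise, and the names u and y2 are just made-up placeholders, and it assumes both records start at the same instant:
Fs1 = 1000;  Fs2 = 400;                       % assumed rates: input at Fs1, output at Fs2
t1 = (0:2*Fs1-1)' / Fs1;
t2 = (0:2*Fs2-1)' / Fs2;
u  = chirp(t1, 1, 2, 50);                     % input, sampled at Fs1
y2 = chirp(t2, 1, 2, 50) + 0.1*randn(size(t2));   % output, sampled at Fs2
[P, Q] = rat(Fs1 / Fs2);
y2 = resample(y2, P, Q);                      % bring the output up to Fs1
n = min(numel(u), numel(y2));
[Txy, W] = tfestimate(u(1:n), y2(1:n), [], [], [], Fs1);   % transfer function estimate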

How to mimic MATLAB/Simulink relay behavior?

I am trying to mimic the behavior of MATLAB's Simulink relay block with just MATLAB code.
My code is as follows (it uses a persistent variable):
function out = fcn(u,delta)
persistent y;
if isempty(y)
    y = 0;               % initial state of the relay
end
if u >= delta
    y = 1;               % switch on above the upper threshold
elseif u <= -delta
    y = 0;               % switch off below the lower threshold
end
out = y;                 % otherwise hold the previous state
When I look at the output and compare it with the real Relay block, I see:
Where does the difference come from?
Both blocks inherit the same sample time; does the Relay block have something extra that lets it show the discontinuity?
Simulink block diagram download
I'm not quite sure about this explanation; maybe somebody can confirm it.
The MATLAB Function block does not support zero-crossing detection, but the Relay block does. That means the latter knows in advance when your sine will reach the threshold delta and sets the output at exactly the right time. The MATLAB Function block needs two or more steps to detect the slope (that is, the crossing of the threshold), so only from one step to the next does it realize that the condition for the new output has been met and update the output. As a result you get a ramp, not a step.
C/C++ S-functions do support zero-crossing detection, though it seems quite complicated.
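To see the sampling effect in plain MATLAB, here is a rough illustration only, not the Simulink internals; the threshold, sample time and 1 Hz sine input are made up. It steps the same relay logic on a fixed grid and compares the switching instant with the exact threshold crossing:
delta = 0.5;  Ts = 0.1;                 % assumed threshold and sample time
t = 0:Ts:1;
u = sin(2*pi*t);                        % assumed 1 Hz sine input
y = 0;  out = zeros(size(t));
for k = 1:numel(t)
    if u(k) >= delta
        y = 1;
    elseif u(k) <= -delta
        y = 0;
    end
    out(k) = y;                         % the output can only change AT a sample instant
end
t_exact = asin(delta) / (2*pi);         % true crossing time, about 0.083 s
% out switches at the first sample after t_exact (t = 0.1 here), whereas the Relay
% block, thanks to zero-crossing detection, places the step at t_exact itself.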

Sample-by-sample cross-correlation (xcorr) in MATLAB

I am using the xcorr function to measure the similarity of signals. The following is the code:
r1 = max(abs(xcorr(S1, shat1,'coeff')));
r2 = max(abs(xcorr(S1,shat2,'coeff')));
if r1>r2
    dn=shat2;
else
    dn=shat1;
end
It works perfectly, but the problem is that the signals have 40,000 samples each, so in practice I get a lot of delay. I would like to feed a bunch of samples (say, 250) into xcorr at a time to get rid of the delay, but how do I do that? I know that I have to use a for loop, but I found it difficult to write. Can someone suggest how to do it? I tried something like this:
for i=1:250:40000
r1 = max(abs(xcorr(S1(:,i), shat1(:,i),'coeff')));
but I am totally lost. Can someone suggest something, please?
If I understand you correctly, you want to cross-correlate blocks of 250 samples, one after the other. Adapting your attempt, try:
for i=1:250:40000
    r1 = max(abs(xcorr(S1(i:i+249), shat1(i:i+249),'coeff')));
end
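Note that, as written, r1 is overwritten on every pass of the loop. If you want to keep one similarity score per 250-sample block, a small extension could look like this (assuming S1 and shat1 are plain vectors of length 40,000):
blockLen = 250;
nBlocks  = floor(numel(S1) / blockLen);
r1 = zeros(1, nBlocks);
for b = 1:nBlocks
    idx = (b-1)*blockLen + (1:blockLen);            % samples in this block
    r1(b) = max(abs(xcorr(S1(idx), shat1(idx), 'coeff')));
end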
As a side note, do you know anything about the maximum lag between your signals? If you can safely assume that the temporal shift between your signals is below 250 (which the idea of splitting them into intervals suggests), you could save calculation time by modifying your original code to use maxlags, a parameter of xcorr:
maxlags=250; % or some other reasonable value, maybe even 100? 50?
r1 = max(abs(xcorr(S1, shat1,maxlags, 'coeff')));
r2 = max(abs(xcorr(S1, shat2,maxlags, 'coeff')));
...
I haven't tested how fast that would be, but my guess is you might be able to avoid your loop altogether with this...