How to modulate the pulsewidth of the Web Audio API Square OscillatorNode? - web-audio-api

I want to modulate the square waveform of the Web Audio API OscillatorNode by connecting other OscillatorNodes to it, but I cannot find a suitable parameter among the AudioParams.
Is this possible at all or is there a workaround?
I thought about creating a "custom" wavetable oscillator with the "audioContext.createWaveTable()" function. This wavetable could contain different pulses with sweeping pulse widths.
But then again, I have no idea how to control the position of the wavetable pointer via AudioParams to modulate the sweep.
Is this possible or do I have a fundamental misunderstanding how this API works?

I found a workaround to the PWM problem here:
http://musicdsp.org/archive.php?classid=1#8
"Take an upramping sawtooth and its inverse, a downramping sawtooth. Adding these two waves with a well defined delay between 0 and period (1/f)
results in a square wave with a duty cycle ranging from 0 to 100%."
The inverse sawtooth can be done with a GainNode with gain value -1.
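Translated into Web Audio nodes, a sketch of that idea might look like the following (the 110 Hz test frequency, the LFO rate, and the modulation depth are arbitrary choices, not from the original post):
const ctx = new AudioContext();
const f = 110;                            // arbitrary test frequency

const sawUp = ctx.createOscillator();     // upramping sawtooth
sawUp.type = 'sawtooth';
sawUp.frequency.value = f;

const sawDown = ctx.createOscillator();   // inverted below to get the downramp
sawDown.type = 'sawtooth';
sawDown.frequency.value = f;

const inverter = ctx.createGain();
inverter.gain.value = -1;                 // the GainNode trick from above

const delay = ctx.createDelay();          // delay between 0 and 1/f sets the duty cycle
delay.delayTime.value = 0.25 / f;         // ~25% pulse width

const mix = ctx.createGain();
sawUp.connect(mix);
sawDown.connect(inverter);
inverter.connect(delay);
delay.connect(mix);
mix.connect(ctx.destination);

// PWM: sweep the delay time with a slow LFO.
const lfo = ctx.createOscillator();
lfo.frequency.value = 0.5;
const lfoDepth = ctx.createGain();
lfoDepth.gain.value = 0.2 / f;            // keeps the total delay inside (0, 1/f)
lfo.connect(lfoDepth);
lfoDepth.connect(delay.delayTime);

sawUp.start(); sawDown.start(); lfo.start();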

You can't, I'm afraid. We don't have pulse-width modulation in the oscillator yet. You'll have to do it by hand in a ScriptProcessorNode.
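For reference, a minimal sketch of the "by hand" approach in a ScriptProcessorNode (a naive, non-band-limited square, so it will alias at high frequencies; ctx, freq and duty are illustrative names):
const ctx = new AudioContext();
const freq = 110;       // oscillator frequency in Hz
let duty = 0.25;        // pulse width, 0..1
let phase = 0;          // normalized phase, 0..1

const node = ctx.createScriptProcessor(1024, 1, 1);
node.onaudioprocess = (e) => {
  const out = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < out.length; i++) {
    out[i] = phase < duty ? 1 : -1;   // square wave with the given duty cycle
    phase += freq / ctx.sampleRate;   // advance the phase by one sample
    if (phase >= 1) phase -= 1;       // wrap around once per period
  }
};
node.connect(ctx.destination);
Changing duty from elsewhere in the script (e.g. on a timer) modulates the pulse width.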

Related

Masking Audio STFT Matlab

I want to implement time-frequency masking of audio.
First, I use the MATLAB function S = spectrogram(x,window,noverlap,nfft) to extract the STFT of the noise+target signal (from a WAV file). Then I force some coefficients of the STFT (the S variable) to zero based on a threshold. But after the ISTFT I get complex values, not the real values I expect for an audio signal.
Can anyone explain where the problem is coming from? And what is the accepted solution to a problem of this kind?
Note:
If I were doing an FFT and manipulating the result, I would make sure the spectrum keeps the properties needed for the time-domain signal to be real, but how do I preserve those properties in the STFT plane?
Are you using the MATLAB function spectrogram() or stft()?
I think you should use stft(), because you can use istft() to go back to the time domain.
Also, whatever processing you do in the time-frequency domain, apply it identically to the positive and the negative frequencies; the spectrum then stays conjugate-symmetric and the inverse transform stays real.
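A minimal sketch of that workflow, assuming the Signal Processing Toolbox stft()/istft() pair (the window, overlap, and the 10% threshold are arbitrary example values, and 'noisy.wav' is a placeholder name):
[x, fs] = audioread('noisy.wav');
x = x(:,1);

win = hann(1024, 'periodic');
[S, f, t] = stft(x, fs, 'Window', win, 'OverlapLength', 768, 'FFTLength', 1024);

% Masking on the magnitude: |S| is the same at +f and -f for a real input,
% so this treats both sides identically and preserves conjugate symmetry.
mask = abs(S) >= 0.1 * max(abs(S(:)));
S_masked = S .* mask;

y = istft(S_masked, fs, 'Window', win, 'OverlapLength', 768, 'FFTLength', 1024);
y = real(y);   % discard the residual imaginary part left by round-off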

Estimation of Pitch from Speech Signals Using Autocorrelation Algorithm

I want to detect the pitch frequency of speech signals using the autocorrelation algorithm. I have MATLAB code, but the results are wrong. I would be grateful if you could find the mistake in my code.
[y,Fs] = audioread('Sample1.wav');
y = y(:,1);                              % first channel only
auto_corr_y = xcorr(y);                  % full (two-sided) autocorrelation
subplot(2,1,1); plot(y)
subplot(2,1,2); plot(auto_corr_y)
[pks,locs] = findpeaks(auto_corr_y);     % all local maxima
[mm,peak1_ind] = max(pks);               % tallest peak (the zero-lag peak)
period = locs(peak1_ind+1) - locs(peak1_ind);   % distance to the next peak
pitch_Hz = Fs/period
Thank you for your help in this matter.
It seems your code does not work because Sample1.wav must contain only a short quasi-periodic segment of voiced speech. Also note that the pitch frequency is not constant over time, so your estimation must take this into account.
If you just want to estimate the frequency, you can take the RAPT method from the Speech Filling System (see the sfs_rapt.m wrapper for Windows).
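For what it's worth, here is a hedged sketch of the same autocorrelation idea with the peak search restricted to a plausible pitch range; the 50-400 Hz limits and the 'coeff' normalization are assumptions, not part of the original code:
[y,Fs] = audioread('Sample1.wav');
y = y(:,1);

% Normalized autocorrelation, non-negative lags only.
[r,lags] = xcorr(y,'coeff');
r = r(lags >= 0);

% Strongest peak inside an assumed 50-400 Hz pitch range,
% skipping the trivial maximum at lag 0.
min_lag = floor(Fs/400);
max_lag = min(ceil(Fs/50), numel(r)-1);
[~,rel_ind] = max(r(min_lag+1:max_lag+1));
period = min_lag + rel_ind - 1;   % pitch period in samples
pitch_Hz = Fs/period
This still assumes the analysed excerpt is short and voiced; for a whole utterance the estimate would have to be made frame by frame.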

Generating a square wave using a MATLAB embedded function

I intend to generate a square wave to be applied to a DSP.
I have written the following code and put it in an Embedded MATLAB Function block.
function y = fcn(u)
%#eml
t = 0:0.001:1;
h = sign(sin(125600*t + u));
y = (h+1)/2;
where u is a constant value of 0.582, used to shift the square wave.
The problem is that at the output of the simulation, instead of a square wave, I see only two straight lines at y = 0 and y = 1.
Please let me know where the problem is, since I cannot get the square wave.
Note that the frequency of the square wave must be 20 kHz, so I set the sampling time to 1e-7 s, and its amplitude is between 0 and 1. In addition, because this signal must be transferred to a DSP board, in the solver options I chose the type "Fixed-step" and the solver "Discrete (no continuous states)".
Thanks a lot.
This is wrong on many levels.
First of all, you never define the time vector inside a MATLAB Function; generating time is the Simulink engine's job. Pass time as an input to your MATLAB Function block and use a Clock block to generate the time input, as in the sketch below.
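A minimal sketch of the corrected block under that assumption (t comes from a Clock block wired to a second input port; 2*pi*20000 replaces the rounded 125600 to hit 20 kHz exactly):
function y = fcn(u, t)
%#eml
% t is the simulation time supplied by a Clock block.
h = sign(sin(2*pi*20000*t + u));   % 20 kHz square wave, phase-shifted by u
y = (h + 1)/2;                     % map {-1,+1} to {0,1}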
Second, the above is fine for simulation, but it sounds like you are generating C code from the Simulink model to run it (in real-time) on your DSP. This is not my area of expertise, but from memory, I think you need to enable "absolute time" or something similar for the above to work with code generation. However, I think this is target-dependent and so I'm not sure whether this will work on your DSP.
In your function, type plot(t,y) at the end. You are generating a 20 kHz square wave (assuming you are sampling at 1e-7 s), so the generation itself is working.
Now, what is the DSP board you are using, and is there any other information relevant to your problem?
I don't know what you are referring to when you say "Solver" either.
Is the "simulation" an oscilloscope or a program? Either way, perhaps it is not triggering correctly? Is there an edge trigger option?

Playing multiple sine waves on iPhone with AudioUnit(s)

I'm currently working on a program that outputs a sine wave of a set frequency through the speaker/headphones on an iPhone.
Now I want to output multiple sine waves, and I don't know which approach is better. Should I just add all the sine waves and play them using one AudioUnit, or create an AudioUnit for each sine wave?
I'm currently leaning towards the first solution, but I don't know why... it's just my instinct. It would be great if someone could explain why one solution is better than the other :)
Thanks!
You will have more precise control of the timing of the mix (where each sine wave starts and ends), and of the quality of the mix, if you create one DSP mixer and play the result through a single Audio Unit. There will also be slightly less thread-switching overhead taking up CPU cycles. A sketch of this approach follows.
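As an illustrative sketch only (the render-callback signature is the standard Audio Unit one, but SineState, kNumSines, kSampleRate and renderSines are made-up names, and a mono Float32 stream format is assumed):
#include <AudioUnit/AudioUnit.h>
#include <math.h>

#define kNumSines   3
#define kSampleRate 44100.0

typedef struct {
    double phase[kNumSines];   // current phase of each oscillator
    double freq[kNumSines];    // frequency of each oscillator in Hz
} SineState;

static OSStatus renderSines(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    SineState *state = (SineState *)inRefCon;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        double sample = 0.0;
        for (int i = 0; i < kNumSines; i++) {
            sample += sin(state->phase[i]);
            state->phase[i] += 2.0 * M_PI * state->freq[i] / kSampleRate;
            if (state->phase[i] > 2.0 * M_PI)
                state->phase[i] -= 2.0 * M_PI;
        }
        out[frame] = (Float32)(sample / kNumSines);  // scale to avoid clipping
    }
    return noErr;
}
The callback would be attached to a single RemoteIO unit via AudioUnitSetProperty with kAudioUnitProperty_SetRenderCallback; since all oscillator state lives in one struct, starting or stopping an individual sine is just a matter of changing that struct from the control thread.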

Peak detection in Performous code

I was looking to implement voice pitch detection on iPhone using the HPS method, but the detected tones are not very accurate. Performous does a decent job of pitch detection.
I looked through the code, but I did not fully get the theory behind the calculations.
They use an FFT and find the peaks, but the part where they use the phase of the FFT output confused me. I figure they use some heuristics for voice frequencies.
So, could anyone please explain the algorithm used in Performous to detect pitch?
Performous extracts pitch from the microphone, and the code is open source. Here is a description of what the algorithm does, from the person who coded it (Tronic on irc.freenode.net#performous):
1. PCM input (with buffering)
2. FFT (1024 samples at a time; afterwards remove 200 samples from the front of the buffer)
3. Reassignment method (against the previous FFT, which was 200 samples earlier)
4. Filtering of peaks (this part could be done much better or even left out)
5. Combining peaks into sets of harmonics (we call the combination a tone)
6. Temporal filtering of tones (update the set of tones detected earlier instead of simply using the newly detected ones)
7. Pick the best vocal tone (frequency limits, weighting; could use the harmonic array also, but I don't think we do)
From this information I still wasn't able to figure the algorithm out and implement it. If anyone manages this, please post your results here and comment on this response so that SO notifies me.
The task would be to create a minimal C++ wrapper around this code.