I have an EEG signal that is processed using OpenViBE, which outputs the signal as OSC. Using LiveGrabber I was able to receive the signal in Ableton (see picture). However, I'm now stuck on how to make use of this signal. I want to convert it into MIDI to control VST parameters in Ableton. Can anyone give me pointers?
For the answer, linking to the conversation from the same question on the Showsync forum: https://forum.showsync.com/t/integrating-eeg-osc-signal-with-ableton/794/2
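For a rough idea of the mapping step discussed in that thread: once an OSC float arrives, turning it into a MIDI CC value is just a rescale onto the 0-127 controller range. A minimal Python sketch (the function name and the assumed 0-1 input range are illustrative, not part of the original setup):

```python
def osc_to_midi_cc(value, in_min=0.0, in_max=1.0):
    """Map a float from an OSC message onto the 0-127 MIDI CC range.

    The 0-1 input range is an assumption; set in_min/in_max to the
    actual range of your EEG feature.
    """
    # Clamp so out-of-range spikes don't wrap around.
    value = max(in_min, min(in_max, value))
    # Linear rescale to the 7-bit MIDI controller range.
    return round((value - in_min) / (in_max - in_min) * 127)

print(osc_to_midi_cc(0.5))   # midpoint of the input range -> 64
```

The same scaling is what a MIDI-mapping device ends up doing internally; the interesting design choice is picking in_min/in_max so that the useful dynamic range of the EEG feature spans the whole CC range.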
Related
I'm trying to send and receive data through a serial port using Simulink (MATLAB) and an Arduino. Receiving data from the Arduino in Simulink works without any problem, but when sending data to the Arduino I run into this error.
The Simulink model is shown in this picture:
The notation double (c) on the output y of your MATLAB Function block indicates that the signal is numerically complex; see Display Signal Attributes in the documentation for more details. This is the source of your problem, as stated in the error message (which is pretty self-explanatory, by the way).
To fix it, you need to specify the data type of your outputs to be real in the Ports and Data Manager.
Alternatively, you can add a Complex to Real-Imag block to the output(s) of your MATLAB Function block and take only the real or imaginary part of the signal, depending on what you want your algorithm to do.
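For intuition, what the Complex to Real-Imag block does is split each complex sample into its two real components; the equivalent operation in code looks like this Python sketch (the sample value is made up):

```python
# One complex sample standing in for the output y of the MATLAB Function block.
z = complex(3.0, 4.0)

real_part = z.real    # what the Real output of Complex to Real-Imag carries
imag_part = z.imag    # what the Imag output carries
magnitude = abs(z)    # what a Complex to Magnitude-Angle block would give instead
```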
I need to implement an LMS-based adaptive audio-cancellation algorithm on the Simulink Desktop Real-Time toolbox.
The physical system is composed of a microphone recording a noise source and another microphone recording the residual noise after the control process (antinoise being injected by a speaker controlled by Simulink).
For the (adaptive) LMS algorithm to work properly I need to be able to work on a sample-by-sample basis; that is, at each sampled time instant I need to update the adaptive filter using the synchronised current sample values of both microphones. I realise some delay is inevitable, but I was wondering whether it is possible in Simulink Desktop Real-Time to reduce the input buffer size to one sample and thus work on a sample-by-sample basis.
Thanks for your help in advance.
You can always implement the filter on a sample-by-sample basis.
However, you still need a history of input values to perform the actual LMS calculation on. On a sample-by-sample basis this simply means keeping the recent inputs in a FIFO buffer.
If you have access to the DSP Toolbox then there is already an LMS Filter block that will do this for you.
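To make the sample-by-sample recursion concrete, here is a minimal plain-Python sketch of an LMS update with a FIFO of past inputs; the filter length, step size, and the toy "unknown system" d = 0.5*x are illustration values only, and in Simulink the LMS Filter block implements this same recursion for you:

```python
import random
from collections import deque

def make_lms(num_taps=8, mu=0.05):
    """Return a per-sample LMS step function (num_taps and mu are illustrative)."""
    w = [0.0] * num_taps                               # adaptive weights
    x_hist = deque([0.0] * num_taps, maxlen=num_taps)  # FIFO of recent inputs

    def step(x, d):
        """Consume one input sample x and desired sample d; return (y, e)."""
        x_hist.appendleft(x)           # newest sample first; oldest drops off
        y = sum(wi * xi for wi, xi in zip(w, x_hist))   # filter output
        e = d - y                                       # error signal
        for i, xi in enumerate(x_hist):                 # LMS update: w += mu*e*x
            w[i] += mu * e * xi
        return y, e

    return step

# Sanity check: identify a toy "unknown system" d = 0.5*x, one sample at a time.
random.seed(0)
lms = make_lms(num_taps=4, mu=0.1)
e = 0.0
for _ in range(2000):
    x = random.uniform(-1.0, 1.0)
    _, e = lms(x, 0.5 * x)       # error shrinks as the filter converges
```

Each call to step consumes exactly one sample pair, which is the granularity the question asks about; the buffering lives entirely inside the deque.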
My colleague and I are developing a sound and speech processing module on an Analog Devices DSP. Because of the proximity of our single microphone and speaker, we have been experiencing significant echo. We want to implement an NLMS-based algorithm to reduce this echo.
I first wanted to implement and test the algorithm in MATLAB, but I am still having some issues. I think I might have a theoretical problem in my algorithm: I have a rough time understanding what the "desired signal" would be, since I don't have access to an uncorrupted signal.
Here is an overview of my naive way to implement this in Matlab.
Simulink diagram here
Link to Simulink code (.slx)
Right now the code won't compile because of an "algebraic loop" error in Simulink, but I have a feeling there is more to this problem than that.
Any help would be appreciated.
The model you have is not fully correct. For acoustic echo cancellation, the adaptive filter is used to model the room: you are identifying the room characteristics with the adaptive filter. Once you have done this, you can use the adaptive filter to estimate the part of the far-end signal from the loudspeaker that goes back into the microphone, and subtract that from the microphone signal to remove the echo.
For your adaptive filter, the input should be the far-end signal, i.e. the signal going to the loudspeaker in the room. Your desired signal is the signal coming out of the microphone in the room. The microphone signal contains the voice of the person in the room and also a portion of the sound from the loudspeaker, which is the echo.
                               +-------------------+
Sound from far end ----------->| In            Out |---> (you can ignore this)
                               |  Adaptive Filter  |
Sound from local microphone -->| Desired     Error |---> echo-free output
                               +-------------------+
In this model, the Error output of the adaptive filter is your desired echo-free signal. This is because the error is computed by subtracting the adaptive filter output from the desired signal, which effectively removes the echo.
To simulate this system in Simulink you need a filter to represent the room; an ordinary FIR filter will do. You should be able to find room impulse responses online. These are usually long (~1000 taps), slowly decaying impulse responses. Your audio source can represent the signal from the loudspeaker. Feed the same audio signal into this room-response filter and you will get your desired signal. Feeding both into the adaptive filter will make it adapt to the room-response filter.
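As a sanity check outside Simulink, the whole setup described above can be simulated in a few lines of plain Python; the 3-tap "room" impulse response, the step size, and the signal length are made-up illustration values (real room responses are far longer, as noted above):

```python
import random

random.seed(1)

room = [0.6, 0.3, 0.1]       # toy room impulse response (real ones are ~1000 taps)
num_taps = len(room)
w = [0.0] * num_taps         # adaptive filter; should converge towards `room`
x_hist = [0.0] * num_taps    # FIFO of recent far-end samples
mu, eps = 0.5, 1e-6          # NLMS step size and regularisation (illustrative)

e = 0.0
for _ in range(3000):
    x = random.uniform(-1.0, 1.0)       # far-end signal, sent to the loudspeaker
    x_hist = [x] + x_hist[:-1]          # push newest sample into the FIFO
    echo = sum(h * xi for h, xi in zip(room, x_hist))
    d = echo                            # microphone: echo only, no near-end talker
    y = sum(wi * xi for wi, xi in zip(w, x_hist))   # adaptive filter output
    e = d - y                           # e is the echo-free output
    norm = sum(xi * xi for xi in x_hist) + eps      # NLMS normalisation
    w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x_hist)]
```

After convergence w approximates room and e goes to zero; if a near-end talker's signal were added to d, e would carry that voice with the echo removed, which is exactly the Error output in the diagram above.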
Is it possible to transform speech (pitch/formant shift) in (near) real-time using MATLAB? How can it be done?
If not, what should I use to do that?
I need to get input from the microphone, visualise the sound wave, apply a filter to it, view the result on the oscilloscope again, and play back the modified sound.
The real-time visualization (spectrogram) can be created with SparkNG package by Hideki Kawahara.
Sure. There's a demo application up on the MATLAB Central File Exchange that does something similar. It reads in a signal from the sound card (requires Data Acquisition Toolbox) in near real time, applies an FFT transform - you could do something else like applying a filter - and visualises the results in 3D graphs live. You could use it as a template and modify it to your needs, such as visualising in different ways (more of an oscilloscope style), or outputting the sound as a .wav file for later playback.
If you need proper real-time performance, you might look into implementing it in Simulink rather than in base MATLAB.
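The frame-by-frame read-process-play structure that both approaches share can be sketched independently of any audio API; in this Python illustration the microphone input is replaced by a synthetic sine and the processing stage is a simple moving-average filter (both stand-ins, not the actual pitch/formant shifter):

```python
import math

FRAME = 256      # samples per processing block
TAPS = 5         # moving-average length (stand-in for a real filter)

def frames(signal, size):
    """Yield the signal in fixed-size blocks, as an audio callback would."""
    for i in range(0, len(signal) - size + 1, size):
        yield signal[i:i + size]

def moving_average(frame, history):
    """Low-pass one frame; `history` carries filter state across frame boundaries."""
    out = []
    for x in frame:
        history.append(x)
        if len(history) > TAPS:
            history.pop(0)
        out.append(sum(history) / len(history))
    return out

# Synthetic "microphone" input: a 440 Hz tone sampled at 8 kHz.
sig = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(4 * FRAME)]
state, processed = [], []
for frame in frames(sig, FRAME):
    block = moving_average(frame, state)
    processed.extend(block)        # in a real app: plot and play back this block
```

The key point for near-real-time work is that the filter state lives outside the per-frame loop, so shrinking FRAME trades latency against per-callback overhead without changing the output.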
I'm learning Simulink and I want to use the Rician channel block from the Communications Blockset. I'm told I need to change the format. Would anyone have some sample code where they used the Rician channel in Simulink to model a bit-error-rate process?
MathWorks appears to have a tutorial on the subject. Have you taken a look at that?
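For reference, the bit-error-rate measurement the question describes can be prototyped outside Simulink before wiring up the blockset; this plain-Python sketch pushes BPSK symbols through a flat Rician fading channel with a hypothetical K-factor and Eb/N0 (illustration values only):

```python
import math
import random

random.seed(42)

K = 3.0            # Rician K-factor (LOS power / scattered power), illustrative
ebn0_db = 8.0      # Eb/N0 in dB, illustrative
n_bits = 20000

# Split unit channel power between the line-of-sight and scattered parts.
los = math.sqrt(K / (K + 1))
sigma = math.sqrt(1 / (2 * (K + 1)))          # per-dimension scatter std dev
noise_std = math.sqrt(1 / (2 * 10 ** (ebn0_db / 10)))

errors = 0
for _ in range(n_bits):
    bit = random.getrandbits(1)
    s = 1.0 if bit else -1.0                  # BPSK symbol
    # Rician fade magnitude: LOS component plus complex Gaussian scatter.
    h = abs(complex(los + random.gauss(0, sigma), random.gauss(0, sigma)))
    r = h * s + random.gauss(0, noise_std)    # faded symbol plus AWGN
    # Coherent detection with perfect channel knowledge: since h > 0,
    # the decision reduces to the sign of r.
    if (r > 0) != bool(bit):
        errors += 1

ber = errors / n_bits
```

In the Simulink version the fade and the noise come from the Rician channel block and an AWGN block instead, but the surrounding generate-transmit-count loop is the same BER process.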