Audio conversion - matlab

I recorded my voice in MATLAB. Now I want to convert that audio into strings, i.e. written sentences, in MATLAB. Is there a way to convert audio into text?

I'm pretty sure MATLAB does not have native speech-to-text functionality.
A quick Google search turned up at least one project integrating speech-to-text into MATLAB.
http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
Other software that can translate recorded speech into text includes Microsoft's SAPI (built into Windows Vista and Windows 7, and available as a download for Windows XP) and CMU's Sphinx project. Nuance Dragon NaturallySpeaking is an option, but it is comparatively expensive. It's not obvious to me how these could be integrated into MATLAB, though.

You can get somewhat limited mileage out of the built-in Windows Speech API. It depends on your operating system, etc., and you need to follow the principles laid out in the API documentation:
http://msdn.microsoft.com/en-us/library/ms723627(v=vs.85).aspx
Using MATLAB's ActiveX server (http://www.mathworks.co.uk/help/matlab/ref/actxserver.html),
you first need to declare a speech recogniser engine:
RC = actxserver('SAPI.SpSharedRecoContext'); %connect to speech engine
Then set up callback functions for each state of the recogniser:
RC.registerevent({'Recognition' @CallbackFunction; 'Hypothesis' @CallbackFunction; 'FalseRecognition' @CallbackFunction})
The contents of the callback function should be along these lines:
function CallbackFunction(varargin)
    global word                              % recognised text, shared via a global
    result = varargin{length(varargin)-2};   % the recognition result COM object passed with the event
    word = result.PhraseInfo.GetText;        % extract the recognised phrase as a string
end
Then finally switch the recogniser on:
RC.Recognizer.State = 'SRSActive';
You would need to reference the documentation for which callback functions are called and when.
You will also need to set up a grammar dictionary to get meaningful results, as the engine will otherwise attempt to recognise any word.
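For reference, the grammar setup can be prototyped against the same SAPI COM objects outside MATLAB. Below is a rough Python/pywin32 sketch, purely as an illustration of the calls involved (the word list and rule name are made up, and the numeric flag values are assumptions); the equivalent calls can be made through the handle returned by actxserver.
import win32com.client

# Connect to the shared recogniser and create a recognition context and grammar
listener = win32com.client.Dispatch("SAPI.SpSharedRecognizer")
context = listener.CreateRecoContext()
grammar = context.CreateGrammar()

# Build a small command-and-control grammar from a fixed word list (hypothetical words)
words = ["start", "stop", "left", "right"]
rule = grammar.Rules.Add("wordsRule", 1 + 2)    # SRATopLevel (1) + SRADynamic (2), assumed values
rule.Clear()
for w in words:
    rule.InitialState.AddWordTransition(None, w)
grammar.Rules.Commit()
grammar.CmdSetRuleState("wordsRule", 1)         # 1 = SGDSActive: switch the rule on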

Related

Audio widget within Jupyter notebook is **not** playing. How can I get the widget to play the audio?

I am writing my code within a Jupyter notebook in VS Code. I am hoping to play some of the audio within my data set. However, when I execute the cell, the console reports no errors and produces the widget, but the widget displays 0:00 / 0:00 (see below), indicating there is no sound to play.
Below, I have listed two ways to reproduce the error.
I have acquired data from the hub data store. Looking specifically at the spoken MNIST data set, I cannot get the data from the audio tensor to play:
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()
display(Audio(data=sample, rate = 8000, autoplay=True))
The second example is a test (copied from another post) that I ran to see if it was something wrong with the data or something wrong with my console, environment, etc.
# Same imports as shown above, plus numpy for the signal generation
import numpy as np

# Toy function to play beats in the notebook
def beat_freq(f1=220.0, f2=224.0):
    max_time = 5
    rate = 8000
    times = np.linspace(0, max_time, rate*max_time)
    signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
    display(Audio(data=signal, rate=rate))
    return signal

v = interactive(beat_freq, f1=(200.0, 300.0), f2=(200.0, 300.0))
display(v)
I believe that if it is something wrong with the data (this is a well-known data set, so I doubt it), then only the second one will play. If it is something to do with the IDE or something else, then neither will work, as is the case now.
Apologies for the late reply! In the future, please tag the questions with activeloop so it's easier to sort through (or hit us up directly in community slack -> slack.activeloop.ai).
Regarding the Free Spoken Digit Dataset, I managed to track down the error in your usage of Activeloop Hub and audio display.
Adding [:,0] to the 9th line fixes the display on Colab, since Audio expects one-dimensional data:
%matplotlib inline
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()[:,0]
display(Audio(data=sample, rate = 8000, autoplay=True))
(When we uploaded the dataset, we decided to store the audio as (N, C), where C is the number of channels, which happens to be 1 for this particular dataset. The extra channel dimension isn't squeezed out automatically.)
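If you would rather not hard-code the indexing, a slightly more defensive variant (my own sketch, not part of the original answer, reusing the imports above) handles both 1-D and (N, C) audio tensors:
sample = ds.audio[0].numpy()
if sample.ndim > 1:
    sample = sample[:, 0]    # keep the first (and only) channel so Audio gets a 1-D array
display(Audio(data=sample, rate=8000, autoplay=True))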
Regarding VS Code: the audio, unfortunately, still won't play there (a VS Code limitation rather than something on our end), but you can still try visualizing the Free Spoken Digit Dataset (you can play the audio there, too). Hopefully this addresses your needs!
Let us know if you have further questions.
Mikayel from Activeloop

How to send PRBS pattern from sfp+ transceiver?

I have a compliance board for SFP+ transceivers. For CFP2 transceivers there is an integrated PRBS pattern generator, but I can't find anything similar for SFP+ transceivers.
Is this even possible using the compliance board alone or do I need an AWG?
You can send anything you want from an SFP+ module - it doesn't care what you feed it.
You need to turn on the laser and then feed a differential signal to pins 18 and 19. Check SFF-8418 for details.
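If you end up driving those pins from an AWG, the PRBS pattern itself is easy to generate offline: it is just the output of a linear-feedback shift register. A minimal Python sketch for PRBS-7 (polynomial x^7 + x^6 + 1; other PRBS lengths only change the tap positions):
def prbs7(length, seed=0x7F):
    # PRBS-7: 7-bit LFSR with taps on bits 7 and 6 (x^7 + x^6 + 1)
    state = seed & 0x7F          # any non-zero 7-bit seed works
    bits = []
    for _ in range(length):
        new_bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | new_bit) & 0x7F
        bits.append(new_bit)
    return bits

pattern = prbs7(127)             # one full 2**7 - 1 bit period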

Marytts HMM voice quality changes with text length

I am using MaryTTS as a text to speech engine inside a Grails Application.
During app testing I found out that the speech quality drastically changes (for the worse) with increasing text length when using an HMM voice.
So naturally I tested via the MARY Web Client while tweaking all HMM-relevant parameters (F0Add, F0Scale and Rate), as well as removing them or leaving the default values, but without success.
The voice I am using is bits1-hsmm:5.2 (German Female)
gradle dependency:
compile "de.dfki.mary:voice-bits1-hsmm:5.2"
The code is as simple as:
def marytts = new LocalMaryInterface()
marytts.locale = Locale.GERMAN
marytts.generateAudio text
Everything works fine up to the point where the text to convert goes over 120 characters (not only in the code but also via the Mary Web Client).
Here is the text I used for the last tests:
Baumaßnahmen im Mai und Oktober Notwendige Instandhaltungsarbeiten an der Münchner S-Bahn-Stammstrecke sollen von nun an gebündelt stattfinden. Die Bahn möchte dadurch die baubedingten Fahrplaneinschränkungen durch gesperrte Gleise geringer halten.
To see the difference in quality use a part of the text (first couple words) vs the whole.
Another important point: this does not occur when using a Unit Selection voice.
Am I missing something like a configuration or specific parameter set or is this the standard behaviour of HMM voices inside MaryTTS?
It would be great to be able to use this voice with decent quality, since Unit Selection voices are not available as standalone dependencies, and having to split the text into smaller parts and play them sequentially is not really something I would consider.
Any input is appreciated.
Update
Further trial and error showed that the robotic background sound is added when the text contains punctuation marks such as . , : ; [ ] { }, independent of text length! I am not really sure what the root cause is, but at least with a text manipulation before the conversion the voice is usable.
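For what it's worth, the text manipulation can be as simple as replacing the offending punctuation with spaces before handing the string to the engine. A rough sketch in Python (the original project is Groovy/Grails, so treat this only as an illustration of the idea; the character set is just the marks listed above):
import re

def strip_problem_punctuation(text):
    # Replace the punctuation marks that trigger the robotic artefact with spaces
    return re.sub(r"[.,:;\[\]{}]", " ", text).strip()

sample = "Baumaßnahmen im Mai und Oktober. Die Bahn möchte die Einschränkungen geringer halten."
clean = strip_problem_punctuation(sample)   # pass this to the TTS call instead of the raw text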

Controlling lights using MIDI inputs

I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and now I want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the shell object and then fed an array of parameters into it via a JavaScript file using the Hue API. There is a lag time of 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

var bridge = "192.168.0.100";   // IP address of the Hue bridge
var hash = "newdeveloper";      // API username registered on the bridge
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

// Called when a list arrives at the inlet: bulb, hue, brightness, saturation, transition time
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT','http://'+bridge+'/api/'+hash+'/lights/'+bulb+'/state', '"{\\\"on\\\":true,\\\"hue\\\":'+hcolor+', \\\"bri\\\":'+brt+',\\\"sat\\\":'+satn+',\\\"transitiontime\\\":'+tran+'}"');
}

// Shells out to curl via the outlet connected to the shell object
function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a restful http based api, like so: http://www.developers.meethue.com/documentation/core-concepts, using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/, scroll down to my post from APRIL 11, 2014 | 3:42 AM.
How to change the bri/sat of your lights is explained in the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, and a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
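As a concrete illustration of that state call and the pacing, here is a rough sketch in Python rather than Max (the bridge IP and username are placeholders, and the requests library is assumed to be available):
import time
import requests

BRIDGE = "192.168.0.100"   # placeholder: your bridge's IP address
USER = "newdeveloper"      # placeholder: a valid API username

def set_light(bulb, hue, bri, sat, transition=1):
    # PUT the new state to /api/<username>/lights/<id>/state
    url = "http://{}/api/{}/lights/{}/state".format(BRIDGE, USER, bulb)
    body = {"on": True, "hue": hue, "bri": bri, "sat": sat, "transitiontime": transition}
    requests.put(url, json=body)
    time.sleep(0.1)        # ~100 ms gap so the bridge isn't flooded

velocity = 100                                        # e.g. a MIDI note velocity (0-127)
set_light(1, 10000, int(velocity / 127 * 254), 250)   # map it to brightness (0-254)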
Are you interested in finding out the details of how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a js object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want using the scale object. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!

Need a tool for video processing

I have a 2 GB MPEG file with people running, jogging, walking, etc. in it. I will use it in an image classification project, but I need to segment the video per person and per action.
For example, there are 25 people in the video who repeat these actions in order:
1st person
-runs
-walks
2nd person
-runs
-walks
and goes on....
What I want is to have 2 different mpeg files for each person,
such as:
firstperson_runs.mpeg
firstperson_waves.mpeg
So I need a tool to split the big file into these files. Splitting shall be based on time,
such as:
pick t1: start of action
pick t2: end of action
create a new video from the big file for the interval t1 to t2
Of course I will select the time intervals for each video.
OS: WinXP Pro
If it can be done with MATLAB, can you describe it?
Any help?
I imagine there are a number of tools available to do this without MATLAB, but if you really want to use MATLAB I would check out these submissions on The MathWorks File Exchange:
Gerald Dalley's videoIO Toolbox for Matlab
Micah Richert's mmread
David Foti's mpgread and mpgwrite
EDIT:
As mentioned by M456, you can also use the built-in function MMREADER for creating a multimedia reader object for your movie file (and subsequently reading selected movie frames from it with the READ method). However, I don't know which version of MATLAB this function was introduced in. It is in versions 7.7 and 7.8 (R2008b and R2009a, respectively), but it is not in version 7.1.
MATLAB can do such video split operations. There are two built-in functions (aviread and mmreader) for reading video files. Both will create objects which contain the individual frames of the video. You can save these as separate frames or make a new video out of them by using avifile.
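If MATLAB is not a hard requirement, the same time-based cuts can also be scripted around a command-line tool such as ffmpeg. A rough Python sketch (assumes ffmpeg is installed and on the PATH; file names and times are placeholders):
import subprocess

def cut_clip(src, start, end, dst):
    # Copy the stream between start and end (in seconds) into a new file without re-encoding
    subprocess.run(["ffmpeg", "-i", src, "-ss", str(start), "-to", str(end),
                    "-c", "copy", dst], check=True)

cut_clip("people.mpg", 12.0, 20.5, "firstperson_runs.mpg")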