Fake microphone input for MATLAB

I'm working on a MATLAB project that records sound and processes it. I'm completely fed up with playing the same sounds over and over again during development.
Is there some way to "fake" the microphone, i.e. play a file on my computer and capture it in MATLAB with the same code I use to record from my mic?
Thanks for any help.
PS: I'm on Mac OS X Yosemite

It depends how you've implemented your code - if you post the relevant sections you'll get more specific suggestions - but in general you may be able to replace the part of the code that captures input from the microphone with a call that reads a file from disk. wavread was the classic function for this (http://uk.mathworks.com/help/matlab/ref/wavread.html); in current MATLAB releases it has been replaced by audioread.
If you're doing realtime stuff then this may or may not work; if not, you could play the sound file in a third-party application and use something to internally rewire the output back to the input. Soundflower is one OS X tool that can do this; there are others.

There are more pieces of the puzzle to address.
If an asynchronous mode of work is enough
If you just wish to work in silence and the MATLAB process under development does not require synchronisation with the sound replay ( i.e. it does not depend on where the sound sample starts and just needs "some" sound-related data to be present once the MATLAB code gets ready ), then the easiest way is to plug a jack connector into the MIC input, have the sound replayed in an endless loop by an external device ( an MP3 player et al ) and enjoy the silence.
In case a synchronous mode of operations is needed
In case your MATLAB code requires synchronised processing - aligned with the start of the sound sample and terminating the replay once the MATLAB code has finished - then you need something a bit more complex than just a rewired ( be it physically or virtually ) sound delivery.
There are ways to let MATLAB communicate with external processes and thus trigger synchronised events on the remote side ( sending a message like HeyPythonProcess.startTheSoundREPLAY() ). This can make the whole sound processing both silent ( for example, Python audio services can move the sound bytes through the respective audio-mixer paths under your full, i.e. programmable, control ) and fully synchronous ( via an event-driven messaging layer, as ZeroMQ allows ), thus keeping the process under control as needed.
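As a minimal sketch of that remote side - assuming pyzmq is installed, and with startTheSoundREPLAY() / stopTheSoundREPLAY() as hypothetical playback helpers you would supply yourself - the Python process could serve replay commands like this:

import zmq

ctx  = zmq.Context()
sock = ctx.socket( zmq.REP )                     # REP side of a REQ/REP pattern
sock.bind( "tcp://127.0.0.1:5555" )              # illustrative endpoint

while True:
    msg = sock.recv_string()                     # blocks until MATLAB sends a request
    if msg == "START_REPLAY":
        startTheSoundREPLAY()                    # hypothetical playback helper
        sock.send_string( "REPLAY_STARTED" )
    elif msg == "STOP_REPLAY":
        stopTheSoundREPLAY()                     # hypothetical playback helper
        sock.send_string( "REPLAY_STOPPED" )

The MATLAB side then only has to send "START_REPLAY" over its own ZeroMQ socket right before it begins recording, which yields the synchronisation described above.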
Does this sound complicated? Yes, it is complicated, but it is both realistic and possible. MATLAB allows inter-process communications / messaging in a fully autonomous multi-agent manner ( no subordination - indeed a fully autonomous mode of work ), and that gives you immense power for the future, perhaps once you enter distributed cloud/grid processing projects.
Use a side-effect of a bridged mode of MATLAB operations
There is also another synchronous way: use the python-MATLAB bridge, where the Python side "enforces" synchronicity ( controls the experiment ) and starts / stops the MATLAB part of the work ( thus, as a side-effect, aligning the replay with the MATLAB processing ):
from pymatbridge import Matlab as aMATLAB        # get ready
'''
mlab = aMATLAB()                                 # a class instance ( default MATLAB location )
'''
mlab = aMATLAB( matlab = '...aMatlabCODE' )      # a class instance ( initialised, e.g. with a path to the MATLAB executable )
mlab.start()                                     # returns True once connected
#
# start playing the sound here
# ... and make the MATLAB-beyond-the-bridge process it
# ...
results = mlab.run_code( 'a=1;' )                # hand code / vars over to the MATLAB side


Alternative to MQL5

I am starting with Expert Advisors on the MetaTrader Terminal software and I have many algorithms to use with it. These algorithms were developed in MATLAB using its powerful built-in functions ( e.g. svd, pinv, fft ).
To test my algorithms I have some alternatives:
1. Write all the algorithms in MQL5.
2. Write the algorithms in C++ and then make a DLL to be called by MQL5.
3. Write the algorithms in Python, embed them in C and then make a DLL.
4. Convert the MATLAB source code to C and then make a DLL.
About the problems:
1. Impracticable, because MQL5 does not have the built-in functions, so I would have to implement them one by one by hand.
2. I have not tried this yet, but I think it would take a long time to implement the algorithms ( I wrote some algorithms in C, but it took a good while and the result wasn't as fast as MATLAB ).
3. I am getting a lot of errors when compiling to a DLL, but if I compile to an executable there is no error ( this would be a good alternative, since converting MATLAB to Python is quite simple and fast ).
4. I am trying this now, but I think there is too much work to do.
I researched other pieces of software similar to the MetaTrader Terminal, but I didn't find a good one.
I would like to know if there is a simpler ( and fast ) way to embed another language into MQL5, or some alternative for my issue.
Thanks.
Yes, there is an alternative ... 5 ) Go Distributed:
Having had a similar motivation for using non-MQL4 code for fast & complex mathematics in external quantitative models for FX trading, I started to use both { MATLAB | python | ... } and MetaTrader Terminal environments in an interconnected form of a heterogeneous distributed processing system.
MQL4 part is responsible for:
anAsyncFxMarketEventFLOW processing
aZmqInteractionFRAMEWORK setup and participation in message-patterns handling
anFxTradeManagementPOLICY processing
anFxTradeDetectorPolicyREQUESTOR sending analysis RQST-s to remote AI/ML-predictor
anFxTradeEntryPolicyEXECUTOR processing upon remote node(s) indication(s)
{ MATLAB | python | ... } part is responsible for:
aZmqInteractionFRAMEWORK setup and participation in message-patterns handling
anFxTradeDetectorPolicyPROCESSOR receiving & processing analysis RQST-s from the remote { MQL4 | ... } -requestor
anFxTradeEntryPolicyREQUESTOR sending trade entry requests to remote { MQL4 | other-platform | ... }-market-interfacing-node(s)
Why start thinking in a distributed way?
The core advantage is in re-using the strengths of MATLAB and other COTS AI/ML packages, without any need to reverse-engineer the still-creeping MQL4 interfacing options ( yes, over the last few years the DLL interfaces took several dirty hits from newer updates - strings ceased to be strings and became a struct (!!!) etc. - many man*years of pain with a code base under maintenance, so there is some unforgettable experience of what ought to be avoided ... ).
The next advantage is the ability to add failure resilience. A distributed system can work in a ( 1 + N ) protected shadowing setup.
The next advantage is the ability to increase performance. A distributed system can provide a pool of processors - be it in a { SEQ | PAR } mode of operations ( a pipeline process or a parallel-form process execution ).
MATLAB node just joins:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% MATLAB script to set up zeromq-matlab
clear all;
if ~ispc
    s1 = zmq( 'subscribe', 'ipc', 'MATLAB' );         %% using IPC transport on <localhost>
else
    disp( '0MQ IPC transport is not supported on Windows.' )
    disp( 'Setting up a TCP transport class instead' ) %% using TCP transport on <localhost>
    s1 = zmq( 'subscribe', 'tcp', 'localhost', 5555 );
end
recv_data1 = [];                                       %% set up a RECV buffer
This said, one can preserve the strengths of each side and avoid any duplication of already implemented, native, performance-tuned libraries, while the distributed mode of operations also adds some brand-new potential benefits to the Expert Advisor modus operandi:
one may add a remote keyboard interface to an EA automation and use custom-specific commands ( a CLI )
a fast, non-blocking, distributed remote logging ( a sketch follows below )
GPU / GPU-grid computing used from inside the MetaTrader Terminal
you may like to check other posts on extending the MetaTrader Terminal programming model
A Distributed System, on top of a Communication Framework:
MATLAB already has an available port of the ZeroMQ Communication Framework - the same framework the MetaTrader Terminal side uses, thanks to Austin CONRAD's wrapper ( though the MQH interfaces to a ver 2.1.11 DLL, the services needed work like a charm ) - so you are ready to use it straight away on each side, and these types of nodes are ready to join, in their respective roles, any form of truly heterogeneous distributed system one can design.
My recent R&D uses several instances of python-side processes to operate an AI/ML-predictor, r/KBD, r/RealTimeANALYSER and a centralised r/LOG service, all actively used over many PUSH/PULL + XREQ/XREP + PUB/SUB Scalable Formal Communication Patterns by several instances of MetaTrader Terminal-s via their respective MQL4 code.
MATLAB functions could be re-used in the same way.
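As a minimal sketch of one such python-side service - the centralised r/LOG sink mentioned above - assuming pyzmq is installed; the port and the one-record-per-message format are illustrative only:

import zmq

# centralised r/LOG service: remote MQL4 / MATLAB nodes PUSH log records,
# this node PULL-s and persists them, so senders do not block on disk I/O
ctx      = zmq.Context()
log_sink = ctx.socket( zmq.PULL )
log_sink.bind( "tcp://*:5560" )                  # illustrative port

with open( "distributed.log", "a" ) as f:
    while True:
        f.write( log_sink.recv_string() + "\n" ) # one log record per message
        f.flush()

The PUSH side just connects to the same endpoint and sends; since PUSH sends are buffered by ZeroMQ, the trading node does not wait on the logger under normal load.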

Specman beginner's questions

I am new to Specman.
I have a couple of questions:
I am trying to use the agent methodology. After writing the env, agent, bfm etc. - what is the recommended way to create the clock and reset? By writing a tb.v ( instantiating the top Verilog module ), or is there a better way?
How do I link the Specman env file to the tb? Or is it enough to link the ports of the different Specman files to the Verilog files with a signal map?
Most important: how do I run the environment with irun?
I was thinking of creating a file listing all the Verilog files, e.g. veri.lst,
and a Specman top that imports all the Specman files, e.g. spec_top.e:
irun -access +wrc veri.lst spec_top.e
Would that be OK? Should I mention the top-level module in the command?
Should I put the test name in the command in a special way?
Thanks a lot for all the help!!
Cadence recommends driving clocks from inside an HDL testbench ( i.e. written in Verilog, in your case ). This is because every time the simulator yields control to Specman to execute, it wastes processor time; you want to minimize the number of such switches as much as possible.
Linking the env to the TB is done by connecting the Verilog signals of interest to the corresponding Specman ports (using hdl_path()).
W.r.t. running it, there are 2 things to keep in mind: e code can be executed in compiled or in interpreted mode, and compiled code is faster but can't be debugged. You have to tell irun what you want compiled and what you want interpreted:
irun -f veri.lst \
compiled_top.e \
-snload interpreted_top.e
What you typically compile are files which you don't expect to change (verification components that you buy or reuse from other projects, for example). The rest of your files you'd load interpreted to be able to easily debug.
Adding to Tudor's great answer -
First - yes, connecting the e TB to the DUT is done using hdl_path() and binding the ports to external. You would usually have one unit designated for the interface, so configuring it would look something like this:
extend signal_map {
    // name of the instance of the Verilog module you interface
    keep hdl_path() == "sub_system_a";
    keep bind( sig_clock, external );
    // name of the clock signal
    keep sig_clock.hdl_path == "clk";
};
Please take a look in the IES release at the UVM examples. They are under
specman/uvm/uvm_examples
For example, check out specman/uvm/uvm_examples/xserial/e/xserial_collector_h.e.
And about the clock -
Connecting a clock in the e TB to the design is very simple. Something like this -
unit synch {
    sig_clock : in simple_port of bit is instance;
    keep bind( sig_clock, external );
    event clock is rise( sig_clock$ ) @sim;
    // the event can also be defined on fall or change
};
Now the clock event can be used as the sampling event for TCMs and Temporals. This is a simple, fast way of using the clock in the TB.
Another way to use the clock is more "acceleration ready". In this methodology you would implement a clock agent in Verilog, and it would provide "clock services" to the TB. Following it, the TB has no "wait cycles" of its own; instead, it calls the clock agent's wait_cycles() task and waits for an indication that the required number of clock cycles has passed.
This is a rather new methodology, oriented towards being Acceleration Ready.
It will be demonstrated in the UVM examples in the next IES release, 15.1.
/efrat

OS system calls in x86

While working on an educational, simplistic RISC processor, I was wondering how system calls work when implementing my software-interrupt function. For example, hypothetically, let's say our program calls sys_end, which ends the current process. I know this would go through a vector table and then to the code that ends the process.
My question is: does the code that ends the process run in supervisor mode or in user mode? Nowhere I look seems to specify this. I'm assuming that if it ran in normal user mode, that could pose a very significant problem, as a user-mode process could do something evil like:
for (i = 0; i < 10000; i++) {
    int sys_fork    // x86 INT instruction - creates a child process
}
which could be very bad. I thought the OS would have some say in how many times a process could replicate itself, not to mention what other harmful things a process could do by changing the code in the system call itself.
System calls run in supervisor mode for the duration of the call. Supervisor mode is necessary for accessing hardware ( the screen, the keyboard ) and for keeping user processes isolated from each other.
There are ( or can be configured ) limits on the amount of CPU time, the number of processes, etc. that a user process may use or request, which can offer some protection against the kind of runaway program you describe.
But the default Linux configuration allows 10k processes to be created in a tight loop; I've done it myself ( both intentionally and accidentally ).
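As an illustration of those configurable limits - a minimal sketch in Python, assuming a Linux host where RLIMIT_NPROC is available:

import resource

# query the per-user limit on the number of processes ( Linux: RLIMIT_NPROC )
soft, hard = resource.getrlimit( resource.RLIMIT_NPROC )
print( "max processes: soft =", soft, "hard =", hard )

# an unprivileged process may lower ( but never raise ) its own limits,
# e.g. to contain a runaway fork loop like the one above
resource.setrlimit( resource.RLIMIT_NPROC, ( min( soft, 100 ), hard ) )

Once the soft limit is hit, further fork requests fail with EAGAIN instead of taking the machine down.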

VHDL simulation in real time?

I've written some code that has an RTC component in it. It's a bit difficult to do proper emulation of the code because the clock speed is set to 50 MHz, so seeing any 'real time' events take place would take forever. I did try simulating 2 seconds in ModelSim, but it ended up crashing.
What would be a better way to do it if I don't have an evaluation board to burn and test with a scope?
If you could provide a more specific example of exactly what you're trying to test and what is chewing up your simulation cycles, that would be helpful.
In general, if you have a lot of code that you need to test in simulation, it helps to create testbenches for the sub-modules and test them first. Often, if you simulate at the top ( chip ) level and try to stimulate sub-modules that are buried deep in the hierarchy of a design, it takes many clock ticks just to get data into and out of a sub-module. If you simulate the sub-module directly, you have direct access to the module's I/O and can test the things you want to test in fewer cycles than if you try to reach it from the top level.
If you are trying to test logic that has very deep FIFOs you need to fill, or a specific count of a large counter you're trying to hit, you can either add logic to your code that helps create those conditions in fewer cycles ( like a load instruction on the counter ) or force the values of internal signals of your design from the testbench itself.
These are just a couple of general ideas. Again, if you provide more detail about what it is you're simulating there are probably people on this forum that can provide help that is more specific to your problem.
As already mentioned by Ciano, if you provided more information about your design we would be able to give a more accurate answer. However, there are several tips that hardware designers should follow, especially for complex system simulation. Some of them ( those I use most ) are listed below:
Hierarchical simulation ( as Ciano already posted ): instead of simulating the entire system, try to simulate smaller sets of modules.
Selective configuration: most systems require some initialization processes, such as reset initialization time, external chip register initialization, etc. Usually a few of them are not required for simulation purposes, and you may use a global constant to skip those stages when simulating, like:
constant SIMULATION_ENABLE : STD_LOGIC := '1';
...
-- in the reset condition:
if SIMULATION_ENABLE = '1' then
    currentState <= state_executeSystem;    -- skip the initialization procedures
else
    currentState <= state_initializeSystem;
end if;
Be careful not to modify your code directly ( hard-coded changes ). As the system grows, it becomes impossible to remember which parts of it you modified for simulation. Use constants instead, as in the example above, to configure modules for a simulation profile.
Scaled time/size constants: instead of using the real values for times and sizes everywhere ( such as time events, memory sizes, register file sizes, etc. ), use scaled values whenever possible. For example, if you are building an RTC that raises an interrupt to the main system every 60 seconds, scale your constants ( if possible ) so that it raises interrupts about every 6 ms or 60 us. Of course, the scale choice depends on your system. In my designs I use two global configuration files, one for simulation and the other for synthesis; most constant values are scaled down to lower the simulation time.
Increase the abstraction: for bigger modules it might be useful to create a simplified, more abstract module acting as a model of your module. For example, if you have a processor that has this RTC ( the one you mentioned ) as a peripheral, you may create a simplified model of it. If you only need its interrupt, the model may be as simple as:
type time_array is array ( natural range <> ) of time;
constant INTERRUPT_EVENTS : time_array( 1 to 2 ) := (
    32 ns,
    100 ms
);

process
begin
    for i in INTERRUPT_EVENTS'range loop
        rtcInterrupt <= '0';
        wait for INTERRUPT_EVENTS( i );
        rtcInterrupt <= '1';
        wait until clk'event and clk = '1';
    end loop;
    wait;
end process;

Is there a way to copy files in a non-blocking way in Scala?

I have checked java.nio.file.Files.copy but that blocks a thread until the copy is done. Are there any libraries that allow one to copy a file in a non-blocking way? I need to perform many of these operations simultaneously and cannot afford to have so many threads blocked.
While I could write something myself using non-blocking streams, I would rather use something tried and tested that would guarantee a correct copy every time (or detect if something went wrong).
Check this: Iterate over lines in a file in parallel (Scala)?
import scala.io.Source

val chunkSize = 128 * 1024
val iterator = Source.fromFile(path).getLines.grouped(chunkSize)
iterator.foreach { lines =>
  lines.par.foreach { line => process(line) }
}
Reading ( copying ) files by chunks in parallel. In this case "par" is used, so it is quite non-blocking in the terms / scope of the processors ( cores ).
But you may follow the same idea of chunks, for example using Akka / Future / Promises, to be non-blocking in even wider scopes.
You may tune your chunk size depending on your performance characteristics, the level of system load, etc.
One more link that explains a possible way to read / write data from a ( property ) file in parallel using Akka Actors. This is not quite what you may want, but it may give an idea.
The idea: you may build your own non-blocking way of reading / copying files.
--
And about your statement "While I could write something myself using non-blocking streams":
I would remind you that each OS / file system ( FS ) may have its own view of what and where to block. For example, Windows blocks a file ( with a write block, at least ) while one thread writes to it; on Linux this is configurable. So if you want to stick to something stable, I would suggest thinking it through and going with your own wrapper ( over the FS ) solution based on events, chunks and states.
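One simple shape of such a wrapper - sketched in Python for brevity, since the idea is language-agnostic; the pool size and file names are purely illustrative - is to hand each copy to a small worker pool and get a future back, which is essentially what the Future-wrapped approach in the next answer does:

import shutil
from concurrent.futures import ThreadPoolExecutor

# a small pool of worker threads: each copy runs off the caller's thread,
# and the caller only blocks if it explicitly waits on the returned future
pool = ThreadPoolExecutor( max_workers = 4 )

def copy_async( src, dst ):
    return pool.submit( shutil.copyfile, src, dst )

fut = copy_async( "in.dat", "out.dat" )          # returns immediately
# ... do other work ...
fut.result()                                     # rendezvous: wait here / re-raise errors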
I have used the Process class, issuing an operating system command to copy the file. Of course, one has to check which OS the application is running under and issue the appropriate command, but this allows for fast and asynchronous copies.
As Marius rightly mentions in the comments, Scala's Process blocks, so I run it wrapped in a Future.
Java 8's Process introduces the function isAlive(). A non-blocking alternative would be to use Java 8 processes and a scheduler that polls at regular intervals to see whether the process has finished. However, I did not need to go to that extent.
Have you checked out the async stuff in scala-io?
http://jesseeichar.github.io/scala-io-doc/0.4.2/index.html#!/core/async%20read%20write