I'm struggling to find a simple way to read data from arecord, and then after some processing send it to aplay using Julia. I've figured out how to use pipelines to directly send data over:
run(pipeline(`arecord -d 3`, `aplay`)) # Same as: arecord -d 3 | aplay
I've also figured out how to obtain data:
data = read(`arecord -d 3`)
However, the simple task of writing data to aplay has eluded me, which got me wondering how, in general, one would go about taking some data d and feeding it into some ::Cmd (or at least converting a ::Cmd into an ::IOStream) in Julia. Also, what would the differences be between a one-off stream (like the read above) and a continuous stream (one that only stops once told to close)?
Thanks in advance for your help.
EDIT: arecord and aplay are standard Linux terminal commands to record and play audio. arecord -d 3 generates a simple vector of 8-bit values, sampled at 8 kHz.
To be clear, I'm asking:
What is the standard way of reading data from a ::Cmd as a continuous data stream into a vector (e.g. reading from a never-ending file)?
What is the standard way of writing a vector of data into a ::Cmd, either as a one-off write or as a continuous stream of data (e.g. writing to a file once versus continually appending to it)?
This is NOT file-specific, because writing to aplay, an ordinary ::Cmd that works in the pipeline example above, does not work when I simply try to pass it some data using either pipeline or write (or at least I have been unsuccessful in doing so).
After reading and processing the data, try:
open(`aplay`, "w", stdout) do io
    write(io, data)
end
(On Julia 0.6 and earlier, stdout is spelled STDOUT.)
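For comparison, the same two patterns (a one-off write and a continuous stream into a command's stdin) can be sketched in Python with subprocess. This is a sketch, not the Julia answer itself: here `cat` stands in for `aplay` so it runs anywhere, and `data` is an arbitrary byte buffer; on Linux you would swap in the aplay command line instead.

```python
import subprocess

data = bytes(range(256)) * 4  # stand-in for recorded 8-bit samples

# One-off write: hand the whole buffer to the process's stdin, then close it.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(data)   # writes data, closes stdin, waits for exit
assert out == data

# Continuous stream: keep stdin open and write chunks as they are produced;
# the process only sees end-of-stream when stdin is explicitly closed.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
for i in range(0, len(data), 64):
    proc.stdin.write(data[i:i + 64])
proc.stdin.close()                # this is what finally stops the reader
out = proc.stdout.read()
proc.wait()
assert out == data
```

The distinction between the two modes is exactly when stdin is closed: communicate closes it immediately after one write, while the streaming loop leaves it open until you decide to stop.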
I am trying to use the MATLAB scripts shipped with Dymola to post-process the output results of Dymola. But in some cases the output data in the .mat file only has 2 elements; how can I get the data between 10 s and 100 s in such cases?
It's a parameter, or a variable that does not depend on time, so it is stored in a compact way. I understand the mechanism, but it is not user-friendly when post-processing the data in MATLAB; I have to hunt for the "wrong"-dimensional data. How could I fix this issue?
I recommend creating some simple logic that looks at the size of the variable and then automatically puts it into some dictionary, list, etc. From there you can manipulate the variable. I know you are asking for Matlab but here is a Python solution that I have used which may help you get started:
import numpy as np  # r is the result-reader object loaded from the Dymola .mat file

varNames_param_base = []
varNames_var_base = []
for val in r.varNames():
    # Compactly stored parameters come back with 4 values; time-varying
    # signals are longer.
    if np.size(r.values(val)) == 4:
        varNames_param_base.append(val)
    else:
        varNames_var_base.append(val)
I used those lines in this file.
In the example, r.varNames() is a list of all the variable names (i.e., strings) read from the resulting Dymola .mat file, and r.values gets the value of the variable name currently being used in the loop (i.e., val).
You may also consider converting your result file to SDF (a simple HDF5 representation), because that format does not use any clever storage options (if I remember correctly).
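To make the classification logic above self-contained and testable, here is a sketch where a plain dict stands in for the result-reader object `r` (the names and values are made up for illustration; only the size-based split mirrors the snippet above):

```python
import numpy as np

# Stand-in for the Dymola result reader: parameter-like entries are stored
# compactly as 4-element arrays, time-varying signals as longer ones.
results = {
    "body.m":   np.array([0.0, 1.0, 100.0, 1.0]),       # parameter-like
    "body.v":   np.linspace(0.0, 1.0, 500),             # time-varying
    "spring.c": np.array([0.0, 1000.0, 100.0, 1000.0]), # parameter-like
    "body.s":   np.linspace(0.0, 2.0, 500),             # time-varying
}

param_names, var_names = [], []
for name, values in results.items():
    # Size 4 marks the compact parameter storage described above.
    (param_names if np.size(values) == 4 else var_names).append(name)

print(sorted(param_names))  # parameter-like entries
print(sorted(var_names))    # time-varying entries
```

Once the names are split this way, you can slice only the time-varying arrays by their time column and skip the compact entries entirely.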
Given a pcap file, I need to get the payloads of packets in MATLAB. I read that pcap2matlab does this, but I couldn't understand the documentation properly. Can anyone help me in this regard? If you have used this function, please explain both capture mode and read mode.
Reading the documentation on pcap2matlab, you can either capture information directly from your network interface, or use it in read mode, where you read *.pcap files generated by a network logging utility such as Wireshark.
In read mode it looks like you can pass pcap2matlab a filename directly; without any other parameters it should read through to EOF.
From the docs.
% 2. A filename string that identifies the pcap file to read. Setting this input argument
% to a filename string will automatically set the function to work in read mode.
pcap2matlab('', '', 'filename.pcap')
Streaming execute, -11!, does not work on named pipes, so an obvious solution of redirecting the gzip -cd output to a named pipe and passing it to -11! does not work.
-11! accepts a compressed file and streams it so long as it was compressed with -19! (using 2 as the compression algorithm parameter, which is gzip).
The only difference between a normal gzipped file and a kdb compressed one is a few bytes at the beginning of the file.
EDIT (see comment) Thanks, this isn't true - the bytes are different at the end of the file
So a possible solution is to prepend your gzipped files (if they weren't produced by -19!) with the appropriate byte array first.
For anyone using kdb v3.4+ streaming execution for named pipes was introduced as the function .Q.fps.
Here is a simple example of .Q.fps in action. First create the named pipe and write to it from the command line (the write blocks until a reader opens the pipe, hence the trailing &):
mkfifo test.pipe
echo "aa" > test.pipe &
Next in a q session:
q).Q.fps[0N!]`:test.pipe
,"aa"
Here 0N! displays each chunk read from the pipe and returns it unchanged, so it serves as a minimal handler function.
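The full flow from the original question (decompressing a gzipped file through a named pipe into a streaming reader) can be sketched as a shell script. This is a sketch under assumptions: it needs a POSIX shell with mkfifo and gzip, and `cat` stands in for the q session so it runs without kdb+ (in practice the reader would be q executing .Q.fps on the pipe):

```shell
set -e
tmp=$(mktemp -d)
printf 'aa\nbb\n' | gzip -c > "$tmp/data.gz"

mkfifo "$tmp/test.pipe"                        # a real fifo, not a regular file
gzip -cd "$tmp/data.gz" > "$tmp/test.pipe" &   # writer blocks until a reader opens
out_content=$(cat "$tmp/test.pipe")            # stand-in reader (.Q.fps in practice)
wait
rm -rf "$tmp"
echo "$out_content"
```

The key point is the ordering: the gzip writer is backgrounded because opening a fifo for writing blocks until something opens it for reading.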
I am interested in opening a capture file in Wireshark and then exporting the data in "C arrays" format. [Wireshark provides that option in its GUI; one can do it via "File -> Export -> as C arrays file" in the main menu.] My question is: how can I do this in Perl? Can someone help me with a script for this?
I would like to parse each and every packet of the Wireshark capture, so I thought I would first convert each packet to an array and then parse it. Do you have any suggestions on this? My capture consists entirely of IEEE 802.11 frames.
If you want to do all the parsing yourself, i.e. look at the raw packet data, I would suggest writing your own program using libpcap to read pcap-format capture files (on UN*X, libpcap 1.1.0 and later can also read pcap-ng-format capture files, which is what Wireshark 1.8.0 and later write by default). No need to write stuff out as C arrays.
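To illustrate why no C-array export is needed, here is a minimal sketch of reading the classic pcap format directly (Python rather than Perl, and classic pcap only, not pcapng; the `read_pcap` helper and the synthetic capture at the bottom are mine, not part of any library):

```python
import struct

def read_pcap(raw):
    """Parse a classic (non-pcapng) pcap byte string into raw link-layer frames."""
    magic = struct.unpack("<I", raw[:4])[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"
    # 24-byte global header: magic, version, thiszone, sigfigs, snaplen, linktype
    _, _, _, _, _, _, linktype = struct.unpack(endian + "IHHiIII", raw[:24])
    frames, off = [], 24
    while off + 16 <= len(raw):
        # 16-byte record header: ts_sec, ts_usec, captured length, original length
        _, _, incl_len, _ = struct.unpack(endian + "IIII", raw[off:off + 16])
        off += 16
        frames.append(raw[off:off + incl_len])   # raw link-layer frame bytes
        off += incl_len
    return linktype, frames

# Tiny synthetic capture: one 5-byte frame, linktype 105 (IEEE 802.11).
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 105)
rec = struct.pack("<IIII", 0, 0, 5, 5) + b"hello"
linktype, frames = read_pcap(hdr + rec)
print(linktype, frames)   # 105 [b'hello']
```

Each element of frames is the same raw byte string Wireshark would export as a C array, so you can slice the 802.11 headers and payloads out of it directly.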
I would like to do some kind of text-to-speech with only numbers. I can record 10 wav files, but how can I combine them programmatically?
For instance, the user types 1234, and the text-to-speech combines 1.wav with 2.wav, 3.wav and 4.wav to produce 1234.wav that plays "one two three four".
1) create a new destination sample buffer (you will want to know the sizes).
2) read the samples (e.g. using AudioFile and ExtAudioFile APIs) and write them in sequence to the buffer. You may want to add silence between the files.
It will help if your files are all the same bit depth (the destination bit depth - 16 should be fine) and sample rate.
Alternatively, if you have fixed, known sample rates and bit depths for all files, you could just save them as raw sample data and be done in much less time, because you could simply append the data as-is without writing all the extra audio-file-reading code.
The open source project wavtools provides a good reference for this sort of work, if you're ok with perl. Otherwise there is a similar question with some java examples.
The simplest common .wav (RIFF) file format just has a 44-byte header in front of raw PCM samples. So, for these simple types of .wav files, you could just try reading the files as raw bytes, removing the 44-byte header from all but the first file, and concatenating the samples. Or just play the concatenated samples directly using the Audio Queue API.
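The header-plus-raw-samples structure described above is what the Python standard library's wave module handles for you, which makes the concatenation a few lines. A sketch, assuming all inputs share the same sample rate, sample width, and channel count (the `concat_wavs` helper and the synthetic silent files are illustrative, not from the answer above):

```python
import os
import tempfile
import wave

def concat_wavs(inputs, output):
    """Concatenate .wav files that share sample rate, sample width, and channels."""
    with wave.open(output, "wb") as out:
        for i, path in enumerate(inputs):
            with wave.open(path, "rb") as w:
                if i == 0:
                    out.setparams(w.getparams())   # copy format from the first file
                out.writeframes(w.readframes(w.getnframes()))

# Demo with synthetic files: 8 kHz mono 16-bit, 100 frames of silence each.
tmp = tempfile.mkdtemp()
paths = [os.path.join(tmp, name + ".wav") for name in ("1", "2")]
for p in paths:
    with wave.open(p, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(b"\x00\x00" * 100)

out_path = os.path.join(tmp, "combined.wav")
concat_wavs(paths, out_path)
with wave.open(out_path, "rb") as w:
    print(w.getnframes())   # 200
```

For the digits use case, you would map each typed digit to its recorded file ("1" -> 1.wav, and so on) and pass that list to the helper; inserting a short run of zero bytes between files adds the silence mentioned in step 2.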