I am using Connected Components Workbench V12 with a Micro850 PLC
I am trying to take an input from a barcode scanner and convert the scanned integer input to an 8-bit binary number, so I can use each bit as a Boolean to trigger outputs on the PLC.
I am expecting the FOR loop to get the remainder of input/2 and repeat for the remaining bits.
For example, when I input 21 I expect 0,0,0,1,0,1,0,1, but I only get a 1 on the 8th bit.
Here's a screenshot of the Variable Monitor.
out_Complete := in_Enable;
IF in_Enable THEN
    B[8] := ANY_TO_DINT(in_Integer_Input); // convert input to DINT
    Ba[8] := ANY_TO_DINT(in_Integer_Input);
    FOR X := 1 TO 8 BY 1 DO
        B[X] := MOD(Ba[X], 2);  // get remainder for bit [X]
        Ba[X] := B[X] / 2;      // divide by 2
        OutputBit[X] := ANY_TO_BOOL(B[X]); // set output bit
    END_FOR;
END_IF;
Ba[X] is always zero except at index 8, because that is the only element you assign before the loop.
Another way to check the bits would be to use bit access, or bit shifting combined with AND, something like this:
FOR X := 0 TO 7 BY 1 DO
    B[X] := SHR(in_Integer_Input, X) AND 1;
END_FOR;
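If you want to keep the remainder-and-divide approach instead, a minimal sketch would hold the running quotient in a single working variable (here called Temp, which is not in your original code) rather than in the Ba array:

// Sketch: Temp is an assumed DINT working variable holding the quotient.
// B[1] receives the least significant bit.
Temp := ANY_TO_DINT(in_Integer_Input);
FOR X := 1 TO 8 BY 1 DO
    B[X] := MOD(Temp, 2);              // remainder = current least significant bit
    Temp := Temp / 2;                  // integer-divide the quotient, not the remainder
    OutputBit[X] := ANY_TO_BOOL(B[X]);
END_FOR;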
So this is my code for a MATLAB program that works like this:
I have a folder with 10 .mp3 songs ( /songs ) and a folder with test files ( /tests ).
Tests are just 10-second cutouts of songs from the /songs folder.
What I do is:
1) move to songs dir, and load songs into a list
cd(path_to_songs_dir)
songList = dir('*.mp3');
numSongs = numel(songList);
for i = 1:numSongs
    [y{i}, fs{i}] = audioread(songList(i).name);
end
2) load test file
[y_test, fs_test] = audioread(path_to_test_file);
3) create a gallery for the graphs
for t = 1:numSongs
    gallery{t} = y{t}(:,1);
end
4) use cross correlation to determine which song has the highest cross correlation value with my test file
for g = 1:numSongs
    [xc{g}, lagc{g}] = xcorr(gallery{g}, y_test(:,1), 'none');
    if g == 1
        [maxcorr, maxli] = max(xc{g});
        n_song = g;
    elseif max(xc{g}) > maxcorr
        [maxcorr, maxli] = max(xc{g});
        n_song = g;
    end
end
The program works just fine and is pretty good at recognizing songs, but it is excruciatingly slow. It may take up to an entire minute, and it only has to compare the test against 10 songs.
Any ideas on how to make it faster? Any kind of suggestion for improving the code is appreciated; this is one of my very first attempts to use MATLAB.
I need to collect voice pieces from a continuous audio stream. I need to later process the user's voice piece that has just been said (not for speech recognition). What I am focusing on is only the voice's segmentation based on its loudness.
If, after at least 1 second of silence, the voice becomes loud enough for a while and then falls silent again for at least 1 second, I say this is a sentence and the voice should be segmented here.
I just know I can get raw audio data from the AudioClip created by Microphone.Start(). I want to write some code like this:
void Start()
{
    audio = Microphone.Start(deviceName, true, 10, 16000);
}

void Update()
{
    audio.GetData(fdata, 0);
    for (int i = 0; i < fdata.Length; i++) {
        u16data[i] = Convert.ToUInt16(fdata[i] * 65535);
    }
    // ... Process u16data
}
But what I'm not sure about is:
Every frame, when I call audio.GetData(fdata, 0), do I get the latest 10 seconds of sound data if fdata is big enough, or a span shorter than 10 seconds if fdata is not big enough? Is that right?
fdata is a float array, and what I need is a 16 kHz, 16-bit PCM buffer. Is it right to convert the data like: u16data[i] = fdata[i] * 65535?
What is the right way to detect loud moments and silent moments in fdata?
No, you have to read starting at the current position within the AudioClip, using Microphone.GetPosition:
Get the position in samples of the recording.
and pass the obtained index to AudioClip.GetData:
Use the offsetSamples parameter to start the read from a specific position in the clip
fdata = new float[clip.samples * clip.channels];
var currentIndex = Microphone.GetPosition(null);
audio.GetData(fdata, currentIndex);
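Since the clip is recorded as a loop (the second argument to Microphone.Start above is true), a common follow-up pattern is to remember the position of the previous read and copy only the samples recorded since then. This is a minimal sketch; lastPosition and the chunk handling are illustrative, not prescribed by the Unity API:

int lastPosition = 0;

void Update()
{
    int currentPosition = Microphone.GetPosition(null);
    if (currentPosition == lastPosition) return;  // nothing new recorded yet

    // number of new samples, accounting for the ring-buffer wrap-around
    int newSamples = currentPosition - lastPosition;
    if (newSamples < 0) newSamples += audio.samples;

    var chunk = new float[newSamples * audio.channels];
    audio.GetData(chunk, lastPosition);           // read starting at the previous position
    lastPosition = currentPosition;

    // ... process chunk ...
}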
I don't understand what exactly you are converting this for. fdata will contain
floats ranging from -1.0f to 1.0f (AudioClip.GetData)
so if for some reason you need values between short.MinValue (= -32768) and short.MaxValue (= 32767), then yes, you can do that using
u16data[i] = Convert.ToUInt16(fdata[i] * short.MaxValue);
note however that Convert.ToUInt16(float):
value, rounded to the nearest 16-bit unsigned integer. If value is halfway between two whole numbers, the even number is returned; that is, 4.5 is converted to 4, and 5.5 is converted to 6.
you might rather want to use Mathf.RoundToInt first so that a value like 4.5 is also rounded up:
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt(fdata[i] * short.MaxValue));
Your naming, however, suggests that you are actually trying to get unsigned ushort (UInt16) values. For this you cannot have negative values! So you have to shift the float values up in order to map the range (-1.0f | 1.0f) to the range (0.0f | 1.0f) before multiplying by ushort.MaxValue (= 65535):
u16data[i] = Convert.ToUInt16(Mathf.RoundToInt((fdata[i] + 1) / 2 * ushort.MaxValue));
What you receive from AudioClip.GetData are the sample (amplitude) values of the audio track, between -1.0f and 1.0f.
so a "loud" moment would be where
Mathf.Abs(fdata[i]) >= aCertainLoudThreshold;
a "silent" moment would be where
Mathf.Abs(fdata[i]) <= aCertainSilentThreshold;
where aCertainSilentThreshold might e.g. be 0.2f and aCertainLoudThreshold might e.g. be 0.8f.
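Putting both thresholds together with the 1-second rule from the question, here is a minimal segmentation sketch. The constants and the ProcessSentence handler are assumptions for illustration, not part of the Unity API:

// Sketch: cut a "sentence" wherever a loud stretch is framed by
// at least 1 s of silence on both sides. loudThreshold, silentThreshold
// and ProcessSentence are hypothetical names, not Unity API.
const int sampleRate = 16000;        // matches Microphone.Start above
const int minSilence = sampleRate;   // 1 second worth of samples
const float loudThreshold = 0.8f;
const float silentThreshold = 0.2f;

int silentRun = minSilence;          // treat the stream start as silence
bool inSentence = false;
int sentenceStart = 0;

for (int i = 0; i < fdata.Length; i++)
{
    float level = Mathf.Abs(fdata[i]);

    if (!inSentence && level >= loudThreshold && silentRun >= minSilence)
    {
        inSentence = true;           // loud after >= 1 s of silence: sentence begins
        sentenceStart = i;
    }

    silentRun = (level <= silentThreshold) ? silentRun + 1 : 0;

    if (inSentence && silentRun >= minSilence)
    {
        inSentence = false;          // >= 1 s of silence again: sentence ends
        ProcessSentence(fdata, sentenceStart, i - minSilence); // hypothetical handler
    }
}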
My metadata is stored in an 8-bit unsigned dataset in an HDF5 file. After importing it to DM, it becomes a 2D image of dimension 1 × length. Each "pixel" stores the ASCII value of the corresponding character.
For further processing, I have to convert the ASCII array to a single string, and further to a TagGroup. Here is the naive (pixel-by-pixel) method I currently use:
String Img2Str (image img){
    Number dim1, dim2
    img.getsize(dim1, dim2)
    string out = ""
    for (number i = 0; i < dim1*dim2; i++)
        out += img.getpixel(0, i).chr()
    Return out
}
This pixel-wise operation is really quite slow! Is there a faster method to do this?
Yes, there is a better way. You really want to look into the chapter on raw-data streaming:
If you hold raw data in a "stream" object, you can read and write it in any form you like. So the solution to your problem is to:
Create a stream
Add the "image" to the stream (writing binary data)
Reset the stream position to the start
Read out the binary data as a string
This is the code:
{
    number sx = 10
    number sy = 10
    image textImg := IntegerImage( "Text", 1, 0, sx, sy )
    textImg = 97 + random()*26
    textImg.showimage()
    object stream = NewStreamFromBuffer( 0 )
    ImageWriteImageDataToStream( textImg, stream, 0 )
    stream.StreamSetPos( 0, 0 )
    string asString = StreamReadAsText( stream, 0, sx*sy )
    Result( "\n as string:\n\t" + asString )
}
Note that you could also create a stream linked to a file on disc and, provided you know the starting position in bytes, read from the file directly as well.
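For that file-on-disc variant, the pattern would look roughly like the sketch below; the path and byte offset are made up, and you should verify the file-stream commands (OpenFileForReading, NewStreamFromFileReference) against the scripting help of your GMS version:

// Sketch only: read a stretch of bytes as text from a known offset in a file.
// Path and offset are placeholder values.
string path = "C:\\data\\metadata.bin"
number fRef = OpenFileForReading( path )
object fStream = NewStreamFromFileReference( fRef, 1 )  // 1 = release file with stream
fStream.StreamSetPos( 0, 128 )                          // jump to byte 128 (example)
string meta = StreamReadAsText( fStream, 0, 100 )       // read 100 bytes as text
Result( "\n" + meta )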
I have a 1.6 GB CSV file that I need to feed into MATLAB. I will have to do this frequently, and I need it to run quickly. The file is of the form:
20111205 00:00.2 99.18 6 E
20111205 00:00.2 99.18 5 E
20111205 00:00.2 99.18 1 E
20111205 00:00.2 99.195 5 E
20111205 00:00.2 99.195 5 E
20111205 01:27.0 99.19 5 E
20111205 02:01.4 99.185 1 E
20111205 02:01.4 99.185 1 E
20111205 02:01.4 99.185 1 E
20111205 02:01.4 99.185 1 E
The code I have right now is the following:
tic;
format long g
fid = fopen('C:\Program Files\MATLAB\R2013a\EDU13.csv','r');
[c] = fscanf(fid, '%d,%d:%d.%d,%f,%d,%c');
fclose(fid);
c = reshape(c, 7, length(c)/7);
toc;
But this is far too slow. I would appreciate a method of getting this CSV file into MATLAB in the most efficient manner possible. Thank you!
Consider using a binary file format. Binary files are much smaller and don't need to be parsed from text by MATLAB, so they are much faster to read and write. They may also be more accurate (precision may be higher).
http://www.mathworks.com.au/help/matlab/ref/fread.html
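As a minimal illustration of the round trip (the names M and rows are placeholders for your own data):

% Sketch: write a numeric matrix as raw doubles, then read it back.
fid = fopen('data.bin', 'w');
fwrite(fid, M, 'double');                % M: your numeric matrix
fclose(fid);

fid = fopen('data.bin', 'r');
M2 = fread(fid, [rows, Inf], 'double');  % rows = size(M, 1)
fclose(fid);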
The recommended syntax is textscan (http://www.mathworks.com/help/matlab/ref/textscan.html)
Your code would look like this:
fid = fopen('C:\Program Files\MATLAB\R2013a\EDU13.csv','r');
c = textscan(fid, '%d,%d:%d.%d,%f,%d,%c');
fclose(fid);
You end up with a cell array... whether it's worth converting that to another shape really depends on how you want to access the data afterwards.
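For instance, with the format string above, each specifier becomes one cell, so individual columns are cheap to pull out (the column meanings are my reading of the sample data):

% Sketch: textscan returns one cell per format specifier.
price = c{5};           % the %f field
sz    = double(c{6});   % the last %d field, cast to double if needed
flag  = c{7};           % the %c field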
It is quite likely that this would be faster if you include a loop that lets you use a smaller, fixed amount of memory for much of the operation. One problem with reading large files is that you don't know ahead of time how big the result will be, which very likely means that MATLAB guesses the amount of memory it needs and frequently has to reallocate. That is a very slow operation: if it happens every 1 MB, say, then it copies 1 MB once, then 2 MB, then 3 MB, etc., so the cost is quadratic in the size of the array.
If instead you allocate a fixed amount of memory for the final result and process in smaller batches, you avoid all that overhead. I'm pretty sure it will be much faster, but you would have to experiment a bit with the block size. That would look something like this:
block = 1000;
Nlines = 35E6;
fid = fopen('C:\Program Files\MATLAB\R2013a\EDU13.csv','r');
c = zeros(Nlines, 7);  % ... or another preallocated storage for the result ...
c_offset = 0;
while ~feof(fid)
    temp = textscan(fid, '%d,%d:%d.%d,%f,%d,%c', block);
    bt = numel(temp{1});  % rows read in this pass - should be `block`, except on the last loop
    % ... extract, process, store in c(c_offset + (1:bt), :) ...
    c_offset = c_offset + bt;
end
fclose(fid);
Inspired by #Axon's answer, I implemented a "fast" C program to convert the file to binary, then read it in using Matlab's fread function. Spoiler alert: reading is then 20x faster... although the initial conversion takes a little bit of time.
To make the job in Matlab easier, and the file size smaller, I am converting each of the number fields into an int16 (short integer). For the first field - which looks like a yyyymmdd field - that involves splitting into two smaller numbers; similarly the decimal numbers are converted to two short integers (given the apparent range I think that is valid). All this is recognizing that "to really optimize, you must really know your problem" - so if assumptions are invalid, the results will be too.
Here is the C code:
#include <stdio.h>
int main(){
    FILE *fp, *fo;
    long int ld1;
    int d2, d3, d4, d5, d6, d7;
    short int buf[9];
    char c8;
    int n;
    short int year, monthday;
    fp = fopen("bigdata.txt", "r");
    fo = fopen("bigdata.bin", "wb");
    if (fp == NULL || fo == NULL) {
        printf("unable to open file\n");
        return 1;
    }
    while(!feof(fp)) {
        n = fscanf(fp, "%ld %d:%d.%d %d.%d %d %c\n",
            &ld1, &d2, &d3, &d4, &d5, &d6, &d7, &c8);
        if (n != 8) break; // guard against a trailing partial line
        year = ld1 / 10000;
        monthday = ld1 - 10000 * year;
        // move everything into buffer for single call to fwrite:
        buf[0] = year;
        buf[1] = monthday;
        buf[2] = d2;
        buf[3] = d3;
        buf[4] = d4;
        buf[5] = d5;
        buf[6] = d6;
        buf[7] = d7;
        buf[8] = c8;
        fwrite(buf, sizeof(short int), 9, fo);
    }
    fclose(fp);
    fclose(fo);
    return 0;
}
The resulting file is about half the size of the original - which is encouraging and will speed up access. Note that it would be a good idea if the output file could be written to a different disk than the input file - it really helps keep data streaming without a lot of time wasted in seek operations.
Benchmark: using a file of 2 M lines as input, this ran in about 2 seconds (same disk). The resulting binary file is read in Matlab with the following:
tic
fid = fopen('bigdata.bin');
d = fread(fid, 'int16');
d = reshape(d, 9, []);
toc
Of course, now if you want to recover the numbers as floating point numbers, you will have to do a little bit of work; but I think it's worth it. One possible problem you will have to solve is the situation where the value after the decimal point has a different number of digits: converting (a,b) into float isn't as simple as "a + b/100" when b > 100... "exercise for the student"?
A little benchmarking: the above code took about 0.4 seconds. By comparison, my first suggestion with textscan took about 9 seconds on the same file, and your original code took a little over 11 seconds. The difference may get bigger when the file gets bigger.
If you do this a lot (as you said), it clearly is worth converting your files once to binary format, and using them that way. Especially if the file needs to be converted only once, and read many times, the savings will be considerable.
Update
I repeated the benchmark with a 13M-line file. The conversion took 13 seconds, and the binary read took under 3 seconds. By contrast, each of the other two methods took over a minute (textscan: 61 s; fscanf: 77 s). It seems that things scale linearly (file size: 470 MB text, 240 MB binary).