I need to make a B-scan from a chunk of A-scan data. The A-scan data I received are arranged so that each row holds the amplitude at one sample point and each column represents one A-scan gathered.
This is what my data looks like:
4855 4641 4891 4791 4812 4812
4827 4766 4862 4745 4767 4785
6676 5075 6903 6879 6697 6084
7340 6829 7678 7753 7263 6726
6176 6237 6708 6737 6316 5943
12014 10467 10915 10914 10124 10642
8251 7538 7641 7619 7269 7658
6522 6105 6132 6136 5921 6227
5519 5287 5330 5376 5255 5237
4904 4784 4835 4855 4794 4758
4553 4527 4472 4592 4469 4455
4298 4323 4291 4293 4221 4238
4167 3957 4089 3991 3938 3907
3789 3721 3777 3777 3643 3596
3736 4615 3639 2814 3638 2782
4413 5286 4248 3998 4370 4199
5994 6896 6134 5548 6102 6161
8506 9020 7841 8060 8663 8941
12347 12302 10639 11151 12533 12478
18859 18175 15035 15938 18358 18160
27106 26261 22613 24069 27015 27114
32767 32601 32767 32767 32767 32767
32767 32767 32767 32767 32767 32767
32767 32767 32767 32767 32767 32767
32767 32767 32767 32767 32767 32767
32767 32767 32767 32767 32767 32767
32767 32767 32767 32767 32767 32767
26416 26459 32767 32767 26308 26945
6523 6900 13327 16665 6616 6477
-14233 -14011 -8554 -5649 -13956 -13858
-28128 -26784 -26157 -24055 -27875 -28374
-28775 -27905 -30348 -26285 -28918 -29066
-20635 -19776 -21144 -21548 -22107 -22759
-16915 -15742 -15908 -17398 -19600 -20143
This is just a sample of the data; it is in .txt format.
(Images: example A-scan data and B-scan data)
The problem I am facing is plotting this data as a B-scan. MATLAB would be great (though other methods are welcome too). Please share your way of plotting this B-scan data.
Scilab
On Scilab you can use the read function. It reads formatted text files, and you need to know at least the number of columns. To add vertical spacing between traces, add a constant offset i*d to each column, where i is the column number and d is the spacing.
I put the example you gave in a text file so I could read it, then I plotted it.
//read(): first argument is file path
// (or file name, if you change current directory)
// second argument is number of lines (-1 if unknown)
// third argument is the number of columns
B = read("file.txt",-1,6);
d = 1000; //vertical spacing
Bs = B; //copy of the original data
for i = 1 : size(Bs,'c')
//loop adding constant to each column
Bs(:,i) = Bs(:,i) + (i-1) * d;
end
//simply plot the matrix
plot2d(Bs);
The result in Scilab is:
MATLAB
In MATLAB, you can use the importdata function, which also reads formatted text files; the minimum required input is the file name. You should also add the vertical spacing manually.
%call importdata() after changing current directory
B = importdata("file.txt");
d = 1000; %vertical spacing
Bs = B; %copy of the original data
for i = 1 : size(Bs,2)
%loop adding constant to each column
Bs(:,i) = Bs(:,i) + (i-1) * d;
end
%plot the modified matrix
plot(Bs);
The result in MATLAB is:
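For readers without MATLAB or Scilab, the same vertical-offset trick can be sketched in Python with NumPy (only the first three rows of the question's data are typed in here for brevity; in practice you would load the whole file with np.loadtxt("file.txt"), and d = 1000 follows the examples above):

```python
import numpy as np

# Six A-scans from the question, one per column (first three rows only):
B = np.array([[4855, 4641, 4891, 4791, 4812, 4812],
              [4827, 4766, 4862, 4745, 4767, 4785],
              [6676, 5075, 6903, 6879, 6697, 6084]], dtype=float)

d = 1000  # vertical spacing between traces
# Add (column index) * d to each column, mirroring the loops above:
Bs = B + d * np.arange(B.shape[1])

# Plot each offset column as one trace of the B-scan:
# import matplotlib.pyplot as plt
# plt.plot(Bs)
# plt.show()
```

Broadcasting `d * np.arange(ncols)` across the rows replaces the explicit per-column loop.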
% Plotting column space
figure(1)
plot3([0 columnspaceA(1,1)],[0 columnspaceA(2,1)],[0 columnspaceA(3,1)],'y-^','LineWidth',3)
hold on
plot3([0 columnspaceA(1,2)],[0 columnspaceA(2,2)],[0 columnspaceA(3,2)],'y-^','LineWidth',3)
% Plotting left null space
plot3([0 leftnullspace(1,1)],[0 leftnullspace(2,1)],[0 leftnullspace(3,1)],'g','LineWidth',3)
h = fmesh(@(s,t)columnspaceA(1,1)*s+columnspaceA(1,2)*t, @(s,t)columnspaceA(2,1)*s+columnspaceA(2,2)*t, @(s,t)columnspaceA(3,1)*s+columnspaceA(3,2)*t, [-1 1]);
figure(2)
% Plotting null space
hold on
plot3([0 nullspace(1,1)],[0 nullspace(2,1)],[0 nullspace(3,1)],'g-^','LineWidth',3)
% Plotting row space
plot3([0 rowspaceA(1,1)],[0 rowspaceA(2,1)],[0 rowspaceA(3,1)],'r-^','LineWidth',3)
hold on
plot3([0 rowspaceA(1,2)],[0 rowspaceA(2,2)],[0 rowspaceA(3,2)],'r-^','LineWidth',3)
h1 = fmesh(@(s,t)rowspaceA(1,1)*s+rowspaceA(1,2)*t, @(s,t)rowspaceA(2,1)*s+rowspaceA(2,2)*t, @(s,t)rowspaceA(3,1)*s+rowspaceA(3,2)*t, [-1 1]);
I have a text file https://www.mikrocontroller.net/attachment/428580/Probe_1.txt with analog values. I'm using fft and pwelch in MATLAB to find the frequency of my analog signal. Theoretically the frequency should be somewhere around 300 Hz, which is why I chose a range of 0-500 Hz.
I tried fft:
>> load pr_1.txt;
>> Fs = 1000;
>> T = 1/Fs;
>> L = length(pr_1);
>> FFT = fft(pr_1);
>> P2 = abs(FFT/L);
>> P1 = P2(1:L/2+1);
>> P1(2:end-1) = 2*P1(2:end-1);
>> f = Fs*(0:(L/2))/L;
>> plot(f,20*log10(P1))
and pwelch:
>> len = 2050;
>> h = kaiser(len,4.53);
>> pwelch(pr_1,h,[],len,Fs)
Both show the same answer: a peak at ~300 Hz.
But my question is: how do I define the right sampling frequency Fs?
If I change Fs to, for example, 10000 instead of 1000, I get a peak at ~3000 Hz instead of 300 Hz, and if I change Fs to 500, the peak ends up at 150 Hz. I have trouble understanding how to define the right Fs.
Assuming your provided file was generated by some measurement device, you can take the 3rd line
# Aufnahmerate [Hz]: 9997.4286613483
as your sample rate Fs. This is a fixed rate defined by the capturing device, just as @Irreducible said, so you can't change it arbitrarily.
The header of the file tells you more: it says there are 100000 samples, which is true, and if you multiply the time interval per sample (5th line) by the number of samples (100000, 2nd line), you get exactly the measurement duration (4th line). From this I assume these values are correct.
For completeness/future reference: the top content of your file:
# 2019-08-26 15:38:49.661906
# Anzahl Messwerte: 100000
# Aufnahmerate [Hz]: 9997.4286613483
# Messdauer [s]: 10.002572
# Zeitintervall je Messpunkt [ms]: 0.10002572
2.820286226094503856e+00
2.820608226282290687e+00
2.820286226094503856e+00
2.820608226282290687e+00
2.820286226094503856e+00
2.820608226282290687e+00
2.820608226282290687e+00
2.820608226282290687e+00
2.819642225722452267e+00
2.820286226094503856e+00
2.820608226282290687e+00
2.819964225907891198e+00
2.820286226094503856e+00
2.819964225907891198e+00
...
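The point that Fs only rescales the frequency axis can be demonstrated with a short Python/NumPy sketch (the 300 Hz tone is a synthetic stand-in for the measured signal; the sample rate and length are taken from the file header quoted above):

```python
import numpy as np

Fs = 9997.4286613483            # sample rate from the file header
L = 100000                      # number of samples from the header
t = np.arange(L) / Fs
sig = np.sin(2 * np.pi * 300.0 * t)   # synthetic 300 Hz test tone

# Same single-sided spectrum construction as in the question:
P2 = np.abs(np.fft.fft(sig) / L)
P1 = P2[: L // 2 + 1]
f = Fs * np.arange(L // 2 + 1) / L

peak_hz = f[np.argmax(P1)]
# With the correct Fs the peak lands at ~300 Hz. Feeding the same samples
# through with a wrong Fs scales the whole f axis, and hence the reported
# peak, by that same wrong factor -- which is exactly the effect observed.
```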
I am working with the mtcars dataset and using linear regression:
data(mtcars)
fit <- lm(mpg ~ ., mtcars); summary(fit)
When I fit the model with lm, I get this result:
Call:
lm(formula = mpg ~ ., data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-3.5087 -1.3584 -0.0948 0.7745 4.6251
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 23.87913 20.06582 1.190 0.2525
cyl6 -2.64870 3.04089 -0.871 0.3975
cyl8 -0.33616 7.15954 -0.047 0.9632
disp 0.03555 0.03190 1.114 0.2827
hp -0.07051 0.03943 -1.788 0.0939 .
drat 1.18283 2.48348 0.476 0.6407
wt -4.52978 2.53875 -1.784 0.0946 .
qsec 0.36784 0.93540 0.393 0.6997
vs1 1.93085 2.87126 0.672 0.5115
amManual 1.21212 3.21355 0.377 0.7113
gear4 1.11435 3.79952 0.293 0.7733
gear5 2.52840 3.73636 0.677 0.5089
carb2 -0.97935 2.31797 -0.423 0.6787
carb3 2.99964 4.29355 0.699 0.4955
carb4 1.09142 4.44962 0.245 0.8096
carb6 4.47757 6.38406 0.701 0.4938
carb8 7.25041 8.36057 0.867 0.3995
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.833 on 15 degrees of freedom
Multiple R-squared: 0.8931, Adjusted R-squared: 0.779
F-statistic: 7.83 on 16 and 15 DF, p-value: 0.000124
I found that none of the variables is marked as significant at the 0.05 significance level.
To find significant variables I want to do subset selection, to find the best set of predictor variables for the response mpg.
The function regsubsets in the package leaps does best subset regression (see ?leaps). Adapting your code:
library(leaps)
regfit <- regsubsets(mpg ~., data = mtcars)
summary(regfit)
# or for a more visual display
plot(regfit,scale="Cp")
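The idea behind best subset regression is small enough to sketch outside R as well; here is an illustrative Python version on hypothetical synthetic data (this is not the leaps implementation, just a brute-force search minimizing residual sum of squares, which is cheap at mtcars-sized problems):

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Exhaustively try all size-k column subsets of X; return the tuple of
    column indices whose OLS fit has the lowest residual sum of squares."""
    best, best_rss = None, np.inf
    for cols in itertools.combinations(range(X.shape[1]), k):
        A = np.column_stack([np.ones(len(y)), X[:, list(cols)]])  # intercept + subset
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ beta) ** 2))
        if rss < best_rss:
            best, best_rss = cols, rss
    return best

# Hypothetical synthetic data: only columns 0 and 2 actually drive y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=100)
chosen = best_subset(X, y, 2)
```

In practice you would compare subset sizes with a penalized criterion (Cp, BIC, adjusted R^2), which is what the `scale=` argument of plot.regsubsets does.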
I have done some research here to understand this topic but have not achieved good results. I'm working with a Kinect for Windows, the Kinect SDK 1.7, and MATLAB to process the raw depth-map info.
First, I'm using this method (https://stackoverflow.com/a/11732251/3416588) to store the Kinect raw depth data in a text file. I get a list with 480x640 = 307200 elements, with data like this:
23048
23048
23048
-8
-8
-8
-8
-8
-8
-8
-8
6704
6720
6720
6720
6720
6720
6720
6720
6720
6736
6736
6736
6736
6752
0
0
Then in MATLAB I convert these values to binary, so I get 16-bit numbers. The last three bits, which correspond to the player index, are "000", so I remove them. From the resulting 13-bit numbers I then remove the most significant bit, which is always "0".
So, I have made this:
[0,1,1,0,0,0,1,0,0,0,1,1,1,0,0,0] - 16 bits number
[0,1,1,0,0,0,1,0,0,0,1,1,1] - 13 bits number
[1,1,0,0,0,1,0,0,0,1,1,1] - 12 bits number
Then I followed this procedure (https://stackoverflow.com/a/9678900/3416588) to convert the raw depth info to meters, but I get values in the range -4.7422 to 0.3055. What do they mean?
clc
clear all
close all
%Get raw depth data from .txt
fileID = fopen('C:\Example.txt', 'r');
datos = fscanf(fileID, '%i'); % Data in decimal
fclose(fileID);
% If raw depth data is less than 0, set it to 0
for i = 1 : 640*480
if(datos(i) < 0)
datos(i) = 0;
end
end
% Auxiliar arrays
datosBin = zeros(640*480, 16);
realDepth = zeros(640*480, 12);
% Decimal raw depth data to binary
n = 16;
m = 0;
for i = 1 : 640*480
a = datos(i);
datosBin(i,:) = fix(rem(a*pow2(-(n-1):m),2));
end
% Remove player index and msb (more significant bit)
for i = 1 : 640*480
realDepth(i,:) = datosBin(i,2:13);
end
% Auxiliar array to store raw depth data decimal number
realDepthDec = zeros(640*480,1);
% Raw depth data 12 bits to decimal
for i = 1 : 640*480
realDepthDec(i) = bin2dec(num2str(realDepth(i,:)));
end
% Auxiliar array
rawDepthMapMeters = zeros(480, 640);
% Create array 640*480 to store bit depth info in meters
for j = 1 : 480
for i = 1 : 640
if(realDepthDec(i+j) <= 2046)
rawDepthMapMeters(j, i) = 0.1236 * tan(realDepthDec(i+j)/2842.5 + 1.1863);
end
end
end
Where is my mistake? What am I doing wrong? Thanks in advance.
PS. Sorry for my bad english.
In the second article you read, you will see that the method you use is outdated. Read this.
x = (i - w / 2) * (z + minDistance) * scaleFactor
y = (j - h / 2) * (z + minDistance) * scaleFactor
z = z
Where
minDistance = -10
scaleFactor = .0021.
These values were found by hand.
Also you could convert those points to millimeters in your first application as described in the second question
using (var depthFrame = e.OpenDepthImageFrame())
{
    var depthArray = new short[depthFrame.PixelDataLength];
    depthFrame.CopyPixelDataTo(depthArray);
    for (int i = 0; i < depthArray.Length; i++)
    {
        // Shift out the player-index bits to get depth in millimeters:
        int depthInMillimeters =
            depthArray[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        // TADAx2!
    }
}
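That shift simply drops the 3 player-index bits described in the question's own bit layout. The unpacking can be checked in a few lines of Python (a sketch assuming the low 3 bits hold the player index, as in the question):

```python
PLAYER_INDEX_BITS = 3  # low 3 bits of each 16-bit value hold the player index

def unpack(raw):
    """Split a raw 16-bit Kinect depth value into (depth_mm, player_index)."""
    return raw >> PLAYER_INDEX_BITS, raw & ((1 << PLAYER_INDEX_BITS) - 1)

# Round-trip check: pack a known depth and player index, then unpack.
raw = (1234 << PLAYER_INDEX_BITS) | 0b101
depth_mm, player = unpack(raw)
```

A plain right shift like this is why no per-bit string manipulation (binary conversion, bit removal, bin2dec) is needed in the MATLAB code either.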
Can anyone tell me how I can write a 5 ms timer in MATLAB?
%% Decomposing into sets of 40 bytes packets
% While Time < T1(= 5 msec), keep on filling the 40 bytes-sized packet
%while(Total_Connection_Time-Running_Time)>0
for n=1:length(total_number_of_bytes)
% n = counter to go through the "total_number_of_bytes" matrix
packets=[]; % 40-bytes matrix (packetization phase)
% checking whether number of bytes at each talkspurt period is < or > 40 bytes in order to start packetization
if (total_number_of_bytes(n)<=40)
k=40-total_number_of_bytes(n); % calculating how many remaining bytes we need to complete a 40 bytes packet
packets=[packets,total_number_of_bytes(n)+k];
total_number_of_bytes(n)=40; %new bytes matrix after packetization (adding bytes from next talkspurt to get total of 40 bytes)
total_number_of_bytes(n+1)= total_number_of_bytes(n+1)-k; % bytes are taken from the next talkspurt period in order to get a 40 byte packet
if total_number_of_bytes(n+1)<0
for i=1:length(total_number_of_bytes) % looping through the array starting total_number_of_bytes(n+1)
total_number_of_bytes(n+1)=total_number_of_bytes(n+1)+total_number_of_bytes(n+1+i)
total_number_of_bytes(n+1+i)=0;
packets=[total_number_of_bytes]
end
end
end
if(total_number_of_bytes(n)>40)
m=total_number_of_bytes(n)-40; % cz we need 40 bytes packets
packets=[packets,total_number_of_bytes -40];
total_number_of_bytes(n)=40;
total_number_of_bytes(n+1)= total_number_of_bytes(n+1)+m; % The remaining bytes are added to the next talkspurt period bytes
packets=[total_number_of_bytes]
end
For better accuracy, use
java.lang.Thread.sleep(5);
instead of tic and toc; see here for further info.
Tic and toc are getting a bad rap, so I will just post this.
I tried the following:
tic
count = 0;
while toc<0.005
a=randn(10);
count = count+1;
end
toc
Running it ten times, the maximum value of toc was 5.006 ms. The count was around 1000 each time.
This is not the same as your program, but if graphics or GUI are not involved, I think tic and toc can do the job.
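The same busy-wait experiment translates directly to other languages; here is a Python sketch with time.perf_counter standing in for tic/toc:

```python
import time

def busy_wait_ms(ms):
    """Spin until `ms` milliseconds have elapsed; return the actual elapsed
    time in seconds and the number of loop iterations completed."""
    start = time.perf_counter()
    count = 0
    while time.perf_counter() - start < ms / 1000.0:
        count += 1  # stand-in for the per-iteration work (randn(10) above)
    return time.perf_counter() - start, count

elapsed, count = busy_wait_ms(5)
# elapsed is just over 0.005 s; the overshoot is one loop iteration
# plus timer granularity, which matches the ~5.006 ms observed above.
```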
In the 2D array plotted below, we are interested in finding the "lump" region. As you can see, it is not a continuous graph. Also, we know the approximate dimensions of the "lump" region. A set of data is given below: the first column contains the y values and the second contains the x values. Any suggestions on how to detect lump regions like this?
21048 -980
21044 -956
21040 -928
21036 -904
21028 -880
21016 -856
21016 -832
21016 -808
21004 -784
21004 -760
20996 -736
20996 -712
20992 -684
20984 -660
20980 -636
20968 -612
20968 -588
20964 -564
20956 -540
20956 -516
20952 -492
20948 -468
20940 -440
20936 -416
20932 -392
20928 -368
20924 -344
20920 -320
20912 -296
20912 -272
20908 -248
20904 -224
20900 -200
20900 -176
20896 -152
20888 -128
20888 -104
20884 -80
20872 -52
20864 -28
20856 -4
20836 16
20812 40
20780 64
20748 88
20744 112
20736 136
20736 160
20732 184
20724 208
20724 232
20724 256
20720 280
20720 304
20720 328
20724 352
20724 376
20732 400
20732 424
20736 448
20736 472
20740 496
20740 520
20748 544
20740 568
20736 592
20736 616
20736 640
20740 664
20740 688
20736 712
20736 736
20744 760
20748 788
20760 812
20796 836
20836 860
20852 888
20852 912
20844 936
20836 960
20828 984
20820 1008
20816 1032
20820 1056
20852 1080
20900 1108
20936 1132
20956 1156
20968 1184
20980 1208
20996 1232
21004 1256
21012 1280
21016 1308
21024 1332
21024 1356
21028 1380
21024 1404
21020 1428
21016 1452
21008 1476
21004 1500
20992 1524
20980 1548
20956 1572
20944 1596
20920 1616
20896 1640
20872 1664
20848 1684
20812 1708
20752 1728
20664 1744
20640 1768
20628 1792
20628 1816
20620 1836
20616 1860
20612 1884
20604 1908
20596 1932
20588 1956
20584 1980
20580 2004
20572 2024
20564 2048
20552 2072
20548 2096
20536 2120
20536 2144
20524 2164
20516 2188
20512 2212
20508 2236
20500 2260
20488 2280
20476 2304
20472 2328
20476 2352
20460 2376
20456 2396
20452 2420
20452 2444
20436 2468
20432 2492
20432 2516
20424 2536
20420 2560
20408 2584
20396 2608
20388 2628
20380 2652
20364 2676
20364 2700
20360 2724
20352 2744
20344 2768
20336 2792
20332 2812
20328 2836
20332 2860
20340 2888
20356 2912
20380 2940
20428 2968
20452 2996
20496 3024
20532 3052
20568 3080
20628 3112
20652 3140
20728 3172
20772 3200
20868 3260
20864 3284
20864 3308
20868 3332
20860 3356
20884 3384
20884 3408
20912 3436
20944 3464
20948 3488
20948 3512
20932 3536
20940 3564
It may be just a coincidence, but the lump you show looks fairly parabolic. It's not completely clear what you mean by "know the approximate dimension of the lump region", but if you mean that you know approximately how wide it is (i.e., how much of the x-axis it takes up), you could simply slide a window of that width along the x-axis and do a parabolic fit (a.k.a. polyfit with degree 2) to all data that falls into the window at each position. Then compute an r^2 goodness-of-fit value at each position; the position with r^2 closest to 1.0 is the best fit. You'd probably need a threshold value, and for sanity you'd throw out fits where the x^2 coefficient was positive (to find lumps rather than dips), but this could be a workable approach.
Even if the parabolic look is a coincidence, I think this would be a reasonable approach: a downward-pointing parabola is a pretty good description of a general "lump" by any definition I can think of.
Edit: Attempted Implementation Below
I got curious and went ahead and implemented my proposed solution (with slight modifications). First, here's the code (ugly but functional):
function [x, p] = find_lump(data, width)
    n = size(data, 1);
    f = plot(data(:,1), data(:,2), 'bx-');
    hold on;
    bestX = -inf;
    bestP = [];
    bestMSE = inf;
    bestXdat = [];
    bestYfit = [];
    spanStart = 0;
    spanStop = 1;
    spanWidth = 0;
    while (spanStop < n)
        if (spanStart > 0)
            % Drop first segment from window (since we'll advance x):
            spanWidth = spanWidth - (data(spanStart + 1, 1) - x);
        end
        spanStart = spanStart + 1;
        x = data(spanStart, 1);
        % Advance spanStop index to maintain window width:
        while ((spanStop < n) && (spanWidth <= width))
            spanStop = spanStop + 1;
            spanWidth = data(spanStop, 1) - x;
        end
        % Correct for overshoot:
        if (spanWidth > width)
            spanStop = spanStop - 1;
            spanWidth = data(spanStop, 1) - x;
        end
        % Fit parabola to data in the current window:
        xdat = data(spanStart:spanStop, 1);
        ydat = data(spanStart:spanStop, 2);
        p = polyfit(xdat, ydat, 2);
        % Compute fit quality (mean squared error):
        yfit = polyval(p, xdat);
        r = yfit - ydat;
        mse = (r' * r) / size(xdat, 1);
        if ((p(1) < -0.002) && (mse < bestMSE))
            bestMSE = mse;
            bestX = x;
            bestP = p;
            bestXdat = xdat;
            bestYfit = yfit;
        end
    end
    x = bestX;
    p = bestP;
    plot(bestXdat, bestYfit, 'r-');
...and here's a result using the given data (I swapped the columns so column 1 is x values and column 2 is y values) with a window width parameter of 750:
Comments:
I opted to use mean squared error between the fit parabola and the original data within each window as the quality metric, rather than correlation coefficient (r^2 value) due to laziness more than anything else. I don't think the results would be much different the other way.
The output is heavily dependent on the threshold value chosen for the quadratic coefficient (see the bestMSE condition at the end of the loop). Truth be told, I cheated here by outputting the fit coefficients at each point and then selecting the threshold based on the known lump shape. This is equivalent to using a lump template, as suggested by @chaohuang, and may not be very robust depending on the expected variance in the data.
Note that some sort of shape control parameter seems to be necessary if this approach is used. The reason is that any random (smooth) run of data can be fit nicely to some parabola, but not necessarily around the maximum value. Here's a result where I set the threshold to zero and thus only restricted the fit to parabolas pointing downwards:
An improvement would be to add a check that the fit parabola at least has a maximum within the window interval (that is, check that the first derivative goes to zero within the window, so we at least find a local maximum along the curve). This alone is not sufficient, as you still might have a tiny lump that fits a parabola better than the "obvious" big lump seen in the given data set.
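To make the sliding-window idea concrete without the original data set, here is a minimal Python sketch on synthetic data (a flat baseline with one exactly parabolic lump; the curvature threshold plays the same role as the -0.002 test in the MATLAB code above, and the window stepping is simplified to one start per sample):

```python
import numpy as np

def find_lump(x, y, width, curv_thresh=-0.002):
    """Slide a window of given x-width along the data; fit a parabola in each
    window and return the window-start x of the lowest-MSE downward fit."""
    best_x, best_mse = None, np.inf
    for start in range(len(x)):
        mask = (x >= x[start]) & (x <= x[start] + width)
        if mask.sum() < 5:          # too few points for a meaningful fit
            continue
        p = np.polyfit(x[mask], y[mask], 2)
        mse = np.mean((np.polyval(p, x[mask]) - y[mask]) ** 2)
        if p[0] < curv_thresh and mse < best_mse:  # downward parabolas only
            best_x, best_mse = x[start], mse
    return best_x

# Flat baseline plus a parabolic lump of half-width 10 centred at x = 60:
x = np.arange(0.0, 101.0)
y = np.where(np.abs(x - 60) < 10, 1 - ((x - 60) / 10) ** 2, 0.0)
found = find_lump(x, y, width=20.0)  # window [found, found + 20] brackets the lump
```

The curvature threshold is what rejects the near-perfect but essentially flat fits on the baseline, exactly the failure mode discussed above when the threshold is set to zero.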