I'm trying to do connected component analysis, but I'm not getting the result I need: I want the vertebral body, but I'm getting some other objects as well.
Image is:
Result is:
im= imread('im.bmp');
figure,imshow(im);
K1=imadjust(im);
figure, imshow(K1), title('After Adjustment Image')
threshold = graythresh(K1);
originalImage = im2bw(K1, threshold);
originalImage = bwareaopen(originalImage,100);
se = strel('disk', 2); %# structuring element
closeBW = imclose(originalImage,se);
figure,imshow(closeBW);
CC = bwconncomp(closeBW);
L = labelmatrix(CC);
L2 = bwlabel(K1);
figure, imshow(label2rgb(L));
Segmentation isn't my area, so I'm not sure what the best approach is. Here are a couple heuristic ideas I came up with:
Discard regions that are too big or too small.
It looks like you can expect a certain size from the vertebra.
regionIdxs = unique(L(:));
regionSizes = accumarray(L(:)+1,1);
If we look at regionSizes, we see the region sizes in pixels:
213360
919
887
810
601
695
14551
684
1515
414
749
128
173
26658
The regions you want (rows 2-6) are in the range 500-1000 pixels. We can probably safely discard regions that are smaller than 200 or larger than 2000 pixels.
goodRegionIdx = (regionSizes>200) & (regionSizes<2000);
regionIdxs = regionIdxs(goodRegionIdx);
regionSizes = regionSizes(goodRegionIdx);
Look at the image moments of the desired regions.
The eigenvalues of the covariance matrix of a distribution characterize its size in its widest direction and its size perpendicular to that direction. We are looking for fat disk-shapes, so we can expect a big eigenvalue and a medium-sized eigenvalue.
[X,Y] = meshgrid(1:size(L,2),1:size(L,1));
for i = 1:length(regionIdxs)
    idx = regionIdxs(i);
    region = (L==idx);
    totalmass = sum(region(:));
    % first and second moments of the region
    Ex(i)  = sum( X(1,:).*sum(region,1) ) / totalmass;
    Ey(i)  = sum( Y(:,1).*sum(region,2) ) / totalmass;
    Exy(i) = sum(sum( X.*Y.*region )) / totalmass;
    Exx(i) = sum(sum( X.*X.*region )) / totalmass;
    Eyy(i) = sum(sum( Y.*Y.*region )) / totalmass;
    % covariance of the pixel coordinates
    Varx(i)  = Exx(i) - Ex(i)^2;
    Vary(i)  = Eyy(i) - Ey(i)^2;
    Varxy(i) = Exy(i) - Ex(i)*Ey(i);
    Cov = [Varx(i) Varxy(i); Varxy(i) Vary(i)];
    eigVals(i,:) = sort(eig(Cov),'descend')';  % largest eigenvalue first
end
If we look at the eigenvalues eigVals:
177.6943 30.8029
142.4484 35.9089
164.6374 26.2081
112.6501 22.7570
138.1674 24.1569
89.8082 58.8964
284.2280 96.9304
83.3226 15.9994
113.3122 33.7410
We are only interested in rows 1-5, which have a largest eigenvalue in the range 100-200 and a second eigenvalue below 50. If we keep only those, we get the following regions:
goodRegionIdx = (eigVals(:,1)>100) & (eigVals(:,1)<200) & (eigVals(:,2)<50);
regionIdxs = regionIdxs(goodRegionIdx);
We can plot the regions by using logical OR |.
finalImage = false(size(L));
for i = 1:length(regionIdxs)
finalImage = finalImage | (L==regionIdxs(i) );
end
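As a side note (my addition, not part of the original answer), the same mask can be built in a single call:
finalImage = ismember(L, regionIdxs);   % true wherever a pixel's label is one of the kept regions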
We seem to get one false positive. Looking at the ratio of the eigenvalues eigVals(:,1)./eigVals(:,2) is one idea, but that seems a little problematic too.
You could try some sort of outlier detection like RANSAC to try and eliminate the region you don't want, since true vertebra tend to be spatially aligned along a line or curve.
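To illustrate that idea, here is a minimal RANSAC-style sketch (my own addition, not part of the original answer). It fits a line through the region centroids Ex, Ey computed above and keeps only the regions whose centroid lies close to the best line; the distance threshold and the number of trials are arbitrary and would need tuning.
% hedged sketch: keep only regions whose centroids are roughly collinear
cx = reshape(Ex(goodRegionIdx), 1, []);   % centroids of the surviving regions
cy = reshape(Ey(goodRegionIdx), 1, []);
nRegions = numel(cx);
bestInliers = false(1, nRegions);
distThresh = 20;                          % max centroid-to-line distance in pixels (arbitrary)
for trial = 1:200
    pick = randperm(nRegions, 2);         % two random centroids define a candidate line
    p1 = [cx(pick(1)); cy(pick(1))];
    p2 = [cx(pick(2)); cy(pick(2))];
    d  = (p2 - p1) / norm(p2 - p1);       % unit direction of the candidate line
    n  = [-d(2); d(1)];                   % unit normal
    dists = abs(n' * ([cx; cy] - p1));    % perpendicular distance of every centroid
    inliers = dists < distThresh;
    if nnz(inliers) > nnz(bestInliers)
        bestInliers = inliers;
    end
end
regionIdxs = regionIdxs(bestInliers);     % keep only the aligned regions
% rebuild the mask, e.g. finalImage = ismember(L, regionIdxs), to see the result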
I'm not sure what else to suggest. You may have to look into more advanced segmentation methods like machine learning if you can't find another way to discriminate the good from the bad. Having a stricter preprocessing method might be one thing to try.
Hope that helps.
Related
I want to create a histogram, knowing that it will be counting the number of occurrences of 3 values of a pixel.
The idea is that I have 3 matrices (L1im, L2im, L3im) representing information extracted from an image, each of size 256*226, and I want to compute how many times a given combination, let's say (52,6,40), occurs (each number corresponds to one matrix/component, but they all belong to the same pixel).
I have tried this, but it doesn’t produce the right result:
for i = 1 : 256
    for j = 1 : 256
        for k = 1 : 256
            if (L1im == i) & (L2im == j) & (L3im == k)
                myhist(i,j,k) = myhist(i,j,k)+1;
            end
        end
    end
end
Colour Triplets Histogram
Keep in mind that computing an entire RGB triplet histogram is a large task, since there are 256 × 256 × 256 = 16,777,216 possible unique colours. A slightly more manageable task, I believe, is to compute the histogram only for the RGB values that actually occur in the image (the counts for all other triplets would be zero anyway). This is still fairly large, but it might be reasonable if the image is small. I also believe a decent alternative to binning is to resize the image to a smaller number of pixels, which can be done with the imresize function. This decreases the fidelity of the image and acts almost like a rounding function, which can "kinda" simulate the behaviour of binning. In this example I convert the matrices to string arrays and concatenate the channels L1im, L2im and L3im of the image. Below is a demo using the image saturn.png that ships with MATLAB. A Resize_Factor of 1 will result in the highest number of bins, and the number of bins decreases as the Resize_Factor increases. Keep in mind that the histogram might require scaling if the image is resized with the Resize_Factor.
Resize_Factor = 200;
RGB_Image = imread("saturn.png");
[Image_Height,Image_Width,Number_Of_Colour_Channels] = size(RGB_Image);
Number_Of_Pixels = Image_Height*Image_Width;
RGB_Image = imresize(RGB_Image,[Image_Height/Resize_Factor Image_Width/Resize_Factor]);
L1im = RGB_Image(:,:,1);
L2im = RGB_Image(:,:,2);
L3im = RGB_Image(:,:,3);
L1im_String = string(L1im);
L2im_String = string(L2im);
L3im_String = string(L3im);
RGB_Triplets = L1im_String + "," + L2im_String + "," + L3im_String;
Unique_RGB_Triplets = unique(RGB_Triplets);
for Colour_Index = 1: length(Unique_RGB_Triplets)
RGB_Colour = Unique_RGB_Triplets(Colour_Index);
Unique_RGB_Triplets(Colour_Index,2) = nnz(RGB_Colour == RGB_Triplets);
end
Counts = str2double(Unique_RGB_Triplets(:,2));
Scaling_Factor = Number_Of_Pixels/sum(Counts);
Counts = Counts.*Scaling_Factor;
if sum(Counts) == Number_Of_Pixels
disp("Sum of histogram is equal to the number of pixels");
end
bar(Counts);
title("RGB Triplet Histogram");
xlabel("RGB Triplets"); ylabel("Counts");
Current_Axis = gca;
Scale = (1:length(Unique_RGB_Triplets));
set(Current_Axis,'xtick',Scale,'xticklabel',Unique_RGB_Triplets);
Angle = 90;
xtickangle(Current_Axis,Angle);
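As an alternative sketch (my own addition, not part of the answer above), the same counts can be obtained without the string round-trip by treating each pixel as a row and combining unique with accumarray; RGB_Image is the (possibly resized) image from the code above:
Pixel_Rows = reshape(RGB_Image, [], 3);                   % one row per pixel: [R G B]
[Unique_Triplets, ~, Triplet_Map] = unique(Pixel_Rows, 'rows');  % map every pixel to its unique triplet
Triplet_Counts = accumarray(Triplet_Map, 1);              % occurrences of each unique triplet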
I'm relatively new to Matlab, and trying to understand why a piece of code isn't working.
I have a 512x512 image that needs to be downsized to 256, and then resized back up to 512.
The way I understand the mathematics, I would need to mean the pixels in the image to get the 256, and then sum them back to get the 512. Is that correct? Following is the code I'm looking at; if someone can explain to me what's wrong (it gives a blank white image), I would appreciate it:
w = double(imread('walkbridge.tif'));
%read the image
w = w(:,:,1);
for x = 1:256
    for y = 1:256
        s256(x,y) = (w(2*x,2*y) + w(2*x,(2*y)-1) + w((2*x)-1,2*y) + w((2*x)-1,(2*y)-1))/4;
    end
end
for x = 1 : 256
    for y = 1 : 256
        for x1 = 0:1
            for y1 = 0:1
                R1((2*x)-x1,((2*y)-y1)) = s256(x,y);
            end
        end
    end
end
imshow(R1)
I got your code to work, so you might have some bad values in your image data. Namely, if your image has values in range 0..127 or something similar, it will most likely show as all white. By default, imshow expects color channels to be in range 0..1.
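For example (my addition, not part of the original answer), either of these usually fixes the all-white display when the data are doubles in the 0..255 range:
imshow(R1, []);      % let imshow scale the display range to the data
imshow(uint8(R1));   % or convert back to uint8 so that 0..255 is interpreted correctly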
You might also want to simplify your code a bit by indexing the original array instead of accessing individual elements. That way the code is even easy to change:
half_size = 256;
w = magic(2*half_size);
w = w / max(w(:));
figure()
imshow(w)
s = zeros(half_size);
for x = 1:half_size
for y = 1:half_size
ix = w(2*x-1:2*x, 2*y-1:2*y);
s(x,y) = sum(ix(:))/4;
end
end
for x = 1:half_size
for y = 1:half_size
R1(2*x-1:2*x, 2*y-1:2*y) = s(x,y);
end
end
figure()
imshow(R1)
I imagine the calculations could even be vectorised in some way instead of looping, but I didn't bother.
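For what it's worth, here is one way the loops could be vectorised (my own sketch, not part of the original answer), reusing w and half_size from above: average each 2x2 block with a reshape, then replicate the averages back with kron.
% downsample: group pixels into 2x2 blocks (dims 1 and 3) and average them
s = squeeze(mean(mean(reshape(w, 2, half_size, 2, half_size), 1), 3));
% upsample: replicate each averaged pixel back into a 2x2 block
R1 = kron(s, ones(2));
figure()
imshow(R1)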
My calculation involves cosh(x) and sinh(x) with x around 700 - 1000, which overflows MATLAB's double precision, so the result is NaN. The problem in the code is that the argument 2*k_B*T/elastic_restor_coeff grows when the radius is small (below 5e-9 in the code). My goal is to do another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck at this problem.
My workaround right now is to approximate the real part of chi_para with a step function that switches when threshold2 reaches a value of about 300. The number 300 was obtained by using the lowest possible radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since that value changes with radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate, since it looks like a pulse instead of a step.
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;
T = 296;
radius = [5e-9,10e-9, 20e-9, 30e-9,100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff/elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        fric_coeff = 8*pi*1e-3.*radius(i).^3;
        elastic_restor_coeff = 8*pi*1.*radius(i).^3;
        time_const = fric_coeff/elastic_restor_coeff;
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
        chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar,real(chi_para),omega_ar,imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar,real(chi_perpen),omega_ar,imag(chi_perpen));
    hold on;
end
Here is the simplified function that I would like to approximate, where x is iterated in a loop and the maximum value of x is about 700.
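The formula itself is not reproduced here, but judging from G_para_func above the troublesome term has the form (cosh(a*u) - 1)/(cosh(a) - 1) with a large and 0 <= u <= 1. One standard workaround (my own suggestion, not from the original post) is to factor exp(a) out of numerator and denominator so that only non-positive exponents are ever exponentiated; a minimal sketch:
% stable evaluation of (cosh(a*u) - 1) / (cosh(a) - 1) for large a and 0 <= u <= 1:
%   cosh(a*u) - 1 = exp(a)*( exp(a*(u-1)) + exp(-a*(u+1)) - 2*exp(-a) )/2
%   cosh(a)   - 1 = exp(a)*( 1            + exp(-2*a)     - 2*exp(-a) )/2
stable_ratio = @(a,u) (exp(a.*(u-1)) + exp(-a.*(u+1)) - 2*exp(-a)) ./ ...
                      (1 + exp(-2*a) - 2*exp(-a));
% quick check against the naive expression in a range where both stay finite
a = 50; u = 0.3;
[(cosh(a*u)-1)/(cosh(a)-1), stable_ratio(a,u)]   % the two values should agree
An analogous factoring works for the sinh ratio in G_perpen_func.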
I'm trying to estimate the (unknown) original datapoints that went into calculating a (known) moving average. However, I do know some of the original datapoints, and I'm not sure how to use that information.
I am using the method given in the answers here: https://stats.stackexchange.com/questions/67907/extract-data-points-from-moving-average, but in MATLAB (my code below). This method works quite well for large numbers of data points (>1000), but less well with fewer data points, as you'd expect.
window = 3;
datapoints = 150;
data = 3*rand(1,datapoints)+50;
moving_averages = [];
for i = window:size(data,2)
moving_averages(i) = mean(data(i+1-window:i));
end
length = size(moving_averages,2)+(window-1);
a = (tril(ones(length,length),window-1) - tril(ones(length,length),-1))/window;
a = a(1:length-(window-1),:);
ai = pinv(a);
daily = mtimes(ai,moving_averages');
x = 1:size(data,2);
figure(1)
hold on
plot(x,data,'Color','b');
plot(x(window:end),moving_averages(window:end),'Linewidth',2,'Color','r');
plot(x,daily(window:end),'Color','g');
hold off
axis([0 size(x,2) min(daily(window:end))-1 max(daily(window:end))+1])
legend('original data','moving average','back-calculated')
Now, say I know a smattering of the original data points. I'm having trouble figuring out how I might use that information to more accurately calculate the rest. Thank you for any assistance.
You should be able to calculate the original data exactly if at any point you can determine one window's worth of data, i.e. in this case n-1 consecutive samples for a window of length n. (In your case) if you know A, B and (A+B+C)/3, you can solve for C. Now when you have (B+C+D)/3 (your next moving average) you can solve exactly for D. Rinse and repeat, as sketched below. The same logic works going backwards, too.
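For illustration, here is a minimal sketch of that recursion (my own addition, with assumed variable names), seeded with the first win-1 samples as if they were known:
win  = 3;
data = 3*rand(1,150) + 50;                 % stand-in for the unknown original data
m    = movmean(data, [win-1 0]);           % trailing moving average
m    = m(win:end);                         % keep full-window averages only, as in the question
rec  = [data(1:win-1), zeros(1, numel(data)-win+1)];   % seed with the known samples
for i = win:numel(data)
    % m(i-win+1) = mean(rec(i-win+1:i)), so solve for the newest sample
    rec(i) = win*m(i-win+1) - sum(rec(i-win+1:i-1));
end
max(abs(rec - data))                       % exact up to round-off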
Here is a fuller example that instead sets up and solves the whole linear system at once:
% the actual vector of values
a = cumsum(rand(150,1) - 0.5);
% compute moving average
win = 3; % sliding window length
idx = hankel(1:win, win:numel(a));
m = mean(a(idx));
% coefficient matrix: m(i) = sum(a(i:i+win-1))/win
A = repmat([ones(1,win) zeros(1,numel(a)-win)], numel(a)-win+1, 1);
for i=2:size(A,1)
A(i,:) = circshift(A(i-1,:), [0 1]);
end
A = A / win;
% solve linear system
%x = A \ m(:);
x = pinv(A) * m(:);
% plot and compare
subplot(211), plot(1:numel(a),a, 1:numel(m),m)
legend({'original','moving average'})
title(sprintf('length = %d, window = %d',numel(a),win))
subplot(212), plot(1:numel(a),a, 1:numel(a),x)
legend({'original','reconstructed'})
title(sprintf('error = %f',norm(x(:)-a(:))))
You can see the reconstruction error is very small, even using the data sizes in your example (150 samples with a 3-samples moving average).
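If some of the original samples are known (as in the question), one way to use them (my own extension of the snippet above, not part of the original answer) is to append one extra equation x(knownIdx) = knownVal per known sample to the system A*x = m before solving; the indices below are hypothetical:
knownIdx = [10 40 90 120];                 % hypothetical positions of known samples
knownVal = a(knownIdx);                    % their true values
E = zeros(numel(knownIdx), numel(a));
E(sub2ind(size(E), 1:numel(knownIdx), knownIdx)) = 1;   % one row per known sample
x2 = pinv([A; E]) * [m(:); knownVal(:)];   % solve the augmented least-squares system
norm(x2 - a)                               % compare against the unconstrained error above
With enough well-placed known samples (in particular win-1 consecutive ones), the system becomes fully determined and the reconstruction is exact.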
I am going to generate 10^6 random points in MATLAB with these particular characteristics:
the points should be inside a sphere of radius 25; they are 3-D, so we have x, y, z (or r, theta, phi);
there must be a minimum distance between any two points.
First, I decided to generate points and then check the distances, omitting the points that do not satisfy the condition; but that may omit many points.
Another way is to use RSA (Random Sequential Addition): generate points one by one, keeping the minimum distance between them. For example, generate the first point, then generate the second point randomly, outside the minimum distance from point 1, and so on until reaching 10^6 points.
But this takes a lot of time, and I cannot reach 10^6 points, since the search for an appropriate position for each new point takes longer and longer.
Right now I am using this program:
Nmax=10000;
R=25;
P=rand(1,3);
k=1;
while k<Nmax
theta=2*pi*rand(1);
phi=pi*rand(1);
r = R*sqrt(rand(1));
% convert to cartesian
x=r.*sin(theta).*cos(phi);
y=r.*sin(theta).*sin(phi);
z=r.*cos(theta);
P1=[x y z];
r=sqrt((x-0)^2+(y-0)^2+(z-0)^2);
D = pdist2(P1,P,'euclidean');
% euclidean distance
if D>0.146*r^(2/3)
P=[P;P1];
k=k+1;
end
i=i+1;
end
x=P(:,1);y=P(:,2);z=P(:,3); plot3(x,y,z,'.');
How can I efficiently generate points under these conditions?
Thank you.
I took a closer look at your algorithm, and concluded there is NO WAY it will ever work - at least not if you really want to get a million points in that sphere. There is a simple picture that explains why not - this is a plot of the number of points that you need to test (using your technique of RSA) to get one additional "good" point. As you can see, this goes asymptotic at just a few thousand points (I ran a slightly faster algorithm against 200k points to produce this):
I don't know if you ever tried to compute the theoretical number of points you could fit in your sphere when you have them perfectly arranged, but I'm beginning to suspect the number is a good deal smaller than 1E6.
The complete code I used to investigate this, plus the output it generated, can be found here. I never got as far as the technique I described in my earlier answer... there was just too much else going on in the setup you described.
EDIT:
I started to think it might not be possible, even with "perfect" arrangement, to get to 1M points. I made a simple model for myself as follows:
Imagine you start on the "outer shell" (r=25), and try to fit points at equal distances. If you divide the area of the "shell" by the area of one "exclusion disk" (of radius r_sub_crit), you get a (high) estimate of the number of points at that distance:
numpoints = 4*pi*r^2 / (pi*(0.146 * r^(2/3))^2) ~ 188 * r^(2/3)
The next "shell" in should be at a radius that is 0.146*r^(2/3) less - but if you think of the points as being very carefully arranged, you might be able to get a tiny bit closer. Again, let's be generous and say the shells can be just 1/sqrt(3) closer than the criteria. You can then start at the outer shell and work your way in, using a simple python script:
import numpy as np
r = 25
npts = 0
def rc(r):
    return 0.146 * np.power(r, 2./3.)
while r > rc(r):
    morePts = np.floor(4/(0.146*0.146) * np.power(r, 2./3.))
    npts = npts + morePts
    print(morePts, 'more points at r =', r)
    r = r - rc(r)/np.sqrt(3)
print('total number of points fitted in sphere:', npts)
The output of this is:
1604.0 more points at r = 25
1573.0 more points at r = 24.2793037966
1542.0 more points at r = 23.5725257555
1512.0 more points at r = 22.8795314897
1482.0 more points at r = 22.2001865995
1452.0 more points at r = 21.5343566722
1422.0 more points at r = 20.8819072818
1393.0 more points at r = 20.2427039885
1364.0 more points at r = 19.6166123391
1336.0 more points at r = 19.0034978659
1308.0 more points at r = 18.4032260869
1280.0 more points at r = 17.8156625053
1252.0 more points at r = 17.2406726094
1224.0 more points at r = 16.6781218719
1197.0 more points at r = 16.1278757499
1171.0 more points at r = 15.5897996844
1144.0 more points at r = 15.0637590998
1118.0 more points at r = 14.549619404
1092.0 more points at r = 14.0472459873
1066.0 more points at r = 13.5565042228
1041.0 more points at r = 13.0772594652
1016.0 more points at r = 12.6093770509
991.0 more points at r = 12.1527222975
967.0 more points at r = 11.707160503
943.0 more points at r = 11.2725569457
919.0 more points at r = 10.8487768835
896.0 more points at r = 10.4356855535
872.0 more points at r = 10.0331481711
850.0 more points at r = 9.64102993012
827.0 more points at r = 9.25919600154
805.0 more points at r = 8.88751153329
783.0 more points at r = 8.52584164948
761.0 more points at r = 8.17405144976
740.0 more points at r = 7.83200600865
718.0 more points at r = 7.49957037478
698.0 more points at r = 7.17660957023
677.0 more points at r = 6.86298858965
657.0 more points at r = 6.55857239952
637.0 more points at r = 6.26322593726
618.0 more points at r = 5.97681411037
598.0 more points at r = 5.69920179546
579.0 more points at r = 5.43025383729
561.0 more points at r = 5.16983504778
542.0 more points at r = 4.91781020487
524.0 more points at r = 4.67404405146
506.0 more points at r = 4.43840129415
489.0 more points at r = 4.21074660206
472.0 more points at r = 3.9909446055
455.0 more points at r = 3.77885989456
438.0 more points at r = 3.57435701766
422.0 more points at r = 3.37730048004
406.0 more points at r = 3.1875547421
390.0 more points at r = 3.00498421767
375.0 more points at r = 2.82945327223
360.0 more points at r = 2.66082622092
345.0 more points at r = 2.49896732654
331.0 more points at r = 2.34374079733
316.0 more points at r = 2.19501078464
303.0 more points at r = 2.05264138052
289.0 more points at r = 1.91649661498
276.0 more points at r = 1.78644045325
263.0 more points at r = 1.66233679273
250.0 more points at r = 1.54404945973
238.0 more points at r = 1.43144220603
226.0 more points at r = 1.32437870508
214.0 more points at r = 1.22272254805
203.0 more points at r = 1.1263372394
192.0 more points at r = 1.03508619218
181.0 more points at r = 0.94883272297
170.0 more points at r = 0.867440046252
160.0 more points at r = 0.790771268402
150.0 more points at r = 0.718689381062
140.0 more points at r = 0.65105725389
131.0 more points at r = 0.587737626612
122.0 more points at r = 0.528593100237
113.0 more points at r = 0.473486127367
105.0 more points at r = 0.422279001431
97.0 more points at r = 0.374833844693
89.0 more points at r = 0.331012594847
82.0 more points at r = 0.290676989951
75.0 more points at r = 0.253688551418
68.0 more points at r = 0.219908564725
61.0 more points at r = 0.189198057381
55.0 more points at r = 0.161417773651
49.0 more points at r = 0.136428145311
44.0 more points at r = 0.114089257597
38.0 more points at r = 0.0942608092113
33.0 more points at r = 0.0768020649149
29.0 more points at r = 0.0615717987589
24.0 more points at r = 0.0484282253244
20.0 more points at r = 0.0372289153633
17.0 more points at r = 0.0278306908104
13.0 more points at r = 0.0200894920319
10.0 more points at r = 0.013860207063
8.0 more points at r = 0.00899644813842
5.0 more points at r = 0.00535025545232
total number of points fitted in sphere: 55600.0
This seems to confirm that you really can't get to a million, no matter how you try...
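As a cross-check (my own arithmetic, using the same shell model as above): each shell holds about 188*r^(2/3) points and consecutive shells are spaced about 0.146*r^(2/3)/sqrt(3) apart, so the r-dependence cancels when you integrate over the shells:
N ≈ integral from 0 to R of [188*r^(2/3)] / [0.146*r^(2/3)/sqrt(3)] dr = 188*sqrt(3)/0.146 * R ≈ 2230*R ≈ 55,800 for R = 25
which agrees well with the 55,600 the script counts.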
There are many things you could do to improve your program - both algorithm, and code.
On the code side, one of the things that is REALLY slowing you down is the fact that not only do you use a loop (which is slow), but in the line
P = [P;P1];
you append elements to an array. Every time that happens, Matlab needs to find a new place to put the data, copying all the points in the process. This quickly becomes very slow. Preallocating the array with
P = zeros(1000000, 3);
keeping track of the number N of points you have found so far, and changing your calculation of distance to
D = pdist2(P1, P(1:N, :), 'euclidean');
would at least address that...
The other issue is that you check new points against all previously found points - so when you have 100 points, you check about 100x100, for 1000 it is 1000x1000. You can see then that this algorithm is O(N^3) at least... not counting the fact that you will get more "misses" as the density goes up. An O(N^3) algorithm with N = 1e6 takes at least 1e18 cycles; if you had a 4 GHz machine with one comparison per clock cycle, you would need 2.5e8 seconds - roughly 8 years. You can try parallel processing, but that's just brute force - who wants that?
I recommend that you think about breaking the problem into smaller pieces (quite literally): for example, if you divide your sphere into little boxes that are about the size of your maximum distance, and for each box you keep track of what points are in it, then you only need to check against points in "this" box and its immediate neighbors - 27 boxes in all. If your boxes are 2.5 mm across, you would have 100x100x100 = 1M boxes. That seems like a lot, but now your computation time will be reduced drastically, as you will have (by the end of the algorithm) only 1 point on average per box... Of course with the distance criterion you are using, you will have more points near the center, but that's a detail.
The data structure you would need would be a cell array of 100x100x100, and each cell contains the index of the good points found so far "in that cell". The problem with a cell array is that it doesn't lend itself to vectorization. If instead you have the memory, you could assign it as a 4D array of 10x100x100x100, assuming you will have no more than 10 points per cell (if you do, you will have to handle that separately; work with me here...). Use an index of -1 for points not yet found
Then your check would be something like this:
% initializing:
bigList = zeros(10,102,102,102) - 1;  % -1 marks an empty slot; one box of padding on each side so neighbour lookups never hit the edge
NPlist = zeros(102, 102, 102);        % number of valid points found so far in each box
bottomcorner = [-25.5, -25.5, -25.5]; % boxes span from -25.5 to +25.5
cellSize = 0.5;
.
% in your loop:
P1 = [x, y, z];
cellCoords = ceil((P1 - bottomcorner)/cellSize);   % box subscripts, in 1..102
% gather the point indices stored in this box and its 26 neighbours
pointsSoFar = bigList(:, cellCoords(1)+(-1:1), cellCoords(2)+(-1:1), cellCoords(3)+(-1:1));
pointsToCheck = pointsSoFar(pointsSoFar > 0);      % this is where the big gains come...
r = sqrt(sum(P1.^2));
D = pdist2(P1, P(pointsToCheck, :), 'euclidean');  % euclidean distance to nearby points only
if all(D > 0.146*r^(2/3))
    P(k,:) = P1;
    cci = sub2ind([102 102 102], cellCoords(1), cellCoords(2), cellCoords(3));
    NPlist(cci) = NPlist(cci) + 1;                 % one more point in this box
    % you want to handle the case where this > 10!!!
    bigList(NPlist(cci), cci) = k;                 % linear indexing over the last three dims
    k = k+1;
end
....
I don't know if you can take it from here; if you can't, say so in the notes and I may have some time this weekend to code this up in more detail. There are ways to speed it up more with some vectorization, but it quickly becomes hard to manage.
I think that putting a larger number of points randomly in space, and then using the above for a giant vectorized culling, may be the way to go. But I recommend to take little steps first... if you can get the above to work at all well, you can then optimize further (array size, etc).
I found the reference - "Simulated Brain Tumor Growth Dynamics Using a Three-Dimensional Cellular Automaton", Kansal et al. (2000).
I agree it is puzzling - until you realize one important thing. They are reporting their results in mm, but your code was written in cm. While that may seem insignificant, the formula for "critical radius", rc = 0.146r^(2/3) includes a constant, 0.146, that is dimensional - the dimensions are mm^(1/3), not cm^(1/3).
When I make that change in my python code to evaluate the number of possible lattice sites, it jumps by a factor 10. Now they claimed that they were using a "jamming limit" of 0.38 - the number where you really cannot find any more sites. If you include that limit, I predict no more than 200k points could be found - still short of their 1.5M, but not quite so crazy.
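(A quick check of that factor of 10, my own arithmetic: in the shell model above the total count scales linearly with the numerical value of the radius, N ≈ 2230*R, because the r-dependence of points-per-shell and shell spacing cancels. Re-expressing a radius of 25 cm as 250 mm therefore multiplies the estimate by exactly 10, and applying the 0.38 jamming limit to the resulting ~560k sites gives the ~200k figure quoted above.)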
You might consider contacting the authors to discuss this with them? If you want to include me in the conversation, you can email me at: SO (just two letters) at my handle name dot united states. Same domain as where I posted links above...