I haven't been able to find the range of values returned by .pixel_array, so I'm not sure how to scale the values to a custom range like [0, 1]. Is there a built-in pydicom function that does this already?
A combination of the Bits Stored and Pixel Representation elements should be enough to work out the range of the Pixel Data:
from pydicom import dcmread

ds = dcmread("/path/to/dataset")

if ds.PixelRepresentation == 0:
    # Unsigned integers
    min_px = 0
    max_px = 2**ds.BitsStored - 1
else:
    # Signed integers
    min_px = -2**(ds.BitsStored - 1)
    max_px = 2**(ds.BitsStored - 1) - 1
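One way to then map pixel_array onto [0, 1] (a minimal sketch, not a pydicom built-in; it reuses min_px and max_px from the snippet above and scales by the theoretical range rather than the array's actual minimum and maximum):

import numpy as np

arr = ds.pixel_array.astype(np.float64)
# Map the theoretical [min_px, max_px] range onto [0, 1]
scaled = (arr - min_px) / (max_px - min_px)

If the dataset also carries Rescale Slope/Intercept, you may want to run the array through pydicom's apply_modality_lut first and scale the rescaled values instead.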
I want to create a histogram, counting the number of occurrences of combinations of 3 values per pixel.
The idea is that I have 3 matrices (L1im, L2im, L3im) representing information extracted from an image, each of size 256*226, and I want to compute how many times a combination such as (52,6,40) occurs (each number corresponds to one matrix/component, but they all belong to the same pixel).
I have tried this, but it doesn't produce the right result:
for i = 1 : 256
    for j = 1 : 256
        for k = 1 : 256
            if (L1im == i) & (L2im == j) & (L3im == k)
                myhist(i,j,k) = myhist(i,j,k) + 1;
            end
        end
    end
end
Colour Triplets Histogram
Keep in mind that an entire RGB triplet histogram is a large task, since there are 256 × 256 × 256 = 16,777,216 possible unique colours. A more manageable task, I believe, is to compute the histogram only for the unique RGB triplets present in the image (the counts for everything else would be zero anyway). This is still fairly large, but it may be reasonable if the image is fairly small. I believe a decent alternative to binning is to resize the image to a smaller number of pixels, which can be done with the imresize function. This decreases the fidelity of the image and acts almost like a rounding function, which can "kinda" simulate the behaviour of binning. In the example below I convert the matrices to string arrays and concatenate the channels L1im, L2im and L3im of the image. The demo uses the image saturn.png that ships with MATLAB. A Resize_Factor of 1 results in the largest number of bins, and the number of bins decreases as the Resize_Factor increases. Keep in mind that the histogram might require scaling if the image is resized with the Resize_Factor.
Resize_Factor = 200;
RGB_Image = imread("saturn.png");
[Image_Height,Image_Width,Number_Of_Colour_Channels] = size(RGB_Image);
Number_Of_Pixels = Image_Height*Image_Width;
RGB_Image = imresize(RGB_Image,[Image_Height/Resize_Factor Image_Width/Resize_Factor]);
L1im = RGB_Image(:,:,1);
L2im = RGB_Image(:,:,2);
L3im = RGB_Image(:,:,3);
L1im_String = string(L1im);
L2im_String = string(L2im);
L3im_String = string(L3im);
RGB_Triplets = L1im_String + "," + L2im_String + "," + L3im_String;
Unique_RGB_Triplets = unique(RGB_Triplets);
for Colour_Index = 1:length(Unique_RGB_Triplets)
    RGB_Colour = Unique_RGB_Triplets(Colour_Index);
    Unique_RGB_Triplets(Colour_Index,2) = nnz(RGB_Colour == RGB_Triplets);
end
Counts = str2double(Unique_RGB_Triplets(:,2));
Scaling_Factor = Number_Of_Pixels/sum(Counts);
Counts = Counts.*Scaling_Factor;
if sum(Counts) == Number_Of_Pixels
    disp("Sum of histogram is equal to the number of pixels");
end
bar(Counts);
title("RGB Triplet Histogram");
xlabel("RGB Triplets"); ylabel("Counts");
Current_Axis = gca;
Scale = (1:length(Unique_RGB_Triplets));
set(Current_Axis,'xtick',Scale,'xticklabel',Unique_RGB_Triplets);
Angle = 90;
xtickangle(Current_Axis,Angle);
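For what it's worth, the string comparison can also be avoided entirely. A sketch under the assumption that L1im, L2im and L3im are integer matrices of the same size: treat each pixel as one row of an N-by-3 matrix and let unique (with the 'rows' option) and accumarray do the counting.

% Stack the three components into one N-by-3 matrix, one row per pixel
Triplets = [L1im(:) L2im(:) L3im(:)];
% Unique_Triplets lists every distinct (L1,L2,L3) combination once;
% idx maps each pixel to the row of its combination
[Unique_Triplets, ~, idx] = unique(double(Triplets), 'rows');
% Count how many pixels fall on each unique combination
Triplet_Counts = accumarray(idx, 1);

Triplet_Counts(k) is then the number of pixels whose triplet equals Unique_Triplets(k,:), the same information the string-based loop produces.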
I've declared a function that calculates the convolution of an image with an arbitrary 3x3 kernel. I also created a script that prompts the user to select an image and enter the convolution kernel of their choice. However, I do not know how to deal with the negative pixel values that arise for various kernels. How can I handle these negative values in my script?
This is my function:
function y = convul(x,m,H,W)
y = zeros(H,W);
for i = 2:(H-1)
    for j = 2:(W-1)
        Z1 = (x(i-1,j-1))*(m(1,1));
        Z2 = (x(i-1,j))*(m(1,2));
        Z3 = (x(i-1,j+1))*(m(1,3));
        Z4 = (x(i,j-1))*(m(2,1));
        Z5 = (x(i,j))*(m(2,2));
        Z6 = (x(i,j+1))*(m(2,3));
        Z7 = (x(i+1,j-1))*(m(3,1));
        Z8 = (x(i+1,j))*(m(3,2));
        Z9 = (x(i+1,j+1))*(m(3,3));
        y(i,j) = Z1+Z2+Z3+Z4+Z5+Z6+Z7+Z8+Z9;
    end
end
And this is the script I've written that prompts the user to select an image and enter a kernel of their choice:
[file,path]=uigetfile('*.bmp');
x = imread(fullfile(path,file));
x_info=imfinfo(fullfile(path,file));
W=x_info.Width;
H=x_info.Height;
L=x_info.NumColormapEntries;
prompt='Enter a convolution kernel m: ';
m=input(prompt)/9;
y=convul(x,m,H,W);
imshow(y,[0,(L-1)]);
I've tried to use the absolute value of the convolution, as well as attempting to locate negatives in the output image, but nothing worked.
This is the original image:
This is the image I get when I use the kernel [-1 -1 -1;-1 9 -1; -1 -1 -1]:
I don't know what I'm doing wrong.
MATLAB is rather unique in how it handles operations between different data types. If x is uint8 (as it likely is in this case), and m is double (as it likely is in this case), then this operation:
Z1=(x(i-1,j-1))*(m(1,1));
returns a uint8 value, not a double. Arithmetic in MATLAB always takes the type of the non-double argument. (And you cannot do arithmetic between two different types unless one of them is double.)
MATLAB does integer arithmetic with saturation. That means that uint8(5) * -1 gives 0, not -5, because uint8 cannot represent a negative value.
So all your Z1..Z9 are uint8 values, negative results have been set to 0. Now you add all of these, again with saturation, leading to a value of at most 255. This value is assigned to the output (a double). So it looks like you are doing your computations correctly and outputting a double array, but you are still clamping your result in an odd way.
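You can see this directly in the command window (a small illustrative snippet, not part of the original code):

a = uint8(5) * -1        % gives uint8(0): the negative result saturates at 0
b = uint8(200) + 100     % gives uint8(255): saturates at the top of the range
class(uint8(5) * 0.5)    % 'uint8': integer times double stays integer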
A correct implementation casts each value of x to double before multiplying it by the (potentially negative) kernel coefficient. For example:
for i = 2:H-1
    for j = 2:W-1
        s = 0;
        s = s + double(x(i-1,j-1))*m(1,1);
        s = s + double(x(i-1,j))*m(1,2);
        s = s + double(x(i-1,j+1))*m(1,3);
        s = s + double(x(i,j-1))*m(2,1);
        s = s + double(x(i,j))*m(2,2);
        s = s + double(x(i,j+1))*m(2,3);
        s = s + double(x(i+1,j-1))*m(3,1);
        s = s + double(x(i+1,j))*m(3,2);
        s = s + double(x(i+1,j+1))*m(3,3);
        y(i,j) = s;
    end
end
(Note that I removed your nine separate variables, which I think is cleaner, and I also removed a lot of unnecessary brackets!)
A simpler implementation would be:
for i = 2:H-1
    for j = 2:W-1
        s = double(x(i-1:i+1,j-1:j+1)) .* m;
        y(i,j) = sum(s(:));
    end
end
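For what it's worth, once x is cast to double the whole double loop can be replaced by a single call to filter2, which performs exactly this kind of correlation (a sketch; note that the border handling differs slightly, since filter2 zero-pads the edges while the loop leaves a one-pixel border of zeros):

% filter2 computes 2-D correlation, matching the indexing used in the loop
y = filter2(m, double(x), 'same');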
I'm trying to quantize a set of double samples with a 128-level uniform quantizer, and I want my output to be double as well. When I try to use "quantize", MATLAB gives an error: Inputs of class 'double' are not supported. I tried "uencode" as well, but its answer was nonsense. I'm quite new to MATLAB and I've been working on this for hours. Any help appreciated. Thanks.
uencode is supposed to give integer results; that's the point of it. The key issue is that it assumes a symmetric input range, from -x to +x, where x is the value of largest magnitude in your data set. So if your data runs from 0 to 10, the result looks like nonsense because the values are quantized over the range -10 to 10.
In any event, you actually want both the encoded values and the quantized values. I wrote a simple function to do this. It even has little help instructions (just type "help ValueQuantizer"). I also made it flexible, so it should work with any data size (assuming you have enough memory): a vector, a 2-D array, 3-D, 4-D, etc.
Here is an example of how it works. The input is drawn from a uniform distribution on [-0.5, 3.5], which shows that, unlike uencode, my function works with non-symmetric data and with negative values:
a = 4*rand(2,4,2) - .5
[encoded_vals, quant_values] = ValueQuantizer(a, 3)
produces
a(:,:,1) =
0.6041 2.1204 -0.0240 3.3390
2.2188 0.1504 1.4935 0.8615
a(:,:,2) =
1.8411 2.5051 1.5238 3.0636
0.3952 0.5204 2.2963 3.3372
encoded_vals(:,:,1) =
1 4 0 7
5 0 3 2
encoded_vals(:,:,2) =
4 5 3 6
1 1 5 7
quant_values(:,:,1) =
0.4564 1.8977 -0.0240 3.3390
2.3781 -0.0240 1.4173 0.9368
quant_values(:,:,2) =
1.8977 2.3781 1.4173 2.8585
0.4564 0.4564 2.3781 3.3390
So you can see it returns the encoded values as integers (just like uencode, but without the odd symmetric assumption). Unlike uencode, it returns everything as double rather than converting to uint8/16/32. The important part is that it also returns the quantized values, which is what you wanted.
Here is the function:
function [encoded_vals, quant_values] = ValueQuantizer(U, N)
% ValueQuantizer uniformly quantizes and encodes the input into N bits.
% It then returns the unsigned integer encoded values and the actual
% quantized values.
%
% encoded_vals = ValueQuantizer(U,N) uniformly quantizes and encodes data
% in U. The output range is integer values in the range [0 2^N-1].
%
% [encoded_vals, quant_values] = ValueQuantizer(U, N) uniformly quantizes
% and encodes data in U. encoded_vals range is integer values [0 2^N-1];
% quant_values shows the original data U converted to the quantized levels
% representing the numbers.

if (N < 2)
    disp('N is out of range. N must be >= 2')
    return;
end

quant_values = double(U(:));
max_val = max(quant_values);
min_val = min(quant_values);

% quantize the data
quanta_size = (max_val - min_val) / (2^N - 1);
quant_values = (quant_values - min_val) ./ quanta_size;

% reshape the data
quant_values = reshape(quant_values, size(U));
encoded_vals = round(quant_values);

% return the original numbers in their new quantized form
quant_values = (encoded_vals .* quanta_size) + min_val;
end
As far as I can tell this should always work, but I haven't done extensive testing. Good luck.
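For the original question (a 128-level uniform quantizer whose output stays double), the call would use N = 7, since 2^7 = 128 levels; here samples stands for your vector of double samples:

% 7 bits gives 2^7 = 128 uniform levels; both outputs are double
[codes, quantized] = ValueQuantizer(samples, 7);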
I have a problem with matrix type conversion.
I want to extract SIFT features from an image using the VLFeat function "vl_covdet".
Here are the details:
Input images = <141x142x3 uint8>
Because vl_covdet can only read a single channel, of type single, I pass the R channel of my input image to vl_covdet:
R_input_Images = Input images(:,:,1) <141x142 uint8>
R_Single_Images= im2single(R_input_Images);
[frames, descrs,info] = vl_covdet(R_Single_Images,'Method','multiscalehessian','EstimateAffineShape', false,'EstimateOrientation', true, 'DoubleImage', false, 'Verbose');
And now I have the features:
descrs = <128x240 single>, with values ranging from 0 to 0.368.
But to compute the BoW I have to use k-means clustering from VLFeat ("vl_hikmeans"), which requires uint8 input:
descrs must be of class UINT8.
So then I tried to convert it to uint8:
descrs=uint8(descrs);
Now
descrs = <128x240 uint8> and all the values become 0.
What do I have to do now?
values are ranging from 0 - 0.368
Well, if you round those to integers, it's no surprise they all become zero.
Since an image in floating-point format has the range 0-1, and in uint8 format has the range 0-255, try
descrs = uint8(descrs * 255);
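As a side note, since these particular descriptors only reach about 0.368, multiplying by 255 uses only part of the uint8 range. If you would rather spread them over the full range, a small alternative sketch is to normalise by the observed maximum (this changes the scale, so apply the same normalisation to every image):

% Map the observed maximum descriptor value to 255 before converting
descrs_u8 = uint8(255 * double(descrs) / max(double(descrs(:))));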
How can I modify the dynamic range of a grayscale image (currently [-30000 30000]) in MATLAB so that it lies in [-3000 15000]?
You can use the second argument of imagesc to do that:
imagesc(rand(10),[-3000 15000])
colormap('gray')
Simple linear interpolation along with some vector arithmetic:
x1 = img(i,j)
O1 = -30000   % min of the original range in img
O2 = 30000    % max of the original range in img
T1 = -3000    % min of the target range
T2 = 15000    % max of the target range
x2 = ((x1 - O1) * (T2 - T1)) / (O2 - O1) + T1   % value in the new range
Using the above equation and a vectorized pass over the matrix, you can convert all the values. I leave that part to you.
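For reference, a possible vectorized version of that mapping (a sketch, assuming img is a numeric matrix holding values in [-30000, 30000]):

O1 = -30000;  O2 = 30000;   % original range
T1 = -3000;   T2 = 15000;   % target range

% Apply the linear mapping to every pixel at once
img_new = (double(img) - O1) * (T2 - T1) / (O2 - O1) + T1;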