Recalculate DICOM Window Center and Width on Subtraction of Dynamic MRI Sequence

I'm trying to calculate and present the subtraction images of a dynamic MRI sequence. However, I've looked for quite some time and I can't find how to relate the individual Rescale Slope and Intercept, or the Window Center and Width fields, to the corresponding fields of the new subtracted image.
I'm sorry if it is a repost but I can't find the answer for this particular problem.
I guess that for Slope and Intercept I probably should just apply the old ones, subtract the images and make sure they are within uint16 range, but what about Window Center and Width?
Thanks in advance

About Rescale Slope and Intercept:
You can apply the original values, provided they are equal in the slices you subtract from each other. If they are not equal, you will have to rescale one of the slices to the slope/intercept of the other one before doing the subtraction; otherwise the subtraction will yield wrong grayscales. The resulting subtracted image is then assigned the slope and intercept of the slice you rescaled the other one to.
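A minimal MATLAB sketch of that rescaling step, assuming the stored slices slice1 and slice2 and their header values slope1/intercept1 and slope2/intercept2 have already been read (the variable names are only illustrative):

% Rescale slice 2 into the slope/intercept of slice 1, i.e. find the stored
% values that represent slice 2's real-world values under slice 1's parameters.
slice2in1 = (double(slice2) * slope2 + intercept2 - intercept1) / slope1;

% Both slices now share slice 1's rescale parameters, so subtract directly.
diffStored = double(slice1) - slice2in1;

% Clamp into the uint16 range; the subtracted image keeps slice 1's
% Rescale Slope and Intercept.
diffStored = uint16(min(max(diffStored, 0), 65535));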
About Window Center and Width:
There is no answer which is right or wrong. Windowing depends on the taste of the person viewing the images - ask three physicians and receive four different answers ;-)
I would rather recommend calculating new values from the histogram of the subtracted image than trying to derive them from the original slices. Subtraction means that you eliminate tissue. The original values were probably adjusted so that this tissue is visible; now that you have subtracted it, you want a window that emphasizes the vessels - the rest is just noise.
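A rough sketch of deriving new window values from that histogram - here realDiff is assumed to be the subtracted image in real-world units, and the 1%/99% cut-offs are only one possible heuristic:

% Clip the darkest and brightest 1% of the pixels and window the rest.
v  = sort(double(realDiff(:)));
lo = v(max(1, round(0.01 * numel(v))));   % ~1st percentile
hi = v(round(0.99 * numel(v)));           % ~99th percentile
windowWidth  = max(hi - lo, 1);           % Window Width  (0028,1051)
windowCenter = (hi + lo) / 2;             % Window Center (0028,1050)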

Related

Surface from level curves of unknown levels in MATLAB

I want to obtain an n*m matrix with an approximated "height" at each discrete point. The input is a picture (see link below) of the contours from a map; each contour line represents a 5 m increase or decrease in height.
My thoughts:
I imported the picture as a logical png into a matrix called A, which means that every contour line in the matrix is a connected strip of '1's and everything else is 0.
My initial thought was to start in the upper left corner of the matrix, set that height to zero, declare a new matrix 'height', and fill in height(:,1) by adding 5 m each time a '1' is met in the A matrix. Knowing the whole first column, I would then, for each row, start from the left and add 5 m each time a '1' is met.
I quickly realized, however, that this wouldn't work, since there is no way for the algorithm to know whether it should add or subtract height, i.e. whether we are running uphill or downhill.
If I could somehow approximate the gradient from the intensity of the contour lines, that would be great; it would still be possible to mistake an uphill for a downhill and vice versa, but then I could manually decide which of the two cases is true.
Picture:
WORK IN PROGRESS
%% Read and binarize the image
I = imread('https://i.stack.imgur.com/pRkiY.jpg');
I = rgb2gray(I);
I = I > graythresh(I)*255;
%% Get skeleton, i.e. the lines!
sk = bwmorph(~I, 'skel', Inf);
%% Lines are too thin, dilate them
dilated = ~imdilate(sk, strel('disk', 2, 4));
%% Label the image!
test = bwlabel(dilated, 8);
imshow(test, []); colormap(plasma); % plasma is a File Exchange colormap; use the built-in parula if you prefer.
Missing: label each adjacent area with a number that is +1 (or -1) relative to its neighbours (no idea how to do this yet).
Missing: Interpolate flat areas. This should be doable once the altitudes are known. One can set the pixels in the skeleton image to the altitudes and interpolate the rest using griddata, which will be slow, but still doable.
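A small sketch of that griddata step, assuming the labelling problem above has been solved so that a (hypothetical) matrix alt, the same size as sk, holds the altitude at every skeleton pixel:

[r, c] = find(sk);                               % coordinates of the contour-line pixels
[C, R] = meshgrid(1:size(sk,2), 1:size(sk,1));   % full pixel grid to fill in
height = griddata(c, r, double(alt(sk)), C, R, 'linear');   % slow, but simple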
Disclaimer: not full answer yet, feel free to edit or reuse the code in this answer to further it!

I have a histogram plot how to chose the appropriate point?

I wrote code to obtain a threshold pixel value of an image for imaging particles, and I got the plot attached below. I want to choose the point where there is a sudden jump in the value; this will be my threshold value. I can do this manually by looking at the plot, but how can I do it automatically in code?
I was thinking of sorting the values and finding the frequency, then looping over them to compare each value with the previous one. In that case, what should I choose as the minimum difference between two consecutive values?
What other method should I use?
Here is the Image:
Firstly, what do you mean by "sudden jump"? If I understand correctly, it means that there is a big difference (decrease) in pixel counts between two adjacent gray levels. In that case you can just right-shift the histogram vector and subtract the two vectors, which gives a vector containing the differences between adjacent gray levels. Then you can choose a threshold on those differences. That's all.
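A minimal MATLAB sketch of that idea, where counts is assumed to be your histogram vector (e.g. counts = imhist(I)):

d = diff(counts);        % difference between adjacent gray levels
[~, idx] = min(d);       % position of the largest drop
threshold = idx;         % index of the histogram bin just before the jump
% or, keep every drop larger than some chosen magnitude:
% jumps = find(d < -chosenMinimumDrop);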

Problems in implementing the integral image in MATLAB

I tried to implement the integral image in MATLAB by the following:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem here is that the values in ii_im go beyond the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image which I am not sure is the correct result. Am I correct here?
You're implementing the integral image calculations right, but I don't understand why you would want to visualize it - especially since the sums will go beyond any normal integer range. This is expected as you are performing a summation of intensities bounded by larger and larger rectangular neighbourhoods as you move to the bottom right of the image. It is inevitable that you will get large numbers towards the bottom right. Also, you will obviously get a white image when trying to show this image because most of the values will go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter which dimension you specify first (2 then 1, or 1 then 2); as long as you cumulatively sum along both dimensions, the summation over each bounded area is the same.
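A quick way to convince yourself of this, using the built-in cameraman.tif image:

A = double(imread('cameraman.tif'));
isequal(cumsum(cumsum(A, 1), 2), cumsum(cumsum(A, 2), 1))   % returns true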
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts dark at the top and becomes brighter towards the bottom right of the image. Remember, each point in the integral image holds the total sum of the pixel intensities in the rectangle spanned from the top-left corner of the image to that point. Therefore, as we move further down and to the right in the integral image, the total sum should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
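As one illustration of such direct use, the sum over any rectangle im(r1:r2, c1:c2) can be read off the integral image with four lookups; padding a zero row and column keeps the corner indices in bounds (the rectangle coordinates below are arbitrary):

im = double(imread('cameraman.tif'));
ii = cumsum(cumsum(im, 1), 2);
iiPad = padarray(ii, [1 1], 0, 'pre');     % zero row/column on the top and left

r1 = 50; r2 = 80; c1 = 100; c2 = 140;      % some rectangle of interest
blockSum = iiPad(r2+1, c2+1) - iiPad(r1, c2+1) ...
         - iiPad(r2+1, c1) + iiPad(r1, c1);

assert(blockSum == sum(sum(im(r1:r2, c1:c2))));   % sanity check against direct summation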
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will make most of the pixels have the same contrast and this is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.

Matlab - Concatenation of overlapping blocks with weighted average

I'm looking for a quick way to combine overlapping blocks into one image. Assume the size of the full image and the coordinates of each block within the full image are known. Also assume the blocks are regularly spaced both horizontally and vertically.
The catch - in the overlapping region, a pixel in the output image should get a value according to a weighted average of the corresponding pixels in the overlapping blocks. The weights should be proportional to the distance from the block center.
So, for example, take a pixel location p (relative to the full image coordinates) in the overlapping region between block B1 and B2. Assume the overlap region is due to a horizontal shift only of size h. If B1(p) and B2(p) are the values at that location as they appear in blocks B1,B2, and d1,d2 are the respective distances of p from the center of blocks B1 and B2 then in the output image O the location p will get O(p) = (h-d1)/h*B1(p) + (h-d2)/h*B2(p).
Note that generally, there can be up to 4 overlapping blocks in any region.
I'm looking for the best way to do this in Matlab. Hopefully, for any choice of distance function.
blockproc and the like can help split an image into blocks, but only allow for very basic combination of the results. imfuse comes close to what I need, but offers only simple non-weighted alpha blending. bwdist seems useful, but I haven't figured out the most efficient way to put it to use.
You should use the command im2col.
Once you have all your patches as columns of one matrix, you'll be able to work on the columns (filtering per patch) and on the rows (filtering between patches).
It will be trickier than the classic usage of im2col but it should work.
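im2col aside, a direct sketch of the weighted averaging described in the question is to accumulate the weight-multiplied blocks together with a matching weight map, then divide. Here blocks, pos, H and W are assumed inputs (illustrative names), and the per-block weight simply decreases with distance from the block centre:

% blocks - cell array of equally sized blocks (bh x bw each)
% pos    - K x 2 array, pos(k,:) = [row col] of block k's top-left corner
% H, W   - size of the full output image
[bh, bw] = size(blocks{1});
acc  = zeros(H, W);                          % weighted sum of block pixels
wacc = zeros(H, W);                          % sum of weights
[cj, ci] = meshgrid(1:bw, 1:bh);
d = hypot(ci - (bh+1)/2, cj - (bw+1)/2);     % distance from the block centre
w = max(d(:)) - d + eps;                     % any decreasing weight function works
for k = 1:numel(blocks)
    r = pos(k,1) + (0:bh-1);
    c = pos(k,2) + (0:bw-1);
    acc(r, c)  = acc(r, c)  + w .* double(blocks{k});
    wacc(r, c) = wacc(r, c) + w;
end
O = acc ./ wacc;                             % weighted average in the overlaps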

How to find the distance between the only two points in an image produced by a grating like substance?

I need to find the distance between the two points. I can find the distance between them manually with the pixel-to-cm converter in the Image Processing Toolbox, but I want code that detects the point positions in the image and calculates the distance.
More accurately, the image contains only three points: one in the middle and the other two approximately equidistant from it.
There might be a better way than this, but I hacked something similar together last night.
Use bwboundaries to find the objects in the image (the contiguous regions in a black/white image).
The second returned matrix, L, is the same image but with the regions numbered. So, for the first point, you want to isolate all the pixels belonging to it:
L2 = (L==1)
Now find the center of that region (for object 1).
x1 = (1:size(L2,2))*sum(L2,1)'/sum(L2(:));   % mean column index of the region's pixels
y1 = (1:size(L2,1))*sum(L2,2)/sum(L2(:));    % mean row index of the region's pixels
Repeat that for all the regions in your image. You should have the center of mass of each point. I think that should do it for you, but I haven't tested it.
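For completeness, a sketch of the same idea using regionprops, which returns the centroids directly; the file name and the pixel-to-cm factor below are placeholders:

I = imread('points.png');                  % placeholder file name
if size(I, 3) == 3, I = rgb2gray(I); end   % convert to grayscale if needed
bw = imbinarize(I);                        % binary image of the points
stats = regionprops(bw, 'Centroid');
ctr = vertcat(stats.Centroid);             % one [x y] row per detected point
dPix = hypot(ctr(1,1) - ctr(2,1), ctr(1,2) - ctr(2,2));   % distance in pixels
dCm  = dPix * pixelsToCm;                  % pixelsToCm: your known conversion factor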