I am calculating the Local Ternary Pattern of an image. My code is given below. Am I going in the right direction or not?
function [ I3 ] = LTP(I2)
m = size(I2,1);
n = size(I2,2);
for i = 2:m-1
    for j = 2:n-1
        J0 = I2(i,j);
        I3(i-1,j-1) = I2(i-1,j-1) > J0;
    end
end
I2 is the image LTP is applied to.
This isn't quite correct. Here's an example of LTP given a 3 x 3 image patch and a threshold t:
[Figure: worked example of LTP on a 3 x 3 patch with threshold t (source: hindawi.com)]
A pixel in the window is assigned 0 when its intensity lies between c - t and c + t, where c is the intensity of the centre pixel. Because the centre intensity in this window is 34 (with t = 5), the range is [29,39]. Any value above 39 gets assigned 1 and any value below 29 gets assigned -1. Once you determine the ternary codes, you split them up into upper and lower patterns. For the upper pattern, any value that was assigned -1 becomes 0; for the lower pattern, any value that was assigned -1 becomes 1, and any value that was 1 becomes 0. The final pattern is read starting from the east location with respect to the centre (row 2, column 3), then going around counter-clockwise. Therefore, you should probably modify your function so that you're outputting both the lower and upper pattern images.
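Since the figure isn't reproduced here, a small made-up window (centre 34, t = 5) illustrates the idea; the neighbour values are invented purely for illustration:

window       ternary code   upper pattern   lower pattern
54 10 35      1 -1  0        1 0 0           0 1 0
40 34 30      1  0  0        1 0 0           0 0 0
20 25 44     -1 -1  1        0 0 1           1 1 0

Reading each pattern from the east neighbour counter-clockwise gives the bit string 00011001 (decimal 25) for the upper pattern and 00100110 (decimal 38) for the lower pattern.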
Let's write the corrected version of your code. Bear in mind that I will not give an optimized version; let's get a basic algorithm working first, and it'll be up to you to decide how to optimize it. BTW, your function was not defined properly: neither function names nor variable names may contain spaces. MATLAB interprets each space-separated word as a separate variable or function, and that's not what you want. Assuming your neighbourhood size is 3 x 3 and your image is grayscale, try something like this:
function [ ltp_upper, ltp_lower ] = LTP(im, t)
    %// Get the dimensions
    rows = size(im,1);
    cols = size(im,2);

    %// Reordering vector - Essentially for getting binary strings
    reorder_vector = [8 7 4 1 2 3 6 9];

    %// For the upper and lower LTP patterns
    ltp_upper = zeros(size(im));
    ltp_lower = zeros(size(im));

    %// For each pixel in our image, ignoring the borders...
    for row = 2 : rows - 1
        for col = 2 : cols - 1
            cen = double(im(row,col)); %// Get centre - cast to double so cen - t can go negative
            %// Get neighbourhood - cast to double for better precision
            pixels = double(im(row-1:row+1,col-1:col+1));

            %// Get ranges and determine LTP
            out_LTP = zeros(3, 3);
            low = cen - t;
            high = cen + t;
            out_LTP(pixels < low) = -1;
            out_LTP(pixels > high) = 1;
            out_LTP(pixels >= low & pixels <= high) = 0;

            %// Get upper and lower patterns
            upper = out_LTP;
            upper(upper == -1) = 0;
            upper = upper(reorder_vector);

            lower = out_LTP;
            lower(lower == 1) = 0;
            lower(lower == -1) = 1;
            lower = lower(reorder_vector);

            %// Convert to a binary character string, then use bin2dec
            %// to get the decimal representation
            upper_bitstring = char(48 + upper);
            ltp_upper(row,col) = bin2dec(upper_bitstring);

            lower_bitstring = char(48 + lower);
            ltp_lower(row,col) = bin2dec(lower_bitstring);
        end
    end
end
Let's go through this code slowly. First, I get the dimensions of the image so I can iterate over each pixel; bear in mind that I'm assuming the image is grayscale. I then allocate space to store the upper and lower LTP patterns per pixel, as we will need to output these to the user. I have decided to ignore the border pixels: wherever the window around a pixel would go out of bounds, that location is skipped.
Now, for each valid pixel within the borders of the image, we extract the pixel neighbourhood. I convert it to double precision to allow for negative differences, as well as for better precision. I then calculate the low and high ranges and create an LTP pattern following the guidelines above.
Once I calculate the LTP pattern, I create two versions of it, upper and lower. For the upper pattern, any value of -1 is mapped to 0; for the lower pattern, any -1 is mapped to 1 and any value that was 1 in the original window is mapped to 0. After this, I extract the bits in the order I laid out - starting from the east, going counter-clockwise. That's the purpose of reorder_vector, as it lets us grab exactly those locations; they now become a 1D vector.
This 1D vector is important, as we now need to convert it into a character string so that we can use bin2dec to get the decimal representation. These decimal numbers for the upper and lower LTPs are the final output, and we place them in the corresponding positions of both output variables.
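For example, assuming you have the Image Processing Toolbox sample image available and pick an illustrative threshold of 5, you would call the function like so:

im = imread('cameraman.tif'); %// any grayscale image will do
[upper, lower] = LTP(im, 5);  %// threshold t = 5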
This code is untested, so it'll be up to you to debug this if it doesn't work to your specifications.
Good luck!
Here is an example of convolution I was given (the worked figure isn't reproduced here):
I have two questions here:
Why is the vector x padded with two 0s on each side, given that the length of the kernel h is 3? If x were padded with only one 0 on each side, the middle element of the kernel would always stay within the length of x, so why not pad with one 0 on each side?
Explain the following output to me:
>> x = [1, 2, 1, 3];
>> h = [2, 0, 1];
>> y = conv(x, h, 'valid')
y =
3 8
>>
What is 'valid' doing here in the context of the previously shown mathematics on vectors x and h?
I can't speak as to the amount of zero padding that is proper ... That being said, any zero padding is making up data that is not there. This isn't necessarily wrong, but you should be aware that the values computed from it may be biased. Sometimes you care about this, sometimes you don't. Introducing 1 zero (in this case) would leave the middle kernel value always over real data, but why should that be the stopping criterion? Importantly, adding 2 zeros still leaves one multiplication of values that are actually present in both the data and the kernel (the x[0]*h[0] and x[3]*h[2] terms - using 0-based indexing). Adding a 3rd zero (or more) would just yield zeros in the output, since 3 is the length of the kernel. In other words, zero padding will always yield an output that is partially (but not completely) based on the actual data, for any padding from n = 1 to n = length(h) - 1 (in this case either 1 or 2).
Even though zero padding with length 2 or 1 still involves multiplications over real data, some output values are summed over "fake" data (terms multiplied with a padded zero). In this case MATLAB gives you 3 options for how you want the data returned. First, you can get the full convolution, which includes values that are biased because they sum in padded zeros that aren't really in the data. Alternatively, you can get same, which means the length of the output is the length of the data: y = [4 3 8 1]. This corresponds to padding with 1 zero, but note that for longer kernels you could technically get other lengths between full and same; MATLAB just doesn't return those for you.
Finally, and probably most important to understand out of all this, you have the valid option. In your example, only 2 samples of the output are computed from summations over real data alone (i.e. from multiplying samples of the kernel with samples of x, never with padded zeros). More specifically:
y[2] = h[2]*x[0] + h[1]*x[1] + h[0]*x[2] = 1*1 + 0*2 + 2*1 = 3 // 0-based indexing as above
y[3] = h[2]*x[1] + h[1]*x[2] + h[0]*x[3] = 1*2 + 0*1 + 2*3 = 8
Note that none of the other y values are computed with only h and x; they all involve a padded zero, which is not necessarily indicative of the real data. For example:
y[4] = h[2]*x[2] + h[1]*x[3] + h[0]*0 <= padded zero
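You can confirm all three shape options directly in MATLAB:

x = [1, 2, 1, 3];
h = [2, 0, 1];
conv(x, h, 'full')  % [2 4 3 8 1 3] - every sum, including fully padded ones
conv(x, h, 'same')  % [4 3 8 1]     - central part, same length as x
conv(x, h, 'valid') % [3 8]         - only sums taken entirely over real data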
So my computer is not too strong... to say the least.
Yet I want to compute the median of all pixels over an entire specific movie.
I was able to do it for a sequence of frames in memory, but I am not sure how to do it when reading more frames each time. How do I give the median weight?
(Like, I'll read 100 frames each time, but the median has to update according to something like: current median * 100 * number of reads so far + 100 * current image...)
I have this code:
mov = VideoReader('MVI_3478.MOV');
seq = read(mov, [1 frames]);
% create background
channels = size(seq, 3);
height = size(seq,1);
width = size(seq,2);
BG = zeros(height, width, channels, 'uint8');
for c = 1:channels
    for y = 1:height
        for x = 1:width
            BG(y,x,c) = median(seq(y,x,c,:));
        end
    end
end
and my question is, given that I will add another loop above everything, how to give median weight?
Thanks!
There is no way to calculate the median this way; the required information is lost.
Example:
median([1,2,3,4,5,6,7]) is 4
median([1,2,3,3,5,6,7]) is 3
But split each sequence into two parts:
median([1,2,3]) is 2
median([4,5,6,7]) is 5.5
median([3,5,6,7]) is 5.5
Thus, for both sequences you get the same partial results, 2 and 5.5, while the overall median is 4 in one case and 3 in the other.
The only possibility I see is some binary-search approach:
smaller = 0;
larger = 0;
equal = 0;
el = size(seq, 4); % number of frames, i.e. values per pixel
while (smaller >= el/2 || larger > el/2 || equal == 0)
    guess = ... % filled in by the binary-search strategy
    smaller = 0;
    larger = 0;
    equal = 0;
    for c = 1:channels
        for y = 1:height
            for x = 1:width
                s = seq(y,x,c,:);
                smaller = smaller + numel(s(s < guess));
                larger = larger + numel(s(s > guess));
                equal = equal + numel(s(s == guess));
            end
        end
    end
end
This is only a sketch; the code has to be completed, and guess has to be filled in with some binary-search strategy.
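One possible way to drive guess, assuming 8-bit intensities and a variable total holding the relevant count (both assumptions for illustration), is plain bisection over the value range:

lo = 0; hi = 255;                % 8-bit intensity range
while lo < hi
    guess = floor((lo + hi) / 2);
    % ... recompute smaller/larger/equal for this guess as above ...
    if smaller > total/2
        hi = guess - 1;          % median lies below the guess
    elseif larger > total/2
        lo = guess + 1;          % median lies above the guess
    else
        break;                   % guess straddles the median
    end
end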
In the case of a large number of frames, calculating the median progressively is a problem, since the median is a global order statistic and does not decompose into partial results. The classical method is to use the fact that we are working with 8-bit grayscale values (256 levels). Thus, for every pixel p(x,y) one maintains a histogram with 256 bins, each bin a counter able to count up to n (the number of frames).
Thus at each update we will have:
value = p(x,y,i); %for the ith frame
H(x,y,value+1) = H(x,y,value+1) + 1; %update the histogram (value+1 since MATLAB indexing starts at 1)
and then, to extract the median, accumulate the bin counts and pick the first value where the cumulative count reaches half the total: https://math.stackexchange.com/questions/202302/how-to-calculate-median-and-standard-deviation-from-histogram
The size of each counter can be chosen based on the number of frames in the video: N = ceil(log2(n)) bits. The median search is now simplified, since it is a constant-time scan within a 256-bin histogram. This also helps when combining many histograms, since the search remains constant time, independent of the number of frames.
Thus, finally, the total size of your histograms would be X * Y * 256 * N bits, where X and Y are the dimensions of your image.
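A minimal MATLAB sketch of this scheme, assuming 8-bit grayscale frames, a variable frame holding each frame as it is read, and nFrames for the total frame count (all names are illustrative):

H = zeros(height, width, 256, 'uint32'); % one 256-bin histogram per pixel
% For every frame read (in chunks or one at a time), update the histograms:
for v = 0:255
    H(:,:,v+1) = H(:,:,v+1) + uint32(frame == v); % count pixels with value v
end
% After the last frame, the median is the first bin whose cumulative
% count reaches half the number of frames:
cumH = cumsum(double(H), 3);
[~, bin] = max(cumH >= nFrames/2, [], 3); % index of the first crossing
BG = uint8(bin - 1);                      % back to 0-255 intensities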
This is a bit more of a general question but, no matter how many times I read the description of MATLAB's im2col function, I cannot fully understand it. I need it for computational efficiency, because MATLAB is awful with nested for loops. Here's what I'm attempting to do, using nested for loops:
%{
[TRIMMED] = TM_FILTER(IMAGE, FILTER_SIZE, PERCENT)
Takes a 2-D array and returns the array, filtered with a
square trimmed mean filter with length/width equal to FILTER_SIZE and percent equal to PERCENT.
%}
function [trimmed]=tm_filter(image, filter_size, percent)
if rem(filter_size, 2) == 0 %make sure filter has a center pixel
    error('filter size must be odd numbered'); %error and return if size is even
    return
end
if percent > 100 || percent < 0
    error('Percentage must be in [0, 100]');
    return
end
[rows, columns]=size(image); %figure out pixels needed
n=(filter_size-1)/2; %n is pixel distance from center pixel to boundaries
padded=(padarray(image, [n,n],128)); %padding on boundaries so center pixel always has neighborhood
for i = 1+n:rows %rows from first non-padded entry to last non-padded entry
    for j = 1+n:columns %columns from first non-padded entry to last non-padded entry
        subimage = padded(i-n:i+n, j-n:j+n); %neighborhood same size as filter
        average = trimmean(trimmean(subimage, percent), percent); %computes trimmed mean of neighborhood as trimmed mean of vector of trimmed means
        trimmed(i-n, j-n) = average; %stores averaged pixel in new array
    end
end
trimmed=uint8(trimmed); %converts image to gray levels from 0-255
Here is the code you want: note the entire nested loop was replaced with a single statement.
%{
[TRIMMED] = TM_FILTER(IMAGE, FILTER_SIZE, PERCENT)
Takes a 2-D array and returns the array, filtered with a
square trimmed mean filter with length/width equal to FILTER_SIZE and percent equal to PERCENT.
%}
function [trimmed] = tm_filter(image, filter_size, percent)
    if rem(filter_size, 2) == 0 %make sure filter has a center pixel
        error('filter size must be odd numbered'); %error and return if size is even
        return
    end
    if percent > 100 || percent < 0
        error('Percentage must be in [0, 100]');
        return
    end
    %note: im2col wants the block size as a 1 x 2 vector, and the row of
    %trimmed means has to be reshaped back into an image
    trimmed = uint8(reshape(trimmean(im2col(image, [filter_size filter_size]), percent), size(image) - filter_size + 1));
Explanation:
the im2col function turns each region of filter_size x filter_size into a column. Your trimmean function can then operate on all of the regions (columns) in a single operation - much more efficient than extracting each block in turn. Also note this requires only a single application of trimmean - in your original you first apply it to the columns, then again to the row of results, which will actually cause a more severe trim than I think you intended (exclude 50% the first time, then 50% again - feels like excluding 75%; not exactly true, but you get my point). Also, you will find that changing the order of operations (rows then columns vs. columns then rows) changes the result, because the filter is nonlinear.
For example
im = reshape(1:9, [3 3]);
disp(im2col(im, [2 2]))
results in
1 2 4 5
2 3 5 6
4 5 7 8
5 6 8 9
since you took each of the 4 possible blocks of 2x2 from this matrix:
1 4 7
2 5 8
3 6 9
and turned them into columns
Note - with this technique (applied to the unpadded image) you do lose some pixels on the edge; your method added padding so that every pixel (even ones on the edge) has a complete neighborhood, and as such the filter returns an image that is the same size as the original. It's not clear what the effect of padding plus filtering will be near the margin, though, and especially in the corner, where almost 75% of the window is fixed at 128; that is likely to dominate the behavior there.
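If you do want to keep the original size, you can combine the padding from your version with im2col; a minimal sketch, reusing n = (filter_size-1)/2 from your code:

n = (filter_size - 1) / 2;
padded = padarray(image, [n n], 128); % same padding as in the question
trimmed = uint8(reshape(trimmean(im2col(padded, [filter_size filter_size]), percent), size(image)));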
why im2col? why not nlfilter?
>> trimmed = nlfilter( image, [filter_size filter_size],...
       @(x) trimmean( trimmean(x, percent), percent ) );
Are you sure you process the entire image?
i and j only go up to rows and columns respectively. However, when you update trimmed you access i-n and j-n. What about the last n rows and columns?
Why do you apply trimmean twice for each block? Isn't it more appropriate to process the block at once, as in trimmean( x(:), percent )?
I believe the results of trimmean( trimmean(x, percent), percent ) will differ from those of trimmean( x(:), percent ). Have you given it a thought?
A small remark: it is best not to use i and j as variable names in MATLAB, since they shadow the imaginary unit.
I am new to MATLAB and am trying to implement code that performs the same function as histeq, without actually using that function. In my code the image colour changes drastically when it should not change that much. The average intensity in the image (ranging between 0 and 255) is 105.3196. The image is of an open-source pollen particle.
Any help would be much appreciated, the sooner the better. Please keep any explanation simple, as my MATLAB understanding is limited. Thanks.
clc;
clear all;
close all;
pollenJpg = imread ('pollen.jpg', 'jpg');
greyscalePollen = rgb2gray (pollenJpg);
histEqPollen = histeq(greyscalePollen);
averagePollen = mean2 (greyscalePollen)
sizeGreyScalePollen = size(greyscalePollen);
rowsGreyScalePollen = sizeGreyScalePollen(1,1);
columnsGreyScalePollen = sizeGreyScalePollen(1,2);
for i = (1:rowsGreyScalePollen)
    for j = (1:columnsGreyScalePollen)
        if (greyscalePollen(i,j) > averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) + (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 255)
                greyscalePollen(i,j) = 255;
            end
        elseif (greyscalePollen(i,j) < averagePollen)
            greyscalePollen(i,j) = greyscalePollen(i,j) - (0.1 * averagePollen);
            if (greyscalePollen(i,j) > 0)
                greyscalePollen(i,j) = 0;
            end
        end
    end
end
figure;
imshow (pollenJpg);
title ('Original Image');
figure;
imshow (greyscalePollen);
title ('Attempted Histogram Equalization of Image');
figure;
imshow (histEqPollen);
title ('True Histogram Equalization of Image');
To implement the equalisation algorithm described on the Wikipedia page, follow these steps:
Decide on a binSize to group greyscale values. (This is tweakable: the larger the bin, the less accurate the result compared to the ideal case, but choosing it too small can cause problems on real images.)
Then, calculate the probability of a pixel being a shade of grey:
pixelCount = imageWidth * imageHeight
histogram = all zero
for each pixel in image at coordinates i, j
histogram[floor(pixel / binSize) + 1] += 1 / pixelCount // 1-based arrays, not 0-based
// Note a technicality here: you may need to
// write special code to handle pixels of 255,
// because they will fall in their own bin. Or instead use rounding with an offset.
The histogram in this calculation is scaled (divided by the pixel count) so that the values make sense as probabilities. You can of course factor the division out of the for loop.
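In MATLAB, if you skip the binning (binSize = 1), this scaled histogram is a one-liner; a sketch assuming the greyscalePollen variable from the question:

histogram = imhist(greyscalePollen) / numel(greyscalePollen); % 256 bins, sums to 1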
Now you need to calculate the accumulative sum of this:
histogramSum = all zero // length of histogramSum must be one bigger than histogram
for i = 1 .. length(histogram)
histogramSum[i + 1] = histogramSum[i] + histogram[i]
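In MATLAB, this cumulative sum (including the extra leading entry) is simply:

histogramSum = [0; cumsum(histogram)]; % one element longer than histogram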
Now you have to invert this function, and this is the tricky part. It is best not to calculate an explicit inverse, but to compute it on the fly and apply it directly to the image. The basic idea is to search for the pixel value in histogramSum (find the closest index below), and then do a linear interpolation between that index and the next.
foreach pixel in image at coordinates i, j
hIndex = findIndex(pixel, histogramSum) // You have to write findIndex, it should be simple
equalisationFactor = (pixel - histogramSum[hIndex]) / (histogramSum[hIndex + 1] - histogramSum[hIndex]) * binSize
// This above is the linear interpolation step.
// Notice the technicality that you need to handle:
// histogramSum[hIndex + 1] may be out of bounds
equalisedImage[i, j] = pixel * equalisationFactor
Edit: without drilling into the maths I can't be 100% sure, but I think division-by-zero errors are possible. These can occur if a bin is empty, so that consecutive sums are equal. You therefore need special code to handle this case too. The best you can do is take the factor as halfway between hIndex and hIndex + n, where n is the highest value for which histogramSum[hIndex + n] == histogramSum[hIndex].
And that should be it, once you have dealt with all the technicalities.
The above algorithm is slow (especially in the findIndex step). You may be able to optimize it with a special lookup data structure, but only do that once it's working, and only if necessary.
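For comparison, the standard CDF mapping (no binning, no interpolation) is short enough to write directly in MATLAB; a minimal sketch, again using the greyscalePollen variable from the question:

counts = imhist(greyscalePollen);                          % 256-bin histogram
cdf = cumsum(counts) / numel(greyscalePollen);             % cumulative distribution
equalised = uint8(255 * cdf(double(greyscalePollen) + 1)); % map pixels through the CDF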
One more thing about your Matlab code: the rows and columns are inverted. Because of the symmetry in the algorithm, the result is the same, but it can cause puzzling bugs in other algorithms, and be very confusing if you examine pixel values during debugging. In the pseudocode above I used them the same as you, though.
Relatively few (5) lines of code can do this. I used a low contrast file called 'pollen.jpg' that I found at http://commons.wikimedia.org/wiki/File%3ALepismium_lorentzianum_pollen.jpg
I read it in using your code, run all the above, then do the following:
% find out the index of pixels sorted by intensity:
[gv gi] = sort(greyscalePollen(:));
% create a table of "approximately equal" intensity values:
N = numel(gv);
newVals = repmat(0:255, [ceil(N/256) 1]);
% perform lookup:
% the pixels in sorted order need new values from "equal bins" table:
newImg = zeros(size(greyscalePollen));
newImg(gi) = newVals(1:N);
% if the size of the image doesn't divide into 256, the last bin will have
% slightly fewer pixels in it than the others
When I run this algorithm and then create a composite of the four images (original, your attempt, my attempt, and histeq), I get the following (composite image not reproduced here):
I think it's convincing. The images are not exactly identical - I believe that is because the MATLAB histeq routine ignores all pixels with value 0. Since my code is fully vectorized it is also pretty fast (although not nearly as fast as histeq - about a factor of 15 slower on my image).
EDIT: a bit of explanation might be in order. The repmat command I use to create the newVals matrix creates a matrix that looks like this:
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
0 1 2 3 4 ... 255
...
0 1 2 3 4 ... 255
Since MATLAB stores matrices in column-major ("first index first") order, if you read this matrix with a single index (as I do in the line newVals(1:N)), you access first all the zeros, then all the ones, etc.:
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 ...
So - when I know the indices of the pixels in the order of their intensity (as returned by the second argument of the sort command, which I called gi), then I can easily assign the value 0 to the first N/256 pixels, the value 1 to the next N/256 etc, with the command I used:
newImg(gi) = newVals(1:N);
I hope this makes the code a little easier to understand.
I'm building a program that plays Connect 4, and one of the things to check for is cases where your opponent has the following board positions:
[0,1,1,0,0] or [0,1,0,1,0] or [0,0,1,1,0]
where your opponent is one move away from having three pieces in a row with a blank on either side. If you don't fill one of the middle elements on your next move, your opponent can go there and force a checkmate.
What I have is a board of 42 squares, numbered 1:42. And I created a matrix called FiveCheck, where each row maps to five consecutive board positions. For example:
FiveCheck(34,:) = [board(7),board(14),board(21),board(28),board(35)];
FiveCheck(35,:) = [board(14),board(21),board(28),board(35),board(42)];
are two of the diagonals of the board.
I can test for the possible checkmate with
((sum(FiveCheck(:,2:4),2) == 2) + (sum(FiveCheck,2) == 2)) == 2
And that gives me a vector with 1's indicating that the corresponding FiveCheck row has a possible checkmate. Let's say the 34th element of that vector has a 1, and the pattern for that diagonal (from the example given above) is [0,0,1,1,0]. How do I return 14, the board position I should move to?
Another separate example, if the 35th element of that vector has a 1, and the pattern for that diagonal is [0,1,0,1,0], how do I return 28?
EDIT: I just realized this is impossible without some sort of a map. So I created FiveMap, a matrix the same size of FiveCheck, with the same formulas except the word "board" is removed. For example:
FiveMap(34,:) = [(7),(14),(21),(28),(35)];
FiveMap(35,:) = [(14),(21),(28),(35),(42)];
Since you are dealing with binary vectors of size 5, a very efficient solution might be the use of a look-up table.
Consider board to be a binary matrix. You can filter it with 4 filters (of length 5), representing the horizontal, vertical and two diagonal directions, to identify possible positions you are looking for. Then, for suspicious locations, you can extract the 5 binary bits and use a look-up table of size 32 to get the offset of the position where you should place your piece.
a small example:
% construct LUT
LUT = zeros(32,2); % dx and dy offsets for each entry
LUT(12,:) = [ 1 0 ]; % covering the case [0 1 1 0 0] - put piece 1 to the right of center
% continue constructing LUT here...
horFilt = ones(1, 5);
resp = imfilter( board, horFilt ); % need to think about board's boundaries here...
[yy xx] = find( resp == 2 ); % all locations where the filter caught 2 out of 5
pat = board( yy, xx + [-2 -1 0 1 2] ); % I assume here only one location was found
pat = bin2dec( char( '0' + pat ) ); % convert binary pattern to decimal entry
board( yy + LUT(pat,2) , xx + LUT(pat, 1) ) = ... ; % your move here...
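Alternatively, using the FiveMap matrix from the question's edit: each of the three threat patterns has exactly one empty cell among the middle three positions, so a minimal sketch (with r the index of a row flagged by your test) is:

r = 34;                             % a flagged row, e.g. pattern [0,0,1,1,0]
mid = find(FiveCheck(r, 2:4) == 0); % the single empty middle cell
move = FiveMap(r, 1 + mid);         % map back to a board position -> 14 here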