How to determine the north-west quadrant score (270-360 degrees) to decide the score - MATLAB

I have some MATLAB code, shown below, which is quite confusing. I now need to look at the north-west part of the disk (270-360 degrees): if the fuel type there is F3 or F5, then it is a SHIA. I need to return yes or no for SHIA.
% base path
fireline_path = 'Documents/fireline/';
% EOSD landcover file
landcover_file = fullfile(fireline_path,'landcover/EOSD/083O_lc_1/083O_lc_1.TIF');
[landcover,cmap_landcover,R_landcover0] = geotiffread(landcover_file);
proj_landcover = geotiffinfo(landcover_file);
pixelsize = 25; % 25m UTM grid
% trim raster to area around town of Lake
i_trim = 3001:3500;
j_trim = 3001:3500;
landcover = landcover(i_trim,j_trim);
[x11,y11] = pix2map(R_landcover0,i_trim(1),j_trim(1));
R_landcover = makerefmat(x11,y11,pixelsize,-pixelsize);
% read in damaged properties and get their locations in the grid
damage_shapefile = fullfile(fireline_path,'work/slavelake/damaged_properties_SL.shp');
damage = shaperead(damage_shapefile);
x_damage = [damage(~strcmp({damage.Damage},'Undamaged')).X];
y_damage = [damage(~strcmp({damage.Damage},'Undamaged')).Y];
[i_damage, j_damage] = map2pix(R_landcover,x_damage,y_damage);
ind_damage = sub2ind(size(landcover),round(i_damage),round(j_damage));
% create landcover -> fuel mappings
fuelmappings = zeros(max(landcover(:)),1);
ind_vegclass = [52,81,82,83,100,211,221,231]; % indices of classes present that are vegetation
veg_loadings = [...
3,...'Shrub Low'
3,...'Wetland-treed'
3,...'Wetland-shrub'
1,...'Wetland-herb'
1,...'Herbs'
5,...'Coniferous-dense'
5,...'Broadleaf-dense'
5]; %Mixedwood-dense'
fuelmappings(ind_vegclass) = veg_loadings;
landcover(landcover==0) = 1; % set nodata cells to 1, this will map to no fuel
fuelmap = fuelmappings(landcover);
% create binary fuel map (F3 or F5)
fuelmapbw = fuelmap>=3;
%%
% flag pixels within 1/2 mile of dense areas
shiasum = zeros(size(landcover)); % initialize summary
disp('Percent of damaged/destroyed properties in SHIA:')
densitythresholds = .25;%[.75, .67, .5, .33, .25];
for densitythreshold = densitythresholds
% create filter to calculate shia (create a disk and then zero out unwanted
% pixels)
shiadistmiles = .5; %miles
shiadistmeters = shiadistmiles*unitsratio('meters','miles');
shiadistpix = shiadistmeters/pixelsize;
hshia = fspecial('disk',shiadistpix);
hshia(1:floor(shiadistpix),:) = 0;
xy = -floor(shiadistpix):floor(shiadistpix);
[X,Y] = meshgrid(xy,fliplr(xy));
f70 = -tan(deg2rad(70))*X;
hshia(Y<f70) = 0;
f20 = -tan(deg2rad(20))*X;
hshia(Y>f20) = 0;
hshia(hshia>0) = 1;
hshia = hshia/sum(hshia(:));
close all
shiavals = filter2(hshia,fuelmapbw);
shia = filter2(hshia,fuelmapbw) > densitythreshold;
linkimage(fuelmapbw,shia)
hold on
plot(j_damage,i_damage,'x')
hold off
propsinshia = sum(shia(ind_damage))/numel(ind_damage);
disp(['With threshold of ' num2str(densitythreshold) ':'])
disp(propsinshia * 100)
outfile = [fireline_path 'shia/SlaveLake_'...
num2str(densitythreshold*100,'%02d') '.tif'];
geotiffwrite(outfile,shia,R_landcover,...
'CoordRefSysCode',proj_landcover.GeoTIFFCodes.PCS)
shiasum = shiasum+shia;
end
lookup = [0 fliplr(densitythresholds) * 100];
shia = lookup(shiasum+1);
outfile = [fireline_path 'shia/SlaveLake_v2.tif'];
geotiffwrite(outfile,uint8(shia),R_landcover,...
'CoordRefSysCode',proj_landcover.GeoTIFFCodes.PCS)

Related

Problems with initializing CamShift algorithm

I am using MATLAB to implement a tracker using CamShift. I need to implement it myself, hence I'm not using the built-in method.
I let the user choose the object to track, and after the first iteration the convergence makes it start tracking a different object. Pictures speak louder than words, so:
I choose my face in the first frame:
It converges to my shoulder and starts tracking it:
Now the tracking is overall OK throughout the video, so I think the issue is in the initialization phase.
Some code. Get position from user:
function [pos] = getObjectPosition(F)
f = figure;
imshow(F);
pos = getPosition(imrect(gca));
close(f);
end
Main Loop:
firstFrame = readFrame(input);
pos = getObjectPosition(firstFrame);
while hasFrame(input)
% init next frame
nextFrame = readFrame(input);
[currFrameHSV, ~, ~] = rgb2hsv(nextFrame);
frameHue = double(currFrameHSV(:,:,1));
% init while loop vars
prevX = -1;
prevY = -1;
i = 0;
while (i < numOfIterations) && ~(prevX == pos(1) && prevY == pos(2))
% updating the curr iteration number
i = i + 1;
% getting search window location and dimensions
envRect = round(getEnvRect(pos,pos(4)/yRatio,pos(3)/xRatio,height,width));
[X,Y] = meshgrid(envRect(1):envRect(1) + envRect(3),...
envRect(2):envRect(2) + envRect(4));
% getting the sub image
I = imcrop(frameHue,envRect);
M00 = sum(sum(I));
M10 = sum(sum(X.*I));
M01 = sum(sum(Y.*I));
% getting center mass location
Xc = round(M10/M00);
Yc = round(M01/M00);
% updating object position
prevX = pos(1);
prevY = pos(2);
pos(1) = floor(Xc - pos(3)/2);
pos(2) = floor(Yc - pos(4)/2);
end
% adding the object's bounding box to the frame to write
% show the image
end
getEnvRect method:
function [ envRect ] = getEnvRect( feature,hShift,wShift,rows,cols )
hEnvSize = feature(4) + 2*hShift;
wEnvSize = feature(3) + 2*wShift;
xPatch = feature(1);
xEnv = max(1, xPatch - wShift);
yPatch = feature(2);
yEnv = max(1, yPatch - hShift);
width = min(cols, xEnv + wEnvSize) - xEnv;
height = min(rows, yEnv + hEnvSize) - yEnv;
envRect = [xEnv, yEnv, width, height];
end
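One possible source of the behaviour described above: classic CamShift computes the moments on a back-projected probability map of the target's hue histogram, not on the raw hue channel. A minimal sketch of that initialization (assuming hue values in [0,1] and an arbitrary choice of 16 bins; firstFrame, pos and frameHue are the variables from the main loop above):
% build the target hue histogram once, from the user-selected region
firstHSV = rgb2hsv(firstFrame);
firstHue = firstHSV(:,:,1);
edges = linspace(0,1,17); % 16 hue bins
roiHue = imcrop(firstHue,pos); % region chosen with imrect
roiHist = histcounts(roiHue(:),edges,'Normalization','probability');
% inside the frame loop, back-project before computing the moments
binIdx = discretize(frameHue,edges);
probMap = roiHist(binIdx); % probability that a pixel belongs to the target
% ...then use probMap instead of frameHue when cropping I and computing M00, M10, M01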

Creating a table from a variable inside a for loop

I am writing a for loop to calculate the value of four different variables. The first variable is M. M increases from 10^2 to 10^5,
M = [10^2,10^3,10^4,10^5];
The other three variables needed for the table are shown in the code below.
confmc
confcv
confmcSize/confcvSize
I first create a for loop to iterate through the four different values of M. I then create the table outside of the for loop.
How could I adjust the implementation so that the table displays all four values of M?
randn('state',100)
%%%%%% Problem and method parameters %%%%%%%%%
S = 5; E = 6; sigma = 0.3; r = 0.05; T = 1;
Dt = 1e-2; N = T/Dt; M = [10^2,10^3,10^4,10^5];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for k=1:numel(M)
%%%%%%%%% Geom Asian exact mean %%%%%%%%%%%%
sigsqT= sigma^2*T*(N+1)*(2*N+1)/(6*N*N);
muT = 0.5*sigsqT + (r - 0.5*sigma^2)*T*(N+1)/(2*N);
d1 = (log(S/E) + (muT + 0.5*sigsqT))/(sqrt(sigsqT));
d2 = d1 - sqrt(sigsqT);
N1 = 0.5*(1+erf(d1/sqrt(2)));
N2 = 0.5*(1+erf(d2/sqrt(2)));
geo = exp(-r*T)*( S*exp(muT)*N1 - E*N2 );
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Spath = S*cumprod(exp((r-0.5*sigma^2)*Dt+sigma*sqrt(Dt)*randn(M(k),N)),2);
% Standard Monte Carlo
arithave = mean(Spath,2);
Parith = exp(-r*T)*max(arithave-E,0); % payoffs
Pmean = mean(Parith);
Pstd = std(Parith);
confmc = [Pmean-1.96*Pstd/sqrt(M(k)), Pmean+1.96*Pstd/sqrt(M(k))];
confmcSize = [(Pmean+1.96*Pstd/sqrt(M(k)))-(Pmean-1.96*Pstd/sqrt(M(k)))];
% Control Variate
geoave = exp((1/N)*sum(log(Spath),2));
Pgeo = exp(-r*T)*max(geoave-E,0); % geo payoffs
Z = Parith + geo - Pgeo; % control variate version
Zmean = mean(Z);
Zstd = std(Z);
confcv = [Zmean-1.96*Zstd/sqrt(M(k)), Zmean+1.96*Zstd/sqrt(M(k))];
confcvSize = [(Zmean+1.96*Zstd/sqrt(M(k)))-(Zmean-1.96*Zstd/sqrt(M(k)))];
end
T = table(M,confmc,confcv,confmcSize/confcvSize)
The current code returns
T =
  1×4 table
      M           confmc                 confcv           Var4
    _____   ____________________   ____________________   ______
    1e+05   0.096756     0.1007    0.097306   0.097789    8.1622
How could I change my implementation so that all four values of M are computed?
I just modified a few things. Take a look at the following code.
randn('state',100)
%%%%%% Problem and method parameters %%%%%%%%%
S = 5; E = 6; sigma = 0.3; r = 0.05; T = 1;
Dt = 1e-2; N = T/Dt; M = [10^2,10^3,10^4,10^5];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
confmc = zeros(numel(M), 2);
confcv = zeros(numel(M), 2);
confmcSize = zeros(numel(M), 1);
confcvSize = zeros(numel(M), 1);
for k=1:numel(M)
%%%%%%%%% Geom Asian exact mean %%%%%%%%%%%%
sigsqT= sigma^2*T*(N+1)*(2*N+1)/(6*N*N);
muT = 0.5*sigsqT + (r - 0.5*sigma^2)*T*(N+1)/(2*N);
d1 = (log(S/E) + (muT + 0.5*sigsqT))/(sqrt(sigsqT));
d2 = d1 - sqrt(sigsqT);
N1 = 0.5*(1+erf(d1/sqrt(2)));
N2 = 0.5*(1+erf(d2/sqrt(2)));
geo = exp(-r*T)*( S*exp(muT)*N1 - E*N2 );
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Spath = S*cumprod(exp((r-0.5*sigma^2)*Dt+sigma*sqrt(Dt)*randn(M(k),N)),2);
% Standard Monte Carlo
arithave = mean(Spath,2);
Parith = exp(-r*T)*max(arithave-E,0); % payoffs
Pmean = mean(Parith);
Pstd = std(Parith);
confmc(k,:) = [Pmean-1.96*Pstd/sqrt(M(k)), Pmean+1.96*Pstd/sqrt(M(k))];
confmcSize(k,1) = [(Pmean+1.96*Pstd/sqrt(M(k)))-(Pmean-1.96*Pstd/sqrt(M(k)))];
% Control Variate
geoave = exp((1/N)*sum(log(Spath),2));
Pgeo = exp(-r*T)*max(geoave-E,0); % geo payoffs
Z = Parith + geo - Pgeo; % control variate version
Zmean = mean(Z);
Zstd = std(Z);
confcv(k,:) = [Zmean-1.96*Zstd/sqrt(M(k)), Zmean+1.96*Zstd/sqrt(M(k))];
confcvSize(k,1) = [(Zmean+1.96*Zstd/sqrt(M(k)))-(Zmean-1.96*Zstd/sqrt(M(k)))];
end
T = table(M',confmc,confcv,confmcSize./confcvSize)
In short, I just used matrices instead of vectors or scalars as the members of the table. In your code, the variables (confmc, confcv, confmcSize, confcvSize) were being overwritten on every iteration, so only the results for the last value of M survived.
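As a small follow-up, T is already used for the maturity (T = 1) earlier in the script, and the ratio column gets the default name Var4, so it may be worth storing the table under a different name and passing 'VariableNames' explicitly, for example:
ratio = confmcSize./confcvSize;
resultsTable = table(M',confmc,confcv,ratio, ...
'VariableNames',{'M','confmc','confcv','confRatio'})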

How to find the corners of a rotated object in MATLAB?

I want to find the corners of objects.
I tried the following code:
Vstats = regionprops(BW2,'Centroid','MajorAxisLength','MinorAxisLength',...
'Orientation');
u = [Vstats.Centroid];
VcX = u(1:2:end);
VcY = u(2:2:end);
[VcY id] = sort(VcY); % sorting regions by vertical position
VcX = VcX(id);
Vstats = Vstats(id); % permute according sort
Bv = Bv(id);
Vori = [Vstats.Orientation];
VRmaj = [Vstats.MajorAxisLength]/2;
VRmin = [Vstats.MinorAxisLength]/2;
% find corners of vertebrae
figure,imshow(BW2)
hold on
% C = corner(VER);
% plot(C(:,1), C(:,2), 'or');
C = cell(size(Bv));
Anterior = zeros(2*length(C),2);
Posterior = zeros(2*length(C),2);
for i = 1:length(C) % for each region
cx = VcX(i); % centroid coordinates
cy = VcY(i);
bx = Bv{i}(:,2); % edge points coordinates
by = Bv{i}(:,1);
ux = bx-cx; % move to the origin
uy = by-cy;
[t, r] = cart2pol(ux,uy); % translate in polar coodinates
t = t - deg2rad(Vori(i)); % unrotate
for k = 1:4 % find corners (look each quadrant)
fi = t( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
ri = r( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
[rp, ip] = max(ri); % find farthest point
tc(k) = fi(ip); % save coordinates
rc(k) = rp;
end
[xc,yc] = pol2cart(tc+1*deg2rad(Vori(i)) ,rc); % de-rotate, translate in cartesian
C{i}(:,1) = xc + cx; % return to previous place
C{i}(:,2) = yc + cy;
plot(C{i}([1,4],1),C{i}([1,4],2),'or',C{i}([2,3],1),C{i}([2,3],2),'og')
% save coordinates :
Anterior([2*i-1,2*i],:) = [C{i}([1,4],1), C{i}([1,4],2)];
Posterior([2*i-1,2*i],:) = [C{i}([2,3],1), C{i}([2,3],2)];
end
My input image is:
I got the following output image:
The bottommost object in the image is not detected properly. How can I correct the code? It fails to work for a rotated image.
You can get all the points from the image, then use kmeans clustering to partition them into 8 groups. Once the partitioning is done, you have the points in hand and can pick whatever points you want.
rgbImage = imread('your image') ;
%% crop out the unwanted white background from the image
grayImage = min(rgbImage, [], 3);
binaryImage = grayImage < 200;
binaryImage = bwareafilt(binaryImage, 1);
[rows, columns] = find(binaryImage);
row1 = min(rows);
row2 = max(rows);
col1 = min(columns);
col2 = max(columns);
% Crop
croppedImage = rgbImage(row1:row2, col1:col2, :);
I = rgb2gray(croppedImage) ;
%% Get the white regions
[y,x,val] = find(I) ;
%% use kmeans clustering
[idx,C] = kmeans([x,y],8) ;
%%
figure
imshow(I) ;
hold on
for i = 1:8
xi = x(idx==i) ; yi = y(idx==i) ;
id1=convhull(xi,yi) ;
coor = [xi(id1) yi(id1)] ;
[id,c] = kmeans(coor,4) ;
plot(coor(:,1),coor(:,2),'r','linewidth',3) ;
plot(c(:,1),c(:,2),'*b')
end
Now we are able to capture the regions. The boundary/convex hull points are in hand, and you can do whatever math you want with the points; a small illustration follows below.
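As a hypothetical follow-up (to be placed inside the loop of the snippet above), the four cluster centers in c can be ordered by angle around the region center so that corners can be matched between regions:
cx = mean(c(:,1)); cy = mean(c(:,2)); % center of the four corner estimates
ang = atan2(c(:,2)-cy, c(:,1)-cx); % angle of each corner around the center
[~,order] = sort(ang);
cornersOrdered = c(order,:); % corners ordered consistently by angle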
Did you solve the problem? I looked into it, and the rotation given by 'regionprops' seems to be off. To fix that I've prepared a quick solution: I dilated the image to close the gaps, found the 4 most distant peaks of each vertebra, and then checked whether each peak is on the left or on the right of the centerline (which I obtained by extrapolating from the sorted centroids). This method seems to work for this particular problem.
BW2 = rgb2gray(Image);
BW2 = imbinarize(BW2);
%dilate and erode will help to remove extra features of the vertebra
se = strel('disk',4,4);
BW2_dilate = imdilate(BW2,se);
BW2_erode = imerode(BW2_dilate,se);
sb = bwboundaries(BW2_erode);
figure
imshow(BW2)
hold on
centerLine = [];
corners = [];
for bone = 1:length(sb)
x0 = sb{bone}(:,2) - mean(sb{bone}(:,2));
y0 = sb{bone}(:,1) - mean(sb{bone}(:,1));
%save the position of the centroid
centerLine = [centerLine; [mean(sb{bone}(:,1)) mean(sb{bone}(:,2))]];
[th0,rho0] = cart2pol(x0,y0);
%make sure that the indexing starts at the dip, not at the corner
lowest_val = find(rho0==min(rho0));
rho1 = [rho0(lowest_val:end); rho0(1:lowest_val-1)];
th00 = [th0(lowest_val:end); th0(1:lowest_val-1)];
y1 = [y0(lowest_val:end); y0(1:lowest_val-1)];
x1 = [x0(lowest_val:end); x0(1:lowest_val-1)];
%detect corners, using smooth data to remove noise
[pks,locs] = findpeaks(smooth(rho1));
[pksS,idS] = sort(pks,'descend');
%4 most pronounced peaks are where the corners are
edgesFndCx = x1(locs(idS(1:4)));
edgesFndCy = y1(locs(idS(1:4)));
edgesFndCx = edgesFndCx + mean(sb{bone}(:,2));
edgesFndCy = edgesFndCy + mean(sb{bone}(:,1));
corners{bone} = [edgesFndCy edgesFndCx];
end
[~,idCL] = sort(centerLine(:,1),'descend');
centerLine = centerLine(idCL,:);
%extrapolate the spine centerline
yDatExt= 1:size(BW2_erode,1);
extrpLine = interp1(centerLine(:,1),centerLine(:,2),yDatExt,'spline','extrap');
plot(centerLine(:,2),centerLine(:,1),'r')
plot(extrpLine,yDatExt,'r')
%find edges to the left, and to the right of the centerline
for bone = 1:length(corners)
x0 = corners{bone}(:,2);
y0 = corners{bone}(:,1);
for crn = 1:4
xCompare = extrpLine(y0(crn));
if x0(crn) < xCompare
plot(x0(crn),y0(crn),'go','LineWidth',2)
else
plot(x0(crn),y0(crn),'ro','LineWidth',2)
end
end
end

Matlab surf only points, not lines

I have to draw a hypsometric map on a 3D plot. I have two 1x401 vectors (named xLabels and yLabels) which are the geo coordinates, and a 401x401 matrix (named A) with the altitude data. To plot the data I use:
surf(xLabels, yLabels,A,'EdgeColor','None','Marker','.');
which leads to something like this:
But I would like to have something like this:
On the second image only the surface is plotted, while my plot looks like pillars.
I even tried making my vectors 401x401 using meshgrid, but it did not have any effect.
Do you have any idea what I should change?
#EDIT
I checked the X and Y data. I guess the interval (0.0083) is too small, but when I plot the second (good) of the plots above with the same interval, it draws correctly.
#EDIT2:
sizeX = 4800;
sizeY = 6000;
pixdegree = 0.0083; % 1 pixel is 0.0083 degree on map
intSize = 2;
lon = 37 + (35/60);
lat = 55+ (45/60);
fDEM = 'E020N90';
fHDR = 'E020N90.HDR';
[startXY, endXY] = calcFirstPixel(lon, lat); %calc borders for my area
f = fopen('E020N90.DEM');
offset = (startXY(1,2)*sizeX*intSize)+(startXY(1,1)*intSize);
fseek(f, offset,0); %seek from curr file pos
x = 0;
A = [];
BB = [];
jump = (intSize*sizeX)-(401*2);
while x<401
row = fread(f, 802);
fseek(f, jump, 0); % jump to the next row
A = [A row];
x = x+1;
end
fclose(f);
A = A';
A = A(:,2:2:802);
m1 = min(A(:)); % minimum value for our scale
m2 = max(A(:)); % maximum value for our scale
step = m2/8; % there will be 8 colors
highScale = m1:step:m2-step; % boundary values for each of them
%handles.axes1 = A;
colormap(hObject, jet(8));
startXtick = 20 + pixdegree*startXY(1,1);
endXtick = 20 + pixdegree*endXY(1,1);
startYtick = 90 - pixdegree*endXY(1,2);
endYtick = 90 - pixdegree*startXY(1,2);
[XX,YY] = ndgrid(startXtick:pixdegree:endXtick,startYtick:pixdegree:endYtick);
xLabels = startXtick:pixdegree:endXtick;
yLabels = startYtick:pixdegree:endYtick;
surf(xLabels, yLabels,A,'EdgeColor','None','Marker','.');
set(gca,'YDir','normal');
grid on;
view([45 45])
And the helper used with the .DEM files:
function [startXY, endXY] = calcFirstPixel(lon,lat)
global fHDR;
format = '%s %s';
f = fopen(fHDR);
cont = textscan(f, format);
LonStart = str2double(cont{1,2}{11,1});
LatStart = str2double(cont{1,2}{12,1});
diffPerPix = str2double(cont{1,2}{13,1});
fclose(f);
x = LonStart;
countX = 0;
y = LatStart;
countY = 0;
while x<lon
x = x + diffPerPix;
countX = countX + 1;
end
while y>lat
y = y - diffPerPix;
countY = countY + 1;
end
startXY= [countX-200 countY-200];
endXY = [countX+200 countY+200];
end
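One thing that may be worth checking: intSize = 2 declares 16-bit samples, but fread(f, 802) reads raw bytes and A = A(:,2:2:802) then keeps only one byte of every sample. If E020N90 is a GTOPO30 tile (16-bit signed, big-endian), reading whole int16 values per row avoids that; a sketch under that assumption, reusing offset and jump from above:
f = fopen('E020N90.DEM');
fseek(f,offset,'bof');
A = zeros(401,401);
for r = 1:401
A(r,:) = fread(f,401,'int16',0,'ieee-be'); % one row of 401 elevation samples
fseek(f,jump,'cof'); % skip to the start of the next row
end
fclose(f);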

Change input video from saved video to streaming video for manipulation

I downloaded this code from MIT's video magnification lab: http://people.csail.mit.edu/mrub/evm/#code
Except that it currently only runs on saved videos. We want to change it so that we can have a live video setup.
The MATLAB code is as follows:
dataDir = './data';
resultsDir = 'ResultsSIGGRAPH2012';
mkdir(resultsDir);
inFile = fullfile(dataDir,'face2.mp4');
fprintf('Processing %s\n', inFile);
% Motion
amplify_spatial_lpyr_temporal_butter(inFile,resultsDir,20,80, ...
0.5,10,30, 0);
The amplify_spatial_lpyr_temporal_butter MATLAB code is:
function amplify_spatial_lpyr_temporal_butter(vidFile, outDir ...
,alpha, lambda_c, fl, fh ...
,samplingRate, chromAttenuation)
[low_a, low_b] = butter(1, fl/samplingRate, 'low');
[high_a, high_b] = butter(1, fh/samplingRate, 'low');
[~,vidName] = fileparts(vidFile);
outName = fullfile(outDir,[vidName '-butter-from-' num2str(fl) '-to-' ...
num2str(fh) '-alpha-' num2str(alpha) '-lambda_c-' num2str(lambda_c) ...
'-chromAtn-' num2str(chromAttenuation) '.avi']);
% Read video
vid = VideoReader(vidFile);
% Extract video info
vidHeight = vid.Height;
vidWidth = vid.Width;
nChannels = 3;
fr = vid.FrameRate;
len = vid.NumberOfFrames;
temp = struct('cdata', ...
zeros(vidHeight, vidWidth, nChannels, 'uint8'), ...
'colormap', []);
startIndex = 1;
endIndex = len-10;
vidOut = VideoWriter(outName);
vidOut.FrameRate = fr;
open(vidOut)
% firstFrame
temp.cdata = read(vid, startIndex);
[rgbframe,~] = frame2im(temp);
rgbframe = im2double(rgbframe);
frame = rgb2ntsc(rgbframe);
[pyr,pind] = buildLpyr(frame(:,:,1),'auto');
pyr = repmat(pyr,[1 3]);
[pyr(:,2),~] = buildLpyr(frame(:,:,2),'auto');
[pyr(:,3),~] = buildLpyr(frame(:,:,3),'auto');
lowpass1 = pyr;
lowpass2 = pyr;
pyr_prev = pyr;
output = rgbframe;
writeVideo(vidOut,im2uint8(output));
nLevels = size(pind,1);
for i=startIndex+1:endIndex
progmeter(i-startIndex,endIndex - startIndex + 1);
temp.cdata = read(vid, i);
[rgbframe,~] = frame2im(temp);
rgbframe = im2double(rgbframe);
frame = rgb2ntsc(rgbframe);
[pyr(:,1),~] = buildLpyr(frame(:,:,1),'auto');
[pyr(:,2),~] = buildLpyr(frame(:,:,2),'auto');
[pyr(:,3),~] = buildLpyr(frame(:,:,3),'auto');
%% temporal filtering
lowpass1 = (-high_b(2) .* lowpass1 + high_a(1).*pyr + ...
high_a(2).*pyr_prev)./high_b(1);
lowpass2 = (-low_b(2) .* lowpass2 + low_a(1).*pyr + ...
low_a(2).*pyr_prev)./low_b(1);
filtered = (lowpass1 - lowpass2);
pyr_prev = pyr;
%% amplify each spatial frequency band according to Figure 6 of our paper
ind = size(pyr,1);
delta = lambda_c/8/(1+alpha);
% the factor to boost alpha above the bound we have in the
% paper. (for better visualization)
exaggeration_factor = 2;
% compute the representative wavelength lambda for the lowest spatial
% frequency band of the Laplacian pyramid
lambda = (vidHeight^2 + vidWidth^2).^0.5/3; % 3 is experimental constant
for l = nLevels:-1:1
indices = ind-prod(pind(l,:))+1:ind;
% compute modified alpha for this level
currAlpha = lambda/delta/8 - 1;
currAlpha = currAlpha*exaggeration_factor;
if (l == nLevels || l == 1) % ignore the highest and lowest frequency band
filtered(indices,:) = 0;
elseif (currAlpha > alpha) % representative lambda exceeds lambda_c
filtered(indices,:) = alpha*filtered(indices,:);
else
filtered(indices,:) = currAlpha*filtered(indices,:);
end
ind = ind - prod(pind(l,:));
% go one level down on pyramid,
% representative lambda will reduce by factor of 2
lambda = lambda/2;
end
%% Render on the input video
output = zeros(size(frame));
output(:,:,1) = reconLpyr(filtered(:,1),pind);
output(:,:,2) = reconLpyr(filtered(:,2),pind);
output(:,:,3) = reconLpyr(filtered(:,3),pind);
output(:,:,2) = output(:,:,2)*chromAttenuation;
output(:,:,3) = output(:,:,3)*chromAttenuation;
output = frame + output;
output = ntsc2rgb(output);
% filtered = rgbframe + filtered.*mask;
output(output > 1) = 1;
output(output < 0) = 0;
writeVideo(vidOut,im2uint8(output));
end
close(vidOut);
end
We are trying to get it to work with a streaming video input, but we don't really know where to start. We are somewhat familiar with programming but not so familiar with MATLAB, so we don't really know where to look for such answers.
Any help would be greatly appreciated.
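A possible starting point (only a sketch of the acquisition side, assuming the MATLAB Support Package for USB Webcams is installed): grab frames with webcam/snapshot instead of read(vid, i), and feed each frame through the same per-frame pyramid and temporal-filtering code that sits inside the loop of amplify_spatial_lpyr_temporal_butter. nFrames below is a hypothetical, fixed number of frames to process.
cam = webcam; % connect to the first available camera
nFrames = 300; % hypothetical run length
for i = 1:nFrames
rgbframe = im2double(snapshot(cam)); % replaces temp.cdata = read(vid, i)
frame = rgb2ntsc(rgbframe);
% ... build the Laplacian pyramids, apply the temporal filter and the
% per-level amplification exactly as in the loop above, then display
% or write the reconstructed output frame
end
clear cam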