Intro: I'm using MATLAB's Neural Network Toolbox in an attempt to forecast a time series one step into the future. Currently I'm just trying to forecast a simple sinusoidal function, but hopefully I will be able to move on to something a bit more complex once I obtain satisfactory results.
Problem: Everything seems to work fine; however, the forecast tends to be lagged by one period. Neural network forecasting isn't much use if it just outputs the series delayed by one unit of time, right?
Code:
t = -50:0.2:100;
noise = rand(1,length(t));
y = sin(t)+1/2*sin(t+pi/3);
split = floor(0.9*length(t));
forperiod = length(t)-split;
numinputs = 5;
forecasted = [];
msg = '';
for j = 1:forperiod
fprintf(repmat('\b',1,numel(msg)));
msg = sprintf('forecasting iteration %g/%g...\n',j,forperiod);
fprintf('%s',msg);
estdata = y(1:split+j-1);
estdatalen = size(estdata,2);
signal = estdata;
last = signal(end);
[signal,low,high] = preprocess(signal'); % pre-process
signal = signal';
inputs = signal(rowshiftmat(length(signal),numinputs));
targets = signal(numinputs+1:end);
%% NARNET METHOD
feedbackDelays = 1:4;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,[hiddenLayerSize hiddenLayerSize]);
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
signalcells = mat2cell(signal,[1],ones(1,length(signal)));
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},signalcells);
net.trainParam.showWindow = false;
net.trainParam.showCommandLine = false;
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
net.performFcn = 'mse'; % Mean squared error
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
next = net(inputs(end),inputStates,layerStates);
next = postprocess(next{1}, low, high); % post-process
next = (next+1)*last;
forecasted = [forecasted next];
end
figure(1);
plot(1:forperiod, forecasted, 'b', 1:forperiod, y(end-forperiod+1:end), 'r');
grid on;
Note:
The function 'preprocess' simply converts the data into logged % differences and 'postprocess' converts the logged % differences back for plotting. (Check EDIT for preprocess and postprocess code)
Results:
BLUE: Forecasted Values
RED: Actual Values
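One way to quantify the delay (a quick diagnostic sketch, not part of the original script; xcorr assumes the Signal Processing Toolbox) is to cross-correlate the forecast with the actual values; a peak at lag 1 confirms the one-step shift:
actual = y(end-forperiod+1:end);
[c, lags] = xcorr(forecasted - mean(forecasted), actual - mean(actual), 5, 'coeff');
[~, imax] = max(c);
fprintf('Forecast lines up best with the actual series at lag %d\n', lags(imax));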
Can anyone tell me what I'm doing wrong here? Or perhaps recommend another method to achieve the desired result (lag-free prediction of a sinusoidal function, and eventually more chaotic time series)? Your help is very much appreciated.
EDIT:
It's been a few days now and I hope everyone has enjoyed their weekend. Since no solutions have emerged, I've decided to post the code for the helper functions 'postprocess.m', 'preprocess.m', and their helper function 'normalize.m'. Maybe this will help get the ball rolling.
postprocess.m:
function data = postprocess(x, low, high)
% denormalize
logdata = (x+1)/2*(high-low)+low;
% inverse log data
sign = logdata./abs(logdata);
data = sign.*(exp(abs(logdata))-1);
end
preprocess.m:
function [y, low, high] = preprocess(x)
% differencing
diffs = diff(x);
% calc % changes
chngs = diffs./x(1:end-1,:);
% log data
sign = chngs./abs(chngs);
logdata = sign.*log(abs(chngs)+1);
% normalize logrets
high = max(max(logdata));
low = min(min(logdata));
y=[];
for i = 1:size(logdata,2)
y = [y normalize(logdata(:,i), -1, 1)];
end
end
normalize.m:
function Y = normalize(X,low,high)
%NORMALIZE Linear normalization of X between low and high values.
if length(X) <= 1
error('Length of X input vector must be greater than 1.');
end
mi = min(X);
ma = max(X);
Y = (X-mi)/(ma-mi)*(high-low)+low;
end
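A quick round-trip sanity check for these helpers (a suggestion, not part of the original post): postprocess should invert preprocess back to the raw % changes, and multiplying by the previous value reproduces the series, which is exactly what the forecasting loop does with next = (next+1)*last.
x = cumsum(abs(randn(10,1))) + 1;   % a strictly increasing positive test series
[p, lo, hi] = preprocess(x);        % logged % differences, scaled to [-1,1]
pct = postprocess(p, lo, hi);       % recover the raw % changes
x_rec = x(1:end-1) .* (1 + pct);    % invert the differencing step
max(abs(x_rec - x(2:end)))          % should be ~0 up to floating-point error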
I didn't check your code, but I made a similar test predicting sin() with a NN. The result seems reasonable, without a lag. I think your bug is somewhere in the synchronization of the predicted values with the actual values.
Here is the code:
%% init & params
t = (-50 : 0.2 : 100)';
y = sin(t) + 0.5 * sin(t + pi / 3);
sigma = 0.2;
n_lags = 12;
hidden_layer_size = 15;
%% create net
net = fitnet(hidden_layer_size);
%% train
noise = sigma * randn(size(t));
y_train = y + noise;
out = circshift(y_train, -1);
out(end) = nan;
in = lagged_input(y_train, n_lags);
net = train(net, in', out');
%% test
noise = sigma * randn(size(t)); % new noise
y_test = y + noise;
in_test = lagged_input(y_test, n_lags);
out_test = net(in_test')';
y_test_predicted = circshift(out_test, 1); % sync with actual value
y_test_predicted(1) = nan;
%% plot
figure,
plot(t, [y, y_test, y_test_predicted], 'linewidth', 2);
grid minor; legend('orig', 'noised', 'predicted')
and the lagged_input() function:
function in = lagged_input(in, n_lags)
% Build a matrix whose k-th column is the input delayed by k-1 samples.
% Each new column is the previous one shifted down by one sample; the
% wrapped-around entries accumulate as NaNs at the top of each column.
for k = 2 : n_lags
    in = cat(2, in, circshift(in(:, end), 1));
    in(1, k) = nan; % invalidate the wrapped-around sample
end
end
I'm trying to write an image compression script in MATLAB using a multilevel 3D DWT (color image). Along the way, I want to apply thresholding to the coefficient matrices, using both global and local thresholds.
I'd like to use the formula T = sigma * sqrt(2 * log2(N)) (as implemented in the code below) to calculate my local threshold, where sigma is the variance and N is the number of elements.
Global thresholding works fine, but my problem is that the calculated local threshold is (most often!) greater than the maximum band coefficient, so no thresholding is applied.
Everything else works fine and I get a result, but I suspect the local threshold is miscalculated. Also, the resulting image is larger than the original!
I'd appreciate any help on the correct way to calculate the local threshold, or a pointer to a built-in MATLAB function.
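If a built-in helps: the Wavelet Toolbox's thselect with the 'sqtwolog' rule returns the universal threshold sqrt(2*log(N)) (natural log, unlike the log2 used in the code below), which you then scale by an estimate of the band's standard deviation. A minimal sketch, using the ll_base band from the code below:
sigma_hat = std(double(ll_base(:)));                 % std of the LL band (or a noise estimate)
ll_t = sigma_hat * thselect(ll_base(:), 'sqtwolog'); % sqtwolog depends only on numel(ll_base)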
Here's an example output:
Here's my code:
clear;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% COMPRESSION %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% read base image
% dwt 3/5-L on base images
% quantize coeffs (local/global)
% count zero value-ed coeffs
% calculate mse/psnr
% save and show result
% read images
base = imread('circ.jpg');
fam = 'haar'; % wavelet family
lvl = 3; % wavelet depth
% set to 1 to apply global thr
thr_type = 0;
% global threshold value
gthr = 180;
% convert base to grayscale
%base = rgb2gray(base);
% apply dwt on base image
dc = wavedec3(base, lvl, fam);
% extract coeffs
ll_base = dc.dec{1};
lh_base = dc.dec{2};
hl_base = dc.dec{3};
hh_base = dc.dec{4};
ll_var = var(ll_base, 0);
lh_var = var(lh_base, 0);
hl_var = var(hl_base, 0);
hh_var = var(hh_base, 0);
% count number of elements
ll_n = numel(ll_base);
lh_n = numel(lh_base);
hl_n = numel(hl_base);
hh_n = numel(hh_base);
% find local threshold
ll_t = ll_var * (sqrt(2 * log2(ll_n)));
lh_t = lh_var * (sqrt(2 * log2(lh_n)));
hl_t = hl_var * (sqrt(2 * log2(hl_n)));
hh_t = hh_var * (sqrt(2 * log2(hh_n)));
% global
if thr_type == 1
ll_t = gthr; lh_t = gthr; hl_t = gthr; hh_t = gthr;
end
% count zero values in bands
ll_size = size(ll_base);
lh_size = size(lh_base);
hl_size = size(hl_base);
hh_size = size(hh_base);
% count zero values in new band matrices
ll_zeros = sum(ll_base==0,'all');
lh_zeros = sum(lh_base==0,'all');
hl_zeros = sum(hl_base==0,'all');
hh_zeros = sum(hh_base==0,'all');
% initiate new matrices
ll_new = zeros(ll_size);
lh_new = zeros(lh_size);
hl_new = zeros(hl_size);
hh_new = zeros(hh_size);
% apply thresholding on bands
% if new value < thr => 0
% otherwise, keep the previous value
for id=1:ll_size(1)
for idx=1:ll_size(2)
if ll_base(id,idx) < ll_t
ll_new(id,idx) = 0;
else
ll_new(id,idx) = ll_base(id,idx);
end
end
end
for id=1:lh_size(1)
for idx=1:lh_size(2)
if lh_base(id,idx) < lh_t
lh_new(id,idx) = 0;
else
lh_new(id,idx) = lh_base(id,idx);
end
end
end
for id=1:hl_size(1)
for idx=1:hl_size(2)
if hl_base(id,idx) < hl_t
hl_new(id,idx) = 0;
else
hl_new(id,idx) = hl_base(id,idx);
end
end
end
for id=1:hh_size(1)
for idx=1:hh_size(2)
if hh_base(id,idx) < hh_t
hh_new(id,idx) = 0;
else
hh_new(id,idx) = hh_base(id,idx);
end
end
end
% count zeros of the new matrices
ll_new_size = size(ll_new);
lh_new_size = size(lh_new);
hl_new_size = size(hl_new);
hh_new_size = size(hh_new);
% count number of zeros among new values
ll_new_zeros = sum(ll_new==0,'all');
lh_new_zeros = sum(lh_new==0,'all');
hl_new_zeros = sum(hl_new==0,'all');
hh_new_zeros = sum(hh_new==0,'all');
% set new band matrices
dc.dec{1} = ll_new;
dc.dec{2} = lh_new;
dc.dec{3} = hl_new;
dc.dec{4} = hh_new;
% count how many coeff. were thresholded
ll_zeros_diff = ll_new_zeros - ll_zeros;
lh_zeros_diff = lh_zeros - lh_new_zeros;
hl_zeros_diff = hl_zeros - hl_new_zeros;
hh_zeros_diff = hh_zeros - hh_new_zeros;
% show coeff. matrices vs. thresholded version
figure
colormap(gray);
subplot(2,4,1); imagesc(ll_base); title('LL');
subplot(2,4,2); imagesc(lh_base); title('LH');
subplot(2,4,3); imagesc(hl_base); title('HL');
subplot(2,4,4); imagesc(hh_base); title('HH');
subplot(2,4,5); imagesc(ll_new); title({'LL thr';ll_zeros_diff});
subplot(2,4,6); imagesc(lh_new); title({'LH thr';lh_zeros_diff});
subplot(2,4,7); imagesc(hl_new); title({'HL thr';hl_zeros_diff});
subplot(2,4,8); imagesc(hh_new); title({'HH thr';hh_zeros_diff});
% idwt to reconstruct compressed image
cmp = waverec3(dc);
cmp = uint8(cmp);
% calculate mse/psnr
D = (double(cmp) - double(base)).^2; % cast to double so the uint8 difference doesn't saturate
mse = sum(D(:))/numel(base);
psnr = 10*log10(255*255/mse);
% show images and mse/psnr
figure
subplot(1,2,1);
imshow(base); title("Original"); axis square;
subplot(1,2,2);
imshow(cmp); colormap(gray); axis square;
msg = strcat("MSE: ", num2str(mse), " | PSNR: ", num2str(psnr));
title({"Compressed";msg});
% save image locally
imwrite(cmp, 'compressed.png');
I solved the problem.
The sigma in the local threshold formula is not the variance, it's the standard deviation. I applied these steps:
used std2() (or stdfilt()) to find the standard deviation of my coefficient matrices (thanks to @Rotem for pointing this out)
used numel() to count the number of elements in the coefficient matrices
This is a summary of the process; it's the same for the other bands (LH, HL, HH):
[c, s] = wavedec2(image, wname, level);  % apply dwt
ll = appcoef2(c, s, wname);              % extract LL (approximation coefficients)
ll_std = std2(ll);                       % standard deviation
ll_n = numel(ll);                        % number of coeffs in LL
ll_t = ll_std * (sqrt(2 * log2(ll_n)));  % local threshold formula
ll_new = ll .* double(ll > ll_t);        % thresholding
replace the LL values in c in a for loop (a sketch of this step, using direct indexing instead, follows below)
reconstruct by applying the IDWT using waverec2
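A minimal sketch of those last two steps, assuming the standard wavedec2 layout in which the approximation coefficients occupy the first prod(s(1,:)) entries of c:
nA = prod(s(1,:));               % number of approximation (LL) coefficients
c(1:nA) = ll_new(:)';            % write the thresholded LL band back into c
img_rec = waverec2(c, s, wname); % inverse DWT to reconstruct the image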
This is a sample output:
I have a basic telecommunications exercise in MATLAB: I must plot a triangle pulse from (-c,0) to (c,0) with c = 6 and amplitude 1, repeat it in a for loop for M pulses, and approximate the periodic pulse using N Fourier series terms. I haven't found anything on the internet that helps so far.
Similar code for a rectangular pulse that I made, and that works, is this:
a = 1;
b = 3;
N = 1000;
t = linspace(a-2*a,b+2*b,N);
A = 1;
y = rect_pulse(A,a,b,t);
plot(t,y);
grid on
axis([a-2*a b+2*b 0 2*A]);
M = 5;
T=7;
t_new = linspace(a-2*a,b+(M-1)*T+2*b,N);
y_new = zeros(1,N);
for index = 1:1:M
temp_y = rect_pulse(A,a+(index-1)*T,b+(index-1)*T,t_new);
y_new = y_new + temp_y;
end
figure;
plot(t_new,y_new);
grid on;
axis([a-2*a b+(M-1)*T+2*b 0 2*A]);
Where rect_pulse is this:
function y = rect_pulse (A,a,b,t)
N=length(t);
y = zeros(1,N);
for index = 1:1:N
if(t(1,index)>=a) && (t(1,index)<=b)
y(1,index) = A;
end
end
And the Fourier series function is this:
function y_fourier = fourier_series_rect_pulse(a,b,To,N,t)
y_fourier = 0;
wo = (2*pi)/To;
for n = -N:1:N
f_real = @(x) cos(n*wo*x);
f_imag = @(x) sin(n*wo*x);
cn = (1/To)*(quad(f_real,a,b) - j*quad(f_imag,a,b));
y_fourier = y_fourier + cn*exp(j*n*wo*t);
end
y_fourier = real(y_fourier);
Any ideas how to turn this into a triangle pulse?
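For reference, a triangular pulse of half-width c and amplitude A can be built the same way as rect_pulse; this is only a sketch (tri_pulse is a made-up name, not part of the exercise), and for the Fourier coefficients the constant integrand would become (1 - abs(x)/c) over [-c, c]:
function y = tri_pulse(A,c,t)
% Triangle pulse: 0 at t = -c, peak A at t = 0, back to 0 at t = c.
y = zeros(size(t));
idx = abs(t) <= c;                % samples inside the pulse support
y(idx) = A*(1 - abs(t(idx))/c);   % linear rise and fall
end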
This probably deviates significantly from your approach, but if you're curious, here is a script I came up with that generates an adjustable triangular pulse train. This method, unfortunately, uses the fft() function, which may or may not be off-limits in your case. Most of the script uses indexing and vector manipulation. Additional spectral components may be seen due to the DC offset of the alternating triangular wave and the limited number of cycles available in the vector representation of the wave.
Triangular Pulses and Fourier Transforms:
Triangular Pulse with Duty-Off Period:
Higher-frequency spectral components are present due to the abrupt corners at the transitions between the triangle pulse and the duty-off period.
%******************************************************%
%PROPERTIES THAT CAN BE CHANGED%
%******************************************************%
Plotting_Interval = 0.01; %time step in seconds%
Pulse_Width = 1/20; %pulse width in seconds%
Period = 1/20; %period in seconds (should be at least the pulse width)%
Start_Time = 0;
End_Time = Pulse_Width*1000; %(1000 pulses)%
%******************************************************%
if(Period < Pulse_Width)
Period = Pulse_Width;
end
Time_Vector = (Start_Time: Plotting_Interval: End_Time);
Points_Per_Unit_Time = 1/Plotting_Interval;
Half_Pulse = Pulse_Width/2;
Number_Of_Points = Pulse_Width/Plotting_Interval;
Rising_Slope = linspace(0,1,floor(Number_Of_Points/2) + 1);
Falling_Slope = 1 - Rising_Slope;
Triangular_Pulse = [Rising_Slope Falling_Slope(2:end)];
t = (0: Plotting_Interval: Pulse_Width);
Periodic_Triangular_Pulse = zeros(1,length(Time_Vector));
for Cycle = 1: +Period/Plotting_Interval: End_Time/Plotting_Interval
Periodic_Triangular_Pulse(1,Cycle:Cycle+length(Triangular_Pulse)-1) = Triangular_Pulse(1,1:end);
end
Periodic_Triangular_Pulse = Periodic_Triangular_Pulse(1,1:length(Time_Vector));
subplot(1,2,1); plot(Time_Vector,Periodic_Triangular_Pulse);
Triangle_Frequency = 1/Period;
title("Triangular Pulse Train " + num2str(Triangle_Frequency) + "Hz (first 10 cycles)");
axis([0 Period*10 0 1]);
xlabel("Time (s)"); ylabel("Amplitude");
Signal_Length = length(Periodic_Triangular_Pulse);
Fourier_Transform = fft(Periodic_Triangular_Pulse);
Fs = 1/Plotting_Interval;
P2 = abs(Fourier_Transform/Signal_Length);
P1 = P2(1:floor(Signal_Length/2)+1);
P1(2:end-1) = 2*P1(2:end-1);
f = Fs*(0:(Signal_Length/2))/Signal_Length;
subplot(1,2,2); plot(f,P1)
title("Single-Sided Fourier Transform");
xlabel("Frequency (Hz)"); ylabel("Magnitude");
Ran using MATLAB R2019b
I am writing code to solve a differential equation:
function dy = KIN1PARM(t,y,k)
%
% version : first order reaction
% A --> B
% dA/dt = -k*A
% integrated form A = A0*exp(-k*t)
%
dy = -k.*y;
end
I want this equation to be solved numerically, and the results (y as a function of t and k) to be used in a minimization against the experimental values to obtain the optimal value of the parameter k.
function SSE = SSE_minimization_1parm(tspan_inp,val_exp,k_inp,y0_inp)
f = @(Tt,Ty) KIN1PARM(Tt,Ty,k_inp); %function to call ode45
size_limit = length(y0_inp);
options = odeset('NonNegative',1:size_limit,'RelTol',1e-4,'AbsTol', 1e-4);
[ts,val_theo] = ode45(f, tspan_inp, y0_inp,options); %Cexp is the state variable predicted by the model
err = val_exp - val_theo;
SSE = sum(err.^2); %sum squared-error
The main code to plot the experimental and calculated data is:
% Analyzing first order kinetics
clear all; clc;
figure_title = 'Experimental Data';
label_abscissa = 'Time [s]';
label_ordinatus = 'Concentration [mol/L]';
%
abscissa = [ 0;
240;
480;
720;
960;
1140;
1380;
1620;
1800;
2040;
2220;
2460;
2700;
2940];
ordinatus = [ 0;
19.6;
36.7;
49.0;
57.1;
64.5;
71.4;
75.2;
78.7;
81.3;
83.3;
85.5;
87.0;
87.7];
%
title_string = [' Time [s]', ' | ', ' Complex [mol/L] ', ' '];
disp(title_string);
for i=1:length(abscissa)
report_raw_data{i} = sprintf('%1.3E\t',abscissa(i),ordinatus(i));
disp([report_raw_data{i}]);
end;
%---------------------/plotting dot data/-------------------------------------
%
f = figure('Position', [100 100 700 500]);
title(figure_title,'FontName','arial','FontWeight','bold', 'FontSize', 12);
xlabel(label_abscissa, 'FontSize', 12);
ylabel(label_ordinatus, 'FontSize', 12);
%
grid on; hold on;
%
marker_style = { 's'};
%
plot(abscissa,ordinatus, marker_style{1},...
'MarkerFaceColor', 'black',...
'MarkerEdgeColor', 'black',...
'MarkerSize',4);
%---------------------/Analyzing/----------------------------------------
%
options = optimset('Display','iter','TolFun',1e-4,'TolX',1e-4);
%
CPUtime0 = cputime;
Time_M = abscissa;
Concentration_M = ordinatus;
tspan = Time_M;
y0 = 0;
k0 = rand(1);
[k, fval, exitflag, output] = fminsearch(@(k) SSE_minimization_1parm(tspan,Concentration_M,k,y0),k0,options);
CPUtimex = cputime;
CPUtime_delay = CPUtimex - CPUtime0;
%
%---------------------/plotting calculated data/-------------------------------------
%
xupperlimit = Time_M(length(Time_M));
xval = ([0:1:xupperlimit])';
%
yvector = data4plot_1parm(xval,k,y0);
plot(xval,yvector, 'r');
hold on;
%---------------------/printing calculated data/-------------------------------------
%
disp('RESULTS:');
disp(['CPU time: ',sprintf('%0.5f\t',CPUtime_delay),' sec']);
disp(['k: ',sprintf('%1.3E\t',k')]);
disp(['fval: ',sprintf('%1.3E\t',fval)]);
disp(['exitflag: ',sprintf('%1.3E\t',exitflag)]);
disp(output);
disp(['Output: ',output.message]);
The corresponding function, which uses the optimized parameter k to yield the calculated y = f(t) data :
function val = data4plot_1parm(tspan_inp,k_inp,y0_inp)
f = @(Tt,Ty) KIN1PARM(Tt,Ty,k_inp);
size_limit = length(y0_inp);
options = odeset('NonNegative',1:size_limit,'RelTol',1e-4,'AbsTol',1e-4);
[ts, val] = ode45(f, tspan_inp, y0_inp, options); % return the ode45 solution as the function output
The code runs optimization cycles that always give different values of the parameter k, and these differ from the value calculated from ln(y) vs. t (it should be around 7.0e-4 for this series of experimental data).
Looking at the output of the ODE solver (SSE_minimization_1parm => val_theo), I found that it returns a vector of zeros.
Could someone please help me figure out what's going on with the ODE solver?
Thanks much in advance!
So here comes the best I can get right now. For my approach I treat the ordinatus values as time and the abscissa values as the measured quantity you are trying to model. Also, you seem to have set a lot of options for the solver, all of which I omitted. First comes your proposed solution using ode45(), but with a non-zero y0 = 100, which I just "guessed" from looking at the data (in a semilogarithmic plot).
function main
abscissa = [0;
240;
480;
720;
960;
1140;
1380;
1620;
1800;
2040;
2220;
2460;
2700;
2940];
ordinatus = [ 0;
19.6;
36.7;
49.0;
57.1;
64.5;
71.4;
75.2;
78.7;
81.3;
83.3;
85.5;
87.0;
87.7];
tspan = [min(ordinatus), max(ordinatus)]; % // assuming ordinatus is time
y0 = 100; % // <---- Probably the most important parameter to guess
k0 = -0.1; % // <--- second most important parameter to guess (negative for growth)
k_opt = fminsearch(@minimize, k0) % // optimization only over k
% nested minimization function
function e = minimize(k)
sol = ode45(@KIN1PARM, tspan, y0, [], k);
y_hat = deval(sol, ordinatus); % // evaluate solution at given times
e = sum((y_hat' - abscissa).^2); % // compute squared error
end
% // plot with optimal parameter
[T,Y] = ode45(@KIN1PARM, tspan, y0, [], k_opt);
figure
plot(ordinatus, abscissa,'ko', 'markersize',10,'markerfacecolor','black')
hold on
plot(T,Y, 'r--', 'linewidth', 2)
% // Another attempt with fminsearch and the integral form
t = ordinatus;
t_fit = linspace(min(ordinatus), max(ordinatus));
y = abscissa;
% create model function with parameters A0 = p(1) and k = p(2)
model = @(p, t) p(1)*exp(-p(2)*t);
e = @(p) sum((y - model(p, t)).^2); % minimize squared errors
p0 = [100, -0.1]; % an initial guess (positive A0 and probably negative k for exp. growth)
p_fit = fminsearch(e, p0); % Optimize
% Add to plot
plot(t_fit, model(p_fit, t_fit), 'b-', 'linewidth', 2)
legend('data', 'ode45 with fixed y0', ...
    sprintf('integral form: %5.1f*exp(-%.4f*t)', p_fit), 'location', 'best')
end
function dy = KIN1PARM(t,y,k)
%
% version : first order reaction
% A --> B
% dA/dt = -k*A
% integrated form A = A0*exp(-k*t)
%
dy = -k.*y;
end
Quite surprisingly to me, the initial guess of y0 = 100 fits quite well with the optimal A0 found. The result can be seen below:
I am trying to do modulation and demodulation for 16-QAM and then compare the theoretical and simulated BER.
I am not getting the simulation line in the graph.
I cannot understand what is wrong with my code. Can anybody help me?
Here is the code:
M=16;
SNR_db = [0 2 4 6 8 10 12];
x = randi([0,M-1],1000,1);
hmod = modem.qammod(16);
hdemod = modem.qamdemod(hmod,'SymbolOrder', 'Gray');
tx = zeros(1,1000);
for n=1:1000
tx(n) = modulate(hmod, x(n));
end
rx = zeros(1,1000);
rx_demod = zeros(1,1000);
for j = 1:7
err = zeros(1,7);
err_t = zeros(1,7);
for n = 1:1000
rx(n) = awgn(tx(n), SNR_db(j));
rx_demod(n) = demodulate(hdemod, rx(n));
if(rx_demod(n)~=x(n))
err(j) = err(j)+1;
end
end
% err_t = err_t + err;
end
theoryBer = 3/2*erfc(sqrt(0.1*(10.^(SNR_db/10))));
figure
semilogy(SNR_db,theoryBer,'-',SNR_db, err, '^-');
grid on
legend('theory', 'simulation');
xlabel('Es/No, dB')
ylabel('Symbol Error Rate')
title('Symbol error probability curve for 16-QAM modulation')
http://www.dsplog.com/db-install/wp-content/uploads/2008/06/script_16qam_gray_mapping_bit_error_rate.m
That does what you want manually, without assuming any toolbox functionality (i.e. the fancy modulator and demodulators).
Also you can try
edit commdoc_mod
Make a copy of that file and you should be able to get it to do what you want with one simple loop.
Edit
Here are the modifications to that file that give you the simulated EbNo curves instead of the symbol error rate ones. Should be good enough for any practical purpose.
M = 16; % Size of signal constellation
k = log2(M); % Number of bits per symbol
n = 3e4; % Number of bits to process
nSyms = n/k; % Number of symbols
hMod = modem.qammod(M); % Create a 16-QAM modulator
hMod.InputType = 'Bit'; % Accept bits as inputs
hMod.SymbolOrder = 'Gray'; % Use Gray-coded symbol mapping
hDemod = modem.qamdemod(hMod); % Create a 16-QAM based on the modulator
x = randi([0 1],n,1); % Random binary data stream
tx = modulate(hMod,x);
EbNo = 0:10; % In dB
SNR = EbNo + 10*log10(k);
rx = zeros(nSyms,length(SNR));
bit_error_rate = zeros(length(SNR),1);
for i=1:length(SNR)
rx(:,i) = awgn(tx,SNR(i),'measured');
end
rx_demod = demodulate(hDemod,rx);
for i=1:length(SNR)
[~,bit_error_rate(i)] = biterr(x,rx_demod(:,i));
end
theoryBer = 3/(2*k)*erfc(sqrt(0.1*k*(10.^(EbNo/10))));
figure;
semilogy(EbNo,theoryBer,'-',EbNo, bit_error_rate, '^-');
grid on;
legend('theory', 'simulation');
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('Bit error probability curve for 16-QAM modulation');
In your code, you confuse the symbol error probability with the bit error probability. Moreover, err = zeros(1,7); is misplaced: it resets the error counts on every pass through the SNR loop. (For Gray-coded 16-QAM a symbol error usually corrupts only a single bit, so SER ≈ k*BER, which is why the corrected code below plots err*k against the theoretical SER.)
After the corrections:
M=16;
SNR_db = 0:2:12;
N=1000;
x = randi([0,M-1],N,1);
k = log2(M); % bits per symbol
tx = qammod(x, M,'Gray');
err = zeros(1,7);
for j = 1:numel(SNR_db)
rx = awgn(tx, SNR_db(j),'measured');
rx_demod = qamdemod( rx, M, 'Gray' );
[~,err(j)] = biterr(x,rx_demod);
end
theorySER = 3/2*erfc(sqrt(0.1*(10.^(SNR_db/10))));
figure
semilogy(SNR_db,theorySER,'-',SNR_db, err*k, '^-');
grid on
legend('theory', 'simulation');
xlabel('Es/No, dB')
ylabel('Symbol Error Rate')
title('Symbol Error Probability curve for 16-QAM modulation')
And the resulting graph is:
I'm trying to produce some computer-generated holograms using MATLAB. I used an equally spaced mesh grid to initialize the spatial grid, and I got the following image.
This pattern is sort of what I need, except in the center region: the fringes should be sharp but come out blurred. I think it might be a problem with the mesh grid. I tried generating a grid in polar coordinates and then mapping it into Cartesian coordinates with MATLAB's pol2cart function; unfortunately, that doesn't work either. One might suggest using finer grids, but that doesn't work either. I think the problem might be solvable if I could generate a spiral mesh grid. In addition, the number of spiral arms should, in general, be arbitrary. Could anyone give me a hint on this?
I've attached the code (my final project is not exactly the same, but it has a similar problem).
clc; clear all; close all;
%% initialization
tic
lambda = 1.55e-6;
k0 = 2*pi/lambda;
c0 = 3e8;
eta0 = 377;
scale = 0.25e-6;
NELEMENTS = 1600;
GoldenRatio = (1+sqrt(5))/2;
g = 2*pi*(1-1/GoldenRatio);
pntsrc = zeros(NELEMENTS, 3);
phisrc = zeros(NELEMENTS, 1);
for idxe = 1:NELEMENTS
pntsrc(idxe, :) = scale*sqrt(idxe)*[cos(idxe*g), sin(idxe*g), 0];
phisrc(idxe) = angle(-sin(idxe*g)+1i*cos(idxe*g));
end
phisrc = 3*phisrc/2; % 3 arms (topological charge ell=3)
%% post processing
sigma = 1;
polfilter = [0, 0, 1i*sigma; 0, 0, -1; -1i*sigma, 1, 0]; % cp filter
xboundl = -100e-6; xboundu = 100e-6;
yboundl = -100e-6; yboundu = 100e-6;
xf = linspace(xboundl, xboundu, 100);
yf = linspace(yboundl, yboundu, 100);
zf = -400e-6;
[pntobsx, pntobsy] = meshgrid(xf, yf);
% how to generate a right mesh grid such that we can generate a decent result?
pntobs = [pntobsx(:), pntobsy(:), zf*ones(size(pntobsx(:)))];
% arbitrary mesh may result in "wrong" results
NPNTOBS = size(pntobs, 1);
nxp = length(xf);
nyp = length(yf);
%% observation
Eobs = zeros(NPNTOBS, 3);
matlabpool open local 12
parfor nobs = 1:NPNTOBS
rp = pntobs(nobs, :);
Erad = [0; 0; 0];
for idx = 1:NELEMENTS
rs = pntsrc(idx, :);
p = exp(sigma*1i*2*phisrc(idx))*[1 -sigma*1i 0]/2; % simplified here
u = rp - rs;
r = sqrt(u(1)^2+u(2)^2+u(3)^2); %norm(u);
u = u/r; % unit vector
ut = [u(2)*p(3)-u(3)*p(2),...
u(3)*p(1)-u(1)*p(3), ...
u(1)*p(2)-u(2)*p(1)]; % cross product: u cross p
Erad = Erad + ... % u cross p cross u, do not use the built-in func
c0*k0^2/4/pi*exp(1i*k0*r)/r*eta0*...
[ut(2)*u(3)-ut(3)*u(2);...
ut(3)*u(1)-ut(1)*u(3); ...
ut(1)*u(2)-ut(2)*u(1)];
end
Eobs(nobs, :) = Erad; % filter neglected here
end
matlabpool close
Eobs = Eobs/max(max(sum(abs(Eobs), 2))); % normalized
%% source, gaussian beam
E0 = 1;
w0 = 80e-6;
theta = 0; % may be titled
RotateX = [1, 0, 0; ...
0, cosd(theta), -sind(theta); ...
0, sind(theta), cosd(theta)];
Esrc = zeros(NPNTOBS, 3);
for nobs = 1:NPNTOBS
rp = RotateX*[pntobs(nobs, 1:2).'; 0];
z = rp(3);
r = sqrt(sum(abs(rp(1:2)).^2));
zR = pi*w0^2/lambda;
wz = w0*sqrt(1+z^2/zR^2);
Rz = z^2+zR^2;
zetaz = atan(z/zR);
gaussian = E0*w0/wz*exp(-r^2/wz^2-1i*k0*z-1i*k0*0*r^2/Rz/2+1i*zetaz);% ...
Esrc(nobs, :) = (polfilter*gaussian*[1; -1i; 0]).'/sqrt(2)/2;
end
Esrc = [Esrc(:, 2), Esrc(:, 3), Esrc(:, 1)];
Esrc = Esrc/max(max(sum(abs(Esrc), 2))); % normalized
toc
%% visualization
fringe = Eobs + Esrc; % I'll have a different formula in my code
normEsrc = reshape(sum(abs(Esrc).^2, 2), [nyp nxp]);
normEobs = reshape(sum(abs(Eobs).^2, 2), [nyp nxp]);
normFringe = reshape(sum(abs(fringe).^2, 2), [nyp nxp]);
close all;
xf0 = linspace(xboundl, xboundu, 500);
yf0 = linspace(yboundl, yboundu, 500);
[xfi, yfi] = meshgrid(xf0, yf0);
data = interp2(xf, yf, normFringe, xfi, yfi);
figure; surf(xfi, yfi, data,'edgecolor','none');
% tri = delaunay(xfi, yfi); trisurf(tri, xfi, yfi, data, 'edgecolor','none');
xlim([xboundl, xboundu])
ylim([yboundl, yboundu])
% colorbar
view(0,90)
colormap(hot)
axis equal
axis off
title('fringe (theory)', ...
'fontsize', 18)
I didn't read your code because it is too long for such a simple task. I wrote my own, and here is the result:
The code is:
%spiral.m
function val = spiral(x,y)
r = sqrt( x*x + y*y);
a = atan2(y,x)*2+r;
x = r*cos(a);
y = r*sin(a);
val = exp(-x*x*y*y);
val = 1/(1+exp(-1000*(val)));
endfunction
%show.m
n=300;
l = 7;
A = zeros(n);
for i=1:n
for j=1:n
A(i,j) = spiral( 2*(i/n-0.5)*l,2*(j/n-0.5)*l);
end
end
imshow(A) %don't know if imshow is in matlab. I used octave.
The key to the sharpness is the line
val = 1/(1+exp(-1000*(val)));
It is a logistic function. The number 1000 defines how sharp your image will be, so lower it for a blurrier image or raise it for a sharper one.
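A quick way to see the effect of the gain (a small sketch; the curve is centered at 0.5 here purely for display purposes):
v = linspace(0, 1, 500);
plot(v, 1./(1+exp(-10*(v-0.5))), v, 1./(1+exp(-1000*(v-0.5))));
legend('gain 10 (soft edge)', 'gain 1000 (near-binary)'); grid on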
I hope this answers your question ;)
Edit: It is really fun to play with. Here is another spiral:
function val = spiral(x,y)
s= 0.5;
r = sqrt( x*x + y*y);
a = atan2(y,x)*2+r*r*r;
x = r*cos(a);
y = r*sin(a);
val = 0;
if (abs(x)<s )
val = s-abs(x);
endif
if(abs(y)<s)
val =max(s-abs(y),val);
endif
%val = 1/(1+exp(-1*(val)));
endfunction
Edit2: Fun, fun, fun! Here the arms do not get thinner.
function val = spiral(x,y)
s= 0.1;
r = sqrt( x*x + y*y);
a = atan2(y,x)*2+r*r; % h
x = r*cos(a);
y = r*sin(a);
val = 0;
s = s*exp(r);
if (abs(x)<s )
val = s-abs(x);
endif
if(abs(y)<s)
val =max(s-abs(y),val);
endif
val = val/s;
val = 1/(1+exp(-10*(val)));
endfunction
Damn your question, I really need to study for my exam, arghhh!
Edit3:
I vectorised the code and it runs much faster.
%spiral.m
function val = spiral(x,y)
s= 2;
r = sqrt( x.*x + y.*y);
a = atan2(y,x)*8+exp(r);
x = r.*cos(a);
y = r.*sin(a);
s = s.*exp(-0.1*r);
val = (abs(x)<s).*(s-abs(x));
val = val./s;
% val = 1./(1.+exp(-1*(val)));
endfunction
%show.m
n=1000;
l = 3;
A = zeros(n);
[X,Y] = meshgrid(-l:2*l/n:l);
A = spiral(X,Y);
imshow(A)
Sorry, can't post figures. But this might help. I wrote it for experiments with amplitude spatial modulators...
R=70; % radius of curvature of fresnel lens (in pixel units)
A=0; % oblique incidence by linear grating (1=oblique 0=collinear)
B=1; % expanding by fresnel lens (1=yes 0=no)
L=7; % topological charge
Lambda=30; % linear grating fringe spacing (in pixels)
aspect=1/2; % fraction of fringe period that is white/clear
xsize=1024; % resolution (xres x yres number data pts calculated)
ysize=768; %
% define the X and Y ranges (defined to skip zero)
xvec = linspace(-xsize/2, xsize/2, xsize); % list of x values
yvec = linspace(-ysize/2, ysize/2, ysize); % list of y values
% define the meshes - matrices linear in one dimension
[xmesh, ymesh] = meshgrid(xvec, yvec);
% calculate the individual phase components
vortexPh = atan2(ymesh,xmesh); % the vortex phase
linPh = -2*pi*ymesh; % a phase of linear grating
radialPh = (xmesh.^2+ymesh.^2); % a phase of defocus
% combine the phases with appropriate scales (phases are additive)
% the 'pi' at the end causes inversion of the pattern
Ph = L*vortexPh + A*linPh/Lambda + B*radialPh/R^2;
% transmittance function (the real part of exp(I*Ph))
T = cos(Ph);
% the binary version
binT = T > cos(pi*aspect);
% plot the pattern
% imagesc(binT)
imagesc(T)
colormap(gray)
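One extra illustration (a sketch reusing the variables above): the number of spiral arms in the cosine pattern equals the topological charge, so sweeping L shows one to four arms with the same lens term.
figure
for Lk = 1:4
    subplot(2,2,Lk)
    imagesc(cos(Lk*vortexPh + B*radialPh/R^2))
    colormap(gray); axis image; axis off
    title(sprintf('L = %d', Lk))
end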