Aligning the dimensions for conformity in MATLAB

I run this simple code repeatedly, and it keeps displaying the usual dimension-mismatch error even though I have tried extending the dimensions of rho_1. I am not sure whether the error is mainly due to the CDF function. Any suggestions for solving this problem? Thanks
rho_1 = [2*10^-4];
beta=4;
Cua = pi*gamma(1+2/beta)*gamma(1-2/beta);
A = (4*pi-36*sqrt(3)+64)/(12*pi-9*sqrt(3));
p2=10^(15/10);
p1=10^(15/10);
T_1 = 10^(2/10);
T_2 = 10^(2/10);
B_one = 1/2*rho_1*Cua*((T_2)^(2/beta))*(A^2)*(((p2/p1)^(2/beta))+ 1);
Ry_low = 0:10:50; A=(4*pi-36*sqrt(3)+64)/(12*pi-9*sqrt(3));
Ry_high = 50;
D_one= 1/2*rho_1*Cua*((T_2)^(2/beta)) * (A^2) *(((p1/p2)^(2/beta))+ 1) ;
C_rov = ((pi* rho_1)/(2* sqrt(B_one*D_one)*(Ry_high - Ry_low).^2))*((normcdf(sqrt(2*B_one)*Ry_high) - (normcdf(sqrt(2*B_one)*Ry_low))) *((normcdf(sqrt(2*D_one)*Ry_high) - (normcdf(sqrt(2*D_one)*Ry_low)))));
plot(Ry_low,C_rov)

Use element-wise (dot) multiplication and division. I also corrected D_1 to D_one. Replace your line 13 with this:
C_rov = ((pi* rho_1)./(2* sqrt(B_one*D_one).*(Ry_high - Ry_low).^2)).*((normcdf(sqrt(2*B_one)*Ry_high) - (normcdf(sqrt(2*B_one)*Ry_low))).*((normcdf(sqrt(2*D_one)*Ry_high) - (normcdf(sqrt(2*D_one)*Ry_low)))));
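For reference, the dimension error comes from applying the matrix operator * to two same-sized row vectors; a tiny sketch with made-up values shows the difference:
x = 0:10:50; % 1x6 row vector, like Ry_low
y = (50 - x).^2; % 1x6, element-wise square
% x * y % would throw the dimension error (inner matrix dimensions must agree)
z = x .* y; % 1x6, element-wise product, which is what the formula needs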


Neural network: How to calculate the error for a unit

I am trying to work out question 26 from this exam paper (the exam is from 2002, not one I'm getting marked on!)
This is the exact question (not reproduced here); the answer is B.
Could someone point out where I'm going wrong?
I worked out I1 from the previous question on the paper to be 0.982.
The activation function is sigmoid. So, for output 1, should the error term be:
d1 = f(Ik)[1-f(Ik)](Tk-Zk)
From the question:
T1 = 0.58
Z1 = 0.83
T1 - Z1 = -0.25
sigmoid(I1) = sigmoid(0.982) = 0.728
1-sigmoid(I1) = 1-0.728 = 0.272
So putting this all together:
d1 = (0.728)(0.272)(-0.25)
d1 = -0.049
But the answer should be d1 = -0.0353
Can anyone show me where I'm going wrong?
Edit 1: I tried to work backwards to understand the situation, but I still got stuck.
I said:
d1 = f(Ik)[1-f(Ik)](Tk-Zk)
-0.0353 = f'(Ik)(-0.25) (where I know -0.0353 is the right answer, and -0.25 is Tk - Zk)
0.1412 = f'(Ik)
0.1412 = f(Ik)[1-f(Ik)]
0.1412 = sigmoid(x)·(1 - sigmoid(x))
...but then I got stuck, if anyone has an idea
The problem is that the I₁ you got from the previous question is not the same I₁ you need for this task.
The value of I₁ changes depending on the input values (which are different for this question)!
For this question you can instead use the fact that f(Iₖ) = zₖ:
δₖ = f(Iₖ)·[1 - f(Iₖ)]·(tₖ - zₖ)
= zₖ·[1 - zₖ]·(tₖ - zₖ)
→ δ₁ = 0.83·[1 - 0.83]·(0.58 - 0.83) = 0.83·0.17·(-0.25) ≈ -0.0353
→ δ₂ = 0.26·[1 - 0.26]·(0.70 - 0.26) = 0.26·0.74·0.44 ≈ 0.0847
→ δ₃ = 0.56·[1 - 0.56]·(0.20 - 0.56) = 0.56·0.44·(-0.36) ≈ -0.0887
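If you want to sanity-check the arithmetic, a quick MATLAB sketch (values copied from above) reproduces the same numbers:
z = [0.83 0.26 0.56]; % network outputs z_k
t = [0.58 0.70 0.20]; % targets t_k
delta = z.*(1 - z).*(t - z) % ≈ [-0.0353 0.0847 -0.0887]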

Fit with the parameter

I am quite new to MATLAB and am trying to use some code I found online.
I am trying to fit a curve described by HydrodynamicSpectrum, but instead of supplying fvA and fmA as fixed inputs, I would like to obtain fitted values for these parameters as well.
I have tried removing them and changing them, but nothing works. I was wondering if anyone here could point me in the right direction.
specFunc = @(f, para)HydrodynamicSpectrum(f, [para fvA fmA]);
[fit.AXfc, fit.AXD] = NonLinearFit(fit.f(indXY), fit.AXSpec(indXY), specFunc, [iguess_AXfc iguess_AXD]);
[fit.AYfc, fit.AYD] = NonLinearFit(fit.f(indXY), fit.AYSpec(indXY), specFunc, [iguess_AYfc iguess_AYD]);
[fit.ASumfc, fit.ASumD] = NonLinearFit(fit.f(indSum), fit.ASumSpec(indSum), specFunc, [iguess_ASumfc iguess_ASumD]);
predictedAX = HydrodynamicSpectrum(fit.f, [fit.AXfc fit.AXD fvA fmA]);
predictedAY = HydrodynamicSpectrum(fit.f, [fit.AYfc fit.AYD fvA fmA]);
predictedASum = HydrodynamicSpectrum(fit.f, [fit.ASumfc fit.ASumD fvA fmA]);
function spec = HydrodynamicSpectrum(f, para);
fc = para(1);
D = para(2);
fv = para(3);
fm = para(4);
f = abs(f); %Kludge!
spec = D/pi^2*(1+sqrt(f/fv))./((fc - f.*sqrt(f./fv) - (f.^2)/fm).^2 + (f + f.*sqrt(f./fv)).^2);
function [fc, D, sfc, sD] = NonLinearFit(f, spec, specFunc, init);
func = @(para, f)spec./specFunc(f, para);
[paraFit, resid, J] = nlinfit(f, ones(1, length(spec)), func, init);
fc = paraFit(1);
D = paraFit(2);
ci = nlparci(real(paraFit), real(resid), real(J)); % Kludge!!
sfc = (ci(1,2) - ci(1,1))/4;
sD = (ci(2,2) - ci(2,1))/4;
[paraFit, resid, J] = nlinfit(f, ones(1, length(spec)), func, init);
It looks like you obtain your fitted parameters with this line, and then process them further to get the other outputs of the function. You can modify your second function (NonLinearFit) to return them as well.
As there are very few comments and the question seems application-specific, there is not much more help I can give with what you have presented.
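One possible way to do that, as a rough sketch only (NonLinearFit4 and the fit.AXfv / fit.AXfm field names are hypothetical, and the confidence-interval part is omitted): fit all four parameters [fc D fv fm] at once and reuse the old fixed values fvA and fmA as initial guesses.
function [fc, D, fv, fm] = NonLinearFit4(f, spec, init)
func = @(para, f)spec./HydrodynamicSpectrum(f, para); % para = [fc D fv fm], same normalisation trick as NonLinearFit
paraFit = nlinfit(f, ones(1, length(spec)), func, init);
fc = paraFit(1); D = paraFit(2); fv = paraFit(3); fm = paraFit(4);
Example call:
[fit.AXfc, fit.AXD, fit.AXfv, fit.AXfm] = NonLinearFit4(fit.f(indXY), fit.AXSpec(indXY), [iguess_AXfc iguess_AXD fvA fmA]);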

Looking for a more efficient way of writing my MATLAB code

I have written the following in MATLAB:
for i = 1:3
alpha11(i) = b+a.*randn(1,1);
alpha22(i) = b+a.*randn(1,1);
alpha12(i) = b+a.*randn(1,1);
alpha21(i) = b+a.*randn(1,1);
AoD11(i) = randi([-180/6 +180/6],1,1);
AoA11(i) = randi([-180/6 +180/6 ],1,1);
AoD22(i) = randi([-180/6 +180/6],1,1);
AoA22(i) = randi([-180/6 +180/6 ],1,1);
AoD21(i) = randi([-180 +180],1,1);
AoA21(i) = randi([-180 +180 ],1,1);
AoD12(i) = randi([-180 +180],1,1);
AoA12(i) = randi([-180 +180 ],1,1);
ctet11(i)= ((2*pi)/lambda)*d*sin(AoD11(i));
ctet22(i)= ((2*pi)/lambda)*d*sin(AoD22(i));
ctet12(i)= ((2*pi)/lambda)*d*sin(AoD12(i));
ctet21(i)= ((2*pi)/lambda)*d*sin(AoD21(i));
f_t11_ula{i}=transpose((1/sqrt(M))*[ 1 exp(j*ctet11(i)) exp(j*2*ctet11(i)) exp(j*3*ctet11(i)) ]);
f_t22_ula{i}=transpose((1/sqrt(M))*[ 1 exp(j*ctet22(i)) exp(j*2*ctet22(i)) exp(j*3*ctet22(i)) ]);
f_t12_ula{i}=transpose((1/sqrt(M))*[ 1 exp(j*ctet12(i)) exp(j*2*ctet12(i)) exp(j*3*ctet12(i)) ]);
f_t21_ula{i}=transpose((1/sqrt(M))*[ 1 exp(j*ctet21(i)) exp(j*2*ctet21(i)) exp(j*3*ctet21(i)) ]);
cter11(i)= ((2*pi)/lambda)*d*sin(AoA11(i));
cter22(i)= ((2*pi)/lambda)*d*sin(AoA22(i));
cter12(i)= ((2*pi)/lambda)*d*sin(AoA12(i));
cter21(i)= ((2*pi)/lambda)*d*sin(AoA21(i));
f_r11_ula{i}=transpose((1/sqrt(O))*[ 1 exp(j*cter11(i)) exp(j*2*cter11(i)) exp(j*3*cter11(i)) ]);
f_r22_ula{i}=transpose((1/sqrt(O))*[ 1 exp(j*cter22(i)) exp(j*2*cter22(i)) exp(j*3*cter22(i))]);
f_r12_ula{i}=transpose((1/sqrt(O))*[ 1 exp(j*cter12(i)) exp(j*2*cter12(i)) exp(j*3*cter12(i)) ]);
f_r21_ula{i}=transpose((1/sqrt(O))*[ 1 exp(j*cter21(i)) exp(j*2*cter21(i)) exp(j*3*cter21(i))]);
channel11{i}= alpha11(i) * f_r11_ula{i}* conj(transpose(f_t11_ula{i})) ;
channel22{i}= alpha22(i) * f_r22_ula{i}* conj(transpose(f_t22_ula{i})) ;
channel12{i}= alpha12(i) * f_r12_ula{i}* conj(transpose(f_t12_ula{i})) ;
channel21{i}= alpha21(i) * f_r21_ula{i}* conj(transpose(f_t21_ula{i})) ;
end
I am writing this question to ask how I can compress this code; as you can see it is not very nicely written and has many repetitions. I don't know how to write it in fewer commands. Every command is repeated four times, indexed by 11, 12, 21 and 22.
P.S. If someone wants to run the code, the following variables are needed:
a = 1;
b = 0;
M=4;
O = 4;
lambda=0.15;
d=lambda/2;
Looking forward to suggestions.
As @David mentioned, it can be done using 3D arrays, which removes much of the code repetition.
Here is an example of how it could be done:
sz = [2,2,3];
alpha = b + a.*randn(sz); % Gaussian gains for the 2x2 links, 3 realisations each
AoD = randi([-180/6 +180/6],sz); % start with +/-30 degrees everywhere
AoA = randi([-180 +180],sz); % start with +/-180 degrees everywhere
mask = logical(repmat(eye(2),1,1,3)); % true on the diagonal (11/22) entries
[AoA(mask), AoD(~mask)] = deal(AoD(~mask),AoA(mask)); % diagonal angles end up in +/-30, off-diagonal ones in +/-180, as in the original code
ctet = 2*pi/lambda * d * sin(AoD);
f_t = reshape(arrayfun(@(x) exp(1j*(0:3)'*ctet(x))/sqrt(M),1:12,'UniformOutput',0),sz); % 4-element transmit steering vectors
cter = (2*pi)/lambda*d*sin(AoA);
f_r = reshape(arrayfun(@(x) exp(1j*(0:3)'*cter(x))/sqrt(O),1:12,'UniformOutput',0),sz); % 4-element receive steering vectors
channel = reshape(arrayfun(@(x) alpha(x) * f_r{x} * conj(transpose(f_t{x})), 1:12, 'UniformOutput',0),sz); % per-link channel matrices
Note that for each of the variables mentioned in the question, the same values can be accessed using the corresponding 3D index. For example, using the code above, the value that was previously in AoA11(1) is now in AoA(1,1,1). Similarly, the matrix that was stored in channel11{1} is now in channel{1,1,1}.
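For example (the left-hand names below are just placeholders), the entries corresponding to the old diagonal variables can be read out as:
AoD22_3 = AoD(2,2,3); % previously AoD22(3)
H11_2 = channel{1,1,2}; % previously channel11{2}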
Hope this helps.

Julia MethodError converting Complex{Float64}

I'm a novice to Julia and I have the following code, which gives this error:
MethodError(convert,(Complex{Float64},[-1.0 - 1.0im])).
I would like to know the source of the error and how to optimize this piece of code for speed.
This is my code:
function OfdmSym()
N = 64
n = 1000
symbol = convert(Array{Complex{Float64},2},ones(n,64)) # I need Array{Complex{Float64},2}
data = convert(Array{Complex{Float64},2},ones(1,48)) # I need Array{Complex{Float64},2}
const unused = convert(Array{Complex{Float64},2},zeros(1,12))
const pilot = convert(Array{Complex{Float64},2},ones(1,4))
const s = convert(Array{Complex{Float64},2},[-1-im -1+im 1-im 1+im])# QPSK Complex Data
for i=1:n # generate 1000 symbols
for j = 1:48 # generate 48 complex data symbols whose basis is s
r = rand(1:4,1) # 1, 2, 3, or 4
data[j] = s[r]
end
symbol[i,:]=[data[1,1:10] pilot[1] data[1,11:20] pilot[2] data[1,21:30] pilot[3] data[1,31:40] pilot[4] data[1,41:48] unused]
end
end
As this is my first day programming in Julia, I tried very hard to find the source of the error, without success. I also tried to optimize and pre-allocate the arrays as best I could, but when I time the code I see that it is far from optimal. I appreciate your help.
The error comes from r = rand(1:4,1), which returns a one-element array rather than an integer, so s[r] is also an array and cannot be converted into the scalar Complex{Float64} element data[j]; rand(1:4) returns a plain integer instead. With that change made, try this much simpler code:
function OfdmSym()
N = 64
n = 1000
symbol = ones(Complex{Float64}, n, 64)
data = ones(Complex{Float64}, 1, 48)
unused = zeros(Complex{Float64}, 1, 12)
pilot = ones(Complex{Float64}, 1, 4)
s = [-1-im -1+im 1-im 1+im]
for i=1:n # generate 1000 symbols
for j = 1:48 # generate 48 complex data symbols whose basis is s
r = rand(1:4) # 1, 2, 3, or 4
data[j] = s[r]
end
symbol[i,:]=[data[1,1:10] pilot[1] data[1,11:20] pilot[2] data[1,21:30] pilot[3] data[1,31:40] pilot[4] data[1,41:48] unused]
end
end
OfdmSym()
I wouldn't worry too much about optimizing things until you have got it working correctly. The way you have it set up now seems like it'd be kinda inefficient due to all the slicing of arrays - it'd be better to try to build symbol directly.

Extract numbers from specific image

I am involved in a project that I think you can help me with. I have multiple images, which you can see here: Images to recognize. The goal is to extract the numbers between the dashed lines. What is the best approach to do that? My idea from the beginning has been to find the coordinates of the dashed lines, crop the image there, and then just run OCR software. But it is not easy to find those coordinates. Can you help me, or suggest a better approach?
Best regards,
Pedro Pimenta
You may start by looking at more obvious (bigger) objects in your images. The dashed lines are way too small in some images. Searching for the "euros milhoes" logo and the barcode will be easier and it will help you have an idea of the scale and rotation involved.
To find these objects without using template matching, you can binarize your image (watch out for the background texture) and use Hu moments on the contours/blobs.
Don't expect good OCR accuracy on images where the numbers are smaller than 8-10 pixels.
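If you are working in MATLAB, a minimal sketch of the binarisation and blob-extraction step could look like the following (Image Processing Toolbox assumed; the file name is made up, and the Hu moments would still have to be computed per blob):
I = imread('ticket.jpg'); % hypothetical file name
G = rgb2gray(I);
BW = ~imbinarize(G); % global Otsu threshold, inverted so the dark print becomes the foreground
stats = regionprops(BW, 'BoundingBox', 'Area'); % one entry per blob/contour
stats = stats([stats.Area] > 50); % drop tiny speckles from the paper texture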
You can use python-tesseract (https://code.google.com/p/python-tesseract/); it works with your image. What you need to do is split the result string. I used your https://www.dropbox.com/sh/kcybs1i04w3ao97/u33YGH_Kv6#f:euro9.jpg to test, and the source code is below. UPDATE:
# -*- coding: utf-8 -*-
from PIL import Image
from PIL import ImageEnhance
import tesseract

im = Image.open('test.jpg')
enhancer = ImageEnhance.Contrast(im)
im = enhancer.enhance(4)
im = im.convert('1')
w, h = im.size
im = im.resize((int(w * 416.0 / h), 416))  # scale proportionally so the height becomes 416 px
w, h = im.size  # refresh the dimensions; the rest of the code works on the resized image
pix = im.load()
LINE_CR = 0.01
WHITE_HEIGHT_CR = int(h * (20 / 416.0))
status = 0
white_line = []
for i in xrange(h):
    line = []
    for j in xrange(w):
        line.append(pix[(j, i)])
    p = line.count(0) / float(w)
    if not p > LINE_CR:
        white_line.append(i)
wp = None
for i in range(10, len(white_line) - WHITE_HEIGHT_CR):
    k = white_line[i]
    if white_line[i + WHITE_HEIGHT_CR] == k + WHITE_HEIGHT_CR:
        wp = k
        break
result = []
flag = 0
while 1:
    if wp < 0:
        result.append(wp)
        break
    line = []
    for i in xrange(w):
        line.append(pix[(i, wp)])
    p = line.count(0) / float(w)
    if flag == 0 and p > LINE_CR:
        l = []
        for xx in xrange(20):
            l.append(pix[(xx, wp)])
        if l.count(0) > 5:
            break
        l = []
        for xx in xrange(416-1, 416-100-1, -1):
            l.append(pix[(xx, wp)])
        if l.count(0) > 17:
            break
        result.append(wp)
        wp -= 1
        flag = 1
        continue
    if flag == 1 and p < LINE_CR:
        result.append(wp)
        wp -= 1
        flag = 0
        continue
    wp -= 1
result.reverse()
for i in range(1, len(result)):
    if result[i] - result[i - 1] < 15:
        result[i - 1] = -1
result = filter(lambda x: x >= 0, result)
im = im.crop((0, result[0], w, result[-1]))
im.save('test_converted.jpg')
api = tesseract.TessBaseAPI()
api.Init(".", "eng", tesseract.OEM_DEFAULT)
api.SetVariable("tessedit_char_whitelist", "0123456789abcdefghijklmnopqrstuvwxyz")
api.SetPageSegMode(tesseract.PSM_AUTO)
mImgFile = "test_converted.jpg"
mBuffer = open(mImgFile, "rb").read()
result = tesseract.ProcessPagesBuffer(mBuffer, len(mBuffer), api)
print "result(ProcessPagesBuffer)=", result
Dependencies: Python 2.7, python-tesseract-win32, python-opencv, numpy, PIL. Be sure to follow python-tesseract's setup instructions.