Dijkstra's Algorithm with Turn Penalty Giving Sub-Optimal Path

I'm having a problem returning the optimal path from A to E using Dijkstra's algorithm with a turn penalty of 0.25 in the following figure:
My implementation returns path ABDE (since the shortest distance to D is calculated as 3.05 along the curve instead of 3.25 along the straight lines), which has total cost 1 + 0.25 + 1.8 + 0.25 + 1 = 4.3.
However, path ABCDE is the optimal path with total cost 1 + 1 + 0.25 + 1 + 1 = 4.25. How do I modify my implementation to account for this? Right now, all I'm doing is,
if d[u] + w(u, v) + 0.25 < d[v], then d[v] = d[u] + w(u, v) + 0.25.

Dijkstra's algorithm does not work with a turn penalty applied this way, because once turns cost extra, the cost of leaving a node depends on the direction you arrived from, and the cheapest way to reach a node is no longer guaranteed to extend to the cheapest way through it. If you want to use Dijkstra's algorithm, you have to eliminate the turn penalty from the relaxation step, for example by transforming the graph into a graph of original-node/arrival-direction pairs, with edges and edge costs that incorporate the original problem's turn penalty. Dijkstra then runs unmodified on the expanded graph.
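For concreteness, here is a rough MATLAB sketch of that transformation (my illustration, not part of the original answer). The states of the expanded graph are the directed edges of the original graph, so the "arrival direction" at a node is simply which edge you came in on, and the penalty can be baked into the expanded edge costs. It assumes E is an m-by-3 edge list [u v w], S and T are the start and goal nodes, and isTurn(a,b,c) is a user-supplied predicate (hypothetical) that decides whether travelling a -> b -> c counts as a turn, which needs the node geometry from your figure:

penalty = 0.25;
m = size(E, 1);
src = []; dst = []; cost = [];
for i = 1:m
    for j = 1:m
        if E(i,2) == E(j,1)                      % edge i ends where edge j starts
            c = E(j,3);
            if isTurn(E(i,1), E(i,2), E(j,2))    % penalise only genuine turns
                c = c + penalty;
            end
            src(end+1) = i; dst(end+1) = j; cost(end+1) = c;
        end
    end
end
for i = 1:m                                      % virtual start (m+1) and goal (m+2) states
    if E(i,1) == S
        src(end+1) = m+1; dst(end+1) = i; cost(end+1) = E(i,3);
    end
    if E(i,2) == T
        src(end+1) = i; dst(end+1) = m+2; cost(end+1) = 0;
    end
end
G = digraph(src, dst, cost);
[statePath, totalCost] = shortestpath(G, m+1, m+2);   % ordinary Dijkstra on the expanded graph
nodeSeq = [S, E(statePath(2:end-1), 2).'];            % map edge states back to original nodes

With this setup the relaxation step needs no special case: the 0.25 is already part of the expanded edge weights, and it is only added where a turn actually happens.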


Neural network - exercise

I am currently teaching myself the concepts of neural networks and I am working with the very good pdf from
http://neuralnetworksanddeeplearning.com/chap1.html
I have also done a few of the exercises, but there is one exercise, or at least one step of it, that I really don't understand.
Task:
There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01.
I also found a solution, which can be seen in the second image.
I understand why the matrix has to have this shape, but I really struggle to understand the step where the user calculates
0.99 + 3*0.01
4*0.01
I really don't understand these two steps. I would be very happy if someone could help me understand this calculation.
Thank you very much for your help.
The output of the previous layer is 10x1 (call it x). The weight matrix W is 4x10, so the new output layer will be 4x1. There are two assumptions to start from:
First, x is 1 in exactly one row, e.g. xT = [1 0 0 0 0 0 0 0 0 0]. If you multiply this vector with the matrix W, the output is yT = [0 0 0 0], because the single 1 in x picks out the first column of W (the one for digit 0), which is all zeros.
Second, x is no longer exactly 1 in one row; instead it can be, for example, xT = [0.99 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01]. If you multiply this x with the first row of W, the result is 0.05 (I believe there is a typo here). When xT = [0.01 0.99 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01], multiplication with the first row of W gives 1.03, because:
0.01*0 + 0.99*1 + 0.01*0 + 0.01*1 + 0.01*0 + 0.01*1 + 0.01*0 + 0.01*1 + 0.01*0 + 0.01*1 = 1.03
So I believe there is a typo: the author probably assumed 4 ones in the first row of W, which is not the case, because there are 5 ones. If there really were 4 ones in the first row, then the results would indeed be 0.04 for the 0.99 in the first entry of x and 1.02 for the 0.99 in the second entry of x, which matches the 4*0.01 and 0.99 + 3*0.01 from the question.
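If it helps to see the numbers, here is a small MATLAB check of the calculation above (the layout of W is my assumption about the posted solution: one column per digit 0 through 9, with row 1 as the least-significant bit):

% Build the binary-conversion weight matrix: W(k, d+1) = bit k of digit d.
digits = 0:9;
W = zeros(4, 10);
for k = 1:4
    W(k, :) = bitget(digits, k);   % row 1 (least-significant bit) contains 5 ones
end
% Near-one-hot activation for digit 1: 0.99 in its entry, 0.01 everywhere else.
x = 0.01 * ones(10, 1);
x(2) = 0.99;                       % second entry corresponds to digit 1
y = W * x;
disp(y(1))                         % 0.99 + 4*0.01 = 1.03, matching the sum above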

FIR filter length: is the intercept included as a coefficient? -- Matlab

I have some confusion about the terminology and simulation of an FIR system. I would appreciate help in rectifying my mistakes and confirming what is correct.
Assume an FIR filter with coefficient array A = [1, c2, c3, c4]. The number of elements is L, so the length of the filter is L, but the order is L-1.
Confusion1: Is the intercept 1 considered as a coefficient? Is it always 1?
Confusion2: Is my understanding correct that for the given example the length L= 4 and order=3?
Confusion3: Mathematically, I can write it as:
x(n) = sum_{l=0}^{L-1} a(l) * u(n-l)
where u is the input data and l starts from zero. Then, to simulate the above equation, I have done the following convolution. Is it correct?:
N =100; %number of data
A = [1, 0.1, -0.5, 0.62];
u = rand(1,N);
x(1) = 0.0;
x(2) = 0.0;
x(3) = 0.0;
x(4) = 0.0;
for n = 5:N
x(n) = A(1)*u(n) + A(2)*u(n-1)+ A(3)*u(n-3)+ A(4)*u(n-4);
end
Confusion1: Is the intercept 1 considered as a coefficient? Is it always 1?
Yes, it is considered a coefficient, and no, it isn't always 1. It is very common to include a global scaling factor in the coefficient array by multiplying all the coefficients (i.e. scaling the input or output of a filter with coefficients [1,c1,c2,c3] by K is equivalent to using a filter with coefficients [K,K*c1,K*c2,K*c3]). Also note that many FIR filter design techniques generate coefficients whose amplitude peaks near the middle of the coefficient array and tapers off at the start and end.
Confusion2: Is my understanding correct that for the given example the length L= 4 and order = 3?
Yes, that is correct
Confusion3: [...] Then to simulate the above equation I have done the following convolution. Is it correct? [...]
Almost, but not quite. Here are a few things that you need to fix.
In the main for loop, applying the formula you would increment the index of A and decrement the index of u by 1 for each term, so you would actually get x(n) = A(1)*u(n) + A(2)*u(n-1) + A(3)*u(n-2) + A(4)*u(n-3).
You can actually start this loop at n = 4.
The first few outputs should still use the formula, but drop the terms u(n-k) for which n-k would be less than 1. So, for x(3) you'd be dropping 1 term, for x(2) you'd be dropping 2 terms, and for x(1) you'd be dropping 3 terms.
The modified code would look like the following:
x(1)=A(1)*u(1);
x(2)=A(1)*u(2) + A(2)*u(1);
x(3)=A(1)*u(3) + A(2)*u(2) + A(3)*u(1);
for n = 4:N
x(n) = A(1)*u(n) + A(2)*u(n-1)+ A(3)*u(n-2)+ A(4)*u(n-3);
end
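As a quick sanity check (my addition, not part of the answer), the corrected loop is the standard FIR difference equation, so it should agree with MATLAB's built-in filter, or with conv once the convolution tail is dropped:

x_filter = filter(A, 1, u);   % FIR filtering with zero initial conditions
x_conv = conv(u, A);          % full convolution has length N+3
x_conv = x_conv(1:N);         % keep only the first N samples
% max(abs(x - x_filter)) and max(abs(x - x_conv)) should both be ~0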

Possible "Traveling Salesman" function in Matlab?

I am looking to solve a Traveling Salesman type problem using a matrix in order to find the minimum time between transitions. The matrix looks something like this:
A = [inf 4 3 5;
1 inf 3 5;
4 5 inf 3;
6 7 1 inf]
The y-axis represents the "from" node and the x-axis represents the "to" node. I am trying to find the optimal time from node 1 to node 4. I was told that there is a Matlab function called "TravellingSalesman". Is that true, and if not, how would I go about solving this matrix?
Thanks!
Here's an outline of the brute-force algorithm to solve TSP for paths from node 1 to node n:
minCost = inf
minPath = zeros(1,n-2)
for each permutation P of the nodes [2..n-1]
// paths always start from node 1 and end on node n
C = A(1,P(1)) + A(P(1),P(2)) + A(P(2),P(3)) + ... +
A(P(n-3),P(n-2)) + A(P(n-2),n)
if C < minCost
minCost = C
minPath = P
elseif C == minCost // you only need this part if you want
minPath = [minPath; P] // ALL paths with the shortest distance
end
end
Note that the first and last terms in the sum are different because you know beforehand what the first and last nodes are, so you don't have to include them in the permutations. So in the example given, with n=4, there are actually only 2! = 2 possible paths.
The list of permutations can be precalculated using perms(2:n-1), but that might involve storing a large matrix ((n-2)! x (n-2)). Or you can calculate the cost as you generate each permutation. There are several files on the MathWorks File Exchange with names like nextPerm that should work for you. Either way, as n grows you're going to be generating a very large number of permutations and your calculations will take a very long time.
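For small n, a direct MATLAB version of the pseudocode could look like this (using perms and the example matrix from the question):

A = [inf 4 3 5;
     1 inf 3 5;
     4 5 inf 3;
     6 7 1 inf];
n = size(A, 1);
P = perms(2:n-1);                 % every ordering of the intermediate nodes
minCost = inf;
minPath = [];
for i = 1:size(P, 1)
    route = [1 P(i,:) n];         % start is fixed at node 1, end at node n
    C = 0;
    for k = 1:numel(route)-1
        C = C + A(route(k), route(k+1));
    end
    if C < minCost
        minCost = C;
        minPath = route;
    end
end
% For this matrix the search finds minPath = [1 2 3 4] with minCost = 10.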

Matrix division with constraints

If I have a set of linear equations (random matrix generated):
2x + 4y + 6z = 4
5x + 3y + 7z = 1
9x + 7y + 3z = 6
and I want to solve for x, y and z I just do a matrix division. But if I want to set a constraint on this matrix, like x > 0 or x = 4, is there a way of doing this?
Is adding another row correct, for example:
2x + 4y + 6z = 4
5x + 3y + 7z = 1
9x + 7y + 3z = 6
1x + 0y + 0z = 1 <---
and is there a general way of applying these constraints with bigger matrices and more complex coefficients?
In MATLAB, use lsqnonneg for non-negativity constraints (on all variables). If you have the Optimization Toolbox, then you would use lsqlin to solve problems with inequality constraints, or where only certain variables are bound-constrained.
You could of course use an LP solver like linprog, but if you have linprog, then you also have lsqlin! I suppose you could even use the quadprog solver, but why bother? Use the right tool for the problem.
As for the idea of using an explicitly iterative solver to solve it like fmincon, yes, you could do that, but you will be left with a less exact result that takes more time to solve.
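For the 3-by-3 example in the question, the lsqlin calls could look like the following sketch (Optimization Toolbox required; x > 0 is treated here as the bound x >= 0, and x = 4 as an equality constraint):

C = [2 4 6; 5 3 7; 9 7 3];
d = [4; 1; 6];
% Non-negativity on x only: a lower bound on the first variable.
lb = [0; -Inf; -Inf];
x_nonneg = lsqlin(C, d, [], [], [], [], lb, []);
% Equality constraint x = 4, expressed as Aeq*x = beq.
Aeq = [1 0 0];
beq = 4;
x_eq4 = lsqlin(C, d, [], [], Aeq, beq);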
Yes, you should investigate either Lagrange multipliers or the simplex method to see how it's done.

Extremely large weighted average

I am using 64 bit matlab with 32g of RAM (just so you know).
I have a file (vector) of 1.3 million numbers (integers). I want to make another vector of the same length, where each point is a weighted average of the entire first vector, weighted by the inverse distance from that position (actually it's position ^-0.1, not ^-1, but for example purposes). I can't use matlab's 'filter' function, because it can only average things before the current point, right? To explain more clearly, here's an example of 3 elements
data = [ 2 6 9 ]
weights = [ 1 1/2 1/3; 1/2 1 1/2; 1/3 1/2 1 ]
results=data*weights= [ 8 11.5 12.666 ]
i.e.
8 = 2*1 + 6*1/2 + 9*1/3
11.5 = 2*1/2 + 6*1 + 9*1/2
12.666 = 2*1/3 + 6*1/2 + 9*1
So each point in the new vector is the weighted average of the entire first vector, weighting by 1/(distance from that position+1).
I could just remake the weight vector for each point, then calculate the results vector element by element, but this requires 1.3 million iterations of a for loop, each of which contains 1.3million multiplications. I would rather use straight matrix multiplication, multiplying a 1x1.3mil by a 1.3milx1.3mil, which works in theory, but I can't load a matrix that large.
I am then trying to make the matrix using a shell script and index it in matlab so only the relevant column of the matrix is called at a time, but that is also taking a very long time.
I don't have to do this in matlab, so any advice people have about handling such large numbers and getting averages would be appreciated. Since I am using a weight of ^-0.1, and not ^-1, it does not drop off that fast - the millionth point is still weighted at 0.25 compared to the original point's weighting of 1, so I can't just cut it off as it gets big either.
Hope this was clear enough?
Here is the code for the answer below (so it can be formatted?):
data = load('/Users/mmanary/Documents/test/insertion.txt');
data=data.';
total=length(data);
x=1:total;
datapad=[zeros(1,total) data];
weights = ([(total+1):-1:2 1:total]).^(-.4);
weights = weights/sum(weights);
Fdata = fft(datapad);
Fweights = fft(weights);
Fresults = Fdata .* Fweights;
results = ifft(Fresults);
results = results(1:total);
plot(x,results)
The only sensible way to do this is with FFT convolution, which is what underpins the filter function and similar tools. It is very easy to do manually:
% Simulate some data
n = 10^6;
x = randi(10,1,n);
xpad = [zeros(1,n) x];
% Setup smoothing kernel
k = 1 ./ [(n+1):-1:2 1:n];
% FFT convolution
Fx = fft(xpad);
Fk = fft(k);
Fxk = Fx .* Fk;
xk = ifft(Fxk);
xk = xk(1:n);
Takes less than half a second for n=10^6!
This is probably not the best way to do it, but with lots of memory you could definitely parallelize the process.
You can construct sparse matrices consisting of entries of your original matrix which have value i^(-1) (where i = 1 .. 1.3 million), multiply them with your original vector, and sum all the results together.
So for your example the product would be essentially:
a = rand(3,1);
b1 = [1 0 0;
0 1 0;
0 0 1];
b2 = [0 1 0;
1 0 1;
0 1 0] / 2;
b3 = [0 0 1;
0 0 0;
1 0 0] / 3;
c = sparse(b1) * a + sparse(b2) * a + sparse(b3) * a;
Of course, you wouldn't construct the sparse matrices this way. If you wanted fewer iterations of the inner loop, you could put more than one of the i's in each matrix.
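As a small illustration of a cleaner construction (my sketch, not from the answer), the band matrices from the toy example can be generated with spdiags and accumulated in a loop; this only mirrors the idea on a short vector, and for the full 1.3M case the FFT-convolution answer above is still the practical route:

x = rand(5, 1);
n = numel(x);
c = zeros(n, 1);
for k = 0:n-1
    w = 1/(k+1);                                  % weight for distance k
    if k == 0
        Bk = spdiags(w*ones(n,1), 0, n, n);       % main diagonal (the b1 above)
    else
        Bk = spdiags(w*ones(n,2), [-k k], n, n);  % k-th sub- and super-diagonal (b2, b3, ...)
    end
    c = c + Bk * x;
end
% c now equals W*x with W(i,j) = 1/(abs(i-j)+1), the same as the b1, b2, b3 example.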
Look into the parfor loop in MATLAB: http://www.mathworks.com/help/toolbox/distcomp/parfor.html
I can't use matlab's 'filter' function, because it can only average things before the current point, right?
That is not correct. You can always pad your data (or the filtered data) with zeros and remove the extra samples afterwards. Since filtering with filter (you can also use conv, by the way) is a linear operation, this doesn't change the result: padding with zeros does nothing on its own, and linearity lets you reorder the steps as pad -> filter -> drop the extra samples.
Anyway, in your example, you can take the averaging kernel to be:
w = 1 ./ [3 2 1 2 3]; % this kernel introduces a delay of 2 samples
and then simply:
result = filter(w, 1, [data, zeros(1,3)]); % or conv(data, w)
% remove the delay introduced by the kernel
result = result(3:end-1);
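The same trick scales up to the full-length problem conceptually, as in the rough sketch below, which uses the (distance+1)^-0.1 weighting from the question and assumes data is a row vector; note that with a kernel this long a direct filter or conv call is very slow, which is exactly what the FFT-convolution answer above avoids:

n = numel(data);
w = (abs(-(n-1):(n-1)) + 1) .^ (-0.1);        % symmetric kernel, centre weight 1
result = filter(w, 1, [data, zeros(1, n-1)]); % pad so the tail gets computed
result = result(n:end);                       % drop the kernel's n-1 sample delay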
You considered only 2 options:
multiplying the 1.3M*1.3M matrix with a vector once, or multiplying two 1.3M vectors 1.3M times.
But you can divide your weight matrix into as many sub-matrices as you wish, and do a multiplication of an n*1.3M matrix with the vector 1.3M/n times.
I assume that the fastest will be when there is the smallest number of iterations, i.e. when n creates the largest sub-matrix that fits in your memory without making your computer start swapping pages to your hard drive.
With your memory size you should start with n = 5000.
You can also make it faster by using parfor (with n divided by the number of processors).
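A rough sketch of that blocking idea (the block size of 5000 comes from the suggestion above, the (distance+1)^-0.1 weighting from the question, and data is assumed to be a column vector; implicit expansion needs R2016b or later):

N = numel(data);
blk = 5000;                        % shrink this if blk*N*8 bytes of doubles won't fit in RAM
results = zeros(N, 1);
for r0 = 1:blk:N
    rows = r0 : min(r0+blk-1, N);                % output positions covered by this block
    W = (abs(rows.' - (1:N)) + 1) .^ (-0.1);     % the corresponding rows of the weight matrix
    results(rows) = W * data;                    % weighted sums for these positions
end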
The brute force way will probably work for you, with one minor optimisation in the mix.
The ^-0.1 operations to create the weights will take a lot longer than the + and * operations to compute the weighted-means, but you re-use the weights across all the million weighted-mean operations. The algorithm becomes:
Create a weightings vector with all the weights any computation would need:
weights = (abs(-n:n) + 1).^-0.1
For each element in the vector:
Index the relevant portion of the weights vector to consider the current element as the 'centre'.
Perform the weighted-mean with the weights portion and the entire vector. This can be done with a fast vector dot-multiply followed by a scalar division.
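A rough sketch of that loop (names are mine; data is assumed to be a row vector and the weights use the (distance+1)^-0.1 form from the question):

n = numel(data);
allw = (abs(-(n-1):(n-1)) + 1) .^ (-0.1);   % every weight any position can need
results = zeros(1, n);
for i = 1:n
    w = allw(n-i+1 : 2*n-i);                % the slice of weights centred on position i
    results(i) = (data * w.') / sum(w);     % weighted mean: dot product then scalar division
end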
The main loop does n^2 additions and multiplications. With n equal to 1.3 million that's 3.4 trillion operations. A single core of a modern 3 GHz CPU can do, say, 6 billion additions/multiplications a second, so that comes out to around 10 minutes. Add time for indexing the weights vector and overheads, and I still estimate you could come in under half an hour.