I am trying to use Google OR-Tools for the vehicle routing problem.
Here is the link: https://developers.google.com/optimization/routing/vrp .
I am trying to use Google's example code, but I got stuck on this piece of code:
def add_distance_dimension(routing, distance_callback):
    """Add Global Span constraint"""
    distance = 'Distance'
    maximum_distance = 3000  # Maximum distance per vehicle.
    routing.AddDimension(
        distance_callback,
        0,  # null slack
        maximum_distance,
        True,  # start cumul to zero
        distance)
    distance_dimension = routing.GetDimensionOrDie(distance)
    # Try to minimize the max distance among vehicles.
    distance_dimension.SetGlobalSpanCostCoefficient(100)
I don't get the meaning of the last instruction:
distance_dimension.SetGlobalSpanCostCoefficient(100)
What's the purpose of this function and what's the meaning of the argument? Why is there a "100" there?
The documentation, which may very well have been updated since this question was posted, spells out the meaning of the 100:
The method SetGlobalSpanCostCoefficient sets a large coefficient (100) for the global span of the routes, which in this example is the maximum of the distances of the routes. This makes the global span the predominant factor in the objective function, so the program minimizes the length of the longest route.
In general (from the API reference), the method
[sets] a cost proportional to the global dimension span, that is the difference between the largest value of route end cumul variables and the smallest value of route start cumul variables. In other words: global_span_cost = coefficient * (Max(dimension end value) - Min(dimension start value)).
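To make the effect of the 100 concrete, here is a small, hedged illustration with made-up numbers (the route lengths below are assumptions, not values from the example):
coefficient = 100
longest_route_end = 2500     # max route end cumul, i.e. the distance of the longest route
shortest_route_start = 0     # min route start cumul ("start cumul to zero" in the snippet above)
global_span_cost = coefficient * (longest_route_end - shortest_route_start)
print(global_span_cost)      # 250000
This span term dwarfs the individual arc costs, so the solver concentrates on shrinking the longest route; with a coefficient of 1, the sum of arc costs would dominate the objective instead.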
I am relatively new to MATLAB. I computed the consecutive (running) mean of a set of 1e6 random numbers drawn with a given mean and standard deviation. Initially the calculated mean fluctuates and then converges to a certain value.
I would like to know the index (e.g. the 100th position) at which the mean converges. I have no idea how to do that.
I tried using a logical operator, but I would have to go through 1e6 data points, and even then I still can't find the index.
Y_c= sigma_c * randn(n_r, 1) + mu_c; %Random number creation
Y_f=sigma_f * randn(n_r, 1) + mu_f;%Random number creation
P_u=gamma*(B*B)/2.*N_gamma+q*B.*N_q + Y_c*B.*N_c; %Calculation of Ultimate load
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u))); %Progressive Cumulative Mean of system response
logical(diff(prog_mu==0)); %Find index
I suspect the issue is that the mean will never truly be constant, but will rather fluctuate around the "true mean". As such, you'll most likely never encounter a situation where the two consecutive values of the cumulative mean are identical. What you should do is determine some threshold value, below which you consider fluctuations in the mean to be approximately equal to zero, and compare the difference of the cumulative mean to that value. For instance:
epsilon = 0.01;
const_ind = find(abs(diff(prog_mu))<epsilon,1,'first');
where epsilon will be the threshold value you choose. The find command will return the index at which the variation in the cumulative mean first drops below this threshold value.
EDIT: As was pointed out, this method may fail if the first few random numbers happen to be generated such that the difference between consecutive cumulative means is already less than the epsilon value, even though the mean has not yet converged. I would therefore like to suggest a different approach.
We calculate the cumulative means, as before, like so:
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u)));
We also calculate the difference in these cumulative means, as before:
df_prog_mu = diff(prog_mu);
Now, to ensure that convergence has actually been reached, we find the first index at which the change in the cumulative mean drops below the threshold value epsilon and all subsequent changes also stay below it. To phrase this another way, we want the index just after the last position in the array where the change in the cumulative mean is still above the threshold:
conv_index = find(abs(df_prog_mu) >= epsilon, 1, 'last') + 1;
In doing so, we guarantee that the change at that index, and at every subsequent index, stays below your predetermined threshold value.
I wouldn't imagine that the mean would suddenly become constant at a single index. Wouldn't it asymptotically approach a constant value? I would recommend a for loop to calculate the running mean (it sounds like maybe you've already done this part?) like this:
avg = zeros(1, length(x)); % preallocate for speed
for k = 1:length(x)
    avg(k) = mean(x(1:k)); % mean of the first k samples
end
Then plot the consecutive mean:
plot(avg)
hold on % this will allow us to plot more data on the same figure later
If you're trying to find the point at which the consecutive mean comes within a certain range of the true mean, try this:
Tavg = 5; % or whatever your true mean is
err = 0.01; % the range you want the consecutive mean to reach before we say that it "became constant"
inRange = avg>(Tavg-err) & avg<(Tavg+err); % gives you a binary logical array telling you which values fell within the range
q = 1000; % set this as high as you can while still getting a value for constIndex
constIndex = [];
for k = 1:length(inRange)-q
    if all(inRange(k:k+q)) % the mean stays within the range for the next q samples as well
        constIndex = k;
        break % keep the first index at which the mean settles into the range
    end
end
The other answer takes a similar approach, but makes the unsafe assumption that the first value to fall within the range is the value where the function starts to converge. Any value could randomly fall within that range; we need to make sure that the following values also fall within it. In the above code, you can tune "q" and "err" to optimize your result. I would recommend double-checking it by plotting.
plot(constIndex, avg(constIndex), '*') % mark the convergence point on the running-mean plot
I am trying to learn the basics of MATLAB, and I wanted to write a script in which I define a vector x with a step d of size 2*pi/1000, and then plot two sine functions of x: the first with a frequency of 1, and the second with a frequency of 10.3.
This is what I did:
d=(2*pi/1000);
x=-pi:d:pi;
first=sin(x);
second=sin(10.3*x);
plot(x,first,x,second);
My question: what is the difference between
x=linspace(-pi,pi,1000);
and
d=(2*pi/1000);
x=-pi:d:pi;
? I am asking because I got confused: I thought they were the same, but apparently something is wrong with my assumption.
Also, is there a more concise way to write a sine function with a given frequency?
The main difference can be summarized as predefined size vs predefined step, and your example highlights it very well (1000 elements vs 1001 elements).
The linspace function produces a fixed-length vector (the length being given by the third input argument, which defaults to 100) whose lower and upper limits are set, respectively, by the first and the second input arguments. The step to use is computed internally by the function itself (step = (x2 - x1) / (n - 1)).
The colon operator builds a vector of elements whose values range between the specified lower and upper limits. The step, an optional parameter that defaults to 1, is what determines the vector length: the length of the result is given by the number of steps needed to reach the upper limit, starting from the lower one. On a side note, in this MathWorks thread you can find a very interesting discussion concerning the behavior of the colon operator with respect to floating-point values.
Another difference, related to the first one, is that linspace always includes the upper limit value while the colon operator only contains it if the specified step allows it (0:5:14 = [0 5 10]).
As a general rule, I prefer to use the former when I want to produce a vector of a predefined length (pretty obvious, isn't it?), and the latter when I need to create a sequence whose length is of only marginal relevance (or no relevance at all).
I am trying to find the max value of a curve-fitted plot in a certain region of that plot. I have a 4th-order fit, and when I use max() the answer is an extrapolated value, whereas I am actually looking for the max value of the 'bump' in my data.
So the question is: how do I select the max for only a certain region of the data while using a cfit? Or how do I exclude part of the fit?
LF = pol4Fit(L,F);
Coefs= coeffvalues(LF);
This code only gives the optimum (the max value) over the actual data points:
L_opt = feval(LF,L);
[F_opt,Num_Length]= max (L_opt);
Opt_Length= L(Num_Length);
So now I was trying something like y = max(LF(F)), but this does not let me select a specific region.
Try to only evaluate the region you are interested in.
For instance, let's say the specific region is a vector named S.
You can simply rewrite your code like below:
L_opt = feval(LF,S);
Use the specific domain region S instead of the whole domain L, and it will only evaluate the region you are concerned with. Then using the max function should work properly for you.
I am trying to evaluate and find the minimum and maximum values of a function over a certain interval. I also want to evaluate the endpoints to see whether they are the maximum or the minimum values. I have the following code, which is not giving me what I want. The minima should be at -1 and 2, but I am getting -0.9999 and 1.9999. Any help would be much appreciated.
minVal1 = fminbnd(f,-1,0);
minVal2 = fminbnd(f,0,2);
I believe that your problem lies in the fact that the default value of TolX (the termination tolerance on x) for MATLAB's fminbnd function is 0.0001, so when the candidate minimizer changes by less than that amount between iterations, the search stops. This may lead to stopping before reaching the true minimizer.
If you want to be "right to within 0.0001", you need to tighten that tolerance. You could use, for example,
minVal1 = fminbnd(f, -1, 0, optimset('TolX', 1e-5));
That ought to get you the precision you need. Make the tolerance even smaller if you need greater precision (at the expense of computation time). See the MATLAB documentation for more details on how to fine-tune these parameters.
I'm working with mean shift; this procedure calculates where every point in the data set converges. I can also calculate the Euclidean distance between the coordinates where two distinct points converged, but I have to give a threshold to say: if (distance < threshold), then these points belong to the same cluster and I can merge them.
How can I find the correct value to use as the threshold?
(I can use any value, and the result depends on it, but I need the optimal value.)
I've implemented mean-shift clustering several times and have run into this same issue. Depending on how many iterations you're willing to shift each point for, or what your termination criterion is, there is usually some post-processing step where you have to group the shifted points into clusters. Points that theoretically shift to the same mode need not, in practice, end up directly on top of each other.
I think the best and most general way to do this is to use a threshold based on the kernel bandwidth, as suggested in the comments. In the past my code to do this post processing has usually looked something like this:
threshold = 0.5 * kernel_bandwidth
clusters = []
for p in shifted_points:
    cluster = findExistingClusterWithinThresholdOfPoint(p, clusters, threshold)
    if cluster is None:
        # create a new cluster with p as its first point
        new_cluster = [p]
        clusters.append(new_cluster)
    else:
        # add p to the existing cluster
        cluster.append(p)
For the findExistingClusterWithinThresholdOfPoint function I usually use the minimum distance of p to each currently defined cluster.
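A minimal sketch of what that helper could look like, assuming each cluster is simply a list of points and taking the distance from p to a cluster as the minimum distance to any point already in it (the list representation and the use of NumPy are my assumptions, not part of the original code):
import numpy as np

def findExistingClusterWithinThresholdOfPoint(p, clusters, threshold):
    # Return the closest existing cluster whose distance to p is below the threshold, else None.
    best_cluster, best_dist = None, np.inf
    for cluster in clusters:
        # distance from p to this cluster = minimum distance to any of its points
        dist = min(np.linalg.norm(np.asarray(p) - np.asarray(q)) for q in cluster)
        if dist < best_dist:
            best_cluster, best_dist = cluster, dist
    return best_cluster if best_dist < threshold else None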
This seems to work pretty well. Hope this helps.