Matlab linprog constraints: how to stop charge and discharge storage at same time?

I'm having issues with my MATLAB linprog code. The objective function is the overall cost over the 24-hour period, considering only the fuel costs of the boiler.
Purpose of simulation:
Optimisation of the charge/discharge behaviour of a Thermal Energy Storage (TES) for a 24 h operation of a system consisting of a boiler, a heat demand, and the TES. The price of gas is time-varying.
Problem:
If the TES is ideal (efficiency = 100%), I have no constraint that stops the system from charging and discharging at the same time. I CANNOT use one variable to describe charge and discharge; I do need them separated.
At the moment I have the following constraints to describe the min/max charge/discharge rates (and of course some others):
maxChargeThermalTES <= ChargeThermalTES <= 0
0 <= DischargeThermalTES <= maxDischargeThermalTES
(charging is negative, discharging is positive)
Is it possible to realise the following logical rule within the constraints of linprog?
if ChargeThermalTES<0,
DischargeThermalTES=0
end
All approaches I have tried, e.g. with a binary variable (describing whether the system is charging or discharging), do not work, as the binary variable always depends on the output of the optimisation.

You cannot enforce such a logical rule in a pure linear program.
However, what you can do is the following:
1. Solve your linear program without this constraint and record the optimal cost of your objective function (let's call it OldCost).
2. Then change your linear program as follows:
Add a constraint: the old objective function must lie between OldCost * (1-Epsilon) and OldCost * (1+Epsilon).
The new objective to minimize is DischargeThermalTES - ChargeThermalTES, i.e. the sum of the two magnitudes (remember charging is negative), which drives simultaneous charging and discharging to zero.
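A minimal sketch of this two-phase idea with linprog. The matrices A, b, Aeq, beq, lb, ub stand in for the original model, and Epsilon, sel_charge and sel_discharge (0/1 selector vectors summing the charge and discharge variables) are hypothetical placeholders, not the asker's actual names:
[x1, OldCost] = linprog(f, A, b, Aeq, beq, lb, ub);   % phase 1: original LP
Epsilon = 1e-4;
A2 = [A; f'; -f'];                                    % pin the cost to a band
b2 = [b; OldCost*(1+Epsilon); -OldCost*(1-Epsilon)];  % assumes OldCost > 0
f2 = sel_discharge - sel_charge;                      % total |charge| + |discharge|
x2 = linprog(f2, A2, b2, Aeq, beq, lb, ub);           % phase 2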
Cheers

Yes, it is possible: you can add your if-then condition using one 0-1 binary variable and a big-M formulation. Note that the binary variable turns the problem into a mixed-integer linear program, so you will need a MILP solver such as intlinprog rather than linprog itself.
To realize the Logical rule:
if ChargeThermalTES<0,
DischargeThermalTES=0
end
Condition: If ChargeThermalTES<0, then DischargeThermalTES=0
Let's introduce a binary variable y.
First, tie y to the charge:
ChargeThermalTES >= -M (1 - y)
If y = 1, this forces ChargeThermalTES >= 0, so whenever ChargeThermalTES < 0 the solver must set y = 0. For the discharge we then want:
if y = 0, then DischargeThermalTES must be = 0
if y = 1, DischargeThermalTES can be anything
Let's split that equal-to-zero condition into two inequalities:
DischargeThermalTES <= M y
DischargeThermalTES >= -M y
If y = 1, the above two are essentially non-binding.
If y = 0, they force DischargeThermalTES to become 0.
So with the following constraints combined, you can enforce your logical rule in the (now mixed-integer) linear program:
ChargeThermalTES >= -M (1 - y)
DischargeThermalTES <= M y
DischargeThermalTES >= -M y
y in {0,1} binary, M a constant larger than any feasible charge or discharge magnitude.
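A minimal one-period sketch of how this could be wired up in MATLAB with intlinprog, assuming the variable order x = [ChargeThermalTES; DischargeThermalTES; y]; the cost vector f and the value of M are placeholders, not the asker's actual model:
M  = 1e4;                          % must exceed any feasible rate magnitude
f  = [1; 1; 0];                    % placeholder cost coefficients
A  = [-1  0  M;                    % ChargeThermalTES >= -M*(1-y)
       0  1 -M;                    % DischargeThermalTES <= M*y
       0 -1 -M];                   % DischargeThermalTES >= -M*y
b  = [M; 0; 0];
lb = [maxChargeThermalTES; 0; 0];  % charging is negative, so its max is a lower bound
ub = [0; maxDischargeThermalTES; 1];
x  = intlinprog(f, 3, A, b, [], [], lb, ub);   % variable 3 (y) is integer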
Hope that helps.

Related

YALMIP outputs "Infeasible" for an easy, feasible SDP

I want to determine whether a given 3x3 matrix is positive-semidefinite or not. To do so, I write the following SDP in YALMIP
v=0.2;
a=sdpvar(1);
b=sdpvar(1);
M=[1 a -v/4 ; b 1 0 ; -v/4 0 0.25];
x=sdpvar(1);
optimize([M+x*eye(3)>=0],x,sdpsettings('solver','sedumi'))
This program gives me the error "Dual infeasible, primal improving direction found". This happens for any value of v in the interval (0,1].
Given that this problem is tractable, I diagonalized the matrix directly obtaining that the three eigenvalues are the three roots of the following polynomial
16*t^3 - 36*t^2 + (24 - 16*a*b - v^2)*t + (-4 + 4*a*b + v^2)
Computing the values of the three roots numerically, I see that all three are positive for sign(a)=sign(b) (except for a small region in the neighborhood of a,b = ±1), for any value of v. Therefore, the SDP should run without problems and output a negative value of x without further complications.
To make things more interesting, I ran the same code with the following matrix
M=[1 a v/4 ; b 1 0 ; v/4 0 0.25];
This matrix has the same eigenvalues as the previous one, and in this case the program runs without any issues, confirming that the matrix is indeed positive-semidefinite.
I am really curious about the nature of this issue, any help would be really appreciated.
EDIT: I also tried the SDPT3 solver, and the results are very similar. In fact, the program runs smoothly for the case of +v, but when I put a minus sign I get the following error
'Unknown problem in solver (Turn on 'debug' in sdpsettings) (Error using & …'
Furthermore, when I add some restrictions to the variables, i.e., I run the following command
optimize([total+w*eye(3)>=0,-1<=a<=1,-1<=b<=1],w,sdpsettings('solver','sdpt3'))
Then the error changes to an 'Infeasible problem' error.
Late answer, but anyway: the matrix you have specified is not symmetric. Semidefinite programming is about optimization over the set of symmetric positive semidefinite matrices.
When you define this unsymmetric matrix constraint in YALMIP, it is simply interpreted as a set of 9 linear elementwise constraints, and for that linear program, the optimal x is unbounded.
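A quick way to see this, assuming the intent was a symmetric matrix (here b is simply tied to a; whether that matches the original problem is an assumption):
v = 0.2;
a = sdpvar(1);
M = [1 a -v/4 ; a 1 0 ; -v/4 0 0.25];   % symmetric by construction
x = sdpvar(1);
optimize([M + x*eye(3) >= 0], x, sdpsettings('solver','sedumi'))
value(x)                                % optimal shift, now a genuine SDP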

Finding the best threshold level without a for loop in MATLAB

Let A and B be two matrices of the same size. For a matrix M, let ht(M,t) threshold all the entries of M by t; that is, all entries whose absolute value is less than t are set to 0. Suppose I want to find the optimal threshold t such that norm(ht(A,t)-B,'fro')^2 is minimized.
The only way I can see to do this is inefficient: loop over the unique values of A, threshold A, set C = ht(A,t) - B, and compute sum(sum(C.*C)).
This is just too slow when A is large. I have considered sorting the elements of A and finding some efficient way to set a few entries to zero at a time, but I'm not sure this can all be done without a for loop.
Is there a way to do it?
Here's a very simple example (so simple a for loop works easily in this case):
B =
0.101508820368332 0
0 0.301996943246957
Set
A=B+.1*ones(2)
A =
0.201508820368332 0.1
0.1 0.401996943246957
Simple inspection shows that if we zero out the off-diagonal entries of A we minimize the difference between A and B. There are 3 possible threshold values, given by unique(A)=[.1,.2015,.402]. Given a potential threshold value t, we can hard threshold A by:
function [A_thresholded] = ht(A,t)
% Hard-threshold: zero out all entries of A whose absolute value is <= t
A_thresholded = A .* (abs(A)>t);
The form of the data in a matrix is irrelevant: you can convert them to vectors and simply compute the squared norm. In fact, you can sort the contents of A in increasing order of absolute value (and permute B to preserve the pairing). When you increase the threshold past one more value of A, only that one entry's contribution to the squared norm changes. Therefore, you can find your solution in O(n log n). Hope this helps.
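A sketch of that idea (my own, not the answerer's code), assuming no ties among the abs(A) values and using ht's convention above that entries with abs(A) <= t become 0:
a = A(:); b = B(:);                              % matrix shape is irrelevant
[absA, order] = sort(abs(a));                    % candidate thresholds, ascending
delta = b(order).^2 - (a(order) - b(order)).^2;  % cost change when an entry is zeroed
costs = sum((a - b).^2) + [0; cumsum(delta)];    % costs(k+1): k smallest entries zeroed
[~, k] = min(costs);
if k == 1
    t = 0;                                       % best to zero out nothing
else
    t = absA(k - 1);                             % zero the k-1 smallest-magnitude entries
end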

Spearman's rank correlation significance

I'm calculating Spearman's rank correlation in MATLAB with the following code:
[RHO,PVAL] = corr(x,y,'Type','Spearman');
RHO =
0.7211
PVAL =
4.9473e-04
and then with different variables
[RHO,PVAL] = corr(x2,y2,'Type','Spearman');
RHO =
0.3277
PVAL =
0.0060
How do you categorize these as p < 0.05, p < 0.01, p < 0.001, etc.? Commonly in scientific journals these p-values are represented as thresholds like those, not as a single number. Would both of these be p < 0.01? When deciding whether a correlation is significant at a specific level, do you always look for the smallest threshold? I.e., if PVAL = 0.0005, both p < 0.05 and p < 0.001 would be correct here; do we simply write the lowest, i.e. p < 0.001?
As Martin Dinov wrote, this is at least partially a matter of journal policy. But as long as there is no explicit journal convention against it, I would recommend always reporting the actual p-value, in this case in the form p = 4.9·10^-4 and p = 0.006, respectively. You can then proceed to say that the effect you found is statistically significant, usually based on comparison with a previously chosen significance level, typically 0.05, unless you need to correct for multiple comparisons.
The reason is that the commonly used significance levels are purely a matter of convention. Saying only that p is below one conventional threshold withholds valuable information from the reader, which she might use to make up her own mind about the result; and this truncation is not even justified by a relevant saving of print space.
You should also, of course, report the value of the correlation coefficient itself (which in this case doubles as a test statistic and an effect size) as well as the sample size.
At least for the field of psychology, these are official recommendations:
Hypothesis tests. It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p value or, better still, a confidence interval.
…
Effect sizes. Always present effect sizes for primary outcomes. If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).
L. Wilkinson and the Task Force on Statistical Inference, "Statistical Methods in Psychology Journals. Guidelines and Explanations"
In general, you do want to show that the p-value is smaller than the smallest significance level (alpha) threshold that you can. So for the second example it is best to say that the p-value is < 0.01. Depending on the journal convention, it may be preferable to put in the actual p-value (so, for the first example, 4.9473e-04) or just that it is below some suitable alpha (0.001 for the first case).
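If you want that bucketing in code, a small sketch (the threshold list is just the usual convention):
p = 4.9473e-04;             % PVAL from the first example above
levels = [0.001 0.01 0.05];
idx = find(p < levels, 1);  % smallest conventional level exceeding p
if isempty(idx)
    fprintf('p = %.3g (not significant at 0.05)\n', p);
else
    fprintf('p < %g\n', levels(idx));
end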

How to generate random matlab vector with these constraints

I'm having trouble creating a random vector V in MATLAB subject to the following set of constraints (given parameters N, D, L, and theta):
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or in equation form are equally acceptable. MATLAB is just a convenient prototyping tool for me; the final algorithm will be in Java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution drawn from some standard distribution. I want a pseudo-randomly generated sequence of values that satisfies the constraints for a given parameter set.
The system I'm trying to approximate is a chain of N links of link length L where the end of the chain is D away from the other end in the direction of theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible; say, calculate the value such that every entry is the same.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword: if a shuffle would violate one of your rules, you just don't do it.
As transformations I come up with:
switch two entries
modify the value of one entry and modify a second one to keep the 4th condition (theoretically you could just re-draw the two until the condition is fulfilled, but the chance that happens is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors; practically, you could construct several "start" vectors so it won't take that long. A sketch of the loop follows below.
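A sketch of that rejection loop; v0 (a valid starting vector) and isValid (a check of all four constraints) are hypothetical helpers:
v = v0;
for iter = 1:1e5
    w = v;
    ij = randperm(numel(w), 2);    % pick two positions at random
    w(ij) = w(ij([2 1]));          % proposal: swap them (preserves the mean and the cosd sum)
    if isValid(w, N, D, L, theta)  % rejection step: keep only valid proposals
        v = w;
    end
end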
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use it as a starting point for an optimization that tries to satisfy conditions 2, 3 and 4. The closer your initial value is to a valid vector (satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N, theta)
    % Wiener-like random walk as the initial guess
    s(1) = theta;
    for i = 2:N
        s(i) = s(i-1) + randn(1,1);
    end
    f = @(x) objective(x, D, L, N, theta);
    q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
    [sf, val] = fminunc(f, s, q);
    val          % display the residual penalty
    series = sf;

function value = objective(s, D, L, N, theta)
    % Quadratic penalties for conditions 2 (mean), 4 (distance) and 3 (step size)
    a = abs(mean(s) - theta);
    b = abs(D - sum(L*cos(s - theta)));
    c = 0;
    for i = 2:N
        u = abs(s(i) - s(i-1));
        if u > 10
            c = c + u;
        end
    end
    value = a^2 + b^2 + c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions by other commenters. You will still have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener-like initialization.
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change of 10 degrees between successive angles (which I first convert to radians), such that the distance and direction from start to end, the link length, and the number of links are as specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (rand() - 0.5)*(10*pi/180);  % random walk, steps within +/-5 degrees
end
Use a genetic algorithm (or another optimizer) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D_current = sqrt(dx^2 + dy^2);
theta_current = atan2(dy, dx);
The cost is now just the difference between the vector given by D_current and theta_current above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the rule of a maximum change of 10 degrees; perhaps a violation should just make the cost function enormous (see the penalty sketch below)? Perhaps there is a cleaner way to specify sequence constraints in optimization algorithms (I don't know how).
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
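One way to fold the 10-degree rule into the cost as a penalty, sketched under the assumptions that the angles are in radians and that a weight of 1e6 is large enough:
function cost = chainCost(angles, L, D, theta)
    % Mismatch between the achieved and the specified end-to-end vector
    dx = sum(L*cos(angles));
    dy = sum(L*sin(angles));
    cost = (sqrt(dx^2 + dy^2) - D)^2 + (atan2(dy, dx) - theta)^2;
    % Penalty for successive changes larger than 10 degrees
    excess = max(abs(diff(angles)) - 10*pi/180, 0);
    cost = cost + 1e6 * sum(excess.^2);
end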
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1, 1, 'int8')];
for ii = 2:N
    V(ii) = g();
    while abs(V(ii) - V(ii-1)) >= diffVal
        V(ii) = g();   % redraw until the successive-difference rule holds
    end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cosd(V-theta))
is a bit of a strange one... cosd(V-theta) is a specific way to re-scale the data to the [-1 +1] interval, which the multiplication by L then scales to [-L +L]. At first sight, you might expect the sum to average out to 0.
However, the expected value of cos(x) when x is a random variable drawn uniformly from [-pi/2 pi/2] (and here V-theta spans roughly -90 to +90 degrees) is 2/pi (see here for example). Ignoring for the moment the fact that our limits are slightly different, the expected value of sum(L*cosd(V-theta)) then reduces to approximately the constant value 2*N*L/pi.
How you can force this to equal some other constant D is beyond me... can you perhaps elaborate on that a bit more?

Generate a random number with max, min and mean (average) in Matlab

I need to generate random numbers with the following properties:
Min must be 1
Max must be 9
Average (mean) is 6.00 (or something else)
Random numbers must be positive integers only
I have tried several syntaxes, but nothing works; for example:
r=1+8.*rand(100,1);
This gives me random numbers between 1 and 9, but they are not integers (for example 5.607 or 4.391), and each time I calculate the mean it varies.
You may be able to define a function that satisfies your requirements based on MATLAB's randi function. But be careful: it is easy to define functions of random number generators which do not produce random numbers.
Another approach might suit -- create a probability distribution to meet your requirements. In this case you need a vector of 9 floating-point numbers which sum to 1 and which, individually, express the probability of the i-th integer occurring. For example, a distribution might be described by the following vector:
[0.1 0.1 0.1 0.1 0.2 0.1 0.1 0.1 0.1]
These split the interval [0,1] into 9 parts. Then, take your favourite rng which generates floating-point numbers in the range [0,1) and generate a number, suppose it is 0.45. Read along the interval from 0 to 1 and you find that this is in the 5-th interval, so return the integer 5.
Obviously, I've been too lazy to give you a vector which gives 6 as the mean of the distribution, but that shouldn't be too hard for you to figure out; one possible construction is sketched below.
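One possible construction (by no means unique): mix a uniform distribution over 1..9 (mean 5) with extra mass at 9, weighted so the overall mean is 6, and sample by inverting the cumulative distribution:
% p = 3/4 * uniform(1..9) + 1/4 * (point mass at 9)  =>  mean = 0.75*5 + 0.25*9 = 6
p = [ones(1,8)/12, 1/3];                          % sums to 1; sum((1:9).*p) == 6
c = cumsum(p);
x = arrayfun(@(u) find(u < c, 1), rand(100, 1));  % inverse-CDF sampling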
Here is an algorithm with a loop that reaches a required mean xmean (to a required precision xeps) by regenerating random numbers from one half of the range or the other, according to the mean at the current iteration. In my tests it reached the mean pretty quickly.
n = 100;
xmean = 6;
xmin = 1;
xmax = 9;
xeps = 0.01;
x = randi([xmin xmax], n, 1);
while abs(xmean - mean(x)) >= xeps
    if xmean > mean(x)
        % mean too low: replace a low element by a draw from the upper half
        x(find(x < xmean, 1)) = randi([xmean xmax]);
    elseif xmean < mean(x)
        % mean too high: replace a high element by a draw from the lower half
        x(find(x > xmean, 1)) = randi([xmin xmean]);
    end
end
x is the output you need.
You can use randi to get random integers.
You could use floor to truncate your random numbers to integer values only:
r = 1 + floor(9 * rand(100,1));
Obtaining a specified mean is a little trickier; it depends on what kind of distribution you're after.
If the distribution is not important and all you're interested in is the mean, then there's a particularly simple function that does that:
function x=myrand
x=6;
end
Before you can design your random number generator, you need to specify the distribution it should draw from. You've only partially done that: you specified that it draws from the integers in [1,9] and that it has a mean you want to be able to specify. That still leaves an infinity of distributions to choose among. What other properties do you want your distribution to have?
Edit following comment: The mean of any finite sample from a probability distribution - the so-called sample mean - will only approximate the distribution's mean. There is no way around that.
That having been said, the simplest (in the maximum entropy sense) distribution over the integers in the domain [1,9] is the exponential distribution: i.e.,
p = @(n,x) exp(-x*n) ./ sum(exp(-x*(1:9)));
The parameter x determines the distribution mean. The corresponding cumulative distribution is
c = cumsum(p(1:9,x));
To draw from the distribution p you can draw a random number from [0,1] and find what sub-interval of c it falls in: i.e.,
samp = arrayfun(@(y) find(y < c, 1), rand(n,m));
will return an [n,m] array of integers drawn from p.
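To actually hit the target mean of 6 you can solve for the parameter x numerically, for instance with fzero (the starting guess of -0.1 is an assumption; negative x shifts mass toward 9):
p = @(n,x) exp(-x*n) ./ sum(exp(-x*(1:9)));
meanp = @(x) sum((1:9) .* p(1:9, x));        % distribution mean as a function of x
xstar = fzero(@(x) meanp(x) - 6, -0.1);      % solve meanp(x) == 6
c = cumsum(p(1:9, xstar));
samp = arrayfun(@(y) find(y < c, 1), rand(100, 1));   % mean(samp) should be close to 6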