How to save parts of linprog solutions - scipy

I am solving a series of linear programming problems using linprog by indexing each problem in a for-loop:
from scipy.optimize import linprog
for i in range(1, N):
    sol[i] = linprog(coa3[N][0], A_ub=coa4[i], b_ub=chvneg[i], options={"disp": True})
I would like to save in a list (still indexed over i) the minimized objective value and the array with the values of the variables. I guess I need to add something like minfval[i] = ??? and xval[i] = ??? inside the for-loop, but I don't know how to extract these values from the result that linprog returns. Any suggestions? Thanks a lot.

Consider reading the docs, as they explain pretty clearly what exactly linprog returns.
The good thing is that you are already storing these values with your code, because you are storing the whole scipy.optimize.OptimizeResult.
It's now just a matter of accessing it:
from scipy.optimize import linprog
for i in range(1, N):
    sol[i] = linprog(coa3[N][0], A_ub=coa4[i], b_ub=chvneg[i], options={"disp": True})

# iterate over solution vectors
for i in range(1, N):
    print(sol[i].x)

# iterate over objectives
for i in range(1, N):
    print(sol[i].fun)
You should also check out the field success just to be safe!
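If you want the separate containers you asked about, a minimal sketch along these lines should work (minfval and xval are just the names from your question, used here as plain dicts indexed like sol):
minfval = {}
xval = {}
for i in range(1, N):
    res = sol[i]              # the stored OptimizeResult
    if res.success:           # only keep results the solver marked as successful
        minfval[i] = res.fun  # minimized objective value
        xval[i] = res.x       # array of optimal variable values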
Excerpt from the docs (link above) on what is contained in linprog's result:
x : ndarray
    The independent variable vector which optimizes the linear programming problem.
fun : float
    Value of the objective function.
slack : ndarray
    The values of the slack variables. Each slack variable corresponds to an inequality constraint. If the slack is zero, then the corresponding constraint is active.
success : bool
    Returns True if the algorithm succeeded in finding an optimal solution.
status : int
    An integer representing the exit status of the optimization:
    0 : Optimization terminated successfully
    1 : Iteration limit reached
    2 : Problem appears to be infeasible
    3 : Problem appears to be unbounded
nit : int
    The number of iterations performed.
message : str
    A string descriptor of the exit status of the optimization.

Related

How to overcome indefinite matrix error (NbClust)?

I'm getting the following error when calling NbClust():
Error in NbClust(data = ds[, sapply(ds, is.numeric)], diss = NULL, distance = "euclidean", : The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.
I've called ds <- ds[complete.cases(ds),] just before running NbClust, so there are no missing values.
Any idea what's behind this error?
Thanks
I had the same issue in my research, so I mailed Nadia Ghazzali, who is the package maintainer, and got an answer.
I've attached my mail and her reply below.
My e-mail:
Dear Nadia Ghazzali. Hello Nadia. I have some questions about the NbClust function in the R library. I have tried googling but could not find satisfying answers. First, I'm so grateful to you for making this awesome R library. It is very helpful for my research. I tested the NbClust function in the NbClust library with my own data like below.
> clust <- NbClust(data, distance = "euclidean",
    min.nc = 2, max.nc = 10, method = "kmeans", index = "all")
But soon, an error occurred: Error: division by zero! Error in
Indices.WBT(x = jeu, cl = cl1, P = TT, s = ss, vv = vv) : object
'scott' not found. So, I tried the NbClust function line by line and
found that some indices, like CCC, Scott, marriot, tracecovw,
tracew, friedman, and rubin, were not calculated because the object
vv = 0. I'm not very familiar with algebra so I don't know the meaning
of an eigenvalue. But it seems to me that the object ss (which is the square
root of eigenValues) should not be 0 after it is produced.
So, here are my questions.
I assume that my data is so sparse (a lot of zero values) that sqrt(eigenValues) becomes too small, is that right? I'm sorry I
can't attach my data, but I can attach some part of eigenValues and the
square-rooted eigenValues.
> head(eigenValues)
[1] 0.039769880 0.017179826 0.007011972 0.005698736 0.005164871 0.004567238
> head(sqrt(eigenValues))
[1] 0.19942387 0.13107184 0.08373752 0.07548997 0.07186704 0.06758134
And if my assumption is right, what can I do about this problem? Is the only
way to drop those 7 indices?
Thank you for reading and I'll be waiting for your reply. Best regards!
And her reply:
Dear Hansol,
Thank you for your interest. Yes, your understanding is good.
Unfortunately, the seven indices could not be applied.
Best regards,
Nadia Ghazzali
#seni The cause of this error is data-related. If you look at the source code of this function:
NbClust <- function(data, diss="NULL", distance = "euclidean", min.nc=2, max.nc=15, method = "ward", index = "all", alphaBeale = 0.1)
{
  x <- 0
  min_nc <- min.nc
  max_nc <- max.nc
  jeu1 <- as.matrix(data)
  numberObsBefore <- dim(jeu1)[1]
  jeu <- na.omit(jeu1) # returns the object with incomplete cases removed
  nn <- numberObsAfter <- dim(jeu)[1]
  pp <- dim(jeu)[2]
  TT <- t(jeu) %*% jeu
  sizeEigenTT <- length(eigen(TT)$value)
  eigenValues <- eigen(TT/(nn-1))$value
  for (i in 1:sizeEigenTT)
  {
    if (eigenValues[i] < 0) {
      print(paste("There are only", numberObsAfter, "nonmissing observations out of a possible", numberObsBefore, "observations."))
      stop("The TSS matrix is indefinite. There must be too many missing values. The index cannot be calculated.")
    }
  }
And I think the root cause of this error is the negative eigenvalues that seep in when the number of clusters is very high, i.e. when max.nc is high. So to solve the problem, you must look at your data. See if it has more columns than rows. Remove missing values, and check for issues like collinearity and multicollinearity, variance, covariance, etc.
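If you want to reproduce the check that triggers this error outside of NbClust, here is a rough sketch (shown in Python/numpy purely for illustration; the equivalent check in R is eigen(t(X) %*% X / (n - 1))$value, mirroring the source above; my_data is a hypothetical numeric matrix):
import numpy as np

X = np.asarray(my_data, dtype=float)                  # 'my_data' is a hypothetical numeric matrix
n = X.shape[0]
eigenvalues = np.linalg.eigvalsh(X.T @ X / (n - 1))   # same TSS-style matrix NbClust inspects
print("smallest eigenvalue:", eigenvalues.min())      # a negative value triggers the NbClust error
print("more columns than rows:", X.shape[1] > X.shape[0])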
For the other error, invalid clustering method, look at the source code of the method here; see lines 168 and 169 in the given link. You are getting this error message because the clustering method is empty:
if (is.na(method))
    stop("invalid clustering method")

Error with matrix assignment

Good evening to all,
I would like to assign values of a matrix B to several different matrices C.
So I tried this code:
B=imread("deformation_somb.jpg");
xc=pixel_x1;
yc=pixel_y1;
% where [nr,nc,channels]= size([B])
% pixel_x1=(nc/2) & pixel_y1=(nr/2);
j=nr-yc;
i=xc;
i_c=1;
j_c=1;
C_horizontale_droite=[];
C_horizontale_gauche=[];
C_verticale_haut=[];
C_verticale_bas=[];
i_nouveau_h=i;
i_nouveau_b=i;
j_nouveau_d=j;
j_nouveau_g=j;
while (i_nouveau_h != nr-1 || i_nouveau_b != 0 || j_nouveau_g != 0 || j_nouveau_d != nc-1)
  C_horizontale_gauche(1,j_c,:) = B(i,j_nouveau_g,:);
  C_horizontale_droite(1,j_c,:) = B(i,j_nouveau_d,:);
  C_verticale_haut(1,j_c,:) = B(i_nouveau_h,j,:);
  C_verticale_bas(1,j_c,:) = B(i_nouveau_b,j,:);
  i_nouveau_h = i_nouveau_h + 1;
  i_nouveau_b = i_nouveau_b - 1;
  j_nouveau_d = j_nouveau_d + 1;
  j_nouveau_g = j_nouveau_g - 1;
  j_c = j_c + 1;
endwhile
distance_pixel_gauche_centre = find(C_horizontale_gauche=C_horizontale_droite!=0)
distance_pixel_droite_centre = find(C_horizontale_droite=C_horizontale_gauche!=0)
distance_pixel_haut_centre= find(C_verticale_haut=C_verticale_bas!=0)
distance_pixel_bas_centre= find(C_verticale_bas=C_verticale_haut!=0)
But for the first matrix assignment:
C_horizontale_gauche(1,j_c,:) = B(i,j_nouveau_g,:);
I get an error message:
error : LG : Subscript indices must be either positive integers less than 2^31 or logicals.
I verified that the indices are positive. Here is an example:
For a 700*700 RGB image, nr = nc = 700.
Then j = 700-350 = 350 and i = 350.
So, j_nouveau_g = 350.
Thus, the first statement sent to Octave should be this one:
`C_horizontale_gauche(1,1,:)=B(350,350,:)`
Well... why does Octave give me an error?
Thanks for your answers.
Best Regards
Julien

z3py: How to implement a counter in z3?

I want to design logic similar to a counter in Z3py.
When writing a Python script, we usually define a variable "counter" and keep incrementing it when necessary. However, in Z3, there are no mutable variables. Therefore, instead of defining a mutable variable, I define a trace of that variable.
This is a sample code. Suppose there is an array "myArray" of size 5, and the elements in the array are 1 or 2. I want to assert a constraint that there must be two '2's in "myArray".
from z3 import *

s = Solver()
myArray = IntVector('myArray', 5)
for i in range(5):
    s.add(Or(myArray[i] == 1, myArray[i] == 2))

counterTrace = IntVector('counterTrace', 6)
s.add(counterTrace[0] == 0)
for i in range(5):
    s.add(If(myArray[i] == 2,
             counterTrace[i+1] == counterTrace[i] + 1,
             counterTrace[i+1] == counterTrace[i]))
s.add(counterTrace[5] == 2)

print(s.check())
print(s.model())
My question is: is this an efficient way of implementing the concept of a counter in Z3? In my real problem, which is more complicated, this approach is really inefficient.
You can do this, but it is much easier to create the sum over myArray[i] == 2 ? 1 : 0. That way you don't need to assert anything extra and you are dealing with normal expressions.
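A minimal sketch of that suggestion in z3py, reusing the same myArray as above (Sum and If build the count as an ordinary expression):
from z3 import *

s = Solver()
myArray = IntVector('myArray', 5)
for i in range(5):
    s.add(Or(myArray[i] == 1, myArray[i] == 2))

# Count the 2's directly as an expression instead of threading a counter trace.
count = Sum([If(myArray[i] == 2, 1, 0) for i in range(5)])
s.add(count == 2)

print(s.check())
print(s.model())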

Trying to balance my dataset through sample_weight in scikit-learn

I'm using RandomForest for classification, and I have an unbalanced dataset: 5830 'no' and 1006 'yes'. I try to balance my dataset with class_weight and sample_weight, but I can't.
My code is:
X_train, X_test, y_train, y_test = train_test_split(arrX, y, test_size=0.25)

cw = 'auto'
clf = RandomForestClassifier(class_weight=cw)
param_grid = {'n_estimators': [10, 50, 100, 200, 300],
              'max_features': ['auto', 'sqrt', 'log2']}

sw = np.array([1 if i == 0 else 8 for i in y_train])
CV_clf = GridSearchCV(estimator=clf, param_grid=param_grid, cv=10,
                      fit_params={'sample_weight': sw})
But I don't get any improvement in my TPR, FPR, or ROC ratios when using class_weight and sample_weight.
Why? Am I doing anything wrong?
Nevertheless, if I use the function called balanced_subsample, my ratios improve greatly:
def balanced_subsample(x, y, subsample_size):
    class_xs = []
    min_elems = None
    for yi in np.unique(y):
        elems = x[(y == yi)]
        class_xs.append((yi, elems))
        if min_elems is None or elems.shape[0] < min_elems:
            min_elems = elems.shape[0]
    use_elems = min_elems
    if subsample_size < 1:
        use_elems = int(min_elems * subsample_size)
    xs = []
    ys = []
    for ci, this_xs in class_xs:
        if len(this_xs) > use_elems:
            np.random.shuffle(this_xs)
        x_ = this_xs[:use_elems]
        y_ = np.empty(use_elems)
        y_.fill(ci)
        xs.append(x_)
        ys.append(y_)
    xs = np.concatenate(xs)
    ys = np.concatenate(ys)
    return xs, ys
My new code is:
X_train_subsampled, y_train_subsampled = balanced_subsample(arrX, y, 0.5)
X_train, X_test, y_train, y_test = train_test_split(X_train_subsampled, y_train_subsampled, test_size=0.25)

cw = 'auto'
clf = RandomForestClassifier(class_weight=cw)
param_grid = {'n_estimators': [10, 50, 100, 200, 300],
              'max_features': ['auto', 'sqrt', 'log2']}

sw = np.array([1 if i == 0 else 8 for i in y_train])
CV_clf = GridSearchCV(estimator=clf, param_grid=param_grid, cv=10,
                      fit_params={'sample_weight': sw})
This is not a full answer yet, but hopefully it'll help get there.
First some general remarks:
To debug this kind of issue it is often useful to have deterministic behavior. You can pass the random_state parameter to RandomForestClassifier and the various scikit-learn objects that have inherent randomness to get the same result on every run. You'll also need:
import numpy as np
np.random.seed(0)  # any fixed seed
import random
random.seed(0)     # any fixed seed
for your balanced_subsample function to behave the same way on every run.
Don't grid search on n_estimators: more trees is always better in a random forest.
Note that sample_weight and class_weight have a similar objective: actual sample weights will be sample_weight * weights inferred from class_weight.
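For example, a rough sketch of what that combination looks like (hypothetical variable names; compute_sample_weight is sklearn's helper for deriving class-based weights, and 'balanced' is the modern equivalent of 'auto'):
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

class_based = compute_sample_weight(class_weight='balanced', y=y_train)  # weights inferred from class frequencies
manual = np.array([1 if label == 0 else 8 for label in y_train])         # your explicit per-sample weights
effective = manual * class_based                                          # the product is what the model effectively uses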
Could you try:
Using subsample_size=1 in your balanced_subsample function. Unless there's a particular reason not to do so, we're better off comparing the results on a similar number of samples.
Using your subsampling strategy with class_weight and sample_weight both set to None.
EDIT: Reading your comment again I realize your results are not so surprising!
You get a better (higher) TPR but a worse (higher) FPR.
It just means your classifier tries hard to get the samples from class 1 right, and thus makes more false positives (while also getting more of those right of course!).
You will see this trend continue if you keep increasing the class/sample weights in the same direction.
There is an imbalanced-learn API that helps with oversampling/undersampling data and might be useful in this situation. You can pass your training set into one of its methods and it will output the oversampled data for you. See a simple example below:
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
x_oversampled, y_oversampled = ros.fit_sample(orig_x_data, orig_y_data)
Here is the link to the API: http://contrib.scikit-learn.org/imbalanced-learn/api.html
Hope this helps!

Find value in vector "p" that corresponds to maximum value in vector "r = f(p)"

As simple as in the title. I have an nx1 vector p. I'm interested in the maximum value of r = p/foo - floor(p/foo), with foo being a scalar, so I just call:
max_value = max(p/foo-floor(p/foo))
How can I get which value of p gave out max_value?
I thought about calling:
[max_value, max_index] = max(p/foo-floor(p/foo))
but soon I realised that max_index is pretty useless. I'm sorry for asking this, real beginner here.
Having broken the issue into pieces, I realized there's no unique correspondence between the values of p and the values in my related vector p/foo-floor(p/foo), so there's a logical issue rather than a language one.
However, given my input data, I know that the solution is unique. How can I fix this?
I ended up doing:
result = p(p/foo-floor(p/foo) == max(p/foo-floor(p/foo)))
Looks terrible, so if you know any other way...
Once you have the index, use it:
result = p(max_index)
You can create a new vector with your, let's say, "transformed" values:
p2 = (p/foo-floor(p/foo))
and then just use find to find the max values on p2:
max_index = find(p2 == max(p2))
That will return the index or indices of p2 with the max value of that operation; finally, just look up the original value in p:
p(max_index)
In one line, this is:
p(find((p/foo-floor(p/foo) == max((p/foo-floor(p/foo))))))
which is basically the same thing you did in the end :)