Set up AMPL data in MATLAB

I set up a problem in AMPL as follows:
Model:
set A;
param B {A,A};
Data:
set A := 1, 2;
I do not define param B in my data section, and now I want to set the values of B from MATLAB. I went through the examples provided on the AMPL website, but it does not work.
I want B to be:
B = rand(2,2)
Can anyone tell me how I can do that in MATLAB, please?

Fortunately, I found the answer.
First, the model part and the data part have to be loaded in MATLAB. Then these commands do the desired task:
B = ampl.getParameter('B');
B.setValues(rand(2,2));
ampl.display('B')
B :=
1 1 0.849129
1 2 0.678735
2 1 0.933993
2 2 0.75774
or
B.getValues
i1 i2 | val
1.0 1.0 | 0.8491293058687771
1.0 2.0 | 0.6787351548577735
2.0 1.0 | 0.9339932477575505
2.0 2.0 | 0.7577401305783334
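For completeness, the loading step mentioned above could look like the sketch below. This is only an illustration: the file names model.mod and data.dat and the install path are assumptions, not something given in the question.
% Minimal sketch: initialise the AMPL API for MATLAB and load the model/data
% addpath('C:\amplapi\matlab')   % hypothetical install path of the AMPL API for MATLAB
ampl = AMPL;                     % create the AMPL session object
ampl.read('model.mod');          % model file: set A; param B {A,A};
ampl.readData('data.dat');       % data file:  set A := 1, 2;
B = ampl.getParameter('B');      % B is still undefined, so fill it from MATLAB
B.setValues(rand(2,2));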

Apple turicreate always returns the same label

I'm test-driving turicreate to resolve a classification issue in which the data consists of 10-tuples plus a label, (q,w,e,r,t,y,u,i,o,p,label), where q..p is a sequence of characters (for now of two types, + and -), like this:
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
-,+,e,e,e,e,e,e,e,e,type2
'e' is just a padding character, so that the vectors have a fixed length of 10.
Note: the data is heavily skewed toward one label (90% of it), and the dataset is small, < 100 points.
I use Apple's vanilla script to prepare and process the data (derived from here):
import turicreate as tc
# Load the data
data = tc.SFrame('data.csv')
# Note, for sake of investigating why predictions do not work on Swift, the model is deliberately over-fitted, with split 1.0
train_data, test_data = data.random_split(1.0)
print(train_data)
# Automatically picks the right model based on your data.
model = tc.classifier.create(train_data, target='label', features = ['q','w','e','r','t','y','u','i','o','p'])
# Generate predictions (class/probabilities etc.), contained in an SFrame.
predictions = model.classify(train_data)
# Evaluate the model, with the results stored in a dictionary
results = model.evaluate(train_data)
print("***********")
print(results['accuracy'])
print("***********")
model.export_coreml("MyModel.mlmodel")
Note: the model is deliberately over-fitted on the whole data set (for now). Convergence seems OK:
PROGRESS: Model selection based on validation accuracy:
PROGRESS: ---------------------------------------------
PROGRESS: BoostedTreesClassifier : 1.0
PROGRESS: RandomForestClassifier : 0.9032258064516129
PROGRESS: DecisionTreeClassifier : 0.9032258064516129
PROGRESS: SVMClassifier : 1.0
PROGRESS: LogisticClassifier : 1.0
PROGRESS: ---------------------------------------------
PROGRESS: Selecting BoostedTreesClassifier based on validation set performance.
And the classification works as expected (although over-fitted).
However, when I use the mlmodel in my Swift code, it always returns the same label, here 'type2', no matter what. The rule is: type1 = only "+" and "e"; type2 = all other combinations.
I tried using the text_classifier; the results are far less accurate.
I have no idea what I'm doing wrong.
Just in case someone wants to check, here's the raw data (it's a small data set).
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
-,+,e,e,e,e,e,e,e,e,type2
+,+,-,+,e,e,e,e,e,e,type2
-,-,+,-,e,e,e,e,e,e,type2
+,e,e,e,e,e,e,e,e,e,type1
-,-,+,+,e,e,e,e,e,e,type2
+,-,+,-,e,e,e,e,e,e,type2
-,+,-,-,e,e,e,e,e,e,type2
+,-,-,+,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
+,+,-,-,e,e,e,e,e,e,type2
-,+,-,e,e,e,e,e,e,e,type2
-,-,-,-,e,e,e,e,e,e,type2
-,-,e,e,e,e,e,e,e,e,type2
-,-,-,e,e,e,e,e,e,e,type2
+,+,+,+,e,e,e,e,e,e,type1
+,-,+,+,e,e,e,e,e,e,type2
+,+,+,e,e,e,e,e,e,e,type1
+,-,-,-,e,e,e,e,e,e,type2
+,-,-,e,e,e,e,e,e,e,type2
+,+,+,-,e,e,e,e,e,e,type2
+,-,e,e,e,e,e,e,e,e,type2
+,-,+,e,e,e,e,e,e,e,type2
-,-,+,e,e,e,e,e,e,e,type2
+,+,-,e,e,e,e,e,e,e,type2
e,e,e,e,e,e,e,e,e,e,type1
-,+,+,-,e,e,e,e,e,e,type2
-,-,-,+,e,e,e,e,e,e,type2
-,e,e,e,e,e,e,e,e,e,type2
-,+,+,+,e,e,e,e,e,e,type2
-,+,-,+,e,e,e,e,e,e,type2
And the Swift code:
// Helper
extension MyModelInput {
    public convenience init(v: [String]) {
        self.init(q: v[0], w: v[1], e: v[2], r: v[3], t: v[4], y: v[5], u: v[6], i: v[7], o: v[8], p: v[9])
    }
}

let classifier = MyModel()
let data = ["-,+,+,e,e,e,e,e,e,e,e", "-,+,e,e,e,e,e,e,e,e,e", "+,+,-,+,e,e,e,e,e,e,e", "-,-,+,-,e,e,e,e,e,e,e", "+,e,e,e,e,e,e,e,e,e,e"]
data.forEach { (tt) in
    let gg = MyModelInput(v: tt.components(separatedBy: ","))
    if let prediction = try? classifier.prediction(input: gg) {
        print(prediction.labelProbability)
    }
}
The Python code saves a MyModel.mlmodel file, which you can add to any Xcode project and use with the code above.
Note: the Python part works fine; for example:
+---+---+---+---+---+---+---+---+---+---+-------+
| q | w | e | r | t | y | u | i | o | p | label |
+---+---+---+---+---+---+---+---+---+---+-------+
| + | + | + | + | e | e | e | e | e | e | type1 |
+---+---+---+---+---+---+---+---+---+---+-------+
is labelled as expected. But when using the Swift code, the label comes out as type2. This is driving me berserk (and yes, I checked that the mlmodel replaces the old one whenever I create a new version, and also in Xcode).

Table sort by month

I have a table in MATLAB with attributes in the first three columns and data from the fourth column onwards. I was trying to sort the entire table based on the first three columns. However, one of the columns (Column C) contains months ('January', 'February', etc.). The sortrows function only lets me choose 'ascend' or 'descend', not a custom option such as sorting by month order. Any help would be greatly appreciated. Below is the code I used.
sortrows(Table, {'Column A','Column B','Column C'} , {'ascend' , 'ascend' , '???' } )
As @AnonSubmitter85 suggested, the best thing you can do is convert your month names to numeric values from 1 (January) to 12 (December), as follows:
c = {
7 1 'February';
1 0 'April';
2 1 'December';
2 1 'January';
5 1 'January';
};
t = cell2table(c,'VariableNames',{'ColumnA' 'ColumnB' 'ColumnC'});
t.ColumnC = month(datenum(t.ColumnC,'mmmm'));
This also gives you a standard sorting criterion for ColumnC (in this example, ascending):
t = sortrows(t,{'ColumnA' 'ColumnB' 'ColumnC'},{'ascend', 'ascend', 'ascend'});
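As a quick sanity check (not part of the original answer), the month(datenum(...,'mmmm')) conversion maps each month name to its number:
% Illustrative check of the name-to-number conversion
month(datenum('February','mmmm'))             % returns 2
month(datenum({'April';'December'},'mmmm'))   % returns [4; 12]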
If, for some reason, you are forced to keep your months as literals, a workaround is to sort a clone of the table using the approach described above and then apply the resulting indices to the original table:
c = {
7 1 'February';
1 0 'April';
2 1 'December';
2 1 'January';
5 1 'January';
};
t_original = cell2table(c,'VariableNames',{'ColumnA' 'ColumnB' 'ColumnC'});
t_clone = t_original;
t_clone.ColumnC = month(datenum(t_clone.ColumnC,'mmmm'));
[~,idx] = sortrows(t_clone,{'ColumnA' 'ColumnB' 'ColumnC'},{'ascend', 'ascend', 'ascend'});
t_original = t_original(idx,:);
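Another option, not mentioned above, is to keep the month names visible but store ColumnC as an ordinal categorical array, so that sortrows respects calendar order directly. A minimal sketch, assuming a MATLAB release with categorical arrays (R2013b or later):
% Sketch: sort by month via an ordinal categorical column
monthNames = {'January','February','March','April','May','June', ...
              'July','August','September','October','November','December'};
t = t_original;
t.ColumnC = categorical(t.ColumnC, monthNames, 'Ordinal', true);
t = sortrows(t, {'ColumnA','ColumnB','ColumnC'}, {'ascend','ascend','ascend'});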

How can I sum functions built from elements of an imported dataset?

See the code and the error below. I have already tried Do, For, etc., and it is not working.
Code + error from Mathematica:
Import of the survival probabilities _{k}p_x and _{k}p_y (calculated in Excel):
px = Import["C:\Users\Eva\Desktop\kpx.xlsx"];
px = Flatten[Take[px, All], 1];
NOTE: The probability _{k}p_x can be found at position px[[k+2, x-16]].
i = 0.04;
v = 1/(1 + i);
JointLifeIndep[x_, y_, n_] = Sum[v^k*px[[k + 2, x - 16]]*py[[k + 2, y - 16]], {k , 0, n - 1}]
Part::pkspec1: The expression 2+k cannot be used as a part specification.
Part::pkspec1: The expression 2+k cannot be used as a part specification.
Part::pkspec1: The expression 2+k cannot be used as a part specification.
General::stop: Further output of Part::pkspec1 will be suppressed during this calculation.
Part of the dataset (top-left corner):
k\x 18 19 20
0 1 1 1
1 0.999478086278185 0.999363078716059 0.99927911905056
2 0.998841497412202 0.998642656911039 0.99858030519133
3 0.998121451605207 0.99794428814123 0.99788275311401
4 0.997423447323642 0.997247180349674 0.997174407432264
5 0.996726703362208 0.996539285828369 0.996437857252448
6 0.996019178300768 0.995803204773039 0.99563600297737
7 0.995283481416241 0.995001861216016 0.994823584922968
8 0.994482556091416 0.994189960607964 0.99405569519175
9 0.993671079225432 0.99342255996206 0.993339856748282
10 0.992904079096455 0.992707177451333 0.992611817294026
11 0.992189069953677 0.9919796017009 0.991832027835091
Without having the exact same data files to work with, it is easy for each of us to make mistakes that the other cannot reproduce or understand.
From your snapshot of the data set, I used Export in Mathematica to try to reproduce your .xlsx file. Then I tried the following:
px = Import["kpx.xlsx"];
px = Flatten[Take[px, All], 1];
py = px; (* fake some py data *)
i = 0.04;
v = 1/(1 + i);
JointLifeIndep[x_, y_, n_] := Sum[v^k*px[[k+2,x-16]]*py[[k+2,y-16]], {k,0,n-1}];
JointLifeIndep[17, 17, 12]
and it displays 362.402
Notice I used := instead of = in my definition of JointLifeIndep. := and = do different things in Mathematica: = immediately evaluates the right-hand side of the definition, while k, x and y are still symbolic, so Part specifications like px[[k+2, x-16]] cannot be resolved. This is probably the reason you are getting the error that you do.
You should also be careful with your subscript values and make sure that every subscript is between 1 and the number of rows (or columns) in your matrix.
So try this example with an Excel sheet containing only the snapshot of data that you showed, and see if you get the same result that I do.
Hopefully that will be enough for you to make progress.

'load data' issue in WinBUGS (Bayesian hierarchical model)

I have a hierarchical linear model in WinBUGS. The data is longitudinal and is made up of three categories (red = 1, blue = 2, white = 3).
k - total observations = 280
The structure of the data is as follows:
N[] T[] logs[] logp[] cat[] rank[]
1 1 4.2 8.9 1 1
1 2 4.2 8.1 1 2
1 3 3.5 9.2 1 1
2 1 4.1 7.5 1 2
2 2 4.5 6.5 1 2
3 1 5.1 6.6 2 4
3 2 6.2 6.8 3 7
#N = school
#T = time
#logs = log(score)
#logp = log(average hours of inputs)
#rank - rank of school
#cat = section red, section blue, section white in school
My model is syntactically correct, but when I try to load the data I get the error 'expected square bracket at the end]'.
model {
    # N brands
    # T time periods
    for (k in 1:K) {
        for (i in 1:N) {
            for (t in 1:T) {
                logs[k,i,t] ~ dnorm(mu[k,i,t], tau)
                mu[k,i,t] <- bcons + bprice*(logp[t] - logpricebar)
                             + brank[cat[t]]*(rank[t] - rankbar)
            }
        }
    }
    # C categories
    for (c in 1:C) {
        brank[c] ~ dnorm(beta, taub)
    }
    # priors
    bcons ~ dnorm(0, 1.0E-6)
    bprice ~ dnorm(0, 1.0E-6)
    bad ~ dnorm(0, 1.0E-6)
    beta ~ dnorm(0, 1.0E-6)
    tau ~ dgamma(0.001, 0.001)
    taub ~ dgamma(0.001, 0.001)
}
I follow the standard process of loading data: I select N and then press 'load data' in the dialogue box.
Can someone help me figure out the issue here?

Error attempting to use initial conditions

I'm doing a quick problem in Maple with a differential equation and a few initial conditions, but I'm getting an error message that I can't seem to understand given the context. Can anyone quickly elaborate on what's going on here? How do I fix this issue?
> KVLl2 := -4*(i2(t)-2)-12*(i2(t)-i3(t)) = 0;
-16 i2(t) + 8 + 12 i3(t) = 0
> KVLl3 := -12*(i3(t)-i2(t))-4*i3(t)-3.5*(diff(i3(t), t)) = 0;
-16 i3(t) + 12 i2(t) - 3.5 (d/dt) i3(t) = 0
> mySoln := dsolve({KVLl2, KVLl3, i2(0) = 1, i3(0) = 1}, i2, i3);
Error, (in dsolve) found the following equations not depending on the unknowns of the input system: {1 = 1}
Thanks in advance
Maple doesn't know what to do with the i2 and i3 you provided as target functions. If you look at the help page for dsolve (?dsolve), you will see that it requires the target functions to be specified in terms of their variable (t in this case) and grouped in a set or list. Try using this:
dsolve({KVLl2, KVLl3, i2(0) = 1, i3(0) = 1}, {i2(t), i3(t)});
No errors here, but no solution either (this might be related to your equations).