I'm reasonably new to MATLAB and have been trying to teach myself. I have looked for a similar question, but can't find one that's quite right.
In my workspace I have several structures with similar names. These structures will always start with the same word ('Base'), though the rest of the name will change ('1', '2', '3'), so for example Base1, Base2, Base3... etc. These variables were generated using the data cursor tool in a figure, so contain the fields Target, Position and DataIndex. I am only interested in the value in Base*.Position(1,1). I would like to extract this value from each structure, as many times as there are structures (in one instance there may be 6 structures, another time only 4).
I am considering using the eval function, but it seems to work on exact strings rather than only the first part of a name. Additionally, a lot of documentation seems to advise against using eval.
So far I have:
clearvars -except Base*
list_variables=who;
for i=1:length(list_variables)
BaseTS(i) = eval('Base1.Position(1,1)');
end
It's the for loop I'm stuck on, as I don't know how to generalise so it will extract the value .Position(1,1) for each different structure name.
Thanks in advance
Instead of having many structures called Base1, Base2, etc., put your structures in an array. Then you can call Base(1).Position(1,1), Base(2).Position(1,1), etc. Your code will be more flexible and manageable this way.
So I suggest when you export using the data cursor, export to a variable called Base_temp and then immediately stick this into the next element of an array:
Base(end+1) = Base_temp
or even:
Position(end+1) = Base_temp.Position(1,1);
Then it's just a case of pressing up and enter after each time you export with the data cursor.
What you have read about avoiding eval is correct: it's very rare (if ever) that eval is a good idea. It makes your code hard to read and very hard to debug. But since you're learning, this is how you could fix your loop. (But don't do it this way. Seriously, don't; use arrays instead):
for i=1:length(list_variables)
BaseTS(i) = eval(['Base', num2str(i), '.Position(1,1)']);
end
In other words, use string concatenation to build up your string, and use the loop variable (i) to get the different numbers. You'll need num2str to convert from the number to the string. But don't do it this way. This is a bad way.
Dan's suggestion about avoiding eval is very valid. But if you decide to keep the structures you have in your workspace, here's something without explicit loops (though cellfun uses loops internally). So this could be an alternative solution, with the not-so-popular eval -
list1 = who('Base*')
list2 = cellstr(strcat('BaseTS(',num2str([1:numel(list1)]'),')='));
ev1 = strcat(list2,list1,'.Position(1,1)');
cellfun(@evalc,ev1,'uni',0)
Related
I have a process which is repeated on a set of data stored in separate folders. Each time a certain folder's data is processed I need new variable names, as I need the results kept separate for more processing after the initial processing is finished.
For example at the start of each new block of the repeated function I declare sets of arrays
Set_1 = zeros(dim, number);
vectors_1 = zeros(dim, number);
For the next set of data I need:
`Set_2 = .........`
and so on. There are going to be a lot of these sets, so I need a way to automate the creation of these variables, and then use the new variable names in the function while keeping them separate once all the functions are completed.
I first tried using strcat('Set_1',int2str(number)) = zeros(dim, number), but this does not work, I believe because it means I would be trying to assign an array to a string. I'm sure there must be a way to create one function and have the variables dynamically created, but it seems to be beyond me, so if anyone can tell me a way that would be great.
I'd not do it like this. It's a bad habit; it's better to use a cell array or a struct to keep multiple sets. There is a small overhead (size-wise) per field, but it'll be a lot easier to maintain later on.
If you really, really want to do that use eval on the string you composed.
The MATLAB function genvarname does what you want. In your case it would look something like:
eval([genvarname('Set_', who) ' = zeros(dim, number);']);
However, I would follow the recommendations of previous answers and use a cell or struct to store the results.
This sort of pattern is considered harmful since it requires the eval function. See one of the following for techniques for avoiding it:
http://www.mathworks.co.uk/support/tech-notes/1100/1103.html
http://blogs.mathworks.com/loren/2005/12/28/evading-eval/
If you insist on using eval, then use something like:
eval(sprintf('Set_%d = zeros(dim, number);', number))
I wish to expand a structure (bac) with a number of fields from another structure (BT). The names of these fields are contained in the cell array (adds) as strings.
This is what I have now (which obviously doesn't do the job, hence this post):
for i=1:numel(adds)
eval(genvarname('bac.',adds{i})) = eval(strcat('BT.',adds{i}));
end
I also tried using sprintf, which did not seem to work for me. I feel confident one of you knows how to do it, since I feel it should be rather easy.
The best way of doing this is to use dynamic field names:
for i=1:numel(adds)
bac.(adds{i}) = BT.(adds{i});
end
I have multiple vectors (100+) that have been loaded into the MATLAB workspace. I would like to write a script that can plot and save them all, but for that I need their names. My question is: is there a way to automatically get the names of the vectors saved in the workspace?
Thanks in advance.
Step one: whoever gave you a *.mat file with 100+ named variables in it, [censored for strong language and scenes some viewers may find upsetting]. I am only partly joking here; if you find yourself in this sort of situation normally it is because something has gone terribly wrong upstream. We can work around it, though.
Step two: use who with the filename to get a list of variables in that file
names = who('-file', 'all');
Step three: load the variables (or a subset of them) into a struct
data = load('all.mat');
Step four: use dynamic structure naming to extract data:
for n = 1:length(names);
plot(data.(names{n})); % or whatever you want to do with this data
end
I would probably just use the loop to dump the data in a cell array so as to make further processing simpler and avoid further use of dynamic field names or worse, eval.
You can use who, which lists all variables alphabetically in the active workspace.
This is maybe a more cosmetic problem, but I find it really annoying as I always end up with some ugly code. And readability is always important, right?
I want to check if a value exists in a hash within a hash. So what I do is this.
already_exists_data[:data][:user_id]
But that can get me a nullpointer exception if :data is nil and checking :data might give me a nullpointer if already_exists_data is nil. So what I end up with is this:
if already_exists_data && already_exists_data[:data] && already_exists_data[:data][:user_id]
# Do stuff
end
Now that's some nasty looking code. Perhaps I should make the hash an object instead. But I keep bumping into this problem sometimes and was wondering how you guys confront it.
I'm currently coding in Ruby but I've had this problem with multiple other languages.
If I ask my butler to pick up the box of chocolates on the dining room table in Victoria street 34, I ask him just that. I don't want to say: Go find Victoria Street, and, if you find it, please look for number 34, and if you find it ...
I can do that because he catches his own errors: if he doesn't find the street he will just return empty-handed.
So you should use a try with an empty exception handler. In pseudo-code:
try {chocolates = streets("Victoria")(34)("dining room")("table")}
In languages (like ruby, with some homespun syntactic sugar) where blocks are expressions you might write:
if try {already_exists_data(data)(user_id)}
do_stuff
The language itself can also help: in perl, $streets{Victoria}[34]{dining_room}{table} is undefined when e.g. $streets is. Of course, your butler may come home empty-handed for years before you discover that the address is wrong. The try block solution - and your if .. && .... - have the same drawback: only use them when you really don't care if Victoria Street has a number 34 or not.
Language-agnostic solution: do not use nulls. Ever. Enforce this rule across your projects. If you absolutely have to, wrap them into Either/Optional which adds explicitness. Some languages, such as Scala, have the notion of Optional already built in.
Java-specific solution: if you have to use nulls, annotate method arguments and return values with @Nullable and @Nonnull. Decent IDEs (such as IntelliJ) are capable of analysing your code and highlighting possible null dereferences if a value is acquired from such a method.
Another possibility is to wrap the hash access with a function call, and do the "dirty" work within the function. Then your code will be something like (pseudo-code syntax):
accessHash(already_exists_data, data, userid)
Note: you can get fancy with accessing nested hashes based on the number of parameters passed to the wrapper function, so it works in a multiple of situations.
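A minimal Ruby sketch of that wrapper (the name access_hash and the reduce-based walk are my own choices, not from the original answer):

```ruby
# Walk the nested hash one key at a time; bail out with nil as soon
# as any level is missing or is not a hash.
def access_hash(hash, *keys)
  keys.reduce(hash) do |current, key|
    current.is_a?(Hash) ? current[key] : nil
  end
end

already_exists_data = { data: { user_id: 42 } }

access_hash(already_exists_data, :data, :user_id)  # => 42
access_hash(already_exists_data, :oops, :user_id)  # => nil
access_hash(nil, :data, :user_id)                  # => nil
```

Ruby 2.3 later added Hash#dig, which covers the same need directly: already_exists_data&.dig(:data, :user_id).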
This is almost a programming question, but geared towards physicists.
Suppose I am writing a piece of software that takes some system parameters as input and then calculates something from it, in my case a spectral function $A(k,\omega)$.
When I want to just take the output and feed it to gnuplot, I should make the program output a simple table with one column for the $k$-values, one for $\omega$ and one for $A(k,\omega)$.
But then I cannot store there all the additional information, such as what parameters were used. And maybe I want to store in that output some additional debugging information such as intermediate quantities. In my example, the spectral function is obtained from the self energy, so in some situations I might want to look at the self energy directly.
I do not want to constantly hack the source code depending on what output I want. It would be nicer if all the relevant data of a "run" would be present in a single file/entity but so that it is still easy to extract tables I can feed to gnuplot.
Not wanting to reinvent the wheel and develop a full-blown file format, are there some "standards" around that are best used when creating, processing and storing data from calculations or simulations? Maybe even in an SQL database format?
There are dozens of methods, and none too good; I'll share two of mine:
If the program is worth it, I add a small parser for config files. Then I just make a config, say SimA.in, and the simulator makes a bunch of files with the corresponding data: SimA.paths, SimA.stats, SimA.log, etc. As long as the names are unique and I add the version of the code to the log, this makes the results fully reproducible and the simulation itself portable enough to be easily manageable.
If not, I just wrap the code a bit and use R as a host. Then I just return all the arrays and scalars (R data structures are very flexible, and it is easy to cast native R or C structs) and use R to manage, save/load and of course visualize and analyse the data. Moreover, with Sweave and CacheSweave the whole execution, analysis and reporting can be bundled together, fully reproducible with one command.
If you want an "enterprise" solution, try NetCDF or HDF5. But I feel that may be overkill here.
And of course a version control of the simulator code is a must. But that's obvious =)
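At the low-tech end, the "single self-describing file" idea from the question can be had with plain text alone, since gnuplot treats lines starting with # as comments. A Python sketch (the parameter names and the toy formula are invented for illustration):

```python
# Write one file per run: parameters as '#' comment lines, then the
# plain three-column table gnuplot expects.
params = {"temperature": 0.1, "coupling": 2.5}  # made-up run parameters

with open("run1.dat", "w") as f:
    for name, value in params.items():
        f.write(f"# {name} = {value}\n")
    f.write("# k omega A\n")
    for k in range(3):
        for omega in range(3):
            A = 1.0 / (1 + k + omega)  # placeholder for the real A(k, omega)
            f.write(f"{k} {omega} {A}\n")
```

gnuplot can then consume the file unchanged, e.g. plot "run1.dat" using 1:3, while the header keeps each run's parameters next to its data.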
For a project I'm currently working on that uses Python and C++ (via SWIG), I'm planning to use a short python script as input file. So, in a way, I'll be 'hacking the source' to change parameters, but in an interpreted language, not a compiled one.
Currently, I plan to have an input file like parameters.py, and use it like from parameters import params. But that might be too dependent on correct syntax.
params = {
"foods" : ["spam", "beans", "eggs"],
"costs" : [199, 4, 1],
"customerAge" : 23,
}
Another option might be to just define the variables at script level in parameters2.py. This loses the nice dictionary packaging, but makes it a little harder for the user to mess up. And it probably wouldn't be too hard to write a 'parser' that puts those script-level variables into a nice dictionary. A plus to this method is that the user could parameterize things that weren't originally considered: from parameters2 import * would overwrite previous definitions of those parameters. Of course, this might be bad if the user overwrites something important.
foods = ["spam", "beans", "eggs"]
costs = [199, 4, 1]
customerAge = 23
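The 'parser' mentioned above can be little more than a filter over the module's namespace. A sketch (collect_params is a hypothetical helper, and exec stands in for the real import parameters2 so the example is self-contained):

```python
import types

# Stand-in for the contents of parameters2.py shown above.
source = """
foods = ["spam", "beans", "eggs"]
costs = [199, 4, 1]
customerAge = 23
"""

def collect_params(module):
    """Gather the module-level variables into a plain dict,
    skipping dunder names like __name__ and __builtins__."""
    return {name: value
            for name, value in vars(module).items()
            if not name.startswith("__")}

# In the real project this would just be `import parameters2`.
parameters2 = types.ModuleType("parameters2")
exec(source, parameters2.__dict__)

params = collect_params(parameters2)
```

This recovers the dictionary packaging of the first approach while keeping the input file itself as plain as possible.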
parameters3.py would use a class, though it is contraindicated by Python's persnicketiness about indentation. from parameters3 import params:
class params:
foods = ["spam", "beans", "eggs"]
costs = [199, 4, 1]
customerAge = 23
I should also mention, for completeness, that our C++ code also defines a parameters class. That is, in our actual project, parameters.py is a SWIG wrapper for a corresponding C++ class. You'd use it like from parameters4 import params. However, this allows only parameters that are already declared in the C++ class.
import parameters
params = parameters.Parameters()
params.foods = ["spam", "beans", "eggs"]
params.costs = [199, 4, 1]
params.customerAge = 23