I thought that nngraph was supposed to make writing neural networks with complex structures (for example, parallel computations) a lot easier, but I ran into some bugs.
Is the main advantage of nngraph the ability to plot the graph afterwards, and also the possibility to chain the modules quite easily?
Why do I have a bug with this:
lookup = nn.LookupTable(...)
question = lookup(input[1])
answer = lookup(input[2])
or maybe I should do something like
question, answer = lookup({input[1], input[2]})
?
(input[1] and input[2] are just tensors containing integers)
(The bug is that one of question or answer doesn't have a correct output. Dimensions are off, etc.)
Do I have to use ParallelTable even when using nngraph, in a case like this?
Alright, apparently this is independent of nngraph.
What you should do in this case is clone your module and use the clone:
lookup2 = lookup:clone('weight', 'bias', 'gradWeight', 'gradBias')
lookup and lookup2 will then share the same parameters; a plain clone() would give the clone its own copy of the parameters instead of sharing them.
Is it possible to have OSMnx (great tool BTW) include car ferries when building a graph? Failing that, what would be the most direct way to build such a graph? The problem isn't just that the ferry routes themselves aren't present, but that, without the ferries, islands that are in reality reachable by car aren't included in the road network.
I have tried using osmnx.settings.useful_tags_way to no avail. Using 'route'='ferry' in an Overpass API query returns what I would like to include in the graph, so I have been editing the OSMnx downloads.py file to try to alter the Overpass API call directly.
Thanks!
First pass (hacky)
I've come up with a fairly hacky partial solution by editing OSMnx's downloads.py file and replacing the line (originally around line 350):
query_str = f"{overpass_settings};(way{osm_filter}(poly:'{polygon_coord_str}');>;);out;"
with:
osm_filter2 = '["route"="ferry"]'
query_str = f"{overpass_settings};(way{osm_filter}(poly:'{polygon_coord_str}');>;way{osm_filter2}(poly:'{polygon_coord_str}');>;);out;"
The query string format took some trial and error. If this, or something like it, turns out to be the best approach, I'll try to make it more broadly applicable (add "ferry" to the predefined filter list rather than hardcoding it, pass a list of filters, etc.).
Much Better
I found that building separate graphs and unioning them using networkx.union or networkx.disjoint_union did not give usable results. So I have added a multiple filter capability to my version of the osmnx downloads.py file. I also added some additional network type options. So it's now possible to pass, for instance, "ferry" as a network_type for the osmnx.graph_from_* functions:
network_type = "ferry"
I can also now pass multiple network types using a pipe delimiter:
network_type = "drive|ferry"
to get a graph that is the union of the two. When doing this, I found it useful to create a second, ferry-only graph and use it as a reference to update the tags in the combined network graph. It is also better to load non-simplified graphs at first and simplify the combined graph yourself after this step.
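For comparison, here is a rough, untested sketch of the same idea without patching downloads.py, assuming an OSMnx version that supports the custom_filter argument; the place name is a placeholder, and using networkx.compose (which merges on shared OSM node ids, unlike union/disjoint_union) is my own substitution, not something from the OSMnx docs:
import networkx as nx
import osmnx as ox

place = "Some Archipelago, Somewhere"  # placeholder query

# Unsimplified drive network and ferry-only network over the same area.
drive = ox.graph_from_place(place, network_type="drive",
                            simplify=False, retain_all=True)
ferry = ox.graph_from_place(place, custom_filter='["route"="ferry"]',
                            simplify=False, retain_all=True)

# compose() keeps the original OSM node ids, so ferry terminals present in
# both graphs stitch the two networks together.
combined = nx.compose(drive, ferry)

# Simplify only after combining (this function may live in
# osmnx.simplification depending on the OSMnx version).
combined = ox.simplify_graph(combined)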
Still experimenting
Still having some issues with non-connectedness - now in the ferry-only network. Car ferries at isolated crossings that don't share a harbor with a through-ferry are not included in the ferry-only graph, and therefore the tags of their counterparts in the combined network aren't getting updated as needed. The ferries in the combined network (network_type = "drive|ferry") are all present and connected correctly, and therefore their respective islands are now on the road network - which is great. But, because their tags aren't being updated, these isolated ferries are getting default highway speeds (and I'm doing travel time analysis). I can work around this using:
if 'highway' not in edge[3]:
but, for a robust solution, I think we want to be able to tag them explicitly.
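Something along these lines might do that explicit tagging (a sketch only, reusing the combined variable from the snippet above; the 30 km/h ferry speed is an assumed value, and the hwy_speeds hint at the end is untested):
# Give ferry edges an explicit tag so they don't fall back to default
# highway speeds in the travel-time step.
for u, v, key, data in combined.edges(keys=True, data=True):
    if data.get("route") == "ferry" or "highway" not in data:
        data["highway"] = "ferry"   # explicit marker instead of a missing tag
        data["maxspeed"] = "30"     # assumed average ferry speed, km/h

# ox.add_edge_speeds(combined, hwy_speeds={"ferry": 30}) followed by
# ox.add_edge_travel_times(combined) should then pick the tag up.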
I'd still love to hear what others think.
I'm working on a script to convert a Simulink library to a plain model, meaning one that can be simulated and does not auto-lock, etc.
Is there a way to do this with code, aside from basically copy-pasting every single block into a new model? And if there isn't, what is the most efficient way to do the "copy-paste"?
I was not able to find any clues on how to approach this problem here, on Google, in the official documentation, or on the MathWorks forum, so I'm at a loss as to how to proceed.
Thank you in advance!
I don't think it's possible to convert a library to a model, but you can programmatically add library blocks to models like so:
sys = 'testModel';
new_system(sys);                 % create an empty model
open_system(sys);                % open it in the editor
% copy a block from a library into the new model
add_block('Simulink/Sources/Sine Wave', [sys, '/MySineWave']);
save_system(sys);
close_system(sys);
sim(sys);                        % simulate the saved model
You could even use the find_system command to list all the blocks in a library, loop through them, and create a new model for each using the above code.
I need some help with this.
Currently I am doing a project on computer vision that requires me to train a new model to detect a certain object.
In this case, I am using the system provided by P. Felzenszwalb, D. McAllester, D. Ramanan and their team => discriminatively trained deformable part models, which is implemented in Matlab.
Project webpage: http://www.cs.uchicago.edu/~pff/latent/.
However, I have no idea how to direct the system to use my dataset (a collection of images and annotations), which is different from the PASCAL datasets, so as to train a new model.
By directing, I meant a line of code that allows me to change the dataset the system reads from, for training a model.
E.g.
% directory for caching models, intermediate data, and results
cachedir = ['/var/tmp/rbg/YOURPATH/' VOCyear '/'];
I tried looking at their Readme and documentation guides but they do not mention it. Do correct me if I am wrong.
Let me know if I have not made my problem clear enough.
I tried looking at some files such as global.m but no go.
Your help is much appreciated and thanks in advance!
You can try reading pascal.m in the DPM package (voc-release5); there is similar code in there working on the VOC2007/2010 datasets.
There are plenty of parts that need to be adapted to achieve this. For example, the voc_config has to be adapted in order to read from your files.
The same goes for the pascal_train.m function. Depending on the images and the way you parse them, adapting this function may take quite some time.
Other functions to consider:
imreadx
pascal_test
pascaleval
I need to load experimental data into ScicosLab, a (pretty badly designed) clone of Scilab which happens to support graphical modeling. The documentation on the web is pretty poor, but it's reasonably similar to Scilab and Octave.
The data I need to process is contained in a certain number of text files: Data_005, Data_010, …, Data_100. Each of them can be loaded using the -ascii flag of the loadmatfile command.
The problem comes from the fact that loadmatfile("foo", "-ascii") loads the file foo into a variable named foo. In order to loop over the data files, I would need to do something like:
for i = [5:5:100]
    name = sprintf("Data_%03d", i);     // Data_005, Data_010, ..., Data_100
    loadmatfile(name, "-ascii");
    x = read_var_from_name(name);       // <-- the builtin I am looking for
    do_something(x);
end
where what I am looking for is a builtin read_var_from_name which would allow me to access the internal symbol table by string.
Do you know if there exists a similar function?
Notes:
There's no way of overriding this behavior if your file is in ascii format;
In this phase I could also use Octave (no graphical modelling is involved), although it behaves in the same way.
>> foo = 3.14; name = 'foo'; eval(name)
foo =
3.1400
The above works in MATLAB, and Scilab's documentation says it also has an eval function. Not sure if I understood you correctly, though.
@arne.b has a good answer.
In your case you can also do it like this in MATLAB:
a = load('filename.mat')        % load the .mat file into a struct
x = a.('variable_name')         % dynamic field access by name
Let's go through your points one by one:
"ScicosLab, a (pretty badly designed) clone of Scilab": this, in my opinion, is an inaccurate way of introducing the software. ScicosLab is not a clone of Scilab but a fork of it. The team behind ScicosLab (INRIA) are the ones who made scicos (now called xcos in the Scilab development line). At some point (from Scilab v4) the Scilab team decided to move away from Tcl/Tk towards Java, but the ScicosLab/scicos team departed and kept using the language (Tcl) and its graphical user interface package (Tk). To be fair to the ScicosLab community, documentation and support are not very good across the whole Scilab ecosystem in general. :) (more about Scilab and the forks here)
Regarding the technical question, I'm not sure what you are trying to achieve here. Scilab/ScicosLab still has the eval function, which basically does what you want. However, this function is to be deprecated in favor of evstr. There is also the execstr function, which is worth studying.
loadmatfile, as far as I have understood, "tries" to load the variables defined in a MATLAB .mat file (MATLAB's proprietary binary format) into the Scilab workspace. For example, if there is a variable foo, it will "try" to create the variable foo and load its value from the MATLAB file. Check this example. I would create a variable x(i) = foo in the for loop. Again, your question is not completely clear.
As a side note, maybe you could consider exporting your data as CSV instead of .mat files.
This is almost a programming question, but geared towards physicists.
Suppose I am writing a piece of software that takes some system parameters as input and then calculates something from it, in my case a spectral function $A(k,\omega)$.
When I want to just take the output and feed it to gnuplot, I should make the program output a simple table with one column for the $k$-values, one for $\omega$ and one for $A(k,\omega)$.
But then I cannot store there all the additional information, such as what parameters were used. And maybe I want to store in that output some additional debugging information such as intermediate quantities. In my example, the spectral function is obtained from the self energy, so in some situations I might want to look at the self energy directly.
I do not want to constantly hack the source code depending on what output I want. It would be nicer if all the relevant data of a "run" were present in a single file/entity, while still making it easy to extract tables I can feed to gnuplot.
Not wanting to reinvent the wheel and develop a full-blown file format, are there some "standards" around that are best used when creating, processing and storing data from calculations or simulations? Maybe even in an SQL database format?
There are dozens of methods, and none is too good; I'll share two of mine:
If the program is worth it, I add a small parser for config files. Then I just make a config, let's say SimA.in, and the simulator makes a bunch of files with the corresponding data: SimA.paths, SimA.stats, SimA.log, etc. Provided the names are unique and I add the version of the code to the log, this makes the results fully reproducible and the simulation itself portable enough to be easily manageable.
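As an illustration only, a minimal version of that first approach might look like this in Python, assuming a simple "key = value" config format (the format, file names and version string are placeholders following the SimA.* example):
import sys

def read_config(path):
    """Parse 'key = value' lines, ignoring blank lines and '#' comments."""
    params = {}
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()
            if line:
                key, _, value = line.partition("=")
                params[key.strip()] = value.strip()
    return params

if __name__ == "__main__":
    cfg_path = sys.argv[1]                    # e.g. SimA.in
    base = cfg_path.rsplit(".", 1)[0]         # "SimA"
    params = read_config(cfg_path)
    # ... run the simulation here using params ...
    with open(base + ".log", "w") as log:     # record inputs + code version
        log.write("code version: <fill in from VCS>\n")
        for key, value in params.items():
            log.write("%s = %s\n" % (key, value))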
If not, I just wrap the code a bit and use R as a host. Then I just return all the arrays and scalars (R data structures are very flexible, and it is easy to cast native R or C structs) and use R to manage, save/load and of course visualize and analyse the data. Moreover, with Sweave and cacheSweave the whole execution, analysis and reporting can be bundled into an elegant whole, fully reproducible with one command.
If you want an "enterprise" solution, try NetCDF or HDF5. But I feel it may be overkill here.
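If HDF5 does fit, a sketch with h5py could look like the following; the parameter names, dataset names and grid sizes are illustrative, not anything HDF5 prescribes:
import h5py
import numpy as np

k = np.linspace(0, np.pi, 100)
w = np.linspace(-5, 5, 400)
A = np.random.rand(len(k), len(w))        # placeholder for the real A(k, omega)

with h5py.File("run_001.h5", "w") as f:
    f.attrs["temperature"] = 0.1          # run parameters travel with the data
    f.attrs["code_version"] = "1.2.3"
    f.create_dataset("k", data=k)
    f.create_dataset("omega", data=w)
    f.create_dataset("A", data=A)

# Dumping a gnuplot-ready table from the file is then a few lines:
with h5py.File("run_001.h5", "r") as f:
    k, w, A = f["k"][:], f["omega"][:], f["A"][:]
    with open("A.dat", "w") as out:
        for i, ki in enumerate(k):
            for j, wj in enumerate(w):
                out.write(f"{ki} {wj} {A[i, j]}\n")
            out.write("\n")               # blank line between k-blocks for gnuplot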
And of course version control of the simulator code is a must. But that's obvious =)
For a project I'm currently working on that uses Python and C++ (via SWIG), I'm planning to use a short Python script as the input file. So, in a way, I'll be 'hacking the source' to change parameters, but in an interpreted language, not a compiled one.
Currently, I plan to have an input file like parameters.py, and use it like from parameters import params. But that might be too dependent on correct syntax.
params = {
    "foods": ["spam", "beans", "eggs"],
    "costs": [199, 4, 1],
    "customerAge": 23,
}
Another option might be to just define the variables at the script level in parameters2.py. This loses the nice dictionary packaging, but makes it a little harder for the user to mess things up. And it probably wouldn't be too hard to write a 'parser' that puts those script-level variables into a nice dictionary (a sketch of such a parser follows the example below). A plus of this method is that the user could parameterize things that weren't originally considered, since from parameters2 import * would overwrite previous definitions of those parameters. Of course, this might be bad if the user overwrites something important.
foods = ["spam", "beans", "eggs"]
costs = [199, 4, 1]
customerAge = 23
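A rough sketch of that 'parser': grab the public, non-module names from the imported parameters2 namespace (one of several ways to do it):
# Collect the script-level variables from parameters2.py into a dict,
# skipping dunders and anything the parameter file itself imported.
import types
import parameters2

params = {
    name: value
    for name, value in vars(parameters2).items()
    if not name.startswith("_") and not isinstance(value, types.ModuleType)
}
print(params)   # {'foods': [...], 'costs': [...], 'customerAge': 23}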
parameters3.py would use a class, though this is somewhat contraindicated by Python's persnicketiness about indentation. You'd use it like from parameters3 import params:
class params:
    foods = ["spam", "beans", "eggs"]
    costs = [199, 4, 1]
    customerAge = 23
I should also mention, for completeness, that our C++ code also defines a parameters class. That is, in our actual project, parameters.py is a SWIG wrapper for a corresponding C++ class. You'd use it like from parameters4 import params. However, this allows only parameters that are already declared in the C++ class.
import parameters
params = parameters.Parameters()
params.foods = ["spam", "beans", "eggs"]
params.costs = [199, 4, 1]
params.customerAge = 23