MATLAB/Simulink dynamic bus conversion with embedded MATLAB function

I'm working on automated model building. In some cases I have to convert a bus into another bus (the structure is the same, but the names can vary). This works for a static model, where I can change the data type of the inputs and outputs, but I haven't found any way to do it from the command line or directly in an embedded MATLAB function.
Does anybody know a way to do this?

% Find the MATLAB Function (embedded MATLAB) block named 'test'
mfb = find(sfroot, '-isa', 'Stateflow.EMChart', 'Name', 'test');
% Get its output data objects
out = get(mfb, 'Outputs');
% Set the output data type to the desired bus type by name
out.set('DataType', ['Bus: ' component_source.test]);

Related

Visualizing an AutoDiff MultibodyPlant in PyDrake

I am trying to build a simple multibody plant system in Drake using the basic DrakeVisualizer. However, for my use case, I also want to be able to automatically track the derivatives through the physics simulation, so I am using the AutoDiffXd version of the system:
timestep = 1e-3
builder = DiagramBuilder_[AutoDiffXd]()
# Build and finalize a double-valued plant, then scalar-convert it to AutoDiffXd
plant = MultibodyPlant(timestep)
scene_graph = SceneGraph_[AutoDiffXd]()
brick_file = FindResourceOrThrow("drake/examples/manipulation_station/models/061_foam_brick.sdf")
parser = Parser(plant)
brick = parser.AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
plant_ad = plant.ToAutoDiffXd()
plant_ad.RegisterAsSourceForSceneGraph(scene_graph)
scene_graph.AddRenderer("renderer", MakeRenderEngineVtk(RenderEngineVtkParams()))
# This is the call that raises the TypeError below
DrakeVisualizer.AddToBuilder(builder, scene_graph)
builder.AddSystem(plant_ad)
builder.AddSystem(scene_graph)
builder.Connect(plant_ad.get_geometry_poses_output_port(), scene_graph.get_source_pose_port(plant_ad.get_source_id()))
builder.Connect(scene_graph.get_query_output_port(), plant_ad.get_geometry_query_input_port())
diagram = builder.Build()
context = diagram.CreateDefaultContext()
simulator = Simulator_[AutoDiffXd](diagram, context)
simulator.AdvanceTo(2.0)
However, when I run this, I get the following error:
File "/home/craig/Repos/drake-exps/autoDiffExperiment.py", line 102, in auto_phys
DrakeVisualizer.AddToBuilder(builder, scene_graph)
TypeError: AddToBuilder(): incompatible function arguments. The following argument types are supported:
1. (builder: pydrake.systems.framework.DiagramBuilder_[float], scene_graph: drake::geometry::SceneGraph<double>, lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff6274e14b0>) -> pydrake.geometry.DrakeVisualizer
2. (builder: pydrake.systems.framework.DiagramBuilder_[float], query_object_port: pydrake.systems.framework.OutputPort_[float], lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff627736730>) -> pydrake.geometry.DrakeVisualizer
Invoked with: <pydrake.systems.framework.DiagramBuilder_[AutoDiffXd] object at 0x7ff65654f8f0>, <pydrake.geometry.SceneGraph_[AutoDiffXd] object at 0x7ff656562130>
From this error, it appears the DrakeVisualizer class only accepts systems that use float scalars exclusively. So I am stuck: either I go back to floats (but lose the autodiff differentiable-simulation functionality I was after in the first place), or I continue to use AutoDiffXd systems (but am completely unable to visualize what is going on in my simulation).
Is there a way to get both that I am missing?
Sorry for the pain and inconvenience. Your description and assessment are spot on. Most of the visualization mechanisms are float-only and, in its current state, any attempt to visualize an AutoDiff diagram will fail.
You have a couple of options (neither of which is appealing):
Go with one of the outcomes you've described above (no vis or no derivatives).
Put in a Drake feature request to be able to attach a visualizer to an AutoDiff diagram.
I can come up with some hacky workarounds (though it isn't immediately clear that they would even work). So, if you're desperate for derivatives and visualization, they could be explored. But ultimately, the feature request and a formal Drake solution would be the best long-term resolution.
=====================================
Big update. As of #14569, the DrakeVisualizer class is now templated on the scalar type (item 2 in the list above). That has two implications:
You can build an AutoDiffXd-valued diagram with a visualizer in it (as in your example), or
You can create a double-valued diagram and scalar-convert it (i.e., diagram.ToAutoDiffXd()) into an AutoDiffXd-valued diagram, as sketched below.
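For the second option, a minimal sketch of that flow (assuming the post-#14569 API; the import paths and the AddMultibodyPlantSceneGraph convenience helper reflect the usual pydrake layout and are not taken from the question):
from pydrake.autodiffutils import AutoDiffXd
from pydrake.common import FindResourceOrThrow
from pydrake.geometry import DrakeVisualizer
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator_
from pydrake.systems.framework import DiagramBuilder

# Build a double-valued diagram that already contains the visualizer.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
brick_file = FindResourceOrThrow(
    "drake/examples/manipulation_station/models/061_foam_brick.sdf")
Parser(plant).AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
DrakeVisualizer.AddToBuilder(builder, scene_graph)
diagram = builder.Build()

# Scalar-convert the whole diagram (visualizer included) to AutoDiffXd.
diagram_ad = diagram.ToAutoDiffXd()
simulator = Simulator_[AutoDiffXd](diagram_ad)
simulator.AdvanceTo(2.0)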

How to monitor error on a validation set in Chainer framework?

I am fairly new to Chainer and have written code that trains a simple feed-forward neural network. I have a training set and a validation set, and I want to evaluate on the validation set every 500 iterations or so; if the results are better, I want to save my network weights. Can anyone tell me how I can do that?
Here is my code:
# model, train_iter and validation_iter are defined earlier in my script
from chainer import optimizers, training
from chainer.training import extensions

optimizer = optimizers.Adam()
optimizer.setup(model)
updater = training.StandardUpdater(train_iter, optimizer, device=0)
trainer = training.Trainer(updater, (10000, 'epoch'), out='result')
trainer.extend(extensions.Evaluator(validation_iter, model, device=0))
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'main/loss', 'validation/main/loss', 'elapsed_time']))
trainer.run()
Error on validation set
It is reported by the Evaluator extension and printed by PrintReport, so it should already be shown with your code above. To control how often these extensions run, you can specify the trigger keyword argument of trainer.extend.
For example, the code below prints the report every 500 iterations.
trainer.extend(extensions.PrintReport(['epoch', 'main/loss', 'validation/main/loss', 'elapsed_time']), trigger=(500, 'iteration'))
You can also pass a trigger to the Evaluator.
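A minimal sketch, reusing validation_iter and model from the question:
# Evaluate on the validation set every 500 iterations instead of every epoch.
trainer.extend(extensions.Evaluator(validation_iter, model, device=0),
               trigger=(500, 'iteration'))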
Save network weights
You can use snapshot_object extension.
http://docs.chainer.org/en/stable/reference/generated/chainer.training.extensions.snapshot_object.html
It is invoked every epoch by default.
If you want to invoke it only when the loss improves, I think you can set the trigger using MinValueTrigger, as sketched below.
http://docs.chainer.org/en/stable/reference/generated/chainer.training.triggers.MinValueTrigger.html
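Putting the two together, a minimal sketch (the snapshot filename and the 500-iteration check interval are arbitrary choices, not from the question):
from chainer.training import extensions, triggers

# Save the model's weights whenever the validation loss reaches a new minimum,
# checking every 500 iterations.
trainer.extend(
    extensions.snapshot_object(model, 'best_model.npz'),
    trigger=triggers.MinValueTrigger('validation/main/loss',
                                     trigger=(500, 'iteration')))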

Exporting the output of MATLAB's methodsview

MATLAB's methodsview tool is handy when exploring the API provided by external classes (Java, COM, etc.). Below is an example of how this function works:
myApp = actxserver('Excel.Application');
methodsview(myApp)
I want to keep the information in this window for future reference by exporting it to a table, a cell array of strings, a .csv, or another similar format, preferably without using external tools.
Some things I tried:
This window allows selecting one line at a time and copying it with "Ctrl+C, Ctrl+V", which results in tab-separated text that looks like this:
Variant GetCustomListContents (handle, int32)
Such a strategy can work when there are only a few methods, but it is not viable for the (usually encountered) long lists.
I could not find a way to access the table data via the figure handle (without using external tools like findjobj or uiinspect), since findall(0,'Type','Figure') does not "see" the methodsview window/figure at all.
My MATLAB version is R2015a.
Fortunately, the methodsview.m file is accessible and gives some insight into how the function works. Inside is the following comment:
%// Internal use only: option is optional and if present and equal to
%// 'noUI' this function returns methods information without displaying
%// the table.
After some trial and error, I saw that the following works:
[titles,data] = methodsview(myApp,'noui');
... and returns two arrays of type java.lang.String[][].
From there I found a couple of ways to present the data in a meaningful way:
Table:
dataTable = cell2table(cell(data));
dataTable.Properties.VariableNames = matlab.lang.makeValidName(cell(titles));
Cell array:
dataCell = [cell(titles).'; cell(data)];
Important note: in the table case, the "Return Type" column title gets renamed to ReturnType, since table variable names have to be valid MATLAB identifiers, as mentioned in the docs.
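Since exporting to .csv was also mentioned as a goal, a minimal sketch using writetable (available since R2013b; the output filename is arbitrary):
% Write the reconstructed table to a CSV file for future reference.
writetable(dataTable, 'excel_api_methods.csv');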

How to get triggers from .nxe files with FieldTrip Toolbox

I'm trying to analyse TMS-EEG data from Nexstim with the FieldTrip Toolbox. I want to build a trial matrix from my raw .nxe data, but how do I know which triggers to assign to cfg.trialdef.eventvalue when setting up the cfg? I'm trying to mimic the same kind of code as in the tutorial: http://www.fieldtriptoolbox.org/tutorial/tms-eeg
I came up with a solution to the problem. With the command event = ft_read_event('filename.nxe') I get a struct array with the fields type, value, sample, duration and offset, which is all I need.
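A minimal sketch of how this feeds into the trial definition (the event type, value and time windows below are placeholders; use whatever ft_read_event reports for your recording):
% Inspect the events to see which types and values the .nxe file contains.
event = ft_read_event('filename.nxe');

% Plug the reported type/value into the trial definition, as in the tutorial.
cfg                     = [];
cfg.dataset             = 'filename.nxe';
cfg.trialdef.eventtype  = 'Trigger';   % placeholder: use the type seen in event
cfg.trialdef.eventvalue = 1;           % placeholder: use the value seen in event
cfg.trialdef.prestim    = 0.5;         % seconds before the trigger
cfg.trialdef.poststim   = 1.5;         % seconds after the trigger
cfg = ft_definetrial(cfg);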

Modify one odeset parameter

I am using ode15s to solve a DAE problem. I pass the mass matrix and some other settings through odeset:
opts=odeset('Mass',M,'MassSingular','yes','MStateDependence','none');
I also compute JPattern from a previous run. To pass it to the solver, I could write the whole call once again:
opts=odeset('Mass',M,'MassSingular','yes','MStateDependence','none', 'JPattern',JPat);
Is there a way to modify that single parameter and keep the rest of the structure?
I tried
opts.JPattern = JPat;
But it is not working.
You can probably do something like:
opts = odeset('Mass',M,'MassSingular','yes','MStateDependence','none');
opts = odeset(opts,'JPattern',JPat);
This is using the syntax (see the documentation):
options = odeset(oldopts,'name1',value1,...) alters an existing options structure oldopts. This sets options equal to the existing structure oldopts, overwrites any values in oldopts that are respecified using name/value pairs, and adds any new pairs to the structure. The modified structure is returned as an output argument.
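The updated options structure is then passed to the solver as usual (the DAE function handle, time span and initial condition below are placeholders):
% Solve the DAE with the mass matrix and the added Jacobian sparsity pattern.
[t, y] = ode15s(@myDaeFun, tspan, y0, opts);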