How do I create arbitrary parameterized layers in a DiffEqFlux.jl neural ODE? (Julia, Flux.jl, neural-network)

I am able to create and optimize neural ODEs in Julia (1.2 and 1.3) using Flux.jl and DiffEqFlux.jl, but it fails in an important general case.
What works:
I can train the neural net parameters if the network is built out of the provided Flux.jl layers, such as Dense().
I can include an arbitrary function as a layer in the network chain, e.g. x -> x.*x
What fails:
However, if the arbitrary function has parameters I want to train, then Flux.train! will not adjust these parameters, so training fails.
I have tried making these added parameters tracked and including them in the list of parameters given to the training system, but it ignores them and they remain unchanged.
The documentation says, rather cryptically, that one can use Flux.@functor on a layer to make sure its parameters get tracked. However, @functor was not included in Flux until version 0.10.0, and the only version of Flux compatible with neural ODEs in DiffEqFlux is 0.9.0.
So here's a toy example of a two-layer neural net I want to use:
p = param([1.0])
dudt = Chain( x -> p[1]*x.*x, Dense(2,2) )
ps = Flux.params(dudt)
Then I use Flux.train! on this. When I do, the parameter p is not varied, but the parameters in the Dense layer are.
I have tried explicitly including it like this:
ps = Flux.Params([p,dudt])
but that gives the same result and the same problem.
I think what I need to do is build a struct with an associated function that implements x -> p[1]*x.*x, and then call @functor on that struct. The struct could then be used in the chain, something like the sketch below.
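Something along these lines is what I have in mind (a rough sketch only; the struct name ScaledSquare is just illustrative, and it assumes Flux >= 0.10.0 where @functor exists):
struct ScaledSquare
    p::Vector{Float64}                    # the trainable parameter(s)
end
(m::ScaledSquare)(x) = m.p[1] .* x .* x   # the layer's forward pass
Flux.@functor ScaledSquare                # would let Flux.params see m.p
dudt = Chain(ScaledSquare([1.0]), Dense(2,2))
ps = Flux.params(dudt)                    # would now include the custom parameter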
But as I noted, the version of Flux with @functor is not compatible with any version of DiffEqFlux.
So I need a way to make Flux pay attention to my custom parameters, not just the ones in Dense().
How can I do that?

I think I get what your question is, but please clarify if I am answering the wrong question here. The issue is that p is only grabbed from a global reference and thus is not differentiated during the adjoint. A much better way to handle this in 2020 is to use FastChain. The FastChain interface lets you define layer functions and their parameter dependencies, so it is a nice way to make your neural network incorporate arbitrary functions with parameters. Here's what that looks like:
using DifferentialEquations
using Flux, Zygote
using DiffEqFlux
x = Float32[2.; 0.]
p = Float32[2.0]
tspan = (0.0f0,1.0f0)
mylayer(x,p) = p[1]*x                                # a custom layer is just a function of (x, p)
DiffEqFlux.paramlength(::typeof(mylayer)) = 1        # tell FastChain how many parameters it owns
DiffEqFlux.initial_params(::typeof(mylayer)) = rand(Float32,1)
dudt = FastChain(FastDense(2,50,tanh),FastDense(50,2),mylayer)
p = DiffEqFlux.initial_params(dudt)                  # all parameters, including mylayer's
function f(u,p,t)
    dudt(u,p)
end
ex_neural_ode(x,p) = solve(ODEProblem(f,x,tspan,p),Tsit5())
solve(ODEProblem(f,x,tspan,p),Tsit5())               # forward solve as a sanity check
du0,dp = Zygote.gradient((x,p)->sum(ex_neural_ode(x,p)),x,p)
where the last entry of p is the single parameter for mylayer. Or you can use Flux directly:
using DifferentialEquations
using Flux, Zygote
using DiffEqFlux
x = Float32[2.; 0.]
p2 = Float32[2.0]
tspan = (0.0f0,1.0f0)
dudt = Chain(Dense(2,50,tanh),Dense(50,2))
p,re = Flux.destructure(dudt)   # flatten the Chain's parameters; re(p) rebuilds the network
function f(u,p,t)
    # rebuild the network from all but the last parameter, then scale by the last one
    re(p[1:end-1])(u) |> x -> p[end]*x
end
ex_neural_ode() = solve(ODEProblem(f,x,tspan,[p;p2]),Tsit5())
grads = Zygote.gradient(()->sum(ex_neural_ode()),Flux.params(x,p,p2))
grads[x]
grads[p]
grads[p2]
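If you then want to take optimiser steps from those gradients, a minimal sketch (any Flux optimiser would work; ADAM is just an example and is not part of the original answer) is:
opt = ADAM(0.01)
Flux.Optimise.update!(opt, p, grads[p])    # updates the flattened network parameters in place
Flux.Optimise.update!(opt, p2, grads[p2])  # updates the extra scalar parameter in place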

Related

New Feature in Scikit-Learn Pipeline - Interaction between two existing Features

I have two features in my data set: height and area. I want to create a new feature by interacting (multiplying) area and height, using a pipeline in scikit-learn.
Can anyone please guide me on how I can achieve this?
Thanks
You can achieve this with a custom transformer implementing a fit and a transform method. Optionally, you can make it inherit from sklearn's TransformerMixin for bulletproofing.
from sklearn.base import TransformerMixin

class CustomTransformer(TransformerMixin):
    def fit(self, X, y=None):
        """The fit method doesn't do much here, but it is still required
        if your pipeline ever needs to be fit. Just returns self."""
        return self

    def transform(self, X, y=None):
        """This is where the actual transformation occurs.
        Assuming you want to compute the product of your features
        height and area.
        """
        # Copy X to avoid mutating the original dataset
        X_ = X.copy()
        # Change new_feature and the right-hand side according to your needs
        X_["new_feature"] = X_["height"] * X_["area"]
        # Return the newly transformed dataset; it will be
        # passed to the next step of your pipeline
        return X_
You can test it with this code:
import pandas as pd
from sklearn.pipeline import Pipeline
# Instantiate fake DataSet, your Transformer and Pipeline
X = pd.DataFrame({"height": [10, 23, 34], "area": [345, 33, 45]})
custom = CustomTransformer()
pipeline = Pipeline([("heightxarea", custom)])
# Test it
pipeline.fit(X)
pipeline.transform(X)
For such simple processing it might seem like overkill, but it is good practice to put any dataset manipulation into transformers; they are more reproducible that way.
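For instance, the transformer composes with any estimator in the same pipeline. A quick sketch (LinearRegression is an arbitrary choice here, not something from the question):
from sklearn.linear_model import LinearRegression
pipe = Pipeline([
    ("heightxarea", CustomTransformer()),  # adds the interaction feature
    ("model", LinearRegression()),         # any estimator can follow
])
# pipe.fit(X, y) would then apply the transform before fitting the model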

FMU 2.0 interaction - requires parallel "container" for parameter values etc?

I work with pyfmi in Jupyter notebooks to run simulations, and I like to work interactively and evaluate incremental changes in parameters etc. A long time ago I found it necessary to introduce a dictionary that works as a "container" for parameter and initial values. Now I wonder if there is a way to get rid of this "container", which after all is partly a parallel structure to the "model"?
A typical workflow looks like this:
create a diagram where results from different simulations below should be shown
model = load_fmu(fmu_model)
parDict['model.x_0'] = 1
parDict['model.a'] = 2
for key in parDict.keys(): model.set(key,parDict[key])
sim_res = model.simulate(10)
plot results...
model = load_fmu(fmu_model)
parDict['model.x_0'] = 3
for key in parDict.keys(): model.set(key,parDict[key])
sim_res = model.simulate(10)
plot results...
There is a function model.reset() that brings the state back to the default values from compilation without loading the FMU again, but you need to do more than the following:
model.reset()
parDict['model.x_0'] = 3
for key in parDict.keys(): model.set(key,parDict[key])
sim_res = model.simulate(10)
plot results...
So, this does NOT work...
After all, the parameters and initial values need to be brought back, so we still need parDict, though we may at least avoid the load command (a small helper along these lines is sketched below).
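For illustration, a small helper of this kind keeps the notebook tidy (apply_parameters is just a made-up name; it is only the set-loop from the workflow above wrapped in a function):
def apply_parameters(model, parDict):
    # Apply every parameter / initial value stored in the "container"
    for key, value in parDict.items():
        model.set(key, value)

model = load_fmu(fmu_model)
apply_parameters(model, parDict)
sim_res = model.simulate(10)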

Visualizing an AutoDiff MultibodyPlant in PyDrake

I am trying to build a simple multibody plant system in Drake using the basic DrakeVisualizer. However, for my use case I also want to be able to automatically track derivatives through the physics simulation, so I am using the AutoDiffXd version of the system:
timestep = 1e-3
builder = DiagramBuilder_[AutoDiffXd]()
plant = MultibodyPlant(timestep)
scene_graph = SceneGraph_[AutoDiffXd]()
brick_file = FindResourceOrThrow("drake/examples/manipulation_station/models/061_foam_brick.sdf")
parser = Parser(plant)
brick = parser.AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
plant_ad = plant.ToAutoDiffXd()
plant_ad.RegisterAsSourceForSceneGraph(scene_graph)
scene_graph.AddRenderer("renderer", MakeRenderEngineVtk(RenderEngineVtkParams()))
DrakeVisualizer.AddToBuilder(builder, scene_graph)
builder.AddSystem(plant_ad)
builder.AddSystem(scene_graph)
builder.Connect(plant_ad.get_geometry_poses_output_port(), scene_graph.get_source_pose_port(plant_ad.get_source_id()))
builder.Connect(scene_graph.get_query_output_port(), plant_ad.get_geometry_query_input_port())
diagram = builder.Build()
context = diagram.CreateDefaultContext()
simulator = Simulator_[AutoDiffXd](diagram, context)
simulator.AdvanceTo(2.0)
However, when I run this, I get the following error:
File "/home/craig/Repos/drake-exps/autoDiffExperiment.py", line 102, in auto_phys
DrakeVisualizer.AddToBuilder(builder, scene_graph)
TypeError: AddToBuilder(): incompatible function arguments. The following argument types are supported:
1. (builder: pydrake.systems.framework.DiagramBuilder_[float], scene_graph: drake::geometry::SceneGraph<double>, lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff6274e14b0>) -> pydrake.geometry.DrakeVisualizer
2. (builder: pydrake.systems.framework.DiagramBuilder_[float], query_object_port: pydrake.systems.framework.OutputPort_[float], lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff627736730>) -> pydrake.geometry.DrakeVisualizer
Invoked with: <pydrake.systems.framework.DiagramBuilder_[AutoDiffXd] object at 0x7ff65654f8f0>, <pydrake.geometry.SceneGraph_[AutoDiffXd] object at 0x7ff656562130>
From this error, it appears the DrakeVisualizer class only accepts systems which use float scalars exclusively. So I am stuck --- either I go back to floats (but lose the autodiff differentiable-simulation functionality I was after in the first place), or continue to use AutoDiffXd systems (but be completely unable to visualize what is going on in my simulation).
Is there a way to get both that I am missing?
Sorry for the pain and inconvenience. Your description and assessment are all spot on. Most of the visualization mechanisms are float-only and, in their current state, attempts to visualize an AutoDiff diagram will fail.
You have a couple of options (neither of which is appealing):
Go with one of the outcomes you've described above (no vis or no derivatives).
Put in a Drake feature request to be able to attach a visualizer to an AutoDiff diagram.
I can come up with some hacky workarounds (though it isn't immediately clear they would even work). So, if you're desperate for derivatives and visualization, they could be explored. But ultimately, the feature request and a formal Drake solution would be the best long-term resolution.
=====================================
Big update. As of #14569, the DrakeVisualizer class is now templated on the scalar type (item 2 in the list above). That has two implications:
You can build an AutoDiffXd-valued diagram with a visualizer in it (as in your example), or
You can create a double-valued diagram and scalar convert it (i.e., diagram.ToAutoDiffXd()) into an AutoDiffXd-valued diagram; a rough sketch of this second route follows below.
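A minimal sketch of that second route, assuming a Drake version that includes #14569 (AddMultibodyPlantSceneGraph is used here only for brevity and is not taken from the snippet above):
builder = DiagramBuilder()                                      # double-valued builder
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=timestep)
Parser(plant).AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
DrakeVisualizer.AddToBuilder(builder, scene_graph)              # accepted: everything is double-valued
diagram = builder.Build()
diagram_ad = diagram.ToAutoDiffXd()                             # AutoDiffXd-valued copy for derivative work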

FMU-Export in Dymola: Is it possible to make a Modelica enumeration type variable "tunable" when exported as FMU / FMI

I have implemented three similar publications in one Modelica model, using an enumeration type variable to select the publication. The goal is to switch between calculation methods (i.e. between publications) by changing the value of the enumeration type variable online.
The calculation consists of three steps, each of which has its own enumeration variable. This allows for mixed calculation methods, e.g. by setting step 1 to calculate according to publication 1 and steps 2 and 3 according to publication 2.
Each step reads something like this:
model Calculation_step
  type pubSelect = enumeration(
      Publication_1,
      Publication_2,
      Publication_3);

  // ####### Publication Selection #######
  parameter pubSelect selection = pubSelect.Publication_2;
  // ##### End Publication Selection #####

  Modelica.Blocks.Interfaces.RealInput incoming;
  Modelica.Blocks.Interfaces.RealOutput outgoing;
  parameter Real factor = 5;
equation
  if selection == pubSelect.Publication_1 then
    outgoing = factor * sin(incoming);
  elseif selection == pubSelect.Publication_2 then
    outgoing = factor * sin(incoming)^2;
  elseif selection == pubSelect.Publication_3 then
    outgoing = factor * sin(incoming)^3;
  else
    outgoing = 99999;
  end if;
  annotation (uses(Publicationica(version="3.2.1"), Modelica(version="3.2.1")));
end Calculation_step;
The model is not simulated in Dymola itself. Instead, a functional mock-up unit (FMU) is created using Dymola, which produces an XML file describing the model. In order to enable online changes, a variable has to have the attribute variability="tunable" set in this XML.
However, the variable selection is not tunable, as shown in the following excerpt of the XML:
<ModelVariables>
  <!-- Index for next variable = 1 -->
  <ScalarVariable name="selection" variability="constant" valueReference="100663296">
    <Enumeration start="2" declaredType="Calculation_step"/>
  </ScalarVariable>
Using the same code for the declaration of the variable factor yields a tunable FMU variable:
  <!-- Index for next variable = 4 -->
  <ScalarVariable name="factor" variability="tunable" valueReference="16777216" causality="parameter">
    <Real start="5"/>
  </ScalarVariable>
tl;dr:
Is it possible to make a Modelica enumeration type variable "tunable" when exported as FMU / FMI?
Dymola Version 2015 FD01 (32-bit), 2014-11-04
I tried to add a start value to the selection parameter, and with annotation (Evaluate=false) it became tunable.
parameter pubSelect selection(start=pubSelect.Publication_2) annotation (Evaluate=false);
It will give you a warning about an unassigned parameter, though. I have not really tested whether it actually works (changing the value at events/communication points), so please let me know the result if you have a chance to give it a try. Thanks~

Modify one odeset parameter

I am using ode15s to solve a DAE problem. I pass the mass matrix and some more info through odeset:
opts=odeset('Mass',M,'MassSingular','yes','MStateDependence','none');
I also calculate JPattern from a previous run. To feed it to the function, I could write everything out once again:
opts=odeset('Mass',M,'MassSingular','yes','MStateDependence','none', 'JPattern',JPat);
Is there a way to modify that single parameter and keep the rest of the structure?
I tried
opts.JPattern = JPat;
But it is not working.
You can probably do something like:
opts = odeset('Mass',M,'MassSingular','yes','MStateDependence','none');
opts = odeset(opts,'JPattern',JPat);
This is using the syntax (see the documentation):
options = odeset(oldopts,'name1',value1,...) alters an existing
options structure oldopts. This sets options equal to the existing
structure oldopts, overwrites any values in oldopts that are
respecified using name/value pairs, and adds any new pairs to the
structure. The modified structure is returned as an output argument.