Reshape of Inducing Variables - GPflow

I have an SGPR model:
import numpy as np
import gpflow
X, Y = np.random.randn(50, 2), np.random.randn(50, 1)
Z1 = np.random.randn(13, 2)
k = gpflow.kernels.SquaredExponential()
m = gpflow.models.SGPR(data=(X, Y), kernel=k, inducing_variable=Z1)
And I would like to assign the inducing variables a new value with a different shape, like:
Z2 = np.random.randn(29, 2)
m.inducing_variable.Z.assign(Z2)
But when I do that, I get:
ValueError: Shapes (13, 2) and (29, 2) are incompatible
Is there a way to reassign the inducing variables without redefining the model?
Context: rather than letting the optimizer train the inducing variables, I want to leave them out of the optimization and manually reassign them at each optimization step.

UPDATE: This issue is resolved by https://github.com/GPflow/GPflow/pull/1594, which will become part of the next GPflow patch release (2.1.4).
With that fix, you don't need a custom class. All you need to do is explicitly set the static shape with None along the first dimension:
import tensorflow as tf  # needed for tf.Variable

inducing_variable = gpflow.inducing_variables.InducingPoints(
    tf.Variable(
        Z1,  # initial value
        trainable=False,  # True does not work - see the note below
        shape=(None, Z1.shape[1]),  # or even tf.TensorShape(None)
        dtype=gpflow.default_float(),  # required due to tf's 32-bit default
    )
)
m = gpflow.models.SGPR(data=(X, Y), kernel=k, inducing_variable=inducing_variable)
Then m.inducing_variable.Z.assign(Z2) should work just fine.
Note that in this case Z cannot be trainable, as the TensorFlow optimizers need to know the shape at construction time and don't support dynamic shapes.
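With that setup, the workflow described in the question's context (keep the inducing points out of the trainable variables and reassign them by hand at every step) could look roughly like the sketch below. This is a hedged sketch, not an official recipe: it assumes GPflow 2.1's model.training_loss / model.trainable_variables API and uses an arbitrary random reassignment schedule as a stand-in for your own.
import tensorflow as tf

opt = tf.optimizers.Adam(learning_rate=0.01)
for step in range(100):
    # choose new inducing points however you like; the number of rows may change
    Z_new = np.random.randn(np.random.randint(10, 30), 2)
    m.inducing_variable.Z.assign(Z_new)
    # Z is not in m.trainable_variables (trainable=False), so only the kernel
    # and likelihood parameters are moved by the optimizer here
    opt.minimize(m.training_loss, var_list=m.trainable_variables)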
Right now (as of GPflow 2.1.2) there is no built-in way to change the shape of inducing variables for SGPR, though it is in principle possible. You can, however, get what you want with your own inducing variable class:
class VariableInducingPoints(gpflow.inducing_variables.InducingPoints):
    def __init__(self, Z, name=None):
        super().__init__(Z, name=name)
        # overwrite Z with a Variable that has None as the first element of its
        # shape, so we can assign arrays of arbitrary length along this dimension:
        self.Z = tf.Variable(
            Z, dtype=gpflow.default_float(), shape=(None, Z.shape[1])
        )

    def __len__(self):
        # dynamic shape, instead of the static shape returned by the
        # InducingPoints parent class
        return tf.shape(self.Z)[0]
and then do
m = gpflow.models.SGPR(
    data=(X, Y), kernel=k, inducing_variable=VariableInducingPoints(Z1)
)
instead. Then m.inducing_variable.Z.assign() should work as you'd like.
(For SVGP, the size of the inducing variable and that of the distribution defined by q_mu and q_sqrt have to match, and both have to be known at construction time, so in this case changing the number of inducing variables is less trivial.)
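To illustrate the SVGP point, here is a hedged sketch using the variables from the question (the Gaussian likelihood is just an arbitrary choice for the example): the variational parameters are created with one row per inducing point, so their shapes are fixed once the model is built.
m_svgp = gpflow.models.SVGP(
    kernel=k,
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z1,       # 13 inducing points
    num_data=X.shape[0],
)
print(m_svgp.q_mu.shape)        # [13, 1] - sized by the number of inducing points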

Related

Can operations on a numpy.memmap be deferred?

Consider this example:
import numpy as np
a = np.array(1)
np.save("a.npy", a)
a = np.load("a.npy", mmap_mode='r')
print(type(a))
b = a + 2
print(type(b))
which outputs
<class 'numpy.core.memmap.memmap'>
<class 'numpy.int32'>
So it seems that b is not a memmap any more, and I assume that this forces numpy to read the whole a.npy, defeating the purpose of the memmap. Hence my question, can operations on memmaps be deferred until access time?
I believe subclassing ndarray or memmap could work, but don't feel confident enough about my Python skills to try it.
Here is an extended example showing my problem:
import numpy as np
# create 8 GB file
# np.save("memmap.npy", np.empty([1000000000]))
# I want to print the first value using f and memmaps
def f(value):
    print(value[1])
# this is fast: f receives a memmap
a = np.load("memmap.npy", mmap_mode='r')
print("a = ")
f(a)
# this is slow: b has to be read completely; converted into an array
b = np.load("memmap.npy", mmap_mode='r')
print("b + 1 = ")
f(b + 1)
Here's a simple example of an ndarray subclass that defers operations on it until a specific element is requested by indexing.
I'm including this to show that it can be done, but it almost certainly will fail in novel and unexpected ways, and require substantial work to make it usable.
For a very specific case it may be easier than redesigning your code to solve the problem in a better way.
I'd recommend reading over these examples from the docs to help understand how it works.
import numpy as np

class Defered(np.ndarray):
    """
    An array class that defers calculations applied to it, only
    calculating them when an index is requested
    """
    def __new__(cls, arr):
        arr = np.asanyarray(arr).view(cls)
        arr.toApply = []
        return arr

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        ## Convert all arguments to ndarray, otherwise arguments
        #  of type Defered will cause infinite recursion;
        #  also store self as None, to be replaced later on
        newinputs = []
        for i in inputs:
            if i is self:
                newinputs.append(None)
            elif isinstance(i, np.ndarray):
                newinputs.append(i.view(np.ndarray))
            else:
                newinputs.append(i)
        ## Store function to apply and necessary arguments
        self.toApply.append((ufunc, method, newinputs, kwargs))
        return self

    def __getitem__(self, idx):
        ## Get index and convert to regular array
        sub = self.view(np.ndarray).__getitem__(idx)
        ## Apply stored actions
        for ufunc, method, inputs, kwargs in self.toApply:
            inputs = [i if i is not None else sub for i in inputs]
            sub = super().__array_ufunc__(ufunc, method, *inputs, **kwargs)
        return sub
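For example, applied to the memory-mapped file from the question (a hypothetical usage sketch; it assumes the large "memmap.npy" created above exists):
a = np.load("memmap.npy", mmap_mode='r')
d = Defered(a)
d = d + 1       # the addition is only recorded in d.toApply, nothing is loaded yet
print(d[1])     # only now is element 1 read from disk and incremented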
This will fail if modifications are made to it that don't use numpy's universal functions. For instance, percentile and median aren't based on ufuncs and would end up loading the entire array. Likewise, if you pass it to a function that iterates over the array, or that indexes a substantial portion of it, the entire array will be loaded.
This is just how Python works. By default, NumPy operations return a new array, so b never exists as a memmap: it is created when + is called on a.
There are a couple of ways to work around this. The simplest is to do all operations in place:
a += 1
This requires opening the memory-mapped array for reading and writing:
a = np.load("a.npy", mmap_mode='r+')
Of course this isn't any good if you don't want to overwrite your original array.
In this case you need to specify that b should be memmapped.
b = np.memmap("b.npy", mode='w+', dtype=a.dtype, shape=a.shape)
Assigning can be done by using the out keyword provided by numpy ufuncs.
np.add(a, 2, out=b)
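Putting the pieces together with the question's f(), a rough sketch (again assuming the large "memmap.npy" from the question exists):
a = np.load("memmap.npy", mmap_mode='r')
b = np.memmap("b.npy", mode='w+', dtype=a.dtype, shape=a.shape)
np.add(a, 1, out=b)   # still touches every element, but writes straight into the
                      # b.npy mapping instead of allocating a new in-memory array
f(b)                  # b is a memmap, so indexing it stays cheap
b.flush()             # persist the result to disk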

updating subset of parameters in dynet

Is there a way to update a subset of parameters in dynet? For instance in the following toy example, first update h1, then h2:
model = ParameterCollection()
h1 = model.add_parameters((hidden_units, dims))
h2 = model.add_parameters((hidden_units, dims))
...
for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h1)
    renew_cg()

for x in trainset:
    ...
    loss.scalar_value()
    loss.backward()
    trainer.update(h2)
    renew_cg()
I know that the update_subset interface exists for this and works based on the given parameter indexes. But it is not documented anywhere how to get the parameter indexes in dynet's Python API.
A solution is to use the flag update = False when creating expressions for parameters (including lookup parameters):
import dynet as dy
import numpy as np

model = dy.Model()
pW = model.add_parameters((2, 4))
pb = model.add_parameters(2)
trainer = dy.SimpleSGDTrainer(model)

def step(update_b):
    dy.renew_cg()
    x = dy.inputTensor(np.ones(4))
    W = pW.expr()
    # update b?
    b = pb.expr(update=update_b)
    loss = dy.pickneglogsoftmax(W * x + b, 0)
    loss.backward()
    trainer.update()
    # dy.renew_cg()

print(pb.as_array())
print(pW.as_array())
step(True)
print(pb.as_array())  # b updated
print(pW.as_array())
step(False)
print(pb.as_array())  # b not updated
print(pW.as_array())
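Tying this back to the original h1/h2 example, a rough sketch using the same update= mechanism (the shapes are hypothetical stand-ins for (hidden_units, dims)):
import dynet as dy
import numpy as np

model = dy.ParameterCollection()
h1 = model.add_parameters((4, 3))
h2 = model.add_parameters((4, 3))
trainer = dy.SimpleSGDTrainer(model)

def step(update_h1, update_h2):
    dy.renew_cg()
    x = dy.inputTensor(np.ones(3))
    out = h1.expr(update=update_h1) * x + h2.expr(update=update_h2) * x
    loss = dy.pickneglogsoftmax(out, 0)
    loss.scalar_value()
    loss.backward()
    trainer.update()

step(True, False)    # only h1 is updated
step(False, True)    # only h2 is updated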
For update_subset, I would guess that the indices are the integers suffixed to the parameter names (.name()).
According to the docs, we are supposed to use a get_index function.
Another option is dy.nobackprop(), which prevents the gradient from propagating beyond a certain node in the graph.
And yet another option is to zero the gradient of the parameters that do not need to be updated (.scale_gradient(0)).
These methods are equivalent to zeroing the gradient before the update, so a parameter may still move if the optimizer uses momentum from previous training steps (MomentumSGDTrainer, AdamTrainer, ...).

sympy derivative with boolean

I am trying to take the derivative of a function including a boolean variable with sympy.
My expected result:
Two different derivatives, depending on the boolean being either True or False (i.e. 1 or 0).
Example:
import sympy as sy

c, x = sy.symbols("c x", positive=True, real=True)
bo = sy.Function("bo")
fct1 = sy.Function("fct1")
fct2 = sy.Function("fct2")
FOC2 = sy.Function("FOC2")
y = 5
a = 2
b = 4

def fct1(x):
    return -0.004*x**2 + 0.25*x + 4

# the following gives the smaller positive intercept with the x-axis;
# this intercept is the threshold value for the boolean function, bo
min(sy.solve(fct1(x) - y, x))

def bo(x):
    if fct1(x) <= y:
        return 1
    else:
        return 0

def fct2(c, x):
    return a + b*c + bo(x)*c

def FOC2(c, x):
    return sy.diff(fct2(c, x), c)

print(FOC2(c, x))
The min call after the comments shows that the threshold value of x, at which bo switches between True and False, is 4.29..., i.e. positive and real.
Output:
TypeError: cannot determine truth value of Relation
I understand that the truth value depends on x, which is a symbol. Thus, without knowing x one cannot determine bo.
But how would I get my expected result, where bo is symbolic?
First off, I would advise you to carefully consider what is going on in your code the way it is pasted above. You first define a few sympy functions, e.g.
fct1 = sy.Function("fct1")
So after this, fct1 is an undefined sympy.Function - undefined in the sense that it is neither specified what its arguments are, nor what the function looks like.
However, then you define same-named functions explicitly, as in
def fct1(x):
    return -0.004*x**2 + 0.25*x + 4
Note however, that at this point, fct1 ceases to be a sympy.Function, or any sympy object for that matter: you overwrite the old definition, and it is now just a regular python function!
This is also the reason that you get the error: when you call bo(x), python tries to evaluate
-0.004*x**2 + 0.25*x + 4 <= 5
and return a value according to your definition of bo(). But python does not know whether the above is true (or how to make that comparison), so it complains.
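A quick way to see this, using the symbol x defined in the question:
rel = -0.004*x**2 + 0.25*x + 4 <= 5   # a symbolic Relational - fine so far
if rel:                               # forcing it to True/False raises the TypeError
    pass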
I would suggest 2 changes:
Instead of python functions, as in the code, you could simply use sympy expressions, e.g.
fct1 = -0.004*x**2 + 0.25*x + 4
To get the truth value of your condition, I would suggest to use the Heaviside function (wiki), which evaluates to 0 for a negative argument, and to 1 for positive. Its implementation in sympy is sympy.Heaviside.
Your code could then look as follows:
import sympy as sy
c, x = sy.symbols("c x", positive=True, real=True)
y = 5
a = 2
b = 4
fct1 = -0.004*x**2 + 0.25*x + 4
bo = sy.Heaviside(y - fct1)
fct2 = a + b*c + bo * c
FOC2 = sy.diff(fct2, c)
print(FOC2)
Two comments on the line
bo = sy.Heaviside(y - fct1)
(1) The current implementation does not evaluate sympy.Heaviside(0) by default; this is because there are differing definitions around (some define it to be 1, others 1/2). You'd want it to be 1, to be in accordance with the (weak) inequality in the OP. In sympy 1.1, this can be achieved by passing an additional argument to Heaviside, namely whatever you want Heaviside(0) to evaluate to:
bo = sy.Heaviside(y - fct1, 1)
This is not supported in older versions of sympy.
(2) You will get your FOC2, again involving a Heaviside term. What I like about this, is that you could keep working with this expression, say if you wanted to take a second derivative and so on. If, for the sake of readability, you would prefer a piecewise expression - no problem. Just replace the according line with
bo = sy.Heaviside(y - fct1)._eval_rewrite_as_Piecewise(y-fct1)
which will translate to a piecewise function automatically. Note that under older versions this implicitly uses Heaviside(0) = 0.5, so it is best to use (1) and (2) together:
bo = sy.Heaviside(y - fct1, 1)._eval_rewrite_as_Piecewise(y-fct1)
Unfortunately, I don't have a working sympy 1.1 at hand right now and could only test the old code.
One more note concerning sympy's piecewise functions: they are much more readable when using sympy's LaTeX printing, enabled by inserting
sy.init_printing()
early in the code.
(Disclaimer: I am by no means an expert in sympy, and there might be other, preferable solutions out there. Just trying to make a suggestion!)

Including time as an explicit variable in constraint in a Pyomo Model

I am using PyOMO to model a semi-batch reaction.
Consider an ODE system that describes a semi-batch reactor where one of the reactants is fed at a given volume flow for t1 units of time, and the reaction goes on until t_end, with obviously t1 < t_end.
To specify the stop in the flow, I can either use a conditional rule (assume t1 = 3.5*60):
def _vol_flow_in_schedule(mod, t):
    if t <= 3.5*60:
        return mod.vol_flow_in[t] == (12.3/1000)/(3.5*60)
    else:
        return mod.vol_flow_in[t] == 0
m1.vol_flow_in_schedule = Constraint(m1.time,rule=_vol_flow_in_schedule)
which will create a discontinuity (and then my model does not converge). What I want to do is use a sigmoidal function that will transition the flow to zero without a discontinuity.
To implement the sigmoidal though I need to refer to the time variable itself.
The below MATLAB code gives me the result I want:
t=[0:1:500];
acc=2; %Acceleration parameter, higher values yields sharper change.
time_of_step=3.5*60;
init_value = (12.3/1000)/(3.5*60);
end_value = 0;
sigmoidal=(init_value+(end_value-init_value)/2)...
+((end_value-init_value)/2)*atan((t-time_of_step)*acc)/atan(max(t));
This implementation however needs the time variable explicitly in the function. How can I access the time variable inside the PyOMO rule? I tried the below, but I get a "Cannot treat the scalar component 't_of_step' as an array" error:
m1.init_value = Param(initialize = (12.3/1000)/(3.5*60))
m1.end_value = Param(initialize = 0)
m1.t_of_step = Param(initialize = 210)
m1.acc = Param(initialize = 5)
.
.
def _vol_flow_sigmoidal(mod, t):
    return mod.vol_flow_in[t] == (mod.init_value + (mod.end_value - mod.init_value)/2) \
        + ((mod.end_value - mod.init_value)/2)*atan((t - mod.t_of_step)*mod.acc)/atan(1500)

m1.vol_flow_sigmoidal = Constraint(m1.time, rule=_vol_flow_sigmoidal)
Hopefully I've described clearly what I am after. Any hints are most welcome,
Thanks!
Sal
How are you declaring the m1.time index?
My guess is that you are using a NumPy array to initialize the m1.time index. There is a known problem in Pyomo (see Issue #31) where the NumPy operator overloading and the Pyomo operator overloading end up fighting with each other (basically, NumPy gets fooled into thinking Pyomo scalars are actually indexed and attempts to treat them like arrays).
I was able to reproduce the error with the following complete example:
# pyomo 4.4.1
from pyomo.environ import *
import numpy as np
m1 = ConcreteModel()
m1.time = Set(initialize=np.array([0,100,200,300,400,500]))
m1.vol_flow_in = Var(m1.time)
m1.init_value = Param(initialize = (12.3/1000)/(3.5*60))
m1.end_value = Param(initialize = 0)
m1.t_of_step = Param(initialize = 210)
m1.acc = Param(initialize = 5)
def _vol_flow_sigmoidal(mod, t):
    return mod.vol_flow_in[t] == (mod.init_value + (mod.end_value - mod.init_value)/2) \
        + ((mod.end_value - mod.init_value)/2)*atan((t - mod.t_of_step)*mod.acc)/atan(1500)

m1.vol_flow_sigmoidal = Constraint(m1.time, rule=_vol_flow_sigmoidal)
There are two alternatives that do work, both based on avoiding using NumPy arrays to initialize Pyomo Sets. You can either completely avoid Numpy:
m1.time = Set(initialize=[0,100,200,300,400,500])
or explicitly cast the NumPy array to a list:
timeArray = np.array([0,100,200,300,400,500])
m1.time = Set(initialize=timeArray.tolist())
Finally, for completeness, two other notes:
This also applies to initializing ContinuousSet objects in pyomo.dae
You will see the same behavior even if you avoid the explicit Pyomo Set declaration. That is, the following will also generate the error:
m1.time = np.array([0,100,200,300,400,500])
# ...
m1.vol_flow_sigmoidal = Constraint(m1.time,rule=_vol_flow_sigmoidal)
This is because Pyomo will quietly create the Set object for you behind the scenes as m1.vol_flow_sigmoidal_index and then use that Set to index the Constraint.
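For reference, here is a sketch of the reproduction above with only the Set initialization changed to a plain list (equivalently, timeArray.tolist()); with that change the constraint should build without the error:
from pyomo.environ import *

m1 = ConcreteModel()
m1.time = Set(initialize=[0, 100, 200, 300, 400, 500])   # plain list, not np.array
m1.vol_flow_in = Var(m1.time)
m1.init_value = Param(initialize=(12.3/1000)/(3.5*60))
m1.end_value = Param(initialize=0)
m1.t_of_step = Param(initialize=210)
m1.acc = Param(initialize=5)

def _vol_flow_sigmoidal(mod, t):
    return mod.vol_flow_in[t] == (mod.init_value + (mod.end_value - mod.init_value)/2) \
        + ((mod.end_value - mod.init_value)/2)*atan((t - mod.t_of_step)*mod.acc)/atan(1500)

m1.vol_flow_sigmoidal = Constraint(m1.time, rule=_vol_flow_sigmoidal)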

How does rowfun know to reference variables inside a table

From the documentation, we see the following example:
g = gallery('integerdata',3,[15,1],1);
x = gallery('uniformdata',[15,1],9);
y = gallery('uniformdata',[15,1],2);
A = table(g,x,y)
func = @(x, y) (x - y);
B = rowfun(func,A,...
    'GroupingVariable','g',...
    'OutputVariableName','MeanDiff')
When the function func is applied to A in rowfun, how does it know that there are variables in A called x and y?
EDIT: I feel that my last statement must not be true, as you do not get the same result if you did A = table(g, y, x).
I am still very confused by how rowfun can use a function that does not actually use any variables defined within the calling environment.
Unless you specify the input variables (and their order) with the name-value argument InputVariables, MATLAB will simply take column 1 as the first input, column 2 as the second input, etc., ignoring any grouping columns.
Consequently, for better readability and maintainability of your code, I consider it good practice to always specify InputVariables explicitly.