Can operations on a numpy.memmap be deferred?

Consider this example:
import numpy as np
a = np.array(1)
np.save("a.npy", a)
a = np.load("a.npy", mmap_mode='r')
print(type(a))
b = a + 2
print(type(b))
which outputs
<class 'numpy.core.memmap.memmap'>
<class 'numpy.int32'>
So it seems that b is no longer a memmap, and I assume this forces numpy to read the whole a.npy into memory, defeating the purpose of the memmap. Hence my question: can operations on memmaps be deferred until access time?
I believe subclassing ndarray or memmap could work, but don't feel confident enough about my Python skills to try it.
Here is an extended example showing my problem:
import numpy as np
# create an 8 GB file
# np.save("memmap.npy", np.empty([1000000000]))
# I want to print the first value using f and memmaps
def f(value):
    print(value[1])
# this is fast: f receives a memmap
a = np.load("memmap.npy", mmap_mode='r')
print("a = ")
f(a)
# this is slow: b has to be read completely and converted into an array
b = np.load("memmap.npy", mmap_mode='r')
print("b + 1 = ")
f(b + 1)

Here's a simple example of an ndarray subclass that defers operations on it until a specific element is requested by indexing.
I'm including this to show that it can be done, but it almost certainly will fail in novel and unexpected ways, and require substantial work to make it usable.
For a very specific case it may be easier than redesigning your code to solve the problem in a better way.
I'd recommend reading over these examples from the docs to help understand how it works.
import numpy as np

class Defered(np.ndarray):
    """
    An array class that defers calculations applied to it, only
    performing them when an index is requested
    """
    def __new__(cls, arr):
        arr = np.asanyarray(arr).view(cls)
        arr.toApply = []
        return arr

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        ## Convert all arguments to ndarray, otherwise arguments
        # of type Defered will cause infinite recursion
        # also store self as None, to be replaced later on
        newinputs = []
        for i in inputs:
            if i is self:
                newinputs.append(None)
            elif isinstance(i, np.ndarray):
                newinputs.append(i.view(np.ndarray))
            else:
                newinputs.append(i)

        ## Store function to apply and necessary arguments
        self.toApply.append((ufunc, method, newinputs, kwargs))
        return self

    def __getitem__(self, idx):
        ## Get index and convert to regular array
        sub = self.view(np.ndarray).__getitem__(idx)

        ## Apply stored actions
        for ufunc, method, inputs, kwargs in self.toApply:
            inputs = [i if i is not None else sub for i in inputs]
            sub = super().__array_ufunc__(ufunc, method, *inputs, **kwargs)
        return sub
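A minimal usage sketch of the class above, assuming the memmap.npy file from the question exists (the wrapped array and the "+ 2" are just illustrative):
import numpy as np

# Wrap a memmapped array so arithmetic is recorded rather than executed;
# the stored operations only run when an element is indexed.
a = np.load("memmap.npy", mmap_mode='r')
b = Defered(a) + 2   # nothing is computed yet; b is still a Defered view
print(b[1])          # the stored "+ 2" is applied to a[1] only now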
This will fail if modifications are made to it that don't use numpy's universal functions. For instance, percentile and median aren't based on ufuncs and would end up loading the entire array. Likewise, if you pass it to a function that iterates over the array, or index a substantial portion of it, the entire array will be loaded.

This is just how Python works. By default numpy operations return a new array, so b never exists as a memmap - it is created when + is called on a.
There are a couple of ways to work around this. The simplest is to do all operations in place,
a += 1
This requires the memory-mapped array to be opened for reading and writing,
a = np.load("a.npy", mmap_mode='r+')
Of course this isn't any good if you don't want to overwrite your original array.
In this case you need to specify that b should be memmapped.
b = np.memmap("b.npy", mode='w+', dtype=a.dtype, shape=a.shape)
Assigning can be done by using the out keyword provided by numpy ufuncs.
np.add(a, 2, out=b)
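Putting the pieces together, a minimal sketch (file names are illustrative, and it assumes a.npy holds a non-scalar array) that keeps both input and output memory-mapped:
import numpy as np

# Open the existing array read-only as a memmap, create a writable memmap
# for the result, and let the ufunc write directly into it.
a = np.load("a.npy", mmap_mode='r')
b = np.memmap("b.npy", mode='w+', dtype=a.dtype, shape=a.shape)
np.add(a, 2, out=b)   # b is still a memmap; no full in-memory copy of the result
b.flush()             # write the result back to disk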

Related

How to access a collection of heterogeneous functions randomly?

I am implementing an evolutionary algorithm with a numerical genetic encoding (0-n), where each number from 0 to n represents a function. I have implemented a numpy version where it is possible to do the following. The actual implementation is a bit more complicated, but this snippet captures the core functionality.
import numpy as np

n = 3
max_ops = 10
# Generate randomly generated args and OPs
for i in range(number_of_iterations):
    args = np.random.randint(min_val_arg, max_val_arg,
                             size=(arg_count, arg_shape[0], arg_shape[1]))
    gene_of_operations = np.random.randint(0, n, size=(max_ops))
# A collection of OP encodings and operations. Doesn't need to be a dict.
dict_of_n_OPs = {
    0: np.add,
    1: np.multiply,
    2: np.diff
}
# @njit
def execute_genome(gene_of_operations, args, dict_of_n_OPs):
    result = 0
    for op, arg in zip(gene_of_operations, args):
        # look up the operation by its integer encoding
        result += dict_of_n_OPs[op](arg)
    return result
## executing the gene
results = execute_genome(gene_of_operations, args, dict_of_n_OPs)
print(results)
Now, adding the njit decorator requires a statically typed function, and heterogeneously typed collections such as my dict_of_n_OPs are not supported. I have tried representing it as a numpy array, a numba.typed.Dict and a numba.typed.List, but discovered that none of them supports heterogeneous types.
What would be a numba-compliant approach that allows executing different functions based on a numerical encoding such as '00201', where number 0 would execute function 0?
Is the only way an n-line if/else statement for n unique operations/functions?
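For reference, a minimal sketch of the if/else-style dispatch mentioned above (the operations and the accumulator are placeholders, not the actual genome logic): numba can compile a branch on the integer op code, which sidesteps the need for a heterogeneous container of functions.
import numpy as np
from numba import njit

# Hypothetical dispatch: branch on the integer op code inside the jitted
# function instead of passing a dict of Python functions into it.
@njit
def apply_op(op, a, b):
    if op == 0:
        return a + b
    elif op == 1:
        return a * b
    else:
        return a - b  # placeholder for a third operation

@njit
def execute_genome(gene_of_operations, args):
    result = np.zeros_like(args[0])
    for i in range(gene_of_operations.shape[0]):
        result += apply_op(gene_of_operations[i], args[i], result)
    return result

gene = np.random.randint(0, 3, size=10)
args = np.random.randint(0, 5, size=(10, 4, 4)).astype(np.float64)
print(execute_genome(gene, args))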

Transforming `PCollection` with many elements into a single element

I am trying to convert a PCollection, that has many elements, into a PCollection that has one element. Basically, I want to go from:
[1,2,3,4,5,6]
to:
[[1,2,3,4,5,6]]
so that I can work with the entire PCollection in a DoFn.
I've tried CombineGlobally(lambda x: x), but only a portion of elements get combined into an array at a time, giving me the following result:
[1,2,3,4,5,6] -> [[1,2],[3,4],[5,6]]
Or something to that effect.
This is my relevant portion of my script that I'm trying to run:
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline

raw_input = range(1024)

def run_test():
    # the TestPipeline context manager runs the pipeline when the block exits
    with TestPipeline() as test_pl:
        input = test_pl | "Create" >> beam.Create(raw_input)

        def combine(x):
            print(x)
            return x

        (
            input
            | "Global aggregation" >> beam.CombineGlobally(combine)
        )

run_test()
I figured out a pretty painless way to do this, which I missed in the docs:
The more general way to combine elements, and the most flexible, is with a class that inherits from CombineFn.
CombineFn.create_accumulator(): This creates an empty accumulator. For example, an empty accumulator for a sum would be 0, while an empty accumulator for a product (multiplication) would be 1.
CombineFn.add_input(): Called once per element. Takes an accumulator and an input element, combines them and returns the updated accumulator.
CombineFn.merge_accumulators(): Multiple accumulators could be processed in parallel, so this function helps merging them into a single accumulator.
CombineFn.extract_output(): It allows to do additional calculations before extracting a result.
I suppose supplying a lambda function that simply passes its argument to the "vanilla" CombineGlobally wouldn't do what I expected initially. That functionality has to be specified by me (although I still think it's weird this isn't built into the API).
You can find more about subclassing CombineFn here, which I found very helpful:
A CombineFn specifies how multiple values in all or part of a PCollection can be merged into a single value—essentially providing the same kind of information as the arguments to the Python “reduce” builtin (except for the input argument, which is an instance of CombineFnProcessContext). The combining process proceeds as follows:
Input values are partitioned into one or more batches.
For each batch, the create_accumulator method is invoked to create a fresh initial “accumulator” value representing the combination of zero values.
For each input value in the batch, the add_input method is invoked to combine more values with the accumulator for that batch.
The merge_accumulators method is invoked to combine accumulators from separate batches into a single combined output accumulator value, once all of the accumulators have had all the input values in their batches added to them. This operation is invoked repeatedly, until there is only one accumulator value left.
The extract_output operation is invoked on the final accumulator to get the output value. Note: If this CombineFn is used with a transform that has defaults, apply will be called with an empty list at expansion time to get the default value.
So, by subclassing CombineFn, I wrote this simple implementation, Aggregated, that does exactly what I want:
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline

raw_input = range(1024)

class Aggregated(beam.CombineFn):
    def create_accumulator(self):
        return []

    def add_input(self, accumulator, element):
        accumulator.append(element)
        return accumulator

    def merge_accumulators(self, accumulators):
        merged = []
        for a in accumulators:
            for item in a:
                merged.append(item)
        return merged

    def extract_output(self, accumulator):
        return accumulator

def run_test():
    with TestPipeline() as test_pl:
        input = test_pl | "Create" >> beam.Create(raw_input)
        (
            input
            | "Global aggregation" >> beam.CombineGlobally(Aggregated())
            | "print" >> beam.Map(print)
        )

run_test()
You can also accomplish what you want with side inputs, e.g.
with beam.Pipeline() as p:
    pcoll = ...
    (p
     # Create a PCollection with a single element.
     | beam.Create([None])
     # This will process the singleton exactly once,
     # with the entirety of pcoll passed in as a second argument as a list.
     | beam.Map(
         lambda _, pcoll_as_side: ...consume pcoll_as_side here...,
         pcoll_as_side=beam.pvalue.AsList(pcoll)))
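A concrete, runnable variant of that sketch (the pipeline contents and labels here are just illustrative):
import apache_beam as beam

with beam.Pipeline() as p:
    pcoll = p | "Numbers" >> beam.Create(range(6))
    (p
     | "Singleton" >> beam.Create([None])
     # The whole pcoll is delivered as a single Python list via the side input.
     | "Consume" >> beam.Map(
         lambda _, items: print(sorted(items)),
         items=beam.pvalue.AsList(pcoll)))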

How to translate this code into python 3?

This code was originally written in Python 2 and I need to translate it into Python 3.
I'm sorry for not sharing enough information:
Also, here's the part where self.D was first assigned:
def __init__(self,instance,transformed,describe1,describe2):
    self.D=[]
    self.instance=instance
    self.transformed=transformed
    self.describe1,self.describe2=describe1,describe2
    self.describe=self.describe1+', '+self.describe2 if self.describe2 else self.describe1
    self.column_num=self.tuple_num=self.view_num=0
    self.names=[]
    self.types=[]
    self.origins=[]
    self.features=[]
    self.views=[]
    self.classify_id=-1
    self.classify_num = 1
    self.classes=[]

def generateViews(self):
    T=map(list,zip(*self.D))
    if self.transformed==0:
        s= int( self.column_num)
        for column_id in range(s):
            f = Features(self.names[column_id],self.types[column_id],self.origins[column_id])
            #calculate min,max for numerical,temporal
            if f.type==Type.numerical or f.type==Type.temporal:
                f.min,f.max=min(T[column_id]),max(T[column_id])
                if f.min==f.max:
                    self.types[column_id]=f.type=Type.none
                    self.features.append(f)
                    continue
            d={}
            #calculate distinct,ratio for categorical,temporal
            if f.type == Type.categorical or f.type == Type.temporal:
                for i in range(self.tuple_num):
                    print([type(self.D[i]) for i in range(self.tuple_num)])
                    if self.D[i][column_id] in d:
                        d[self.D[i][column_id]]+=1
                    else:
                        d[self.D[i][column_id]]=1
                f.distinct = len(d)
                f.ratio = 1.0 * f.distinct / self.tuple_num
                f.distinct_values=[(k,d[k]) for k in sorted(d)]
                if f.type==Type.temporal:
                    self.getIntervalBins(f)
            self.features.append(f)
TypeError: 'map' object is not subscriptable
The snippet you have given is not enough to solve the problem. The problem lies in self.D, which you are trying to subscript using self.D[i]. Please look into your code where self.D is instantiated and make sure that it's an array-like variable so that you can subscript it.
Edit
Based on your edit, please confirm whether self.D[i] is also array-like for all i in the range mentioned in the code. You can do that by simply running
print([type(self.D[i]) for i in range(self.tuple_num)])
Share the output of this code, so that I may help further.
Edit-2
As per your comments and the edited code snippet, it seems that self.D is the output of some map call. In Python 2, map is a function that returns a list. However, in Python 3, map returns a lazy map object, which cannot be subscripted.
The simplest way to resolve this is to find the line where self.D is first assigned and wrap whatever is on the right-hand side with list(...).
Alternatively, just after this line
T=map(list,zip(*self.D))
add the following
self.D = list(self.D)
Hope this resolves the issue.
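A minimal standalone illustration of the difference (not the asker's class):
# In Python 3, map() returns a lazy iterator, not a list, so indexing fails.
squares = map(lambda x: x * x, [1, 2, 3])
try:
    print(squares[1])
except TypeError as e:
    print(e)              # 'map' object is not subscriptable

# Wrapping the map call in list(...) restores the Python 2-style behaviour.
squares = list(map(lambda x: x * x, [1, 2, 3]))
print(squares[1])         # 4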
We don't have quite enough information to answer the question, but in Python 3, generator and map objects are not subscriptable. I think the problem may lie in your
self.D[i]
variable, because you claim that self.D is a list, but it is possible that self.D[i] is a map object.
In your case, to access the indexes, you can convert it to a list:
list(self.D)[i]
Or use unpacking to implicitly convert to a list (this may be more condensed, but remember that explicit is better than implicit):
[*self.D[i]]

updating subset of parameters in dynet

Is there a way to update a subset of parameters in dynet? For instance in the following toy example, first update h1, then h2:
model = ParameterCollection()
h1 = model.add_parameters((hidden_units, dims))
h2 = model.add_parameters((hidden_units, dims))
...
for x in trainset:
...
loss.scalar_value()
loss.backward()
trainer.update(h1)
renew_cg()
for x in trainset:
...
loss.scalar_value()
loss.backward()
trainer.update(h2)
renew_cg()
I know that the update_subset interface exists for this and works based on the given parameter indexes, but it is not documented anywhere how we can get the parameter indexes in dynet Python.
A solution is to use the flag update = False when creating expressions for parameters (including lookup parameters):
import dynet as dy
import numpy as np

model = dy.Model()
pW = model.add_parameters((2, 4))
pb = model.add_parameters(2)
trainer = dy.SimpleSGDTrainer(model)

def step(update_b):
    dy.renew_cg()
    x = dy.inputTensor(np.ones(4))
    W = pW.expr()
    # update b?
    b = pb.expr(update = update_b)
    loss = dy.pickneglogsoftmax(W * x + b, 0)
    loss.backward()
    trainer.update()
    # dy.renew_cg()

print(pb.as_array())
print(pW.as_array())
step(True)
print(pb.as_array()) # b updated
print(pW.as_array())
step(False)
print(pb.as_array()) # b not updated
print(pW.as_array())
For update_subset, I would guess that the indices are the integers suffixed at the end of parameter names (.name()).
In the doc, we are supposed to use a get_index function.
Another option is dy.nobackprop(), which prevents the gradient from propagating beyond a certain node in the graph.
And yet another option is to zero the gradient of the parameters that do not need to be updated (.scale_gradient(0)).
These methods are equivalent to zeroing the gradient before the update. So, the parameter will still be updated if the optimizer uses its momentum from previous training steps (MomentumSGDTrainer, AdamTrainer, ...).

Including time as an explicit variable in constraint in a Pyomo Model

I am using PyOMO to model a semi-batch reaction.
Consider an ODE system that describes a semi-batch reactor where one of the reactants is fed at a given volume flow for t1 units of time, and the reaction goes on until t_end, with obviously t1 < t_end.
To specify the stop in the flow, I can either use a conditional rule (assume t1 = 3.5*60):
def _vol_flow_in_schedule(mod,t):
    if t<=3.5*60:
        return mod.vol_flow_in[t] == (12.3/1000)/(3.5*60)
    else:
        return mod.vol_flow_in[t] == 0
m1.vol_flow_in_schedule = Constraint(m1.time,rule=_vol_flow_in_schedule)
which will create a discontinuity (and then my model does not converge). What I want to do is use a sigmoidal function that will transition the flow to zero without a discontinuity.
To implement the sigmoidal though I need to refer to the time variable itself.
The below MATLAB code gives me the result I want:
t=[0:1:500];
acc=2; %Acceleration parameter, higher values yields sharper change.
time_of_step=3.5*60;
init_value = (12.3/1000)/(3.5*60);
end_value = 0;
sigmoidal=(init_value+(end_value-init_value)/2)...
+((end_value-init_value)/2)*atan((t-time_of_step)*acc)/atan(max(t));
This implementation, however, needs the time variable explicitly in the function. How can I access the time variable inside the Pyomo rule? I tried the below, but I get a "Cannot treat the scalar component 't_of_step' as an array" error:
m1.init_value = Param(initialize = (12.3/1000)/(3.5*60))
m1.end_value = Param(initialize = 0)
m1.t_of_step = Param(initialize = 210)
m1.acc = Param(initialize = 5)
.
.
def _vol_flow_sigmoidal (mod,t):
    return mod.vol_flow_in[t] == (mod.init_value+(mod.end_value-mod.init_value)/2)+((mod.end_value-mod.init_value)/2)*atan((t-mod.t_of_step)*mod.acc)/atan(1500)
m1.vol_flow_sigmoidal = Constraint(m1.time,rule=_vol_flow_sigmoidal)
Hopefully I've described clearly what I am after. Any hints are most welcome,
Thanks!
Sal
How are you declaring the m1.time index?
My guess is that you are using a NumPy array to initialize the m1.time index. There is a known problem in Pyomo (see Issue #31) where the NumPy operator overloading and the Pyomo operator overloading end up fighting with each other (basically, NumPy gets fooled into thinking Pyomo scalars are actually indexed and attempts to treat them like arrays).
I was able to reproduce the error with the following complete example:
# pyomo 4.4.1
from pyomo.environ import *
import numpy as np

m1 = ConcreteModel()
m1.time = Set(initialize=np.array([0,100,200,300,400,500]))
m1.vol_flow_in = Var(m1.time)
m1.init_value = Param(initialize = (12.3/1000)/(3.5*60))
m1.end_value = Param(initialize = 0)
m1.t_of_step = Param(initialize = 210)
m1.acc = Param(initialize = 5)

def _vol_flow_sigmoidal (mod,t):
    return mod.vol_flow_in[t] == (mod.init_value+(mod.end_value-mod.init_value)/2)\
        +((mod.end_value-mod.init_value)/2)*atan((t-mod.t_of_step)*mod.acc)/atan(1500)

m1.vol_flow_sigmoidal = Constraint(m1.time,rule=_vol_flow_sigmoidal)
There are two alternatives that do work, both based on avoiding using NumPy arrays to initialize Pyomo Sets. You can either completely avoid Numpy:
m1.time = Set(initialize=[0,100,200,300,400,500])
or explicitly cast the NumPy array to a list:
timeArray = np.array([0,100,200,300,400,500])
m1.time = Set(initialize=timeArray.tolist())
Finally, for completeness, two other notes:
This also applies to initializing ContinuousSet objects in pyomo.dae (see the sketch after these notes)
You will see the same behavior even if you avoid the explicit Pyomo Set declaration. That is, the following will also generate the error:
m1.time = np.array([0,100,200,300,400,500])
# ...
m1.vol_flow_sigmoidal = Constraint(m1.time,rule=_vol_flow_sigmoidal)
This is because Pyomo will quietly create the Set object for you behind the scenes as m1.vol_flow_sigmoidal_index and then use that Set to index the Constraint.
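For the pyomo.dae case mentioned above, a minimal sketch of the same workaround (the time points are just illustrative):
from pyomo.environ import ConcreteModel
from pyomo.dae import ContinuousSet
import numpy as np

m1 = ConcreteModel()
timeArray = np.array([0, 100, 200, 300, 400, 500])
# Cast the NumPy array to a plain Python list before handing it to Pyomo.
m1.time = ContinuousSet(initialize=timeArray.tolist())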