Insert a key using Ruamel - ruamel.yaml

I am using the Ruamel Python library to programmatically edit human-edited YAML files. The source files have keys that are sorted alphabetically.
I'm not sure if this is a basic Python question, or a Ruamel question, but all methods I have tried to sort Ruamel's OrderedDict structure are failing for me.
I am quite confused, for instance, why the following code, based on this recipe, isn't working:
import ruamel.yaml
import collections

def read_file(f):
    with open(f, 'r') as _f:
        return ruamel.yaml.round_trip_load(
            _f.read(),
            preserve_quotes=True
        )

def write_file(f, data):
    with open(f, 'w') as _f:
        _f.write(ruamel.yaml.dump(
            data,
            Dumper=ruamel.yaml.RoundTripDumper,
            explicit_start=True,
            width=1024
        ))

data = read_file('in.yaml')
data = collections.OrderedDict(sorted(data.items(), key=lambda t: t[0]))
write_file('out.yaml', data)
But given this input file:
---
bananas: 1
apples: 2
The following output file is produced:
--- !!omap
- apples: 2
- bananas: 1
I.e. it's turned my file into a YAML ordered map.
Is there an easy way to do this? Also, can I simply insert into the data structure somehow?

If you round-trip a mapping in ruamel.yaml¹, it doesn't get represented as a collections.OrderedDict(); it gets represented as a ruamel.yaml.comments.CommentedMap(). The latter can be a subclass of collections.OrderedDict(), depending on which version of Python you are working with (e.g. in Python 2 it uses the much faster C implementation from ruamel.ordereddict).
The representer doesn't automatically interpret "normal" ordered dictionaries (whether from collections or ruamel.ordereddict) as special in round_trip_dump mode. But if you drop collections and build a CommentedMap instead:
import ruamel.yaml

def read_file(f):
    with open(f, 'r') as _f:
        return ruamel.yaml.round_trip_load(
            _f.read(),
            preserve_quotes=True
        )

def write_file(f, data):
    with open(f, 'w') as _f:
        ruamel.yaml.dump(
            data,
            stream=_f,
            Dumper=ruamel.yaml.RoundTripDumper,
            explicit_start=True,
            width=1024
        )

data = read_file('in.yaml')
data = ruamel.yaml.comments.CommentedMap(sorted(data.items(), key=lambda t: t[0]))
write_file('out.yaml', data)
your out.yaml will be:
---
apples: 2
bananas: 1
Please note that I also removed an inefficiency in your write_file routine. If you don't specify a stream, all data is first dumped to a StringIO instance (in memory) and then returned, which you then wrote to the stream with _f.write(); it is much more efficient to write directly to the stream.
As for your final question: yes, you can insert using:
data.insert(1, 'apricot', 3)
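For example, reading the sorted out.yaml back in, inserting, and dumping again (a small sketch using the read_file/write_file helpers above; the out2.yaml name is just for illustration):
data = read_file('out.yaml')
data.insert(1, 'apricot', 3)  # insert 'apricot' at position 1
write_file('out2.yaml', data)
would give:
---
apples: 2
apricot: 3
bananas: 1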
¹ Disclaimer: I am the author of both ruamel.yaml as well as ruamel.ordereddict.

Related

Import csv as MATLAB struct

I have a large csv log file. Here is a simplified sample:
ts,a.b.c,a.b.d,a.b.e,b.c,b.d,c.d.e,c.d.f,c.g
2021-03-29 06:38:39,1.0000,2,3,28.20,1,2,3,4
2021-03-29 06:38:40,1.0000,2,3,28.20,1,2,3,0.000000
I am using MATLAB's Import Data tool to import it, but, unfortunately, it removes all dots from the headers and imports the variables as, e.g., abc, abd, abe, etc.
What is an efficient way to import a CSV like the one above as structs?
I am looking for a way to have the data imported as structs a, b and c for this particular log file, so that I can easily access variables as a.b.c or c.d.f.
Here is what I came up with, by simply using readtable.
function res = log_import(logfile)
    log_table = readtable(logfile);
    res = [];
    for i = 1:width(log_table)
        str_path = log_table.Properties.VariableDescriptions{i};
        fields = strsplit(str_path, '.');
        res = setfield(res, fields{1:end}, log_table{:, i});
    end
end

How to union multiple dynamic inputs in Palantir Foundry?

I want to union multiple datasets in Palantir Foundry. The names of the datasets are dynamic, so I would not be able to give the dataset names in transform_df() statically. Is there a way I can dynamically take multiple inputs into transform_df and union all of those dataframes?
I tried looping over the datasets like:
li = ['dataset1_path', 'dataset2_path']
union_df = None
for p in li:
    @transform_df(
        Output(p + "_output"),
        my_input=Input(p),
    )
    def my_compute_function(my_input):
        return my_input

    if union_df is None:
        union_df = my_compute_function
    else:
        union_df = union_df.union(my_compute_function)
But, this doesn't generate the unioned output.
This should work for you with some changes. Here is a generalized example of dynamic JSON input datasets that should be adaptable to any dynamic input file type, or to any internal Foundry dataset you can specify. It works on a set of JSON files uploaded to a dataset node in the platform and is fully dynamic. Doing a union after this should be a simple matter.
There's some bonus logging going on here as well.
Hope this helps
from transforms.api import Input, Output, transform
from pyspark.sql import functions as F
import json
import logging


def transform_generator():
    transforms = []
    transf_dict = {## enter your dynamic mappings here ##}

    for value in transf_dict:
        @transform(
            out=Output(' path to your output here '.format(val=value)),
            inpt=Input(" path to input here ".format(val=value)),
        )
        def update_set(ctx, inpt, out):
            spark = ctx.spark_session
            sc = spark.sparkContext

            filesystem = list(inpt.filesystem().ls())
            file_dates = []
            for files in filesystem:
                with inpt.filesystem().open(files.path) as fi:
                    data = json.load(fi)
                file_dates.append(data)

            logging.info('info logs:')
            logging.info(file_dates)

            json_object = json.dumps(file_dates)
            df_2 = spark.read.option("multiline", "true").json(sc.parallelize([json_object]))
            df_2 = df_2.withColumn('upload_date', F.current_date())
            df_2 = df_2.drop_duplicates()
            out.write_dataframe(df_2)

        transforms.append(update_set)

    return transforms


TRANSFORMS = transform_generator()
So this question breaks down into two questions.
How to handle transforms with programmatic input paths
To handle transforms with programmatic inputs, it is important to remember two things:
1st - Transforms will determine your inputs and outputs at CI time, which means that you can have Python code that generates transforms, but you cannot read paths from a dataset; they need to be hardcoded into the Python code that generates the transform.
2nd - Your transforms will be created only once, during the CI execution, meaning that you can't have incremental or special logic that generates different paths whenever the dataset builds.
With these two premises, like in your example or @jeremy-david-gamet's (thank you for the reply, gave you a +1), you can have Python code that generates your paths at CI time.
dataset_paths = ['dataset1_path', 'dataset2_path']

for path in dataset_paths:
    @transform_df(
        Output(f"{path}_output"),
        my_input=Input(path),
    )
    def my_compute_function(my_input):
        return my_input
However, to union them you'll need a second transform to execute the union. That transform will need to take multiple inputs, so you can use *args or **kwargs for this:
import pyspark.sql

dataset_paths = ['dataset1_path', 'dataset2_path']
all_args = [Input(path) for path in dataset_paths]
all_args.append(Output("path/to/unioned_dataset"))

@transform_df(*all_args)
def my_compute_function(*args):
    input_dfs = []
    for arg in args:
        # there are other arguments like ctx in the args list, so we need to
        # check for type. You can also use kwargs for more determinism.
        if isinstance(arg, pyspark.sql.DataFrame):
            input_dfs.append(arg)

    # now that you have your dfs in a list you can union them
    # Note I didn't test this code, but it should be something like this
    ...
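One way to fill in that last step (a sketch, not tested against Foundry, assuming the DataFrames collected in input_dfs all share the same schema) is a plain reduce:
from functools import reduce
from pyspark.sql import DataFrame

def union_all(input_dfs):
    # union a list of DataFrames with identical schemas into a single DataFrame
    return reduce(DataFrame.union, input_dfs)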
How to union datasets with different schemas.
For this part there are plenty of Q&As out there on how to union dataframes with different schemas in Spark. Here is a short code example copied from https://stackoverflow.com/a/55461824/26004:
from pyspark.sql import SparkSession, HiveContext
from pyspark.sql.functions import lit
from pyspark.sql import Row


def customUnion(df1, df2):
    cols1 = df1.columns
    cols2 = df2.columns
    total_cols = sorted(cols1 + list(set(cols2) - set(cols1)))

    def expr(mycols, allcols):
        def processCols(colname):
            if colname in mycols:
                return colname
            else:
                return lit(None).alias(colname)
        cols = map(processCols, allcols)
        return list(cols)

    appended = df1.select(expr(cols1, total_cols)).union(df2.select(expr(cols2, total_cols)))
    return appended
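For example (a quick sketch with two hypothetical toy DataFrames, assuming customUnion as defined above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([(1, "a")], ["id", "name"])
df2 = spark.createDataFrame([(2, 99)], ["id", "score"])
# columns missing on either side are filled with nulls before the union
customUnion(df1, df2).show()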
Since inputs and outputs are determined at CI time, we cannot form true dynamic inputs. We will have to somehow point to specific datasets in the code. Assuming the paths of datasets share the same root, the following seems to require minimum maintenance:
from transforms.api import transform_df, Input, Output
from functools import reduce

datasets = [
    'dataset1',
    'dataset2',
    'dataset3',
]

inputs = {f'inp{i}': Input(f'input/folder/path/{x}') for i, x in enumerate(datasets)}
kwargs = {
    **{'output': Output('output/folder/path/unioned_dataset')},
    **inputs
}

@transform_df(**kwargs)
def my_compute_function(**inputs):
    unioned_df = reduce(lambda df1, df2: df1.unionByName(df2), inputs.values())
    return unioned_df
Regarding unions of different schemas, since Spark 3.1 one can use this:
df1.unionByName(df2, allowMissingColumns=True)
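For example, the same toy DataFrames as above can be unioned directly (a minimal sketch, Spark 3.1 or later):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([(1, "a")], ["id", "name"])
df2 = spark.createDataFrame([(2, 99)], ["id", "score"])
# missing columns on either side become nulls, no helper function needed
df1.unionByName(df2, allowMissingColumns=True).show()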

Can operations on a numpy.memmap be deferred?

Consider this example:
import numpy as np
a = np.array(1)
np.save("a.npy", a)
a = np.load("a.npy", mmap_mode='r')
print(type(a))
b = a + 2
print(type(b))
which outputs
<class 'numpy.core.memmap.memmap'>
<class 'numpy.int32'>
So it seems that b is not a memmap any more, and I assume that this forces numpy to read the whole a.npy, defeating the purpose of the memmap. Hence my question, can operations on memmaps be deferred until access time?
I believe subclassing ndarray or memmap could work, but don't feel confident enough about my Python skills to try it.
Here is an extended example showing my problem:
import numpy as np
# create 8 GB file
# np.save("memmap.npy", np.empty([1000000000]))
# I want to print the first value using f and memmaps
def f(value):
    print(value[1])
# this is fast: f receives a memmap
a = np.load("memmap.npy", mmap_mode='r')
print("a = ")
f(a)
# this is slow: b has to be read completely and converted into an array
b = np.load("memmap.npy", mmap_mode='r')
print("b + 1 = ")
f(b + 1)
Here's a simple example of an ndarray subclass that defers operations on it until a specific element is requested by indexing.
I'm including this to show that it can be done, but it almost certainly will fail in novel and unexpected ways, and require substantial work to make it usable.
For a very specific case it may be easier than redesigning your code to solve the problem in a better way.
I'd recommend reading over these examples from the docs to help understand how it works.
import numpy as np

class Defered(np.ndarray):
    """
    An array class that defers calculations applied to it, only
    calculating them when an index is requested
    """
    def __new__(cls, arr):
        arr = np.asanyarray(arr).view(cls)
        arr.toApply = []
        return arr

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        ## Convert all arguments to ndarray, otherwise arguments
        # of type Defered will cause infinite recursion
        # also store self as None, to be replaced later on
        newinputs = []
        for i in inputs:
            if i is self:
                newinputs.append(None)
            elif isinstance(i, np.ndarray):
                newinputs.append(i.view(np.ndarray))
            else:
                newinputs.append(i)

        ## Store function to apply and necessary arguments
        self.toApply.append((ufunc, method, newinputs, kwargs))
        return self

    def __getitem__(self, idx):
        ## Get index and convert to regular array
        sub = self.view(np.ndarray).__getitem__(idx)

        ## Apply stored actions
        for ufunc, method, inputs, kwargs in self.toApply:
            inputs = [i if i is not None else sub for i in inputs]
            sub = super().__array_ufunc__(ufunc, method, *inputs, **kwargs)

        return sub
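For example, deferring an addition on the memory-mapped array from the question (a sketch, assuming memmap.npy was created as in the extended example above):
a = np.load("memmap.npy", mmap_mode='r')
d = Defered(a) + 1  # nothing is computed yet, the ufunc is only recorded
print(d[1])         # the stored + 1 is applied only to the element being read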
This will fail if modifications are made to it that don't use numpy's universal functions. For instance percentile and median aren't based on ufuncs, and would end up loading the entire array. Likewise, if you pass it to a function that iterates over the array, or applies an index to substantial amounts of it, the entire array will be loaded.
This is just how Python works. By default numpy operations return a new array, so b never exists as a memmap - it is created when + is called on a.
There are a couple of ways to work around this. The simplest is to do all operations in place:
a += 1
This requires opening the memory-mapped array for reading and writing:
a = np.load("a.npy", mmap_mode='r+')
Of course this isn't any good if you don't want to overwrite your original array.
In this case you need to specify that b should be memmapped.
b = np.memmap("b.npy", mode='w+', dtype=a.dtype, shape=a.shape)
Assigning can be done by using the out keyword provided by numpy ufuncs.
np.add(a, 2, out=b)
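Putting those pieces together for the non-destructive case (a sketch; b.dat is a hypothetical output file, and note that np.memmap writes a raw buffer rather than the .npy format):
import numpy as np

a = np.load("memmap.npy", mmap_mode='r')
b = np.memmap("b.dat", mode='w+', dtype=a.dtype, shape=a.shape)
np.add(a, 2, out=b)  # the result is written straight into the memory-mapped file
b.flush()            # flush the mapped pages to disk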

How to mimic the function map.getOrElse on a CSV file

I have a CSV file that represents a Map[String,Int], and I am reading the file as follows:
def convI2N(vkey: Int): String = {
  val in = new Scanner("dictionaryNV.csv")
  loop.breakable {
    while (in.hasNext) {
      val nodekey = in.next(',')
      val value = in.next('\n')
      if (value == vkey.toString) {
        n = nodekey
        loop.break()
      }
    }
  }
  in.close
  n
}
The function gives the String given the Int. The problem here is that I must scan the whole file, and the file is too big, so the procedure is too slow. Someone told me that this is O(n) time complexity and recommended moving to O(log n). I suppose that the function map.getOrElse is O(log n).
Can someone help me find a way to get better performance from this code?
As an additional comment, the dictionaryNV file is sorted by the Int values.
Maybe I can divide the file by lines, or sets of lines. The CSV has around 167000 tuples of [String, Int].
Or, put another way, how do you do some kind of binary search through the CSV in Scala?
If you are calling the convI2N function many times then the job will definitely be slow, because each call has to scan the big file. So if the function is called many times, it is recommended to load the data once into a temporary structure such as a Map or a collection of Tuple2, look values up there, and change the rest of the code accordingly.
You can try the following way, which should be faster than the Scanner approach.
Assuming that your CSV file is comma-separated as
key1,value1
key2,value2
Using Source.fromFile can be your solution:
import scala.io.Source

def convI2N(vkey: Int): String = {
  var n = "not found"
  val filtered = Source.fromFile("<your path to dictionaryNV.csv>")
    .getLines()
    .map(line => line.split(","))
    .filter(sline => sline(1).equalsIgnoreCase(vkey.toString)) // match on the Int value column
  for (str <- filtered) {
    n = str(0) // return the String key
  }
  n
}

Can pysnmp return OctetString values only

I am doing a small script to get SNMP traps with PySnmp.
I am able to get the oid = value pairs, but the value is very long, with a small piece of information at the end. How can I access only the OctetString that comes at the end of the value? Is there a way other than string manipulation? Please comment.
OID =_BindValue(componentType=NamedTypes(NamedType('value', ObjectSyntax------------------------------------------------(DELETED)-----------------(None, OctetString(b'New Alarm'))))
Is it possible to get the output like the following, as is available from another SNMP client:
.iso.org.dod.internet.private.enterprises.xxxx.1.1.2.2.14: CM_DAS Alarm Traps:
Edit - the code is:
for oid, val in varBinds:
    print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))
    target.write(str(val))
On screen the output is short, but in the file, val is very long.
Using target.write(str(val[0][1][2])) does not work for all OIDs (the program stops with an error), but the first OID (the time tick) works fine.
How can I get the value from the tail, as that is where the actual value is found for all OIDs?
Thanks.
SNMP transfers information in form of a sequence of OID-value pairs called variable-bindings:
variable_bindings = [[oid1, value1], [oid2, value2], ...]
Once you get the variable-bindings sequence from SNMP PDU, to access value1, for example, you might do:
variable_binding1 = variable_bindings[0]
value1 = variable_binding1[1]
To access the tail part of value1 (assuming it's a string) you could simply subscript it:
tail_of_value1 = value1[-10:]
I guess in your question you operate on a single variable_binding, not a sequence of them.
If you want pysnmp to translate oid-value pair into a human-friendly representation (of MIB object name, MIB object value), you'd have to pass original OID-value pair to the ObjectType class and run it through MIB resolver as explained in the documentation.
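A rough sketch of that resolution step (untested, based on the pysnmp MIB documentation; varBinds is the sequence from your trap handler):
from pysnmp.smi import builder, view, rfc1902

mib_builder = builder.MibBuilder()
mib_view = view.MibViewController(mib_builder)

for oid, val in varBinds:
    # resolve the numeric OID and value against the loaded MIBs
    resolved = rfc1902.ObjectType(rfc1902.ObjectIdentity(oid), val).resolveWithMib(mib_view)
    print(resolved.prettyPrint())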
Thanks...
The following code works somewhat like what I was looking for:
if str(oid) == "1.3.6.1.2.1.1.3.0":
    target.write(" = str(val[0][1]['timeticks-value']) = " + str(val[0][1]['timeticks-value']))  # time ticks
else:
    target.write("= val[0][0]['string-value']= " + str(val[0][0]['string-value']))