LayerNorm from different frameworks produces different results - neural-network

Why do
tf.keras.layers.LayerNormalization(axis=-1, epsilon=0.001, center=False, scale=False)(x)
and
torch.nn.LayerNorm(torch_zweiter_schritt.shape, eps=0.001, elementwise_affine=False)(x)
produce different results?
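A likely cause (an assumption, since the rest of the code isn't shown): torch.nn.LayerNorm is being given the tensor's full shape as normalized_shape, so it normalizes over every dimension including the batch axis, whereas axis=-1 in Keras normalizes over the last dimension only. A minimal sketch with a made-up (4, 8) input:
import numpy as np
import tensorflow as tf
import torch

# Made-up input purely for illustration: 4 samples, 8 features.
x = np.random.rand(4, 8).astype(np.float32)

# Keras: normalizes over the last axis only.
tf_out = tf.keras.layers.LayerNormalization(
    axis=-1, epsilon=0.001, center=False, scale=False)(x).numpy()

# Passing the full shape -> PyTorch normalizes over all dims, batch included.
torch_all = torch.nn.LayerNorm(x.shape, eps=0.001,
                               elementwise_affine=False)(torch.from_numpy(x))

# Passing only the last dimension should line up with the Keras result.
torch_last = torch.nn.LayerNorm(x.shape[-1], eps=0.001,
                                elementwise_affine=False)(torch.from_numpy(x))

print(np.abs(tf_out - torch_all.numpy()).max())   # noticeably different
print(np.abs(tf_out - torch_last.numpy()).max())  # near float precision
If that is indeed the issue, passing only the feature dimension (x.shape[-1], or the trailing dims you actually want normalized) to torch.nn.LayerNorm should bring the two frameworks into agreement.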


Is it possible to merge raster bands from several folders using GDAL?

I have two folders, each containing about 15 000 .tif files. Each file in the first folder is a raster with 5 bands, named AA_"number", so the folder looks like
AA_1.tif,
AA_2.tif,
...,
AA_15000.tif.
Each file in the second folder is a raster with 2 bands, named BB_"number", and the folder looks like
BB_1.tif,
BB_2.tif,
...,
BB_15000.tif.
My goal is to combine bands 1-3 from the first file in folder AA with band 1 from the first file in folder BB to create a 4-band raster, and to repeat this so I end up with 15 000 4-band rasters. After doing some research and testing things out in QGIS, I believe GDAL's Merge tool could solve this task, but I have not been able to make it find the right files in the different folders. And as I have 2 x 15 000 files, it is not possible to do this selection manually. Does anyone know a smart solution to this, preferably using GDAL or QGIS?
There are many ways to do this, and it really depends on the exact use case, like the type of analysis/visualization that needs to be done on the result.
With this many files it could, for example, be nice to merge them using a VRT. That avoids creating redundant data, but whether that's actually the best solution depends on your workflow. Just stacking them in a new tiff-file would of course also work.
Unfortunately, creating a VRT using gdalbuildvrt / gdal.BuildVRT is not possible with multi-band inputs.
If your inputs are homogeneous in terms of properties, it should be fairly simple to set up a template where you fill in the file locations and write the VRT to disk. For more inputs with heterogeneous properties it might still be possible, but you'll have to be careful to take it all into account.
Conceptually such a VRT would look something like:
<VRTDataset rasterXSize="..." rasterYSize="...">
<SRS>...</SRS>
<GeoTransform>....</GeoTransform>
<VRTRasterBand dataType="..." band="1">
<ComplexSource>
<SourceFilename relativeToVRT="0">//some_drive/aa_folder/aa_file1.tif</SourceFilename>
<SourceBand>1</SourceBand>
...
</ComplexSource>
</VRTRasterBand>
<VRTRasterBand dataType="..." band="2">
<ComplexSource>
<SourceFilename relativeToVRT="0">//some_drive/aa_folder/aa_file1.tif</SourceFilename>
<SourceBand>2</SourceBand>
...
</ComplexSource>
</VRTRasterBand>
<VRTRasterBand dataType="..." band="3">
<ComplexSource>
<SourceFilename relativeToVRT="0">//some_drive/aa_folder/aa_file1.tif</SourceFilename>
<SourceBand>3</SourceBand>
...
</ComplexSource>
</VRTRasterBand>
<VRTRasterBand dataType="..." band="4">
<ComplexSource>
<SourceFilename relativeToVRT="0">//some_drive/bb_folder/bb_file1.tif</SourceFilename>
<SourceBand>1</SourceBand>
...
</ComplexSource>
</VRTRasterBand>
</VRTDataset>
You can first use gdalbuildvrt on some of your files to find all the properties that need to be filled in, like projection, pixel dimensions etc. That will work, but gdalbuildvrt will only be able to take the first band from the inputs. If all bands have homogeneous properties (like nodata value etc), that should be fine as a reference.
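If you'd rather not write the VRT XML by hand, here is a rough Python sketch of the same idea (the folder names, output naming, and the 1-15000 range are assumptions): pull the wanted bands into temporary single-band VRTs with gdal.Translate, then stack them with gdal.BuildVRT(separate=True).
import os
from osgeo import gdal

gdal.UseExceptions()

# Assumed locations and naming; adjust to the real folder structure.
aa_dir, bb_dir, out_dir = "aa_folder", "bb_folder", "stacked"
os.makedirs(out_dir, exist_ok=True)

for i in range(1, 15001):
    aa = os.path.abspath(os.path.join(aa_dir, f"AA_{i}.tif"))
    bb = os.path.abspath(os.path.join(bb_dir, f"BB_{i}.tif"))

    # One single-band VRT per wanted band: bands 1-3 of AA plus band 1 of BB.
    band_vrts = []
    for j, (src, band) in enumerate([(aa, 1), (aa, 2), (aa, 3), (bb, 1)]):
        vrt_path = f"/vsimem/band_{j}.vrt"
        ds = gdal.Translate(vrt_path, src, format="VRT", bandList=[band])
        ds = None  # close/flush the temporary VRT
        band_vrts.append(vrt_path)

    # separate=True puts each single-band input into its own output band.
    stack = gdal.BuildVRT("", band_vrts, separate=True)

    # Materialise the 4-band GeoTIFF (or write a .vrt to disk instead).
    gdal.Translate(os.path.join(out_dir, f"stacked_{i}.tif"), stack)
    stack = None

    for p in band_vrts:
        gdal.Unlink(p)
This assumes all inputs share the same extent, resolution and projection; if they don't, the BuildVRT step needs extra care, as noted above.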

Redux Toolkit createApi: how to mix and parse two endpoints' results?

tl;dr: how do I mix two or more createApi endpoint results?
So I'm using createApi from Redux Toolkit, and the problem I have is quite simple, but I'm kinda lost in the huge documentation of this beautiful tool.
The idea is that I have a view that will mix data coming from two different API endpoints, for example:
/users
/cars
That view will display an array mixing both results (for example, the car images are only in /cars).
A little bit like transformResponse, but for two endpoints.
What is the right way to do this mixing? (Doing it in the view doesn't seem best, and I don't want to do it on the backend either.)
You may tell me to use a reducer, but where does a reducer/slice fit in the createApi pattern? That's what I don't get.
You can combine the results outside of RTK Query:
const { data: data1 } = useAQuery(...);
const { data: data2 } = useBQuery(...);
const combined = useMemo(() => {
  /* combine data1 and data2 here */
}, [data1, data2]);
If it's needed in multiple components, you can create a custom hook useCarsAndUsers(...) to avoid code duplication.

How can I find a match in 2 separate python lists? [duplicate]

I tried using cmp(list1, list2), only to learn that it's no longer supported in Python 3.3. I've tried many other, more complex approaches, but none have worked.
I have two lists that both contain just words, and I want to check how many words appear in both lists and return that count.
You can find the length of the set intersection using & like this:
len(set(list1) & set(list2))
Example:
>>> len(set(['cat','dog','pup']) & set(['rat','cat','wolf']))
1
>>> set(['cat','dog','pup']) & set(['rat','cat','wolf'])
{'cat'}
Alternatively, if you don't want to use sets for some reason, you can always use collections.Counter, which supports most multiset operations:
>>> from collections import Counter
>>> print(list((Counter(['cat','dog','wolf']) & Counter(['pig','fish','cat'])).elements()))
['cat']
If you just want a count of how many words from list1 also appear in list2:
common = sum(1 for i in list1 if i in list2)
If you actually want the shared words themselves:
common_words = set(list1).intersection(list2)
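Worth noting: the two snippets above don't always agree. The generator counts duplicates in list1, while the set intersection counts distinct words only. A small illustration:
list1 = ['cat', 'dog', 'dog', 'pup']
list2 = ['rat', 'cat', 'dog', 'wolf']

print(sum(1 for i in list1 if i in list2))   # 3 (the duplicate 'dog' is counted twice)
print(len(set(list1) & set(list2)))          # 2 distinct shared words
print(set(list1).intersection(list2))        # {'cat', 'dog'}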

Combining colls together - MaxMSP

I work on a project in MaxMSP where I have multiple colls. I want to combine all the lists in them into one single coll. Is there a way to do that directly, without unpacking and repacking everything?
In order to be more clear, let’s say I have two colls, with the first one being:
0, 2
1, 4
2, 4
….
99, 9
while the second one is:
100, 8
101, 4
…
199, 7
I would like the final coll to be one list from 0-199.
Please keep in mind I don't want to unpack everything (with uzi, for instance) because my lists are very long, and I find it problematic for the CPU to use colls with such long lists. That's why I broke my huge list into sublists/subcolls in the first place.
Hope that’s clear enough.
If the two colls do not have overlapping indices, then you can just dump one into the other, like this:
----------begin_max5_patcher----------
524.3ocyU0tSiCCD72IOEQV7ybnZmFJ28pfPUNI6AlKwIxeTZEh28ydsCDNB
hzdGbTolTOd20yXOd6CoIjp98flj8irqxRRdHMIAg7.IwwIjN995VtFCizAZ
M+FfjGly.6MHdisaXDTZ6DxVvfYvhfCbS8sB4MaUPsIrhWxNeUdFsf5esFex
bPYW+bc5slwBQinhFbA6qt6aaFWwPXlCCPnxDxSEQaNzhnDhG3wzT+i7+R4p
AS1YziUvTV44W3+r1ozxUnrKNdYW9gKaIbuagdkpGTv.HalU1z26bl8cTpkk
GufK9eI35911LMT2ephtnbs+0l2ybu90hl81hNex241.hHd1usga3QgGUteB
qDoYQdDYLpqv3dJR2L+BNLQodjc7VajJzrqivgs5YSkMaprkjZwroVLI03Oc
0HtKv2AMac6etChsbiQIprlPKto6.PWEfa0zX5+i8L+TnzlS7dBEaLPC8GNN
OC8qkm4MLMKx0Pm21PWjugNuwg9A6bv8URqP9m+mJdX6weocR2aU0imPwyO+
cpHiZ.sQH4FQubRLtt+YOaItUzz.3zqFyRn4UsANtZVa8RYyKWo4YSwmFane
oXSwBXC6SiMaV.anmHaBlZ9vvNPoikDIhqa3c8J+vM43PgLLDqHQA6Diwisp
Hbkqimwc8xpBMc1e4EjPp8MfRZEw6UtU9wzeCz5RFED
-----------end_max5_patcher-----------
mzed's answer works, as stated, if the lists have no overlapping indices, which they shouldn't based on the design you describe.
If you are treating your 'huge list' as multiple lists, or vice versa, that framing might help produce an answer. One question some may ask is "why are you merging it again?"
you consider your program to have one large list
that large list is really an interface that handles how you interact with several sub-lists for efficiency's sake
the interface to your data persistence (the lists) for storing and retrieval then acts like one large list but works with several under the hood
an insertion and retrieval mechanism for handling the multiple lists as one list should then exist in your interface
save and reload the sublists individually as well
If you wrap this into a poly~, the voice acts as the sublist, so when I say voice I basically mean sublist:
You could use a universal send/receive in and out of a poly~ abstraction that contains each sublist's own coll; the voice # from poly~ can be appended to the filename that that voice's [coll] reads from and saves to.
With that set up, you could specify the number of sublists (voices) and master list length you want in the poly~ arguments like:
[poly~ sublist_manager.maxpat 10 1000] // 10 sublists emulating a 1000-length list
The math for index lookup is:
//main variables for master list creation/usage
master_list_length = 1000;
sublist_count = 10;
sublist_length = master_list_length/sublist_count;
//variables computed when inserting/looking up an index
sublist_number = (desired_index/sublist_length); //integer divide to get the sublist you'll be performing the lookup in
sublist_index = (desired_index%sublist_length); //actual index within that sublist to access
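As a quick sanity check of that math, here is the same lookup in plain Python (values taken from the poly~ example above; the function name is just for illustration):
master_list_length = 1000
sublist_count = 10
sublist_length = master_list_length // sublist_count   # 100 entries per sublist

def locate(desired_index):
    # map a master-list index to (sublist number / poly~ voice, index within that sublist)
    return desired_index // sublist_length, desired_index % sublist_length

print(locate(0))    # (0, 0)  -> first entry of the first sublist
print(locate(250))  # (2, 50) -> entry 50 of the third sublist
print(locate(999))  # (9, 99) -> last entry of the last sublist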
If the above ^ is closer to what you're looking for I can work on a patch for that. cheers

Double-metaphone errors

I'm using Lawrence Philips' Double Metaphone algorithm with great success, but I have found the odd "unexpected result" for some combinations.
Does anyone else have additions or changes to the algorithm that they wouldn't mind sharing, or simply combinations they've found that do not work as expected?
e.g. I had issues between:
Peashill and Bushley (both match with PXL)
Rockliffe and Rockcliffe (RKLF and RKKL)
All Soundex, Metaphone and variant schemes are occasionally going to give results that aren't identical to what you expect. This is unavoidable - they can be regarded as more or less simple hash algorithms with special information preserving properties, and will sometimes produce collisions when you'd rather they didn't, and will sometimes produce differences when you'd rather they didn't.
One possible way of improving things is using 'synonym rings'. This basically produces lists of words that should be regarded as synonyms, independent of the spelling. I encountered them in the context of name matching (a sketch of how such a ring can be applied follows the list below). For example, variants on Chaudri included:
CHAUDARY
CHAUDERI
CHAUDERY
CHAUDHARY
CHAUDHERI
CHAUDHERY
CHAUDHRI
CHAUDHRY
CHAUDHURI
CHAUDHURY
CHAUDHY
CHAUDREY
CHAUDRI
CHAUDRY
CHAUDURI
CHAWDHARY
CHAWDHRY
CHAWDHURY
CHDRY
CHODARY
CHODHARI
CHODHOURY
CHODHRY
CHODREY
CHODRY
CHODURY
CHOUDARI
CHOUDARY
CHOUDERY
CHOUDHARI
CHOUDHARY
CHOUDHERY
CHOUDHOURY
CHOUDHRI
CHOUDHRY
CHOUDHURI
CHOUDHURY
CHOUDREY
CHOUDRI
CHOUDRY
CHOUDURY
CHOUWDHRY
CHOWDARI
CHOWDARY
CHOWDHARY
CHOWDHERY
CHOWDHRI
CHOWDHRY
CHOWDHURI
CHOWDHURRYY
CHOWDHURY
CHOWDORY
CHOWDRAY
CHOWDREY
CHOWDRI
CHOWDRURY
CHOWDRY
CHOWDURI
CHOWDURY
CHUDARY
CHUDHRY
CHUDORY
COWDHURY
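Here is a minimal sketch of how such a ring could sit in front of the phonetic key (the ring contents are trimmed from the list above, and the stand-in key function is just a placeholder for whatever Double Metaphone implementation you already use):
# Minimal sketch of a synonym ring in front of a phonetic match.
SYNONYM_RINGS = [
    {"CHAUDRI", "CHAUDHRY", "CHOUDHURY", "CHOWDHURY"},   # trimmed example ring
]

# Map every variant to one canonical spelling per ring.
CANONICAL = {variant: sorted(ring)[0] for ring in SYNONYM_RINGS for variant in ring}

def same_name(a, b, phonetic_key):
    a, b = a.upper(), b.upper()
    # 1. Synonym ring wins: known variants match regardless of spelling or key.
    if CANONICAL.get(a) is not None and CANONICAL.get(a) == CANONICAL.get(b):
        return True
    # 2. Otherwise fall back to the phonetic key (e.g. Double Metaphone).
    return phonetic_key(a) == phonetic_key(b)

# Trivial stand-in key (first letter plus remaining consonants) just to make this run:
demo_key = lambda s: s[0] + "".join(c for c in s[1:] if c not in "AEIOU")
print(same_name("Chaudri", "Chowdhury", demo_key))   # True, via the ring
print(same_name("Peashill", "Bushley", demo_key))    # False, with this stand-in key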
For what it's worth, regular Metaphone does return a difference between Peashill and Bushley:
Peashill PXL
Bushley BXL