Projection anomaly between function vs string projection definition - proj4js

We recently switched our definitions from the first to the second format, because OpenLayers threw exceptions on the first one.
The definitions used:
Old:
proj4.defs["EPSG:28992"] = "+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel +towgs84=565.417,50.3319,465.552,-0.398957,0.343988,-1.8774,4.0725 +units=m +no_defs";
New:
proj4.defs("EPSG:28992", "+proj=sterea +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000 +y_0=463000 +ellps=bessel +towgs84=565.040,49.910,465.840,-0.40939,0.35971,-1.86849,4.0772 +units=m +no_defs")
Strangely enough, the latter does perform the transformation, but our points seem to be misaligned, and not by a specific offset; they just seem to be positioned wrongly altogether. We think this is due to the towgs84 property. The question now is: how is the first format parsed/handled differently from the second? What are the differences? (I am using the same code and the newest version of proj4js in both cases.)

I was accidentally loading proj4 twice, once through Potree, and once manually (for OpenLayers). It turned out one of the two was still on version 2.2.1...

Related

Mozilla DeepSpeech STT suddenly can't spell

I am using DeepSpeech for speech-to-text. Up to 0.8.1, when I ran transcriptions like:
byte_encoding = subprocess.check_output(
    "deepspeech --model deepspeech-0.8.1-models.pbmm --scorer deepspeech-0.8.1-models.scorer --audio audio/2830-3980-0043.wav", shell=True)
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I would get back results that were pretty good. But since 0.8.2, where the scorer argument was removed, my results are just rife with misspellings that make me think I am now getting a character-level model where I used to get a word-level model. The errors are in a direction that suggests the model isn't correctly specified somehow.
Now when I call:
byte_encoding = subprocess.check_output(
    ['deepspeech', '--model', 'deepspeech-0.8.2-models.pbmm', '--audio', myfile])
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I now see errors like
endless -> "endules"
service -> "servic"
legacy -> "legaci"
earning -> "erting"
before -> "befir"
I'm not 100% sure that it is related to removing the scorer from the API, but it is one thing I see changing between releases, and the documentation suggested accuracy improvements in particular.
Short: The scorer matches letter output from the audio to actual words. You shouldn't leave it out.
Long: If you leave out the scorer argument, you won't be able to recognize real-world sentences, because the scorer matches the output of the acoustic model against the words and word combinations present in the textual language model that is part of the scorer. And bear in mind that each scorer ships with specific lm_alpha and lm_beta values that make the search even more accurate.
The 0.8.2 version should still accept the scorer argument; otherwise update to 0.9.0, which has it as well. Perhaps your environment has changed in some way; I would start over in a fresh directory and venv.
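For the command-line call from the question, a minimal sketch of what that could look like with the scorer re-added (assuming you have downloaded the matching 0.8.2 scorer file next to the model; adjust the file names to whatever you actually have):
import subprocess

cmd = [
    'deepspeech',
    '--model', 'deepspeech-0.8.2-models.pbmm',
    '--scorer', 'deepspeech-0.8.2-models.scorer',  # re-adding the scorer is the key change
    '--audio', 'audio/2830-3980-0043.wav',
]
byte_encoding = subprocess.check_output(cmd)
transcription = byte_encoding.decode("utf-8").rstrip("\n")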
Assuming you are using Python, you could add this to your code:
ds.enableExternalScorer(args.scorer)
ds.setScorerAlphaBeta(args.lm_alpha, args.lm_beta)
And check the example script.
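Roughly, a minimal sketch of the native Python API, modeled on the official client script (the model and scorer file names are placeholders for whatever you downloaded; the alpha/beta call is optional and only needed if you have values tuned for your scorer):
import wave
import numpy as np
from deepspeech import Model

ds = Model('deepspeech-0.8.2-models.pbmm')
ds.enableExternalScorer('deepspeech-0.8.2-models.scorer')
# ds.setScorerAlphaBeta(lm_alpha, lm_beta)  # optional; use the values tuned for your scorer

with wave.open('audio/2830-3980-0043.wav', 'rb') as fin:
    # DeepSpeech expects 16 kHz, 16-bit mono PCM samples
    audio = np.frombuffer(fin.readframes(fin.getnframes()), dtype=np.int16)

print(ds.stt(audio))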

Parsing XML and retrieving attributes from (nested?) elements

I am trying to get specific data from an XML file, namely X, Y coordinates that appear, to my beginner's eyes, to be attributes of an element called "Point" in my file. I cannot get to that data with anything other than a sledgehammer approach and would gratefully accept some help.
I have used the following successfully:
for Shooter in root.iter('Shooter'):
    print(Shooter.attrib)
But if I try the same with "Point" (or "Points") there is no output. I cannot even see "Point" when I use the following:
for child in root:
    print(child.tag, child.attrib)
So: the sledgehammer
print([elem.attrib for elem in root.iter()])
Which gives me the attributes for every element. This file is a single collection of data and could contain hundreds of data points and so I would rather try to be a little more subtle and home in on exactly what I need.
My XML file
https://pastebin.com/abQT3t9k
UPDATE: Thanks for the answers so far. I tried the solution posted and ended up with 7000 lines of output, which wasn't quite what I was after. I should have explained in more detail. I also tried (as suggested):
def find_rec(node, element, result):
    for item in node.findall(element):
        result.append(item)
        find_rec(item, element, result)
    return result
print(find_rec(ET.parse(filepath_1), 'Shooter', []))  #Returns <Element 'Shooter' at 0x125b0f958>
print(find_rec(ET.parse(filepath_1), 'Point', []))  #Returns None
I admit I have never worked with XML files before, and I am new to Python (but enjoying it). I wanted to get the solution myself but I have spent days getting nowhere.
I perhaps should have just asked from the beginning how to extract the XY data for each ShotNbr (in this file there is just one) but I didn't want code written for me.
I've managed to get the XY from this file but my code will never work if there is more than one shot, or if I want to specifically look at, say, shot number 20.
How can I find shot number 2 (ShotNbr="2") and extract only its XY data points?
Assuming that you are using xml.etree.ElementTree: you are only looking at the direct children of the root. You need to recurse into the tree to access elements lower down in the hierarchy.
This seems to be the same problem as ElementTree - findall to recursively select all child elements, which has an excellent answer that I am not going to plagiarize. Just apply it.
Alternatively,
import xml.etree.ElementTree as ET
tree = ET.parse("file.xml")  # ET.parse() returns an ElementTree; findall() on it searches from the root element
print(tree.findall('.//Point'))
should work.
See: https://docs.python.org/2/library/xml.etree.elementtree.html#supported-xpath-syntax
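To get at a specific shot, here is a minimal sketch using ElementTree's limited XPath support. It assumes, based on your description rather than the full file, that some element carries a ShotNbr attribute and that the Point elements nested underneath it carry X and Y attributes; adjust the names to your actual structure:
import xml.etree.ElementTree as ET

tree = ET.parse('file.xml')
# Find the element whose ShotNbr attribute is "2", wherever it sits in the tree.
for shot in tree.findall('.//*[@ShotNbr="2"]'):
    # Collect the X/Y attributes of every Point nested under that shot.
    points = [(p.get('X'), p.get('Y')) for p in shot.findall('.//Point')]
    print(points)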

Get the number of features displayed on a map after a filter (with featuresIn)

First of all, I'm aware that querySourceFeatures could fix this, but unfortunately the new version works with all layer types except symbols, which I'm using. So I'm still coding against version 0.14.x.
That said, I filter my map with the setFilter function, and I need to retrieve the number of features displayed once the filter is applied.
I thought about transforming the whole-world map coordinates (-90,-180,90,180) into pixels and then passing them to the featuresIn function.
With the fiddle below, featuresIn returns nothing [EDIT: that was due to not setting interactive: true; now it's set, but the issue is still there]. Do you have any idea how to get the number of features displayed on my map?
EDIT: Please find my jsFiddle: https://jsfiddle.net/y7hoa0gy/7/
No features are being returned from featuresIn because you did not set "interactive": true on the "route" layer, as specified in the documentation (but no longer on our official docs page because we have changed this API).
/*Now I want to know how many features are still displayed after that filter
My thought was to get the bbox of the whole map (-180,-90,180,90) and make a featuresIn of that bbox.*/
More fundamentally, this approach will not work. featuresIn only returns features in the current viewport. It does not return all features.

Get real position in lexer. Added example

I am writing an editor with NetBeans 7 and ANTLR 4.
I have this line in my .g4 file:
Label : {(getCharPositionInLine()==0)}? ID;
That works well for static files, but while editing, getCharPositionInLine() often returns 0 in other places as well.
How do I get the real position in the lexer?
Update:
I created an example demonstrating this problem:
https://github.com/daimor/SimpleANTLR
Your error is likely in the way you are constructing your lexer and/or input stream (i.e. code that you have not shown here). The predicate you describe above will work as expected for an ANTLR 4 lexer.
Also, if getCharPositionInLine()==0, then exactly one of the following conditions is also true:
_input.index()==0
_input.LA(-1)=='\n'

Data Processing, how to approach

I have the following problem, given this XML data structure:
<level1>
    <level2ElementTypeA></level2ElementTypeA>
    <level2ElementTypeB>
        <level3ElementTypeA>String1Ineed</level3ElementTypeA>
    </level2ElementTypeB>
    ...
    <level2ElementTypeC>
        <level3ElementTypeB attribute1="...">
            <level4ElementTypeA>String2Ineed</level4ElementTypeA>
        </level3ElementTypeB>
    </level2ElementTypeC>
    ...
    <level2ElementTypeD></level2ElementTypeD>
</level1>
<level1>...</level1>
I need to create an entity which contains String1Ineed and String2Ineed.
So every time I come across a level3ElementTypeB with a certain value in attribute1, I have my String2Ineed. The ugly part is how to obtain String1Ineed, which is located in the first element of type level2ElementTypeB above the current level2ElementTypeC.
My 'imperative' solution is to always keep a variable holding the last value of String1Ineed and, when I hit the criteria for String2Ineed, simply use that. If we look at this from a plain collection-processing point of view, how would you model the backtracking logic between String1Ineed and String2Ineed? Using the State monad?
Isn't this what XPath is for? You can find String2Ineed and then change the axis to search back for String1Ineed.
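A minimal sketch of that idea in Python, assuming lxml (the ancestor and preceding-sibling axes are not available in xml.etree.ElementTree), with placeholder names for the input file and the attribute1 value:
from lxml import etree

tree = etree.parse('data.xml')
# Every level3ElementTypeB whose attribute1 has the value we care about.
for node in tree.xpath('//level3ElementTypeB[@attribute1="certainValue"]'):
    string2 = node.findtext('level4ElementTypeA')
    # Walk back to the nearest level2ElementTypeB before the enclosing level2ElementTypeC
    # and take the String1Ineed text inside it.
    hits = node.xpath('ancestor::level2ElementTypeC/preceding-sibling::level2ElementTypeB[1]/level3ElementTypeA/text()')
    string1 = hits[0] if hits else None
    print(string1, string2)
Keeping it declarative this way avoids the hand-rolled "remember the last String1Ineed" state variable entirely.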