How to iterate "along" a Marpa parse forest rather than "through" its parse trees? - ambiguity

Say I have a nice ambiguous Marpa grammar and a nice ambiguous input string.
I can parse the string with Marpa and end up with a parse forest. I can even iterate through each parse tree in the forest.
But how can I iterate "along" the parse forest?
To describe what I mean:
A parse forest is a kind of graph which can have nodes where alternatives split off, and nodes where alternatives join back together into a "main stream".
Say these are the alternative parse trees of one parse forest:
A B1 C
A B2 C
A B3 B4 C
There is a main stream A ... C but an ambiguous B section.
Of course in real world parses there can be many levels of branching upon branching and there may be streams that do not rejoin a single main stream. But in general there will be a lot of parts common to two or many interpretations.
What approaches can be used to iterate along the chain of unambiguous and ambiguous nodes?
In fact can I output the entire graph?

This gist shows two examples (basic and advanced) of iterating over ASF nodes to produce a list of serialized ASTs.
Both are based on code from the Marpa::R2 test suite (cpan/t/sl_panda(1).t).
Hope it helps.
P.S. This gist will probably serve you better: it prints all ASF nodes in visiting order. You can use the
$spans->{ $literal }->{ $start }
hash to see whether a node is ambiguous, and then build the graph from there, using the span intervals ($start, $start + $length) to derive child/parent links.
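For orientation, here is a heavily condensed sketch (not a drop-in implementation) of the kind of traversal those gists perform, modelled on the ASF interface documented in ASF.pod and the sl_panda tests. $slr is assumed to be your scanless recognizer, and the glade method names (rh_values, next, etc.) should be double-checked against the documentation:

use Marpa::R2;

my $asf = Marpa::R2::ASF->new( { slr => $slr } );

my $forest = $asf->traverse( {}, sub {
    my ( $glade, $scratch ) = @_;

    # A glade with no rule is a token: just return its literal.
    my $rule_id = $glade->rule_id();
    return $glade->literal() if not defined $rule_id;

    my @alternatives;
    CHOICE: while (1) {
        # rh_values() returns the traversal results for the children
        # of this glade under its current choice.
        push @alternatives, [ $glade->rh_values() ];
        last CHOICE if not defined $glade->next();    # no further choices
    }

    # More than one entry in @alternatives means this node is ambiguous;
    # this is exactly where the forest branches and later rejoins.
    return \@alternatives;
} );

The structure in $forest is whatever you choose to return from the traverser, so this is the place to emit graph nodes and child/parent edges instead of nested arrays.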

The interface to do this just went from alpha to stable in Marpa::R2, so the question is well timed. Look at https://metacpan.org/pod/distribution/Marpa-R2/pod/ASF.pod and https://metacpan.org/pod/distribution/Marpa-R2/pod/Glade.pod.
Can you output the entire graph? Yes, but that's the easy thing to provide. The hard part was coming up with a nice way of drilling down to the parts of interest without going exponential.
Btw, another Marpa expert may chime in here, one who at this point has more experience working with my interface than I have. Perhaps you'd like to wait a bit for his answer, which you might like better than mine. :-)

Related

Text Preprocessing in Spark-Scala

I want to apply a preprocessing phase to a large amount of text data in Spark-Scala, such as lemmatization, stop-word removal (using TF-IDF) and POS tagging. Is there any way to implement these in Spark-Scala?
for example here is one sample of my data:
The perfect fit for my iPod photo. Great sound for a great price. I use it everywhere. it is very usefulness for me.
after preprocessing:
perfect fit iPod photo great sound great price use everywhere very useful
and the words should have POS tags, e.g. (iPod,NN) (photo,NN)
There is a POS tagging library (Sista, from the University of Arizona); is it applicable in Spark?
Anything is possible. The question is what YOUR preferred way of doing this would be.
For example, do you have a stop word dictionary that works for you (it could simply be a Set), or would you want to run TF-IDF to pick the stop words automatically (note that this would require some supervision, such as picking the threshold at which a word is considered a stop word)? You can provide the dictionary, and Spark's MLlib already comes with TF-IDF.
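As a minimal sketch of the TF-IDF part with the RDD-based MLlib API (sc is assumed to be the SparkContext, and the sample review is just placeholder data):

import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Placeholder corpus; in practice this would be your reviews RDD.
val reviews: RDD[String] = sc.parallelize(Seq(
  "The perfect fit for my iPod photo. Great sound for a great price."))

// Naive whitespace tokenization, one Seq[String] per document.
val docs: RDD[Seq[String]] = reviews.map(_.toLowerCase.split("\\s+").toSeq)

val tf = new HashingTF().transform(docs)    // term-frequency vectors
tf.cache()                                  // IDF makes two passes over tf
val idf = new IDF().fit(tf)                 // corpus-wide document frequencies
val tfidf: RDD[Vector] = idf.transform(tf)  // TF-IDF weights per document

You would then inspect the per-term weights to choose your stop-word threshold.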
The POS tags step is tricky. Most NLP libraries on the JVM (e.g. Stanford CoreNLP) don't implement java.io.Serializable, but you can perform the map step using them, e.g.
myRdd.map(functionToEmitPOSTags)
On the other hand, don't emit an RDD that contains non-serializable classes from that NLP library, since steps such as collect(), saveAsNewAPIHadoopFile, etc. will fail. Also to reduce headaches with serialization, use Kryo instead of the default Java serialization. There are numerous posts about this issue if you google around, but see here and here.
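One common way to keep such non-serializable tagger objects off the wire is to create them inside mapPartitions, so only plain tuples ever leave the executors. A sketch (DummyTagger is a stand-in for whichever library you pick, not a real class):

import org.apache.spark.rdd.RDD

// Stand-in for a real tagger (Stanford CoreNLP, Epic, ...); imagine it is
// expensive to construct and NOT java.io.Serializable.
class DummyTagger {
  def tag(token: String): String =
    if (token.headOption.exists(_.isUpper)) "NNP" else "NN"
}

def posTag(docs: RDD[Seq[String]]): RDD[Seq[(String, String)]] =
  docs.mapPartitions { part =>
    // Built once per partition, on the executor, so it is never serialized.
    val tagger = new DummyTagger()
    part.map(tokens => tokens.map(t => (t, tagger.tag(t))))
  }

The RDD that comes out contains only strings and tuples, so collect(), saveAs*, etc. work fine.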
Once you figure out the serialization issues, you need to figure out which NLP library to use to generate the POS tags. There are plenty of those, e.g. Stanford CoreNLP, LingPipe and Mallet for Java, Epic for Scala, etc. Note that you can of course use the Java NLP libraries with Scala, including with wrappers such as the University of Arizona's Sista wrapper around Stanford CoreNLP, etc.
Also, why didn't your example lower-case the processed text? That's pretty much the first thing I would do. If you have special cases such as iPod, you could apply the lower-casing except in those cases. In general, though, I would lower-case everything. If you're removing punctuation, you should probably first split the text into sentences (split on the period using regex, etc.). If you're removing punctuation in general, that can of course be done using regex.
How deeply do you want to stem? For example, the Porter stemmer (there are implementations in every NLP library) stems so deeply that "universe" and "university" become the same resulting stem. Do you really want that? There are less aggressive stemmers out there, depending on your use case. Also, why use stemming if you can use lemmatization, i.e. splitting the word into the grammatical prefix, root and suffix (e.g. walked = walk (root) + ed (suffix)). The roots would then give you better results than stems in most cases. Most NLP libraries that I mentioned above do that.
Also, what's your distinction between a stop word and a non-useful word? For example, you removed the pronoun in the subject form "I" and the possessive form "my," but not the object form "me." I recommend picking up an NLP textbook like "Speech and Language Processing" by Jurafsky and Martin (for the ambitious), or just reading one of the engineering-centered books about NLP tools such as LingPipe for Java, NLTK for Python, etc., to get a good overview of the terminology, the steps in an NLP pipeline, etc.
There is no built-in NLP capability in Apache Spark. You would have to implement it for yourself, perhaps based on a non-distributed NLP library, as described in marekinfo's excellent answer.
I would suggest you take a look at Spark's ML pipeline. You may not get everything out of the box yet, but you can build your own capabilities and use the pipeline as a framework.

How to implement a binary tree in matlab

Can somebody please help me with implementing a binary tree in MATLAB? Can we do it the same way we would in C/C++, using pointers? I happened to read a related question and its solution using 'struct', but that code executes n times, where n is predefined. I am dealing with a problem where the tree has to be formed dynamically, i.e.:
1. Take a node.
1.1 Do some processing.
1.2 If the resulting two answers satisfy the condition, they are added as the left and right children.
1.3 Continue the process till the condition is false.
2. Trace back and move to the next node.
Thanks in advance.
This may only partly answer your question. To get anywhere close to the mechanisms of pointers in C/C++, you might start by checking the object-oriented features of MATLAB, namely the ability to create handle classes.
There is a fully documented example for the implementation of a doubly-linked list, which comes pretty close to a binary tree.
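For a concrete starting point, a minimal handle-class node might look like the sketch below (class and property names are placeholders, not anything MATLAB provides):

classdef TreeNode < handle
    % Minimal binary-tree node.  Because this is a handle class,
    % assigning a node to left/right stores a reference, much like a
    % pointer in C/C++, so the tree can grow dynamically.
    properties
        value
        left  = []   % left child (another TreeNode) or []
        right = []   % right child (another TreeNode) or []
    end
    methods
        function obj = TreeNode(v)
            if nargin > 0
                obj.value = v;
            end
        end
    end
end

Building the tree node by node is then just:

root = TreeNode(10);
root.left = TreeNode(5);        % attach children as your condition allows
root.left.right = TreeNode(7);  % no preallocation needed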

How to diff hierarchical-data?

Are there any tools which diff hierarchies?
IE, consider the following hierarchy:
A has child B.
B has child C.
which is compared to:
A has child B.
A has child C.
I would like a tool that shows that C has moved from a child of B to a child of A. Do any such utilities exist? If there are no specific tools, I'm not opposed to writing my own, so what are some good algorithms which are applicable to this problem?
A great general resource for diffing hierarchies (not specifically XML, HTML, etc) is the Hierarchical-Diff github project based on a bit of Dartmouth research. They have a pretty extensive list of related work ranging from XML diffing, to configuration file diffing to HTML diffing.
In general, actually performing diffs/patches on tree structures is a fairly well-solved problem, but displaying those diffs in a manner that makes sense to humans is still the wild west. That's doubly true when your data structure already has some semantic meaning, as with HTML.
You might consider our SmartDifferencer tools.
These tools compare computer source code files in a diff-like way. Unlike diff, which is line oriented, these tools see changes according to code structure (variable name, expression, statement, block, function, class, etc.) as plausible edits ("move, insert, delete, replace, copy, rename"), producing answers that make sense to programmers.
Computer source code has exactly the "hierarchy" structure you are suggesting; the various constructs nest. Specifically to your topic, code blocks can typically nest inside code blocks. The SmartDifferencer tools use target-language accurate parsers to "deconstruct" the source text into these hierarchical entities. We have a Smart Differencer for XML, in which you can obviously write nested tags.
The answer isn't reported as "Nth child of M has moved" although it is actually computed that way, by operating on the parse trees produced by the parsers. Rather it is reported as "code fragment of type at line x col y to line a col b has moved/..."
The answer, my good sir, is: depth-first search, also known as depth-first traversal. You might also find some use for the Visitor pattern.
You can't swing a dead cat without hitting some sort of implementation for this when dealing with comparing XML trees. Take a gander at diffxml for an example.
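As a rough sketch of the simplest version of that idea, you can represent each hierarchy as child-to-parent links and report nodes whose parent changed; real tree-diff algorithms (like those in the projects above) go much further and also handle renames, copies and subtree moves. The hashes below just encode the example from the question:

use strict;
use warnings;

my %old_parent = ( B => 'A', C => 'B' );   # A has child B, B has child C
my %new_parent = ( B => 'A', C => 'A' );   # A has children B and C

for my $node ( sort keys %new_parent ) {
    my $was = $old_parent{$node};
    my $is  = $new_parent{$node};
    if    ( !defined $was ) { print "$node added under $is\n"; }
    elsif ( $was ne $is )   { print "$node moved from $was to $is\n"; }
}
for my $node ( sort keys %old_parent ) {
    print "$node deleted (was under $old_parent{$node})\n"
        if !exists $new_parent{$node};
}

Running it prints "C moved from B to A", which is exactly the kind of answer the question asks for.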

Pocketsphinx - Adding words and Improving accuracy

I've managed to finally build and run pocketsphinx (pocketsphinx_continuous). The problem I'm running into is how to improve accuracy. From what I understand, you can specify a dictionary file (-dict test.dic). So I took the default dictionary file and added some more pronunciations of the same words, for example:
pencil P EH N S AH L
pencil(2) P EH N S IH L
spaghetti S P AH G EH T IY
spaghetti(2) S P UH G EH T IY
Yet pocketsphinx still does not recognize either word at all. I know there is a jsgf file you can specify as well, but that seems more for phrases and grammar. How can I get pocketsphinx to recognize common words such as pencil and spaghetti?
thanks
-Mike
With something like this, you can't be certain, but I can offer the following suggestions:
Perhaps the language model somehow has low probabilities for "spaghetti" and "pencil". As you suggested, you could use a JSGF to test how recognition does when it isn't using the N-gram model but a simple grammar instead (give it about twenty words, including spaghetti and pencil; a minimal grammar of this sort is sketched after this answer). This way you can see whether it is the language model that makes these words hard to recognize, and whether recognition does okay when it considers all the words to have equal probability.
Perhaps you simply pronounce these words poorly, even with the alternative dictionary entries. Try either A. testing other people's voices, or B. adapting the acoustic model to your voice (see http://cmusphinx.sourceforge.net/wiki/tutorialam)
Also, what is it recognizing them as when it fails? If possible, remove the words it mistakes them for from the dictionary.
Again, for overall accuracy, only three things are going to really help you: restricting the grammar, adapting the acoustic model, and perhaps getting higher-quality recording input.
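To make the JSGF suggestion concrete, a minimal grammar file might look like the following (the word list is just an example; pass the file to pocketsphinx with -jsgf instead of a language model):

#JSGF V1.0;
grammar words;
public <word> = spaghetti | pencil | table | window | open | close | start | stop;

If pocketsphinx recognizes "spaghetti" and "pencil" reliably with a grammar like this, the dictionary entries are fine and the language model is the likely culprit.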
To improve accuracy you may want to try adapting the acoustic model to your voice.
http://cmusphinx.sourceforge.net/wiki/tutorialadapt
To learn how to add new words: http://ghatage.com/tech/2012/12/13/Make-Pocketsphinx-recognize-new-words/
Make sure you put a tab (not a space) after the word and before the start of the pronunciation.
Maybe the problem is with Pocketsphinx. I too was not getting good results with Pocketsphinx, but I was getting very good accuracy with Sphinx4 (for a US speaker with a noise-cancelling microphone). Therefore I did a comparison between the two using the same audio recordings. For pocketsphinx I used pocketsphinx_batch with the WSJ acoustic model and a small-vocabulary language model and dictionary (created online with the CMU-Cambridge language modelling toolkit). For Sphinx4 I wrote a small Java program using the Sphinx4 library. The result was that Sphinx4 was much more accurate. All the gory details are at http://www.jaivox.com/pocketsphinx.html.
To achieve good accuracy with pocketsphinx:
Important! Check that your mic, audio device, and file support 16 kHz, since the general model is trained on 16 kHz acoustic examples.
You should create your own limited dictionary; you cannot use cmusphinx-voxforge-de.dic, or accuracy will drop dramatically.
You should create your own language model.
You can search for the Jasper project on GitLab to see how it's implemented.
Also, please check the documentation
This is on the CMUSphinx website
"There are various phonesets to represent phones, such as IPA or SAMPA. CMUSphinx does not yet require you to use any well-known phoneset, moreover, it prefers to use letter-only phone names without special symbols. This requirement simplifies some processing algorithms, for example, you can create files with phone names as part of the filenames without any violating of the OS filename requirements.
A dictionary should contain all the words you are interested in, otherwise the recognizer will not be able to recognize them. However, it is not sufficient to have the words in the dictionary. The recognizer looks for a word in both the dictionary and the language model. Without the language model, a word will not be recognized, even if it is present in the dictionary."
https://cmusphinx.github.io/wiki/tutorialdict/

How should I restructure a large Perl script?

I have a more or less large Perl script of ~1000 lines. The script accepts a few arguments and then runs straight through. No modules, no functions. The script could be divided into three parts: an initialization part, an argument-parsing part, and a work part, but I don't know how to do that. Everything must be kept in a single file. Please, can anyone give me instructions/advice on how to structure my Perl script?
Thanks.
You ask for advice on how to refactor your script, but you don't appear to understand why to refactor it. Without the why, the how isn't going to do you much good. And with the why, the how may fall out quite naturally.
If your script is working perfectly and needs no modification and all you'll ever do with it is run it, then you probably don't have a reason to refactor it - and I say that from the perspective of despising long routines. But...
If something's wrong with it
If you are trying to find a bug in your 1,000-line program, you have some hard work ahead of you. The problem could be anywhere. Break it up into smaller pieces so that you can verify the input and output at different stages - ideally, write tests for the smaller pieces. Fine-grained unit tests will tell you what isn't working, the nature of the error, and where the error exists.
If you need to modify it
If you need to change the script to - say - accommodate a new graphics format, or take advantage of multiple processors, or record its activities to a log - you will find it easier to extend if the program elements that need revision or extension are better isolated.
If you're trying to explain it to someone else, or show it off
You will find it much easier to convey the ideas in your script to another developer if the ideas are broken out into discrete methods.
So, there are some reasons why you might choose to refactor. If any of them apply, refactor accordingly; the how will drop out naturally. Extract Method may be your best friend.
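As a tiny illustration of what "tests for the smaller pieces" can look like once, say, the argument parsing has been extracted into its own routine (parse_arguments below is a made-up example, not something from your script):

use strict;
use warnings;
use Test::More tests => 2;
use Getopt::Long qw(GetOptionsFromArray);

# Hypothetical extracted routine: turn @ARGV-style arguments into a hash ref.
sub parse_arguments {
    my @args = @_;
    my %opt  = ( verbose => 0 );
    GetOptionsFromArray( \@args, 'input=s' => \$opt{input}, 'verbose!' => \$opt{verbose} );
    return \%opt;
}

is( parse_arguments('--input', 'data.txt')->{input}, 'data.txt', 'input file is picked up' );
ok( !parse_arguments('--input', 'data.txt')->{verbose}, 'verbose defaults to off' );

Once the pieces are this small, a failing test points you straight at the piece that broke.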
If you can see logical parts of your script, you should definitely abstract them into functions. Having a single script of over 1000 lines, and not breaking it up into whatever abstraction units your language provides (functions, classes, etc.), is a very bad idea. Maintaining your script, i.e. adding features and fixing bugs, will be a nightmare.
I strongly suggest you read the book Clean Code by Robert C. Martin. It uses Java for examples, but the ideas are applicable to any language. The one that is most relevant here is "Make your functions small. Then make them smaller."
1000 lines and no functions? Why not? This is a vague question.
You could break each section into a separate function, and then have a function that runs through each of these functions in the correct order, called 'run()' or something similar. This would allow you to break the program up into more manageable chunks.
P.S. Man, I think I used the word 'function' too many times in this answer!
Have you refactored at all? At 1000 lines I'd expect to see some code that could be broken down into functions internal to the script.
Well, if you have three separate sections that's the logical choice.
You could make each one into a function and then have a simple linear control at the top:
my ( $var1, $var2, $var3 );
$var1 = init();
$var2 = parseInput();
$var3 = doWork();
sub init {
    # some code here
}
sub parseInput {
    # some code here
}
sub doWork {
    # some code here
}
The big issue is that you're going to be using globals a lot. I'd build them into a structure or two. I would also expect to see the big three broken down into functions themselves. Back in the 80s, when the big thing I was learning was structured programming (still the best approach here, I think), the rule of thumb was that a function should fit on roughly one screen or less.
Most people would typically answer something like "a subroutine should do one thing" and "a subroutine should only take up one page in your editor". You can try to keep these things in mind when you refactor your code.
Try to identify parts of your code that can be split off into logical sections. You've started this process by spotting 'initialization', 'argument parsing', and 'work'. See if there are sub-sections within those that can be pruned off into their own subroutines.
Also, why do you not use any modules? One that springs to mind is Getopt::Long, which is a core module, so you won't have to install it manually. It will handle all of your argument parsing, and by using it you will probably avoid bugs and could shorten your code to make it more maintainable. By using standard modules like this, you not only (hopefully!) reduce the number of bugs in your code, you make it easier for other Perl programmers to understand.
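A rough sketch of the Getopt::Long piece, with made-up option names that you would replace with whatever your script actually accepts:

use strict;
use warnings;
use Getopt::Long;

my %opt = ( verbose => 0 );
GetOptions(
    'input=s'  => \$opt{input},     # --input FILE   (hypothetical option)
    'output=s' => \$opt{output},    # --output FILE  (hypothetical option)
    'verbose!' => \$opt{verbose},   # --verbose / --no-verbose
) or die "Usage: $0 --input FILE [--output FILE] [--verbose]\n";

die "--input is required\n" unless defined $opt{input};

That handful of lines can usually replace an entire hand-rolled argument-parsing section.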
You could look at search.cpan.org; maybe some Perl module suits your needs. For example, there is CGI::Application.