Do you know of a package to find synonyms of English words for the Dart language?
For example, something similar to NLTK for Python would be perfect.
I hope someone can help me.
Thank you :)
After doing some research the following packages popped up:
Lemmatizer
Lemmatizer for text in English. Inspired by Python's nltk.corpus.reader.wordnet.morphy
Sadly it doesn't support null safety.
Stemmer
This package implements a stemming algorithm in Dart. Currently, it supports PorterStemmer and SnowballStemmer. It is a port of the exceptional Python NLTK library.
oxford_dictionary - you would need an API key, so based on that I think it is a paid service.
The Oxford Dictionaries API offers an easy way to access powerful lexical data (words, definitions, translations, audio pronunciations, synonyms, antonyms, parts of speech, and more) to use in your apps and websites.
And if you are not in a hurry: the Chaquopy Flutter plugin is planning to support the NLTK library in the future. As the description says, it is only available on Android.
This is a chaquopy plugin to run Python code on Android. This is the simplest version, where you can write your code and run it.
I don't know if these packages will do the job, but they could be a starting point.
Edit:
As @Dabbel mentioned in his comment:
Lemmatizerx
Lemmatizer for text in English. Inspired by Python's nltk.corpus.reader.wordnet.morphy.
I'm reading and writing some text files in Scala. As a complete beginner in the language, I wanted to make sure I find the right way to do it, e.g. get the encoding right.
Most of the stuff I found (also on SO) recommends I use io.Source.fromFile. However, after trying it out like so, reading a UTF-8 file:
val user_list = Source.fromFile("usernames.txt").getLines.toList
val user_list = Source.fromFile("usernames.txt", enc="UTF8").getLines.toList
I looked at the docs but was left with some questions.
Get the encoding right:
The docs show that I can set an encoding in Source.fromFile, as I tried above. Looking at the docs for Codec and the types listed there, I was wondering if those are all my codec options - is there e.g. no UTF-16, big-endian vs little-endian, etc.?
I am slightly obsessed with this since it used to trip me up in Python a lot. Is this less of a concern with Scala for some reason?
Get the reading in right:
All the examples I looked at used the getLines method and post-processed the result with mkString or toList, etc. Is there any advantage to that over just reading in the entire file (my files are small) in one go?
Get the writing out right:
Every source I could find tells me that Scala has no file-writing function and to use the Java FileWriter instead. I was surprised by this - is this still accurate?
Looking at it, I feel the question might be a little broad for SO, so I'd be happy to take it back if it does not meet the requirements. At this point, I'm not struggling with specific examples but rather trying to set things up in a way that won't get me in trouble later.
Thanks!
Scala only has a basic IO API in the standard library. For the most part you just use the Java APIs. The fact that a decent API exists in Java is probably why the Scala team is not prioritizing a robust and fully featured IO API of its own.
There are also third-party Scala libraries you could use. I've never used Better Files, but I've heard good things about it as a Scala file API. There's also fs2, which provides functional, streaming IO. I'm sure there are others out there as well.
For encoding, there are many possible encodings available. It's just that only a couple of the most common ones are exposed as static fields; the rest you typically access through Codec("Encoding Name"). Most APIs will also let you pass a String directly instead of needing to get a Codec instance first. The Codec is really just a wrapper over java.nio.charset.Charset. You can run java.nio.charset.Charset.availableCharsets() to see all of the encodings available on your system.
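As a small sketch (using the usernames.txt file from your question; UTF-16LE is just an arbitrary example encoding), listing the charsets your JVM knows about and reading with an explicit Codec could look like this:

import java.nio.charset.Charset
import scala.collection.JavaConverters._
import scala.io.{Codec, Source}

// Every charset the JVM knows about, including UTF-16, UTF-16BE, UTF-16LE, ...
Charset.availableCharsets().keySet().asScala.foreach(println)

// Codec is just a thin wrapper over java.nio.charset.Charset
implicit val codec: Codec = Codec("UTF-16LE")
val user_list = Source.fromFile("usernames.txt").getLines().toList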
As far as reading goes, if the files are small you can load them fully into memory if you prefer. The only reason not to is to avoid the extra memory use of loading the entire file at once, if reading through it line by line is enough. You may want to use Vector instead of List for efficiency reasons (Vector is better in many cases and should probably be preferred as the default collection, but tradition and old habits die hard and most people/guides seem to default to List - that's a whole other topic).
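Putting the pieces together, here is a sketch of the "small file" case (usernames.txt is from your question, out.txt is made up); since the standard library has no writer, the write side falls back to the plain Java NIO API:

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import scala.io.{Codec, Source}

implicit val codec: Codec = Codec.UTF8

// Read the whole (small) file in one go and keep the lines in a Vector
val source = Source.fromFile("usernames.txt")
val users = try source.getLines().toVector finally source.close()

// Writing: use java.nio from the Java standard library
Files.write(Paths.get("out.txt"),
  users.mkString("\n").getBytes(StandardCharsets.UTF_8))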
I'm looking for a tool to convert an SBML model into a MATLAB function. I've tried the SBMLTranslate() function from libSBML, but it returns a MATLAB struct, not a function. Does anybody know if such a tool exists? Thanks.
There are at least three efforts in this direction:
Frank Bergmann offers an online service for SBML translation where you can upload an SBML file and it will generate a MATLAB file. The comments at the top of the generated MATLAB file explain how to use the results. The C++ source code is available on SourceForge.
Bergmann's code referenced above was used by Stanley Gu to create sbml2matlab, a Windows standalone program. Off-hand, I don't know whether Gu's version changed or enhanced the algorithm used by the Bergmann version, but it seems likely. (Note: Gu now works at Google and does not maintain this code anymore, as far as I know.)
The Systems Biology Format Converter (SBFC) is a framework written principally by Nicolas Rodriguez; it includes a collection of converters, one of which is an SBML-to-MATLAB converter. This converter is written in Java.
I have not compared the results of the translators myself yet, so cannot speak to the differences or quality of output. If you try them and have any feedback to relate, please let the authors know. Knowing what has or hasn't worked for real users will help improve things in the future.
A final caveat is that all of these have been research projects, so make sure to set your expectations accordingly. (This is not a criticism of the authors; the authors are very good – I know most of them personally – but the reality of academic development work is that we all lack the time and resources to make these systems comprehensive, hardened, polished, and documented to the degree that we wish we could.)
I want to apply a preprocessing phase to a large amount of text data in Spark-Scala, such as lemmatization, stop-word removal (using TF-IDF) and POS tagging. Is there any way to implement these in Spark-Scala?
For example, here is one sample of my data:
The perfect fit for my iPod photo. Great sound for a great price. I use it everywhere. it is very usefulness for me.
After preprocessing:
perfect fit iPod photo great sound great price use everywhere very useful
and they have POS tags, e.g. (iPod,NN) (photo,NN).
There is a POS tagger (sista.arizona) - is it applicable in Spark?
Anything is possible. The question is what YOUR preferred way of doing this would be.
For example, do you have a stop word dictionary that works for you (it could simply be a Set), or would you want to run TF-IDF to pick the stop words automatically (note that this would require some supervision, such as picking the threshold at which a word is considered a stop word)? You can provide the dictionary, and Spark's MLlib already comes with TF-IDF.
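For instance, a minimal sketch of the RDD-based TF-IDF (sc is your SparkContext; reviews.txt and the naive whitespace tokenization are just placeholders for your own input and tokenizer):

import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// One document per line, tokenized very naively for the sake of the example
val docs: RDD[Seq[String]] =
  sc.textFile("reviews.txt").map(_.toLowerCase.split("\\s+").toSeq)

val tf: RDD[Vector] = new HashingTF().transform(docs)
tf.cache()
val tfidf: RDD[Vector] = new IDF().fit(tf).transform(tf)
// Terms that score low across the corpus are candidates for your stop word list.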
The POS tags step is tricky. Most NLP libraries on the JVM (e.g. Stanford CoreNLP) don't implement java.io.Serializable, but you can perform the map step using them, e.g.
myRdd.map(functionToEmitPOSTags)
On the other hand, don't emit an RDD that contains non-serializable classes from that NLP library, since steps such as collect(), saveAsNewAPIHadoopFile, etc. will fail. Also to reduce headaches with serialization, use Kryo instead of the default Java serialization. There are numerous posts about this issue if you google around, but see here and here.
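Switching to Kryo is a one-line change on the SparkConf, plus optionally registering the classes you actually shuffle around (the app name and registered class here are only illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("text-preprocessing")
  // Kryo instead of the default Java serialization
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Optional, but avoids writing full class names into the serialized data
  .registerKryoClasses(Array(classOf[Array[String]]))

val sc = new SparkContext(conf)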
Once you figure out the serialization issues, you need to figure out which NLP library to use to generate the POS tags. There are plenty of those, e.g. Stanford CoreNLP, LingPipe and Mallet for Java, Epic for Scala, etc. Note that you can of course use the Java NLP libraries with Scala, including with wrappers such as the University of Arizona's Sista wrapper around Stanford CoreNLP, etc.
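To make this concrete, here is a rough sketch with Stanford CoreNLP (the annotator list and the (word, tag, lemma) output shape are just one possible choice): the non-serializable pipeline is created inside mapPartitions so it never has to leave the worker, and only plain Scala tuples are emitted.

import java.util.Properties
import scala.collection.JavaConverters._
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}

val tagged = myRdd.mapPartitions { docs =>
  // Build the pipeline per partition so it is never serialized
  val props = new Properties()
  props.setProperty("annotators", "tokenize, ssplit, pos, lemma")
  val pipeline = new StanfordCoreNLP(props)

  docs.map { text =>
    val annotation = new Annotation(text)
    pipeline.annotate(annotation)
    annotation.get(classOf[CoreAnnotations.TokensAnnotation]).asScala
      .map(token => (token.word(), token.tag(), token.lemma()))
      .toList // plain Scala data, safe to collect() or save
  }
}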
Also, why didn't your example lower-case the processed text? That's pretty much the first thing I would do. If you have special cases such as iPod, you could apply the lower-casing except in those cases. In general, though, I would lower-case everything. If you're removing punctuation, you should probably first split the text into sentences (split on the period using regex, etc.). If you're removing punctuation in general, that can of course be done using regex.
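For that part, plain string operations and regexes go a long way; a sketch using part of the sample review from the question (the iPod whitelist is just the special case you mentioned):

val text = "The perfect fit for my iPod photo. Great sound for a great price. I use it everywhere."

// Naive sentence split on sentence-ending punctuation
val sentences = text.split("(?<=[.!?])\\s+").toList

// Lower-case everything except a whitelist of special cases, then strip punctuation
val keepAsIs = Set("iPod")
val cleaned = sentences.map { s =>
  s.split("\\s+")
    .map(w => if (keepAsIs(w)) w else w.toLowerCase)
    .mkString(" ")
    .replaceAll("""\p{Punct}""", "")
}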
How deeply do you want to stem? For example, the Porter stemmer (there are implementations in every NLP library) stems so deeply that "universe" and "university" become the same resulting stem. Do you really want that? There are less aggressive stemmers out there, depending on your use case. Also, why use stemming if you can use lemmatization, i.e. splitting the word into the grammatical prefix, root and suffix (e.g. walked = walk (root) + ed (suffix)). The roots would then give you better results than stems in most cases. Most NLP libraries that I mentioned above do that.
Also, what's your distinction between a stop word and a non-useful word? For example, you removed the pronoun in the subject form "I" and the possessive form "my," but not the object form "me." I recommend picking up an NLP textbook like "Speech and Language Processing" by Jurafsky and Martin (for the ambitious), or just reading one of the engineering-centered books about NLP tools such as LingPipe for Java, NLTK for Python, etc., to get a good overview of the terminology, the steps in an NLP pipeline, etc.
There is no built-in NLP capability in Apache Spark. You would have to implement it for yourself, perhaps based on a non-distributed NLP library, as described in marekinfo's excellent answer.
I would suggest you take a look at Spark's ML pipeline. You may not get everything out of the box yet, but you can build your own capabilities and use the pipeline as a framework.
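A minimal sketch of that idea (spark is a SparkSession; the column names and sample row are placeholders) using the built-in tokenizer and stop word remover stages:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{RegexTokenizer, StopWordsRemover}

val df = spark.createDataFrame(Seq(
  (1L, "The perfect fit for my iPod photo. Great sound for a great price.")
)).toDF("id", "review")

val tokenizer = new RegexTokenizer()
  .setInputCol("review")
  .setOutputCol("tokens")
  .setPattern("\\W+") // split on non-word characters

val remover = new StopWordsRemover()
  .setInputCol("tokens")
  .setOutputCol("filtered")

val pipeline = new Pipeline().setStages(Array(tokenizer, remover))
pipeline.fit(df).transform(df).select("filtered").show(false)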
I am using Eclipse (version: Kepler Service Release 1) with the Prolog Development Tool (PDT) plug-in for Prolog development in Eclipse. I used these installation instructions: http://sewiki.iai.uni-bonn.de/research/pdt/docs/v0.x/download.
I am working with Multi-Agent IndiGolog (MIndiGolog) 0 (the preliminary Prolog version of MIndiGolog), downloaded from here: http://www.rfk.id.au/ramblings/research/thesis/. I want to use MIndiGolog because it represents time and duration of actions very nicely (I want to do temporal planning), and it supports planning for multiple agents (including concurrency).
MIndiGolog is a high-level programming language based on situation calculus. Everything in the language is exactly according to situation calculus. This however does not fit with the project I'm working on.
This other high-level programming language, Incremental Deterministic (Con)Golog (IndiGolog) (download from here: http://sourceforge.net/p/indigolog/code/ci/master/tree/) (also made with Prolog), is also (loosely) based on situation calculus, but uses fluents in a very different way. It makes use of causes_val predicates to denote which action changes which fluent in what way, and it does not include the situation in the fluent!
However, this is what the rest of the team actually wants. I need to rewrite MIndiGolog so that it is still an offline planner, with the nice representation of time and duration of actions, but with the causes_val predicate of IndiGolog to change the values of the fluents.
I find this extremely hard to do, as my knowledge in Prolog and of situation calculus only covers the basics, but they see me as the expert. I feel like I'm in over my head and could use all the help and/or advice I can get.
I already removed the situations from my fluents, made a planning domain with causes_val predicates, and tried to add IndiGolog code into MIndiGolog. But with no luck. Running the planner just returns "false." And I can make little sense of the trace, even when I use the GUI-tracer version of the SWI-Prolog debugger or when I try to place spy points as strategically as possible.
Thanks in advance,
Best, PJ
If you are still interested (sounds like you might not be): this isn't actually very hard.
If you look at Reiter's book, you will find that causes_vals are just effect axioms, while the fluents that mention the situation are usually successor state axioms. There is a deterministic way to convert from the former to the latter, and the correct interpretation of the causes_vals is done in the implementation of regression. This is always the same, and you can just copy that part of the Prolog code from IndiGolog to your flavor.
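As a sketch of that construction (notation follows Reiter's book; gamma_f stands for the disjunction of conditions that your causes_val clauses state for fluent f getting value y after action a):

% Successor state axiom for a functional fluent f, assembled from its effect axioms
f(\vec{x}, do(a, s)) = y \;\equiv\;
  \gamma_f(\vec{x}, y, a, s) \;\lor\;
  \big( f(\vec{x}, s) = y \;\land\; \lnot \exists y'\, \gamma_f(\vec{x}, y', a, s) \big)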
Does anyone know of any examples of code written in Prolog to implement a DSL to generate Perl code?
DCGs might be an excellent choice!
I have used a similar approach for generation of UML class diagrams (really, graphviz code for such diagrams) from simple English sentences (shameless-plug: paper here). It should be possible to do something similar with generation of Perl code instead.
In the paper above, we use a constraint store (CHR) as intermediate representation which allows some extra reasoning power. Alternatively you can build a representation as an output feature/argument of the DCG.
Note that DCGs can be useful both for the parsing of your sentences and the generation of your Perl code.
Well, it's not exactly what you are asking for, but maybe you can use AI::Prolog for this. That way you may be able to use Perl and generate the Perl code you want.
I'm not sure why you would want to do that?
Perl is a very expressive language, and I'm not sure why you'd want to try to generate Perl code from Prolog; in order to make it useful, you'd be getting closer and closer to Perl in your "DSL", by which point you'd be better off just writing some Perl, surely?
I think you need to expand this question a bit to cover what you're trying to achieve in a little more detail.
The SWI-Prolog library(http/html_write) library builds a DSL for page layout on top of DCGs.
It shows a well-thought-out model for integrating Prolog and HTML, but it doesn't attempt to cover the entire problem. The 'residual logic' on the client side remains underspecified, but this is reasonable, since the library is oriented toward the practical issue of 'reporting' from RDF.
Thus the 'small detail' client interaction logic is handled in a 'black box' fashion and delegated to YUI components in the published application (the award-winning ClioPatria).
The library is extensible, but since it is very detailed, I guess for your task you should eventually just reuse the ideas behind it.