How does Server.call work in elixir-mongo?

I'm learning Elixir and attempting to use the elixir-mongo library. During the auth/1 command, the function uses Server.call, piping in the MongoDB request string. Looking at the Mongo.Server module, it does not appear to be an actual GenServer, nor does it define a function matching call/1. How is this working?

With high probability it doesn't work: the Mongo.Server module doesn't export a call function, and there are no macros that generate one magically. My guess is that the master branch is currently broken. If you are using the library and want to dig into the sources, make sure you are looking at the same tag as the version you are using in your project.
Also, there are no classes or methods in Elixir. There are modules and functions :)
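You can check this yourself from iex; a minimal sketch, assuming the library is compiled and in the load path:

# function_exported?/3 lives in Kernel, so it works in any iex session
function_exported?(Mongo.Server, :call, 1)
# lists every public function the module exports, with arities
Mongo.Server.__info__(:functions)

If call/1 shows up in neither, the call site in auth/1 is referencing a function that doesn't exist on the branch you are reading.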

Related

Using `qlot` from a Lisp REPL

I'm interested in using the qlot library from inside of a Lisp image to manage multiple local instances of quicklisp.
There doesn't seem to be any documentation on how to use it, except through a non-Lisp CLI interface, and the obvious
(qlot:with-local-quicklisp (#P"/a/path/here/") (qlot:install :skippy))
or
(qlot:with-local-quicklisp (#P"/a/path/here/") (qlot:quickload :skippy))
give me
Component "skippy" not found
[Condition of type ASDF/FIND-SYSTEM:MISSING-COMPONENT]
What I'm looking for is a way to install a particular library by name. Basically, exactly how one would use ql:quickload, but targeting a specific, local directory instead of ~/quicklisp. What am I doing wrong?
It looks like the intent is to modify dynamically scoped variables in a way that makes using ql:quickload directly possible.
So
(qlot:with-local-quicklisp (#P"/a/path/to/some/quicklisp/")
  (qlot/util:with-package-functions :ql (quickload)
    (quickload :skippy)))
will result in skippy being installed in the quicklisp instance at #P"/a/path/to/some/quicklisp/" instead of the default location.
This leaves me a bit perplexed as to what qlot:quickload is for; its describe output doesn't shed additional light.

Removing annotations from a Modelica model

I'm developing a Modelica library and need to produce a document with source code listings. I'd like to be able to include the source of the Modelica models without annotations.
I could manually edit them out, but I'm looking for a more automated strategy. I'm guessing the most convenient and straightforward approach is to use some tool to save .mo files with no annotations and include those in my document (I'm using \lstinputlisting in LaTeX).
Is it possible to do this? I have access to Dymola, OpenModelica and JModelica. Dymola is obviously capable of producing such a listing, as it's able to include it in the automatically generated documentation (File > Export > HTML...). I've been looking into scripting with Dymola and OpenModelica, but haven't found a way to do this either.
JModelica seems like it could be a good option, but I don't have experience working with Python. If this is possible and someone can give me some pointers, I'm willing to look into it myself. I found a mention of a prettyprint function that might do the job, but I'm not sure where to start, and I can't even find a reference to that function in the latest documentation.
It would also be more convenient for me to find a way of doing it with Dymola/OpenModelica (whether through the UI or by using a script). Have I missed something?
I think you could use saveTotalModel("total.mo", MyModelName) in OpenModelica. This strips most annotations (though not the ones used for code generation, if I remember correctly) and pretty-prints the source code, including all dependencies. Then you just copy-paste the models/packages that you want to include in the listing. Or, if you prefer, you can do something like the following to include code for only a particular model:
loadModel(Modelica);
loadFile("MyModel.mo");
saveTotalModel("total.mo", MyModel.A.B); // strips most annotations
clear();
loadFile("total.mo"); // reload the stripped source
str := list(MyModel.A.B); // pretty-print just the model of interest
writeFile("MyModel.A.B.listing", str);

How to run the example of uima-text-segmenter?

I want to call the API of uima-text-segmenter https://code.google.com/p/uima-text-segmenter/source/browse/trunk/INSTALL?r=22 to run an example.
But I don't know how to call the API...
The README says:
With the DocumentAnalyzer, run the following descriptor
`desc/textSegmenter/wst-snowball-C99-JTextTilingAAE.xml` by taking the
uima-examples data as input.
Could anyone give me some example code that could be run directly from a main function?
Thanks a lot!
Long answer:
The link describes how you would set up the application from within the Eclipse UIMA environment. This sort of setup is typically targeted at subject matter specialists with little or no coding experience. It allows them to work (relatively fast) with UIMA in a declarative way: all data structures and analysis engines (the computing blocks within UIMA) are declared in XML (with a GUI on top of it), after which the framework takes care of the rest. In this scenario, you would typically run a UIMA pipeline using a run configuration from within Eclipse (or the included UIMA pipeline runner application). Luckily, UIMA allows you to do exactly the same from code, but I would recommend using uimaFIT (http://uima.apache.org/d/uimafit-current/tools.uimafit.book.html#d5e137) for this purpose instead of plain UIMA, as it bundles lots of useful things and coding shortcuts.
Short answer:
Using uimaFIT, you can call factory methods that create CollectionReader (read input), AnalysisEngine (process input), and Consumer objects (write/do other stuff) from (third-party provided) XML files. Use these methods to construct your pipeline, and the SimplePipeline class to run it. To extract the data you need, you would manipulate the CAS object (containing your data) in a Consumer object, possibly with a callback; you could also do this in an Analysis Engine object. I recommend using DKPro's FeaturePathFactory (https://code.google.com/p/dkpro-core-asl/source/browse/de.tudarmstadt.ukp.dkpro.core-asl/trunk/de.tudarmstadt.ukp.dkpro.core.api.featurepath-asl/src/main/java/de/tudarmstadt/ukp/dkpro/core/api/featurepath/FeaturePathFactory.java?spec=svn1811&r=1811) to quickly access the feature you are after.
Code examples:
http://uima.apache.org/d/uimafit-current/tools.uimafit.book.html#d5e137 contains examples, but they all go in the opposite direction (class objects are used in the factory methods instead of XML files; the XML is generated from those classes). Take a look at the uimaFIT API to find the method you need, for example creating an AnalysisEngineDescription from XML: http://uima.apache.org/d/uimafit-current/api/org/apache/uima/fit/factory/AnalysisEngineFactory.html#createEngineDescriptionFromPath-java.lang.String-java.lang.Object...-
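Putting that together, here is a minimal sketch of a runnable main using uimaFIT. The reader descriptor path is an assumption standing in for whatever reader you use to feed in the uima-examples data; only the segmenter descriptor path comes from the README:

import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.collection.CollectionReaderDescription;
import org.apache.uima.fit.factory.AnalysisEngineFactory;
import org.apache.uima.fit.factory.CollectionReaderFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;

public class RunSegmenter {
    public static void main(String[] args) throws Exception {
        // Hypothetical reader descriptor -- replace with the reader you use.
        CollectionReaderDescription reader =
                CollectionReaderFactory.createReaderDescriptionFromPath(
                        "desc/MyCollectionReader.xml");
        // The aggregate descriptor named in the uima-text-segmenter README.
        AnalysisEngineDescription segmenter =
                AnalysisEngineFactory.createEngineDescriptionFromPath(
                        "desc/textSegmenter/wst-snowball-C99-JTextTilingAAE.xml");
        // Feeds every document from the reader through the segmenter.
        SimplePipeline.runPipeline(reader, segmenter);
    }
}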

How can I perform dynamic reconfiguration in Scala (like Dyre or XMonad)?

A fairly common method of configuration for Haskell applications is shipping the program as a library, with a main function that accepts a bunch of optional configuration parameters. Upon being run, the executable looks for a dotfile containing a main function that calls this default function, which it then compiles and runs instead. This sort of configuration scheme allows the user to add arbitrarily complex functionality without recompiling the entire program. Examples of this are the Dyre library and the XMonad window manager. How can this be done cleanly in Scala? It appears that SBT does something similar internally.
Using SBT externally would require having the sources of the whole program somewhere, and lacks the cleanliness of just having a single dotfile. Typesafe Config, Configrity, Bee Config, and fig all seem to be meant only for normal string-based configuration.
https://github.com/typesafehub/config is a great config library. It supports files in three formats: Java properties, JSON, and a human-friendly JSON superset.
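That doesn't give you Dyre-style recompilation of user code, but the file-based part is a few lines. A minimal sketch, assuming an application.conf on the classpath containing an app.name key (both names are placeholders):

import com.typesafe.config.ConfigFactory

object AppConfig {
  // load() reads application.conf (merged with reference.conf) from the classpath
  private val config = ConfigFactory.load()

  // getString throws ConfigException.Missing if the key is absent
  val name: String = config.getString("app.name")
}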

How do I find all the modules used by a Perl script?

I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been updated as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear: you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the specific command-line version with ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Printing %INC at every single possible exit point in the code, as slu's answer said (see the sketch after this list). This should probably be done in an END {} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not be fully 100% covering in the generic case, if somewhere within the program your __DIE__ handler gets overwritten.
Devel::Modlist (also mentioned in slu's answer). The downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it's purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command-line arguments (at first glance it seems to only allow you to execute the program with no arguments), and if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
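Here is a minimal sketch of the %INC dump from the second point (needs Perl 5.10+ for the // operator; the output format is just an example):

# Dump every loaded module, its version (if any), and its file path.
END {
    for my $file (sort keys %INC) {
        (my $mod = $file) =~ s{/}{::}g;   # Foo/Bar.pm -> Foo::Bar.pm
        $mod =~ s{\.pm\z}{};              # Foo::Bar.pm -> Foo::Bar
        my $ver = eval { $mod->VERSION }; # undef if no $VERSION is set
        printf STDERR "%s %s (%s)\n", $mod, $ver // 'unknown', $INC{$file};
    }
}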
Module::ScanDeps - recursively scans Perl code for dependencies.
It does both static and runtime scanning, but it only reports modules; I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
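If I remember its bundled front end correctly, the scandeps.pl script that ships with Module::ScanDeps covers both modes (the script name is a placeholder):

scandeps.pl your_script.pl      # static scan of the source
scandeps.pl -c your_script.pl   # compile the code, then inspect %INC
scandeps.pl -x your_script.pl   # execute the code, then inspect %INC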
You could look at %INC; see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
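Devel::TraceUse hooks in through the debugger flag, so a typical invocation (the script name is a placeholder) is just:

perl -d:TraceUse your_script.pl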