I want to call the API of uima-text-segmenter https://code.google.com/p/uima-text-segmenter/source/browse/trunk/INSTALL?r=22 to run an example.
But I don't know how to call the API.
The README says:
With the DocumentAnalyzer, run the following descriptor
`desc/textSegmenter/wst-snowball-C99-JTextTilingAAE.xml` by taking the
uima-examples data as input.
Could anyone give me some example code that can be run directly from a main function?
Thanks a lot!
Long answer:
The link describes how you would set up the application from within the Eclipse UIMA environment. This sort of set-up is typically targeted at subject matter specialists with little or no coding experience. It allows them to work (relatively quickly) with UIMA in a declarative way: all data structures and analysis engines (the computing blocks within UIMA) are declared in XML (with a GUI on top of it), after which the framework takes care of the rest. In this scenario, you would typically run a UIMA pipeline using a run configuration from within Eclipse (or the included UIMA pipeline runner application). Luckily, UIMA allows you to do exactly the same from code, but I would recommend using uimaFIT (http://uima.apache.org/d/uimafit-current/tools.uimafit.book.html#d5e137) for this purpose instead of plain UIMA, as it bundles lots of useful things and coding shortcuts.
Short answer:
Using uimaFIT, you can call factory methods that create CollectionReader (reads input), AnalysisEngine (processes input) and Consumer objects (write/do other stuff) from (third-party provided) XML files. Use these methods to construct your pipeline, and the SimplePipeline class to run it. To extract the data you need, you would manipulate the CAS object (containing your data) in a Consumer object, possibly with a callback. You could also do this in an AnalysisEngine object. I recommend using DKPro's FeaturePathFactory (https://code.google.com/p/dkpro-core-asl/source/browse/de.tudarmstadt.ukp.dkpro.core-asl/trunk/de.tudarmstadt.ukp.dkpro.core.api.featurepath-asl/src/main/java/de/tudarmstadt/ukp/dkpro/core/api/featurepath/FeaturePathFactory.java?spec=svn1811&r=1811) to quickly access the feature you are after.
Code examples:
http://uima.apache.org/d/uimafit-current/tools.uimafit.book.html#d5e137 contains examples, but they all go in the opposite direction: class objects are passed to the factory methods and the XML is generated from these classes, instead of the XML files being read. Take a look at the uimaFIT API to find the method you need, for example creating an AnalysisEngineDescription from XML: http://uima.apache.org/d/uimafit-current/api/org/apache/uima/fit/factory/AnalysisEngineFactory.html#createEngineDescriptionFromPath-java.lang.String-java.lang.Object...-
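Putting that together, a minimal main method could look like the sketch below. This is only a sketch: the collection reader descriptor path is a placeholder, so point it at whatever reader you use to feed in the uima-examples data.

import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.collection.CollectionReaderDescription;
import org.apache.uima.fit.factory.AnalysisEngineFactory;
import org.apache.uima.fit.factory.CollectionReaderFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;

public class RunSegmenterPipeline {
    public static void main(String[] args) throws Exception {
        // Placeholder descriptor: any CollectionReader that feeds the
        // uima-examples documents into the pipeline will do here.
        CollectionReaderDescription reader =
            CollectionReaderFactory.createReaderDescriptionFromPath(
                "desc/collection_reader/MyFileSystemReader.xml");

        // The aggregate analysis engine shipped with uima-text-segmenter.
        AnalysisEngineDescription segmenter =
            AnalysisEngineFactory.createEngineDescriptionFromPath(
                "desc/textSegmenter/wst-snowball-C99-JTextTilingAAE.xml");

        SimplePipeline.runPipeline(reader, segmenter);
    }
}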
The vscode-antlr4 plugin for Visual Studio Code has a nice call-graph feature which visualizes (as a dendrogram) how grammar (and lexer) rules interact. You can save the graphic as SVG.
Is there a way to export the information as JSON? I wouldn't mind going into the plugin's code to find a way to do it.
My aim is to create reachability graphs for individual rules, i.e. graphs that show from which other rules a particular rule can be reached (transitively). The "calls" and "is-called" information from the call-graph feature would be a nice starting point.
The data for the call graph comes from a source context instance (for each grammar file there is a single source context that manages all details for it). See the function getReferenceGraph, which collects the relations into a map object. You can use that object to generate JSON from it. Or create another function, using this one as a template, to generate the JSON directly, without the overhead required for the UI.
Let's consider a scenario: we have to run a performance test for a "create an account" API, which takes an auth token as a header/path parameter and input data such as the user account name; i.e., we want to run the performance test for POST http://baseUrl/auth_param/create/input_data. For the above scenario we have two feature files:
1. One feature file (e.g. generateAuth.feature) which obtains the auth token.
2. A second feature file (createAccount.feature) which takes the auth token and the input data as parameters.
Here is my simulation class:
import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class MyClass extends Simulation { // class name is a placeholder

  before {
    println("Simulation is about to start!")
  }

  val generateAuthTest = scenario("generateAuth")
    .exec(karateFeature("classpath:path/generateAuth.feature"))

  val createAccountTest = scenario("test")
    .exec(karateFeature("classpath:path/createAccount.feature"))

  setUp(
    createAccountTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minute)

  after {
    println("Simulation is finished!")
  }
}
Can I read the auth token from generateAuth.feature, which is the input for createAccount.feature, so that I can pass it along as a parameter?
Please suggest how to pass parameters to createAccount.feature when calling it via the karateFeature method.
Let me put a requirement here: let's say we have some feature files for CRUD operations on particular data. Here is how I would write a functional scenario: I would create a new feature file for the scenario and just use the CRUD feature files to test a SINGLE flow.
Now if I go for performance test cases on the individual operations, I feel there are two ways:
1. Create four new performance-test feature files (one for each CRUD method) and call the CRUD feature files from the respective test feature files. Finally, we just call the test feature files from the respective Gatling simulation classes. (In this case I end up creating more test feature files as well as simulation classes for the performance tests, which I want to avoid.)
2. Just call the CRUD feature files from the respective Gatling simulation classes and pass the required parameters to them. (In this case we only need to create four simulation classes and run them per operation: create, read, delete and so on.)
I just want to know whether the second way is achievable in Karate or not, and if yes, please let me know how.
Summary: I think it is achievable using a third (extra) feature file per use case, but I do not want to create an extra feature file for each case, so that I can avoid maintenance work and can reuse the existing feature files from the functional tests for the performance tests.
Just use the normal Karate concepts such as karate-config.js
You can easily switch environments by setting the karate.env system property.
For example:
mvn test -DargLine="-Dkarate.env=e2e"
EDIT: After you edited your question, it is clear you have a SINGLE flow you want to test. Please use a SINGLE feature. I suggest you move the generateAuth call into the Background of the feature. Also refer to the docs on callSingle() for advanced options.
If you are expecting two feature files to magically share data, that is not possible, and it is not needed if you structure your tests correctly.
If you really, really need this, create a Java singleton and access it from each feature (a sketch follows below). I totally don't recommend this, though.
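For completeness, here is a minimal sketch of that (not recommended) singleton approach. The class and method names are hypothetical; each feature would reach it through Karate's Java interop (Java.type).

public class AuthTokenHolder {
    private static String token;

    // generateAuth.feature would store the token here ...
    public static synchronized void setToken(String t) {
        token = t;
    }

    // ... and createAccount.feature would read it back.
    public static synchronized String getToken() {
        return token;
    }
}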
EDIT: From Karate 0.9.0 onwards, you can call a single Scenario within a feature if it has a tag:
classpath:animals/cats/create.feature#sometagname
In the paper [A Software Product Line for Static Analyses (2014)], there is an illustration related to constructing a call graph (Listing 7).
In this example, line 14 is what constructs the call graph. But when I check the source code and the API, all I can find is DefaultCHACallGraphDomain.scala, which contains no implementation for constructing a call graph.
Since my purpose is to use OPAL to construct a call graph: is there any demo or documentation that would help me understand the existing CallGraphDomain in OPAL? Currently, I can only find some class declarations.
I'd really appreciate it if anyone could give me some suggestions on this topic.
Thanks in advance.
Jiang
The interface that was shown in the paper doesn't exist anymore, so you can totally forget about it.
The default interface to get a CallGraph class is provided by the Project object that you retrieve when you load the bytecode of a Java project.
A general code example:
val project = ... // a Java project, loaded from its bytecode
val computedCallGraph = project.get(CHACallGraphKey) // or VTACallGraphKey, see below
val callGraph = computedCallGraph.callGraph // the final call graph interface
The computed call graph contains several things: the entry points, unresolved method calls, exceptions thrown when something went wrong at construction time, and the actual call graph.
OPAL provides several call graph algorithms; you can retrieve each by passing the corresponding call graph key to the Project's get method.
Currently, the following two keys are available and can be passed to Project.get (more information is available in the documentation of these classes):
CHACallGraphKey
VTACallGraphKey
Analysis mode - Library vs Application
Which analysis mode to choose for constructing a valid call graph depends on the kind of project. While applications provide complete information (except for incomplete projects, class loading and so on), software libraries are intended to be used by other projects. These two different scenarios have to be kept in mind when constructing call graphs. More details can be found here: org.opalj.AnalysisModes
OPAL offers the following analysis modes:
DesktopApplication (safe for application call graphs)
LibraryWithClosedPackagesAssumption (safe for call graphs that are used for security-insensitive analyses)
LibraryWithOpenPackagesAssumption (very conservative/safe for security analyses)
The analysis mode can either be configured in OPAL's config file or set as a project setting at runtime. You can find the config file in the Common project under /src/main/resources/reference.conf.
All of those analysis modes are supported by CHACallGraphKey, while VTACallGraphKey only supports applications so far.
NOTE: The interface may change in upcoming versions again.
I need to implement a custom ResultHandler but I am confused about how to actually integrate my custom class into the software package.
I have read this: http://elki.dbs.ifi.lmu.de/wiki/HowTo/InvokingELKIFromJava but my question is how are you meant to implement a custom result handler such that it shows up in the GUI?
The only way I can think of doing it is by extracting the elki.jar package and manually inserting my custom class into the source code, and then re-jarring the package. However I am fairly sure this is not the way it is meant to be done.
Also, in my ResultHandler I need to output all the rows to a single text file, with the cluster that each row belongs to displayed. Any tips on how I can achieve this?
There are two questions in here.
In order to make your class instantiable by the UIs (both MiniGUI and command line), the class must implement our Parameterization API. There are essentially two choices to make your class instantiable:
Add a public constructor without parameters (the UI won't know how to set your parameters!)
Add an inner static class Parameterizer that handles the parameterization (see the sketch below)
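As a rough sketch of the second option (hedged: package names and the exact processNewResult signature differ between ELKI versions; this assumes a 0.7-era API, and the class name is hypothetical):

import de.lmu.ifi.dbs.elki.result.Result;
import de.lmu.ifi.dbs.elki.result.ResultHandler;
import de.lmu.ifi.dbs.elki.result.ResultHierarchy;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.AbstractParameterizer;

public class MyResultHandler implements ResultHandler {
    @Override
    public void processNewResult(ResultHierarchy hier, Result newResult) {
        // Walk the result tree here, pick out the clustering,
        // and write each row with its cluster label to your text file.
    }

    // The inner static Parameterizer is what the UIs look for
    // when instantiating the class.
    public static class Parameterizer extends AbstractParameterizer {
        @Override
        protected MyResultHandler makeInstance() {
            return new MyResultHandler();
        }
    }
}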
In order to add your class to the autocompletion (dropdown menu), the class must be discovered by the MiniGUI/CLI/other UIs. ELKI uses two methods of discovery:
for .jar files, it reads the META-INF/elki/interfacename service files (an example follows after this list). This is a classic service-loader approach, except that we also allow ordering the instances.
for directories only, ELKI will also scan for all .class files, and inspect them. This is mostly meant for development time, to avoid having to update the service files all the time. For performance reasons, we do not inspect the contents of .jar files; these are expected to use service files.
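For example, to register the handler sketched above via the jar route, you would add a one-line service file named after the interface (the package name is an assumption; check the ResultHandler interface's fully qualified name in your ELKI version):

# file: META-INF/elki/de.lmu.ifi.dbs.elki.result.ResultHandler
com.example.MyResultHandler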
You do not need your class to be in the dropdown menu: you can always type the full class name. If that does not work, adding the name to the service file will not help either; it means ELKI either cannot find the class at all, or cannot instantiate it.
There is also a tutorial on implementing a custom result handler, but it does not discuss how to add it to the menu. In "development mode" - when having a folder with .class files - it will show up automatically.
I'm currently making good use of GWT's ClientBundles in my app. It works fine, but I have a large number of resources and it becomes tedious to manually create Java interfaces for each file:
@ClientBundle.Source("world_war_ii.txt")
public ExternalTextResource worldWarII();

@ClientBundle.Source("spain.txt")
public ExternalTextResource spain();

@ClientBundle.Source("france.txt")
public ExternalTextResource france();
I'd like to be able to (perhaps at compile time) dynamically list every *.txt file in a given directory, and then have run-time access to them, perhaps as an array ExternalTextResource[], rather than having to explicitly list them in my code. There may be hundreds of such resources, and enumerating them manually as code would be very painful and unmaintainable.
The ClientBundle documentation explicitly says that "to provide a file-system abstraction" is a non-goal, so unfortunately this seems to disallow what I'm trying to do.
What's the best way to deal with a large number of external resources that must be available at run-time? Would a generator help?
There's an automatic generator for CssResource - maybe you could look at its code and modify it to your needs?
I ended up following this advice: perform the file operations on the server, and then return a list of the file (meta)data via an RPC call.
This turns out to be fairly simple, and also allows me to return lightweight references (filenames) in the list, which I use to populate a Tree client-side; when the user clicks on a TreeItem the actual text contents are downloaded.
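A minimal sketch of that setup, assuming a hypothetical ResourceService and a /resources folder on the server (the matching Async interface and web.xml servlet mapping are omitted):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// Client-side service interface (lives in the client package).
@RemoteServiceRelativePath("resources")
interface ResourceService extends RemoteService {
    List<String> listTextFiles();
}

// Server-side implementation (lives in the server package).
public class ResourceServiceImpl extends RemoteServiceServlet
        implements ResourceService {
    @Override
    public List<String> listTextFiles() {
        File dir = new File(getServletContext().getRealPath("/resources"));
        List<String> names = new ArrayList<String>();
        for (File f : dir.listFiles()) {
            if (f.getName().endsWith(".txt")) {
                names.add(f.getName()); // lightweight reference for the Tree
            }
        }
        return names;
    }
}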