MALLET inferencer for hLDA

I'm trying to use hLDA to create a topic model and then make inferences based on that model. As far as I can tell, the topic inferencer tool only works on LDA models; am I right? Is there a way to infer topics from an hLDA model?

I found an hLDA inferencer here from chyikwei on GitHub that I believe will get you there.
I've tested it and have successfully (I think) used it to score new documents.
Following the MALLET build instructions here, I rebuilt MALLET with HierarchicalLDAInferencer.java placed in src/cc/mallet/topics, as described in chyikwei's README.
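For anyone repeating this, the rebuild boiled down to roughly the following (a sketch of my setup: paths assume a MALLET source checkout, and ant is the build tool the MALLET instructions use):
cp HierarchicalLDAInferencer.java mallet/src/cc/mallet/topics/
cd mallet
ant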

Have you tried this?
bin/mallet run cc.mallet.topics.tui.HierarchicalLDATUI --input [FILENAME] --output-state [FILENAME]
worked for me.

Related

MS Azure, Zeppelin load Scala file

I am following the O'Reilly book "Advanced Analytics with Spark". The authors seem to expect you to use the command shell (PuTTY) to follow the examples, but I don't want to use that; I'd prefer Zeppelin, so I can create notebooks and put my own comments into the code.
So, using an Azure subscription, I spin up a Spark cluster and go into Zeppelin. I am able to follow the guide fine for the most part, but there is one bit that trips me up, and it's probably pretty basic.
You are asked to create a Scala file called "StatsWithMissing.scala" with code in it. I do that, and I upload it to blob storage at //user/Zeppelin
(this is where I expect the Zeppelin user directory to be).
Then it asks you to run the following:
":load StatsWithMissing.scala"
At this point it gives the error:
:1: error: illegal start of definition
My first question is: where exactly is this Scala file supposed to be on blob storage for Zeppelin to see it? How do I determine that? Is where I am putting it correct?
And second, what does this error mean? Does it not like the :load statement?
I believe the interpreter set at the top of the page is Livy, and that covers Scala.
Any help would be great.
Regards
Conor

dReal SMT solver counterexamples

Does the dReal SMT solver return counterexamples? I have seen examples where :produce-models is true, but I do not know how to generate counterexamples. Also, the dReach tool has a --visualize option, so it would seem that dReal would need to produce some model information. However, when I run it on .smt2 files, I can't seem to find a way to view counterexamples.
OK, it is trivial :). dReal does not follow the usual .smt2 convention of using (get-model); instead, you get models via the command-line option --model.
E.g.: dReal --model microwave1.smt2
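If you want a small file to test that on, here is a toy .smt2 example of my own (standard SMT-LIB syntax over QF_NRA; dReal reports delta-satisfiable results, so expect an interval assignment for x rather than an exact value):
(set-logic QF_NRA)
(declare-fun x () Real)
(assert (>= x 0.0))
(assert (<= x 10.0))
(assert (= (* x x) 2.0))
(check-sat)
(exit)
Running dReal --model on this should print the model alongside the delta-sat verdict.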

Running the Memnet model using Caffe

I am trying to use the MNIST dataset, one of the examples provided on GitHub, to run the Memnet model from the command line.
The model is downloaded from here.
I modified its deploy.prototxt accordingly, but it keeps telling me something is wrong, as the screenshot shows.
I have no idea what is going on. Can someone help me with this?
The command-line interface expects a solver.prototxt file as its -solver argument (that file names train_val.prototxt as one of its parameters); you cannot supply train_val.prototxt directly to caffe train.
You can look in the examples subfolder of Caffe and find some examples of solver.prototxt. A simple one is in examples/mnist: check out lenet_solver.prototxt.
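For orientation, a minimal solver sketch could look like the following; the field names are standard Caffe SolverParameter fields, but the values are placeholders you would tune for Memnet, and "train_val.prototxt" is assumed to be your network definition:
net: "train_val.prototxt"
base_lr: 0.001
lr_policy: "fixed"
max_iter: 10000
snapshot: 5000
snapshot_prefix: "memnet"
solver_mode: GPU
You would then run it with: caffe train -solver solver.prototxt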

Is it possible to update and use updated .ini and .ned files while an OMNeT++ simulation is running?

I am trying to run OMNeT++ and MATLAB in parallel and want them to communicate. While OMNeT++ is running, I want to update the position of a node, and for that I want to continuously edit the .ned and .ini files with MATLAB results. During the simulation I want to generate the result files from the updated files. I only want to update the position, not add or delete any nodes. Please suggest a way to proceed.
matlab_loop
{
    matlab_writes_position_in_ned_file;
    delay(100ms);
}

omnet_loop
{
    omnet_loads_ned_and_simulates;
    // .sca and .vec result files should update
    delay(100ms);
}
Thank you.
NED and ini files are read only during initialization of the model; you can't "read" them again after the simulation has started. On the other hand, you are free to modify your parameters and create/delete modules using OMNeT++'s C++ API. What you want to achieve is basically: set your node position based on some calculations carried out by MATLAB code. The proper way to do it:
Generate C code from your MATLAB code.
Link that code into your OMNeT++ model.
Create a new mobility model (assuming you are using INET) that uses the generated code; a sketch follows below.
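As a rough sketch of step 3 (class and header names are from INET 3.x and may differ in your version; MatlabMobility and matlab_next_position are hypothetical names standing in for your generated MATLAB code):

#include "inet/mobility/base/MovingMobilityBase.h"

// Hypothetical entry point of the C code generated from the MATLAB script.
extern "C" void matlab_next_position(double t, double *x, double *y);

namespace inet {

class MatlabMobility : public MovingMobilityBase
{
  protected:
    // MovingMobilityBase calls move() on each position update.
    virtual void move() override
    {
        double x, y;
        matlab_next_position(simTime().dbl(), &x, &y);
        lastPosition.x = x;  // lastPosition is inherited from MobilityBase
        lastPosition.y = y;
    }
};

Define_Module(MatlabMobility);

} // namespace inet

The simulation then picks up the new coordinates on every update interval, so nothing in the .ned or .ini files has to change at runtime.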
What you are looking for seems to be more of a project than a question that can be solved on a Q&A site like Stack Overflow.
Unfortunately, I have too little understanding of MATLAB and V-REP to give you a satisfactory answer; it seems you will need to play around with lower-level APIs.
As an example of coupling different simulation tools to form a simulation framework, consider reading this paper and this one.
Also note the answer given by @Rudi. He seems to know what he is talking about.

Training a new model using the PASCAL kit

I need some help on this.
Currently I am doing a computer vision project that requires me to train a new model to detect a certain object.
In this case, I am using the system by P. Felzenszwalb, D. McAllester, D. Ramanan, and their colleagues, discriminatively trained deformable part models, which is implemented in MATLAB.
Project webpage: http://www.cs.uchicago.edu/~pff/latent/.
However, I have no idea how to direct the system to use my own dataset (a collection of images and annotations), which is different from the PASCAL datasets, in order to train a new model.
By directing, I mean a line of code that lets me change the dataset the system reads from when training a model.
E.g.
% directory for caching models, intermediate data, and results
cachedir = ['/var/tmp/rbg/YOURPATH/' VOCyear '/'];
I tried looking at their README and documentation guides, but they do not mention this. Do correct me if I am wrong.
Let me know if I have not made my problem clear enough.
I also tried looking at some files such as global.m, but had no luck.
Your help is much appreciated, and thanks in advance!
You can try reading pascal.m in the DPM package (voc-release5); there is similar code that works on the VOC2007/2010 datasets.
There are plenty of parts that need to be adapted to achieve this. For example, voc_config has to be adapted in order to read from your files.
The same goes for the pascal_train.m function; depending on your images and the way you parse them, adapting it may take quite some time (see the sketch after the list below).
Other functions to consider:
imreadx
pascal_test
pascaleval
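As a hedged illustration of the kind of change involved (the exact field names live in voc_config.m and differ between releases, so treat these as placeholders to check against your copy; 'myobject' is a hypothetical class name):

% point the DPM config at your own data instead of the VOC devkit
conf.paths.model_dir = '/path/to/your/cache/';    % models, intermediate data, results
conf.pascal.dev_kit  = '/path/to/your/devkit/';   % your images + annotations, VOC-style layout
% then train, e.g. a model with 3 mixture components:
pascal_train('myobject', 3);

If your annotations are not in VOC format, the parsing code in pascal_train.m (and helpers such as imreadx) is where you would plug in your own reader.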