Annotation processing got disabled, since it requires a 1.6 compliant JVM

I am using the most up-to-date University Edition of AnyLogic.
I created a data set in a Parameter Variation Experiment. When running the experiment I get the following error: "Annotation processing got disabled, since it requires a 1.6 compliant JVM".
I don't know what is causing this error. I'm sure AnyLogic ships with the required Java.
I set the program to display the results of 20 runs for a certain count variable. The program runs, but no output is shown on the interface. Output is shown in the console, along with the error message in the title. I am wondering whether getting rid of the error will make the output appear in the user interface.


Silence "unknown OID 17227: failed to recognize type of 'geography'. It will be treated as String."

I'm using columns of type "geography" for a few "point within polygon" queries. They are too few and too simple to justify bundling a GIS gem; I handle it all at the SQL level.
However, every time Rails boots (rake tasks, console, etc.), the following warning is emitted:
unknown OID 17227: failed to recognize type of 'geography'. It will be treated as String.
I'm fine with "geography" being treated as "String", but the warning triggers warning emails every time a cron job executes any rake task.
Any idea how I can silence this warning?
Thanks for your hints!
Looking at the ActiveRecord source, I can answer the question myself:
The warning is hardcoded, so it can't be silenced through ActiveRecord configuration. However, RUBYOPT=-W0 gets rid of Ruby warnings altogether. That's a very blunt instrument, of course, but since I still see the warnings in local development, I can live with a completely warning-free production system.
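For example (a minimal sketch; the path, task name, and use of Bundler are placeholders, not taken from the question), the option can be scoped to just the cron-driven rake tasks so development output stays unchanged:

# crontab entry: silence Ruby warnings only for this cron run
0 * * * * cd /path/to/app && RUBYOPT=-W0 bundle exec rake some:task

Setting RUBYOPT globally in the production environment would have the same effect if warnings should be suppressed everywhere.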
Since this is the first question/answer that comes up when searching for this error, I would recommend following this answer to actually fix it: What is the source of "unknown OID" errors in Rails?

VSTS Test fails but vstest.console passes; the assert executes before the code for some reason?

Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The test assembly being executed is a .dll. A lot of these tests call an API.
In the problematic method, there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records and then read the last one from that same interface, both via the API. The test simply checks whether writing the last record was successful in an end-to-end context; that's why there is both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also ran it manually via vstest.console.exe on the command line, and the expected results always come out as well.
However, when it comes to the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we got to the point of printing out the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, but if we check the external interface via a different method, we can confirm that the write actually happened. What gives? Why is VSTest seeing an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine outputs appear in the logs; they only appear for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE the printed lines appear! And even then, the printing should happen after reading the list of records, yet when the prints do show up we're still missing the record we just wrote.
Is there some bottom-to-top behavior we're missing here? It really looks as if VSTS's vstest is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th from last), and we need them to run in order because some of the later tests depend on the earlier ones succeeding.
Anything we're missing here? I'd post source code, but there's a bunch of things I'd need to scrub first.
It turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() on the culprit call instead, and we'll also go back through the other tests to check for the same problem.
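For anyone hitting the same thing, here is a minimal C# sketch of the failure mode, with hypothetical names rather than our actual test code: if the async call is started but never awaited or waited on, execution continues immediately and the assert can run before the API round trip completes.

using System;
using System.Threading.Tasks;

// Hypothetical stand-in for the real API client used by the tests.
public class RecordApiClient
{
    public async Task WriteRecordAsync(string record)
    {
        await Task.Delay(100);                  // placeholder for the real HTTP call
        Console.WriteLine("wrote " + record);
    }
}

public static class AwaitSketch
{
    // Failure mode: the task is started but never awaited or waited on,
    // so execution falls through and a following assert can run before
    // the write/read round trip has finished.
    public static void Broken()
    {
        var client = new RecordApiClient();
        client.WriteRecordAsync("r1");          // fire-and-forget: not awaited
        // ... an assert here may see stale data
    }

    // The fix described above: block until the call completes before asserting.
    public static void Fixed()
    {
        var client = new RecordApiClient();
        client.WriteRecordAsync("r1").Wait();   // blocks the test thread until done
        // ... an assert here sees the completed write
    }
}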

OpenBUGS generating initial values error

I have a model that I can run in WinBUGS, but I get an error in OpenBUGS when loading and generating initial values, even when using the same code and the same data.
When loading initial values in both WinBUGS and OpenBUGS I get the message that the "chain contains uninitialized variables". Then I can successfully generate initial values in WinBUGS, but in OpenBUGS I get the error "error for node Ny[2] of type GraphBinomial.Node second argument must be integer valued". My initial values for Ny are Ny = c(4,4,4,4,4,4,4,4). These look like integers to me, and the same initial values are accepted in WinBUGS.
You may be asking yourself: if I can get it to work in WinBUGS, why do I care about getting it to work in OpenBUGS? I am most interested in running the whole process through R. I've been trying to do this with R2WinBUGS or BRugs (OpenBUGS), but I get errors on initialization with both of them, and it even crashes R (I'd never seen the R bomb before, but now I can make it happen repeatedly). I think this is probably for the same reason that I can't run the data manually in OpenBUGS.
Being able to run this through R would greatly reduce the frustration of interacting with WinBUGS/OpenBUGS, especially when running a series of models with different data.
Thanks in advance for your insight.
lg

Peculiar error for transition coverage

Hi all,
I am facing a strange error message while debugging code for functional coverage, specifically transition coverage. There are two level pins, one each for fifo1 and fifo2. When doing coverage for the first level pin (level1) the code parses successfully, but for the level2 pin it throws an error that says:
*** Error: Syntax error (probably an infinite recursion in macro expansion)
Before loading your code, do a trace macro. This will show which macros are being expanded; look in your docs for more details.
Also, unless you're just writing some simple prototyping code, 'tick notation' for accessing signals is VERY slow. It's the old method; Cadence's recommendation is to use ports instead of 'tick access'. We sped up our test runs by a factor of roughly 3-10x (can't remember precisely) by switching from ticks to ports back in version 6.01 of Specman.

Why does my MATLAB standalone application exit with the error "TooManyOutputs"?

I have created a standalone application in MATLAB. It actually works and displays the desired output, but it closes immediately, not even leaving enough time to examine the output and read the error message in the DOS window (standalone mode), which says:
MATLAB:TooManyOutputs
Warning: 1 visible figure(s) exist at MCR Termination
If your application has terminated unexpectedly, please note that
applications generated by the MATLAB Compiler terminate when there are no
visible figure windows. See the documentation for WaitForFiguresToDie and
WAITFORCALLBACKS for more information.
Any help would be appreciated.
Looking at the first line of your message, TooManyOutputs suggests that you have an assignment somewhere of the form
[a b] = somefunction(parameters)
so you want the outputs of somefunction to be put in a and b, but somefunction only returns one output. This error causes your program to terminate; the MCR then notices that the program exited without closing your figure window, which produces the later messages.
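As a minimal illustration (the function name is hypothetical, not taken from your code), a function that declares a single output reproduces this kind of error as soon as it is called with two:

% onlyone.m -- hypothetical function that declares a single output
function y = onlyone(x)
y = x + 1;
end

% At the MATLAB prompt, asking for two outputs fails immediately:
% >> [a, b] = onlyone(3)
% errors with "Too many output arguments" -- the MATLAB:TooManyOutputs in your message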
If I'm right about TooManyOutputs, you should already get that error message when running your code directly in MATLAB; have you tried that before creating a standalone application?
If this doesn't help, you should probably post some of your code to make it clearer where the problem could come from.