As an OpenMDAO 1.x Problem handles only one driver, and as optimizers are still supposed to be drivers, how could a multi-level formulation be implemented? Should I use two problems? Should I call an optimizer directly within a component's solve_nonlinear method? Thanks.
There have been some changes, and Problem is no longer a system. The best way to do this now is to create a Component that contains the sub-problem, tells it when to run, and passes data in and out. See example here:
How to use nested problems in OpenMDAO 1.x?
The planned way to handle this is going to be to use nested Problem instances. This is not implemented yet in the Problem class, but it is very easy to implement by hand.
All you would need to do is define your own solve_nonlinear method in a subclass of Problem. If you're going to use analytic derivatives, you would also need to implement the jacobian and apply_linear methods, and use post-optimality sensitivities if you had nested optimizers. Or you could force finite differencing to happen in the containing parent group.
Your solve_nonlinear will take in the params, unknowns, and resids dictionaries and pass the relevant values down into the sub-problem's vectors. Essentially, the framework was designed so that it does not need to know you're using nested problems: the top-level framework treats the inner problem as just a regular component.
I'm using the Epsilon Comparison Language for the first time. I am writing code to compare two models; in particular, I want to print some information to the default output console when the code finds differences between the models. I want to show, for example, the name of the involved rule and the differences between the fields under investigation. When the comparison ends without differences, I can show everything I need using the matchInfo variable in a "do" block. How can I solve the problem when the code finds some differences? Thanks.
ECL does not provide any built-in differencing or difference visualisation capabilities. If you need such capabilities my suggestion would be to consider using EMFCompare.
My team plans to apply OptaPlanner to an existing system.
The existing system has its own rule sets: it tries them one by one and picks the best result.
We want to start from that result as a heuristic
and then solve the problem with metaheuristics.
We have reviewed the OptaPlanner manual, especially the repeated planning section,
but we can't find a way to do this.
Is there a way to accept the existing system's result?
Your cooperation would be highly appreciated.
Best regards.
For OptaPlanner, it makes no difference where the input solution comes from. Consider the following code:
MyPlanningSolution solution = readSolution();
Solver<MyPlanningSolution> solver = SolverFactory.create(...)
.buildSolver();
solver.solve(solution);
Notice how solution comes from a custom method, readSolution(). Whether that method generates the initial solution randomly, reads it from a file, from a database, etc., does not matter to the solver. It also does not matter whether it is initialized or not - the construction heuristic, if configured, will simply skip the already-initialized entities.
That means you have absolute freedom in how you create your initial solution and, to the solver, they all look the same.
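As a sketch of that idea, a readSolution() like the one below could convert the legacy result into an already-initialized solution. The types and accessors here (MyPlanningSolution, Task, LegacyResult, LegacyAssignment, legacySystem) are placeholder names for your own domain and legacy code, not OptaPlanner API:

// Hypothetical sketch: seed the solver with the legacy system's best result.
private MyPlanningSolution readSolution() {
    LegacyResult best = legacySystem.tryRuleSetsAndPickBest(); // output of the existing system

    MyPlanningSolution solution = new MyPlanningSolution();
    solution.setResourceList(best.getResources());             // problem facts

    List<Task> tasks = new ArrayList<>();
    for (LegacyAssignment assignment : best.getAssignments()) {
        Task task = new Task(assignment.getTaskId());
        task.setResource(assignment.getResource());            // pre-set the planning variable
        tasks.add(task);
    }
    solution.setTaskList(tasks);                                // already-initialized planning entities
    return solution;
}

Because the entities come back with their planning variables already set, a configured construction heuristic will skip them and local search will start improving the legacy result directly.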
We are working on a very complex solution using Drools 6 (Fusion), and I would like your opinion about the best way to read the objects created as correlation results over time.
My first basic approach was to read the working memory every so often, looking for new objects and reporting them to an external service (REST).
AgendaEventListener does not seem to be the "best" approach because I don't care about most of the objects being inserted into working memory, so maybe the best approach would be to inject the particular "object" into some sort of service inside the DRL. Is this a good approach?
You have quite a lot of options. In decreasing order of my preference:
AgendaEventListener is probably the solution requiring the smallest amount of LOC, and it might be useful for other tasks as well; all you have on the negative side is one additional method call and a class test per inserted fact. Peanuts. (A sketch of the listener approach follows this list.)
You can wrap the insert macro in a DRL function and collect inserted facts of class X in a global List. The problem here is that you'll have to pass the KieContext as a second parameter to the function call.
If the creation of a class X object is inevitably linked with its insertion into WM, you could register new objects in a static List inside class X, in a factory method (or the constructor).
I'm putting your "basic approach" last because it requires many more cycles than the listener (#1) and tons of overhead for maintaining the set of X objects that have already been pushed to REST.
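To make option #1 concrete, here is a minimal sketch of a listener that forwards newly inserted facts of one class to an external service. Note that in Drools 6 the insertion callback lives on RuleRuntimeEventListener (AgendaEventListener reports rule activations); ResultFact and RestClient are hypothetical names for your correlation-result class and REST client:

import org.kie.api.event.rule.ObjectDeletedEvent;
import org.kie.api.event.rule.ObjectInsertedEvent;
import org.kie.api.event.rule.ObjectUpdatedEvent;
import org.kie.api.event.rule.RuleRuntimeEventListener;

public class ResultReportingListener implements RuleRuntimeEventListener {

    @Override
    public void objectInserted(ObjectInsertedEvent event) {
        // One class test per inserted fact: forward only the objects we care about.
        if (event.getObject() instanceof ResultFact) {
            RestClient.report((ResultFact) event.getObject()); // hypothetical REST call
        }
    }

    @Override
    public void objectUpdated(ObjectUpdatedEvent event) { /* not relevant here */ }

    @Override
    public void objectDeleted(ObjectDeletedEvent event) { /* not relevant here */ }
}

Register it once with kieSession.addEventListener(new ResultReportingListener()) before the rules fire.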
I am comparing a number of different methods for organizing the nodes at the "frontier" in Dijkstra's single-source shortest path algorithm. One of the implementations that I am playing around with is using q: scala.collection.mutable.Queue.
Essentially, each time I add a node to q, I sort q. This method, as expected, takes significantly longer than using scala.collection.mutable.PriorityQueue and a MinHeap that I implemented. My question is, what kind of sort is Queue using when I call q.sorted? I am specifically interested in the time complexity of the sorted implementation.
I have tried looking at the API (http://www.scala-lang.org/api/2.10.2/index.html#scala.collection.mutable.Queue) and code (https://github.com/scala/scala/blob/v2.10.2/src/library/scala/collection/mutable/Queue.scala#L1) but haven't been able to track this down.
Thank you in advance for your help.
Queue inherits the sorted method from SeqLike. As you can see there, it copies the elements into a new array, sorts that array via java.util.Arrays.sort, and then builds a new collection of the original type from the result. For an array of objects, java.util.Arrays.sort is a stable merge sort (TimSort in recent JDKs), so every call to sorted costs O(n log n), which is why sorting the queue on every insertion is so much slower than pushing onto a heap-based priority queue (O(log n) per insertion).
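For illustration, this is roughly what sorted does, written out in plain Java (the Scala implementation does the same thing generically, rebuilding the original collection type through a builder):

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class SortedQueueDemo {
    // Copy the elements to an array, delegate to java.util.Arrays.sort
    // (O(n log n)), then rebuild a collection of the original type.
    static Queue<Integer> sortedCopy(Queue<Integer> q) {
        Integer[] elems = q.toArray(new Integer[0]);
        Arrays.sort(elems);                        // stable merge sort (TimSort)
        return new ArrayDeque<>(Arrays.asList(elems));
    }

    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>(Arrays.asList(5, 1, 4));
        System.out.println(sortedCopy(q));         // prints [1, 4, 5]
    }
}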
This question relates to project design. The project takes an electrical system and defines its function programmatically. Now that I'm knee-deep in defining the system, I'm incorporating a significant amount of interaction which causes the system to configure itself appropriately. Example: the system opens and closes electrical contactors when certain events occur. Because this system is on an airplane, it relies on air/ground logic and thus has two different behaviors depending on where it is.
I give all of this explanation to demonstrate the level of complexity that this application contains. As I have continued in my design, I have employed if/else constructs as a means of working out the proper configurations of this electrical system. However, the deeper I get into the coding, the more if/else constructs are required. I feel that I have reached a point where I am programming this system inefficiently.
For those who have tackled projects like this before, I ask: am I treading a well-known path (when it comes to defining EVERY possible scenario that could occur) and should I continue to persevere, or can I employ some other strategies to accomplish the task of defining a real-world system's behavior?
At this point, I have little to no experience using delegates, but I wonder if I could utilize some observers or other "cocoa-ey" goodness for checking scenarios in lieu of endless if/else blocks.
Since you are trying to model a real-world system, I would suggest creating a concrete object-oriented design that clearly defines the is-a and has-a relationships, and applying good old-fashioned object-oriented decomposition to break the real-world system into its functional parts.
I would suggest you look into defining protocols that handle the generic case, and using them on specific cases.
For example, you can have many types of events adhering to an ElectricalEvent protocol, and depending on the type, an ElectricalContactor can discriminate between a GeneralElectricEvent and a SpecializedElectricEvent using the isKindOfClass: selector.
If you can define all the states in advance, you're best off implementing this as a finite state machine. This allows you to define the state-dependent logic clearly in one central place.
There are some implementations that you could look into:
SCM allows you to generate state machine code for Objective-C
OFC implements them as DFSM
Of course you can also roll your own customized implementation if that suits you better.
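To make the finite state machine idea concrete, here is a rough hand-rolled sketch. It is written in Java purely for illustration (the structure maps directly onto an Objective-C class), and the states and events are invented from the air/ground example in the question:

import java.util.EnumMap;
import java.util.Map;

public class PowerStateMachine {
    enum State { GROUND_CONFIG, FLIGHT_CONFIG }
    enum Event { WEIGHT_ON_WHEELS, WEIGHT_OFF_WHEELS }

    // All state-dependent transition logic lives in one central table
    // instead of being scattered across if/else blocks.
    private static final Map<State, Map<Event, State>> TRANSITIONS = new EnumMap<>(State.class);
    static {
        Map<Event, State> ground = new EnumMap<>(Event.class);
        ground.put(Event.WEIGHT_OFF_WHEELS, State.FLIGHT_CONFIG);
        TRANSITIONS.put(State.GROUND_CONFIG, ground);

        Map<Event, State> flight = new EnumMap<>(Event.class);
        flight.put(Event.WEIGHT_ON_WHEELS, State.GROUND_CONFIG);
        TRANSITIONS.put(State.FLIGHT_CONFIG, flight);
    }

    private State current = State.GROUND_CONFIG;

    // Apply an event; events with no transition from the current state are ignored.
    public void handle(Event event) {
        State next = TRANSITIONS.get(current).getOrDefault(event, current);
        if (next != current) {
            current = next;
            reconfigure(next); // e.g. open/close the contactors for the new configuration
        }
    }

    private void reconfigure(State state) {
        System.out.println("Reconfiguring contactors for " + state);
    }
}

Each observer or delegate callback then reduces to a single handle(event) call, and the transition table stays the one place where every valid configuration is enumerated.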