Are user cut callbacks prioritized over CPLEX's own cuts?

I implemented an MIP model in CPLEX which includes a knapsack constraint. I also have a user cut callback routine to add my own cuts. The results on different instances show that no cuts other than my user cuts were added to the model. There are several cover cuts that CPLEX could generate on its own, but it doesn't. I even set CPLEX parameters to aggressively generate cover cuts, but none are generated.
Is there some priority among the cuts added to the model? My user cut callback routine seems to find violated cuts at almost every node of the B&B tree, so I assume that's the reason why cover cuts are not generated by CPLEX. I can't find any hint about this in the CPLEX user manual. Does anybody know why this happens?
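To illustrate, this is the kind of parameter setting I mean (shown here with the CPLEX Python API; the same cover-cut parameter exists in the other APIs):

import cplex

model = cplex.Cplex()
# Cover cut generation: -1 = off, 0 = automatic, 1..3 = increasingly aggressive
model.parameters.mip.cuts.covers.set(3)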

Implementating spell drawing/casting mechanism in Luau (Roblox)

I am coding a spell-casting system where you draw a symbol with your wand (mouse), and it can recognize said symbol.
There are two methods I believe might work: a neural network and an "invisible grid system".
The problem with the neural network approach is that it would likely be suboptimal in Roblox Luau, and would not match the performance or speed I am hoping for. (Although I may just be lacking in neural network knowledge, so please let me know whether I should keep trying to implement it this way.)
For the invisible grid system, I thought of converting the drawing into 1s and 0s (1 = drawn, 0 = blank), then seeing if it is similar to one of the symbols. I create the symbols by making a dictionary like:
local Symbol = { -- "Answer Key" shape, looks like a tilted square
	-- rows stored as strings, since numeric literals would drop the leading zeros
	"00100",
	"01010",
	"10001",
	"01010",
	"00100",
}
The problem is that user error will likely make it inaccurate, like the blue boxes in this example "spell", which show user error/inaccuracy. I'm also sure that if I have multiple symbols, comparing every value in every symbol will not be quick.
Do you know an algorithm that could help me do this? Or just some alternative way of doing this I am missing? Thank you for reading my post.
I'm sorry if the format of this is incorrect; this is my first Stack Overflow post. I will gladly delete this post if it doesn't abide by one of the rules. (Let me know if there are any tags I should add.)
One possible approach to solving this problem is to use a template matching algorithm. In this approach, you would create a "template" for each symbol that you want to recognize, which would be a grid of 1s and 0s similar to what you described in your question. Then, when the user draws a symbol, you would convert their drawing into a grid of 1s and 0s in the same way.
Next, you would compare the user's drawing to each of the templates using a similarity metric, such as the sum of absolute differences (SAD) or normalized cross-correlation (NCC). The template with the lowest SAD or highest NCC value would be considered the "best match" for the user's drawing, and therefore the recognized symbol.
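Here is a minimal sketch of the idea in Python (hypothetical names; the same logic ports directly to Luau), assuming each symbol is a 5x5 grid stored as five strings of 0s and 1s:

# Templates for the symbols to recognize (only one shown here).
TEMPLATES = {
    "tilted_square": [
        "00100",
        "01010",
        "10001",
        "01010",
        "00100",
    ],
}

def sad(drawing, template):
    # Sum of absolute differences: the number of cells where the grids disagree.
    return sum(
        d != t
        for drow, trow in zip(drawing, template)
        for d, t in zip(drow, trow)
    )

def recognize(drawing, templates=TEMPLATES):
    # The template with the lowest SAD is the best match.
    return min(templates, key=lambda name: sad(drawing, templates[name]))

A perfect drawing of the tilted square gives a SAD of 0, and a drawing with a few misplaced cells still matches as long as no other template is closer. You could also reject any drawing whose best SAD is above some threshold, so that random scribbles don't match a spell.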
There are a few advantages to using this approach:
It is relatively simple to implement, compared to a neural network.
It is fast, since you only need to compare the user's drawing to a small number of templates.
It can tolerate some user error, since the templates can be designed to be tolerant of slight variations in the user's drawing.
There are also some potential disadvantages to consider:
It may not be as accurate as a neural network, especially for complex or highly variable symbols.
The templates must be carefully designed to be representative of the expected variations in the user's drawings, which can be time-consuming.
Overall, whether this approach is suitable for your use case will depend on the specific requirements of your spell-casting system, including the number and complexity of the symbols you want to recognize, the accuracy and speed you need, and the resources (e.g. time, compute power) that are available to you.

Rare question about interaction terms, main effects, and mean-centering - mix & match?

This question is not about the longstanding discussion of 'to mean-center, or not mean-center' interaction terms, or what mean-centering the variables in an interaction gets you (or doesn't get you).
The question is whether it is reasonable to have a model that includes an uncentered predictor serving to model a main effect (e.g. education's effect on income), together with an interaction term that is computed by multiplying a mean-centered version of that variable with another term (say, gender, to see if the education effect on income is conditional on gender), while leaving the 'standalone' education variable on its original scale.
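To make the setup concrete, here is the model being described as I understand it, in LaTeX notation (with $m$ standing for the sample mean of education; the second line is just an algebraic expansion of the product term):

\text{income} = \beta_0 + \beta_1\,\text{educ} + \beta_2\,\text{gender} + \beta_3\,(\text{educ} - m)\cdot\text{gender} + \varepsilon

\beta_3\,(\text{educ} - m)\cdot\text{gender} = \beta_3\,\text{educ}\cdot\text{gender} - \beta_3\,m\,\text{gender}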
For the sake of keeping the focus on this rare combination of an uncentered version of a variable with an interaction that is based on a centered version of the same variable in the same model, let's ignore the reasons for doing this (i.e. interpretability vs. collinearity).
Everything I can find about these issues seems to assume that all versions of the variable (as an individual variable/main effect, and as part of the interaction/product term) are either all centered or all uncentered. Hence the labeling of this as a 'rare question' about mean-centering and interactions.
My instinct is that this mixing and matching of centered and uncentered versions is problematic because, despite the linear relationship between the centered and uncentered versions, you end up with a model where one of the components of the interaction is technically absent. But this may also just be because I am not a fan of the argument, still common in a lot of places, that collinearity is the reason to mean-center.
What do people here think?

Integrate a Modelica variable without influencing state selection

I want to integrate a Modelica variable over time, just for convenience in plotting and post-processing. The variable I want to integrate over time is the power of a compressor so that I get the total energy. The first idea would be to add these lines:
Modelica.Units.SI.Power P_comp;
// fixed start value gives the integral a well-defined initial condition
Modelica.Units.SI.Energy E_comp(start=0, fixed=true);
equation
  P_comp = der(E_comp);
Is that the recommended way, or are there (better?) alternatives? Is it expected to influence the selection of dynamic states?
Assuming those two lines are the only ones using E_comp, that should work.
Basically, E_comp will be part of its own separate state-selection block, and changes there shouldn't influence anything else.
However, state selection consists of a number of algorithms and heuristics, so it is difficult to formally guarantee that a change does not influence it.
I could imagine some strange possibilities that would break this, but I don't think anyone has implemented them, and I don't see a use case for them (except to mess up cases like this).
And if you want to differentiate a signal instead of integrating it, things are a lot messier.

What do the gray squares represent in MiniZinc?

In MiniZinc, while visualising the execution tree (created via Profile search), I obtain a tree containing gray squares.
What do they represent?
The gray squares are back-jumps. They are parts of the tree for which the solver was able to prove that no solution is present.
In a general constraint programming solver, the solver performs a tree search. Whenever you find that one branch doesn't contain any solutions, you go to another branch. Traditionally, there are two branches for every search decision. For example, a value assignment and its negation. But it is also possible to create a branch for every possible value that the variable can take.
In Lazy Clause Generation solvers, search works a bit differently. Whenever you find that search failed, you let the SAT backend generate a reason, generally referred to as a "no-good". This no-good explains why this branch didn't contain any solutions, and can from then on be enforced as a new constraint. If you just revisit your last decision, then this new constraint might still be violated. Instead, these LCG solvers use a back-jump mechanism to jump up to the last decision where the no-good was not yet violated.
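As a toy illustration of the back-jump step (simplified and hypothetical; a real LCG solver derives the no-good from an implication graph rather than receiving it ready-made), here it is in Python:

def backjump_level(nogood_levels):
    # The no-good is no longer violated above its second-deepest decision
    # level, so that is where the solver jumps back to (0 if only one level).
    levels = sorted(set(nogood_levels), reverse=True)
    return levels[1] if len(levels) > 1 else 0

# Decisions currently on the trail, indexed by decision level.
trail = {1: "x1=3", 2: "x2=0", 3: "x3=7", 4: "x4=1", 5: "x5=2", 6: "x6=9"}

# Suppose conflict analysis produced a no-good involving levels 2 and 5 only.
target = backjump_level([2, 5])  # -> 2

# Everything deeper than the target is undone in a single jump; the skipped
# subtrees are what the gray squares depict: regions of the tree proved to
# contain no solution without being searched node by node.
for level in sorted(trail):
    if level > target:
        del trail[level]

print(trail)  # {1: 'x1=3', 2: 'x2=0'}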

MATLAB: How to Retrieve Intensity-Based Registration Data (with imregister) to Re-Perform Registration?

I thought this would be a simple task, but I just can't find the way to do it:
I am using 'imregister' (MATLAB) to register two medical X-ray images.
To ensure I get the best registration outcome as possible, I use some image processing techniques such as contrast enhancement, blackening of objects that are different between images and even cropping.
The outcome of this seems to be quite satisfying.
Now, I want to perform the exact same registration on the original images, so that I can automatically display the two ORIGINAL images in alignment.
I think that an output parameter such as tform would serve this purpose of re-applying a given registration to any two images, but unfortunately imregister does not provide such an output, as far as I know.
It does provide as an output the spatial referencing object R_reg, which might be the answer, but I still haven't figured out how to use it to re-perform the registration.
I should mention that since I am dealing with medical X-ray images, on which none of the feature-detecting algorithms seem to work well enough to perform registration, I can only use intensity-based (as opposed to feature-based) registration, and am therefore using imregister.
Does anyone know how I can accomplish this?
Thanks!
Noga
So, to make an answer out of my comment, there are two things you can do depending on the MATLAB release you are using:
Option 1: R2013a and earlier
I suggest modifying the built-in imregister function by forcing tform to be an output, and saving that function under another name.
For example:
function [movingReg,Rreg,tform] = imregister2(varargin)
Save that, add it to your path, and you're good to go. If you type edit imregister you will notice that the first line calls imregtform to get the required geometric transformation, while the last line calls imwarp to apply that geometric transformation. Which leads us to Option 2.
Option 2: R2013b and later
Well, in that case you can directly use imregtform to get the tform object and then use imwarp to apply it. Easy, isn't it?
Hope that makes things clearer!