Implementing a priority queue in MATLAB in order to solve optimization problems using branch and bound - matlab

I'm trying to code a priority queue in MATLAB. I know Simulink provides a priority queue block, but I want to implement one in plain MATLAB. I have pseudocode that uses a priority queue for a method called best-first search with branch and bound. The branch-and-bound design strategy explores a state space tree and is used to solve optimization problems (simple explanation of what branch and bound is).
I have read Chapter 5: Branch and Bound of 'Foundations of Algorithms' (4th edition, by Richard Neapolitan and Kumarss Naimipour), a book about designing algorithms, complexity analysis of algorithms, and computational complexity (analysis of problems). It's a very interesting book, and in it I came across this pseudocode:
void BeFS(state_space_tree T, number& best)
{
    priority_queue_of_node PQ;
    node u, v;
    initialize(PQ);                        % initialize PQ to be empty
    v = root of T;
    best = value(v);
    insert(PQ, v);                         % insert(PQ, v) is a procedure that adds v to the priority queue PQ
    while (!empty(PQ)) {                   % remove node with best bound
        remove(PQ, v);                     % remove(PQ, v) is a procedure that removes the node with the best bound and assigns its value to v
        if (bound(v) is better than best)  % check if node is still promising
            for (each child u of v) {
                if (value(u) is better than best)
                    best = value(u);
                if (bound(u) is better than best)
                    insert(PQ, u);
            }
    }
}
I don't know how to code it in MATLAB. Branch and bound is an interesting general algorithm for finding optimal solutions to various optimization problems, especially in discrete and combinatorial optimization; rather than relying on heuristics, it prunes the search space and can therefore reduce calculation time and find the optimal solution faster.
EDIT:
I checked everywhere for an existing implementation before posting a question here. I came here to get ideas on how to get started implementing this code.

I have included this in your post so people know better what you expect of them. However, 'ideas on how to get started with the implementation' is still not much more specific than 'how do I write code in MATLAB'.
However, I will still try to answer:
Lay out the structure of the code: write the basic loops and fill them with comments describing what you want to do
Pick one of those comments (the easiest or the first) and see whether you can make it happen in a few lines; you can test it by generating some dummy input for that piece of code
Keep repeating step 2 until all comments have the required code
If you get stuck in one of the blocks, and you have searched but not found the answer to a specific question, then this is not a bad place to ask.
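For example, here is a minimal sketch (for a maximization problem) of how the pseudocode above could be mapped onto plain MATLAB. The queue is kept as a struct array that is scanned for the best bound on every removal; branchChildren is a hypothetical, problem-specific helper you would have to write yourself, and nodes are assumed to be structs with value and bound fields (children must have the same fields as the root node).

function best = befs_sketch(rootNode)
% Best-first branch-and-bound skeleton (sketch only).
% rootNode is assumed to be a struct with fields .value and .bound;
% branchChildren(v) is a placeholder that returns a struct array of
% child nodes with the same fields.
PQ = rootNode;                         % "priority queue" kept as a struct array
best = rootNode.value;
while ~isempty(PQ)
    [~, idx] = max([PQ.bound]);        % take the node with the best (largest) bound
    v = PQ(idx);
    PQ(idx) = [];
    if v.bound > best                  % node is still promising
        children = branchChildren(v);  % hypothetical, problem-specific helper
        for k = 1:numel(children)
            u = children(k);
            if u.value > best
                best = u.value;
            end
            if u.bound > best
                PQ(end+1) = u; %#ok<AGROW>  enqueue the promising child
            end
        end
    end
end
end

Scanning the whole array with max makes each removal O(n); once this skeleton works, replacing it with a binary heap stored in an array is the usual next step.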

Related

Designing an Asynchronous Down Counter (13 to 1 and reset) using JK-FF with PRN and CLRN

I decided to mess around with asynchronous counters and tried to create an asynchronous down counter using JK flip-flops which counts from 13 to 1 and wraps around. However, I ran into various problems.
My RESET signal expression: Q0.Q1.Q2.Q3 + (Q1.Q2.Q3)' where Q0 is LSB and Q3 is MSB
My circuit is as follows:
However, when I simulated the circuit, it gave me the wrong results.
I hope I described my problem in enough detail; if there is anything I missed, please point it out. Thank you very much, and have a wonderful day.
I tried swapping my reset signal between PRN and CLRN, and I have also tried using T, SR, and D flip-flops (with correspondingly different implementations).
I found a workaround for this problem, though I would not consider it an optimal solution. It does the job for now, and I look forward to receiving insights on the issues I faced.
Instead of designing the RESET signal and having the circuit loop in some undefined state, I designed a MOD-13 counter which goes through all 13 states from 0000 to 1100 and maps the output to the desired values (13 to 1 respectively).
MOD-13 counters are easier to design, as they make it harder to reach undefined states, and the circuit responsible for mapping the states to the desired values is a simple combinational circuit, which is also easy to design.
Of course, there is a much better way to do this, but I am not able to implement it at the moment. That being said, I am always open to discussion, and I look forward to finding out where my implementation went wrong. Have a wonderful day!
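To illustrate the mapping stage described above, here is a small MATLAB sketch (my own illustration, not a model of the flip-flop circuit) that simply tabulates the 13 counter states 0000 through 1100 and the down-count values 13 through 1 that the combinational output stage would map them to.

% Tabulate the MOD-13 counter states and the values the combinational
% output stage should map them to (illustration of the mapping only).
for s = 0:12                       % counter states 0000 .. 1100
    out = 13 - s;                  % desired down-count value 13, 12, ..., 1
    fprintf('state %s -> output %2d (%s)\n', ...
        dec2bin(s, 4), out, dec2bin(out, 4));
end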

Determining direct-feedthrough paths without compilation/execution

I am currently working on a tool written in MATLAB code (M-script) that runs a set of checks on a given Simulink model. This tool does not compile or execute the model; I'm using find_system and get_param to retrieve all the information I need in order to run the routines of my tool.
I've reached a point where I need to determine whether a certain block has direct-feedthrough or not. I am not entirely sure how to do this. Two possible solutions come to mind:
A property might store this information and might be accessible via get_param. After investigating this, I could not find any such property.
Some block types have direct feedthrough (Sum, Logic, ...), some others do not (Unit Delay, Integrator), so I could use the block type to determine whether a block has direct feedthrough. Since I'm not an experienced Simulink modeller, I'm not sure whether it's possible to tell from the block type alone. Also, this would require a lookup table covering all Simulink block types, which is an impossible task, since additional block types can be added to Simulink by third-party modules.
Any help or pointers to possible solutions are greatly appreciated.
after some further research...
There is an "official solution" from MathWorks:
just download the linked m-file
It shows that my idea was not that bad ;)
and for the record, my idea:
I think it's doable quite easily. I cannot show you any code yet, but I'll see what I can do. My idea is the following:
Programmatically create a new model
Add a Constant source block and a Terminator
Add the block whose direct-feedthrough behaviour you want to determine in between
Connect them with add_line
Run the simulation and log the states, which will give you the xout variable in the workspace
If there is direct feedthrough the vector is empty, otherwise it is not
You will probably need to include some try/catch error handling for special cases
This way you can analyse a block for direct feedthrough by just migrating it into another model, without compiling your actual main model. It's not the fastest solution, but I cannot imagine that performance matters that much for you.
Here we go, this script works fine for my examples:
function feedthrough = hasfeedthrough( input )
% get block path (make sure the standard Simulink library is loaded
% so find_system can see it)
load_system('simulink');
blockinfo = find_system('simulink','Name',input);
blockpath = blockinfo{1};
% create new system
new_system('feed');
open_system('feed');
% add test model elements
src = add_block('simulink/Sources/Constant','feed/Constant');
src_ports = get_param(src,'PortHandles');
src_out = src_ports.Outport;
dest = add_block('simulink/Sinks/To Workspace','feed/simout');
dest_ports = get_param(dest,'PortHandles');
dest_in = dest_ports.Inport;
test = add_block(blockpath,'feed/test');
test_ports = get_param(test,'PortHandles');
test_in = test_ports.Inport;
test_out = test_ports.Outport;
add_line('feed',src_out,test_in);
add_line('feed',test_out,dest_in);
% setup simulation
set_param('feed','StopTime','0.1');
set_param('feed','Solver','ode3');
set_param('feed','FixedStep','0.05');
set_param('feed','SaveState','on');
% run simulation and get states
sim('feed');
% if condition for blocks like state space
feedthrough = isempty(xout);
if ~feedthrough
    a = simout.data;
    if ~any(a == xout)
        feedthrough = ~feedthrough;
    end
end
% close and discard the temporary system
close_system('feed',0);
end
When you enter, for example, 'Gain' it will return 1; when you enter 'Integrator' it will return 0.
Execution time on my ancient machine is 1.3sec, not that bad.
Things you probably still have to do:
Add another parameter to define whether the block is continuous or discrete-time, and set the solver accordingly.
Test some "extraordinary" blocks; it may not work for everything. Also, I haven't implemented anything to deal with logic signals, but since the constant is 1 it should work for those as well.
Just try everything out; at least it's a good base for you to work on.
A famous exception is the State-Space block, which can have direct feedthrough AND states. There are not that many standard blocks with this behaviour, but if you also have to deal with third-party blocks you could get into some trouble, I have to admit.
A possible solution for the state-space case: if one compares xout with yout, one gets another indicator of direct feedthrough: if there is direct feedthrough the vectors are not equal, and if there is none they are equal. Just an example, but I can imagine it is possible to find more general ways to test things like that.
Besides the added simout block above, one needs the condition:
% if condition for blocks like state space
feedthrough = isempty(xout);
if ~feedthrough
    a = simout.data;
    if ~any(a == xout)
        feedthrough = ~feedthrough;
    end
end
From the documentation:
Tip
To determine if a block has direct feedthrough:
Double-click the block. The block parameter dialog box opens.
Click the Help button in the block parameter dialog box. The block reference page opens.
Scroll to the Characteristics section of the block reference page, which lists whether or not that block has direct feedthrough.
I couldn't find a programmatic equivalent though...
Based on a similar approach to the one by @thewaywewalk, you could set up a temporary model that contains an algebraic loop around the block under test, similar to the following.
(Note that you would replace the State-Space block with any block that you want to test.)
Then set the model's algebraic loop diagnostic to error out if there is an algebraic loop.
If an error occurs when the model is compiled with
>> modelname([],[],[],'compile');
(and you should check that it is the algebraic loop error that has occurred), then the block has direct feedthrough.
If no error occurs, then the block does not have direct feedthrough.
At this point you would need to terminate the model using
>> modelname([],[],[],'term');
If the block has multiple inports or outports then you'll need to iterate over all combinations of them.
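For completeness, here is a rough sketch of how that check could be scripted. The temporary model name ('feedtest'), the Sum-based loop, and the single-inport/single-outport assumption are mine; the essential steps (close a feedback loop around the block, set the algebraic loop diagnostic to 'error', then compile and terminate) follow the description above.

function ft = has_direct_feedthrough(blockpath)
% Sketch: detect direct feedthrough via the algebraic-loop trick described above.
% Assumes the block under test has one inport and one outport.
mdl = 'feedtest';                          % temporary model name (my choice)
new_system(mdl);
add_block('simulink/Sources/Constant', [mdl '/Constant']);
add_block('simulink/Math Operations/Sum', [mdl '/Sum']);
add_block(blockpath, [mdl '/test']);
% Close a feedback loop through the block: Constant -> Sum -> test -> Sum.
add_line(mdl, 'Constant/1', 'Sum/1');
add_line(mdl, 'Sum/1', 'test/1');
add_line(mdl, 'test/1', 'Sum/2');
% Report algebraic loops as errors instead of warnings.
set_param(mdl, 'AlgebraicLoopMsg', 'error');
try
    % If name resolution fails for an unsaved model, save_system(mdl) first.
    eval([mdl '([],[],[],''compile'');']);
    eval([mdl '([],[],[],''term'');']);
    ft = false;                            % compiled cleanly: no algebraic loop
catch err
    % Assume the failure is the algebraic-loop error; a more robust version
    % would inspect err.identifier before concluding this.
    ft = true;
end
close_system(mdl, 0);                      % discard the temporary model
end

Compared with the simulation-based approach above, this only compiles the throwaway model, so it should be faster, but it still needs the port-combination handling mentioned above for blocks with multiple inports or outports.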

Implementing snapshot in FRP

I'm implementing an FRP framework in Scala and I seem to have run into a problem. Motivated by some thinking and this question, I decided to restrict the public interface of my framework so that Behaviours can only be evaluated in the 'present', i.e.:
behaviour.at(now)
This also falls in line with Conal's assumption in the Fran paper that Behaviours are only ever evaluated/sampled at increasing times. It does restrict transformations on Behaviours but otherwise we find ourselves in huge problems with Behaviours that represent some input:
val slider = Stepper(0, sliderChangeEvent)
With this Behaviour, evaluating future values would be incorrect and evaluating past values would require an unbounded amount of memory (all occurrences used in the 'slider' event would have to be stored).
I am having trouble with the specification for the 'snapshot' operation on Behaviours given this restriction. My problem is best explained with an example (using the slider mentioned above):
val event = mouseB // an event that occurs when the mouse is pressed
val sampler = slider.snapshot(event)
val stepper = Stepper(0, sampler)
My problem here is that if the 'mouseB' Event has occurred when this code is executed then the current value of 'stepper' will be the last 'sample' of 'slider' (the value at the time the last occurrence occurred). If the time of the last occurrence is in the past then we will consequently end up evaluating 'slider' using a past time which breaks the rule set above (and your original assumption). I can see a couple of ways to solve this:
We 'record' the past (keep hold of all past occurrences in an Event) allowing evaluation of Behaviours with past times (using an unbounded amount of memory)
We modify 'snapshot' to take a time argument ("sample after this time") and enforce that that time >= now
In a more wacky move, we could restrict creation of FRP objects to the initial setup of a program somehow and only start processing events/input after this setup is complete
I could also simply not implement 'sample' or remove 'stepper'/'switcher' (but I don't really want to do either of these things). Has anyone any thoughts on this? Have I misunderstood anything here?
Oh I see what you mean now.
Your "you can only sample at 'now'" restriction isn't tight enough, I think. It needs to be a bit stronger to avoid looking into the past. Since you are using an environmental conception of now, I would define the behavior construction functions in terms of it (so long as now cannot advance by the mere execution of definitions, which, per my last answer, would get messy). For example:
Stepper(i,e) is a behavior with the value i in the interval [now,e1] (where e1 is the
time of first occurrence of e after now), and the value of the most recent occurrence of e afterward.
With this semantics, your prediction about the value of stepper that got you into this conundrum is dismantled, and the stepper will now have the value 0. I don't know whether this semantics is desirable to you, but it seems natural enough to me.
From what I can tell, you are worried about a race condition: what happens if an event occurs while the code is executing.
Purely functional code does not like to have to know that it gets executed. Functional techniques are at their finest in the pure setting, such that it does not matter in what order code is executed. A way out of this dilemma is to pretend that every change happened in one sensitive (internal, probably) piece of imperative code; pretend that any functional declarations in the FRP framework happen in 0 time so it is impossible for something to change during their declaration.
Nobody should ever sleep, or really do anything time sensitive, in a section of code that is declaring behaviors and things. Essentially, code that works with FRP objects ought to be pure, then you don't have any problems.
This does not necessarily preclude running it on multiple threads, but to support that you might need to reorganize your internal representations. Welcome to the world of FRP library implementation -- I suspect your internal representation will fluctuate many times during this process. :-)
I'm confused about your confusion. The way I see it is that Stepper will "set" the behavior to a new value whenever the event occurs. So, what happens is the following:
The instant in which the event mouseB occurs, the value of the slider behavior will be read (snapshot). This value will be "set" into the behavior stepper.
So, it is true that the Stepper will "remember" values from the past; the point is that it only remembers the latest value from the past, not everything.
Semantically, it is best to model Stepper as a function like luqui proposes.

Performance of immutable set implementations in Scala

I have recently been diving into Scala and (perhaps predictably) have spent quite a bit of time studying the immutable collections API in the Scala standard library.
I am writing an application that necessarily does many +/- operations on large sets. For this reason, I want to ensure that the implementation I choose is a so-called "persistent" data structure, so that I avoid copying the whole set on every update. I saw this answer by Martin Odersky, but it didn't really clear up the issue for me.
I wrote the following test code to compare the performance of ListSet and HashSet for add operations:
import scala.collection.immutable._

object TestListSet extends App {
  var set = new ListSet[Int]
  for (i <- 0 to 100000) {
    set += i
  }
}

object TestHashSet extends App {
  var set = new HashSet[Int]
  for (i <- 0 to 100000) {
    set += i
  }
}
Here is a rough runtime measurement of the HashSet:
$ time scala TestHashSet
real 0m0.955s
user 0m1.192s
sys 0m0.147s
And ListSet:
$ time scala TestListSet
real 0m30.516s
user 0m30.612s
sys 0m0.168s
Cons on a singly linked list is a constant-time operation, but this performance looks linear or worse. Is this performance hit related to the need to check each element of the set for object equality to conform to the no-duplicates invariant of Set? If this is the case, I realize it's not related to "persistence".
As for official documentation, all I could find was the following page, but it seems incomplete: Scala 2.8 Collections API -- Performance Characteristics. Since ListSet seems initially to be a good choice for its memory footprint, maybe there should be some information about its performance in the API docs.
An old question but also a good example of conclusions being drawn on the wrong foundation.
Connor, basically you're trying to do a microbenchmark. That is generally not recommended and damn hard to do properly.
Why? Because the JVM is doing many other things than executing the code in your examples. It's loading classes, doing garbage collection, compiling bytecode to native code, etc. All dynamically and based on different metrics sampled at runtime.
So you cannot conclude anything about the performance of the two collections with the above test code. For example, what you could actually be measuring could be the compilation time of the += method of HashSet and garbage collection times of ListSet. So it's a comparison between apples and pears.
To do a micro benchmark properly, you should:
Warm up the JVM: Load all classes, ensure all code paths in the benchmark are run and hot spots in the code are compiled (e.g. the += method).
Run the benchmark and ensure neither the GC nor the compiler runs during the test (use the JVM flags -XX:+PrintCompilation and -XX:+PrintGC to check). If either runs during the test, discard the result.
Repeat step 2 and sample 10-15 good measurements. Calculate variance and standard deviation.
Evaluate: If the mean of each benchmark +/- 3 std do not overlap, then you can draw a conclusion about which is faster. Otherwise, it's a blurry result (depending on the amount of overlap).
I can recommend reading Oracle's recommendations for doing micro benchmarks and a great article about benchmark pitfalls by Brian Goetz.
Also, if you want to use a good tool, which does all the above for you, try Caliper by Google.
The key line from the ListSet source is (within subclass Node):
override def +(e: A): ListSet[A] = if (contains(e)) this else new Node(e)
where you can see that an item is added only if it is not already contained. So adding to the set is O(n). You can generally assume that XMap has similar performance characteristics to XSet, and ListMap is listed as taking linear time all the way along. This is why ListSet is slow, and it is how a set is supposed to behave.
P.S. In the TestHashSet case you're largely measuring JVM startup time; the real difference is way more than 30x.
Since a set must contain no duplicates, before adding an element a Set has to check whether it already contains that element. In a list that gives no guarantee about an element's position, this search takes O(N) linear time. The same general idea applies to its remove operation.
With a HashSet, the class uses a hash function to pick a location for any element in O(1), which makes the contains(element) check much quicker, at the expense of using more space to reduce the chance of collisions between element locations.

Does MATLAB perform tail call optimization?

I've recently learned Haskell, and am trying to carry the pure functional style over to my other code when possible. An important aspect of this is treating all variables as immutable, i.e. as constants. In order to do so, many computations that would be implemented using loops in an imperative style have to be performed using recursion, which typically incurs a memory penalty due to the allocation of a new stack frame for each function call. In the special case of a tail call (where the value returned by a called function is immediately returned by its caller), however, this penalty can be bypassed by a process called tail call optimization (in one method, this can be done by essentially replacing a call with a jmp after setting up the stack properly). Does MATLAB perform TCO by default, or is there a way to tell it to?
If I define a simple tail-recursive function:
function tailtest(n)
if n==0; feature memstats; return; end
tailtest(n-1);
end
and call it so that it will recurse quite deeply:
set(0,'RecursionLimit',10000);
tailtest(1000);
then it doesn't look as if stack frames are eating a lot of memory. However, if I make it recurse much deeper:
set(0,'RecursionLimit',10000);
tailtest(5000);
then (on my machine, today) MATLAB simply crashes: the process unceremoniously dies.
I don't think this is consistent with MATLAB doing any TCO; the case where a function tail-calls itself, only in one place, with no local variables other than a single argument, is just about as simple as anyone could hope for.
So: No, it appears that MATLAB does not do TCO at all, at least by default. I haven't (so far) looked for options that might enable it. I'd be surprised if there were any.
In cases where we don't blow out the stack, how much does recursion cost? See my comment to Bill Cheatham's answer: it looks like the time overhead is nontrivial but not insane.
... Except that Bill Cheatham deleted his answer after I left that comment. OK. So, I took a simple iterative implementation of the Fibonacci function and a simple tail-recursive one, doing essentially the same computation in both, and timed them both on fib(60). The recursive implementation took about 2.5 times longer to run than the iterative one. Of course the relative overhead will be smaller for functions that do more work than one addition and one subtraction per iteration.
(I also agree with delnan's sentiment: highly-recursive code of the sort that feels natural in Haskell is typically likely to be unidiomatic in MATLAB.)
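The answer above doesn't include the code it timed, so the following is only a sketch of that kind of comparison under my own assumptions (function names, repetition count): an iterative Fibonacci and a tail-recursive one doing essentially the same work per step, timed on fib(60).

function fib_timing_sketch()
% Compare an iterative Fibonacci with a tail-recursive one (sketch only).
n = 60;                                % same input as in the answer above
reps = 1e4;                            % repetition count chosen by me
tic;
for r = 1:reps, fi = fib_iter(n); end
t_iter = toc;
tic;
for r = 1:reps, fr = fib_tail(n, 0, 1); end
t_rec = toc;
assert(fi == fr);                      % both versions agree
fprintf('fib(%d) = %g, iterative: %.4f s, tail-recursive: %.4f s (ratio %.2f)\n', ...
    n, fi, t_iter, t_rec, t_rec / t_iter);
end

function f = fib_iter(n)
% One addition per loop iteration.
a = 0; b = 1;
for k = 1:n
    [a, b] = deal(b, a + b);
end
f = a;
end

function f = fib_tail(n, a, b)
% Tail-recursive form: the recursive call is the last operation performed.
if n == 0
    f = a;
else
    f = fib_tail(n - 1, b, a + b);
end
end

Since MATLAB appears not to perform TCO, any gap you measure here reflects plain function-call overhead rather than anything the optimizer does.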
There is a simple way to check this. Create this function tail_recursion_check:
function r = tail_recursion_check(n)
if n > 1
r = tail_recursion_check(n - 1);
else
error('error');
end
end
and run tail_recursion_check(10), for example. You are going to see a very long stack trace with 10 items that say the error is at line 3. If there were tail call optimization, you would only see one.