OCaml "reading" a matrix (list of lists) - eclipse

I have this problem in which I want to change the value of the element at a given column and line of a matrix. I already have a function for that, but I think I can make a better one; the only thing is I can't think of another way of getting an element from the matrix and putting it back.
I can get it using
List.nth c (List.nth lb m)
but I'm having trouble putting it back.
What I have for now is (the matrixleft and matrixright functions are not done yet):
matrixleft m #(( List.nth c (List.nth lb m) ) + 1 )::matrixright m

This code looks OK to me on a complexity basis, though it's going to traverse the input matrix twice--once to get the old value and once to install the new one. You can get the answer by traversing just once if you don't mind some more fiddly coding.
If you aren't following some externally imposed requirement, you would be better off using a real matrix (an array of arrays). Then there's no traversing, so you get constant time updates.
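A minimal sketch of the single-traversal version (update_matrix and its argument order are my own choices, not from the post):
(* apply f to the element at position (row, col), walking the matrix only once *)
let update_matrix f row col matrix =
  List.mapi
    (fun r xs ->
       if r <> row then xs
       else List.mapi (fun c x -> if c = col then f x else x) xs)
    matrix
(* Example: update_matrix (fun x -> x + 1) 1 2 [[1;2;3];[4;5;6]] returns [[1;2;3];[4;5;7]]. *)
With an array of arrays instead, the same update is just m.(row).(col) <- f m.(row).(col), with no traversal at all.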

Related

ASP Core-2: Infinite Loop in Hamiltonian Path Solver

I am totally new to answer set programming (ASP Core-2 with Clingo) and am struggling with a problem I have not been able to solve.
The goal is to solve the 'Hamiltonian Path' problem, which is described as follows:
In a directed graph we're looking for a path which visits all nodes of the graph exactly once.
We can assume that all edge relations are known as facts, and that the input graph does actually contain a Hamiltonian Path. The desired output is the predicates
visited(NodeName, StepInOrder)
each of which contains a node and the step number at which this node is reached. So, for example, an output could be
visited(a, 1), visited(c, 2), visited(b, 3)
See my code below. The problem is that at the last line, the program seems to enter an infinite loop, and I do not understand what the cause of this could be.
% pick one random start node
1 <= {startNode(N) : node(N)} <= 1.
% define helper predicate inPath which is true once and false once for each edge of the graph
{inPath(X, Y)} :- edge(X,Y).
% create possible paths
visited(X, 1) :- startNode(X).
visited(Y, C+1) :- visited(X, C), inPath(X, Y), not visited(Y, _). % infinite loop here
% some killing constraints to eliminate invalid solution candidates...
My guess is that the program is generating an infinite number of answer sets, which all differ in their StepInOrder value, because of some sort of cycle, but I thought this should be prevented by the not visited(Y, _).
If you need any additional context, let me know. Thanks in advance!
Let's go through your code:
1 <= {startNode(N) : node(N)} <= 1.
I guess this works, but just writing 1 {startNode(N) : node(N)} 1. or {startNode(N) : node(N)} == 1. would do the same.
% define helper predicate inPath which is true once and false once for each edge of the graph
{inPath(X, Y)} :- edge(X,Y).
This one works, although there are more efficient ways to write it.
% create possible paths
visited(X, 1) :- startNode(X).
visited(Y, C+1) :- visited(X, C), inPath(X, Y), not visited(Y, _). % infinite loop here
You are basically saying: a node Y is visited at time C+1 if a node X was visited at time C, there is a path from X to Y, and Y is not visited at any time. So you clearly want to generate something, but generating it violates the rule which generated it. In clingo, atoms cannot change values: if an atom is labeled as true, it is true the whole time.
So I would probably write something like this:
1 { visited(Y,C+1) : inPath(X,Y) } 1 :- visited(X, C).
which reads: given that X is visited at time C, exactly one outgoing marked edge from X to some node Y is chosen, and that Y is marked as visited at time C+1.
All that is missing now is a constraint requiring every node to be visited, for example the one sketched below.
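A minimal sketch of such a constraint (the helper predicate covered/1 and its name are mine, not from the original answer):
% a node is covered if it is visited at some step
covered(N) :- visited(N, S).
% reject answer sets in which some node is never visited
:- node(N), not covered(N).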
You might want to have a look at this question from around the same time. That user's solution takes a different approach: it does not assign numbers to the nodes to indicate an order.

How to merge two lists (or arrays) while keeping the same relative order?

For example,
A=[a,b,c,d]
B=[1,2,3,4]
My question is: how do I generate all possible ways to merge A and B, such that in the new list a appears before b, b appears before c, etc., and 1 appears before 2, 2 appears before 3, etc.?
I can think of one implementation:
We choose 4 slots from 8; then for each possible selection, there are 2 possible ways: A first or B first.
I wonder is there a better way to do this?
EDIT:
I've just learned a more intuitive way: use recursion.
For each spot there are two possible cases, the element is taken either from A or from B; keep recursing until A or B is empty, and then concatenate the remainder, as in the sketch below.
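Here is a minimal OCaml sketch of that recursion (interleavings is my own name, not from the question):
(* at each step the next element comes either from the first list or from the second;
   once one list is empty, the other list's remainder is the only continuation *)
let rec interleavings xs ys =
  match xs, ys with
  | [], _ -> [ys]
  | _, [] -> [xs]
  | x :: xs', y :: ys' ->
      List.map (fun rest -> x :: rest) (interleavings xs' ys)
      @ List.map (fun rest -> y :: rest) (interleavings xs ys')
(* Example: interleavings ["a"; "b"] ["1"; "2"] yields the 6 merges that keep
   "a" before "b" and "1" before "2". *)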
If the relative order is different from what constitutes a sorted list (I assume it is, because otherwise it would not be a problem), then you need to formalize the initial order. There are multiple ways to do that, the easiest being to remember the index of each element in each list. Example: the valid position for a is 1 in the first array [...]
Then you could just go ahead and join the lists and generate all the permutations of the elements. A valid permutation is one in which the new indexes keep the order relationship you stored in the first step.
Example of one valid permutation array
a12b3cd4
You can check that this is a valid permutation because the index of element 'a' is smaller than the index of 'b', and so on; you know the indexes must be in that order because that is what you formulated in the first step.
Similarly an invalid permutation array is
ba314cd2
and it is checked the same way.
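In OCaml for concreteness, a small sketch of one way to express that validity check (is_valid_merge is my own name, and it assumes A and B share no elements): a merged list is valid exactly when filtering it down to A's elements gives back A and filtering it down to B's elements gives back B.
(* check that the merged list preserves the relative order of both inputs *)
let is_valid_merge a b merged =
  List.filter (fun x -> List.mem x a) merged = a
  && List.filter (fun x -> List.mem x b) merged = b
(* Example: is_valid_merge ["a"; "b"] ["1"; "2"] ["a"; "1"; "2"; "b"] = true *)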

How can I calculate the order of an element in a finite field using NTL?

I'm trying to calculate the order of an element in a finite field (group) using NTL, but I did not find any function to do this.
Can anyone guide me, please?
I think there is no built-in way to do this, but you can write it yourself.
A field F has two operations, addition (+) and multiplication (*). First you have to specify whether you want to know the order of an element g in the group (F, +) or in the group (F \ {0}, *).
Find the order of g in (F,+):
This is the easy case, since the order of every nonzero element in this group is p if the field has p^m elements (and the order of 0 is 1).
Find the order of g in (F \ {0}, *):
This is a little bit harder. Computing the order of g in (F \ {0}, *) is closely related to the discrete logarithm problem. Basically, you can try g^k for every k = 1, ..., p^m - 1, but this will take a while. A faster approach would be the baby-step giant-step algorithm.
I have never tried it, but you may also take a look at this discrete logarithm implementation using NTL.
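For the prime-field case with NTL's ZZ_p type, a naive sketch along the lines of "try g^k until you hit the identity" might look like this (orderOf is my own helper, not an NTL function, and it assumes g is nonzero):
#include <iostream>
#include <NTL/ZZ_p.h>
using namespace NTL;
// naive multiplicative order: keep multiplying by g until the identity is reached
long orderOf(const ZZ_p& g) {
    ZZ_p x = g;
    long k = 1;
    while (!IsOne(x)) {
        x *= g;
        ++k;
    }
    return k;
}
int main() {
    ZZ_p::init(conv<ZZ>(101));        // work in the prime field Z/101Z
    ZZ_p g = conv<ZZ_p>(2);           // element whose order we want
    std::cout << orderOf(g) << "\n";  // multiplicative order of 2 mod 101
}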

Matlab: Multiple assignment through logical indexing

I am wondering if there is some way to assign values to multiple different variables according to a logical vector.
For example:
I have variables a, b, c, a logical vector l=[1 0 1], and a vector v with values just for a and c. Vector v changes its dimension, but it always has the same size as the number of true entries in l.
I would like to assign just new values for a and c but b must stay unchanged.
Any ideas? Maybe there is a very trivial way, but I didn't figure it out.
Thanks a lot.
I think your problem is that you stored structured data in an unstructured way. You assume a, b, c to have a natural order, which is pretty obvious but not represented in your code.
Replacing a, b, c with a vector x makes it a really easy task (v has one value per true entry of l, so no indexing of v is needed):
x(l)=v;
Assuming you want to keep your variable names, the simplest possibility I know would be to write a function:
function varargout=update(l,v,varargin)
% copy all inputs unchanged, then overwrite the entries selected by l with the values in v
varargout=varargin;
l=logical(l);
varargout(l)=num2cell(v);
end
Usage would be:
[a,b,c]=update(l,v,a,b,c)
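For example, a quick check of what this produces (assuming the function above):
l=[1 0 1]; v=[10 30]; a=1; b=2; c=3;
[a,b,c]=update(l,v,a,b,c)   % a becomes 10, b stays 2, c becomes 30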

optimal way of storing multidimensional array/tensor

I am trying to create a tensor (which can be thought of as a multidimensional array) package in Scala. So far I have been storing the data in a 1D Vector and doing index arithmetic.
But slicing and subarrays are not so easy to get: one needs to do a lot of arithmetic to convert multidimensional indices to 1D indices.
Is there an optimal way of storing a multidimensional array? If not, i.e. if a 1D array is the best solution, how can one optimally slice arrays (some concrete code would really help me)?
The key to answering this question is: when is pointer indirection faster than arithmetic? The answer is pretty much never. In-order traversals can be about equally fast for 2D, and things get worse from there:
2D random access
Array of Arrays - 600 M / second
Multiplication - 1.1 G / second
3D in-order
Array of Array of Arrays - 2.4 G / second
Multiplication - 2.8 G / second
(etc.)
So you're better off just doing the math.
Now the question is how to do slicing. Initially, if you have dimensions of n1, n2, n3, ... and indices of i1, i2, i3, ..., you compute the offset into the array by
i = i1 + n1*(i2 + n2*(i3 + ... ))
where typically i1 is chosen to be the last (innermost) dimension (but in general it should be the dimension most often in the innermost loop). That is, if it were an array of arrays of (...), you would index into it as a(...)(i3)(i2)(i1).
Now suppose you want to slice this. First, you might give an offset o1, o2, o3 to every index:
i = (i1 + o1) + n1*((i2 + o2) + n2*((i3 + o3) + ...))
and then you will have a shorter range on each (let's call these m1, m2, m3, ...).
Finally, if you eliminate a dimension entirely--let's say, for example, that m2 == 1, meaning that i2 == 0, you just simplify the formula:
i = (i1 + o1 + n1*o2) + (n1*n2)*((i3 + o3) + ...)
I will leave it as an exercise to the reader to figure out how to do this in general, but note that we can store new constants o1 + n1*o2 and n1*n2 so we don't need to keep redoing that math for the slice.
Finally, if you are allowing arbitrary dimensions, you just put that math into a while loop. This does, admittedly, slow it down a little bit, but you're still at least as well off as if you'd used a pointer dereference (in almost every case).
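For concreteness, a small Scala sketch of this arithmetic (FlatIndexer, dims and offsets are names I am introducing here; dims(0) is the innermost dimension n1):
// computes i = (i1+o1) + n1*((i2+o2) + n2*((i3+o3) + ...)) by working from the outermost dimension inward
class FlatIndexer(dims: Array[Int], offsets: Array[Int]) {
  def flatIndex(idx: Array[Int]): Int = {
    var i = 0
    var d = dims.length - 1
    while (d >= 0) {
      i = i * dims(d) + (idx(d) + offsets(d))
      d -= 1
    }
    i
  }
}
// Example: a 4 x 3 x 2 tensor stored in a flat Array[Double] of length 24, with no slicing offsets
val ix = new FlatIndexer(Array(4, 3, 2), Array(0, 0, 0))
val data = new Array[Double](24)
data(ix.flatIndex(Array(1, 2, 0))) = 42.0   // element at (i1=1, i2=2, i3=0)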
From my own general experience: If you have to write a multidimensional (rectangular) array class yourself, do not aim to store the data as Array[Array[Double]] but use a one-dimensional storage and add helper methods for converting the multidimensional access tuples to a simple index and vice versa.
When using lists of lists, you need to do much too much bookkeeping to ensure that all lists are of the same size, and you need to be careful when assigning one sublist to another (because this makes the assigned-to sublist identical to the first, and you then wonder why changing the item at (0,5) also changes (3,5)).
Of course, if you expect a certain dimension to be sliced much more often than another and you want to have reference semantics for that dimension as well, a list of lists will be the better solution, as you may pass around those inner lists as a slice to the consumer without making any copy. But if you don’t expect that, it is a better solution to add a proxy class for the slices which maps to the multidimensional array (which in turn maps to the one-dimensional storage array).
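A minimal sketch of the one-dimensional-storage idea for the two-dimensional case (the class and method names are illustrative, not from the answer):
// 2D matrix backed by a flat array, with a helper converting (row, col) to a flat index
class Matrix(val rows: Int, val cols: Int) {
  private val data = new Array[Double](rows * cols)
  private def index(r: Int, c: Int): Int = r * cols + c
  def apply(r: Int, c: Int): Double = data(index(r, c))
  def update(r: Int, c: Int, v: Double): Unit = data(index(r, c)) = v
  // a row slice is just a copy of a contiguous block of the flat storage
  def row(r: Int): Array[Double] = data.slice(index(r, 0), index(r, cols))
}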
Just an idea: how about a map with Int-tuples as keys?
Example:
val twoDimMatrix = Map((1,1) -> -1, (1,2) -> 5, (2,1) -> 7.7, (2,2) -> 9)
and then you could
scala> twoDimMatrix.filterKeys{_._2 == 1}.values
res1: Iterable[AnyVal] = MapLike(-1, 7.7)
or
twoDimMatrix.filterKeys{tuple => { val (dim1, dim2) = tuple; dim1 == dim2}} //diagonal
This way the index arithmetic would be done by the map. I don't know how practical and fast this is, though.
As long as the number of dimensions is known at design time, you can use a collection of collections ... (n times) of collections. If you must be able to build a vector for any number of dimensions, then there's nothing convenient in the Scala API to do it (as far as I know).
You can simply store the information in a multidimensional array (e.g. Array[Array[Double]]).
If the tensors are small and can fit in cache, you can have a performance improvement with 1D arrays because of data memory locality. It should also be faster to copy the whole tensor.
For the slicing arithmetic: it depends on what kind of slicing you require. I suppose you already have a function for extracting an element based on indices. So write a basic slicing loop based on index iteration, manually insert the expression for extracting an element, and then try to simplify the whole loop. That is often simpler than writing the correct expression from scratch.