MiniZinc large array dimension

I am generating a large 2D array in MiniZinc (for instance, 250000 x 300) because I'm searching for solutions in this array. However, MiniZinc does not seem to be able to handle such large arrays and keeps crashing. Is there any way to solve this problem so that I can still get my solutions? Do I need to rewrite my code? I still need this large array, which contains the solutions.


plans and subset of rows in pyfftw

I want to use pyfftw to repeatedly compute the discrete Fourier transform of a subset of the rows of a two-dimensional array. I do not know in advance which rows I need to transform; that depends on the output from the previous round. I do know that doing it for all rows is wasteful.
It is my understanding that a 'plan' in FFTW3 is associated with the type of transform (c2c, r2c, etc.) and the input/output length, which is always a vector in the 1D case. In pyfftw it looks like a 'plan' is associated with the type of transform and the input/output shape, so my interpretation is that it uses the same FFTW3 plan for every row.
My question is: is it possible to use the same FFTW3 plan for some of the rows, without creating separate pyfftw.FFTW objects for all possible combinations of rows?
On a different note, I am wondering how pyfftw uses multiple cores: does it use multiple cores for each row (which appears natural in view of the FFTW3 documentation), or does it farm out different rows to different cores (which was my initial assumption)?
If you can create a numpy array from a view, you can plan for it with pyFFTW; all valid numpy arrays should work just fine (see the sketch after the list below).
This means several things:
Your array needs to have regular strides, but those strides can be arbitrary.
ND arrays are planned as ND transforms, with the selected axes being used.
You can probably do something cunning with stride tricks and it will probably work (but might not do what you expect if you do something too nefarious like overlapping rows and then use threads).
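For instance, a minimal sketch of planning directly on a strided view, assuming a complex-to-complex transform along each row; the shapes here are made up for illustration:
import pyfftw

data = pyfftw.empty_aligned((1000, 1024), dtype='complex128')
view = data[::2]                             # a view with regular (larger) strides: every other row
out = pyfftw.empty_aligned(view.shape, dtype='complex128')
plan = pyfftw.FFTW(view, out, axes=(1,))     # one plan, row-wise FFTs over the whole view
plan()                                       # transforms the viewed rows into out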
One solution that I've used quite a bit is to copy the rows that you want to transform into an interim array, and transform that. You might well find that's the fastest option (particularly when you can allow for getting the byte offset correct).
Obviously, this doesn't work if you always have a different number of rows. You might still find that if you plan for the largest number of rows that get transformed, and then copy a subset in, you are still faster than otherwise.
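A minimal sketch of that interim-array approach, assuming a complex-to-complex row transform; max_rows, row_indices and the shapes are illustrative. Note that the plan always transforms all max_rows rows, so only the copied-in rows of the output are meaningful:
import numpy as np
import pyfftw

n_cols = 1024
max_rows = 64                          # plan once for the largest batch you expect
interim_in = pyfftw.empty_aligned((max_rows, n_cols), dtype='complex128')
interim_out = pyfftw.empty_aligned((max_rows, n_cols), dtype='complex128')
plan = pyfftw.FFTW(interim_in, interim_out, axes=(1,),
                   flags=('FFTW_MEASURE',), threads=4)

data = np.random.randn(1000, n_cols) + 1j * np.random.randn(1000, n_cols)
row_indices = [3, 17, 42]              # rows chosen at run time
interim_in[:len(row_indices)] = data[row_indices]
plan()                                 # execute the pre-made plan
result = interim_out[:len(row_indices)].copy()   # copy out before the next call overwrites the buffer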
The problem you're going to come up against, even if you go down to the C level, is that the planning overhead might well dominate if you're changing your transform sizes often.
You could also try pyfftw.interfaces.numpy_fft, which is normally faster than numpy and has the ability to cache repeated transform sizes.
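A minimal sketch of that, with illustrative shapes; with the cache enabled, repeated transforms of the same shape avoid re-planning:
import numpy as np
import pyfftw.interfaces.cache
import pyfftw.interfaces.numpy_fft as fft

pyfftw.interfaces.cache.enable()       # keep plans alive between calls

data = np.random.randn(1000, 1024) + 0j
rows = data[[3, 17, 42]]               # copy of the rows chosen this round
spectrum = fft.fft(rows, axis=1)       # planned on first use, cached for this shape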

linear probing vs separate chaining in hashes

I am well aware that there's another question about this, but my question is different.
I know for sure that searching with separate chaining costs O(N/M), and if we sort the lists we get O(log(N/M)).
However, the running time of searching or deleting using linear probing is not clear to me. As far as I know it depends on the load factor, but that's it.
Additionally, when we have cases like a full array (worst case), is it better to use separate chaining or linear probing?
And if we have a sparse array, which one should we choose?
I can't seem to figure out what advantages each has over the other.
Thank you
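To make the two strategies concrete, here is a minimal sketch (fixed table size M, no resizing, insertion assumed to find a free slot; all names are illustrative). With separate chaining the search cost tracks the chain length, about N/M on average; with linear probing it tracks the length of the run of occupied slots, which grows sharply as the load factor approaches 1:
M = 8                                  # table size, kept tiny for illustration

chains = [[] for _ in range(M)]        # separate chaining: one list per slot

def chain_insert(key):
    chains[hash(key) % M].append(key)

def chain_search(key):
    return key in chains[hash(key) % M]    # cost ~ chain length ~ N/M on average

slots = [None] * M                     # linear probing: keys stored directly in the slots

def probe_insert(key):                 # assumes the table is not full
    i = hash(key) % M
    while slots[i] is not None:
        i = (i + 1) % M
    slots[i] = key

def probe_search(key):
    i = hash(key) % M
    while slots[i] is not None:        # walks the cluster of occupied slots
        if slots[i] == key:
            return True
        i = (i + 1) % M
    return False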

Efficient way to store single matrices generated in a loop in Matlab?

I would like to know whether there is a way to reduce the amount of memory used by the following piece of code in Matlab:
n=3;
T=100;
r=T*2;
b=80;
BS=1000;
bsuppostmp_=cell(1,BS);
bslowpostmp_=cell(1,BS);
bsuppnegtmp_=cell(1,BS);
bslownegtmp_=cell(1,BS);
for w=1:BS
    bsuppostmp_{w}=randi([0,1],n*T,2^(n-1),r,b);
    bslowpostmp_{w}=randi([0,3],n*T,2^(n-1),r,b);
    bsuppnegtmp_{w}=randi([0,4],n*T,2^(n-1),r,b);
    bslownegtmp_{w}=randi([0,2],n*T,2^(n-1),r,b);
end
I have decided to use cells of matrices because after this loop I need to access each single matrix separately in another loop.
If I run this code I get the error message "Your system has run out of application memory".
Do you know a more efficient (in terms of memory) way to store each single matrix?
Let's refer to the documentation page about Strategies for Efficient Use of Memory:
Because simple numeric arrays (comprising one mxArray) have the least overhead, you should use them wherever possible. When data is too complex to store in a simple array (or matrix), you can use other data structures.
Cell arrays are comprised of separate mxArrays for each element. As a result, cell arrays with many small elements have a large overhead.
In this case, I doubt that the overhead of the cell array is really the problem: each 4-D array is 300 x 4 x 200 x 80, about 19.2 million doubles (roughly 154 MB), and the loop creates 4000 of them, on the order of 600 GB in total, so the raw data is simply too large to hold at once ...
Let me give a possible explanation. What if MATLAB cannot use the swap file when the 4-D arrays are stored inside a cell array? When storing large numeric arrays, there is no out-of-memory error because MATLAB uses the swap file to cache each variable when the memory in use becomes too big. Whereas if each 4-D array is stored in one big cell array, MATLAB sees it as a single variable and cannot split it partly in RAM and partly in the swap file. I don't work at MathWorks, so I don't know whether this is right; it's just an idea, and I would be glad to know the real reason.
So my advice is the same as in the other comments: free matrices as soon as you are done with them. There are not many ways to store this many dense arrays: one big array (not recommended here; MATLAB requires it to be contiguous, so you will hit out-of-memory even sooner), a cell array, or a struct array (and if I understand the documentation correctly, the overhead is comparable). In all cases, the total amount of data across the 4-D arrays is very large, so the best thing to do is to keep memory usage constantly as low as possible by discarding data as soon as it has been used and keeping only the results of the computation in memory (assuming those take less memory ...).

Efficient Access of elements in Matrix in Matlab

I have an m x n matrix of integers, where m and n are fairly big numbers (both around 1000). I want to iterate through all of the elements and perform some operations, like accessing a particular cell and assigning it the value of another cell.
However, at least in my implementation, this is rather inefficient, as I have two for loops with Matrix(a,b) = Matrix(a,b+1) or something along these lines. Is there any other way to do this, seeing as my current implementation takes a long time to traverse about 100,000 cells and perform these operations?
Thank you
In matlab, it's almost always possible to avoid loops.
If you want to do Matrix(a,b)=Matrix(a,b+1), you should just do Matrix2=Matrix(:,2:end);
If you are more precise about what you do inside the loop, I can help you more.
MATLAB uses column-major ordering of matrices in memory (unlike C). Are you sure you are iterating over the indices in the correct order? If not, try switching them and see if performance improves.
If you can't get rid of the for loops, one possibility would be to rewrite the expensive operations in C and create a MEX file as described here.

Storing characters in MATLAB arrays

I want to store a character along with numbers. Is using cells the only way?
Yes, unless you store the ASCII values, but I don't think that would be very useful.
Edit: Or an array of structures?
a.num = [1 2 3]
a.char = 'A'
I don't know exactly what you are trying to achieve…
This is a classic Computer Science 101 sort of question. Traditionally, an array holds one type of data. In MATLAB the term gets abused.
Here are some things to know:
An array of characters is called a string
An array can only store one data type
The size of an array can’t change
But MATLAB has an abstraction on top of all this, so an engineer who didn't study programming for a year can still get the job done. While MATLAB lets you change the size of a 1D matrix, it still won't let you have different types of data in the same array. Keep in mind that MATLAB 1D arrays aren't strictly arrays because of this. The same goes for arrays of arrays of differing sizes. MATLAB doesn't allow these structures for optimization reasons.
This question arises from not knowing which containers are available:
List: a container of indexed elements (great for sorting and adding elements quickly)
Set: a collection of unique elements (great for ensuring that there are no duplicates)
Map: great for quickly retrieving elements based on a unique identifier
Java has implementations of these, and you can use them within MATLAB if you want; that is the usual way to get a collection other than a matrix. I don't think MATLAB bothered to wrap these classes itself, because they would be exactly the same anyway.
In general it's not a great idea to store different data types in these collections; if you can avoid it, do so, but otherwise so be it.
P.S. I don't think structs should ever be used, because there is no way to know what members they have without debugging them.
If you do
a.num = [1 2 3]
a.char = 'A'
Unless you tell everyone that a.num and a.char exist, there is no way of knowing that a has char and num without running the code. Bad, bad practice.