How to store a huge matrix in a database - PostgreSQL

I plan to use PostgreSQL to store a huge matrix.
The structure of the dataset is as follows: it's a 20,000 x 20,000 matrix, and each element has roughly five records describing the features of the interaction between the two corresponding nodes.
Is there any way to design the database so that the data is easy to store and efficient to query?
Thanks in advance!

My first advice would be to design your table around the row and column of each matrix element.
E.g. table = {row, column, record1, record2, record3, record4, record5}, with {row, column} being the primary key.
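A minimal sketch of that layout in PostgreSQL (the table name, column names, and the double precision type for the five records are assumptions for illustration):

-- Hypothetical layout: one row per matrix element, keyed by (row, column).
CREATE TABLE matrix_element (
    row_id  integer NOT NULL,            -- 1..20000
    col_id  integer NOT NULL,            -- 1..20000
    record1 double precision,            -- the ~5 feature records; types assumed
    record2 double precision,
    record3 double precision,
    record4 double precision,
    record5 double precision,
    PRIMARY KEY (row_id, col_id)
);

-- Looking up the interaction between two nodes then uses the primary-key index:
SELECT record1, record2, record3, record4, record5
FROM   matrix_element
WHERE  row_id = 42 AND col_id = 137;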
Hope it helps.

Related

Intersecting two tables on common first-column elements in MATLAB

I have two different tables (.csv files) as:
I need to merge these two tables in MATLAB, intersecting on the first column of each. I want to make a new table with six columns (the combined columns of both tables), where the number of rows equals the number of elements common to the first columns of the two tables.
How should I do the intersection and merging of these two tables?
I'm proposing an answer. I'm not claiming it is the best answer. In fact, I think there are probably much faster ones! Also note that I do not have MATLAB in front of me right now and can't test this. There might be some mistakes.
First of all, read the .csv files into memory. In table 1, convert the first column into numeric data (currently, it looks like the entries are strings). After this step, you want to have two double matrices I'll call table1 (which is 3296x5) and table2 (which is 3184x3).
Second (this is where it gets mildly interesting; step 1 was the boring stuff), find all IDs that are common to both tables. This can be done by calling commonIDs=intersect(table1(:,1),table2(:,1)). (Note that unique([table1(:,1) ; table2(:,1)]) would give the union of the IDs rather than the common ones.)
Third, get the indices of the common rows for table1, then repeat for table2. This is done using the ismember function as follows:
goodEntries1=ismember(table1(:,1),commonIDs);
goodEntries2=ismember(table2(:,1),commonIDs);
Lastly, we extract data and combine to get a result. Note that I only include the ID column once:
result=[table1(goodEntries1,:) table2(goodEntries2,2:end)];
You will need to test this to make sure it is robust. I think this will keep the right rows together, but depending on the row ordering of the two tables, you might end up combining rows out of order (for instance, table1's ID=5 with table2's ID=6).
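Here is a minimal sketch of the same idea that also keeps the rows aligned by using the index outputs of intersect. It assumes table1 and table2 are already the numeric matrices produced in the first step, with the ID in column 1:
% table1 is 3296x5, table2 is 3184x3; column 1 of each holds the numeric ID.
% intersect returns the common IDs plus the row index of each one in both
% matrices, so the extracted rows are guaranteed to line up ID-for-ID.
[commonIDs, ia, ib] = intersect(table1(:,1), table2(:,1));
% Keep the ID column only once, then append the remaining columns of table2.
result = [table1(ia, :) table2(ib, 2:end)];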

Designed a PostGIS database with a points table and polygon tables - how to make it more efficient?

This is a conceptual question, but I should have asked it long ago on this forum.
I have a PostGIS database with many tables in it. I have done some research on the use of keys in databases, but I'm not sure how to incorporate keys for point data that is dynamic and grows over time.
I'm storing point data in one table, and this data grows each day. It's about 10 million rows right now and will probably grow about 10 million rows each year or so. There are lat, lon, time, and the_geom columns.
I have several other tables, each representing a different polygon group (shapefiles converted to tables with shp2pgsql), such as counties, states, etc.
I'm writing queries that relate the point data to the spatial tables to see if points are inside of the polygons, resulting in things like "55 points in X polygon in the past 24 hours", etc.
The problem is, I don't have a key that relates the point table to the other tables. I think this is probably inhibiting query efficiency, but I'm not sure.
I know this question is fairly vague, and I'm happy to clarify anything, but I basically have a bunch of points in a table that I'm spatially comparing to other tables, and I'm trying to find the best way to design things.
Thanks for any help!
If you don't have them already, you should build spatial indexes on both the point and polygon tables.
In any case, spatial comparisons are usually slower than numerical comparisons.
So adding one or more keys to the point table referencing the other tables, and using them in your SELECT queries instead of spatial operations, will certainly speed things up.
Obviously, inserts will be slower, but given the numbers you gave (10 million rows per year), that should not be a problem.
Probably, just adding a foreign key to the smallest entities (cities, for example) and joining the others (countries, states...) to get results will be faster than a spatial comparison.
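A rough sketch of that approach, with assumed names throughout (a points table with the_geom and time columns, a counties table from shp2pgsql with gid, name and the_geom columns, and a hypothetical county_id key):
-- Spatial indexes on both sides of the spatial comparison.
CREATE INDEX points_geom_idx   ON points   USING GIST (the_geom);
CREATE INDEX counties_geom_idx ON counties USING GIST (the_geom);

-- A key on the points table referencing the smallest polygon entity,
-- filled after each insert batch with a spatial join.
ALTER TABLE points ADD COLUMN county_id integer REFERENCES counties (gid);

UPDATE points p
SET    county_id = c.gid
FROM   counties c
WHERE  p.county_id IS NULL
  AND  ST_Contains(c.the_geom, p.the_geom);

-- Later queries can join on the plain integer key instead of the geometry:
SELECT c.name, count(*) AS points_last_24h
FROM   points p
JOIN   counties c ON c.gid = p.county_id
WHERE  p.time > now() - interval '24 hours'
GROUP  BY c.name;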
Foreign keys (and other constraints) are not needed to query. Rather, they arise as a consequence of whatever design turns out to be appropriate for the application, per principles of good design.
They just tell the DBMS that a list of values under a list of columns in one table also appears as a list of values under a list of columns in some other table. (This helps avoid errors and can improve optimization.)
You would still want indexes on columns that will be involved in joins. E.g. you might want the X coordinates in two tables to have sorted indexes, in the same order. This is independent of whether one column's values form a subset of another's, i.e. whether a foreign key constraint holds between them.
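For instance, a plain index on the joining column is what actually speeds up the join, whether or not a foreign key constraint is declared (county_id here is the hypothetical column from the sketch above):
-- An index on the join column; the foreign key constraint itself is optional.
CREATE INDEX points_county_id_idx ON points (county_id);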

Sparse table in MATLAB, is it possible?

I am dealing with a sparse matrix in MATLAB that has many rows and columns. In this case, the rows and columns of the matrix are the IDs of particular items; let's call them id1 and id2.
It would be nice if the IDs for the rows and columns could be embedded, so that I could access them easily without creating extra variables to keep the two sets of IDs.
The answer would probably be to use a table data type. Tables are ideal for my needs; however, I was wondering whether I could create a table data type for a sparse matrix.
A [m*n] sparse matrix %% m & n are huge
id1 [1*m] , id2 [1*n] %% two vectors containing numeric ids for rows and columns
Could we obtain?
T [m*n] sparse table matrix
Thanks for sharing your view with me.
I will address the question and the comments in order to clear some confusion.
The short answer
There is no sparse table class in Matlab. Cannot do. Use sparse() matrices.
The long answer
There is a reason why sparse tables make little sense:
Philosophically speaking, the advantage of having nice row and column labels is completely lost if you are working with a big panel of data and/or if the data is sparse.
Scrolling through 246829 rows and 33336 columns? That can only be useful at very isolated times, e.g. when you are debugging your code and a specific outlier is causing your results to go off. Also, all you might see is just a sea of zeros.
Technically, a table can have several columns for the same variable, i.e. table(rand(10,2), rand(10,1)) is a valid table. How would you define sparsity on such a table?
Fine, suppose you are working with a matrix-like table, i.e. one element per table cell and the same numeric class throughout. Still, none of the algebraic operators are defined on a table(), so you need to extract the content first in order to perform any operation that spans more than a single column of data. Just to be clear: once the data is extracted, you have e.g. your double (full) matrix, or in the ideal case a double sparse matrix.
Now, a few misconceptions to clear:
Fewer variables imply clearer/cleaner code. Not true. You are probably thinking of the extreme (bad-practice) case of "how do I make a series of variables a1, a2, a3, etc.".
There is a sweet spot between verbosity, number of variables, amount of comments, and code clarity/maintainability. Only with time and experience do you find the right balance.
Control over data cannot go without visual inspection. This approach does NOT scale with big data, and the sooner you abandon it, the faster your code will become more reliable. You need to verify your results systematically rather than relying on visual inspection: the chance of failing to (visually) spot a problem in the data grows exponentially with its dimension, faster than with systematic tests.
Some background info on my work:
I work with high-frequency prices, that's terabytes of data. I also extended the table() class with additional methods and fixes to help me with my work (see https://github.com/okomarov/tableutils), but I do not see how sparsity is a useful feature to add to table().
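If the goal is simply to index the sparse matrix by the two ID vectors from the question, a minimal sketch along the lines of the short answer (assuming m, n, id1 and id2 exist as described; the Map lookups and the someId1/someId2 placeholders are hypothetical conveniences):
% Sparse storage for the data itself.
S = sparse(m, n);

% Lookup tables translating an item ID into a row/column index.
rowOf = containers.Map(id1, 1:m);   % id1 is 1-by-m, numeric
colOf = containers.Map(id2, 1:n);   % id2 is 1-by-n, numeric

% Read or write a single interaction by its two IDs.
S(rowOf(someId1), colOf(someId2)) = 0.5;
value = S(rowOf(someId1), colOf(someId2));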

Database solution to store and aggregate vectors?

I'm looking for a way to solve a data storage problem for a project.
The Data:
We have a batch process that generates 6000 vectors of size 3000 each daily. Each element in the vectors is a DOUBLE. For each of the vectors, we also generate tags like "Country", "Sector", "Asset Type" and so on (It's financial data).
The Queries:
What we want to be able to do is see aggregates of these vectors by tag. For example, if we want to see the vectors by sector, we want a response that gives us all the unique sectors and, for each one, a 3000x1 vector that is the element-wise sum of all the vectors tagged with that sector.
What we've tried:
It's easy enough to implement a normalized star schema with two tables: one with the tagging information and an ID, and a second with "VectorDate, ID, ElementNumber, Value", which has one row per element of each vector. Unfortunately, given the size of the data, that means we add 18 million records to the second table daily. And since our queries need to read (and add up) all 18 million of these records, it's not the most efficient of operations when it comes to disk reads.
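A rough sketch of that schema in PostgreSQL-style SQL, using the T1/T2 names from the sample query below (the column types and primary keys are assumptions):
CREATE TABLE T1 (                        -- one row per vector: its tags
    ID        bigint PRIMARY KEY,
    Country   text,
    Sector    text,
    AssetType text
);

CREATE TABLE T2 (                        -- one row per vector element (~18M/day)
    VectorDate    integer NOT NULL,      -- e.g. 20140101
    ID            bigint  NOT NULL REFERENCES T1 (ID),
    ElementNumber integer NOT NULL,      -- 1..3000
    Value         double precision NOT NULL,
    PRIMARY KEY (ID, ElementNumber)
);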
Sample query:
SELECT T1.country, T2.ElementNumber, SUM(T2.Value)
FROM T1 INNER JOIN T2 ON T1.ID=T2.ID
WHERE VectorDate = 20140101
GROUP BY T1.country, T2.ElementNumber
I've looked into NoSQL solutions (which I don't have experience with) and have seen that some, like MongoDB, allow storing an entire vector as part of a single document - but I'm unsure whether they would support the aggregation we're after efficiently (adding each element of a document's vector to the corresponding element of other documents' vectors). I've also read that the $unwind operation this would require isn't that efficient either.
It would be great if someone could point me in the direction of a database solution that can help us solve our problem efficiently.
Thanks!

What is the best way to preallocate a table in Matlab?

Usually I preallocate using cell(), zeros() or ones() depending on the type of data, but what is the best way to preallocate a table, since it can hold various data types?
I am talking about the table() functionality added in Matlab 2013b.
Obviously I can reserve memory using code like this:
T = table(cell(x,y))
but when my table is supposed to hold various datatypes I run into problems. Just imagine I want to fill in a column of integers now, or, as in my case, fill each row with an observation consisting of a string, an integer and a floating point number.
How should Matlab know how much memory to allocate when you don't tell it what data will be stored in the table? I don't think there is a good answer to your question besides "don't do it". If you know what is stored in each column, create the variables and add the rows as you go.
Or create the data in preallocated matrices/cells and create the table from them at the end, as in the sketch below.
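A minimal sketch of that second suggestion (n and the variable names are placeholders; the row layout matches the question: a string, an integer and a double per observation):
n = 1000;                          % number of observations (placeholder)

% Preallocate one column variable per data type.
names  = cell(n, 1);               % string per observation
counts = zeros(n, 1, 'int32');     % integer per observation
values = zeros(n, 1);              % double per observation

for k = 1:n
    % ... fill names{k}, counts(k), values(k) as the observations arrive ...
end

% Build the table once, at the end.
T = table(names, counts, values);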